AIList Digest             Friday, 5 Aug 1988       Volume 8 : Issue 38 

Today's Topics:

Dual encoding, propositional memory and the epistemics of imagination
Are all Reasoning Systems Inconsistent?
AI and the future of the society
global self reference

----------------------------------------------------------------------

Date: Tue, 26 Jul 88 10:10:40 BST
From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Dual encoding, propositional memory and the epistemics of
imagination


>Now think of all the
>other stuff your episodic memory has to be able to represent. How is this
>representing done? Maybe after a while following this thought you will begin
>to see McCarthys footsteps on the trail in front of you.
>
>Pat Hayes

Watch out for the marsh two feet ahead though :-)
Computationalists who are bound to believe in propositional
representations (yes, I encode all my knowledge of a scene into
little FOPC-like tuples, honest) have little time for dual (or more)
coding theories of memory.

The dual coding theory, which normally distinguishes between iconic
and semantic memory, has caused endless debate, more often than not
because of the tenacity of researchers who MUST, rationally or
otherwise, believe in a single propositional encoding, or else admit
limitations to computational paradigms.

Any competent textbook on cognitive psychology, and especially ones
on memory, will cover the debate on episodic, iconic and semantic
memory (as well as short-term memory, working memory and other
gatherings of angels in restricted spaces). These books will lay
several trails in directions other than McCarthy's. The barbecue
spots on the way are better too.

Pat's argument hinges on the demand that we think about something
called representation (eh?) and then describe the encoding. The
minute you are tricked into thinking about bit-level encoding
protocols, the computationalists have you. Sure enough, the best
thing you can imagine is something like formal logic. PDP networks
will work, of course, but you can't IMAGINE the contents of
the network, and thus they cannot be a representation :-)

Since when did reality have anything to do with the quality of our
imagination, especially when the imaginands are rigged from the outset?
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: Tue, 26 Jul 88 10:42:06 EDT
From: mclean@nrl-css.arpa (John McLean)
Subject: Are all Reasoning Systems Inconsistent?

In AIList vol. 8, issue 23, Jonathan Leivent presents the following argument,
where P(x) asserts that the formula with Godel number x is provable
and the Godel number of S is n, where S = (P(n) -> A):

>1. S = (P(n) -> A)
>2. P(n) -> S
>3. P(n) -> (P(n) -> A)
>4. P(n) -> P(n)
>5. P(n) -> (P(n) ^ (P(n) -> A))
>6. (P(n) ^ (P(n) -> A)) -> A
>7. P(n) -> A
>8. S
>9. P(n)
>10. A

What Jonathan is pointing out was proven by Tarski in the 30's: a theory
is inconsistent if it contains arithmetic and has the property that for
all formulae A we can prove P("A") --> A, where "A" is the Godel number
of A. [Tarski actually proved the theorem for any predicate T such that
T("A") <--> A, but it is easy to show that the provability predicate P
has the property that A --> P("A").] This is not so strange if we
realize that P(n) is really an existential formula (Ex)Bew(x,n),
where Bew(x,y) is derivable iff x is the Godel number of a proof whose
last line is the formula whose Godel number is y. It follows that if y
is the Godel number of a theorem then Bew(x,y) is derivable and hence,
so is P(y) by existential generalization. However, the converse is false.
(Ex)Bew(x,y) may be derivable when the formula corresponding to y is not.
In other words, arithmetic is not omega-complete. This does not affect our
proof theory, however, beyond showing that we cannot have a general proof
rule of the form P("A") --> A. We can assert P("A") --> A as a theorem
only when we derive it from the basic theorems of number theory and logic.
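
For anyone tracing Jonathan's ten steps, here is one plausible annotation.
It assumes nothing beyond propositional logic, the self-referential
definition of S, one instance of the schema P("A") --> A, and the fact
noted above that the provability of a theorem is itself derivable:

  1. definition of S (n is the Godel number of S)
  2. instance of the assumed schema P("A") --> A, taken with A = S
  3. from 1 and 2, rewriting S by its definition
  4. propositional tautology
  5. from 3 and 4, propositionally
  6. propositional tautology (modus ponens written out as a formula)
  7. from 5 and 6
  8. from 7 and 1, since S just is P(n) -> A
  9. from 8: S is now a theorem, so P(n) is itself derivable (as noted above)
 10. from 7 and 9 by modus ponens

Since A was arbitrary, taking A to be any refutable sentence yields the
inconsistency.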

John McLean

------------------------------

Date: Wed, 27 Jul 88 15:55 O
From: Antti Ylikoski tel +358 0 457 2704
<YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: AI and the future of the society

I once heard an (excellent) talk by a person working with Symbolics.
(His name is Jim Spoerl.)

One line by him especially remained in my mind:

"What we can do, and animals cannot, is to process symbols.
(Efficiently.)"


In the human brain there is a very complicated real-time symbol
processing activity going on, and the science of Artificial
Intelligence is in the process of coming to understand and model
this activity.

A very typical example of human real-time symbol processing is
what happens when a person drives a car. Sensory input is analyzed
and symbols are formed from it: a traffic sign; a car driving in the
same direction and passing; the speed being 50 mph. There is some
theory building going on: that black car is in the fast lane and is
driving, I guess, some 10 mph faster than me, so I think it is
going to pass me in about half a minute. To a certain extent, the
driver's behaviour is rule-based: there is, for example, a rule saying
that whenever you see a red traffic light in front of you, you have
to stop the car. (I remember someone said in AIList some time ago that
rule-based systems are "synthetic", not similar to human information
processing. I disagree.)
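
As a purely illustrative aside, the red-light rule above can be written
as a condition-action pair. The little Python sketch below uses made-up
percept fields and names; it is not any particular production system,
just the shape of such a rule:

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    # A test on the current percept, plus a symbol naming the action
    # to take when the test succeeds.
    condition: Callable[[Dict], bool]
    action: str

rules = [
    Rule(lambda p: p.get("traffic_light") == "red", "stop"),
    Rule(lambda p: p.get("overtaking_car_speed_delta_mph", 0) > 0,
         "expect_to_be_passed"),
]

def decide(percept: Dict) -> List[str]:
    # Fire every rule whose condition matches the current percept.
    return [r.action for r in rules if r.condition(percept)]

print(decide({"traffic_light": "red"}))   # ['stop']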


How about a very wild 1984-like fantasy: if there were people who knew
the actual functioning of the human mind as a real-time symbol
processor very well, then they would have unbelievable power over the
souls of the poor, ignorant people, whose feelings and thoughts could
be manipulated without their being aware of it. (Is this kind of thing
being done in the modern world, or is this mere wild imagination?
Listen to the Californian band Death Angel and their piece Mind
Rape, on the LP Frolic through the Park!) And, of course, anyone
possessing this kind of knowledge would certainly do everything in his
power to prevent others from inventing it ... and someone might make
it public to prevent minds from being manipulated ... oh, isn't this
far-fetched.


Whether that fantasy of mine is interesting or just plain ridiculous,
it is a fact that AI opens frightening possibilities for those who
want to use knowledge of the human symbol processor as a tool to
manipulate minds.

Perhaps we will have to take care that AI leads the future of the
human race toward a democratic society, not a 1984-like one.


--- Andy Ylikoski

Disclaimer: just don't take me too seriously.

------------------------------

Date: Thu, 4 Aug 88 19:19:10 PDT
From: kk@Sun.COM (Kirk Kelley)
Subject: global self reference

To those who are familiar with the literature on self reference,
I am curious about the theoretical nature of global self reference.
Consider the following question.

What is the positive rate of change to all models of the fate of that
rate?

Assume a model can be an image or a collection of references that
represent some phenomenon. A model of the fate of a phenomenon is a
model that can be used to estimate the phenomenon's lifetime. A
change to a model is any edit that someone can justify, to users of
the model, as improving the model's validity. A positive rate of
change to a model is a rate such that each discrete unit of time
contains at least one change to the model. Hence, if the rate goes
to 0, that is the end of the lifetime of that positive rate.
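
As a toy illustration only (the names are mine and not taken from the
game described below), the lifetime of such a positive rate can be
computed from a history of per-period change counts, e.g. in Python:

def lifetime_of_positive_rate(changes_per_period):
    # Count the consecutive discrete time units, from the start, in
    # which the model received at least one justified change.  The
    # positive rate ends at the first unit with no changes.
    lifetime = 0
    for count in changes_per_period:
        if count < 1:
            break
        lifetime += 1
    return lifetime

print(lifetime_of_positive_rate([3, 1, 2, 0, 4]))   # prints 3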

Surely an interesting answer to this question falls in the realm of
what might be called global self reference: a reference R that refers
to all references to R. In our case, an interesting answer would be
a model M of all models that model M.

I have implemented such a model as a game that runs in STELLA (on the
Mac). I have played with it as a foundation for analyzing decisions
in the development of emerging technologies such as published
hypertext, and such as the model itself. So I have some practical experience
with its nature. My question is, what is the theoretical nature of
global self reference? What has been said about it? What can
be said about it?

I can show that the particular global self reference in the question
above has the curious property of including anything that attempts
to answer it.

-- kirk

------------------------------

End of AIList Digest
********************
