AIList Digest Monday, 19 Oct 1987 Volume 5 : Issue 239
Today's Topics:
Neuromorphic Systems - Terminology,
Opinion - The Success of AI,
Humor - Two Logician Jokes,
Philosophy - Flawed Human Minds
----------------------------------------------------------------------
Date: 15 Oct 87 14:16:55 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen Smoliar)
Subject: Re: Neural Networks - Pointers to good texts?
In article <1465@ssc-vax.UUCP> dickey@ssc-vax.UUCP (Frederick J Dickey) writes:
>In article <4191@well.UUCP>, wcalvin@well.UUCP (William Calvin) writes:
>> We brain researchers sure get tired of hearing neural-like networks
>> referred to as "neural networks", an established subject for 25 years since
>> the days of Limulus lateral inhibition.
>
>I think the above says that "biological" neural nets have been studied as a
>formal discipline for 25 years and that this great ancestry gives biology
>prior claim to the term "neural nets". Assuming that this is a correct
>interpretation, let me make the following observation. In 1943, McCulloch
>and Pitts published a paper entitled "A logical calculus of the ideas
>immanent in neural nets". Minsky and Papert (Perceptrons) state that this
>paper presents the "prototypes of the linear threshold functions". This paper
>strikes me as clearly being in the "neural net-like" tradition. Now
>1987-1943 = 44. Also note that 44 > 25. Therefore, it appears that the
>"neural net-like" guys have prior claim to the term "neural net". :-).
Well . . . this is all rather silly. The PUBLISHED title of the classic
paper by McCulloch and Pitts is "A Logical Calculus of the Ideas Immanent
in Nervous Activity." They NEVER use "neural net" as a technical term
(or in any other capacity) in the paper. They ARE, however, concerned
with a net model based on the interconnection of elements which they call
neurons--appealing to properties of neurons which were known at the time
they wrote the paper. Personally, I think Calvin has a point. Investigators
who are searching the literature will probably benefit from cues which
distinguish papers about actual physiological properties from those about
computational models of those properties.
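For readers who have not seen the 1943 model, here is a minimal sketch (in Python, purely illustrative; the function names are my own) of the kind of unit Minsky and Papert call the "prototype of the linear threshold functions" — a neuron that fires exactly when its weighted input sum reaches a threshold:

```python
def mp_unit(inputs, weights, threshold):
    """McCulloch-Pitts style unit: output 1 iff the weighted sum
    of the binary inputs reaches the threshold, else output 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With suitable weights and thresholds the unit realizes logical functions:
AND = lambda a, b: mp_unit([a, b], [1, 1], 2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], 1)
NOT = lambda a:    mp_unit([a],    [-1],   0)

assert [AND(1, 1), AND(1, 0)] == [1, 0]
assert [OR(0, 1), OR(0, 0)] == [1, 0]
assert [NOT(0), NOT(1)] == [1, 0]
```

Networks of such units give the "net model" Smoliar describes, without any claim about actual physiology.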
------------------------------
Date: 17 Oct 87 23:58:18 GMT
From: ptsfa!well!wcalvin@ames.arpa (William Calvin)
Subject: Re: Neural Networks - Pointers to good texts?
I thank you all for the suggestions regarding renaming non-neural "Neural
Networks" -- perhaps we can continue the discussion in the newsgroup
comp.ai.neural-nets rather than here in comp.ai as such.
William H. Calvin
University of Washington NJ-15, Seattle WA 98195
[There is also the neuron%ti-csl.csnet@relay.cs.net list. -- KIL]
------------------------------
Date: 16 Oct 87 06:07:47 GMT
From: ucsdhub!jack!man!crash!gryphon!tsmith@sdcsvax.ucsd.edu (Tim Smith)
Subject: The Success of AI
There is one humbling sense in which the work in AI in the
past 20 or so years will help considerably in the ultimate
understanding of human intelligence. If you look at concepts
of the brain in the recent past, you see that whatever was
the most current technological marvel served as a metaphor
for the brain. In the early 20th century the brain was a
telephone exchange. After WWII, the systems organization
metaphor was often used (the brain was a large corporation,
with a CEO, VPs, directors, etc.).
It wasn't until computers came along that there was a
metaphor for the brain powerful enough to be taken seriously.
Once people started to try to imitate their brains on
computers, some limitations became apparent. Interestingly
enough, the limitations are not so much in the technological
metaphor as in the present concept of the brain, or of the mind
in general.
There is no reason, in principle, that a very powerful
digital computer cannot imitate a mind, *as long as a mind
is some kind of abstract logic machine*. What AI has
discovered (though it is very unwilling to admit it) is that
this Cartesian (or even Platonic) concept of the mind is
hopelessly inadequate as a basis for understanding human
intelligence!
To conceive of the human mind as a disembodied logic machine
seemed like a great breakthrough to scientists and
philosophers. If it was this, it could be studied and
understood. If it wasn't this, then any scientific study of
the mind (hence, of intelligence) appeared to be fruitless.
The success rate in AI research (as well as most of cognitive
science) in the past 20 years is not very encouraging.
Predictions, based on very optimistic views of the problem
domain, have not been met. A few successful spin-offs have
occurred (expert systems, better programming tools and
environments), but in general the history is one of failure.
Computers do not process natural language very well, they cannot
translate between languages with acceptable accuracy, they
cannot prove significant, original mathematical theorems.
What AI researchers and other cognitive scientists now have to
face is fairly clear evidence that simulations of human
intelligence, where human intelligence is modelled as a
disembodied logic machine, are doomed to fail. Better hardware
is not the solution. Connection machines or simple silicon
neural nets are not the answer. A better concept of "mind" is
what is needed now. This is not to say that AI research should
halt, or that computers are not useful in studying human
intelligence. (They are indispensable.) What I think it does
mean is that one or more really original theoretical paradigms
will have to be developed to begin to address the problems.
One possible source of a new way of thinking about the problems
of modelling human intelligence might be found in a revolution
that is beginning in the cognitive sciences. This revolution is
of course not accepted by most cognitive scientists; many are
not even aware of it. It is difficult to characterize the
revolution, but it essentially rejects the Cartesian dualism of
mind and body, and recognizes that an adequate description of
human intelligence must take into account aspects of human
physiology, experience, and belief that cannot *now* be modelled
by simple logic (e.g., programs). For one example of this new
way of thinking, see the recent book by the linguist George
Lakoff, entitled "Women, Fire, and Dangerous Things." (Neither
the book nor the title is frivolous.)
I believe the great success of AI has been in showing that
the old dualistic separation of mind and body is totally
inadequate to serve as a basis for an understanding of human
intelligence.
--
Tim Smith
INTERNET: tsmith@gryphon.CTS.COM
UUCP: {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP: {philabs, trwrb}!cadovax!gryphon!tsmith
------------------------------
Date: 18 Oct 87 01:39:46 GMT
From: PT.CS.CMU.EDU!SPICE.CS.CMU.EDU!spe@cs.rochester.edu (Sean Engelson)
Subject: Re: The Success of AI
Given a sufficiently powerful computer, I could, in theory, simulate
the human body and brain to any desired degree of accuracy. This
gedanken-experiment is the one which put the lie to the biological
anti-functionalists, as, if I can simulate the body in a computer, the
computer is a sufficiently powerful model of computation to model the
mind. I know, for example, that serial computers are inherently as
powerful computationally as parallel computers, though not as
efficient, as I can simulate parallel processing on essentially serial
machines. So we see that if the assumption that the mind is an
inherent property of the body is accepted, we must also accept that a
computer can have a mind, if only by the inefficient expedient of
simulating a body containing a mind.
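Engelson's step from parallel to serial can be sketched concretely. The following is a minimal, purely illustrative Python interleaver (all names are my own invention): several "parallel" tasks are advanced one step at a time, round-robin, on a single serial thread — slower, but computationally equivalent:

```python
def worker(name, steps):
    """A toy 'parallel process': yields once per instruction executed."""
    for i in range(steps):
        yield f"{name} step {i}"

def serial_scheduler(tasks):
    """Round-robin interleaving on one serial machine: each pass
    resumes one task for exactly one step, until all tasks finish."""
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))   # advance this task one step
            tasks.append(task)         # requeue it behind the others
        except StopIteration:
            pass                       # task finished; drop it
    return trace

trace = serial_scheduler([worker("A", 2), worker("B", 2)])
# The two workers' steps come out interleaved: A, B, A, B.
```

The point is only that seriality costs efficiency, not expressive power — exactly the premise of the gedanken-experiment above.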
-Sean-
--
Sean Philip Engelson I have no opinions.
Carnegie-Mellon University Therefore my employer is mine.
Computer Science Department
ARPA: spe@spice.cs.cmu.edu
UUCP: {harvard | seismo | ucbvax}!spice.cs.cmu.edu!spe
------------------------------
Date: Thu 15 Oct 87 16:36:24-CDT
From: David Throop <AI.THROOP@R20.UTEXAS.EDU>
Subject: Two Logician Jokes
G-120-97A
I was hanging out at the Logicians Union Hall the other day and the place
was full of logicians, poring over logician's manuals and exchanging
gossip. Well, every so often, one of them would call out a number, and all
of the others would laugh real hard. Then they'd all go back to whatever
they were doing.
This seemed real odd behavior for such logical people. So I asked Robert,
who's a logician friend of mine there, what was going on.
"Hey, this is a hall for logicians," he said. "A while back, we collected
all of the jokes that we could prove were funny and put them in a catalog.
Everybody here's read it. Now when somebody wants to tell a joke, they
just call out its serial number." And he showed me the logical joke
catalog.
I thumbed through it for a while. Found a joke I liked. And at an
opportune time, I called it out: "G-120-97B!"
Nobody laughed.
I turned to Robert and said "So how come they didn't laugh?"
He shrugged. "You didn't tell it right."
=============================================================================
G-120-97C
I was hanging out at the Logicians Union Hall the other day and the place
was full of logicians, poring over logician's manuals and exchanging
gossip. Well, every so often, one of them would call out a number, and all
of the others would laugh real hard. Then they'd all go back to whatever
they were doing.
This seemed real odd behavior for such logical people. So I asked Robert,
who's a logician friend of mine there, what was going on.
"Hey, this is a hall for logicians," he said. "A while back, we collected
all of the jokes that we could prove were funny and put them in a catalog.
Everybody here's read it. Now when somebody wants to tell a joke, they
just call out its serial number." And he showed me the logical joke
catalog.
I thumbed through it for a while. Found a joke I liked. Actually, THIS
was the joke. This joke I'm telling you right now, it's numbered
G-120-97C. And here's where it gets hard. Because if the joke is funny,
then the logicians laugh, and that spoils the punchline. And the joke
isn't funny any more. But the logicians will laugh at any funny joke.
So if they don't laugh, it's because the joke isn't funny. But then the
punchline works and it's funny again.
So I can't tell you whether or not the logicians laughed. Either way, it
spoils the punchline.
------------------------------
Date: 16 Oct 87 12:56:21 GMT
From: ihnp4!homxb!genesis!odyssey!gls@ucbvax.Berkeley.EDU
(g.l.sicherman)
Subject: Re: The Job Hunt
> > > Mr X. goes to an employment interview and gets angry or flustered and
> > > says something that causes him to be rejected. Without knowing how his
> > > mind works you can conclude it was flawed.
> >
> > And you could be wrong. Most likely Mr. X. didn't want the job after
> > all. He only wanted you to think he wanted the job. Give him credit
> > for some intelligence!
>
> Also flawed from Mr. X's point of view. Sicherman argues that X only
> seemed to get angry or flustered, in order to make sure the company
> didn't make him an offer, because during the interview he decided he
> didn't want a job with them. If I attributed Mr. X's actions to
> intelligence I would expect him to conclude gracefully, let them make
> an offer, and reject the offer, without making a bad impression on
> somebody who later might be in a position to offer him a job in another
> company. And I don't care whether you blame emotions or habits.
You misunderstood me. I suggested not that X *seemed* to get angry, but
that he genuinely got angry. Emotions are not some kind of side effect--
they serve a constructive purpose. Anger, in particular, drives away
or destroys things that threaten your well-being.
Most likely Mr. X wants to avoid getting a job, but wants people in
general or certain people in particular to think he wants a job. It
happens all the time! You're wasting your time when you pontificate
to Mr. X. He's not going to tell a back-seat driver like you what he
really wants.
> > By this criterion, we are all flawed.
> That's exactly what I meant.
Well, it's a useless and insulting criterion.
--
Col. G. L. Sicherman
...!ihnp4!odyssey!gls
------------------------------
Date: 16 Oct 87 17:07:02 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The Job Hunt
In article <333@odyssey.ATT.COM>, gls@odyssey.ATT.COM (g.l.sicherman) writes:
>
> You misunderstood me. I suggested not that X *seemed* to get angry, but
> that he genuinely got angry. Emotions are not some kind of side effect--
> they serve a constructive purpose. Anger, in particular, drives away
> or destroys things that threaten your well-being.
>
> Most likely Mr. X wants to avoid getting a job, but wants people in
> general or certain people in particular to think he wants a job. It
> happens all the time! You're wasting your time when you pontificate
> to Mr. X. He's not going to tell a back-seat driver like you what he
> really wants.
Do we need a definition of anger? Anger, as I understand it, is an
emotion that catalyzes physical actions but interferes with reason.
I agree that Mr. X may rationalize his action, but I don't believe
it was his best choice.
> > > By this criterion, we are all flawed.
>
> > That's exactly what I meant.
>
> Well, it's a useless and insulting criterion.
Pardon me. I thought what we all needed was a little humility. If
Col. G. L. Sicherman thinks either that he is perfect, or that I am
perfect, I disagree. Tentatively.
In my simplistic view, the mind is a complex system that came to be
what it is through variation and natural selection. It has
functions that we don't understand, adaptations for purposes we
don't understand, and adaptations for purposes that no longer exist.
If it's perfect, that's a marvelous coincidence.
If the aim of artificial intelligence is to model the human mind,
Col. Sicherman and I seem to agree that it's not enough. To model
anger, for instance, we also need artificial emotion. But if the
aim of artificial intelligence is to create a purely intelligent
entity without maladaptive emotions, Col. Sicherman and I would
disagree. I believe that at least some emotional responses are
maladaptive and would not exist in a perfect intelligence, while he
apparently believes the human mind is perfect and cannot be improved
upon.
So let us agree to disagree, and, as I suggested in an earlier
article, let some AI researchers model the human mind, while others
build something better adapted to specific tasks.
M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1
------------------------------
Date: 17 Oct 87 08:52:56 GMT
From: imagen!atari!portal!cup.portal.com!tony_mak_makonnen@ucbvax.Berkeley.EDU
Subject: Re: Flawed human minds
There is a strange and profound truth to the following statements:
All the universe is the brain.
All you know is the mind.
The second statement is more daring than the first. It is necessitated
by the need to posit something that is more than the physical parts
of the brain. Assume a completely isolated, closed system capable of
reflection. I submit that such a thinking thing could not posit an
essential flaw in its makeup. We see here many individual manifestations
of mind talking about flaws that one can only assume must be attributed
to the brain. What is that which stands back and reflects on the flawed
function of that very instrument without which it would be a "null" in
this universe? Can we call it "I" or "mind"? But then some seem to posit
other "I"s that can stand back and look at the first "I", and so on.
Very confusing once we leave the safety of behavioral psych. What seems
to be the moral at this point? Accept the obvious fact that the brain is
not very efficient at calculative functions, and the equally true fact
that it is capable of creating machines that can do that much better.
Forget about the other abstract stuff. This mind knows the limits of
some brain functions and compensates for them. It has so far proved
adequate for the primary directive, "the survival of the species and
life." I submit to the members of this jury that we cannot yet say that
it is flawed. However, should we reach the ultimate folly of
self-destruction, then only the absence of an audience and a judge will
prevent a definitive verdict.
------------------------------
End of AIList Digest
********************