AIList Digest Saturday, 24 Oct 1987 Volume 5 : Issue 245
Today's Topics:
Comments - Lenat's AM & The Success of AI & Dreyfus's Philosophy
----------------------------------------------------------------------
Date: 20 Oct 87 16:14:48 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen
Smoliar)
Subject: Re: The Success of AI
In article <9320@ut-sally.UUCP> brian@ut-sally.UUCP (Brian H. Powell) writes:
> If you read the summaries of AM, you think it's powerful. Once you read
>the entire dissertation, you realize it's not quite as great a program as you
>had thought, but you still think it's good research.
>
Actually, Lenat and John Seely Brown did something rather like this when they
wrote the paper "Why AM and Eurisko Appear to Work" for AAAI-83.
------------------------------
Date: Thu 22 Oct 87 08:35:35-PDT
From: Douglas Edwards <EDWARDS@WARBUCKS.AI.SRI.COM>
Subject: Reworking Lenat
The claim that Lenat's work has not been retested should not be
allowed to pass without being questioned. Not only should Weimin
Shen's work, already cited by Tom Dietterich, be taken into account,
but there is apparently another attempt to work with the same approach
going on at MIT. *Artificial Intelligence Abstracts* cites the MIT AI
Memo AIM-898, "Discovery Systems" by K. W. Haase Jr. (*AI Abstracts*,
volume 1 number 1, January 1987). I have not yet read (or even
obtained) this memo, but the abstract suggests that Haase has not only
reimplemented Lenat's work but also tried to discover a principled
explanation for why it works, and that Haase's explanation for AM's
success would be quite different from Dietterich's and Shen's. I look
forward to learning more about Haase's work. I don't know if Haase
reads AILIST; if he does, it would be interesting to hear his own
comments on the AM controversy.
--- Douglas D. Edwards
(edwards@ai.sri.com)
------------------------------
Date: 20 Oct 87 14:48:35 GMT
From: bpa!cbmvax!snark!eric@burdvax.prc.unisys.com (Eric S. Raymond)
Subject: Re: The Success of AI
In article <9320@ut-sally.UUCP>, brian@ut-sally.UUCP (Brian H. Powell) writes:
> I feel compelled to challenge this, but not necessarily the rest of your
> article.
> AM wasn't a theorem prover. From the July, 1976 dissertation:
Thanks for the correction, which I also received by email from another comp.ai
regular. I never saw Lenat's dissertation, just an expository paper in one of
the journals. I guess the reason I thought the sucker had a theorem prover
attached was that I was working on LISP support for a theorem prover at the
time, and my associative memory got a collision in its hash tables :-).
Nevertheless, I think my more general observations about AI's definitional
problem remain valid. Compilers are a 'success' of AI. So are heuristic-based
search-and-backtrack algorithms. So is the visual analysis preprocessing used
in seeing pick-and-place robots. So (most recently) are 'expert systems'.
In *each case*, these problem areas were defined out of the AI field as soon
as they spawned halfway-usable technologies and acquired their own research
communities.
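To make the term concrete, here is a minimal sketch of the heuristic
search-and-backtrack pattern (a sketch in Python, not code from any of the
systems mentioned), using N-queens as a stand-in problem:

    def solve_queens(n, placed=()):
        """Place one queen per row, retreating out of dead ends."""
        row = len(placed)
        if row == n:
            return placed                    # every row filled: a solution
        # Heuristic: try center columns first -- a cheap ordering rule.
        for col in sorted(range(n), key=lambda c: abs(c - n // 2)):
            safe = all(col != c and abs(col - c) != row - r
                       for r, c in enumerate(placed))
            if safe:
                result = solve_queens(n, placed + (col,))
                if result is not None:
                    return result            # success propagates upward
        return None                          # dead end: backtrack

    print(solve_queens(8))                   # one tuple of column positions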
I think the same thing is about to happen to neural nets, BTW...
--
Eric S. Raymond
UUCP: {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax,vu-vlsi}!snark!eric
Post: 22 South Warren Avenue, Malvern, PA 19355 Phone: (215)-296-5718
------------------------------
Date: 21 Oct 87 23:33:18 GMT
From: psuvax1!vu-vlsi!swatsun!scott@rutgers.edu (Jay Scott)
Subject: Re: The Success of AI
> Nevertheless, I think my more general observations about AI's definitional
> problem remain valid. Compilers are a 'success' of AI. So are heuristic-based
> search-and-backtrack algorithms. So is the visual analysis preprocessing used
> in seeing pick-and-place robots. So (most recently) are 'expert systems'.
> In *each case*, these problem areas were defined out of the AI field as soon
> as they spawned halfway-usable technologies and acquired their own research
> communities.
I agree. And I want to understand better why it's so.
Here's one speculation: People see intelligence as mysterious, intrinsically
non-understandable. So anything understood can't be part of intelligence,
and can't be part of AI. I assume this was what Eric had in mind in a
previous article, when he mentioned "hidden vitalist premises".
Of course some people believe explicitly that intelligence is mystical,
and say so. But even AI people may implicitly feel that, oh, this algorithm
isn't clever enough, real intelligence has to be cleverer than that. And
so it goes.
Any other good ideas?
> Eric S. Raymond
> UUCP: {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax,vu-vlsi}!snark!eric
> Post: 22 South Warren Avenue, Malvern, PA 19355 Phone: (215)-296-5718
Jay Scott
...bpa!swatsun!scott
------------------------------
Date: 20 Oct 87 16:04:07 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen
Smoliar)
Subject: Re: The Success of AI (Analysis of AI lack of progress).
Those who would like a taste of the Dreyfus style before embarking upon one of
his books in its entirety would do well to consult the Summer 1986 issue of
IEEE EXPERT. The article "Why Expert Systems Do Not Exhibit Expertise,"
by Hubert and Stuart Dreyfus, is an excerpt from MIND OVER MACHINE: THE
POWER OF HUMAN INTUITION AND EXPERTISE IN THE ERA OF THE COMPUTER. While
there is definitely merit to deflating exaggerated claims about expert systems
which have been made in the name of salesmanship, Hubert Dreyfus approaches
this issue as a philosopher. Consequently, the technical baggage he carries
is often outdated and inadequate. Were he to wage his
campaign on the battleground of the philosophy of mind, he might come away
with some notable victories; but by descending to the level of technology,
he often falls into traps of misconception.
Here is a sample passage:
    Humans often think by forming images and comparing them
    holistically. This process is quite different from the
    logical, step-by-step operations that logic machines
    perform.
There are several things wrong here. First of all, a holistic theory of
memory or reasoning remains a HYPOTHESIS. Claiming it as an observation
is a gross misrepresentation of the current state of cognitive science.
Second, the term "logic machine" has been introduced to capture a particular
machine architecture which lacks what Dreyfus wants it to lack. He does
not admit of the possibility of an alternative architecture for the
mechanization of thought which could model the holistic hypothesis.
Fortunately, more productive cognitive scientists HAVE pursued this
line of reasoning.
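For illustration, here is a minimal sketch of one such alternative
architecture, a Hopfield-style associative memory (the choice of net and the
toy patterns are assumptions of this illustration): a probe is matched
against stored patterns all at once, not symbol by symbol.

    import numpy as np

    # Two stored +/-1 "images"; W is the Hebbian connection matrix.
    patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                         [1, 1, -1, -1, 1, 1, -1, -1]])
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)                  # no self-connections

    def recall(probe, steps=10):
        """Settle a noisy probe toward the nearest stored pattern."""
        s = probe.copy()
        for _ in range(steps):
            # every unit updates from the whole current state at once
            s = np.where(W @ s >= 0, 1, -1)
        return s

    noisy = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # pattern 0, one bit flipped
    print(recall(noisy))                             # recovers the stored pattern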
In any event, the text continues in an attempt to elaborate upon this point:
    For instance, human beings use images to predict how certain
    events will turn out.
This is, again, hypothesis. It rests on a weaker hypothesis which is never
cited: that human beings use MODELS to predict how certain events will
turn out. This is the whole "mental models" approach to cognition, for
which there are both a substantial literature and experiments in mechanical
implementation.
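A minimal sketch of what such a mechanical implementation amounts to (the
projectile domain and the crude Euler stepping are assumptions of this
illustration): the prediction comes from running a small internal model
forward, not from consulting a stored picture.

    import math

    def predict_landing(speed, angle_deg, dt=0.001, g=9.81):
        """Run a crude internal model forward to predict where a ball lands."""
        vx = speed * math.cos(math.radians(angle_deg))
        vy = speed * math.sin(math.radians(angle_deg))
        x = y = 0.0
        while True:
            x += vx * dt                # step the model, not the world
            vy -= g * dt
            y += vy * dt
            if y <= 0:
                return x                # the model's predicted landing point

    print(round(predict_landing(10.0, 45.0), 2))   # near the analytic 10.19 m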
The text continues:
    Programming a computer to analyze a scene has turned out to
    be very difficult. Such programs require a great deal of
    computation, and they work only in special cases with objects
    whose characteristics the computer has been programmed to
    recognize in advance.
Nevertheless, such programs may work better than people in those special
cases and can be used in factories. That is why industrial robotics has
become as effective as it has. I regard this as an instance of the point
I raised regarding perpetual motion machines in an earlier note: had
Bessler's machine actually been put to work and found to run for
significantly long periods of time without energy input, it would have been
an impressive contribution even if its energy dissipated very slowly,
rather than not at all. Similarly, we would do better to
study special cases of scene analysis which are successes rather than
belabor the obstacles to a more general approach to the task.
It gets better:
    But that is just the beginning of the problem. Computers
    currently can make inferences only from lists of facts.
    It's as if to read a newspaper you had to spell out each
    word, find its meaning in the dictionary and diagram every
    sentence.
This strikes me as a gross misrepresentation of mechanical reasoning, and
I think the crux of this misrepresentation is a confusion between reasoning
and representation. Fortunately, there are other philosophers who appreciate
that these are distinct issues; but they don't seem to attract as much
attention as Dreyfus.
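A minimal sketch of the distinction (the facts and the single rule are
invented for illustration): the fact list is the REPRESENTATION, and the
chaining loop that runs over it is the REASONING; either can be changed
without touching the other.

    # Facts are the representation; the loop below is the reasoning.
    facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

    def grandparent_rule(fs):
        """If X is parent of Y and Y is parent of Z, infer X grandparent of Z."""
        return {("grandparent", x, z)
                for (r1, x, y1) in fs if r1 == "parent"
                for (r2, y2, z) in fs if r2 == "parent" and y1 == y2}

    rules = [grandparent_rule]

    changed = True
    while changed:                      # forward-chain to a fixed point
        changed = False
        for rule in rules:
            new = rule(facts) - facts
            if new:
                facts |= new
                changed = True

    print(facts)                        # now holds ("grandparent", "ann", "cal")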
One last jab in parting:
    However, a computer cannot recognize emotions such as anger
    in facial expressions, because we know of no way to break
    down anger into elementary symbols. Therefore, logic machines
    cannot see the similarity between two faces that are angry.
    Yet human beings can discern the similarity almost instantly.
This strikes me as another example of sloppy thinking. Are we talking
about a GEDANKEN experiment here? If so, how are we to define it?
Are we looking at faces out of context in an attempt to infer emotion?
If so, then I would claim that humans are nowhere near as good as is
claimed. Indeed, people are notorious for misreading emotion; failures of
this skill have probably precipitated many major historical events.
Seymour Papert used to accuse Dreyfus of committing the "superhuman human"
fallacy by demanding that an artificial intelligence surpass a human
one. Here is a situation where Dreyfus has gone out on a limb which he
should have left alone. Our understanding of how PEOPLE exhibit and
perceive emotion is sufficiently weak that, for the most part, artificial
intelligence seems to have the good sense to leave it in peace.
------------------------------
Date: 22 Oct 87 14:21:54 GMT
From: PT.CS.CMU.EDU!SPICE.CS.CMU.EDU!spe@cs.rochester.edu (Sean
Engelson)
Subject: The success of AI (misunderstandings)
A couple of clarifications in response to recent posts:
(a) My name is Engelson---NOT Engleson.
(b) I did not state that we could simulate the human body and brain at
this point in time. However, we could at some point, presumably, get
to the point where we know precisely how the body is constructed, and
construct a simulation of the physical processes that occur. This is
reasonable because the human body is finite in extent, and thus there
is a finite amount of information to discover, thus it can be
discovered in finite (although possibly very large) time. This is why
I say that computers are not a less-powerful model of computation than
the human brain, as the one can simulate the other. By 'as powerful'
I mean that the same computations may be performed by both; in the
same sense that a serial computer is as powerful as a parallel one, as
the one can simulate the other, although with a great loss of efficiency.
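To illustrate that last claim, a minimal sketch (the neighbor-averaging
update is an arbitrary toy rule): a serial loop computes one synchronous
step of N parallel units from a frozen snapshot, so the same function is
computed, only N times more slowly.

    def parallel_step(state):
        """What N units would do at once: each averages with its neighbors."""
        snapshot = list(state)               # freeze inputs, as hardware would
        n = len(snapshot)
        return [(snapshot[i - 1] + snapshot[i] + snapshot[(i + 1) % n]) / 3
                for i in range(n)]           # serial loop over all units

    state = [0.0, 0.0, 9.0, 0.0, 0.0]
    for _ in range(3):                       # three "parallel" time steps
        state = parallel_step(state)
    print(state)                             # same values a parallel machine yields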
(c) No, it would not be necessary to simulate the physical world in
our hypothetical super-computer. We could simulate the actions of the
sensory inputs by filtering such things as movie-camera output,
tactile sensors, etc., through a simulation of human sensory organs.
We know that that is theoretically possible through the same line of
reasoning as above.
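For instance, a minimal sketch of such filtering (the 1-D frame and the
two-sample surround are simplifying assumptions): real retinas report local
contrast rather than raw light, and a simulated retina can impose the same
transformation on camera output before it reaches the simulated brain.

    def toy_retina(frame):
        """Map raw intensities to center-surround (contrast) responses."""
        out = []
        for i in range(1, len(frame) - 1):
            surround = (frame[i - 1] + frame[i + 1]) / 2
            out.append(frame[i] - surround)  # what the "optic nerve" carries
        return out

    camera_row = [5, 5, 5, 9, 5, 5, 5]       # a bright spot on a flat field
    print(toy_retina(camera_row))            # largest response at the spot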
-Sean-
--
Sean Philip Engelson I have no opinions.
Carnegie-Mellon University Therefore my employer is mine.
Computer Science Department
----------------------------------------------------------------------
ARPA: spe@spice.cs.cmu.edu
UUCP: {harvard | seismo | ucbvax}!spice.cs.cmu.edu!spe
------------------------------
Date: 23 Oct 87 09:00:36 GMT
From: mcvax!varol@uunet.uu.net (Varol Akman)
Subject: Re: The success of AI (misunderstandings)
In article <213@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>
> ....................
>
>discovered in finite (although possibly very large) time. This is why
>I say that computers are not a less-powerful model of computation than
>the human brain, as the one can simulate the other. By 'as powerful'
> ---------------------------------
Congratulations, when are you going to receive your Nobel prize
for discovering that?
Varol Akman, CWI, Amsterdam
What is an individual? A very good question. So good, in fact, that
we should not try to answer it. - DANA SCOTT
------------------------------
Date: 23 Oct 87 17:59:07 GMT
From: uwslh!lishka@speedy.wisc.edu (Christopher Lishka)
Subject: Re: The success of AI (misunderstandings)
In article <213@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>
>A couple of clarifications in response to recent posts:
>
>(b) I did not state that we could simulate the human body and brain at
>this point in time. However, we could at some point, presumably, get
>to the point where we know precisely how the body is constructed, and
>construct a simulation of the physical processes that occur. This is
>reasonable because the human body is finite in extent, and thus there
>is a finite amount of information to discover, thus it can be
>discovered in finite (although possibly very large) time. This is why
>I say that computers are not a less-powerful model of computation than
>the human brain, as the one can simulate the other. By 'as powerful'
>I mean that the same computations may be performed by both; in the
>same sense that a serial computer is as powerful as a parallel one, as
>the one can simulate the other, although with a great loss of efficiency.
>
I have some questions for Mr. Engelson (forgive me if I misspelled your
name in my last posting), which others on the net might also answer:
How do we know that a computer and a human are "as powerful" as each
other? How do we know that the same computations can be performed on
each "entity?" Referring back to the biological sciences (esp.
Neurobiology), it would seem that there is so much that is *not* known
that coming to conclusions about abstract things such as how a human
body computes (especially billions of computations that we are not
aware of) is a bit naive at this point. This echoes many mistakes made in
the past about the human body and mind: the brain as complex plumbing, the
brain as a rather large telephone network, etc. Can the assumption that
the two are equal in their
power to compute really be made based on what humans know (and do not
know) about their own functioning? Just a thought (maybe I am looking
at this the wrong way...).
By the same reasoning as above, is the analogy between serial and
parallel computers (and a computer and human body) really a good one?
The differences between any computer and a human body (based on the
little we do know) are staggering. In theory, things appear to be the
same. But computers do not have hormones, neurotransmitters, internal
messengers, complex channels, etc. for each of their "basic"
constituents (which I am assuming are cells). Now, theoretically they
may not be necessary. In constructing a model, it is easy to overlook
what can be implemented and what is easy to implement. But
practically the mechanisms may be necessary. I don't know. No one
else knows. But I do know that my Professor of Neurobiology (whom I
think is a good source) as well as the Grad. Students I have spoken
with *all* warn me to beware of these oversights, because the small
details are what do make the difference. If these messenger molecules
and different neurotransmitters and sodium/potassium/calcium channels
and electrical vs. chemical channels were totally useless, why have
they survived millions of years of evolution? Are we then only
super-parallel processors when compared to parallel-processing
computers, just as parallel-processing computers are to serial
computers?
>(c) No, it would not be necessary to simulate the physical world in
>our hypothetical super-computer. We could simulate the actions of the
>sensory inputs by filtering such things as movie-camera output,
>tactile sensors, etc., through a simulation of human sensory organs.
>We know that that is theoretically possible through the same line of
>reasoning as above.
Is this reasonable? Could we raise a human being properly by hooking
his retinal receptors to wires, his aural receptors to wires, his
tongue connections to a computer simulation, etc.? Would we get a
*normal* person? Personally, I don't think so, but then I don't know;
no one knows. And until someone such as Hitler comes along, the
question will probably remain unanswered. Now, I feel this applies to
computers because we would, in effect, be doing the same thing (given
that we could artificially create a model of a human in a computer).
You would still need to simulate the real world in the images that you
gave the machine. The images would need to respond to the machine.
When the machine wanted to move, all of the images and artificial
senses would need to reflect that. When the machine wanted to
ask a question while standing on its head, twiddling its fingers,
chewing gum, and computing pi to the fourth power, could the images
and artificial senses fed to it effectively simulate that? (I know,
it probably wouldn't have a head or do those things, so just insert
any funny little thing that a "child" computer-modelled human would do
at once.) Again, no small feat. Is this really possible in the
future?
>Sean Philip Engelson I have no opinions.
Just some thoughts of mine (the above are NOT intended to be flames).
I feel this is a very interesting discussion, but in the end it hinges on
one's personal beliefs and philosophies (but then, what doesn't ;-)
The usual disclaimer applies (including the bit about the cockatiels).
-Chris
--
Chris Lishka /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
"What, me, serious? Get real!" \{seismo, harvard,topaz,...}!uwvax!uwslh!lishka
------------------------------
Date: 23 Oct 87 16:32:24 GMT
From: violet.berkeley.edu!ed298-ak@jade.Berkeley.EDU (Edouard
Lagache)
Subject: Clarifying Dreyfus's work (Re: The Success of AI).
I would like to clarify some of the aspects of Hubert Dreyfus's
work that were overlooked by Stephen Smoliar in his note. I won't try
to defend Dreyfus, since I doubt that many people on this SIG are really
open-minded enough to consider the alternative Dreyfus proposes, but
for the sake of correctness:
Most of Mr. Smoliar's points are in fact dealt with in Dreyfus's first
book. The second book was intended more for the general public, thus
it glosses over a number of important arguments that are in the first
book. As a matter of opinion, I like the first book better, although
it is probably important to read both books to understand his full
position. The first book is:
    What Computers Can't Do: The Limits of Artificial
    Intelligence, Harper and Row, 1979.
One point where Mr. Smoliar misses Dreyfus completely is in his
assumption that Dreyfus is talking about models. Dreyfus is far more
radical than that: he believes that humans don't make models; rather,
they carry a collection of specific cases (images?).
Anyone who is at all honest in this field has to admit that there
are a lot of failures to be accounted for. While I feel that Dreyfus
is too pessimistic in his outlook, I feel that there is value in
looking at his perspective. I would hope that by reflecting on (and
reacting against) such skepticism, A.I. researchers would be able to
sharpen their understanding of both human and Artificial Intelligence.
Edouard Lagache
lagache@violet.berkeley.edu
------------------------------
End of AIList Digest
********************