AIList Digest Thursday, 25 Aug 1988 Volume 8 : Issue 65
Philosophy:
AI and the Vincennes incident
Animal Behavior and AI
Navigation and symbol manipulation
Dual encoding, propositional memory and...
Can we human beings think two different things in parallel?
----------------------------------------------------------------------
Date: 19 Aug 88 1449 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: AI and the Vincennes incident
I agree with those who have said that AI was not involved in the
incident. The question I want to discuss is the opposite of those
previously raised. Namely, what would have been required so that
AI could have prevented the tragedy?
We begin with the apparent fact that no-one thought about the Aegis
missile control system being used in a situation in which discrimination
between civilian traffic and attacking airplanes would be required.
"No-one" includes both the Navy and the critics. There was a lot
of criticism of Aegis over a period of years before 1988. All the
criticism that I know about concerned whether it could stop multiple
missile attacks as it was designed to do. None of it concerned the
possibility of its being used in the situation that arose. Not even
after it was known that the Vincennes was deployed in the Persian
Gulf was the issue of shooting down airliners (or news helicopters) raised.
It would have been better if the issue had been raised, but it appears
that we Earthmen, regardless of political position, aren't smart
enough to have done so. Now that a tragedy has occurred, changes will
be made in operating procedures and probably also in equipment and
software. However, it seems reasonably likely that in the future
additional unanticipated requirements will lead to tragedy.
Maybe an institutional change would bring about improvement, e.g.
more brainstorming sessions about scenarios that might occur. The
very intensity of the debate about whether the Aegis could stop
missiles might have ensured that any brainstorming that occurred
would have concerned that issue.
Well, if we Earthmen aren't smart enough to anticipate trouble,
let's ask if we Earthmen are smart enough and have the AI or other
computer technology to design AI systems
that might help with unanticipated requirements.
My conclusion is that we probably don't have the technology yet.
Remember that I'm not talking about explicitly dealing with the
problem of not shooting down civilian airliners. Now that the
problem is identified, plenty can be done about that.
Here's the scenario.
Optimum level of AI.
Captain Rogers: Aegis, we're being sent to the Persian Gulf
to protect our ships from potential attack.
Aegis (which has been reading the A.P. wire, Aviation Week, and
the Official Airline Guide on-line edition): Captain, there may
arise a problem of distinguishing attackers from civilian planes.
It would be very embarrassing to shoot down a civilian plane. Maybe
we need some new programs and procedures.
I think everyone knowledgeable will agree that this dialog is beyond
the present state of AI technology. We'd better back off and
ask what is the minimum level of AI technology that might have
been helpful.
Consider an expert system on naval deployment, perhaps not part
of Aegis itself.
Admiral: We're deploying an Aegis cruiser to the Persian Gulf.
System: What kinds of airplanes are likely to be present within
radar range?
Admiral: Iranian military planes, Iraqi military planes, Kuwaiti
planes, American military planes, planes and helicopters hired
by oil companies, civilian airliners.
System: What is the relative importance of these kinds of airplanes
as threats?
It seems conceivable that such an expert system could have been
built and that interaction with it might have made someone think
about the problem.
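As a purely illustrative sketch (written here in Python, with aircraft
categories and cost ratings invented for the example rather than taken from
Aegis or any actual Navy system), the minimum-level interaction could amount
to little more than a checklist program that, for each kind of aircraft the
planner expects within radar range, asks how it will be told apart from an
attacker before weapons release:

    # deployment_review.py -- illustrative checklist-style "deployment
    # review" system; all categories and ratings are invented examples.

    MISIDENTIFICATION_COST = {
        "hostile military aircraft": "low",
        "friendly military aircraft": "high",
        "oil company charter": "high",
        "civilian airliner": "very high",
        "news helicopter": "very high",
    }

    def review_deployment(theater, expected_aircraft):
        """Return the questions a planner should answer before deployment."""
        questions = ["What rules of engagement apply in the %s?" % theater]
        for kind in expected_aircraft:
            cost = MISIDENTIFICATION_COST.get(kind, "unknown")
            questions.append(
                "How will %s traffic be identified? "
                "(cost of wrongly engaging it: %s)" % (kind, cost))
            if cost in ("high", "very high", "unknown"):
                questions.append(
                    "What procedure distinguishes a %s from an attacking "
                    "aircraft before weapons release?" % kind)
        return questions

    if __name__ == "__main__":
        for q in review_deployment(
                "Persian Gulf",
                ["hostile military aircraft", "civilian airliner",
                 "oil company charter"]):
            print(q)

Even a program this shallow would force the civilian-airliner question to be
asked before deployment, which is all the minimum level of technology
discussed above would need to accomplish.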
------------------------------
Date: 22 Aug 88 18:42:53 GMT
From: zodiac!ads.com!dan@ames.arc.nasa.gov (Dan Shapiro)
Reply-to: zodiac!ads.com!dan@ames.arc.nasa.gov (Dan Shapiro)
Subject: Animal Behavior and AI
Motion control isn't the only area where studying animals has merit.
I have been toying with the idea of studying planning behavior in
various creatures; a reality check would add to the current debate
about "logical forethought" vs. "reactive execution" in the absence of
plan structures.
A wrinkle is that it would be very hard to get a positive fix on an
animal's planning capabilities, since all we can observe is its
behavior (which could be motivated by a range of mechanisms).
My thought is to study what we would call "errors" in animal behavior
- behaviors that a more cognizant or capable planning engine would avoid.
It seems to me that there must be a powerful difference between animal
planning/action strategies and (almost all) current robotic
approaches; creatures manage to do something reasonable (they survive)
in a wide variety of situations while robots require very elaborate
knowledge in order to act in narrow domains.
------------------------------
Date: 23 Aug 88 06:05:43 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: navigation and symbol manipulation
In a previous article, Stephen Smoliar writes:
>It is also worth noting that Chapter 8 of Gerald Edelman's NEURAL DARWINISM
>includes a fascinating discussion of the possible role of interaction between
>sensory and motor systems. I think it is fair to say that Edelman shares
>Nagle's somewhat jaundiced view of mathematical logic, and his alternative
>analysis of the problem makes for very interesting, and probably profitable,
>reading.
I do not take a "jaundiced view" of mathematical logic, but I
think its applicability is limited. I spent some years on automated program
verification (see my paper in ACM POPL '83) and have a fairly good idea of
what can be accomplished by automated theorem proving. I consider mathematical
logic to be a very powerful technique when applied to rigidly formalizable
systems. But outside of such systems, it is far less useful. Proof is so
terribly brittle. There have been many attempts to somehow deal with
the brittleness problem, but none seem to be really satisfying. So,
it seems appropriate to accept the idea that the world is messy and go
from there; to seek solutions that can begin to cope with the messiness of
the real world.
The trouble with this bottom-up approach, of course, is that you
can spend your entire career working on problems that seem so utterly
trivial to people who haven't struggled with them. Look at Marc
Raibert's papers. He's doing very significant work on legged locomotion.
Progress is slow; first bouncing, then constrained running, last year a forward
flip, maybe soon a free-running quadruped. A reliable off-road runner is
still far away. But there is real progress every year. Along the way
are endless struggles with hydraulics, pneumatics, gyros, real-time control
systems, and mechanical linkages. (I spent the summer of '87 overhauling
an electrohydraulic robot, and I'm now designing a robot vehicle. I can
sympathise.)
How much more pleasant to think deep philosophical thoughts.
Perhaps, if only the right formalization could be found, the problems
of common-sense reasoning would become tractable. One can hope.
The search is perhaps comparable to the search for the Philosopher's Stone.
One succeeds, or one fails, but one can always hope for success just ahead.
Bottom-up AI is by comparison so unrewarding. "The people want epistemology",
as Drew McDermott once wrote. It's depressing to think that it might take
a century to work up to a human-level AI from the bottom. Ants by 2000,
mice by 2020 doesn't sound like an unrealistic schedule for the medium term,
and it gives an idea of what might be a realistic rate of progress.
I think it's going to be a long haul. But then, so was physics.
So was chemistry. For that matter, so was electrical engineering. We
can but push onward. Maybe someone will find the Philosopher's Stone.
If not, we will get there the hard way. Eventually.
John Nagle
------------------------------
Date: Tue, 23 Aug 88 10:54:36 BST
From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Re: Dual encoding, propositional memory and...
In reply to Pat Hayes' last posting:
>Yes, but much of this debate has been between psychologists, and so has little
>relevance to the issues we are discussing here.
[psychologist's definition of different defined]
>That's not what the AI modeller means by `different', though.
>it isn't at all obvious that different behavior means different
>representations (though it certainly suggests different implementations).
How can we talk about representation and implementation being different
in the human mind? Are the two different in Physics, Physiology,
Neurobiology...? And why should AI and psychology differ here?
Aren't they addressing the same nature?
I'm sorry, but I for one cannot see how these categories from software
design apply to human information processing. Somewhere or other, some
neurotransmitters change, but I can't see how we can talk convincingly
about this physiological implementation having any corresponding
representation except itself.
Representation and implementation concern the design of artefacts, not
the structure of nature. AI systems, as artefacts, must make these
distinctions. But in the debate over forms of human memory, we are
debating nature, not artefact. Category mistake.
>It seems reasonable to conclude that these facts that they
>know are somehow encoded in their heads, ie a change of knowledge-state is a
>change of physical state. Thats all the trickery involved in talking about
>`representation', or being concerned with how knowledge is encoded.
I would call this implementation again (my use of the word 'encoding'
was deliberately 'tongue in cheek' :-u). I do not accept the need for
talk of representation. Surely what we are interested in are good
models for physical neurophysiological processes? Computation may be
such a model, but it must await the data. Again, I am talking about
encoding. Mental representations or models are a cognitive
engineering tool which give us a handle on learning and understanding
problems. They ae a conative convenience, relevant to action in the
world. They are not a scientific tool, relevant to a convincing modelling
of the mental world.
>what alternative account would you suggest for describing, for example,
>whatever it is that we are doing sending these messages to one another?
I wouldn't attempt anything beyond the literary accounts of
psychologists. There is a reasonable body of experimental evidence,
but none of it allows us to postulate anything definite about
computational structures. I can't see how anyone could throw up a
computational structure, given our present knowledge, and hope to be
convincing. Anderson's work is interesting, but he is forced to ignore
arguments for episodic or iconic memory because they suggest nothing
sensible in computational terms which would be consistent with the
evidence for long term memory of a non-semantic, non-propositional form.
Computer modelling is far more totalitarian than literary accounts.
Unreasonable restrictions on intellectual freedom result. Worse still,
far too many cognitive scientists confuse the inner loop detail of
computation with increased accuracy. Detailed inaccuracy is actually
worse than vague inaccuracy.
Sure, computation forces you to answer questions which would otherwise
be left to the future. However, having the barrel of a LISP
interpreter pointing at your head is no greater guarantee of accuracy
than having the barrel of a revolver pointing at your head. Whilst
computationalists boast about their bravado in facing the compiler, I
for one think it a waste of time to be forced to answer unanswerable
questions by an inanimate LISP interpreter. At least human colleagues
have the decency to change the subject :-)
>If people who attack AI or the Computational Paradigm, simultaneously tell me
>that PDP networks are the answer
I don't. I don't believe either the symbolic or the PDP approach. I
have seen successes for both, but am not well enough read on PDP to
know its failings. All the talk of PDP was a little tease, recalling
the symbolic camp's criticism that a PDP network is not a
representation. We certainly cannot imagine what is going on in a
massively parallel network, well not with any accuracy. Despite our
inability to say EXACTLY what is going on inside, we can see that
systems such as WISARD have 'worked' according to their design
criteria. PDP does not accurately model human action, but it gets
some low-level learning done quite well, even on tasks requiring what
AI people call intelligence (e.g. spotting the apple under teddy's bottom).
>Go back and (re)read that old 1969 paper CAREFULLY,
Ah, so that's the secret of hermeneutics ;-]
------------------------------
Date: Tue, 23 Aug 88 11:42:49 bst
From: Ken Johnson <ken%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Reply-to: "Ken Johnson,E32 SB x212E"
<ken%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Re: Can we human beings think two different things in
parallel?
In a previous article, Youngwhan Lee writes:
>Date: Sun, 14 Aug 88 16:54 EDT
>From: Youngwhan Lee <ywlee@p.cs.uiuc.edu>
>To: ailist-request@stripe.sri.com
>Subject: Can we human being think two different things in parallel?
>
>Can we human being think two different things in parallel?
I think most people have had the experience of suddenly gaining insight
into the solution of a problem they last deliberately chewed over a few
hours or days previously. I'd say this was evidence for the brain's
ability to work at two or more (?) high-order tasks at the same time.
But I look forward to reading what Real Psychologists say.
--
------------------------------------------------------------------------------
From: Ken Johnson (Half Man Half Bicycle)
Address: AI Applications Institute, The University, EDINBURGH
Phone: 031-225 4464 ext 212
Email: k.johnson@ed.ac.uk
------------------------------
Date: 23 Aug 88 17:41:12 GMT
From: robinson@pravda.gatech.edu (Steve Robinson)
Reply-to: robinson@pravda.gatech.edu (Steve Robinson)
Subject: Re: Can we human beings think two different things in
parallel?
For those of you following Lee's, Hayes' and Norman's postings on
"parallel thinking" there is a short paper in this year's Cognitive
Science Society's Conference proceedings by Peter Norvig at UC-Berkeley
entitled "Multiple Simultaneous Interpretations of Ambiguous Sentences"
which you may find pertinent. The proceedings are published by LEA.
Since the conference was last week, it may be a while until they are
available elsewhere. I heard Norvig's presentation and found it interesting.
Regards,
Stephen
------------------------------
End of AIList Digest
********************