AIList Digest Friday, 19 Sep 1986 Volume 4 : Issue 186
Today's Topics:
Cognitive Science - Commentaries on the State of AI
----------------------------------------------------------------------
Date: 29 Aug 86 01:58:30 GMT
From: hsgj@tcgould.tn.cornell.edu (Mr. Barbecue)
Subject: Re: Notes on AAAI '86
(not really a followup article, more of a commentary)
I find it very interesting that there is so much excitement generated over
parallel processing computer systems by the AI community. Interesting in
that the problems of AI (the intractability of language, vision, and general
cognition, to name a few) are nowhere near limited by computational
power, but by our lack of understanding. If somebody had managed to create a
truly intelligent system, I think we would have heard about it by now,
even if the program took a month to run. The fact of the matter is that our
knowledge of such problems is minimal. Attempts to solve them lead to
researchers banging their heads against a very hard wall, indeed. So what
is happening? The field that was once A.I. is very quickly heading back to
its origins in computer science and is producing "Expert Systems" in
droves. The problem isn't that they aren't useful, but rather that they
are being touted as the A.I., while true insights into actual human thinking
remain rare (if not non-existent).
Has everybody given up? I doubt it. However, it seems that economic reality
has set in. People are forced to show practical systems with everyday
applications. Financiers can't understand why we would be overjoyed if we
could develop a system that learns like a baby, and so all the money is being
siphoned away into robotics, Expert Systems, and even spelling checkers!
(No, I don't think that welding cars together requires a great deal of true
intelligence, though technically it may be a great feat.)
So what is one to do? Go into cog-psych? At least psychologists are working
on the fundamental problems that AI started, but many seem to be grasping at
straws, trying to find a simple solution (e.g., family resemblance, primary
attribute analysis, etc.).
What seems to be lacking is a cogent combination of theories. Some attempts
have been made, but these authors basically punt on the issue, stating
something like "none of the above theories adequately explains the observed
phenomena; perhaps the solution is a combination of the current hypotheses."
Very good; now let's do that research and see if this is true!
My opinion? Well, some current work has dealt with computer nervous systems
(Science, sometime this summer). This is similar in form to the hypercube
systems, but the theory seems different. Really, the work is toward computer
neurons: distributed systems in which each element contributes a little to
the final result. Signals are not binary but graded; they combine with other
signals from various sources to form an output. Again, this could be done
with a serial machine that holds partial results. But I'm not suggesting that
this alone is a solution; it's just interesting.
without "bringing baby up" so to speak, we won't get much accomplished. The
ultimate system will have to be able to reach out, grasp (whether visually or
physically, or whatever) and sense it's world around it in a rich manner. It
will have to be malleable, but still have certain guidelines built in. It
must truely learn, forming a myriad of connections with past experiences and
thoughts. In sum, it will have to be a living animal (though made of sand..)
Yes, I do think that you need the full range of systems to create a truly
intelligent one. Helen Keller still had touch. She could feel vibrations,
and she could use this information to construct a world that was probably
perceptually much different from ours. But she had true intelligence.
(I realize that the semantics of all these words and phrases are highly
debated; you know what I'm talking about, so don't try to be difficult!) :)
Well, that's enough for a day.
Ted Inoue.
Cornell
--
ARPA: hsgj%vax2.ccs.cornell.edu@cu-arpa.cs.cornell.edu
UUCP: ihnp4!cornell!batcomputer!hsgj BITNET: hsgj@cornella
------------------------------
Date: 1 Sep 86 10:25:25 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.Berkeley.EDU (B.KORT)
Subject: Re: Notes on AAAI '86
I appreciated Ted Inoue's commentary on the State of AI. I especially
agree with his point that a cogent combination of theories is needed.
My own betting card favors Piaget's theories of learning, coupled
with modern animated-graphic mixed-initiative dialogues that bring
the Socratic-style dialectic to inexpensive PCs. See, for instance,
the Mind Mirror by Electronic Arts. It's a flashy example of the clever
integration of Cognitive Psychology, Mixed Initiative Dialogues, Color
Animated Graphics, and the Software/Mindware Exchange. Such illustrations
of the imagery in the Mind's Eye can breathe new life into the relationship
between silicon systems and their carbon-based friends.
Barry Kort
hounx!kort
------------------------------
Date: 4 Sep 86 21:39:37 GMT
From: tektronix!orca!tekecs!mikes@ucbvax.Berkeley.EDU (Michael Sellers)
Subject: transition from AI to Cognitive Science (was: Re: Notes on
AAAI '86)
> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community. Interesting in
> that the problems of AI (the intractability of language, vision, and general
> cognition, to name a few) are nowhere near limited by computational
> power, but by our lack of understanding. [...]
> The field that was once A.I. is very quickly heading back to
> its origins in computer science and is producing "Expert Systems" in
> droves. The problem isn't that they aren't useful, but rather that they
> are being touted as the A.I., while true insights into actual human thinking
> remain rare (if not non-existent).
Inordinate amounts of hype have long been a problem in AI; the only difference
now is that there is actually a small something there (i.e., knowledge-based
systems), so the hype is rising to truly unbelievable heights. I don't know
that AI is returning to its roots in computer science; probably there is just
more emphasis on the area(s) where something actually *works* right now.
> Has everybody given up? I doubt it. However, it seems that economic reality
> has set in. People are forced to show practical systems with everyday
> applications.
Good points. You should check out the book "The AI Business" by ...rats, it
escapes me (possibly Winston or McCarthy?). I think it was published in late
'84 or early '85, and makes the same kinds of points that you're making here,
talking about the hype, the history, and the current state of the art and the
business.
> So what is one to do? Go into cog-psych? At least psychologists are working
> on the fundamental problems that AI started, but many seem to be grasping at
> straws, trying to find a simple solution (e.g., family resemblance, primary
> attribute analysis, etc.).
The Grass is Always Greener. I started out going into neurophysiology, then
switched to cog psych because the neuro research is still at a lower level
than I wanted, and then became disillusioned because all of the psych work
being done seemed to be either super low-level or infeasible to test
empirically. So I started looking into computers, longing to get into the
world of AI. Luckily, I stopped before I got to the point you are at now,
and found something better (no, besides Amway :-)...
> What seems to be lacking is a cogent combination of theories. Some attempts
> have been made, but these authors basically punt on the issue, stating
> something like "none of the above theories adequately explains the observed
> phenomena; perhaps the solution is a combination of the current hypotheses."
> Very good; now let's do that research and see if this is true!
And this is exactly what is happening in the new field of Cognitive Science.
While there is still no "cogent combination of theories", things are beginning
to coalesce. (Pylyshyn described the current state of the field as a physics
searching for its Newton. Everyone agrees that the field needs a Newton to
bring it all together, and everyone thinks that he or she is probably that
person. The problem is, no one else agrees, except maybe one's own grad
students.) Cog sci is still emerging as a separate field, even though its
beginnings can probably be pegged to the late '70s or early '80s. It is
taking material, paradigms, and techniques from AI, neurology, cog psych,
anthropology, linguistics, and several other fields, and forming a new field
dedicated to the study of cognition in general. This does not mean that
cognition should be looked at in a vacuum (as is to some degree the case with
AI), but that it can and should be examined in both natural and artificial
contexts, allowing for the differences between them. It can and should take
into account all types and levels of cognition, from low-level neural
processing to the highly plastic levels of linguistic and social cognitive
interaction, researching and applying these areas in artificial settings
as that becomes feasible.
> [...] My real opinion is that
> without "bringing baby up," so to speak, we won't get much accomplished. The
> ultimate system will have to be able to reach out, grasp (whether visually,
> physically, or however), and sense the world around it in a rich manner. It
> will have to be malleable, yet still have certain guidelines built in. It
> must truly learn, forming a myriad of connections with past experiences and
> thoughts. In sum, it will have to be a living animal (though made of sand...).
This is one possibility, though not the only one. Certainly an artificially
cogitating system without many of the abilities you mention would be different
from us, in that its primary needs (food, shelter, sensory input) would not
be the same. That does not make these things a requirement, however. If we
wished to build an artificial cogitator with roughly the same sort of
world view as ours, then we would probably have to give it some way of
directly interacting with its environment through sensors and
effectors of some sort.
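To make the sensor/effector point concrete, here is a toy sketch in Python.
The World class, its methods, and the thermostat-like rule are all invented
for illustration; this is not a proposal for how such a system would really
be built.

    class World:
        """A trivially simple environment for an equally simple agent."""
        def __init__(self):
            self.temperature = 40.0

        def sense(self):            # "sensor": read the environment
            return self.temperature

        def act(self, heat_delta):  # "effector": change the environment
            self.temperature += heat_delta

    world = World()
    for _ in range(10):
        reading = world.sense()
        # The agent's world view is only as rich as its sensors allow:
        # here it knows its world as a single number.
        world.act(-1.0 if reading > 20.0 else 1.0)
    print(world.sense())            # 30.0 after ten cooling steps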
I suggest that you find and peruse the last 5 or 6 years of the journal
Cognitive Science, put out by the Cognitive Science Society. Most of what
has been written there is still fairly up-to-date, as the field is still
reaching "critical mass" in terms of theoretical quantity and quality. (An
article by Norman, "Twelve Issues for Cognitive Science," from 1980 in this
journal (not sure which issue) discusses many of the things you are talking
about here.)
Let's hear more on this subject!
> Ted Inoue.
> Cornell
--
Mike Sellers
UUCP: {...your spinal column here...}!tektronix!tekecs!mikes
INNING: 1 2 3 4 5 6 7 8 9 TOTAL
IDEALISTS 0 0 0 0 0 0 0 0 0 1
REALISTS 1 1 0 4 3 1 2 0 2 0
------------------------------
Date: 6 Sep 86 19:09:31 GMT
From: craig@think.com (Craig Stanfill)
Subject: Re: transition from AI to Cognitive Science (was: Re: Notes
on AAAI '86)
> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community. Interesting in
> that the problems of AI (the intractability of language, vision, and general
> cognition, to name a few) are nowhere near limited by computational
> power, but by our lack of understanding. [...]
For the last year, I have been working on AI on the Connection Machine,
which is a massively parallel computer. Depending on the application,
the CM is between 100 and 1000 times faster than a Symbolics 36xx. I
have performed some experiments on models of reasoning from memory
(Memory-Based Reasoning; Stanfill and Waltz, TMC Technical Report).
Some of these experiments required 5 hours on a 32K-processor CM. I,
for one, do not consider a 500-5000 hour experiment on a Symbolics a
practical way to work.
More substantially, having a massively parallel machine changes the way
you think about writing programs. When certain operations become 1000
times faster, what you put into the inner loop of a program may change
drastically.
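As a rough illustration of that shift, here is a sketch in Python of the
memory-based flavor of reasoning described above. It is not the actual
Stanfill-Waltz system; the data and the overlap measure are invented. The
point is that the comparison against every stored case becomes one
data-parallel operation over the whole memory rather than a serial inner
loop (NumPy's vectorization stands in for the CM's per-case processors).

    import numpy as np

    # One stored case per row; the features and labels are invented.
    memory_cases = np.array([[1, 0, 1, 1],
                             [0, 1, 1, 0],
                             [1, 1, 0, 1]])
    memory_labels = np.array(["A", "B", "A"])

    def classify(query):
        # Score the query against *all* cases at once; on a massively
        # parallel machine each case would be scored by its own processor.
        matches = (memory_cases == query).sum(axis=1)
        return memory_labels[np.argmax(matches)]

    print(classify(np.array([1, 0, 1, 0])))  # -> A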
------------------------------
Date: 7 Sep 86 16:46:51 GMT
From: clyde!watmath!watnot!watdragon!rggoebel@CAIP.RUTGERS.EDU
(Randy Goebel LPAIG)
Subject: Re: transition from AI to Cognitive Science (was: Re: Notes
on AAAI '86)
Mike Sellers from Tektronix in Wilsonville, Oregon writes:
| Inordinate amounts of hype have long been a problem in AI; the only difference
| now is that there is actually a small something there (i.e., knowledge-based
| systems), so the hype is rising to truly unbelievable heights. I don't know
| that AI is returning to its roots in computer science; probably there is just
| more emphasis on the area(s) where something actually *works* right now.
I would like to remind all who don't know or have forgotten that the notion
of a rational artifact as a digital computer does have its roots in
computing, but the more general notion of an intelligent artifact has
concerned scientists and philosophers for much longer than the lifetime of
the digital computer. John Haugeland's book ``Artificial Intelligence: The
Very Idea'' would be good reading for those who aren't aware that there is
a pre-Dartmouth history of ``AI.''
Randy Goebel
U. of Waterloo
------------------------------
End of AIList Digest
********************