AIList Digest Thursday, 5 Nov 1987 Volume 5 : Issue 260
Today's Topics:
Proposal - National Resource Center for Intelligent Systems Software &
Methodology - Sharing Software,
Comments - The Success of AI & Humanist Models of Mind
----------------------------------------------------------------------
Date: Tue, 3 Nov 87 11:59:50 EST
From: futrell%corwin.ccs.northeastern.edu@RELAY.CS.NET
Subject: National Resource Center for Intelligent Systems Software
I will soon be in Washington talking to the National Science
Foundation about the possibility of setting up a National Resource
Center for Intelligent Systems Software. The center would have as its
goal the timely and efficient distribution of contributed public
domain software in AI, NLP, and related areas. Below I have listed,
very briefly, some of the points that I will be covering. I would
like to hear reactions from all on this.
0. Goals/Philosophy: Distribute software. The motivations are practical
(easier on the original author and requester) and philosophical
(accumulating a base of shared techniques and experience for the field).
1. Scope: Limited in the beginning until acquisition and distribution
experience is built up.
2. Possible Initial Emphasis: Natural language processing, large lexicons,
small exemplary programs/systems for teaching AI.
3. Selection: A limited selection, balancing the importance of contributed
software against its robustness and the quality of its documentation.
4. What to Distribute: Source code plus paper documentation, reprints,
theses related to the software.
5. Mode of Distribution: Small items via an e-mail distribution server;
large items via s-mail (tapes). A sketch of such a request server appears
after this list.
6. Support of Distributed Items: The Center should not offer true
software "support", but it would ensure that the software runs on one
or more systems before distribution (& see next item).
7. User Involvement: Users of the distributed items are a source of both
questions and answers, so the Center would support national mailings and
forums on the nets so that problems could be resolved primarily by users.
If we don't partially shield the developer, important items may never
be contributed.
8. Languages: Common Lisp would be the dominant exchange medium.
With luck, other standards (CLOS, the X Window System) will emerge.
9. Hardware: The Center would maintain or have access to a dozen or so
systems for testing, configuring, and hardcopy (tape) production.
10. Compatibility Problems: The Center would work with developers and users
to deal with the many compatibility issues that will arise.
11. Staff: Two to three full-time equivalents.
12. Management: An advisory board (working via e-mail and phone)?
13. Cost to Users: E-mail free, hardcopy and tapes at near cost.
14. Licensing: A sticky issue. A standard copyright policy could be
instituted. Avoid software with highly restrictive licensing.
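To make item 5 concrete, below is a minimal sketch of what such an e-mail
request server might look like, written in present-day Python for brevity.
Everything in it is an assumption made for illustration only: the request
format (a body line reading "GET <package>"), the archive layout (shar
files under one directory), and the server address are not part of this
proposal.

    #!/usr/bin/env python3
    # Minimal e-mail distribution server sketch (illustrative only).
    # Intended to run from a mailer alias, e.g. in /etc/aliases:
    #     ai-archive: "|/usr/local/bin/archive-server"
    # The request format (a body line "GET <package>") and the archive
    # layout (shar files in one directory) are assumptions of this sketch.
    import sys
    import smtplib
    from email import message_from_file
    from email.message import EmailMessage
    from pathlib import Path

    ARCHIVE = Path("/usr/local/ai-archive")   # hypothetical archive root
    SERVER = "ai-archive@example.edu"         # hypothetical reply address

    def handle_request(msg):
        reply = EmailMessage()
        reply["To"] = msg["From"]
        reply["From"] = SERVER
        body = msg.get_payload()              # assume a plain-text request
        for line in body.splitlines():
            words = line.split()
            if len(words) == 2 and words[0].upper() == "GET":
                pkg = ARCHIVE / (words[1] + ".shar")
                if pkg.is_file():
                    reply["Subject"] = "package: " + words[1]
                    reply.set_content(pkg.read_text())
                else:
                    reply["Subject"] = "no such package: " + words[1]
                    reply.set_content("Known packages:\n" + "\n".join(
                        p.stem for p in ARCHIVE.glob("*.shar")))
                break
        else:                                 # no GET line: send the index
            reply["Subject"] = "archive index"
            reply.set_content("Request with a body line 'GET <package>'.\n"
                              + "\n".join(p.stem for p in ARCHIVE.glob("*.shar")))
        with smtplib.SMTP("localhost") as s:  # assumes a local mailer
            s.send_message(reply)

    if __name__ == "__main__":
        handle_request(message_from_file(sys.stdin))

The mailer pipes each incoming request message into the script; anything too
large for e-mail would be refused and steered to tape distribution, which is
the small/large split item 5 draws.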
Where this is coming from:
Our college is rather new but has 30 faculty and a fair amount of
equipment, mostly Unix. We have a PhD program and a large number of
MS and undergrad students. I am involved in a major project to parse
whole documents to build knowledge bases. My focus is on parsing
diagrammatic material, something that has received little attention.
I teach grad courses on Intro to AI, AI methods, Vision, and Lisp.
I am very familiar with the National Science Foundation, its goals,
and its policies.
You can reach me directly at:
Prof. Robert P. Futrelle
College of Computer Science 161CN
Northeastern University
360 Huntington Ave.
Boston, MA 02115
(617)-437-2076
CSNet: futrelle@corwin.ccs.northeastern.edu
------------------------------
Date: Tuesday, 3 November 1987, 21:23-EST
From: Nick Papadakis <@EDDIE.MIT.EDU:nick@MC.LCS.MIT.EDU>
Subject: Re: Lenat's AM program [AIList V5 #257 - Methodology]
In article <774> tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich) writes:
>>In the biological sciences, publication of an article reporting a new
>>clone obligates the author to provide that clone to other researchers
>>for non-commercial purposes. I think we need a similar policy in
>>computer science.
Shane Bruce <bruce@vanhalen.rutgers.edu> replies:
>The policy which you are advocating, while admirable, is not practical. No
>corporation which is involved in state of the art AI research is going to
>allow listings of their next product/internal tool to be made available to the
>general scientific community, even on a non-disclosure basis. Why should
>they give away what they intend to sell?
This is precisely why corporations involved in state of the art AI
research (and any other form of research) will find it difficult to make
major advances. New ideas thrive in an environment of openness and free
interchange.
- nick
------------------------------
Date: 30 Oct 87 19:45:09 GMT
From: gatech!udel!sramacha@bloom-beacon.mit.edu (Satish Ramachandran)
Subject: Re: The Success of AI (continued, a
In article <8300008@osiris.cso.uiuc.edu> goldfain@osiris.cso.uiuc.edu writes:
>
>Who says that ping-pong, or table tennis isn't a sport? Ever been to China?
Rightly put! Ping-pong may not be a spectator sport in the West, and hence
it may be suspected of being a 'sport' where little skill is involved.
But if you read about it, you will find that the psychological aspect
of the game is far more intense than in, say, baseball or golf!
Games go to 21 points, and points are over very quickly (often on the
serves themselves!).
Granting the intense psychological factors involved in playing ping-pong
(as in many other games), would it be easier to make a machine play a game
where it has plenty of *real time* to decide its next move, as opposed to
a game where decisions must be made relatively quickly?
Satish
P.S. Btw, ping-pong is also a popular sport in Japan, India, England,
Sweden and France.
------------------------------
Date: 31 Oct 87 17:16:06 GMT
From: trwrb!cadovax!gryphon!tsmith@ucbvax.Berkeley.EDU (Tim Smith)
Subject: Re: The Success(?) of AI
In article <171@starfire.UUCP> merlyn@starfire.UUCP (Brian Westley) writes:
+=====
| ...I am not interested in duplicating or otherwise developing
| models of how humans think; I am interested in building machines that
| think. You may as well tell a submarine designer how difficult it is
| to build artificial gills - it's irrelevant.
+=====
The point at issue is whether anyone understands enough about
"thinking" to go out and build a machine that can do it. My claim (I
was the one who started this thread) was that we do not. The common
train of thought of the typical AI person seems to be:
(1) The "cognitive" people have surely figured out by now what
thinking is all about.
(2) But I can't be bothered to read any of their stuff, because they
are not programmers, and they don't know how computers work.
Actually, the "cognitive" people haven't figured out what thinking is
at all. They haven't a clue. Of course they wouldn't admit that in
print, but you can determine that for yourself after only a few
months of intensive reading in those fields.
Now there's nothing wrong with naive optimism. There are many cases
in the history of science where younger people with fresh ideas have
succeeded where traditional methods have failed. In the early days of
AI, this optimism prevailed. The computer was a new tool (a fresh
idea) that would conquer traditional fields. But it hasn't. The naive
optimism continues, however, for technological reasons. Computers keep
improving, and many people seem to believe that once we have
massively parallel architectures, or connection machines, or
computers based on neural nets, then, finally, we will be able to
build a machine that thinks.
BS! The point is that no one (NO ONE) knows enough about thinking to
design a machine that thinks.
Look, I am not claiming that AI should come to a grinding halt. All I
am pleading for is some recognition from AI people that the
top-level problems they are addressing are VERY complicated, and are
not going to be solved in the near future by programming. I have seen
very little of this kind of awareness in the AI community. What I
see instead is a lot of whining to the effect that once a problem is
"solved", it is removed from the realm of thinking (chess, compilers,
and medical diagnosis are the usual examples given).
Now if you believe that playing chess is like thinking, you haven't
thought very much about either of these things. And if you believe
that computers can diagnose diseases you are certainly not a
physician. (Please, no flames about MYCIN, CADUCEUS, and their
offspring--I know about these systems. They can be very helpful tools
for a physician, just as a word processor is a helpful tool for a
writer. But these systems do not diagnose diseases. I have worked in
a hospital--it's instructive. Spend some time there as an observer!)
I don't remember any of the pioneer artificial intelligentsia
(Newell, Simon, Minsky, etc.) ever claiming that compilers were
artificial intelligence (they set their sights much higher than
that).
I am not trying to knock the very real advances that AI work has made
in expert systems, in advanced program development systems, and in
opening up new research topics. I just get so damn frustrated when I
see people continually making the assumption that thinking, using
language, composing music, treating the sick, and other basic human
activities are fairly trivial subjects that will soon be accomplished
by computers. WAKE UP! Go out and read some psychology, philosophy,
linguistics. Learn something about these things that you believe are
so trivial to program. It will be a humbling, but ultimately
rewarding, experience.
--
Tim Smith
INTERNET: tsmith@gryphon.CTS.COM
UUCP: {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP: {philabs, trwrb}!cadovax!gryphon!tsmith
------------------------------
Date: 3 Nov 87 00:19:57 GMT
From: PT.CS.CMU.EDU!SPEECH2.CS.CMU.EDU!yamauchi@cs.rochester.edu
(Brian Yamauchi)
Subject: Re: The Success of AI
In article <137@glenlivet.hci.hw.ac.uk>, gilbert@hci.hw.ac.uk
(Gilbert Cockton) writes:
> This work is inherently superior to most work in AI because none of the
> writers are encumbered by the need to produce computational models.
> They are thus free to draw on richer theoretical orientations which
> draw on concepts which are clearly motivated by everyday observations
> of human activity. The work therefore results in images of man which
> are far more humanist than mechanical computational models.
I think most AI researchers would agree that the human mind is more than a
simple production system or back-propagation network, but the more basic
question is whether or not it is possible for human beings to understand
human intelligence. If the answer is no, then not only cognitive
psychologists, but all psychologists will be doomed to failure. If the
answer is yes, then it should be possible to build a system that uses
that knowledge to implement human-like intelligence. The architecture of this
system may be totally unlike today's computers, but it would be man-made
("Artificial") and would possess human-like intelligence.
This may require some completely different model than those currently
popular in cognitive science, and it would have to account for
"non-computational" human behavior (emotions, creativity, etc.), but as long
as it was well-defined, it should be possible to implement the model in some
system.
I suppose one could argue that it will never be possible to perfectly
understand human behavior, so it will never be possible to make an AI which
perfectly duplicates human intelligence. But even if this were true, it
would be possible to duplicate human intelligence to the degree that it was
possible to understand human behavior.
> Furthermore, the common test of any
> concept of mind is "can you really imagine your mind working this way?"
This is a generally useful, if not always accurate, rule of thumb. (It is
also why I cannot see how anyone ever took Freudian psychology seriously.)
Information-processing models (symbol-processing for the higher levels,
connectionist for the lower levels) seem more plausible to me than any
alternatives, but they certainly are not complete and to the best of my
knowledge, they do not attempt to model the non-computational areas. It
would be interesting to see the principles of cognitive science applied to
areas such as personality and creativity. At least, it would be interesting
to see a new perspective on areas usually left to non-cognitive
psychologists.
> Many of the pillars of human societies, like the freedom and dignity of
> democracy and moral values, are at odds with the so-called 'Scientific'
> models of human behaviour; indeed the work of misanthropes like Skinner
> actively promotes the connection between impoverished models of man and
> immoral totalitarian societies (B.F. Skinner, Beyond Freedom and Dignity).
True, it is possible to promote totalitarianism based on behaviorist
psychology (e.g., Skinner) or mechanistic sociology (e.g., Marx), both of
which discard the importance of the individual. On the other hand, simply
understanding human intelligence does not reduce its importance -- an
intelligence that understands itself is at least as valuable as one that
does not.
Furthermore, totalitarian and collectivist states are often promoted on the
basis of so-called "humanistic" rationales -- especially for socialist and
communist states (right-wing dictatorships seem to prefer nationalistic
rationales). The fact that such offensive regimes use these justifications
does not discredit either science or the humanities.
______________________________________________________________________________
Brian Yamauchi INTERNET: yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________
------------------------------
Date: 4 Nov 87 22:01:03 GMT
From: topaz.rutgers.edu!josh@rutgers.edu (J Storrs Hall)
Subject: Re: The Success of AI
Brian Yamauchi:
... the more basic
question is whether or not it is possible for human beings to understand
human intelligence. If the answer is no, then not only cognitive
psychologists, but all psychologists will be doomed to failure.
Actually, it is probably possible to build a system that is more
complex than any one person can really "understand". This seems to be
true of a lot of the systems (legal, economic, etc.) at large in the
world today. The system is made up of the people each of whom
understands part of it. It is conjectured by Minsky that the mind is
a similar system. Thus it may be that AI is possible where psychology
is not (in the same sense that economics is impossible).
--JoSH
------------------------------
Date: 3 Nov 87 12:06 PST
From: hayes.pa@Xerox.COM
Subject: Humanist Models of Mind
Gilbert Cockton makes a serious mistake in lumping AI models together
with all other `mechanical' or `scientific' models of mind on the wrong
side of C. P. Snow's cultural fence:
>In short, mechanical concepts of mind and the values of a civilised
>society are at odds with each other. It is for this reason that modes
>of representation such as the novel, poetry, sculpture and fine art
>will continue to dominate the most comprehensive accounts of the
>human condition.
The most exciting thing about computational models of the mind is
exactly that they, alone among the models of the mind we have, ARE
consistent with humanist values while being firmly in contact with
results of the hardest of sciences.
Cockton is right to be depressed by many of the scientific views of man
that have appeared recently. We have fallen from the privileged bearers
of divine knowledge to the lowly status of naked apes, driven by
primitive urges; or even to mere vehicles used by selfish genes to
reproduce themselves. Superficial analogies between brains and machines
make people into blind bundles of mechanical links between inputs and
outputs, suitable inhabitants for Skinner's New Walden, of whose minds -
if they have any - we are not permitted to speak. Physicists often
assume that people, like everything else, are physical machines governed
by physical laws, whose behavior must therefore be describable in
physical terms: more, that this is a scientific truth, beyond rational
dispute. None of these pictures of human nature has any place for
thought, for language, culture, mutual awareness and human
relationships. Many scientists have given up and decided that the most
uniquely human attributes have no place in the world given us by
biology, physics and engineering.
But the computational approach to modelling mind gives a central place
to symbolic structures, to languages and representations. While firmly
rooted in the hard sciences, this model of the mind naturally
encompasses views of perception and thought which assume that they
involve metaphors, analogies, inferences and images. It deals right at
its center with questions of communication and miscommunication. I can
certainly imagine my mind (and Gilbert's) working this way: I consist
of symbols, interacting with one another in a rich dynamic web of
inference, perceptual encoding and linguistic inputs (and other
interactions, such as with emotional states). This is a view of man
which does NOT reduce us to a meaningless machine, one which places us
naturally in a society of peers with whom we communicate.
Evolutionary biology can account for the formation of early human
societies in very general terms, but it has no explanation for human
culture and art. But computer modellers are not surprised by the
Lascaux cave paintings, or the universal use of music, ritual and
language. People are organic machines; but if we also say that they
are machines which work by storing and using symbolic structures, then
we expect them to create representations and attribute meaning to
objects in their world.
I feel strongly about this because I believe that we have here, at last,
a way - in principle - to bridge the gap between science and humanity.
Of course, we haven't done it yet, and to call a simple program
`intelligent' doesn't help to keep things clear, but Cockton's pessimism
should not be allowed to cloud our vision.
Pat Hayes
------------------------------
End of AIList Digest
********************