AIList Digest Monday, 18 Apr 1988 Volume 6 : Issue 71
Today's Topics:
Administrivia - New AIList Moderator(s) Needed,
Opinion - AI Goals
----------------------------------------------------------------------
Date: Fri 26 Feb 88 09:54:36-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: New AIList Moderator(s) Needed
It seems that I will be taking leave of absence from SRI to
work at NSF for a couple of years, starting this July. (I
will report more details later.) AIList will thus need a
new moderator by mid June. Any volunteers?
I suspect that AIList has become too much for an inexperienced
moderator to handle alone. About half my effort has been spent
editing seminar and conference announcements, so I suggest
dropping those or spinning them off to another list. (Usenet
has a widely read conference list; perhaps it should be used
for AI notices also. I have had positive feedback about the
seminar notices, but they seem out of place in a discussion
list -- and provide little that one cannot get by scanning the
latest conference proceedings.)
Separate lists might also be created to handle AI-related
hardware/software queries and discussions, expert systems,
AI in business and engineering, logics, commonsense
reasoning, philosophy, psychology, cognitive science, etc.
We should try to follow the Usenet list structure where
possible, but the most important ingredient is an enthusiastic
moderator or administrator.
Setting up a discussion list is not terribly difficult. All
you need is the ability to remail messages to a bunch of people,
preferably in batch form to reduce the number of bounce messages
from broken connections. I can help with such mechanics as
determining return paths for messages having nonstandard header
syntax. The postmasters on the net have always been very
helpful, particularly Erik Fair at UCBVAX.
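Laws's batching suggestion can be made concrete. The following is a hypothetical sketch in modern Python, not anything the 1988 list actually ran; the addresses, host name, and batch size are invented for illustration.

```python
# Hypothetical sketch of batch remailing for a digest list: send one
# message per batch of recipients rather than one per subscriber,
# reducing the volume of bounce traffic from broken connections.
# All addresses and the SMTP host are made up for this example.
import smtplib
from email.message import EmailMessage

SUBSCRIBERS = ["a@example.org", "b@example.org", "c@example.org",
               "d@example.org", "e@example.org"]
BATCH_SIZE = 2

def batches(addrs, size):
    """Yield successive batches of addresses."""
    for i in range(0, len(addrs), size):
        yield addrs[i:i + size]

def send_digest(body, subject, dry_run=True):
    """Build one message per batch; actually send only if dry_run=False."""
    sent = []
    for batch in batches(SUBSCRIBERS, BATCH_SIZE):
        msg = EmailMessage()
        msg["From"] = "ailist-request@example.org"
        msg["To"] = "ailist@example.org"   # list address in the header
        msg["Bcc"] = ", ".join(batch)      # actual recipients, batched
        msg["Subject"] = subject
        msg.set_content(body)
        if not dry_run:
            with smtplib.SMTP("localhost") as s:
                s.send_message(msg)
        sent.append(batch)
    return sent

batches_sent = send_digest("AIList Digest ...", "AIList Digest V6 #71")
print(len(batches_sent))  # → 3 (five subscribers in batches of two)
```

With five subscribers and a batch size of two, three messages go out instead of five; a bounce from one batch identifies at most two broken addresses.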
If no moderator is found, AIList will continue on the
unmoderated Usenet comp.ai stream. There are advantages to
this format, including fast turnaround and ease of saving or
replying to individual messages. (Disadvantages include lack
of thematic grouping and of editorial screening.) Someone
on the Arpanet (or other network?) could redistribute the
messages much as I have done. Submissions could be sent
directly to the comp.ai gateway, or to the Arpanet redistributor
if the gateway manager preferred it so. I am not sure whether
gatewaying to BITNET must go through the Arpanet, but something
can be worked out.
I have enjoyed being the moderator, and will continue to
participate in discussions. Thanks to all of you for making
this effort such a success.
-- Ken Laws
------------------------------
Date: Wed, 13 Apr 88 13:28:23 -0400
From: koomen@cs.rochester.edu
Subject: Re: Review - Spang Robinson Report, V4 N2 [AIList V6 #68]
> Canadian Pacific developed an expert system to analyze oil samples
> from a diesel locomotive. [...] A mechanic decided to disregard the
> recommendations of the system, causing a $250,000 failure.
A dangerous bias. What about the times that the disregard was correct,
or the acceptance incorrect? If we do not allow for mistakes on the
part of the human operator, the "recommendations" will no longer be
recommendations but injunctions. And then where does that leave us?
Without further particulars about the case, the following comment would
have done equally well: "However, the system explained its reasons for a
recommendation so poorly that a mechanic decided to disregard it,
causing a $250,000 failure."
-- Hans
EMail: Koomen@CS.Rochester.Edu Paper: Johannes A. G. M. Koomen
Dept. of Computer Science
Phone: (716) 275-9499 [work] University of Rochester
(716) 442-4836 [home] Rochester, NY 14627
------------------------------
Date: Wed, 13 Apr 88 09:30 EST
From: INS_ATGE%JHUVMS.BITNET@CUNYVM.CUNY.EDU
Subject: AI -- Will we be "programming"?
Something which has recently struck me is how little programming
is actually done in neural networks.
An experiment was recently done by Dr. Sejnowski of JHU regarding the
interneurons or "hidden units" that develop in a neural network
trained to recognize concave vs. convex features on sight.
Now although I'm sure the researchers had some guesses as to the
"receptive" and "projective" areas of the hidden units, they never
"programmed" them. The neural network was trained (using backpropagation,
or possibly an algorithm that avoids local minima), and the hidden units
ended up looking like neurons found in the cat visual pathway which
correspond to a subclass of what were originally thought to be
edge-detection neurons. (Note: the concave vs. convex neurons are a subset
of the so-called "complex" cells thought to be edge detectors; not all the
data on complex cells fits the concave vs. convex hidden units, so it
appears they are a subclass -- particularly, the subclass which does not
respond well in the center of the field.)
In other words, although the experimenters tried to create something,
they did not "program" the entire system--it organized -itself-.
Not only that, but it added a new hypothesis to the area of neuroscience.
Thus this new area of science is labeled "Computational Neuroscience."
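To make Edwards's point concrete, here is a minimal sketch of hidden units self-organizing under backpropagation. It is a toy (a two-layer network learning XOR with NumPy), not Sejnowski's actual network, task, or data; the layer sizes, learning rate, and iteration count are arbitrary choices for illustration.

```python
# Toy illustration of backpropagation: nothing tells the hidden units
# what to detect -- their roles emerge from training alone.
# This is NOT Sejnowski's setup; all parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a task that is unlearnable without hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights: the hidden layer is not "programmed".
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def loss():
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return float(np.mean((out - y) ** 2))

initial = loss()
lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden-unit activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: propagate the output error back through the net.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates (in place).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final = loss()
print(f"mean squared error: {initial:.3f} -> {final:.3f}")
```

After training, each hidden unit has developed a "receptive field" (a row of W1) that no one specified in advance, which is the sense in which the system organized itself.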
-Thomas Edwards
------------------------------
Date: Wed, 13 Apr 88 15:36 CDT
From: SHERRARD%CODSD1%eg.ti.com@RELAY.CS.NET
Subject: Silly Discussion
It seems to me silly to discuss things like "when a computer
passes the Turing test it will be intelligent." Intelligence is not
a binary, have-it-or-you-don't thing. It is all a matter of degree.
I agree with Caroline Knight on her comment about a machine
having the human qualities of arrogance, ignorance and the like. For
the rest of you (Rob Wald), who believe a 'test' is required to settle
whether a machine has intelligence or not, I suggest you ask
a third grader if it takes any intelligence to multiply and divide.
If you look at intelligence the way I do you will see how
far we have come in creating machine intelligence.
-jeff sherrard
------------------------------
Date: 14 Apr 88 14:03:10 GMT
From: eniac.seas.upenn.edu!lloyd@super.upenn.edu (Lloyd Greenwald)
Subject: Re: Free Will
In Article 1646 channic@uiucdcsm.cs.uiuc.edu writes:
> I think AI by and large ignores
>the issue of free will as well as other long-standing philosophical problems
I don't want to start a philosophical argument in this newsgroup, but
I would like to know why AI should be concerned with the issue of
"free will" considering that it is still an open question. It may
turn out that the complexities involved in what we call "free will"
are similar to that of natural language understanding or other AI
areas, which would reduce the problem to one of design. Admittedly,
this may never be accomplished; I just don't think we should draw
such a strong conclusion at this point.
====> Hi Elaine
Lloyd Greenwald
lloyd@eniac.seas.upenn.edu
------------------------------
Date: 13 Apr 88 03:20:00 GMT
From: channic@m.cs.uiuc.edu
Subject: Re: The future of AI - my opinion
In article <1348@hubcap.UUCP>, mrspock@hubcap.UUCP (Steve Benz) writes:
> In fact, I agree with it. I think that in order for a machine to be
>convincing as a human, it would need to have the bad qualities of a human
>as well as the good ones, i.e. it would have to be occasionally stupid,
>arrogant, ignorant, etc.&soforth.
>
> So, who needs that? Who is going to sit down and (intentionally)
>write a program that has the capacity to be stupid, arrogant, or ignorant?
Another way of expressing the apparent necessity for bad qualities "for a
machine to be convincing as a human" is to say that free will is fundamental
to human intelligence. I believe this is why the reaction to any "breakthrough"
in intelligent machine behavior is always "but it's not REALLY intelligent,
it was just programmed to do that." Choosing among alternative problem
solutions is an entirely different matter than justifying or explaining
an apparently intelligent solution. In complex problems of politics, economics,
computer science, and I would even venture to say physics, there are no right
or wrong answers, only opinions (which are choices), which are judged
on the basis of creativity and how much they agree with the choices of those
considered expert in the field. I think AI by and large ignores
the issue of free will as well as other long-standing philosophical problems
(such as the mind/brain problem) which lie at the crux of developing machine
intelligence. Of course there is not much grant money available for addressing
old philosophy. This view is jaded, I admit, but five years of experience in
the field has led me to believe that AI is not the endeavor to make machines
that think, but rather the endeavor to make people think that machines can
think.
tom channic
uiucdcs.uiuc.dcs.edu
{ihnp4|decvax}!pur-ee!uiucdcs!channic
------------------------------
Date: Fri Apr 15 16:15:49 EDT 1988
From: sas@BBN.COM
Subject: AIList V6 #67 - Future of AI
- Watch those Pony Express arguments. Remember, the Pony Express was
a temporary hack. It ran for a bit under two years before it was
replaced by the transcontinental railroad.
- If you want a humorous account of the problems of really, really
understanding things I'll recommend Morris Zap's Semantics as Strip
Tease talk in David Lodge's novel Small World. Granted, he was
talking about the problems with deconstruction, but it's marvelously
applicable to AI.
- Phrenology was largely considered hokum in the last century, but
craniometry was highly regarded. Check out Gould's _The Mismeasure of
Man_.
- I think AI has already proven its worth by attacking and to some
extent solving certain classes of problems. Don't expect the
solutions to look as magical as the problems. Chess programs,
symbolic math programs, expert systems, robotics and the like are all
real technologies now. Materials science may not have all the glamor
of QCD but it's a pretty exciting field none the less.
- I finally figured out what bothers me about nanotechnology. In a
sense it is irrelevant. If there is a way to make machines that can
translate Dickens into Turkish it doesn't really matter if they are as
big as a carwash, twice as ugly, and are limited by the speed of soapy
water in a vacuum. Once we know how to translate, we can always make
the machine smaller and faster. Nanotechnology seems to ignore the
hard part of the problem in favor of the easy part.
Seth
---- What do they call these things down at the bottom anyway?
Letterfeet? ----
------------------------------
Date: 16 Apr 88 16:49:46 GMT
From: uflorida!codas!novavax!maddoxt@gatech.edu (Thomas Maddox)
Subject: Re: Simulated Intelligence
In article <2051@mind.UUCP> eliot@mind.UUCP (Eliot Handelman) writes:
>Intelligence draws upon the resources of what Dostoevsky, in the "Notes from
>Underground", called the "advantageous advantage" of the individual who found
>his life circumscribed by "logarithms", or some form of computational
>determinism: the ability to veto reason. My present inclination is to believe
>that AI, in the long run, may only be a test for underlying mechanical
>constraints of theories of intelligence, and therefore inapplicable
>to the simulation of human intelligence.
If it's AI, it will incorporate irrationality. As you, after
Dostoevsky, imply, intelligence is a superset of reason. Think
of the human organism as a bag of perceptions and hormonal
interactions with the mind as the way station for the whole
exceedingly tangled perceptual/emotional/intellectual circus.
At present, we have only begun to understand the complexities
of the brain's neurotransmitter interactions, so we are only beginning
to know the brain, but we have already grasped that
the mind's complexity far exceeds earlier estimates.
If you're interested in seeing my best shot at portraying
these ideas, look for a few sf stories: "The Mind like a Strange
Balloon" in the April, 1985 _Omni_, "Snake Eyes" in the April, 1986
_Omni_ (and in the _Mirrorshades_ anthology, coming out in paperback
almost instantly), and "The Robot and the One You Love" in the March,
1988 _Omni_. They are my attempts at thinking through these
problems.
------------------------------
Date: Wed, 13 Apr 88 09:17:53 EDT
From: prem@research.att.com
Subject: Prof. McCarthy's retort
This is a very cute and compact retort, but not very convincing; it admits
of very many similar cute and compact retorts, one of which is given below
as an example :
"Why would I want to write a program in assembly language that figured out
how to stack colored blocks on a table, and very very slowly at that ?"
or,
Prem Devanbu
(A diehard lisp fan who would like to see a better argument for lisp,
even if it is less cute or compact)
------------------------------
Date: 16 Apr 88 16:33:18 GMT
From: steinmetz!ge-dab!codas!novavax!maddoxt@uunet.uu.net (Thomas
Maddox)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]
In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:
>I think AI can be summed up by Terry Winograd's defection. His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores).
>
Using this same reasoning, one might give up quantum
mechanics because of Einstein's "defection." Whether a particular
researcher continues his research is an interesting historical
question (and indeed many physicists lamented the loss of Einstein),
but it does not call into question the research program itself, which
must stand or fall on its own merits.
AI will continue to produce results and remain a viable
enterprise, or it won't and will degenerate. However, so long as it
continues to feed powerful ideas and techniques into the various
fields it connects with, to dismiss it seems remarkably premature. If
you are one of the pro- or anti-AI heavyweights, i.e., someone with
power, prestige, or money riding on society's evaluation of AI
research, then you join the polemic with all guns firing.
The rest of us can continue to enjoy both the practical and
intellectual fruits of the research and the debate.
------------------------------
End of AIList Digest
********************