Neuron Digest Friday, 16 Feb 1990 Volume 6 : Issue 13
Today's Topics:
Bibliography: Neural networks and defense applications
Summary of Current Understanding of NN Capabilities
Searle and simulation
Errata
Study of the Neural Network Research Community
Sci. Am. Debate - Can Programs Think
Reviewers for Special Issue of Connection Science
TR Available - Connectionist Language Users
Wang Conference ATR Call for Papers
job announcement, please post
New ARTIFICIAL LIFE mailing list
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).
------------------------------------------------------------
Subject: Bibliography: Neural networks and defense applications
From: gluck@psych.Stanford.EDU (Mark Gluck)
Date: Mon, 12 Feb 90 10:48:50 -0800
For the compilation of a bibliography on DEFENSE APPLICATIONS of neural
networks, lists of relevant published papers, conference papers, and
unpublished tech-reports are requested.
Please send full reference information (including address to write-to, if
reports are unpublished) to gluck@psych.stanford.edu. REFER format is
preferable (i.e., %A..%D..%T etc.), but any format with separate fields
for each component of the reference is fine.
TO RECEIVE A COPY:
------------------
We would be happy to redistribute hardcopy versions of this compilation
to anyone who requests it. Please send your *US MAIL ONLY* address to
the above netmail address or to:
Dr. Mark A. Gluck
Jordan Hall; Bldg. 420
Stanford University
Stanford, CA 94305-2130
------------------------------
Subject: Summary of Current Understanding of NN Capabilities
From: Paul McKeever <mckeever@cogsci.uwo.ca>
Date: Mon, 12 Feb 90 13:51:22 -0500
I am currently looking for a recent summary of 'what' neural nets are
capable of computing. I am not particularly interested in their known
applications (e.g., speech recognition) so much as their abilities to map
arrays (binary or real) of type X onto arrays of type Y, in temporally
dependent and independent ways. Does anyone know of one or more recent
publications of this sort?
Many "thank-you"s.
-P.X.M.
[[ Editor's note: Hmmm, good question. I've not seen the literature for
a while and so cannot help here. Readers? I assume more than just
Hopfield architectures have been mathematically characterised. Any
survey articles? -PM ]]
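[[ Editor's note, continued: To make the question concrete, here is a
minimal sketch, in Python with numpy (my choice of language; nothing
below comes from the poster), of the two kinds of mapping asked about.
A feedforward net computes a fixed, temporally independent map from one
real array to another; adding a recurrent connection on the hidden
layer makes the output depend on the whole input sequence. All sizes
and weights are arbitrary illustrations, not a characterisation of what
such nets can or cannot compute.

    import numpy as np

    rng = np.random.default_rng(0)

    # A two-layer feedforward net: a static map from R^4 to R^2.
    W1 = rng.normal(size=(8, 4))   # input  -> hidden
    W2 = rng.normal(size=(2, 8))   # hidden -> output

    def feedforward(x):
        # The output depends only on the current input array.
        return np.tanh(W2 @ np.tanh(W1 @ x))

    # A recurrent variant: the hidden state carries history, so the
    # output depends on the sequence of inputs, not just the last one.
    Wr = rng.normal(size=(8, 8)) * 0.1

    def recurrent(xs):
        h = np.zeros(8)
        for x in xs:
            h = np.tanh(W1 @ x + Wr @ h)
        return np.tanh(W2 @ h)

    x = rng.normal(size=4)
    print(feedforward(x))        # temporally independent mapping
    print(recurrent([x, x, x]))  # temporally dependent mapping

-PM ]]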
------------------------------
Subject: Searle and simulation
From: Seng-Beng Ho <hosb@nusdiscs.bitnet>
Date: Tue, 13 Feb 90 09:58:00 -0800
I read John Searle's recent article in Scientific American to see
whether the most recent incarnation of his argument could change my
mind about the possibility of thinking machines. It has not. His idea
that one should not take simulation for the real thing is fine, because
in simulation some details are ignored. ("After all, the whole point of
models is that they contain only certain features of the modeled domain
and leave out the rest" - Searle, Scientific American, January, 1990.)
But in simulation we try to capture the essence of the real thing, and
then we go ahead and BUILD the real thing. We do that a lot in, for
example, civil engineering, and successfully too. Of course, sometimes
we cannot capture enough in our simulation to make it work, as in the
case of weather prediction, but who is to say at this stage that simulating
and building intelligent machines is more like simulating and predicting
weather and less like simulating and building bridges?
One may insist that there is still a difference between simulating and
building bridges on the one hand and simulating and building intelligent
machines on a computer on the other because in the former you simulate on
a computer and then you move on to building the bridges with other
substances (i.e., the final product is made of a different "stuff" from
the simulation) while AI people seem to be simulating on a computer and
finally hoping to have a detailed enough simulation on the computer (von
Neumann or neural) so that the computer becomes the thinking machine itself.
(Hence Searle's contention, at some point in the article, that no matter
how detailed the simulation, it is still not the real thing. His article
is a bit confusing, because at some points he says a simulation is never
complete, while at others he says it can be complete but is still not
the real thing.)
However, since thought is a process, not a thing, there is no reason why
it cannot reside on a substrate other than neural tissue.
On this point Searle agrees: "This does not imply that only a biological system
could think, but it does imply that any alternative system, whether made
of silicon, beer cans or whatever, would have to have the relevant
causal capacities equivalent to those of brains." However, he went on to
conclude, "Any artifact that produced mental phenomena, any artificial
brain, would have to be able to duplicate the specific causal powers
of brains, and it could not do that JUST BY RUNNING A COMPUTER PROGRAM
(emphasis mine)." Now it is a puzzle to me WHY he said that, since computers
can NOT ONLY SIMULATE any causal processes BUT ALSO INSTANTIATE and
IMPLEMENT them - a computer in a fighter aircraft's cockpit is not there
to simulate an air combat (even though it can be made to do so), it
participates in it, and CAN BE BLOWN INTO PIECES along with the aircraft
and the pilot, or, if it is wise enough, bring the pilot and aircraft
happily home after a combat.
When a simulation becomes detailed enough, or as detailed as the "real"
thing, then the simulation and the real thing are indistinguishable.
Surely if you jump into a pool filled with Ping-Pong-ball models of
water molecules you are not expected to get wet, but if we add to the
Ping-Pong-ball models all the necessary interatomic forces, and we
ourselves are made up of the same models, we WILL get wet when we jump
into the pool of Ping-Pong balls!
One can imagine a parallel universe that has the same number and types
of particles, and the same initial conditions, as ours; each universe
would then be simulating the other. (Would some quantum effects
preclude them from evolving along exactly the same path? I think that
is irrelevant to the present argument.) Further, assume that one universe runs on a
faster clock so that the slower one can observe the faster one and see
its own future so that it can benefit from the other being a "simulation"
of itself. But who is to say which universe is the simulated and which
is the simulation? This calls to mind Douglas Adams' "Hitchhiker's Guide
to the Galaxy" where Earth and all its inhabitants were made to compute
the answer to the meaning of life. You may think you are the real thing
but you don't know when you have become somebody else's simulation!
-- Seng-Beng Ho (hosb@nusdiscs.bitnet)
> Ho Seng Beng (Jnet%"HOSB@NUSDISCS")
> National University of Singapore
> Department of Information Systems and Computer Science
[[ Editor's note: While some may feel the recent Sci. Am. discussion is
only interesting philosophical chatter, it certainly does bring up some
important issues about simulations in general which even the most jaded
practitioners cannot ignore. Further, much of the debate strikes at the
heart of connectionist models and their verisimilitude to cognitive,
biological, or other functions. I think Searle did a disservice to
computer science (i.e., AI) by focusing on the Chinese Room example,
although he clearly tried to broaden and elucidate many of his
fundamental points in his recent article. Seng-Beng's response seems to
hinge on "thought is a process, not a thing", while Searle says (in
effect) "thought comes *from* a thing, and doesn't exist independently".
Unless I misstate either argument, neither is a testable hypothesis;
each must perforce serve as an unproven axiom of its respective
argument. -PM ]]
------------------------------
Subject: Errata
From: <HOSB%NUSDISCS.BITNET@CUNYVM.CUNY.EDU>
Date: Wed, 14 Feb 90 10:19:00 -0800
I would like to correct the second-to-last sentence in my previous
submission entitled "Searle and simulation":
This calls to mind Douglas Adams' "Hitchhiker's Guide to the Galaxy"
where Earth and all its inhabitants were created to compute the
QUESTION to a certain important answer.
> Ho Seng Beng (Jnet%"HOSB@NUSDISCS")
> National University of Singapore
> Department of Information Systems and Computer Science
------------------------------
Subject: Study of the Neural Network Research Community
From: michael@AGRICOLA.MIT.EDU (Michael J. Wargo)
Date: Tue, 13 Feb 90 10:03:23 -0500
A colleague of mine here at MIT has asked me to post this to the list.
Please respond to me by e-mail and I'll forward everything to him.
Profs. Michael Rappa and Koenraad Debackere write:
We are currently undertaking a study of research life on the frontier of
new fields in science and technology, including neural networks.
If during the coming week you are one of several thousand researchers who
receive the survey in the US Mail, we would greatly appreciate your
participation. If you would like to participate and wish to be certain
of receiving the survey, please forward your address to us.
Thanks,
michael@agricola.mit.edu
18.82.0.113
------------------------------
Subject: Sci. Am. Debate - Can Programs Think
From: DAVID%UCONNVM.BITNET@CUNYVM.CUNY.EDU
Date: Fri, 16 Feb 90 09:12:32 -0500
While recently reading the debates in Scientific American about whether
a computer program could be constructed that would think, i.e., display
intelligence, it occurred to me that one should attempt to list the
necessary (and maybe even the sufficient) conditions for finding
so-called artificial intelligence.
A robot which could think would be obviously intelligent if the
responses it gave were indistinguishable from those a human gave, for a
wide variety of events and prompts. This is the Turing test, I think,
phrased in an appropriate way for the discussion which follows. I do not
wish to imply that the robot's responses would be the same as any given
individual's responses, only that each response would be acceptable to a
human as lying inside the universe of responses one could expect from
humans. In fact, it does not even imply that the response is identical to
that which the robot would make under "identical" circumstances at some
future time.
Now comes the hard part. I have deliberately emphasized the word
response, rather than answer, because I suspect that thinking is
intimately tied to certain facts of biology which are not contained
solely in computer programs, and therefore are being excluded from the
arguments presented so far. Consider an embryonic brain, developing in a
fetus. If that brain were to be removed from its host, and supported in
the mode of "Donovan's Brain" a movie I vaguely remember had a
disembodied brain maintained by elaborate chemicals, then the question
is, would that brain think. Of course the first question one would have
to ask is how is one to measure the brain's thinking. An EKG might
indicate activity, but my understanding of present methods suggests that
there is little difference between an EKG of a thinking and a non
thinking person, so I doubt if this would be a viable test. Positron
emission tomography might show activity, but still not indicate
conclusively that the brain was thinking. Ultimately, I believe that one
is drawn to the conclusion that in order to ascertain the thinking
quality of an isolated brain, one needs to have an output device. Call it
a mouth.
Does anyone truly believe that a mouth-equipped brain would talk? Of
course not; it has to be taught to talk. It could babble, in the sense
that a computer can be taught to produce nonsense sentences, but
producing random words or noises from its mouth would not constitute
thinking and communication to most observers.
So we would come, I believe, to the conclusion that this isolated
brain needed an input system as well as an output system, so that it
could absorb information from the outside world. In fact, I will suggest
later on that the brain needs at least two input systems, and two output
systems to develop consciousness, but for now, let us proceed assuming
that at least an input system is needed.
No one would now conclude that this semi-isolated brain would
spontaneously develop the ability to think. It needs more than the
ability to receive external stimuli and create external signals. It needs
to be taught what to do. Let us assume that the brain has a "mother",
someone (one of us, I think) who would spend the time and effort teaching the
brain "things". What would that mother do? Stimulate the brain with an
external signal to its input, and hope for an output? And when the output
was received, would the mother then offer encouragement to the brain so
as to "reinforce" the learned activity? Why should the brain respond to
such encouragement?
It would appear to me that the essence of teaching the brain is that
the brain must have embedded in it some form of pleasure system (and some
form of pain system) which the brain senses internal to itself as being
pleasurable (or painful). The random responses that it gives to its
"mother" need to be correlated in the brain's neurons to the responses it
obtains from its "mother" in order for the brain to strengthen the
process of attaching sets of inputs to outputs. This means that a brain
under training needs a mechanism for feeling good (and for feeling bad).
The existence of enkephalins (pleasure) and bradykinin (pain) molecules
indicates that the brain has built into it mechanisms which the brain
interprets as being good (or bad) (in our language).
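[[ Editor's note: Carl's "pleasure system" corresponds roughly to what
the connectionist literature calls reward-modulated learning. Below is
a minimal sketch in Python with numpy; the teacher rule, learning rate,
and sizes are hypothetical choices of mine, not anything proposed in
the letter. A single unit gives noisy (initially random) responses, and
a scalar "feels good / feels bad" signal gates a Hebbian weight update.

    import numpy as np

    rng = np.random.default_rng(1)
    w = np.zeros(2)    # connection strengths, initially naive
    lr = 0.2           # learning rate

    def teacher_reward(x, y):
        # The "mother": +1 (pleasure) when the response matches her
        # target (here: respond iff the first input is active),
        # else -1 (pain).
        target = 1.0 if x[0] > 0 else -1.0
        return 1.0 if y == target else -1.0

    for trial in range(200):
        x = rng.choice([-1.0, 1.0], size=2)
        # Early responses are essentially random (exploration noise).
        y = 1.0 if (w @ x + rng.normal(scale=0.5)) > 0 else -1.0
        r = teacher_reward(x, y)
        # Reward-modulated Hebbian update: strengthen input-output
        # correlations that felt good, weaken those that felt bad.
        w += lr * r * y * x

    print(w)  # weight on x[0] grows; the irrelevant x[1] stays small

-PM ]]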
In order to develop a sense of self, the brain must be able to
notice itself. Other than the output device for signaling (the mouth) the
brain would need an output device for moving something into reach of the
input device, so it would appear as if the brain needs at least two
output devices, one an actuator and the other a communicator. Said in
more common terms, to distinguish between "self" and "other" the brain
needs tools which themselves present the possibility of "seeing" the
difference between "self" and "other".
If the brain only has one input device, then the nature of its
perception of the outside world is unimodal, not terribly rich in detail.
If it has two different inputs, then there can be a correlation between
them, when the two inputs are simultaneously stimulated under training.
Thus hearing the word 'cow' and seeing a cow becomes, under training,
an association of a set of learned sounds with a set of learned images. These
presumably are learned because the pleasure of learning them, and
transmitting that pleasure to the mother results in more pleasure (see
above). Concepts, rather than symbols, would appear therefore to be the
simultaneous juxtaposition of several inputs which give rise to neuronal
patterns which, upon being activated for some other reason, bring the
(possibly temporally altered) echoes (and possibly other echoes mixed in)
of these inputs into play again.
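[[ Editor's note: The cross-modal association described here (the word
'cow' and the sight of a cow, stimulated together) is just what a
Hebbian outer-product memory stores. A minimal sketch, again in Python
with numpy, with patterns invented purely for illustration:

    import numpy as np

    # Two modalities as +/-1 pattern vectors: the "sound" and the
    # "image" of a cow.
    sound_cow = np.array([1., -1., 1., 1., -1., -1.])
    image_cow = np.array([-1., 1., 1., -1., 1., -1., 1., 1.])

    # Hebbian learning during simultaneous stimulation: each pair of
    # co-active units strengthens its connection (outer product).
    W = np.outer(image_cow, sound_cow) / len(sound_cow)

    # Later, "hearing" the word alone evokes the stored visual pattern.
    recalled = np.sign(W @ sound_cow)
    print(np.array_equal(recalled, image_cow))   # True

-PM ]]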
If the brain has at least two inputs and two outputs, then it can,
either from training or inadvertently, become aware of itself, and
depending on the nature of the inputs and the outputs, manipulate itself
and sense the results differently from "seeing" the result of
manipulating "others". I would suggest that the above constitutes a
minimal prescription for creating consciousness.
If this is approximately true, then the answer to the questions
posed in the debate about thinking machines appears to be as
unresolvable as ever. What is the computer equivalent of pain? No amount
of thought on my part has yielded the slightest clue as to how to program
a neural net to not like something, or the converse. This represents, to
me, an insuperable obstacle to creating artificial intelligence.
Providing the input and output functions is clearly the simple part,
and will become trivial in the future. It is the motivation problem that, were it
to be addressed and shown to be solvable, might lead to true artificial
intelligence.
Some criticism of these remarks would be appreciated.
Carl W. David
DAVID AT UCONNVM.bitnet
------------------------------
Subject: Reviewers for Special Issue of Connection Science
From: Lyn Shackleton <lyn%CS.EXETER.AC.UK@vma.CC.CMU.EDU>
Date: Tue, 13 Feb 90 13:17:34 +0000
The Journal, Connection Science, is soon to announce a call for papers
for a special issue on Simulations of Psychological Processes.
So far the special editorial board consists of:
James A. Anderson
Walter Kintsch
Dennis Norris
David Rumelhart
Noel Sharkey
Others will be added to this list.
We are now calling for REVIEWERS for the special issue. We would like to
enlist volunteers from any area of psychology with experience in
connectionist modeling.
Please state name and area of expertise.
lyn shackleton
Centre for Connection Science      JANET:  lyn@uk.ac.exeter.cs
Dept. Computer Science
University of Exeter               UUCP:   !ukc!expya!lyn
Exeter EX4 4PT
Devon                              BITNET: lyn@cs.exeter.ac.uk.UKACRL
U.K.
------------------------------
Subject: TR Available - Connectionist Language Users
From: rba@flash.bellcore.com (Robert B Allen)
Date: Mon, 12 Feb 90 07:41:32 -0500
Connectionist Language Users
Robert B. Allen
Bell Communications Research
The Connectionist Language Users (CLUES) paradigm employs neural
learning algorithms to develop grounded reactive intelligent agents.
In much of the research reported here, these agents "use" language to
answer questions and to interact with their environment. The model is
applied to simple examples of generating verbal descriptions, answering
questions, pronoun reference, labeling actions, and verbal interactions
between agents. In addition, the agents are shown to be able to model
other intelligent activities such as planning, grammars, and simple
analogies, and an adaptive pedagogy is introduced. Overall, these
networks provide a natural account of many aspects of language use.
______
Request reports from smk@bellcore.com, Selma Kaufman, 2M356, Bell
Communications Research, Morristown, NJ 07960-1910
------------------------------
Subject: Wang Conference ATR Call for Papers
From: mike@bucasb.bu.edu (Michael Cohen)
Date: Thu, 15 Feb 90 18:57:28 -0500
CALL FOR PAPERS
NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION
MAY 11--13, 1990
Sponsored by the Center for Adaptive Systems,
the Graduate Program in Cognitive and Neural Systems,
and the Wang Institute of Boston University
with partial support from
The Air Force Office of Scientific Research
This research conference at the cutting edge of neural network science
and technology will bring together leading experts in academe,
government, and industry to present their latest results on automatic
target recognition in invited lectures and contributed posters. Automatic
target recognition is a key process in systems designed for vision and
image processing, speech and time series prediction, adaptive pattern
recognition, and adaptive sensory-motor control and robotics. Invited
lecturers include:
JOE BROWN, Martin Marietta; GAIL CARPENTER, Boston Univ.;
NABIL FARHAT, Univ. Pennsylvania; STEPHEN GROSSBERG, Boston Univ.;
ROBERT HECHT-NIELSEN, HNC; KEN JOHNSON, Hughes Aircraft;
PAUL KOLODZY, MIT Lincoln Lab; MICHAEL KUPERSTEIN, Neurogen;
YANN LECUN, AT&T Bell Labs; CHRISTOPHER SCOFIELD, Nestor;
STEVEN SIMMES, Science Applications International Co.;
ALEX WAIBEL, Carnegie Mellon Univ.; ALLEN WAXMAN, MIT Lincoln Lab;
FRED WEINGARD, Booz-Allen and Hamilton; BARBARA YOON, DARPA.
CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR
neural network research will be held on May 12, 1990. Attendees who wish
to present a poster should submit 3 copies of an extended abstract (1
single-spaced page), postmarked by March 1, 1990, for refereeing. Include
with the abstract the name, address, and telephone number of the
corresponding author. Mail to: ATR Poster Session, Neural Networks
Conference, Wang Institute of Boston University, 72 Tyng Road, Tyngsboro,
MA 01879. Authors will be informed of abstract acceptance by March 31,
1990.
SITE: The Wang Institute possesses excellent conference facilities on a
beautiful 220-acre campus. It is easily reached from Boston's Logan
Airport and Route 128.
REGISTRATION FEE: Regular attendee--$90; full-time student--$70.
Registration fee includes admission to all lectures and the poster
session, meeting proceedings, one reception, two continental breakfasts,
one lunch, one dinner, and daily morning and afternoon coffee service.
STUDENT FELLOWSHIPS are available. For information, call (508) 649-9731.
TO REGISTER: By phone, call (508) 649-9731; by mail, write for further
information to: Neural Networks, Wang Institute of Boston University, 72
Tyng Road, Tyngsboro, MA 01879.
------------------------------
Subject: job announcement, please post
From: ULI%MPI01.MPI.KUN.NL@vma.CC.CMU.EDU
Date: Tue, 13 Feb 90 12:37:00 +0700
Position Available
The Max-Planck Institute for Psycholinguistics in Nijmegen, The Netherlands,
is looking for a programmer to participate in a project entitled,
``Computational modeling of lexical representation and processes''. The
task of the successful candidate will be to help develop and implement
software for studying and simulating the processes of human speech
perception and word recognition with artificial neural nets of different
types. A strong background in software development (good knowledge of C)
and a good understanding of the mathematical/technical principles
underlying neural nets are required.
The position is to be filled starting in March 1990 and is limited to two
years (up to BAT IIA on the German salary scale). Qualified applicants
(with a university degree: Ph.D.) should send their curriculum vitae and two
letters of recommendation by March 1, 1990 to:
Uli Frauenfelder or Peter Wittenburg
Max-Planck-Institute for Psycholinguistics
Wundtlaan 1
NL-6525-XD Nijmegen, The Netherlands
phone: 31-80-521-911
e-mail: uli@hnympi51.bitnet or pewi@hnympi51.bitnet.
------------------------------
Subject: New ARTIFICIAL LIFE mailing list
From: silver!efreeman@iuvax.cs.indiana.edu (Eric T. Freeman)
Organization: Indiana University, Bloomington
Date: 15 Feb 90 18:27:55 +0000
[[ Editor's Note: I would be very interested in having anyone who attended
the recent conference report on aspects which would be of interest to
readers of Neuron Digest. Many folks have already looked at connections
(so to speak) between connectionist models (aka neural nets), cellular
automata, genetic algorithms, et alia. Any takers? Hmmm, I wonder what
Searle would have to say on this subject ;-) -PM ]]
New ARTIFICIAL LIFE Email Distribution List
-------------------------------------------
At the second workshop for Artificial Life in Santa Fe, February 1990,
the need emerged for an email distribution list to provide a forum for
discussion, ideas, comments and announcements in the area of Artificial
Life Research.
Artificial Life is the study of man-made systems that exhibit
behaviors characteristic of natural living systems. It
complements the traditional biological sciences concerned with
the analysis of living organisms by attempting to synthesize
life-like behaviors within computers and other
artificial media. By extending the empirical foundation
upon which biology is based beyond the carbon-chain life
that has evolved on Earth, Artificial Life can contribute
to theoretical biology by locating life-as-we-know-it
within the larger picture of life-as-it-could-be.
-- C. Langton, 1989
The Artificial Life Research Group at Indiana University has been granted
the responsibility for maintaining and archiving this list.
Send all requests for additions/deletions/general business to:
alife-request@iuvax.cs.indiana.edu
Send all contributions to the alife list to:
alife@iuvax.cs.indiana.edu
Local re-distribution lists are welcome so long as the local distribution
information is available to the list maintainers and we are notified
beforehand.
Since the list maintainers wish to remain in touch with the list
subscribers and to maintain accountability for this list, re-distribution
to other lists or boards is prohibited.
Sincerely,
Elisabeth Freeman
Eric Freeman
Marek Lugowski
Artificial Life Research Group
Computer Science Department
Indiana University
Bloomington, IN 47405
U.S.A.
------------------------------
End of Neuron Digest [Volume 6 Issue 13]
****************************************