NEURON Digest 05 DEC 1986 Volume 1 Number 2
Topics in this digest --
Call for Commentators - Neuroethology of Vision
Courses - Parallel Architecture and AI (UPenn)
Seminars - The Capacity of Neural Networks (UPenn) &
BoltzCONS: Recursive Objects in a Neural Network (CMU) &
DSP Array for Neural Network Simulation (TI)
References - The definitive connectionist reference
Fun Facts - Simulating a Neural Network
Queries - Holography
[This is almost the last of my 'saved' excerpts from the AILIST
Digest. Sorry for the duplication (for many of you). MG]
-----------------------------------------------------------------
From: TILDE::"HARNAD%MIND@PRINCETON" 4-DEC-1986 22:41
To: neuron@ti-csl
Subj: Neuroethology of Vision: Call for Commentators
[The following call for commentators has already appeared in a few
other newsgroups.]
This is an experiment in using the Net to find eligible commentators
for articles in the Behavioral and Brain Sciences (BBS), an
international, interdisciplinary journal of "open peer commentary,"
published by Cambridge University Press, with its editorial office in
Princeton NJ.
The journal publishes important and controversial interdisciplinary
articles in psychology, neuroscience, behavioral biology, cognitive science,
artificial intelligence, linguistics and philosophy. Articles are
rigorously refereed and, if accepted, are circulated to a large number
of potential commentators around the world in the various specialties
on which the article impinges. Their 1000-word commentaries are then
co-published with the target article, together with the author's response
to each. The commentaries consist of analyses, elaborations,
complementary and supplementary data and theory, criticisms and
cross-specialty syntheses.
Commentators are selected by the following means: (1) BBS maintains a
computerized file of over 3000 BBS Associates; the size of this group
is increased annually as authors, referees, commentators and nominees
of current Associates become eligible to become Associates. Many
commentators are selected from this list. (2) The BBS editorial office
does informal as well as formal computerized literature searches on
the topic of the target articles to find additional potential commentators
from across specialties and around the world who are not yet BBS Associates.
(3) The referees recommend potential commentators. (4) The author recommends
potential commentators.
We now propose to add the following source for selecting potential
commentators: The abstract of the target article will be posted in the
relevant newsgroups on the net. Eligible individuals who judge that they
would have a relevant commentary to contribute should contact me at the
e-mail address indicated at the bottom of this message, or should
write by normal mail to:
Stevan Harnad
Editor
Behavioral and Brain Sciences
20 Nassau Street, Room 240
Princeton NJ 08542
"Eligibility" usually means being an academically trained professional
contributor to one of the disciplines mentioned earlier, or to related
academic disciplines. The letter should indicate the candidate's
general qualifications as well as their basis for wishing to serve as
commentator for the particular target article in question. It is
preferable also to enclose a Curriculum Vitae. (This self-nomination
format may also be used by those who wish to become BBS Associates,
but they must also specify a current Associate who knows their work
and is prepared to nominate them; where no current Associate is known
by the candidate, the editorial office will send the Vita to
appropriate Associates to ask whether they would be prepared to
nominate the candidate.)
BBS has rapidly become a widely read and very influential forum in the
biobehavioral and cognitive sciences. A recent recalculation of BBS's
"impact factor" (ratio of citations to number of articles) in the
American Psychologist [41(3) 1986] reports that, already in its fifth
year of publication, BBS's impact factor had risen to the highest of all
psychology journals indexed, to 3rd highest of the 1300 journals indexed
in the Social Sciences Citation Index, and to 50th of the 3900 journals
indexed in the Science Citation Index, which covers all the scientific
disciplines.
The following is the abstract of the second forthcoming article on
which BBS invites self-nominations by potential commentators. (Please
note that the editorial office must exercise selectivity among the
nominations received so as to ensure a strong and balanced cross-specialty
spectrum of eligible commentators.)
-----
NEUROETHOLOGY OF RELEASING MECHANISMS: PREY-CATCHING IN TOADS
Joerg-Peter Ewert
Neuroethology Department, FB 19,
University of Kassel
D-3500 Kassel
Federal Republic of Germany
ABSTRACT:
"Sign stimuli" elicit specific patterns of behavior when an
organism's motivation is appropriate. In the toad, visually released
prey-catching involves orienting toward the prey, approaching,
fixating and snapping. For these action patterns to be selected and
released, the prey must be recognized and localized in space. Toads
discriminate prey from nonprey by certain spatiotemporal stimulus
features. The stimulus-response relations are mediated by innate
releasing mechanisms (RMs) with recognition properties partly
modifiable by experience. Striato-pretecto-tectal connectivity
determines the RM's recognition and localization properties whereas
medial pallio-thalamo-tectal circuitry makes the system sensitive to
changes in internal state and to prior history of exposure to stimuli.
RMs encode the diverse stimulus conditions involving the same prey
object through different combinations of "specialized" tectal neurons,
involving cells selectively tuned to prey features. The prey-selective
neurons express the outcome of information processing in functional
units consisting of interconnected cells. Excitatory and inhibitory
interactions among feature-sensitive tectal and pretectal neurons
specify the perceptual operations involved in distinguishing prey
from its background, selecting its features, and discriminating it
from predators. Other connections indicate stimulus location. The
results of these analyses are transmitted by specialized neurons
projecting from the tectum to bulbar/spinal motor systems, providing a
sensorimotor interface. Specific combinations of projective neurons --
mediating feature- and space-related messages -- form "command
releasing systems" that activate corresponding motor pattern
generators for appropriate prey-catching action patterns.
-----
Potential commentators should send their names, addresses, a description of
their general qualifications and their basis for seeking to comment on
this target article in particular to the address indicated earlier or
to the following e-mail address:
{allegra, bellcore, seismo, rutgers, packard}!princeton!mind!harnad
harnad%mind@princeton.csnet
Or to: Stevan Harnad
Editor
Behavioral & Brain Sciences
20 Nassau Street #240
Princeton NJ 08542
(609)-921-7771
-----------------------------
[Excerpt from AIList Digest V4 #278. MG]
Date: 1 Dec 86 19:31:49 EST
From: BORGIDA@RED.RUTGERS.EDU
Subject: Course - Parallel Architecture and AI (UPenn)
Posted-Date: Mon, 17 Nov 86 09:51 EST
From: Tim Finin <Tim@cis.upenn.edu>
Here is a description of a 1 and 1/2 day course we are putting on for
the Army Research Office. We are opening it up to some people from
other universities and nearby industry. We have set a modest fee of
$200 for non-academic attendees; the course is free for academic colleagues.
Please forward this to anyone who might be interested.
SPECIAL ARO COURSE ANNOUNCEMENT
COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING IN AI APPLICATIONS
As a part of our collaboration with the Army Research Office, we are
presenting a three-day course on computer architectures for parallel
processing with an emphasis on their application to AI problems. Professor
Insup Lee has organized the course which will include lectures by
professors Hossam El Gindi, Vipin Kumar (from the University of Texas at
Austin), Insup Lee, Eva Ma, Michael Palis, and Lokendra Shastri.
Although the course is being sponsored by the ARO for researchers from
various Army research labs, we are making it available to colleagues from
within the University of Pennsylvania as well as some nearby universities
and research institutions.
If you are interested in attending this course, please contact Glenda Kent
at 898-3538 or send electronic mail to GLENDA@CIS.UPENN.EDU and indicate
your intention to attend. Attached is some additional information on the
course.
Tim Finin
TITLE Computer Architectures for Parallel Processing in AI
Applications
WHEN December 10-12, 1986 (from 9:00 a.m. 12/10 to 12:00 p.m.
12/12)
WHERE room 216, Moore School (33rd and Walnut), University of
Pennsylvania, Philadelphia, PA.
FEE $200. for non-academic attendees
PRESENTERS Hossam El Gindi, Vipin Kumar, Insup Lee, Eva Ma, Michael
Palis, Lokendra Shastri
POC Glenda Kent, 215-898-3538, glenda@cis.upenn.edu
Insup Lee, lee@cis.upenn.edu
INTENDED FOR Research and application programmers, technically oriented
managers.
DESCRIPTION This course will provide a tutorial on parallel
architectures, algorithms and programming languages, and
their applications to Artificial Intelligence problems.
PREREQUISITES Familiarity with basic computer architectures, high-level
programming languages, and symbolic logic; knowledge of
LISP and analysis of algorithms is desirable.
COURSE CONTENTS This three day tutorial seminar will present an overview of
parallel computer architectures with an emphasis on their
applications to AI problems. It will also supply the
necessary background in parallel algorithms, complexity
analysis and programming languages. A tentative list of
topics is as follows:
- Introduction to Parallel Architectures - parallel
computer architectures such as SIMD, MIMD, and
pipeline; interconnection networks including
ring, mesh, tree, multi-stage, and cross-bar.
- Parallel Architectures for Logic Programming -
parallelism in logic programs; parallel execution
models; mapping of execution models to
architectures.
- Parallel Architectures for High Speed Symbolic
Processing - production system machines (e.g.,
DADO); tree machines (e.g., NON-VON); massively
parallel machines (e.g., Connection Machine,
FAIM).
- Massive Parallelism in AI - applications of the
connectionist model in the areas of computer
vision, knowledge representation, inference, and
natural language understanding.
- Introduction to Parallel Computational Complexity
- formal parallel computation models such as
Boolean circuits, alternating Turing machines,
parallel random-access machines; relations
between sequential and parallel models of
computation; parallel computational complexity of
AI problems such as tree, graph searches,
unification and natural language parsing.
- Parallel Algorithms and VLSI - interconnection
networks for VLSI layout; systolic algorithms and
their hardware implementations;
- Parallel Programming Languages - language
constructs for expressing parallelism and
synchronization; implementation issues.
COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING
IN AI APPLICATIONS
COURSE OUTLINE
The course will consist of seven lectures, each lasting two to three hours.
The first lecture introduces the basic concepts of parallel computer
architectures. It explains the organization and applications of different
classes of parallel computer architectures such as SIMD, MIMD, and
pipeline. It then discusses the properties and design tradeoffs of various
types of interconnection networks for parallel computer architectures. In
particular, the ring, mesh, tree, multi-stage, and cross-bar will be
evaluated and compared.
The second and third lectures concentrate on parallel architectures for AI
applications. The second lecture overviews current research efforts to
develop parallel architectures for executing logic programs. Topics covered
will include potential for exploiting parallelism in logic programs,
parallel execution models, and mapping of execution models to
architectures. Progress made so far and problems yet to be solved in
developing such architectures will be discussed. The third lecture
overviews the state-of-the-art of architectures for performing high speed
symbolic processing. In particular, we will describe parallel architectures
for executing production systems (e.g., DADO), tree machines (e.g.,
NON-VON), and massively parallel machines (e.g., the Connection Machine and FAIM).
The fourth lecture explains why the von Neumann architecture is
inappropriate for AI applications and motivates the need for pursuing the
connectionist approach. To justify the thesis, some specific applications
of the connectionist model in the areas of computer vision, knowledge
representation, inference, and natural language understanding will be
discussed. Although the discussion will vary at the levels of detail, we
plan to examine at least one effort in detail, namely the applicability and
usefulness of adopting a connectionist approach to knowledge representation
and limited inference.
The fifth lecture introduces the basic notions of parallel computational
complexity. Specifically, the notion of ``how hard a problem is to solve
in parallel'' is formalized. To formulate this notion precisely, we
will define various formal models of parallel computation such as boolean
circuits, alternating Turing machines, and parallel random-access machines.
Then, the computational complexity of a problem is defined in terms of the
amount of resources such as parallel time and number of processors needed
to solve it. The relations between sequential and parallel models of
computation, as well as characterizations of ``efficiently parallelizable''
and ``inherently sequential'' problems are also given. Finally, the
parallel computational complexity of problems in AI (e.g., tree and graph
searches, unification, and natural language parsing) is discussed.
The sixth lecture discusses how to bridge the gap between design of
parallel algorithms and their hardware implementations using the present
VLSI technology. This lecture will overview interconnection networks
suitable for VLSI layout. Then, different systolic algorithms and their
hardware implementations will be discussed. To evaluate their
effectiveness, we compare how important data storage schemes, like queue
(FIFO), dictionary, and matrix manipulation, can be implemented on various
systolic architectures.
The seventh lecture surveys various parallel programming languages. In
particular, the lecture will describe extensions made to sequential
procedural, functional, and logic programming languages for parallel
programming. Language constructs for expressing parallelism and
synchronization, either explicitly or implicitly, will be overviewed and
their implementation issues will be discussed.
------------------------------
[The following is an excerpt from the AIList Digest, V4 #260. - MG]
Date: Thu, 13 Nov 86 23:12 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - The Capacity of Neural Networks (UPenn)
CIS Colloquium
University of Pennsylvania
3pm Tuesday November 18
216 Moore School
THE CAPACITY OF NEURAL NETWORKS
Santosh S. Venkatesh
University of Pennsylvania
Analogies with biological models of brain functioning have led to fruitful
mathematical models of neural networks for information processing. Models of
learning and associative recall based on such networks illustrate how
powerful distributed computational properties become evident as collective
consequence of the interaction of a large number of simple processing
elements (the neurons). A particularly simple model of a neural network,
composed of densely interconnected McCulloch-Pitts neurons, is used in
this presentation to illustrate the capabilities of such structures. It is
demonstrated that while these simple constructs form a complete basis for
Boolean functions, the most cost-efficient utilization of these networks
lies in their subversion to a class of problems of high algorithmic
complexity. Specializing to the particular case of associative memory,
efficient algorithms are demonstrated for the storage of memories as stable
entities, or gestalts, and their retrieval from any significant subpart.
Formal estimates of the essential capacities of these schemes are shown. The
ultimate capability of such structures, independent of algorithmic
approaches, is characterized in a rigorous result. Extensions to more
powerful computational neural network structures are indicated.
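[A minimal C sketch of the associative-memory setting described above,
assuming the standard outer-product (Hebbian) storage rule for +/-1
patterns -- a rough editorial illustration only, not the algorithms or
capacity results from the talk:]

/* Illustrative sketch: store +/-1 patterns in a fully connected
 * McCulloch-Pitts (Hopfield-style) network with the outer-product rule,
 * then recall a stored pattern from a partial or noisy cue by repeated
 * threshold updates.  Sizes and names are placeholders.
 */
#define NU 64                               /* number of neurons */

void store_patterns(double w[NU][NU], const int patterns[][NU], int npat)
{
    for (int i = 0; i < NU; i++)
        for (int j = 0; j < NU; j++) {
            w[i][j] = 0.0;
            if (i == j)
                continue;                   /* no self-connections */
            for (int p = 0; p < npat; p++)
                w[i][j] += patterns[p][i] * patterns[p][j];
        }
}

void recall(const double w[NU][NU], int state[NU], int sweeps)
{
    /* 'state' holds the cue on entry; after a few sweeps it should settle
     * into the nearest stored pattern (the "gestalt").
     */
    for (int s = 0; s < sweeps; s++)
        for (int i = 0; i < NU; i++) {
            double h = 0.0;
            for (int j = 0; j < NU; j++)
                h += w[i][j] * state[j];
            state[i] = (h >= 0.0) ? 1 : -1; /* threshold unit */
        }
}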
------------------------------
[The following is an excerpt from the AIList Digest, V4 #260. - MG]
Date: 12 November 1986 1257-EST
From: Masaru Tomita@A.CS.CMU.EDU
Subject: Seminar - BoltzCONS: Recursive Objects in a Neural Network (CMU)
Time: 3:30pm
Place: WeH 5409
Date: 11/18, Tuesday
BoltzCONS: Representing and Transforming Recursive
Objects in a Neural Network
David S. Touretzky, CMU CSD
BoltzCONS is a neural network in which stacks and trees are implemented as
distributed activity patterns. The name reflects the system's mixed
representational levels: it is a Boltzmann Machine in which Lisp cons cell-like
structures appear as an emergent property of a massively parallel distributed
representation. The architecture employs three ideas from connectionist symbol
processing -- coarse coded distributed memories, pullout networks, and variable
binding spaces -- that first appeared together in Touretzky and Hinton's neural
network production system interpreter. The distributed memory is used to store
triples of symbols that encode cons cells, the building blocks of linked lists.
Stacks and trees can then be represented as list structures, and they can be
manipulated via associative retrieval. BoltzCONS' ability to recognize shallow
energy minima as failed retrievals makes it possible to traverse binary trees
of unbounded depth nondestructively without using a control stack. Its two
most significant features as a connectionist model are its ability to represent
structured objects, and its generative capacity, which allows it to create new
symbol structures on the fly.
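[A rough, purely symbolic illustration of the triples mentioned above; the
coarse-coded distributed encoding is the actual contribution of the work and
is not shown. The C struct and names below are an editorial sketch, not code
from the paper:]

/* A cons cell written as a (tag, car, cdr) triple of symbols; a list is a
 * chain of such triples looked up by tag in an associative memory.
 */
typedef struct {
    const char *tag;    /* name of this cell              */
    const char *car;    /* symbol held in the cell        */
    const char *cdr;    /* tag of the next cell, or "nil" */
} Triple;

/* the list (john kissed mary) as three triples */
static const Triple memory[] = {
    { "c1", "john",   "c2"  },
    { "c2", "kissed", "c3"  },
    { "c3", "mary",   "nil" },
};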
A toy application for BoltzCONS is the transformation of parse trees from
active to passive voice. An attached neural network production system contains
a set of rules for performing the transformation by issuing control signals to
BoltzCONS and exchanging symbols with it. Working together, the two networks
are able to cooperatively transform ``John kissed Mary'' into ``Mary was kissed
by John.''
------------------------------
From: TILDE::"LAWS%SRI-STRIPE.ARPA@CSNET-RELAY.ARPA" 4-DEC-1986 23:01
To: ailist@sri-ai
Subj: Seminar - DSP Array for Neural Network Simulation (TI)
Speaker: Dr. P. Andrew Penz
Date: Thursday, December 10, 1986
Time: 11:15 luncheon, meeting at noon
Location: Mini-auditorium S/C Building, TI, Dallas, TX
A neural network is one type of parallel computer architecture. It
embodies the hypothesis that a useful conclusion about a complex pattern
can be obtained from linearly weighted sums of pattern measurements
compared against threshold conditions. As such, the computationally
intensive part of most neural network architectures is a multiplication
of a vector of measurements by a matrix of weighting coefficients. Current
analog hardware is not available to perform this multiplication at low
cost, but digital signal processors (DSPs) emulate these operations with high
accuracy at reasonable speed, e.g., the TMS 320 series. A single TMS 32020
can converge a 256 x 256 distributed (Hopfield) matrix from a seed vector
to the closest stored state in a time on the order of 0.1 sec. Large matrices
can be converged by a straightforward parallel array of DSPs; e.g., a 1000 by
1000 matrix requires 16 DSPs. Because of the programmability of the DSP
accelerators, they can be applied to a variety of neural network formulations,
e.g., designs associated with Hecht-Nielsen, Fukushima, Kohonen, Hopfield, etc.
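[The kernel described above is a matrix-vector multiply followed by a
threshold. The C sketch below shows one synchronous update of a 1000 x 1000
Hopfield-style matrix, with the rows split 16 ways to suggest how a
straightforward parallel array of DSPs might divide the work; it is an
editorial illustration, not TI's implementation:]

#define SIZE  1000                 /* vector/matrix dimension         */
#define NPROC 16                   /* DSPs in the hypothetical array  */

void hopfield_update(const float w[SIZE][SIZE], const float v_in[SIZE],
                     float v_out[SIZE])
{
    int rows_per_proc = (SIZE + NPROC - 1) / NPROC;

    for (int p = 0; p < NPROC; p++) {          /* one block per DSP       */
        int lo = p * rows_per_proc;
        int hi = (lo + rows_per_proc < SIZE) ? lo + rows_per_proc : SIZE;
        for (int i = lo; i < hi; i++) {        /* rows owned by this DSP  */
            float sum = 0.0f;
            for (int j = 0; j < SIZE; j++)
                sum += w[i][j] * v_in[j];      /* multiply-accumulate     */
            v_out[i] = (sum >= 0.0f) ? 1.0f : -1.0f;   /* threshold       */
        }
    }
}

/* Iterating hopfield_update until v_out stops changing converges the seed
 * vector to the closest stored state, as described in the announcement.
 */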
-----------------------------
[This is another selected highlight from AILIST Digest V4, #224. MG]
Date: 17 Oct 86 05:34:57 GMT
From: iarocci@eneevax.umd.edu (Bill Dorsey)
Subject: simulating a neural network
Having recently read several interesting articles on the functioning of
neurons within the brain, I thought it might be educational to write a program
to simulate their functioning. Being somewhat of a newcomer to the field of
artificial intelligence, my approach may be all wrong, but if it is, I'd
certainly like to know how and why.
The program simulates a network of 1000 neurons. Any more than 1000 slows
the machine down excessively. Each neuron is connected to about 10 other
neurons. This choice was rather arbitrary, but I figured the number of
connections would be proportional to the cube root of the number of neurons
since the brain is a three-dimensional object.
For those not familiar with the basic functioning of a neuron, as I
understand it, it functions as follows: Each neuron has many inputs coming
from other neurons and its output is connected to many other neurons. Pulses
coming from other neurons add to or subtract from its potential. When the
potential exceeds some threshold, the neuron fires and produces a pulse. To
further complicate matters, any existing potential on the neuron drains away
according to some time constant.
In order to simplify the program, I took several short-cuts in the current
version of the program. I assumed that all the neurons had the same threshold,
and that they all had the same time constant. Setting these values randomly
didn't seem like a good idea, so I just picked values that seemed reasonable,
and played around with them a little.
One further note should be made about the network. For lack of a good
idea on how to organize all the connections between neurons, I simply
connected them to each other randomly. Furthermore, the determination of
whether a neuron produces a positive or negative pulse is made randomly at
this point.
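[A minimal C sketch of the update rule described above -- an editorial
reconstruction, not the poster's Atari ST program. The threshold, decay and
pulse constants are arbitrary placeholders:]

/* N leaky threshold neurons, each randomly wired to about K others with a
 * randomly chosen excitatory or inhibitory pulse.
 */
#include <stdlib.h>

#define N 1000          /* number of neurons             */
#define K 10            /* outgoing connections each     */
#define THRESHOLD 5.0   /* common firing threshold       */
#define DECAY 0.9       /* potential retained per step   */
#define PULSE 1.0       /* magnitude of one pulse        */

typedef struct {
    double potential;
    int    target[K];   /* indices of downstream neurons   */
    double sign[K];     /* +PULSE or -PULSE per connection */
} Neuron;

static Neuron net[N];

void init_net(void)
{
    for (int i = 0; i < N; i++) {
        net[i].potential = 0.0;
        for (int j = 0; j < K; j++) {
            net[i].target[j] = rand() % N;                 /* random wiring */
            net[i].sign[j]   = (rand() % 2) ? PULSE : -PULSE;
        }
    }
}

void step_net(void)
{
    static double incoming[N];
    for (int i = 0; i < N; i++) incoming[i] = 0.0;

    /* neurons at or above threshold fire and deliver pulses */
    for (int i = 0; i < N; i++) {
        if (net[i].potential >= THRESHOLD) {
            for (int j = 0; j < K; j++)
                incoming[net[i].target[j]] += net[i].sign[j];
            net[i].potential = 0.0;           /* reset after firing */
        }
    }
    /* remaining potential leaks away, then new pulses arrive */
    for (int i = 0; i < N; i++)
        net[i].potential = net[i].potential * DECAY + incoming[i];
}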
In order to test out the functioning of this network, I created a simple
environment and several inputs/outputs for the network. The environment is
simply some type of maze bounded on all sides by walls. The outputs are
(1) move north, (2) move south, (3) move west, (4) move east. The inputs are
(1) you bumped into something, (2) there's a wall to the north, (3) wall to
the south, (4) wall to the west, (5) wall to the east. When the neuron
corresponding to a particular output fires, that action is taken. When a
specific input condition is met, a pulse is added to the neuron corresponding
to the particular input.
The initial results have been interesting, but indicate that more work
needs to be done. The neuron network indeed shows continuous activity, with
neurons changing state regularly (but not periodically). The robot (!) moves
around the screen, generally winding up in a corner somewhere, where it
occasionally wanders a short distance away before returning.
I'm curious if anyone can think of a way for me to produce positive and
negative feedback instead of just feedback. An analogy would be pleasure
versus pain in humans. What I'd like to do is provide negative feedback
when the robot hits a wall, and positive feedback when it doesn't. I'm
hoping that the robot will eventually 'learn' to roam around the maze
without hitting any of the walls (i.e. learn to use its senses).
I'm sure there are more conventional ai programs which can accomplish this
same task, but my purpose here is to try to successfully simulate a network
of neurons and see if it can be applied to solve simple problems involving
learning/intelligence. If anyone has any other ideas for which I may test
it, I'd be happy to hear from you. Furthermore, if anyone is interested in
seeing the source code, I'd be happy to send it to you. It's written in C
and runs on an Atari ST computer, though it could easily be modified to
run on almost any machine with a C compiler (the faster it is, the more
neurons you can simulate reasonably).
[See Dave Touretzky's message about connectionist references. -- KIL]
--
| Bill Dorsey |
| 'Imagination is more important than knowledge.' |
| - Albert Einstein |
| ARPA : iarocci@eneevax.umd.edu |
| UUCP : [seismo,allegra,rlgvax]!umcp-cs!eneevax!iarocci |
------------------------------
[The following is an excerpt from the AIList Digest, V4 #260. - MG]
Date: 15 Oct 86 21:12 EDT
From: Dave.Touretzky@A.CS.CMU.EDU
Subject: the definitive connectionist reference
The definitive book on connectionism (as of 1986) has just been published
by MIT Press. It's called "Parallel Distributed Processing: Explorations in
the Microstructure of Cognition", by David E. Rumelhart, James L. McClelland,
and the PDP research group. If you want to know about connectionist models,
this is the book to read. It comes in two volumes, at about $45 for the set.
For other connectionist material, see the proceedings of IJCAI-85 and the
1986 Cognitive Science Conference, and the January '85 issue of the
journal Cognitive Science.
-- Dave Touretzky
PS: NO, CONNECTIONISM IS NOT THE SAME AS PERCEPTRONS. Perceptrons were
single-layer learning machines, meaning they had an input layer and an
output layer, with exactly one learning layer in between. No feedback paths
were permitted between units -- a severe limitation. The learning
algorithms were simple. Minsky and Papert wrote a well-known book showing
that perceptrons couldn't do very much at all. They can't even learn the
XOR function. Since they had initially been the subject of incredible
amounts of hype, the fall of perceptrons left all of neural network
research in deep disrepute among AI researchers for almost two decades.
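[The XOR claim is easy to check directly. The small C program below, an
editorial sketch rather than anything from Minsky and Papert, searches a grid
of weights and biases for a single linear threshold unit that reproduces the
XOR truth table and finds none, since XOR is not linearly separable:]

#include <stdio.h>

/* output of a single linear threshold unit on binary inputs x1, x2 */
static int threshold_unit(double w1, double w2, double b, int x1, int x2)
{
    return (w1 * x1 + w2 * x2 + b > 0.0) ? 1 : 0;
}

int main(void)
{
    const int xor_table[4][3] = { {0,0,0}, {0,1,1}, {1,0,1}, {1,1,0} };

    for (double w1 = -2.0; w1 <= 2.0; w1 += 0.25)
        for (double w2 = -2.0; w2 <= 2.0; w2 += 0.25)
            for (double b = -2.0; b <= 2.0; b += 0.25) {
                int ok = 1;
                for (int i = 0; i < 4; i++)
                    if (threshold_unit(w1, w2, b,
                                       xor_table[i][0], xor_table[i][1])
                        != xor_table[i][2])
                        ok = 0;
                if (ok) {
                    printf("found weights: %g %g %g\n", w1, w2, b);
                    return 0;
                }
            }
    printf("no single threshold unit on this grid computes XOR\n");
    return 0;
}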
In contrast to perceptrons, connectionist models have unrestricted
connectivity, meaning they are rich in feedback paths. They have rather
sophisticated learning rules, some of which are based on statistical
mechanics (the Boltzmann machine learning algorithm) or information
theoretic measures (G-maximization learning). These models have been
enriched by recent work in physics (e.g., Hopfield's analogy to spin
glasses), computer science (simulated annealing search, invented by
Kirkpatrick and adapted to neural nets by Hinton and Sejnowski), and
neuroscience (work on coarse coding, fast weights, pre-synaptic
facilitation, and so on).
Many connectionist models perform cognitive tasks (i.e., tasks related to
symbol processing) rather than pattern recognition; perceptrons were mostly
used for pattern recognition. Connectionist models can explain certain
psychological phenomena that other models can't; for an example, see
McClelland and Rumelhart's word recognition model. The brain is a
connectionist model. It is not a perceptron.
Perhaps the current interest in connectionist models is just a passing fad.
Some folks are predicting that connectionism will turn out to be another
spectacular flop -- Perceptrons II. At the other extreme, some feel the
initial successes of ``the new connectionists'' may signal the beginning of
a revolution in AI. Read the journals and decide for yourself.
------------------------------
From: NGSTL1::EVANS "Had we but world enough, and time..."
To: CRL2::GATELY
Subj: holography info. request
Does anyone have the latest scoop on the current state of the art
in holography? The most I know about it comes from an article in
the National Geographic a few years back, and from looking at my
credit card. What are its aims; what technologies are currently
being used; and what sort of applications do researchers have in
mind??
Eleanor J. Evans
Texas Instruments, AI Lab
EVANS%NGSTL1%TI-EG@CSNET-RELAY.ARPA
End of NEURON Digest
********************