Neuron Digest Tuesday, 4 Oct 1988 Volume 4 : Issue 11
Today's Topics:
Administrivia & apology
MIRRORS/II Connectionist Simulator Available
INNS membership
Re: INNS membership
Parsimonious error metric
NIPS conference registration
BrainMaker
Re: Excalibur's Savvy
RE: TEMPORAL DOMAIN IN VISION
Send submissions, questions, mailing list maintenance and requests for back
issues to "Neuron-request@hplabs.hp.com"
------------------------------------------------------------
Subject: Administrivia & apology
From: Peter Marvit, your Moderator <neuron-request@hplabs.hp.com>
Date: Tue, 4 Oct 88 17:33:37 PDT
[[ Many (most) of you received multiple copies of Vol 4 #9 & #10. My
apologies, but our local mailer decided to go on vacation without telling
anyone. It seems to be fixed now, but (alas!) modern technology can
multiply a small glitch manyfold. Don't worry, to my knowledge no one is
on the list twice.
With the next issue, the Digest will be up to date. Keep those submissions
coming. I'm especially interested to hear how people are using Neural Nets
in their own research. I am personally interested in neurobiological models
and models of cognitive processes.
Regarding format of the Digest, based on some responses from readers, I
will continue to cull articles from the UNIX USENET and post them here.
I'm also going to investigate moderating the USENET group so we can have a
perfect gateway. I'll be scanning some other mailing lists for appropriate
grist for the Digest mill as well. I'm going to try batching the
Conference and Paper announcements together, so readers can separate them
from the discussions.
Finally, several folks asked about the Elman reference. The full reference
is "Finding Structure in Time", by Jeff Elman, April 1988, CRL Technical
Report 8801. As the blurb states, "requests for reprints should be sent to
the Center for Research in Language, C-008; University of California, San
Diego, CA 92093-0108."
Thanks for your time. -PM ]]
------------------------------------------------------------
Subject: MIRRORS/II Connectionist Simulator Available
From: James A. Reggia <reggia@mimsy.umd.edu>
Date: Wed, 28 Sep 88 13:41:04 -0400
MIRRORS/II Connectionist Simulator Available
MIRRORS/II is a general-purpose connectionist simulator which can be
used to implement a broad spectrum of connectionist (neural network)
models. MIRRORS/II is distinguished by its support of an extensible
high-level non-procedural language, an indexed library of networks,
spreading activation methods, learning methods, event parsers and
handlers, and a generalized event-handling mechanism.
The MIRRORS/II language allows relatively inexperienced computer users
to express the structure of a network that they would like to study
and the parameters which will control their particular connectionist
model simulation. Users can select an existing spreading
activation/learning method and other system components from the
library to complete their connectionist model; no programming is
required. On the other hand, more advanced users with programming
skills who are interested in research involving new methods for
spreading activation or learning can still derive major benefits from
using MIRRORS/II. The advanced user need only write functions for the
desired procedural components (e.g., spreading activation method,
control strategy, etc.). Based on language primitives specified by the
user, MIRRORS/II will incorporate the user-written components into the
connectionist model; no changes to the MIRRORS/II system itself are
required.
Connectionist models developed using MIRRORS/II are not limited to a
particular processing paradigm. Spreading activation methods, Hebbian
learning, competitive learning, and error back-propagation are among
the resources found in the MIRRORS/II library. MIRRORS/II provides
both synchronous and asynchronous control strategies that determine
which nodes should have their activation values updated during an
iteration. Users can also provide their own control strategies and
have control over a simulation through the generalized event-handling
mechanism.
Simulations produced by MIRRORS/II have an event-handling mechanism
which provides a general framework for scheduling certain actions to
occur during a simulation. MIRRORS/II supports system-defined events
(constant/cyclic input, constant/cyclic output, clamp, learn, display,
and show) and user-defined events. An event command (e.g., the input
command) indicates which event is to occur, when it is to occur, and
which part of the network it is to affect. Simultaneously occurring
events are prioritized according to user specification. At run time,
the appropriate event handler performs the desired action for the
currently occurring event. User-defined events can redefine the
workings of system-defined events or can create new events needed for
a particular application.
MIRRORS/II is implemented in Franz Lisp and will run under Opuses 38,
42, and 43 of Franz Lisp on UNIX systems. It is currently running on
MicroVAX, VAX, and Sun 3 machines. For more detailed information
about the MIRRORS/II system, see D'Autrechy, C. L., et al., 1988, "A
General-Purpose Simulation Environment for Developing Connectionist
Models," Simulation, 51, 5-19. The MIRRORS/II software and reference
manual are available at no charge via tape or ftp. If you are
interested in obtaining a copy of the software, send e-mail to
mirrors@mimsy.umd.edu
or
...!uunet!mimsy!mirrors
or send mail to
Lynne D'Autrechy
University of Maryland
Department of Computer Science
College Park, MD 20742
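[[ The synchronous versus asynchronous control strategies described
in the announcement can be illustrated with a short sketch. This is
NOT the MIRRORS/II language (which is non-procedural and Lisp-based);
it is a hypothetical Python toy in which activation spreads along
weighted links, just to show how the choice of update regime changes
behavior within a single iteration.

```python
def spread_once(act, weights, nodes):
    """Return a copy of `act` where each node in `nodes` gets a new
    activation: the weighted sum of its inputs, clipped to [0, 1]."""
    new = dict(act)
    for j in nodes:
        total = sum(w * act[i] for (i, k), w in weights.items() if k == j)
        new[j] = max(0.0, min(1.0, total))
    return new

# Hypothetical 3-node chain: node 0 -> node 1 -> node 2.
weights = {(0, 1): 0.8, (1, 2): 0.5}
act = {0: 1.0, 1: 0.0, 2: 0.0}

# Synchronous control: every node is updated from the same snapshot,
# so activation needs a second iteration to reach node 2.
sync = spread_once(act, weights, [1, 2])

# Asynchronous control: nodes are updated one at a time, so updating
# node 1 first lets its new activation reach node 2 in one sweep.
async_act = dict(act)
for node in [1, 2]:
    async_act = spread_once(async_act, weights, [node])
```

After one sweep the synchronous net leaves node 2 at 0.0 while the
asynchronous net has already pushed activation 0.4 there; a simulator
that supports both regimes lets the modeler pick whichever the model
calls for. -PM ]]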
[[ If someone is keeping a canonical list of inexpensive simulators, this
one should be added alongside the PDP book software and the Rochester
Simulator. -PM ]]
------------------------------
Subject: INNS membership
From: rravula@opie.wright.edu (R. Ravula)
Date: 28 Sep 88 18:31:40 +0000
I joined the International Neural Network Society (INNS) as a member
in June. Though they accepted me as a member and sent me a temporary
membership card, I haven't received anything from them since, particularly
their journal Neural Networks. Is anybody out there in a similar position??
What is happening inside INNS?? Any leads to solve the mystery?? A couple
of my friends are in the same boat.
------------------------------
Subject: Re: INNS membership
From: dickey@ssc-vax.UUCP (Frederick J Dickey)
Date: 30 Sep 88 13:11:00 +0000
> I joined as a member of International Neural Network Society
> (INNS) in June. Though they accepted me as a member and sent me a
> temporary membership card, I haven't got anything from them,
> particularly their journal Neural Networks. Is anybody out there in a
> similar position??
I joined several months ago and got a temp membership card plus I've been
getting the journal. Perhaps you haven't waited long enough for something
to happen.
------------------------------
Subject: Parsimonious error metric
From: al@gtx.com (Alan Filipski)
Date: 28 Sep 88 20:46:59 +0000
At the ICNN earlier this year in San Diego, Rumelhart gave a talk in which
he discussed the brilliant idea of incorporating a measure of the
size/complexity of a net into the error criterion being minimized. The
back-prop procedure would thus tend to seek out smaller net configurations
as well as more accurate ones.
I thought this was the most memorable talk of the conference.
Unfortunately, I did not copy down his formulas for the updating rules
under this criterion. I thought I could look them up later-- but alas,
they do not seem to be in the proceedings. Can anyone give a reference to
a paper that covers this technique and discusses the results of his
experiments?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
( Alan Filipski, GTX Corp, 8836 N. 23rd Avenue, Phoenix, Arizona 85021, USA )
( {allegra,decvax,hplabs,amdahl,nsc}!sun!sunburn!gtx!al (602)870-1696 )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[[ I, too, saw his talk and a much earlier one where he was just starting
to develop these ideas. I certainly hope he publishes soon; I didn't make
enough notes, either. Rumelhart's ideas are deceptively simple. He made a
list of measures of network complexity (e.g., number of connections, number
of nodes, etc.) and then let the network itself minimize the parameter or
combination thereof. -PM]]
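[[ Since the update rules from the talk are not in the proceedings,
the following is only a guess at the flavor of the idea. One version
that Rumelhart's group later wrote up (Weigend, Rumelhart & Huberman's
"weight elimination") adds a weight-cost term, sum over i of
(w_i/w0)^2 / (1 + (w_i/w0)^2), to the error being minimized, so that
gradient descent prunes near-zero weights while large weights saturate
toward a fixed unit cost. The names, the scale w0, and the penalty
strength lambda in this Python sketch are illustrative, not the
formulas from the talk. -PM ]]

```python
import numpy as np

def complexity_penalty(w, w0=1.0):
    """Weight cost sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2).
    Near-zero weights cost ~w^2 (pushing them to exactly zero), while
    large weights saturate toward a cost of 1, so the penalty
    effectively counts the significant connections in the net."""
    s = (w / w0) ** 2
    return np.sum(s / (1.0 + s))

def penalty_gradient(w, w0=1.0):
    """Derivative of the penalty: (2*w/w0**2) / (1 + (w/w0)**2)**2."""
    s = (w / w0) ** 2
    return (2.0 * w / w0**2) / (1.0 + s) ** 2

def update(w, grad_error, lam=0.01, lr=0.1):
    """One descent step on (error + lam * penalty): the ordinary
    back-prop gradient plus lambda times the penalty gradient, so
    training trades accuracy against network size."""
    return w - lr * (grad_error + lam * penalty_gradient(w))
```

With `grad_error` set to zero, repeated updates shrink small weights
toward zero while barely touching large ones, which is how the net
"seeks out smaller configurations" without a discrete pruning step.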
------------------------------
Subject: NIPS conference registration
From: kruschke@cogsci.berkeley.edu (John Kruschke)
Date: Thu, 29 Sep 88 18:10:30 -0700
I'm very interested in attending the NIPS conference in Denver at the end
of November. Must one register in advance, and if so, how much does it
cost and to whom do we write? If not, are crashers welcome or merely
tolerated? Important: Are there student rates?
If you don't have that info. handy, perhaps someone on the mail net can
fill us in.
Thanks. --John.
[[ I don't have the information. Could one of you readers help John out? -PM]]
------------------------------
Subject: BrainMaker
From: stein@premise.ZONE1.COM (Rich Epstein)
Date: 30 Sep 88 01:52:10 +0000
I would like to hear from anybody who has used BrainMaker by
California Scientific Software. (It's a Neural Network Simulation
System for the IBM PC). I have literature from the company and
would like to get some user experiences. Please e-mail and
I will post a summary of responses.
Thanks,
--
Richard W. Epstein, Robin Computing, Inc
(guest at Premise)
------------------------------
Subject: Re: Excalibur's Savvy
From: tomh@proxftl.UUCP (Tom Holroyd)
Date: 30 Sep 88 14:14:40 +0000
In article <9332@swan.ulowell.edu> sbrunnoc@hawk.ulowell.edu (Sean Brunnock) writes:
> Having attended the INNS conference and tried out many of the exhibitors'
>products firsthand, I am not impressed with what is commercially available
>in terms of neural network products and programs.
>
> I believe that most of the products at the conference can be categorized
>into three groups: books, programs to allow people to build neural networks,
>and handwriting analyzers.
Hmm. I sort of liked Excalibur's Savvy system that did recognition of
photo-id badges in real time.
There were a lot of books there. There were also many systems designed to
help others design NN products. But there was a lot more there than just
handwriting recognition. Several other types of recognition were featured.
Process control was also common. Robotics experts there said that NN's
would play a major role in industry in the coming years.
>But Nestor's handwriting analyzer, after five days of training
>by various attendees of the conference, gave me a response of "re11o"
>after I wrote "hello".
Like you said, after five days of training. Some of that training included
random garbage. I watched a guy train it to respond to smiley faces, some
with triangular heads. Correctly trained systems are being readied for
real-world chores like recognizing the handwritten numbers on checks
and credit card receipts (Nestor, BancTec, AmEx).
There are already several mortgage and loan analysis programs on the market
(Nestor, Adaptive Decision Systems).
What I came away with was a healthy respect for the speed with which many
groups have gone to market, and the wide range of applications.
Building a bee will probably require special parallel hardware, though.
Tom Holroyd
UUCP: {uflorida,uunet}!novavax!proxftl!tomh
------------------------------
Subject: RE: TEMPORAL DOMAIN IN VISION
Date: Fri, 30 Sep 88 10:25:00 -0400
From: Richard A. Young (YOUNG@GMR.COM)
Re: temporal domain in vision
I take issue with two replies recently made to dmocsny@uceng.UC.EDU (daniel
mocsny) regarding the Science News article on the work of B. Richmond of NIMH
and L. Optican of the National Eye Institute on their multiplex filter model
for encoding data on neural spike trains:
L. Adrian Griffis (lag@cseg.uucp):
> I'm not an expert in this field, but this suggests to me that many of the
> special tricks that neurons of the eye employ may be attempts to overcome
> space limitations rather than to make other processing schemes possible.
James Wilbur Lewis ( jwl@ernie.Berkeley.EDU):
> Another disturbing loose end was the lack of discussion about how this
> information might be propagated across synapses...It's an interesting result,
> but I think they may have jumped the gun with the conclusion they drew.
Instead, I have a more positive view of Richmond and Optican's work after
reviewing their publications (see references at end), and talking with them
at the recent Neural Net meetings in Boston. I am impressed with their
approach and research. I think that the issue of temporal coding deserves
much more careful attention by vision and neural net researchers than it
has received over the years. Richmond and Optican have produced the first
hard published evidence I am aware of in the primate visual system that
temporal codes can carry meaningful information about visual form.
Their first set of papers dealt with the inferotemporal cortex, a high
level vision area (Richmond et al., 1987; Richmond and Optican, 1987;
Optican and Richmond, 1987). They developed a new technique using principal
component analysis of the neural spike density waveform that allowed them
to analyze the information content in the temporal patterns in a
quantifiable manner. Each waveform is expressed in terms of a few
coefficients -- the weights on the principal components. By looking at
these weights or "scores", it is much easier to see what aspects of the
stimulus might be associated with the temporal properties of the waveform
than has been previously possible. They used a set of 64 orthogonal
stimulus patterns (Walsh functions), each presented in a 400 msec
flash to awake fixating monkeys. Each stimulus was shown in two
contrast-reversed forms, for a total of 128 stimuli. They devised an
information theoretic measure which showed that "the amount of information
transmitted in the temporal modulation of the response was at least twice
that transmitted by the spike count" alone, which they say is a
conservative estimate. In other words, they could predict which stimuli
were present to a much better extent when the full temporal form of the
response was considered rather than just the total spike count recorded
during a trial. Their laboratory has since extended these experiments to
the visual cortex (Gawne et al., 1987) and the lateral geniculate nucleus
(McClurkin et al., 1988) and found similar evidence for temporal coding of
form.
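As a sketch of the principal-component step just described (this is my
own stand-in data and numpy code, not Richmond and Optican's actual
analysis or recordings), each trial's spike density waveform can be
reduced to a handful of scores on the leading principal components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 128 trials (one per stimulus), each a 64-sample
# "spike density waveform" (the real analyses used smoothed
# post-stimulus time histograms).
waveforms = rng.normal(size=(128, 64))

# Principal components of the mean-subtracted waveforms, obtained
# here via singular value decomposition.
mean_wave = waveforms.mean(axis=0)
centered = waveforms - mean_wave
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:3]              # first 3 principal components

# Each waveform is now summarized by 3 "scores" -- its projections
# onto those components -- instead of all 64 samples.
scores = centered @ components.T          # shape (128, 3)

# Reconstruction from the scores alone recovers each waveform up to
# the variance not captured by the first 3 components.
recon = mean_wave + scores @ components
```

It is these few scores per response, rather than the raw waveforms,
that can then be related to the stimulus (e.g., via an
information-theoretic measure comparing them against the bare spike
count).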
The concept of temporal coding in vision has been around a long time
(Troland, 1921), but primarily in the area of color vision. Unfortunately
the prevailing bias in biological vision has been against temporal coding
in general for many years. It has been difficult to obtain funding or even
get articles published on the subject. Richmond and Optican deserve much
credit for pursuing their research and publishing their data in the face of
such strong bias (as does Science News for reporting it). The conventional
view is that all neural information is spatially coded. Such models are
variants of the "doctrine of specific nerve energies" first postulated by
Mueller in the nineteenth century. This "labelled-line" hypothesis assumes
that the particular line activated carries the information. From an
engineering viewpoint, temporal coding allows for more information to be
carried along any single line. Such coding allows more efficient use of the
limited space available in the brain for axons compared to cell bodies
(most of the brain volume is white matter, not grey!). In terms of
biological plausibility, it seems to me that the burden of proof should be
on those who maintain that such codes would NOT be used by the brain.
Anyone who has recorded post-stimulus time histograms from neurons observes
the large variations in the temporal pattern of the responses that occur
with different stimuli. The "accepted view" is that such variations do not
encode stimulus characteristics but represent epiphenomena or noise. Hence
such patterns are typically ignored by researchers. Perhaps one difficulty
has been that there has not been a good technique to quantify the many
waveshape patterns that have been observed. It is indeed horrendously
difficult to try to sort the patterns out by eye -- particularly without
knowing what the significant features might be, if any. With the
application of the principal component technique to the pattern analysis
question, Richmond and Optican have made a significant advance, I believe
-- it is now possible to quantify such waveforms and relate their
properties to the stimulus in a simple manner.
The question raised by Lewis of whether the nervous system can actually
make use of such codes is a potentially rich area for research. Chung,
Raymond, and Lettvin (1970) have shown that branching at axonal nodes is an
effective mechanism for decoding temporal messages. Young (1977) was the
first to show that bypassing the receptors and inserting temporal codes
directly into a human nervous system could lead to visual perceptions that
were the same for a given code across different observers.
Work on temporal coding has potentially revolutionary implications for both
physiological and neural net research. As was noted at the INNS neural net
meeting in Boston, temporal coding has not yet been applied or even studied
by neural net researchers. Neural nets today can obviously change their
connection strengths -- but the temporal pattern of the signal on the
connecting lines is not used to represent or transmit information. If it
were, temporal coding methods would seem to offer potentially rich rewards
for increasing information processing capabilities in neural nets without
having to increase the number of neurons or their interconnections.
References
- -----------
Chung, S. H., Raymond, S. & Lettvin, J. Y. (1970) Multiple meaning in single
visual units. Brain Behav. Evol. 3, 72-101.
Gawne, T. J. , Richmond, B. J., & Optican, L. M. (1987) Striate cortex neurons
do not confound pattern, duration, and luminance, Abstr., Soc. for Neuroscience
McClurkin, J. W., Gawne, T. J., Richmond, B. J., Optican, L. M., & Robinson,
D. L. (1988) Lateral geniculate nucleus neurons in awake behaving primates:
I. Response to B&W 2-D patterns. Abstr., Soc. for Neuroscience.
Optican, L. M., & Richmond, B. J. (1987) Temporal encoding of two-dimensional
patterns by single units in primate inferior temporal cortex. III. Information
theoretic analysis. J. Neurophysiol., 57, 162-178.
Richmond, B. J., Optican, L. M., Podell, M., & Spitzer, H. (1987) Temporal
encoding of two-dimensional patterns by single units in primate inferior
temporal cortex. I. Response characteristics. J. Neurophysiol., 57, 132-146.
Richmond, B.J., & Optican, L. M. (1987) Temporal encoding of two-dimensional
patterns by single units in primate inferior temporal cortex. II.
Quantification of response waveform. J. Neurophysiol., 57, 147-161.
Richmond, B. J., Optican, L. M., & Gawne, T. J. (accepted) Neurons use multiple
messages encoded in temporally modulated spike trains to represent pictures.
Seeing, Contour, and Colour, ed. J. Kulikowski, Pergamon Press.
Richmond, B. J., Optican, L. M., & Gawne, T. J. (1987) Evidence of an intrinsic
code for pictures in striate cortex neurons, Abstr., Soc. for Neuroscience.
Troland, L. T. (1921) The enigma of color vision. Am. J. Physiol. Op. 2, 23-48.
Young, R. A. (1977) Some observations on temporal coding of color vision:
Psychophysical results. Vision Research, 17, 957-965.
------------------------------
End of Neurons Digest
*********************