Neuron Digest Thursday, 26 Apr 1990 Volume 6 : Issue 27
Today's Topics:
Re: Neuron Digest V6 #26
Answers, Vol6 Issue 26
Re: Is there a clock in the brain?
Re: Is there a clock in the brain?
Temporal Pulse Coding
Re: Temporal Pulse Coding
Re: Temporal Pulse Coding
Student Society Update
Tech Reports Available
EMCSR 1990
TRs available
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).
------------------------------------------------------------
Subject: Re: Neuron Digest V6 #26
From: ohare@itd.nrl.navy.mil (John O'Hare)
Date: Sun, 15 Apr 90 14:05:48 -0400
1. Re the query of George Bolt on NADC.
2. The addressee would be: Dr. Mien Wann
Code 703
Naval Air Development Center
Warminster, PA 18974 USA
(215) 441-2561 (not sure about extension)
------------------------------
Subject: Answers, Vol6 Issue 26
From: J. P. Letellier <jp@radar.nrl.navy.mil>
Date: Mon, 16 Apr 90 17:39:24 -0500
Here are two answers to questions in the last digest...
> Neuron Digest Friday, 13 Apr 1990
> Volume 6 : Issue 26
> Subject: EE Times reference
>
> Anyone having information about the publisher's address and EE Times'
> subscription rates ?
EE Times is a CMP publication, 600 Community Drive, Manhasset, NY 11030;
a subscription costs $185 in Europe via airmail.
> Subject: NADC ??
>
> Does anyone have this paper or know where I can get hold of it. In
> particular, what is the "NADC", and where is it?
NADC is the Naval Air Development Center, in Warminster, PA, outside
Philadelphia. You might try writing to "postmaster@nadc.mil" to find
someone.
------------------------------
Subject: Re: Is there a clock in the brain?
From: K. Mark Alexander <kma@SAMSON.cadr.dialnet.symbolics.com>
Date: Fri, 13 Apr 90 21:16:00 -0700
[[ Regarding searching for a brain clock and, positing its existence, its
programmability ]]
Musicians would be a good source for these sorts of experiments.
However, I suspect that you would need to find ones that are very
disciplined. So many of us just go by feeling and find it hard to
"synchronize" to precise rythmns. Case in point, try recording someone
playing the piano, be sure to tell them to play as precisely as possible.
Have him play to the beat of a metronome (but don't record the
metronome). Then find a drummer who is willing to try to play to that
recording. You'll find out real quick how "good" both of these musicians
are. See how many times the drummer needs to listen to the tape before
he is successful at synchronizing with it.
Some other thoughts: how about the kids who play these fast-paced video
games? It seems to me that they are developing their "short-term clock".
Have you ever set your alarm clock to go off at 6am and found that you
woke up at 5:59 so you didn't get blasted by the alarm?
And why just the human brain? There are many examples all throughout
nature of consistent timing-cycles.
------------------------------
Subject: Re: Is there a clock in the brain?
From: "Neuron-Digest Moderator Peter Marvit" <neuron@hplpm.hpl.hp.com>
Date: Thu, 26 Apr 90 00:06:29 -0700
The subject of time and its computational models is too tempting to let
pass. However, a few responses to particulars...
Regarding the original poster's experience of apparently timing 5 minutes
to 1/10 of a second... Aside from the incident possibly growing with the
telling, controlled experiments have shown several striking features of
our perception of long times (e.g., minutes): subjects who know they're
in a time perception task perform very differently than if time
perception is incidental, there is *large* individual variation between
subjects, and there is often large variation within a subject. Hmmm, no
surprises here.
> * Is there a clock in the human brain? I mean a clock which behaves
> like a counter/timer peripheral as it would for a CPU.
So far, there is *no* evidence of a single clock for short or medium time
interval perception. The biological mechanism for longer durations
(diurnal, annual, etc.) is also unclear, though external cues seem to have
very strong regulating effects. Some researchers have postulated banks
of oscillator neurons which are used to measure time. Endogenous bursters
are used for some muscle movements (e.g., cardiac or stomatogastric), but
whether we can use them for cognitive processing seems far from certain.
In fact, much work has shown that we may have a hierarchical system of
"clocks", though the lower limit of resolution is quite broad. Although
cognitive psychologists have postulated some of these "boxes", I know of
no one who has postulated the biological foundation. Closer to this
Digest's subject, I'm not aware of *any* artificial neural networks which
explicitly code in time as part of the signal information. Perhaps
someone can enlighten me.
> * Can you *program* a clock into your brain?
Someone suggested checking with drummers or other musicians. Remember,
however, there is considerable difference between perception and
production. My own (unpublished) studies show that we just can't detect
differences in rhythms below 30 msec. Trained musicians vary their
"regular beat" by as much as 50 msec. We can certainly train our muscles
to act reasonably precisely, but it appears we have definite physical
and cognitive limits of perception.
While inventive, the original poster described some experiments which
appeared to include no statistical (or even raw) data, poor controls, and
poor design. I refer him to articles by Paul Fraisse (his review in
"Psychology of Music" D. Deutsch, editor, for example), Stephen Handel
(any number in the journal "Perception and Psychophysics"), or Povel and
Essens (in the journal "Music Perception") as good places to start.
Careful readers will take the original poster's conclusions with an
imperial gallon of salt. On the other hand, if he knows of evidence
supporting a 100 Hz central clock, I look forward to reading it!
Psychophysical measurements are important, since they will dictate the
resolution of the underlying biological networks. How can we construct
connectionist models which will act the same way? Currently, artificial
neural networks go "ker-chunk" all at once. Each epoch updates all the
weights or sends signals through a single layer or .... We need that type
of local determinism in our models to understand the microscopic
behaviour and ease the actual simulation. Can we retain this simplicity
of execution and still capture the essence of time coding? How can we
represent the time course of a signal, as well as its information
content, using simple neurons? I don't think we have cells which "count",
although we certainly have cells which "sum"; do sums of sums hold the
time information?
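As a toy illustration of that last question (entirely my own invention,
with made-up constants), consider a leaky-integrator unit whose final
state depends on when its inputs arrive, not just on their sum:

# A leaky integrator: each step the old state decays, then input adds.
def leaky_integrate(inputs, decay=0.8):
    state = 0.0
    for x in inputs:
        state = decay * state + x  # older inputs fade; recent ones dominate
    return state

# Two input trains with the same total (2.0) but different timing:
early = [1.0, 1.0, 0.0, 0.0, 0.0]
late  = [0.0, 0.0, 0.0, 1.0, 1.0]
print(leaky_integrate(early))  # ~0.92: the early inputs have decayed away
print(leaky_integrate(late))   # ~1.80: the late inputs are still strong

A plain sum reports 2.0 for both trains; the decaying sum separates them,
so at least coarse timing can survive a "sum of sums".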
Hmmm, I'm ambling into wild speculation and I need help. Apologies for
lack of specific references. If there is interest, I'll dig up a real
bibliography on the perception of time.
-Peter "time out" Marvit
------------------------------
Subject: Temporal Pulse Coding
From: reynolds@bucasd.bu.edu (John Huntington Reynolds)
Organization: Boston University Center for Adaptive Systems
Date: 21 Apr 90 18:35:03 +0000
I'm very interested in "multiple meaning" theories (e.g. Raymond and
Lettvin, and now Optican and Richmond), the informational role that
conduction blocks in axon arbors might play, and the function of
temporally modulated pulse codes in general.
I'm writing in order to gather references to related work. I'm really
just getting my feet wet at this point -- I joined Steve Grossberg's
Cognitive and Neural Systems program as a PhD student in September, and
with courses and my R.A. work I've been too snowed under to really pursue
these interests very fully.
Work in temporal pulse encoding that I am aware of includes:
Chung, Raymond, and Lettvin (1970) Multiple meanings in single visual
units. Brain Behavior and Evolution 3:72-101.
Gray, Konig, Engel, and Singer (1989) Oscillatory Responses in Cat
	Visual Cortex Exhibit Inter-Columnar Synchronization Which
Reflects Global Stimulus Properties. Nature Vol. 338, March
1989.
Optican, Podell, Richmond, and Spitzer (1987) Temporal Encoding of
Two-Dimensional Patterns by Single Units in Primate Inferior
Temporal Cortex. (three part series) Journal of Neurophysiology.
Vol 57, No 1, January 1987.
Pratt, Gill (1990) Pulse Computation. PhD Thesis. MIT, January, 1990.
Raymond and Lettvin (1978) Aftereffects of activity in
peripheral axons as a clue to nervous coding. In: Physiology
and Pathobiology of Axons. SG Waxman, ed. Raven Press, New York.
Richmond, Optican, and Gawne (1990) Neurons Use Multiple Messages
Encoded in Temporally Modulated Spike Trains to Represent
Pictures. Preprint of a chapter in Seeing Contour and Color
ed. J. Kulikowski, Pergamon Press.
... and a lot of work that has been done in the area of temporal coding
in the auditory nerve and cochlear nucleus (average localized synchrony
response (ALSR) coding).
I've finally reached a (brief) lull in my activities here, and I'd
appreciate any advice you'd care to offer.
--thanks in advance, John Reynolds
------------------------------
Subject: Re: Temporal Pulse Coding
From: ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date: 22 Apr 90 19:44:23 +0000
Modulated pulse codes represent a very interesting way of implementing
neural networks in silicon. At IJCNN '90 I saw a neural chip that
implemented multiplication with AND gates and addition with OR gates,
operating on signals set up as random bit encodings of a given magnitude.
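For those who haven't seen the trick, here is a rough Python sketch of
the bit-stream arithmetic as I understood it from the talk; this is my
reconstruction, not the actual chip design:

import random

# A magnitude p in [0,1] is encoded as a stream of random bits that are
# 1 with probability p; gate logic then does arithmetic on the streams.
def encode(p, n):
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(bits):
    return sum(bits) / len(bits)

n = 10000
a, b = encode(0.6, n), encode(0.5, n)
print(decode([x & y for x, y in zip(a, b)]))  # AND ~ product: ~0.30
print(decode([x | y for x, y in zip(a, b)]))  # OR ~ bounded sum: ~0.80

Note that the OR gate really computes p + q - pq, so it only
approximates addition when the operands are small.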
Current-based methods of neural net implementation seem very promising
right now, and current pulsing would represent a low duty-cycle method of
implementation (which would reduce power problems).
You might want to get someone at B.U. to get you hooked up with the
connectionist mailing list which is only open to those doing active
neural net research. A lot more scientific information can be gleaned
from the list.
-Tom
------------------------------
Subject: Re: Temporal Pulse Coding
From: aboulang@bbn.com (Albert Boulanger)
Date: 23 Apr 90 18:07:13 +0000
Poppelbaum at the University of Illinois conducted a multi-year study of
stochastic and "burst" processing (which represents a hardware/
compute-time tradeoff, since one has to collect statistics over time). A
good book to look at is
"Stochastic and Deterministic Averaging Processors"
P. Mars & W.J. Poppelbaum,
Peter Peregrinus LTD (IEE), 1981
This book has several examples of real, worked-out prototypes, including
some hardware for stochastic learning automata. Stochastic learning
automata are one basis of Barto-Sutton-Klopf reinforcement learning.
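The compute-time side of that tradeoff is easy to see in simulation. In
this quick sketch (mine, with arbitrary parameters), the estimate of a
magnitude converges only as 1/sqrt(n), so each extra digit of accuracy
costs roughly a hundredfold more stream length:

import random

# Estimate a magnitude p from n bits of its stochastic stream.
def estimate(p, n):
    return sum(random.random() < p for _ in range(n)) / n

p = 0.7
for n in (100, 10000, 1000000):
    print(n, abs(estimate(p, n) - p))  # error shrinks roughly as 1/sqrt(n)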
Stochastically,
Albert Boulanger
BBN Systems & Technologies Corp.
aboulanger@bbn.com
------------------------------
Subject: Student Society Update
From: gaudiano@bucasb.bu.edu
Date: Wed, 25 Apr 90 23:04:07 -0400
Hello everyone,
We are very excited about the overwhelming response to our student
society. Our new name (International Student Society for Neural Networks,
or ISSNNet) reflects the large number of interested people from all over
the world. Over 400 people requested a copy of our first newsletter,
almost half from outside the U.S. If you sent a request before
April 10 but still have not received the newsletter, or if you have any
other questions, please send a message to:
issnnet@bucasb.bu.edu
We have begun receiving membership requests (only $5 for the year), and
some official donations. Please remember that we will only continue to
send out future issues of the newsletter and other information to
official members, so send us your membership form as soon as you can!
Also, although we realize it was not clearly stated in the newsletter,
YOU DON'T HAVE TO BE A STUDENT TO JOIN! We have many activities and
programs that will be useful to everyone, and your non-student
memberships will show your support for students.
If you are going to IJCNN in San Diego or to INNC in Paris, come visit
our booth. We will have T-shirts, newsletters, and details of our other
events. We will also have an official ISSNNet meeting/social event at
IJCNN (more details later).
If you want to make donations or sponsor students presenting papers at NN
conferences, send e-mail to <issnnet@bucasb.bu.edu>. We are in the
process of becoming incorporated, and we should have our non-profit
status sometime this fall. We have provisions in our bylaws for a
flexible governing board to accommodate the international and dynamic
nature of our society. Get involved!
------------------------------
Subject: Tech Reports Available
From: Yeong-Ho Yu <yu@cs.utexas.edu>
Date: Tue, 10 Apr 90 05:38:22 -0500
The following two technical reports are available.
They will appear in the Proceedings of IJCNN '90.
----------------------------------------------------------------------
EXTRA OUTPUT BIASED LEARNING
Yeong-Ho Yu and Robert F. Simmons
AI Lab, The University of Texas at Austin
March 1990 AI90-128
ABSTRACT
One way to view feed-forward neural networks is to regard them as
mapping functions from the input space to the output space. In this
view, the immediate goal of back-propagation in training such a network
is to find a correct mapping function among the set of all possible
mapping functions of the given topology. However, finding a correct one
is sometimes not an easy task, especially when there are local minima.
Moreover, it is harder to train a network so that it produces correct
output not only for the training patterns but also for novel patterns the
network has never seen before. This so-called generalization capability
is poorly understood, and there is little guidance for achieving better
generalization. This paper presents a unified viewpoint for the
training and generalization of a feed-forward network, and a technique
for improved training and generalization based on this viewpoint.
------------------------------------------------------------------------
DESCENDING EPSILON IN BACK-PROPAGATION:
A TECHNIQUE FOR BETTER GENERALIZATION
Yeong-Ho Yu and Robert F. Simmons
AI Lab, The University of Texas at Austin
March 1990 AI90-130
ABSTRACT
There are two measures for the optimality of a trained feed-forward
network for the given training patterns. One is the global error
function which is the sum of squared differences between target outputs
and actual outputs over all output units of all training patterns. The
most popular training method, back-propagation based on the Generalized
Delta Rule, is to minimize the value of this function. In this method,
the smaller the global error is, the better the network is supposed to
be. The other measure is the correctness ratio which shows, when the
network's outputs are converted into binary outputs, for what percentage
of training patterns the network generates the correct binary outputs.
Actually, this is the measure that often really matters. This paper
argues that those two measures are not parallel and presents a technique
with which the back-propagation method results in a high correctness
ratio. The results show that networks trained with this technique
often exhibit high correctness ratios not only for the training patterns
but also for novel patterns.
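[[ Moderator's note: a minimal sketch, on made-up outputs, of the two
measures the abstract contrasts; the numbers and names are mine, not the
authors'. ]]

import numpy as np

targets = np.array([1.0, 0.0, 1.0, 0.0])
outputs = np.array([0.6, 0.4, 0.9, 0.1])  # hypothetical trained-net outputs

# Global error: sum of squared differences over all outputs and patterns.
print(np.sum((targets - outputs) ** 2))             # 0.34, nonzero...

# Correctness ratio: fraction correct after thresholding at 0.5.
print(np.mean((outputs > 0.5) == (targets > 0.5)))  # 1.0, all correct

[[ The global error is nonzero while every pattern is already classified
correctly, so the two measures clearly need not move together. ]]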
-----------------------------------------------------------------------
To obtain copies, either:
a)
unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): <ret>
ftp> cd pub/neuroprose
ftp> binary
ftp> get
(remote-file) yu.output-biased.ps.Z
(local-file) foo.ps.Z
ftp> get
(remote-file) yu.epsilon.ps.Z
(local-file) bar.ps.Z
ftp> quit
unix> uncompress foo.ps bar.ps
unix> lpr -P(your_local_postscript_printer) foo.ps bar.ps
b) If you have any problems accessing the directory above,
send a request to
yu@cs.utexas.edu
or
Yeong-Ho Yu
AI Lab
The University of Texas at Austin
Austin, TX 78712.
------------------------------
Subject: EMCSR 1990
From: Georg Dorffner <ai-vie!georg@relay.EU.net>
Date: Tue, 10 Apr 90 12:02:17 -0100
Announcement
Tenth European Meeting on Cybernetics and Systems Research
April 17-20, 1990
University of Vienna, Austria
Symposium L:
Parallel Distributed Processing in Humans and Machines
Chairs: David Touretzky (Carnegie Mellon)
Georg Dorffner (Vienna)
The following papers will be presented:
Tuesday afternoon (Apr. 17)
INVITED LECTURE:
A Computational Basis for Phonology
D. Touretzky, USA
On the Neural Connectance-Performance Relationship
G. Barna, P. Erdi, Hungary
Quasi-Optimized Learning Dynamics in Sparsely Connected Neural
Network Models
K.E. Kuerten, Germany
Memorization and Deleting in Linear Neural Networks
A. Petrosino, F. Savastano, R. Tagliaferri, Italy
A Memory-Based Connectionist Network for Speech Recognition
C.-C. Chen, Belgium
Meta-Parsing in Neural Networks
A. Nijholt, The Netherlands
Parallel Data Assimilation in Knowledge Networks
A. Parodi, S. Khouas, France
Wednesday morning (Apr. 18):
Preprocessing of Musical Information and Examples of
Applications for Neural Networks
G. Hipfinger, C. Linster, Austria
Symbolic Behavior and Code Generation: The Emergence of
"Equivalence Relations" in Neural Networks
G.D.A. Brown, M. Oaksford, United Kingdom
Connectionism and Unsupervised Knowledge Representation
I.M. Havel, Czechoslovakia
On Learning Content-Blind Rules
C. Mannes, G. Dorffner, Austria
- * -
The conference will include other symposia on the following
topics:
- General Systems Methodology
- Fuzzy Sets, Approximate Reasoning and Knowledge-Based Systems
- Designing and Systems
- Humanity, Architecture and Conceptualization
- Cybernetics in Biology and Medicine
- Cybernetics of Socio-Economic Systems
- Managing Change and Innovation
- Systems Engineering and Artificial Intelligence for Peace
Research
- Communication and Computers
- Software Development for Systems Theory
- Artificial Intelligence
- Impacts of Artificial Intelligence
- Panel on Organizational Cybernetics, National Development
Planning, and Large-Scale Social Experiments
- * -
Conference Fee: AS 2,900 (ca. $240, incl. proceedings),
NO FEE for students with valid ID!
The proceedings will also be available from World Scientific
Publishing Co. as "Cybernetics and Systems '90", R. Trappl (ed.).
Registration will be possible at the conference site (main
building of the University of Vienna). You can also contact:
EMCSR Conference Secretariat
Austrian Society for Cybernetic Studies
Schottengasse 3
A-1010 Vienna, Austria
Tel: +43 1 535 32 810
Fax: +43 1 63 06 52
Email: sek@ai-vie.uucp
------------------------------
Subject: TRs available
From: Jai Choi <jai@blake.acs.washington.edu>
Date: Fri, 13 Apr 90 21:19:33 -0700
==================================================================
Two Technical Notes Available
==================================================================
1. Query Learning Based on Boundary Search and Gradient
Computation of Trained Multilayer Perceptrons
Jenq-Neng Hwang, Jai J. Choi, Seho Oh, Robert J. Marks II
Interactive Systems Design Lab.
Department of Electrical Engr., FT-10
University of Washington
Seattle, WA 98195
****** Abstract *******
In many machine learning applications, the source of the training data
can be modeled as an oracle. An oracle has the ability, when presented
with an example (query), to give a correct classification. Efficient
query learning obtains good training data from the oracle at low cost.
This report presents a novel approach for query-based neural
network learning. Consider a layered perceptron partially trained for
binary classification. The single output neuron is trained to be either
a 0 or a 1. A test decision is made by thresholding the output at, say,
0.5. The set of inputs that produce an output of 0.5 forms the
classification boundary. We adopted an inversion algorithm for the
neural network that allows generation of this boundary. In addition, for
each boundary point, we can generate the classification gradient. The
gradient provides a useful measure of the sharpness of the
multi-dimensional decision surfaces. Using the boundary point and
gradient information, conjugate input pair locations are generated and
presented to an oracle for proper classification. This new data is used
to further refine the classification boundary thereby increasing the
classification accuracy. The result can be a significant reduction in
the training set cardinality in comparison with, for example, randomly
generated data points. An application example to power security
assessment is given.
(to be presented at IJCNN '90, San Diego)
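[[ Moderator's note: the inversion step, finding inputs that drive the
output to 0.5, can be sketched generically as gradient descent on the
input with the weights held fixed. The network below is a random
stand-in; this reconstructs the general idea, not the authors'
algorithm. ]]

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), rng.normal(size=4)  # "trained" weights
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                  # hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # single sigmoid output
    return y[0]

x = rng.normal(size=2)  # arbitrary starting input
eps = 1e-5
for _ in range(500):
    # numerical gradient of the output with respect to the input
    g = np.array([(forward(x + eps * np.eye(2)[i]) -
                   forward(x - eps * np.eye(2)[i])) / (2 * eps)
                  for i in range(2)])
    x -= 2.0 * (forward(x) - 0.5) * g  # descend on (y - 0.5)^2
print(forward(x))  # ~0.5: x now sits near the decision boundary
                   # (descent can stall if the sigmoid saturates)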
**********************************************************************
2. Iterative Constrained Inversion of Neural Networks and its Applications
Jenq-Neng Hwang, Chi H. Chan
****** Abstract ******
This report presents a new approach to solving constrained inverse
problems for a trained nonlinear mapping. These problems can be found in
a wide variety of applications in dynamic control of nonlinear systems
and nonlinear constrained optimization. The forward problem in a
nonlinear functional mapping is to obtain the best approximation of the
output vector given the input vector. The inverse problem, on the other
hand, is to obtain the best approximation of the input vector given a
specified output vector, i.e., to find the inverse function of the
nonlinear mapping, which might not exist unless constraints are
imposed. Most neural networks previously proposed for training the
inverse mapping adopted either a one-way constraint perturbation or a
two-stage learning scheme. Both of these approaches are laborious and
unreliable. Instead of using two neural networks for emulating the
forward and inverse mappings separately, we applied the network inversion
algorithm, which works directly on the network used to train the forward
mapping, yielding the inverse mapping. Our approach uses one network to
emulate both the forward and inverse nonlinear mappings without explicitly
characterizing and implementing the inverse mapping. Furthermore, our
single-network inversion approach allows us to iteratively locate the
optimal inverted solution that also satisfies the constraints imposed
on the inputs, and allows the best exploitation of the sensitivity
measure of the inputs to outputs in a nonlinear mapping.
(presented at the 24th Conf. on Information Sciences and Systems)
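[[ Moderator's note: a generic sketch of constrained inversion under my
own assumptions (a made-up network and a simple box constraint). It
reconstructs the idea of inverting the trained forward network directly;
it is not the authors' procedure. ]]

import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)  # "trained" weights
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)

def forward(x):
    return np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

y_star = np.array([0.3, -0.2])  # specified output to invert
x, eps = np.zeros(3), 1e-5
for _ in range(2000):
    g = np.zeros(3)  # numerical gradient of ||forward(x) - y_star||^2
    for i in range(3):
        e = eps * np.eye(3)[i]
        g[i] = (np.sum((forward(x + e) - y_star) ** 2) -
                np.sum((forward(x - e) - y_star) ** 2)) / (2 * eps)
    x = np.clip(x - 0.1 * g, -1.0, 1.0)  # gradient step, then project
print(forward(x), y_star)  # should nearly match if the target is feasible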
******** For copies of the above two TRs ************
Send your physical address to
Jai Choi
Dept. EE, FT-10
Univ. of Washington
Seattle, WA 98195
or "jai@blake.acs.washington.edu".
------------------------------
End of Neuron Digest [Volume 6 Issue 27]
****************************************