Neuron Digest Sunday, 2 Feb 1992 Volume 9 : Issue 4
Today's Topics:
Administrivia - technical report etiquette
TR - PPNN: A Faster Learning and Better Generalizing Neural Net
TR - Teaching Arithmetic to a Neural Network
TR - Implementing Spatial Relations in Neural Nets
TR - Efficient Question Answering in a Hybrid System
TR - Identification And Control Of Nonlinear Systems
Preprints - Memory-neuron Networks
Contents of IJNS Vol. 2, issue 3
Announcement of paper on learning syntactic categories.
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (128.91.2.173). Back issues
requested by mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Administrivia - technical report etiquette
From: "Neuron-Digest Moderator, Peter Marvit" <neuron@cattell.psych.upenn.edu>
Date: Sun, 02 Feb 92 17:30:25 -0500
I'm about to (finally) publish a series of technical report
announcements. At the same time, I'd like to remind readers of Neuron
Digest of some common sense rules for requesting copies.
o Please check each announcement carefully for instructions.
Don't send requests to me.
o If you request hardcopies (paper) of reports, please limit
requests to only those publications which are truly interesting
to you. Don't ask for "one of everything." Remember, someone
must stuff those envelopes and pay for postage.
o For ftp archives, please connect during "off-hours" (i.e., *not*
in the middle of the day). Try to be aware of time zone
differences when calculating when to connect. Most ftp
archives are hosted on machines that have other uses; lots of
connects during the day slow the machine down and make the
local users unhappy.
Thanks for being considerate. Lots of excellent reading awaits you...
: Peter Marvit, Neuron Digest Moderator
: Courtesy of Psychology Department, University of Pennsylvania
: neuron-request@cattell.psych.upenn.edu
------------------------------
Subject: TR - PPNN: A Faster Learning and Better Generalizing Neural Net
From: Bo Xu <ITGT500%INDYCMS.BITNET@vma.cc.cmu.edu>
Date: Tue, 15 Oct 91 09:22:41 -0500
Following is the abstract of a paper accepted by IJCNN'91-SINGAPORE.
The main purpose of the paper is to attack the problems associated with
the original back-propagation neural nets (slow rate of convergence,
local minima, and inability to learn under certain preset criteria) from
an alternative viewpoint, the network topology, instead of the learning
algorithm and the response characteristics of the units. It is shown in
this paper that the topology is a very important factor limiting the
performance of back-propagation neural networks, besides the already
studied factors such as the learning algorithm and the unit
characteristics.
All comments are welcome.
PPNN: A Faster Learning and Better Generalizing Neural Net
Bo Xu
Indiana University
Liqing Zheng
Purdue University
Abstract----It is pointed out in this paper that the planar topology of
the current back-propagation neural network (BPNN) limits its ability to
overcome the slow convergence rate, local minima, and other problems
associated with BPNN. The parallel probabilistic neural network (PPNN),
which uses a new neural network topology called stereotopology, is
proposed to overcome these problems. The learning ability and the
generalization ability of BPNN and PPNN were compared on several
problems. The simulation results show that PPNN learned all kinds of
problems much faster than BPNN and also generalized better. Analysis
indicates that the faster, universal learnability of PPNN is due to the
parallel character of PPNN's stereotopology, and that the better
generalization ability comes from the probabilistic character of PPNN's
memory retrieval rule.
Bo Xu
Indiana University
itgt500@indycms.iupui.edu
------------------------------
Subject: TR - Teaching Arithmetic to a Neural Network
From: ANDERSON%BROWNCOG.BITNET@mitvma.mit.edu
Date: Mon, 21 Oct 91 14:46:00 -0500
Technical Report 91-3 available from:
Department of Cognitive and Linguistic Sciences
Box 1978, Brown University, Providence, RI 02912
A Study in Numerical Perversity:
Teaching Arithmetic to a Neural Network
James A. Anderson, Kathryn T. Spoehr, and David J. Bennett
Department of Cognitive and Linguistic Sciences
Box 1978
Brown University
Providence, RI 02912
Abstract
There are only a few hundred well-defined facts in
elementary arithmetic, but humans find them hard to learn and
hard to use. One reason for this difficulty is that the
structure of elementary arithmetic lends itself to severe
associative interference. If a neural network corresponds in
any sense to brain-style computation, then we should expect
similar difficulties teaching elementary arithmetic to a neural
network. We find that this expectation is borne out for a simple
network that was taught the multiplication tables. We can
enhance learning of arithmetic by forming a hybrid coding for
the representation of number that contains a powerful analog or
"sensory" component as well as a more abstract component. When
the simple network uses a hybrid representation, many of the
effects seen in human arithmetic learning are reproduced,
including overall error patterns and response time patterns for
false products. An extension of the arithmetic network is
capable of being flexibly programmed to correctly answer
questions involving terms such as "bigger" or "smaller."
Problems can be answered correctly, even if the particular
comparisons involved had not been learned previously. Such a
system is genuinely creative and flexible, though only in a
limited domain. It remains to be seen if the computational
limitations of this approach are coincident with the limitations
of human cognition.
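[A rough illustration of the hybrid coding described above: the Python
sketch below concatenates an analog "bar" of activity, whose position
tracks magnitude, with an arbitrary code standing in for the abstract
component. The field widths, bar overlap, and random symbolic codes are
illustrative assumptions, not the representation used in the report.]

import numpy as np

rng = np.random.default_rng(0)

ANALOG_UNITS = 40   # width of the "sensory" magnitude field (assumed)
SYMBOL_UNITS = 20   # width of the abstract code per digit (assumed)
BAR_WIDTH = 8       # overlap between neighbouring magnitudes (assumed)

# One fixed random +/-1 code per digit stands in for the abstract part.
symbol_codes = {n: rng.choice([-1.0, 1.0], SYMBOL_UNITS) for n in range(10)}

def hybrid_code(n):
    """Concatenate an analog 'bar' of activity (position tracks
    magnitude) with an arbitrary symbolic code."""
    analog = np.zeros(ANALOG_UNITS)
    start = int(n / 9 * (ANALOG_UNITS - BAR_WIDTH))
    analog[start:start + BAR_WIDTH] = 1.0  # close numbers share active units
    return np.concatenate([analog, symbol_codes[n]])

# Nearby numbers overlap in the analog field; distant ones barely do.
for a, b in [(3, 4), (3, 8)]:
    overlap = hybrid_code(a)[:ANALOG_UNITS] @ hybrid_code(b)[:ANALOG_UNITS]
    print(a, b, overlap)

[In a simple associative network, this kind of overlap between nearby
operands is the sort of similarity structure that can produce the
human-like error patterns mentioned in the abstract.]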
A version of this report will appear as a chapter in:
"Neural Networks for Knowledge Representation and Inference"
Edited by Daniel S. Levine and Manuel Aparicio, IV
To be published by
Lawrence Erlbaum Associates, Hillsdale, New Jersey
Copies can be obtained by sending an email message to:
LI700008@brownvm.BITNET
or to:
anderson@browncog.BITNET
------------------------------
Subject: TR - Implementing Spatial Relations in Neural Nets
From: zeiden@cs.wisc.edu
Date: Mon, 28 Oct 91 09:30:14 -0600
I have placed the following tech report in the NEUROPROSE ftp
archive at Ohio State, under the name zeidenberg.containment.ps.Z
Implementing Spatial Relations in Neural Nets: The Case of
Figure/Ground and Containment
Matthew Zeidenberg
zeiden@cs.wisc.edu
A neural network system that computes the relation of containment between
objects in a retina-like input array is described. This system is
multi-layer, and operates by recognizing and segmenting the objects in
the input to place them in separate arrays. The figure of each object,
that is, the set of all pixels on its perimeter or contained within it,
is computed using a method that involves a connectionist implementation
of a standard algorithm based on parity networks. These figures are
then used to compute containment relations
between the objects in the input.
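[For readers unfamiliar with the parity idea mentioned above, the plain,
serial Python sketch below shows the standard even-odd fill on which it
rests: a pixel is inside a closed contour if a scan across the row
crosses the contour an odd number of times. The array representation
and the rule for horizontal edges are simplifications of mine; the
report implements this computation with connectionist parity networks
rather than a serial scan.]

import numpy as np

def figure_from_outline(outline):
    """Even-odd (parity) fill: scan each row, flip an inside/outside
    flag at contour crossings, and mark pixels scanned while 'inside'
    as interior; perimeter pixels are kept as part of the figure.
    A contour pixel counts as a crossing only if the contour continues
    in the row below (the usual guard against horizontal edges)."""
    outline = np.asarray(outline, dtype=bool)
    below = np.vstack([outline[1:], np.zeros((1, outline.shape[1]), bool)])
    figure = outline.astype(int)
    for r in range(outline.shape[0]):
        inside = False
        for c in range(outline.shape[1]):
            if outline[r, c] and below[r, c]:
                inside = not inside        # genuine crossing of the contour
            elif inside and not outline[r, c]:
                figure[r, c] = 1           # interior pixel joins the figure
    return figure

square = np.zeros((7, 7), dtype=int)
square[1, 1:6] = square[5, 1:6] = 1        # top and bottom edges
square[1:6, 1] = square[1:6, 5] = 1        # left and right edges
print(figure_from_outline(square))         # outline plus filled interior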
ftp Instructions:
unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get zeidenberg.containment.ps.Z
ftp> quit
unix> uncompress zeidenberg.containment.ps.Z
unix> lpr zeidenberg.containment.ps (or other command to print
postscript)
------------------------------
Subject: TR - Efficient Question Answering in a Hybrid System
From: Joachim Diederich <joachim@gmdzi.gmd.de>
Date: Tue, 29 Oct 91 16:57:47 -0100
The following paper has been placed in the Neuroprose archives at
Ohio State. The file is "diederich.hybrid.ps.Z." See ftp in-
structions below.
Efficient Question Answering in a Hybrid System
Joachim Diederich (1,2) & Debra L. Long (2)
(1) German National Research Center for Computer Science (GMD)
Schloss Birlinghoven, P.O. Box 1240
D-5205 St.Augustin 1, Germany
(2) Department of Psychology
University of California, Davis
Davis, CA 95616, U.S.A.
ABSTRACT:
A connectionist model for answering open-class questions in the
context of text processing is presented. The system answers
questions from different question categories, such as "How," "Why,"
and "Consequence" questions. These question categories have been
identified in several empirical studies (Graesser & Clark, 1985;
Graesser, 1990). The system responds to a question by generating a
set of possible answers that are weighted according to their
plausibility. Search is performed by means of a massively parallel,
directed spreading activation process. The search process operates
on several knowledge sources (i.e., connectionist networks) that are
learned or explicitly built in. Spreading activation involves the
use of signature messages (Lange & Dyer, 1989). Signature messages
are numeric values that are propagated throughout the networks and
identify a particular question category (this makes the system
hybrid). Binder units that gate the flow of activation between
textual units receive these signatures and change their states.
That is, the binder units either block the spread of activation or
allow the flow of activation in a certain direction. The process
results in a pattern of activation that represents a set of
candidate answers based on available knowledge sources.
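[The toy Python sketch below may help make the signature/binder
mechanism concrete: activation spreads out from the questioned concept,
but a link passes activation only when the propagated signature matches
the binder guarding that link. The node names, link labels, signature
values, and decay factor are illustrative assumptions, not taken from
the paper.]

WHY, HOW, CONSEQUENCE = 1.0, 2.0, 3.0  # signatures for question types

# Directed links, each guarded by a binder that opens for one signature.
links = [
    ("flight delayed", "storm",         WHY),          # caused-by
    ("flight delayed", "rebook ticket", HOW),          # by-means-of
    ("flight delayed", "miss meeting",  CONSEQUENCE),  # leads-to
    ("miss meeting",   "reschedule",    CONSEQUENCE),
]

def answer(question_node, signature, steps=3, decay=0.8):
    """Spread activation from the questioned node; activation crosses a
    link only when the signature matches the link's binder unit."""
    activation = {question_node: 1.0}
    for _ in range(steps):
        updated = dict(activation)
        for src, dst, sig in links:
            if sig == signature and activation.get(src, 0.0) > 0.0:
                updated[dst] = max(updated.get(dst, 0.0),
                                   decay * activation[src])
        activation = updated
    # Candidate answers, weighted by how much activation reached them.
    return {n: a for n, a in activation.items() if n != question_node}

print(answer("flight delayed", WHY))          # only the causal link opens
print(answer("flight delayed", CONSEQUENCE))  # only the leads-to chain opens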
This paper will appear in the IJCNN-91 Singapore Proceedings.
unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get diederich.hybrid.ps.Z
ftp> quit
unix> uncompress diederich.hybrid.ps.Z
unix> lpr diederich.hybrid.ps
Joachim Diederich
German National Research Center for Computer Science (GMD)
P.O. Box 1240
D-5205 St. Augustin 1
Germany
e-mail: joachim@gmdzi.gmd.de
------------------------------
Subject: TR - Identification And Control Of Nonlinear Systems
From: Marios Polycarpou <polycarp@bode.usc.edu>
Date: Fri, 01 Nov 91 15:38:30 -0800
The following paper has been placed in the Neuroprose archives at
Ohio State. The file is "polycarpou.stability.ps.Z." See ftp in-
structions below.
IDENTIFICATION AND CONTROL OF NONLINEAR SYSTEMS
USING NEURAL NETWORK MODELS: DESIGN AND STABILITY ANALYSIS
Marios M. Polycarpou and Petros A. Ioannou
Department of Electrical Engineering - Systems
University of Southern California, MC-2563
Los Angeles, CA 90089-2563, U.S.A
Abstract:
The feasibility of applying neural network learning techniques in
problems of system identification and control has been demonstrated
through several empirical studies. These studies are based for the most
part on gradient techniques for deriving parameter adjustment laws. While
such schemes perform well in many cases, in general, problems arise in
attempting to prove stability of the overall system, or convergence of
the output error to zero.
This paper presents a stability theory approach to synthesizing and
analyzing identification and control schemes for nonlinear dynamical
systems using neural network models. The nonlinearities of the dynamical
system are assumed to be unknown and are modelled by neural network
architectures. Multilayer networks with sigmoidal activation functions
and radial basis function networks are the two types of neural network
models that are considered. These static network architectures are
combined with dynamical elements, in the form of stable filters, to
construct a type of recurrent network configuration which is shown to be
capable of approximating a large class of dynamical systems.
Identification schemes based on neural network models are developed using
two different techniques, namely, the Lyapunov synthesis approach and the
gradient method. Both identification schemes are shown to guarantee
stability, even in the presence of modelling errors. A novel network
architecture, referred to as dynamic radial basis function network, is
derived and shown to be useful in problems dealing with learning in
dynamic environments. For a class of nonlinear systems, a stable neural
network based control configuration is presented and analyzed.
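[As a much-simplified picture of the kind of scheme analyzed in the
paper, the Python sketch below identifies the unknown nonlinearity of a
scalar discrete-time plant with a radial basis function network trained
by the gradient method. The example plant, centre placement, and step
size are illustrative assumptions; the paper's contribution is the
stability analysis (including the Lyapunov-based design), which this
sketch does not reproduce.]

import numpy as np

rng = np.random.default_rng(1)

# Example plant: x(t+1) = 0.8*x(t) + f(x(t)) + u(t), with f unknown to
# the identifier (the plant itself is an assumption, not from the paper).
def plant(x, u):
    return 0.8 * x + 0.5 * np.sin(x) + u

# Static RBF approximator f_hat(x) = w . phi(x) with fixed Gaussian centres.
centres = np.linspace(-4.0, 4.0, 33)
width = 0.5

def phi(x):
    return np.exp(-((x - centres) ** 2) / (2.0 * width ** 2))

w = np.zeros_like(centres)
eta = 0.05                                  # gradient step size (assumed)

x = 0.0
for t in range(5000):
    u = rng.uniform(-1.0, 1.0)              # persistently exciting input
    x_next = plant(x, u)
    x_pred = 0.8 * x + w @ phi(x) + u       # series-parallel identifier
    error = x_next - x_pred
    w += eta * error * phi(x)               # gradient weight adjustment
    x = x_next

test = np.linspace(-2.0, 2.0, 5)
print(np.round(np.array([w @ phi(v) for v in test]), 3))  # learned f_hat
print(np.round(0.5 * np.sin(test), 3))                    # true f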
unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get polycarpou.stability.ps.Z
ftp> quit
unix> uncompress polycarpou.stability.ps.Z
unix> lpr polycarpou.stability.ps
Any comments are welcome!
Marios Polycarpou
e-mail: polycarp@bode.usc.edu
------------------------------
Subject: Preprints - Memory-neuron Networks
From: kunnikri@neuro.cs.gmr.com (K.P.Unnikrishnan CS/50)
Date: Sat, 02 Nov 91 19:59:28 -0500
The following (p)reprints are now available:
MEMORY-NEURON NETWORKS: A PROLEGOMENON
Pinaki Poddar
Tata Institute of Fundamental Research
Bombay 400005, India.
K. P. Unnikrishnan
General Motors Research Laboratories
Warren, MI 48090, USA.
ABSTRACT:
We present a feed-forward neural network architecture capable of
real-time prediction and recognition of temporal patterns. The network
contains memory neurons to store past network activations in an efficient
fashion. A learning rule that uses only locally available information is
developed for learning connection strengths and memory coefficients in
the network. With a set of learned memory coefficients, the network is
capable of implementing non-linear temporal predictors or recognizers of
the appropriate order. It allows compact feed-forward networks to solve
complex tasks that may require retention of temporal information of
finite or infinite length. The network is equivalent to
integrate-and-fire models with a set of fast and a set of slow synapses.
It can easily be implemented and trained using currently available
analog VLSI chips.
Experiments where networks are trained to recognize temporal sequences of
various lengths or predict future values of the Mackey-Glass time-series
are presented.
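[Before requesting the preprints, readers may find the Python sketch
below useful as a concrete picture of the forward pass of a layer with
memory neurons: each memory neuron keeps an exponentially weighted trace
of its unit's past outputs through a memory coefficient. The particular
recursion, weights, and layer sizes are assumptions inferred from the
abstract; the learning rule for the coefficients is not shown.]

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class MemoryNeuronLayer:
    """Ordinary units, each paired with a memory neuron holding an
    exponentially weighted trace of that unit's past activations:
    m(t) = alpha * x(t-1) + (1 - alpha) * m(t-1)."""

    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(0.0, 0.3, (n_out, n_in))  # weights from inputs
        self.V = rng.normal(0.0, 0.3, (n_out, n_in))  # weights from memory neurons
        self.alpha = np.full(n_in, 0.5)               # learnable memory coefficients
        self.m = np.zeros(n_in)                       # memory neuron states
        self.prev_x = np.zeros(n_in)

    def step(self, x):
        # Each memory neuron integrates the previous activation of its unit.
        self.m = self.alpha * self.prev_x + (1.0 - self.alpha) * self.m
        self.prev_x = np.asarray(x, dtype=float)
        return sigmoid(self.W @ self.prev_x + self.V @ self.m)

rng = np.random.default_rng(0)
layer = MemoryNeuronLayer(n_in=1, n_out=3, rng=rng)
for t in range(5):
    y = layer.step([np.sin(0.3 * t)])   # feed a simple temporal signal
print(y)                                # output now depends on past inputs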
===============================
NON-LINEAR PREDICTION OF SPEECH SIGNALS USING MEMORY-NEURON NETWORKS
Pinaki Poddar
K. P. Unnikrishnan
ABSTRACT:
....... Experiments where memory-neuron networks are trained to predict
speech waveforms and sequences of spectral frames are described.
Performance of the network in predicting time series, with minimal a
priori assumptions about their statistical properties, is shown to be
better than that of linear autoregressive models.
For copies, send e-mail to unni@neuro.cs.gmr.com. Also indicate if you
have interest in the simulation programs.
Unnikrishnan
------------------------------
Subject: Contents of IJNS Vol. 2, issue 3
From: CONNECT@nbivax.nbi.dk
Date: Mon, 04 Nov 91 09:22:00 +0100
INTERNATIONAL JOURNAL OF NEURAL SYSTEMS
The International Journal of Neural Systems is a quarterly journal
which covers information processing in natural and artificial neural
systems. It publishes original contributions on all aspects of this
broad subject which involves physics, biology, psychology, computer
science and engineering. Contributions include research papers, reviews
and short communications. The journal presents a fresh, undogmatic
attitude towards this multidisciplinary field, with the aim of being a
forum for novel ideas and improved understanding of collective and
cooperative phenomena with computational capabilities.
ISSN: 0129-0657 (IJNS)
=----------------------------------
Contents of Volume 2, issue number 3 (1991):
1. D.G. Stork:
Sources of neural structure in speech and language processing.
2. L. Xu, A. Krzyzak and E. Oja:
Neural nets for the dual subspace pattern recognition method.
3. P.J. Zwietering, E.H.L. Aarts and J. Wessels:
The design and complexity of exact multi-layered perceptrons.
4. M.M. van Hulle:
A goal programming network for mixed integer linear programming:
A case study for the job-shop scheduling problem.
5. J-X. Wu and C. Chan:
A three-layered adaptive network for pattern density estimation
and classification.
6. L. Garrido and V. Gaitan:
Use of neural nets to measure the tau-polarisation and its
Bayesian interpretation.
7. C.M. Bishop:
A fast procedure for retraining the multilayer perceptrons.
8. V. Menon and D.S. Tang:
Population oscillations in neuronal groups.
9. V. Rodrigues and J. Skrzypek:
Combining similarities and dissimilarities in supervised learning.
=----------------------------------
Editorial board:
B. Lautrup (Niels Bohr Institute, Denmark) (Editor-in-charge)
S. Brunak (Technical Univ. of Denmark) (Assistant Editor-in-Charge)
D. Stork (Stanford) (Book review editor)
Associate editors:
B. Baird (Berkeley)
D. Ballard (University of Rochester)
E. Baum (NEC Research Institute)
S. Bjornsson (University of Iceland)
J. M. Bower (CalTech)
S. S. Chen (University of North Carolina)
R. Eckmiller (University of Dusseldorf)
J. L. Elman (University of California, San Diego)
M. V. Feigelman (Landau Institute for Theoretical Physics)
F. Fogelman-Soulie (Paris)
K. Fukushima (Osaka University)
A. Gjedde (Montreal Neurological Institute)
S. Grillner (Nobel Institute for Neurophysiology, Stockholm)
T. Gulliksen (University of Oslo)
D. Hammerstrom (Oregon Graduate Institute)
D. Horn (Tel Aviv University)
J. Hounsgaard (University of Copenhagen)
B. A. Huberman (XEROX PARC)
L. B. Ioffe (Landau Institute for Theoretical Physics)
P. I. M. Johannesma (Katholieke Univ. Nijmegen)
M. Jordan (MIT)
G. Josin (Neural Systems Inc.)
I. Kanter (Princeton University)
J. H. Kaas (Vanderbilt University)
A. Lansner (Royal Institute of Technology, Stockholm)
A. Lapedes (Los Alamos)
B. McWhinney (Carnegie-Mellon University)
M. Mezard (Ecole Normale Superieure, Paris)
J. Moody (Yale, USA)
A. F. Murray (University of Edinburgh)
J. P. Nadal (Ecole Normale Superieure, Paris)
E. Oja (Lappeenranta University of Technology, Finland)
N. Parga (Centro Atomico Bariloche, Argentina)
S. Patarnello (IBM ECSEC, Italy)
P. Peretto (Centre d'Etudes Nucleaires de Grenoble)
C. Peterson (University of Lund)
K. Plunkett (University of Aarhus)
S. A. Solla (AT&T Bell Labs)
M. A. Virasoro (University of Rome)
D. J. Wallace (University of Edinburgh)
D. Zipser (University of California, San Diego)
=----------------------------------
CALL FOR PAPERS
Original contributions consistent with the scope of the journal are
welcome. Complete instructions as well as sample copies and
subscription information are available from
The Editorial Secretariat, IJNS
World Scientific Publishing Co. Pte. Ltd.
73, Lynton Mead, Totteridge
London N20 8DH
ENGLAND
Telephone: (44)81-446-2461
or
World Scientific Publishing Co. Inc.
Suite 1B
1060 Main Street
River Edge
New Jersey 07661
USA
Telephone: (1)201-487-9655
or
World Scientific Publishing Co. Pte. Ltd.
Farrer Road, P. O. Box 128
SINGAPORE 9128
Telephone (65)382-5663
------------------------------
Subject: Announcement of paper on learning syntactic categories.
From: Steve Finch <steve@cogsci.edinburgh.ac.uk>
Date: Fri, 08 Nov 91 18:34:00 +0000
I have submitted a copy of a paper Nick Chater and I have written to
the neuroprose archive. It describes a hybrid system comprising a
statistically motivated network and a symbolic clustering mechanism,
which together automatically classify words into a syntactic hierarchy
by imposing a similarity metric over the contexts in which they are
observed to have occurred in USENET newsgroup articles. The resulting
categories are linguistically very intuitive.
The abstract follows:
Symbolic and neural network architectures differ with respect to the
representations they naturally handle. Typically, symbolic systems use
trees, DAGs, lists and so on, whereas networks typically use high
dimensional vector spaces. Network learning methods may therefore appear
to be inappropriate in domains, such as natural language, which are
naturally modelled using symbolic methods. One reaction is to argue that
network methods are able to {\it implicitly} capture this symbolic
structure, thus obviating the need for explicit symbolic representation.
However, we argue that the {\it explicit} representation of symbolic
structure is an important goal, and can be learned using a hybrid
approach, in which statistical structure extracted by a network is
transformed into a symbolic representation. We apply this approach at
several levels of linguistic structure, using as input unlabelled
orthographic, phonological and word-level strings. We derive
linguistically interesting categories such as `noun', `verb',
`preposition', and so on from unlabelled text.
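[The overall recipe in the abstract (collect the contexts in which each
word occurs, impose a similarity metric over the resulting context
profiles, and cluster the words hierarchically) can be sketched in a few
lines of Python. The toy corpus, the one-word window, and the use of
scipy's agglomerative clustering are illustrative assumptions; the
authors' network and clustering mechanism are considerably more
elaborate and are trained on USENET text.]

import numpy as np
from collections import Counter
from scipy.cluster.hierarchy import linkage, dendrogram

# A tiny stand-in corpus; the real system used USENET newsgroup articles.
corpus = ("the dog chased the cat . a cat saw a dog . "
          "the dog ate food . a cat ate food .").split()

targets = ["dog", "cat", "food", "chased", "saw", "ate", "the", "a"]
vocab = sorted(set(corpus))

def context_vector(word, window=1):
    """Count the words appearing within +/- window positions of each
    occurrence of 'word'; the counts form its context profile."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            start, stop = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(start, stop):
                if j != i:
                    counts[corpus[j]] += 1
    return np.array([counts[v] for v in vocab], dtype=float)

# Normalised context profiles, compared here with cosine distance
# (the similarity metric itself is an assumption of this sketch).
profiles = np.array([context_vector(w) for w in targets])
profiles /= profiles.sum(axis=1, keepdims=True)

tree = linkage(profiles, method="average", metric="cosine")
print(dendrogram(tree, labels=targets, no_plot=True)["ivl"])  # leaf order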
To get it by anonymous ftp type
ftp archive.cis.ohio-state.edu
when asked for login name type anonymous; when asked for password type
neuron.
Then type
cd pub/neuroprose
binary
get finch.hybrid.ps.Z
quit
Then uncompress it and lpr it.
Steven Finch Phone: +44 31 650 4435 | University of Edinburgh
UUCP: ...!uunet!mcvax!ukc!its63b!cogsci!steve | Centre for Cognitive Science
ARPA: steve%cogsci.ed.ac.uk@nsfnet-relay.ac.uk | 2 Buccleuch Place
JANET: steve@uk.ac.ed.cogsci | Edinburgh EH8 9LW
------------------------------
End of Neuron Digest [Volume 9 Issue 4]
***************************************