Neuron Digest	Saturday, 23 Mar 1991		Volume 7 : Issue 14 

Today's Topics:
Preliminary Information On A New Journal
Preprint - A Delay-Line Based Motion Detector Chip
TR - Connectionism and Developmental Theory
TR - Planning with an Adaptive World Model
Preprint on Texture Generation with the Random Neural Network
TR - Finite Precision Error Analysis of ANN H/W
TR - Quantization in ANNs
preprint - Discovering Viewpoint-Invariant Relationships
Tech Report available: NN module systems
tech report
Paper available on adaptive neural filtering
CRL TR-9101: The importance of starting small


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: PRELIMINARY INFORMATION ON A NEW JOURNAL
From: Emil Pelikan <CVS45%CSPGCS11@pucc.PRINCETON.EDU>
Date: Fri, 15 Feb 91 08:51:22 +0700

[[ Editor's Note: This came in all UPPER CASE. I edited it to make it
more readable. Any capitalization errors are therefore mine. -PM ]]

The Computerworld Co., Prague, Czechoslovakia, will introduce a new
scientific journal in 1991:

NEURAL NETWORK WORLD

(Neural and Massively-Parallel Computing and Information Systems).

The journal is expected to be published bi-monthly, with high
technical quality, and will present theoretical and
application-oriented scientific papers from the following fields:

Theory of neural networks, natural and artificial
Methods of neurocomputing
Biophysics and neuroscience
Synthesis and construction of neurocomputers
Mass-parallel information processing
Distributed and parallel computer systems
Applications of neurocomputing in sciences and engineering
Parallel artificial intelligence methods
Parallel-computing methods

For 1991, the following dominant topics will be emphasized in the
individual issues: neural networks in telecommunications,
neurocoprocessors, attention and brain function modeling,
massively-parallel computing methods, intercellular communication,
multilayer neural structures, and the learning and testing of neural
network models.

Editor-in-chief: Mirko Novak
Institute of Computer and Information Sciences
Pod Vodarenskou Vezi 2
182 07 Praha 8
CZECHOSLOVAKIA

Phone: /00422/ 8152080, 821639
Fax: /00422/ 8585789

Additional information: Emil Pelikan (CVS45 CSPGCS11)

------------------------------

Subject: Preprint - A Delay-Line Based Motion Detector Chip
From: John Lazzaro <lazzaro@sake.Colorado.EDU>
Date: Fri, 22 Feb 91 00:10:21 -0700


An announcement of a preprint on the neuroprose server ...


A Delay-Line Based Motion Detection Chip

Tim Horiuchi, John Lazzaro*, Andrew Moore, Christof Koch

CNS Program, Caltech and *Optoelectronics Center, CU Boulder


Abstract
--------

Inspired by a visual motion detection model for the rabbit retina and by
a computational architecture used for early audition in the barn owl, we
have designed a chip that employs a correlation model to report the
one-dimensional field motion of a scene in real time. Using subthreshold
analog VLSI techniques, we have fabricated and successfully tested an
8000-transistor chip using a standard MOSIS process.
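
As a rough software analogue of the correlation scheme mentioned in the
abstract, the Python fragment below multiplies each receptor's delayed
output with its neighbour's undelayed output and subtracts the
mirror-image term to obtain a direction-selective signal. It is only a
sketch of the general delay-and-correlate idea; the function and the
test stimulus are invented here and are not taken from the chip.

# delay-and-correlate motion detector, software sketch
import numpy as np

def correlator_motion(signal, delay=1):
    """signal: array of shape (time, receptor position).
    Returns a net motion signal per time step; positive values
    indicate motion toward higher receptor indices."""
    delayed = np.roll(signal, delay, axis=0)
    delayed[:delay] = 0.0                           # nothing before the delay line fills
    rightward = delayed[:, :-1] * signal[:, 1:]     # receptor i (delayed) vs i+1 (now)
    leftward  = delayed[:, 1:]  * signal[:, :-1]    # mirror-image pairing
    return (rightward - leftward).sum(axis=1)

# a bright dot sweeping toward higher indices should give a positive net output
n_steps, n_receptors = 48, 16
stim = np.zeros((n_steps, n_receptors))
for t in range(n_steps):
    stim[t, (t // 3) % n_receptors] = 1.0
print(correlator_motion(stim).sum() > 0)            # expected: True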



To retrieve ...

>ftp cheops.cis.ohio-state.edu
>Name (cheops.cis.ohio-state.edu:lazzaro): anonymous
>331 Guest login ok, send ident as password.
>Password: your_username
>230 Guest login ok, access restrictions apply.
>cd pub/neuroprose
>binary
>get horiuchi.motion.ps.Z
>quit
%uncompress horiuchi.motion.ps.Z
%lpr horiuchi.motion.ps


--jl

------------------------------

Subject: TR - Connectionism and Developmental Theory
From: Kim Plunkett <psykimp@aau.dk>
Date: Fri, 22 Feb 91 11:47:37 +0100

The following technical report is now available. For a copy,
email "psyklone@aau.dk" and include your ordinary mail address.

Kim Plunkett


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



Connectionism and Developmental Theory

Kim Plunkett and Chris Sinha
University of Aarhus, Denmark

Abstract

The main goal of this paper is to argue for an ``epigenetic
developmental interpretation'' of connectionist modelling of human
cognitive processes, and to propose that parallel distributed
processing (PDP) models provide a better account of developmental
phenomena than that offered by cognitivist (symbolic) computational
theories. After comparing some of the general characteristics of
epigeneticist and cognitivist theories, we provide a brief overview
of the operating principles underlying artificial neural networks
(ANNs) and their associated learning procedures. Four applications
of different PDP architectures to developmental phenomena are
described. First, we assess the current status of the debate between
symbolic and connectionist accounts of the process of English past
tense formation. Second, we introduce a connectionist model of
concept formation and vocabulary growth and show how it provides an
account of aspects of semantic development in early childhood. Next,
we take up the problem of compositionality and structure dependency
in connectionist nets, and demonstrate that PDP models can be
architecturally designed to capture the structural principles
characteristic of human cognition. Finally, we review a connectionist
model of cognitive development which yields stage-like behavioural
properties even though structural and input assumptions remain
constant throughout training. It is shown how the organisational
characteristics of the model provide a simple but precise account of
the equilibration of the processes of accommodation and assimilation.
The authors conclude that a coherent epigenetic-developmental
interpretation of PDP modelling requires the rejection of so-called
hybrid-architecture theories of human cognition.

------------------------------

Subject: TR - Planning with an Adaptive World Model
From: Sebastian Thrun <hplabs!gmdzi!st>
Date: Fri, 22 Feb 91 11:29:04 -0100

Technical Reports available:



Planning with an
Adaptive World Model

S. Thrun, K. Moeller, A. Linden

We present a new connectionist planning method. By interaction with an
unknown environment, a world model is progressively constructed using
gradient descent. For deriving optimal actions with respect to future
reinforcement, planning is applied in two steps: an experience network
proposes a plan which is subsequently optimized by gradient descent with
a chain of world models, so that an optimal reinforcement may be obtained
when it is actually run. The appropriateness of this method is
demonstrated by a robotics application and a pole balancing task.

(to appear in proceedings NIPS*90)
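
A minimal sketch of the two-step procedure described in the abstract,
under strong simplifying assumptions: a one-dimensional state, a fixed
differentiable world model standing in for the learned one, and a plan
refined by numerical gradient ascent on the predicted cumulative
reinforcement. The functions and constants below are invented for
illustration and are not the authors' implementation.

import numpy as np

def world_model(state, action):
    """Stand-in for the learned model: next state and immediate reinforcement."""
    next_state = 0.9 * state + action
    reward = -(next_state - 1.0) ** 2               # goal: drive the state to 1.0
    return next_state, reward

def plan_return(state, plan):
    """Chain world-model copies along the plan and sum the predicted reward."""
    total = 0.0
    for action in plan:
        state, r = world_model(state, action)
        total += r
    return total

def refine_plan(state0, plan, steps=100, lr=0.05, eps=1e-4):
    """Improve a proposed plan by gradient ascent on plan_return."""
    plan = np.array(plan, dtype=float)
    for _ in range(steps):
        base = plan_return(state0, plan)
        grad = np.zeros_like(plan)
        for i in range(len(plan)):
            bumped = plan.copy()
            bumped[i] += eps
            grad[i] = (plan_return(state0, bumped) - base) / eps
        plan += lr * grad
    return plan

initial_plan = np.zeros(5)                          # e.g. proposed by an experience network
print(refine_plan(0.0, initial_plan))               # refined actions push the state toward 1.0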

---------------------------------------------------------------------------

A General Feed-Forward Algorithm
for Gradient Descent
in Connectionist Networks

S. Thrun, F. Smieja

An extended feed-forward algorithm for recurrent connectionist networks
is presented. This algorithm, which works locally in time, is derived
both for discrete-in-time networks and for continuous networks. Several
standard gradient descent algorithms for connectionist networks (e.g.
Williams/Zipser 88, Pineda 87, Pearlmutter 88, Gherrity 89, Rohwer 87,
Waibel 88), and especially the backpropagation algorithm (Rumelhart/
Hinton/Williams 86), are mathematically derived from this algorithm.
The learning rule presented in this paper is a superset of gradient
descent learning algorithms for multilayer networks, recurrent networks
and time-delay networks that allows any combination of their components.
In addition, the paper presents feed-forward approximation procedures
for initial activations and external input values. The former is used
for optimizing the starting values of the so-called context nodes; the
latter turned out to be very useful for finding spurious input
attractors of a trained connectionist network. Finally, we compare the
time, processor and space complexities of this algorithm with
backpropagation for an unfolded-in-time network and present some
simulation results.

(in: "GMD Arbeitspapiere Nr. 483")
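
To give a feel for what ``works locally in time'' means, here is a toy
forward-in-time gradient computation for a single self-recurrent tanh
unit: the sensitivities dy/dw and dy/dv are carried forward along with
the activation, so no unfolding in time is needed. This one-unit toy is
only meant to illustrate the idea and is not the general algorithm of
the report.

# forward-in-time gradient for y_t = tanh(w*y_{t-1} + v*x_t)   (toy sketch)
import numpy as np

def forward_gradients(x, targets, w=0.1, v=0.5, lr=0.05, epochs=500):
    for _ in range(epochs):
        y = dy_dw = dy_dv = 0.0
        gw = gv = 0.0
        for xt, tt in zip(x, targets):
            y_new = np.tanh(w * y + v * xt)
            d = 1.0 - y_new ** 2                    # derivative of tanh
            dy_dw = d * (y + w * dy_dw)             # carry sensitivities forward in time
            dy_dv = d * (xt + w * dy_dv)
            err = y_new - tt
            gw += err * dy_dw
            gv += err * dy_dv
            y = y_new
        w -= lr * gw
        v -= lr * gv
    return w, v

def generate(x, w, v):                              # teacher with known parameters
    y, out = 0.0, []
    for xt in x:
        y = np.tanh(w * y + v * xt)
        out.append(y)
    return np.array(out)

x = np.random.default_rng(0).uniform(-1.0, 1.0, 50)
print(forward_gradients(x, generate(x, 0.6, 0.9)))  # should move toward (0.6, 0.9)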

---------------------------------------------------------------------------

Both reports can be received by ftp:

unix> ftp cheops.cis.ohio-state.edu

Name: anonymous
Guest Login ok, send ident as password
Password: neuron
ftp> binary
ftp> cd pub
ftp> cd neuroprose
ftp> get thrun.nips90.ps.Z
ftp> get thrun.grad-desc.ps.Z
ftp> bye

unix> uncompress thrun.nips90.ps
unix> uncompress thrun.grad-desc.ps
unix> lpr thrun.nips90.ps
unix> lpr thrun.grad-desc.ps


---------------------------------------------------------------------------

To all European guys: The same files can be retrieved from gmdzi.gmd.de
(129.26.1.90), directory pub/gmd, which is probably a bit cheaper.
---------------------------------------------------------------------------

If you have trouble retrieving the files by ftp, do not hesitate to contact me.


--- Sebastian Thrun
(st@gmdzi.uucp, st@gmdzi.gmd.de)


------------------------------

Subject: Preprint on Texture Generation with the Random Neural Network
From: Erol Gelenbe <erol@ehei.ehei.fr>
Date: Fri, 22 Feb 91 18:13:25


The following paper, accepted for oral presentation at ICANN-91, is
available as a preprint:

Texture Generation with the Random Neural Network Model

by

Volkan Atalay, Erol Gelenbe, Nese Yalabik

A copy may be obtained by e-mailing your request to:

erol@ehei.ehei.fr

Erol Gelenbe
EHEI
Universite Rene Descartes (Paris V)
45 rue des Saints-Peres
75006 Paris

------------------------------

Subject: TR - Finite Precision Error Analysis of ANN H/W
From: Jordan Holt <holt@pierce.ee.washington.edu>
Date: Mon, 25 Feb 91 11:33:44 -0800

Technical Report Available:



Finite Precision Error Analysis of Neural
Network Hardware Implementations

Jordan Holt, Jenq-Neng Hwang

The high speed desired in the implementation of many neural network
algorithms, such as back-propagation learning in a multilayer perceptron
(MLP), may be attained through the use of finite precision hardware.
This finite precision hardware, however, is prone to errors. A method of
theoretically deriving and statistically evaluating this error is
presented and could be used as a guide to the details of hardware design
and algorithm implementation. The paper is devoted to the derivation of
the techniques involved as well as the details of the back-propagation
example. The intent is to provide a general framework by which most
neural network algorithms under any set of hardware constraints may be
evaluated.

Section 2 demonstrates the sources of error due to finite precision
computation and their statistical properties. A general error model is
also derived by which an equation for the error at the output of a
general compound operator may be written. As an example, error equations
are derived in Section 3 for each of the operations required in the
forward retrieving and error back-propagation steps of an MLP.
Statistical analysis and simulation results of the resulting distribution
of errors for each individual step of an MLP are also included in this
section. These error equations are then integrated, in Section 4, to
predict the influence of finite precision computation on several stages
(early, middle, final stages) of back-propagation learning. Finally,
concluding remarks are given in Section 5.
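
As a flavour of the kind of question the report answers analytically,
the sketch below measures the same quantity empirically: it runs the
forward pass of a small multilayer perceptron in simulated fixed-point
arithmetic and records the mean squared output error relative to full
precision, as a function of the word length. Network sizes and weight
values are arbitrary choices for illustration, not taken from the report.

import numpy as np

def quantize(x, bits, limit=1.0):
    """Round to a fixed-point grid with the given word length over [-limit, limit)."""
    step = 2.0 * limit / (2 ** bits)
    return np.clip(np.round(x / step) * step, -limit, limit - step)

def forward(x, weights, bits=None):
    a = x
    for W in weights:
        a = np.tanh(a @ W)
        if bits is not None:
            a = quantize(a, bits)                   # finite-precision activations
    return a

rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.5, (8, 8)), rng.normal(0.0, 0.5, (8, 2))]
x = rng.normal(0.0, 1.0, (1000, 8))
exact = forward(x, weights)
for bits in (4, 6, 8, 12):
    q_weights = [quantize(W, bits) for W in weights]   # finite-precision weights too
    error = forward(x, q_weights, bits=bits) - exact
    print(bits, np.mean(error ** 2))                # error shrinks as word length grows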

---------------------------------------------------------------------------

The report can be received by ftp:

unix> ftp cheops.cis.ohio-state.edu

Name: anonymous
Guest Login ok, send ident as password
Password: neuron
ftp> binary
ftp> cd pub
ftp> cd neuroprose
ftp> get holt.finite_error.ps.Z
ftp> bye

unix> uncompress holt.finite_error.ps
unix> lpr holt.finite_error.ps


------------------------------

Subject: TR - Quantization in ANNs
From: "Yun Xie, Sydney Univ. Elec. Eng." <xie@ee.su.OZ.AU>
Date: Fri, 01 Mar 91 09:59:47 -0500

The following is the abstract of a report on our recent research work. The
report is available by FTP and has been submitted for publication.


Analysis of the Effects of Quantization in Multi-Layer
Neural Networks Using a Statistical Model

Yun Xie                                Marwan A. Jabri
Dept. of Electronic Engineering        School of Electrical Engineering
Tsinghua University                    The University of Sydney
Beijing 100084, P.R. China             N.S.W. 2006, Australia

ABSTRACT

A statistical quantization model is used to analyse the effects of
quantization when digital techniques are used to implement a
real-valued feedforward multi-layer neural network. In this process,
we introduce a parameter that we call the ``effective non-linearity
coefficient'', which is important in the study of quantization effects.
We develop, as functions of the quantization parameters, general
statistical formulations of the performance degradation of the neural
network caused by quantization. Our formulation predicts (as one may
intuitively expect) that the network's performance degradation gets
worse when the number of bits is decreased; that a change in the number
of hidden units in a layer has no effect on the degradation; that, for
a constant ``effective non-linearity coefficient'' and number of bits,
an increase in the number of layers leads to worse performance
degradation of the network; and that the number of bits in successive
layers can be reduced if the neurons of the lower layer are non-linear.
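
Statistical models of this kind usually treat uniform rounding to b
bits over a range of width 2 as adding noise with step size
delta = 2^(1-b) and variance delta^2/12. A quick numerical check of
that standard assumption (the value b = 8 is arbitrary):

import numpy as np

b = 8
delta = 2.0 ** (1 - b)                              # quantization step for b bits on [-1, 1)
x = np.random.default_rng(0).uniform(-1.0, 1.0, 100_000)
err = np.round(x / delta) * delta - x               # rounding error
print(err.var(), delta ** 2 / 12)                   # the two values should agree closely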



unix>ftp cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu
220 cheops.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp>binary
ftp>cd pub
ftp>cd neuroprose
ftp>get yun.quant.ps.Z
ftp>bye

unix>uncompress yun.quant.ps.Z
unix>lpr yun.quant.ps


------------------------------

Subject: preprint - Discovering Viewpoint-Invariant Relationships
From: zemel@cs.toronto.edu
Date: Tue, 05 Mar 91 15:16:56 -0500

The following paper has been placed in the neuroprose archives
at Ohio State University:


Discovering Viewpoint-Invariant Relationships
That Characterize Objects


Richard S. Zemel & Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Toronto, Ont. CANADA M5S-1A4


Abstract

Using an unsupervised learning procedure, a network is trained on an
ensemble of images of the same two-dimensional object at different
positions, orientations and sizes. Each half of the network ``sees'' one
fragment of the object, and tries to produce as output a set of 4
parameters that have high mutual information with the 4 parameters output
by the other half of the network. Given the ensemble of training
patterns, the 4 parameters on which the two halves of the network can
agree are the position, orientation, and size of the whole object, or
some recoding of them. After training, the network can reject instances
of other shapes by using the fact that the predictions made by its two
halves disagree. If two competing networks are trained on an unlabelled
mixture of images of two objects, they cluster the training cases on the
basis of the objects' shapes, independently of the position, orientation,
and size.
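
A rough sketch of the two uses of the trained network mentioned in the
abstract: scoring how well the two halves' 4-parameter outputs agree
across an ensemble, and rejecting individual cases whose predictions
disagree. The Gaussian-style agreement score below is only one common
way to make ``agreement'' concrete and is not necessarily the paper's
exact objective; all names are invented here.

import numpy as np

def agreement_score(out_a, out_b):
    """out_a, out_b: arrays of shape (n_cases, 4), one row per image."""
    s, d = out_a + out_b, out_a - out_b
    cov = lambda z: np.cov(z, rowvar=False) + 1e-6 * np.eye(z.shape[1])
    # large when the two halves vary together far more than they differ
    return 0.5 * np.log(np.linalg.det(cov(s)) / np.linalg.det(cov(d)))

def reject_by_disagreement(out_a, out_b, threshold=1.0):
    """True for cases whose two predictions disagree too much (other shapes)."""
    return np.linalg.norm(out_a - out_b, axis=1) > threshold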




This paper will appear in the NIPS-90 proceedings.


To retrieve it by anonymous ftp, do the following:

unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): <ret>
ftp> cd pub/neuroprose
ftp> binary
ftp> get zemel.unsup-recog.ps.Z
ftp> quit
unix>
unix> zcat zemel.unsup-recog.ps.Z | lpr -P<your postscript printer>


------------------------------

Subject: Tech Report available: NN module systems
From: Frank Smieja <hplabs!gmdzi!smieja>
Organization: Gesellschaft fuer Mathematik und Datenverarbeitung (GMD)
Date: Fri, 08 Mar 91 09:57:11 -0100

The following technical report is available. It will appear in the
proceedings of the AISB90 Conference in Leeds, England. The report
exists as smieja.minos.ps.Z in the Ohio cheops account (in /neuroprose,
anonymous login via FTP). Normal procedure for retrieval applies.



MULTIPLE NETWORK SYSTEMS (MINOS) MODULES:
TASK DIVISION AND MODULE DISCRIMINATION


It is widely considered an ultimate connectionist objective to
incorporate neural networks into intelligent systems. These systems are
intended to possess a varied repertoire of functions enabling adaptable
interaction with a non-static environment. The first step in this
direction is to develop various neural network algorithms and models;
the second step is to combine such networks into a modular structure
that might be incorporated into a workable system. In this paper we
consider one aspect of the second step, namely processing reliability
and the hiding of wetware details. Presented is an architecture for a
type of neural expert module, named an Authority. An Authority consists
of a number of Minos modules. Each of the Minos modules in an Authority
has the same processing capabilities, but varies with respect to its
particular specialization to aspects of the problem domain. The
Authority employs the collection of Minoses like a panel of experts.
The expert with the highest confidence is believed, and it is its
answer and confidence quotient that are transmitted to other levels in
a system hierarchy.
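
The panel-of-experts arrangement described above can be written down in
a few lines; the sketch below is a hypothetical rendering of it, with
invented names and interfaces, in which each module returns an answer
with a confidence and the Authority passes on the most confident pair.

from dataclasses import dataclass
from typing import Any, Callable, Sequence, Tuple

@dataclass
class MinosModule:
    name: str
    predict: Callable[[Any], Tuple[Any, float]]     # returns (answer, confidence)

class Authority:
    def __init__(self, modules: Sequence[MinosModule]):
        self.modules = list(modules)

    def __call__(self, x):
        # poll every specialist and believe the one with the highest confidence
        results = [(m.predict(x), m.name) for m in self.modules]
        (answer, confidence), winner = max(results, key=lambda r: r[0][1])
        return answer, confidence, winner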



------------------------------

Subject: tech report
From: Mathew Yeates <mathew@elroy.Jpl.Nasa.Gov>
Date: Wed, 13 Mar 91 09:38:45 -0800


The following technical report (JPL Publication) is available for
anonymous ftp from the neuroprose directory at cheops.cis.ohio-state.edu.
This is a short version of a previous paper "An Architecture With Neural
Network Characteristics for Least Squares Problems"
and has appeared in
various forms at several conferences.

There are two ideas that may be of interest:
1) By making the input layer of a single-layer Perceptron fully
connected, the learning scheme approximates Newton's algorithm
instead of steepest descent.
2) By allowing local interactions between synapses, the network can
handle time-varying behavior. Specifically, the network can
implement the Kalman Filter for estimating the state of a linear
system.

get both yeates.pseudo-kalman.ps.Z and
yeates.pseudo-kalman-fig.ps.Z

A Neural Network for Computing the Pseudo-Inverse of a Matrix
and Applications to Kalman Filtering

Mathew C. Yeates
California Institute of Technology
Jet Propulsion Laboratory

ABSTRACT

A single layer linear neural network for associative memory is described.
The matrix which best maps a set of input keys to desired output targets
is computed recursively by the network using a parallel implementation of
Greville's algorithm. This model differs from the Perceptron in that the
input layer is fully interconnected, leading to a parallel approximation
to Newton's algorithm. This is in contrast to the steepest descent
algorithm implemented by the Perceptron. By further extending the model
to allow synapse updates to interact locally, a biologically plausible
addition, the network implements Kalman filtering for a single output
system.
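
For reference, here is a plain sequential version of Greville's
column-recursive pseudo-inverse (the computation the abstract says the
network carries out in parallel), followed by the associative-memory
use: the weight matrix that best maps key vectors to target vectors is
W = T K^+. This is a textbook sketch, not the network architecture of
the report; the sizes are arbitrary.

import numpy as np

def greville_pinv(A):
    """Moore-Penrose pseudo-inverse built up one column at a time."""
    A = np.asarray(A, dtype=float)
    a1 = A[:, :1]
    Ap = np.zeros((1, A.shape[0])) if np.allclose(a1, 0) else a1.T / (a1.T @ a1)
    for k in range(1, A.shape[1]):
        ak = A[:, k:k + 1]
        d = Ap @ ak                                 # coordinates in the current columns
        c = ak - A[:, :k] @ d                       # component outside their span
        if np.allclose(c, 0):
            b = d.T @ Ap / (1.0 + d.T @ d)
        else:
            b = c.T / (c.T @ c)
        Ap = np.vstack([Ap - d @ b, b])
    return Ap

# associative memory: columns of K are input keys, columns of T the targets
rng = np.random.default_rng(0)
K, T = rng.normal(size=(6, 4)), rng.normal(size=(3, 4))
W = T @ greville_pinv(K)                            # best least-squares map, W K ~ T
print(np.allclose(greville_pinv(K), np.linalg.pinv(K)))   # expected: True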




------------------------------

Subject: Paper available on adaptive neural filtering
From: yinlin@cs.tut.fi (Lin Yin)
Date: Thu, 14 Mar 91 12:34:22 +0200


The following paper is available, and will appear in ICANN-91
proceedings.


ADAPTIVE BOOLEAN FILTERS FOR NOISE REDUCTION

Lin Yin, Jaakko Astola, and Yrjö Neuvo

Signal Processing Laboratory
Department of Electrical Engineering
Tampere University of Technology
33101 Tampere, FINLAND

Abstract

Although adaptive signal processing is closely related to neural
networks in its principles, adaptive neural networks have yet to
demonstrate their adaptive filtering ability. In this paper, a new
class of nonlinear filters, called Boolean filters, is defined based on
threshold decomposition. It is shown that the Boolean filters include
all median-type filters. Two adaptive Boolean filtering algorithms are
derived for determining optimal Boolean filters under the mean square
error (MSE) criterion or the mean absolute error (MAE) criterion.
Experimental results demonstrate that the adaptive Boolean filters
produce quite promising results in image processing.
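
The construction named in the abstract can be illustrated with a small
example: decompose the signal into binary threshold levels, apply a
Boolean function over a sliding window on each level, and add the
levels back up. With the majority function as the Boolean function this
reproduces the ordinary median filter, which is the sense in which
Boolean filters include the median-type filters. The code is an
invented illustration, not the adaptive algorithms of the paper.

import numpy as np

def boolean_filter(signal, boolean_fn, window=3):
    """Filter a non-negative integer signal via threshold decomposition."""
    signal = np.asarray(signal, dtype=int)
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    out = np.zeros_like(signal)
    for t in range(1, signal.max() + 1):            # one binary signal per level
        binary = (padded >= t).astype(int)
        for i in range(len(signal)):
            out[i] += boolean_fn(binary[i:i + window])
    return out

def majority(bits):                                 # the Boolean function of a median filter
    return int(bits.sum() * 2 > len(bits))

print(boolean_filter([3, 3, 7, 3, 3, 0, 3, 3], majority))   # same as a 3-point median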


For a copy of this preprint send an email request with your MAIL
ADDRESS to: yinlin@tut.fi

---------Lin Yin


------------------------------

Subject: CRL TR-9101: The importance of starting small
From: Jeff Elman <elman@crl.ucsd.edu>
Date: Thu, 14 Mar 91 15:05:04 -0800


The following technical report is available. Hardcopies may be obtained
by sending your name and postal address to crl@crl.ucsd.edu.

A compressed postscript version can be retrieved through ftp
(anonymous/ident) from crl.ucsd.edu (128.54.165.43) in the file
pub/neuralnets/tr9101.Z.


CRL Technical Report 9101

"Incremental learning, or
 The importance of starting small"

Jeffrey L. Elman

Center for Research in Language
Departments of Cognitive Science and Linguistics
University of California, San Diego
elman@crl.ucsd.edu

ABSTRACT

Most work in learnability theory assumes that both the
environment (the data to be learned) and the learning
mechanism are static. In the case of children, however,
this is an unrealistic assumption. First-language learning
occurs, for example, at precisely that point in time when
children undergo significant developmental changes.

In this paper I describe the results of simulations in which
network models are unable to learn a complex grammar when both
the network and the input remain unchanging. However, when either
the input is presented incrementally, or--more realistically--the
network begins with limited memory that gradually increases, the
network is able to learn the grammar. Seen in this light, the
early limitations in a learner may play both a positive and
critical role, and make it possible to master a body of knowledge
which could not be learned in the mature system.
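
The manipulation described in the second paragraph can be written as a
training schedule in which the learner's usable memory window starts
short and grows across phases; in the sketch below, memory is limited
simply by wiping the recurrent context every few tokens. The train_step
stand-in and the phase values are invented for illustration and are not
the simulations of the report.

def train_starting_small(corpus, train_step,
                         phases=((3, 5), (4, 5), (5, 5), (6, 5), (10**6, 5))):
    """phases: pairs of (memory window in tokens, number of epochs)."""
    for window, epochs in phases:
        for _ in range(epochs):
            context = None
            for i, token in enumerate(corpus):
                if i % window == 0:
                    context = None                  # limited memory: forget the past
                context = train_step(token, context)

# trivial stand-in for one training step: it just records context resets
resets = []
def train_step(token, context):
    resets.append(context is None)
    return token                                    # new context is just the last token

train_starting_small("the boy who chases dogs runs".split(), train_step)
print(sum(resets))                                  # more resets occur in the small-memory phases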


------------------------------

End of Neuron Digest [Volume 7 Issue 14]
****************************************
