Neuron Digest   Monday, 14 Oct 1991                Volume 8 : Issue 3 

Today's Topics:
ingber.eeg.ps.Z in Neuroprose archive
Paper available: Synchronization of Spikes
Preprint - Continuous-time temporal back-prop with time delays
TR - Non-linear systems analysis using ANN and fuzzy logic
preprint - Uniqueness of Parisi's Scheme for Symmetry Breaking
Reprint - neural robot control
TR - Stable Control of Nonlinear Systems
TR - Learning, Behavior, and Evolution


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (128.91.2.173). Back issues
requested by mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: ingber.eeg.ps.Z in Neuroprose archive
From: Lester Ingber <ingber@umiacs.UMD.EDU>
Date: Wed, 14 Aug 91 15:04:18 -0500

The paper ingber.eeg.ps.Z has been placed in the Neuroprose archive.
This can be accessed by anonymous FTP on cheops.cis.ohio-state.edu
(128.146.8.62) in the pub/neuroprose directory.

This will laserprint out to 65 pages, so I give the abstract below to
help you decide whether it's worth it. (I also enclose a referee's
review afterwards to sway you the other way.) The six figures can be
mailed on request, and I'm willing to make some hardcopies of the galleys
or reprints available when they arrive. However, since this project is
funded out of my own pocket, I might have to stop honoring such requests.
The published paper will run 44 pages.

This message may be forwarded to other lists.

Lester Ingber


Prof. Lester Ingber
P.O. Box 857             703-759-2769
McLean, VA 22101         ingber@umiacs.umd.edu

=======================================================================
Physical Review A, vol. 44 (6) (to be published 15 Sep 91)

Statistical mechanics of neocortical interactions:
A scaling paradigm applied to electroencephalography

Lester Ingber
Science Transfer Corporation, P.O. Box 857, McLean, VA 22101
(Received 10 April 1991)

A series of papers has developed a statistical mechanics of neocortical
interactions (SMNI), deriving aggregate behavior of experimentally observed
columns of neurons from statistical electrical-chemical properties of
synaptic interactions. While not useful to yield insights at the single
neuron level, SMNI has demonstrated its capability in describing large-scale
properties of short-term memory and electroencephalographic (EEG)
systematics. The necessity of including nonlinear and stochastic structures
in this development has been stressed. In this paper, a more stringent test
is placed on SMNI: The algebraic and numerical algorithms previously
developed in this and similar systems are brought to bear to fit large sets
of EEG and evoked potential data being collected to investigate genetic
predispositions to alcoholism and to extract brain "signatures" of
short-term memory. Using the numerical algorithm of Very Fast Simulated
Re-Annealing, it is demonstrated that SMNI can indeed fit this data within
experimentally observed ranges of its underlying neuronal-synaptic
parameters, and use the quantitative modeling results to examine physical
neocortical mechanisms to discriminate between high-risk and low-risk
populations genetically predisposed to alcoholism. Since this first study is
a control to span relatively long time epochs, similar to earlier attempts
to establish such correlations, this discrimination is inconclusive because
of other neuronal activity which can mask such effects. However, the SMNI
model is shown to be consistent with EEG data during selective attention
tasks and with neocortical mechanisms describing short-term memory (STM)
previously published using this approach. This paper explicitly identifies
similar nonlinear stochastic mechanisms of interaction at the
microscopic-neuronal, mesoscopic-columnar and macroscopic-regional scales of
neocortical interactions. These results give strong quantitative support for
an accurate intuitive picture, portraying neocortical interactions as having
common algebraic or physics mechanisms that scale across quite disparate
spatial scales and functional or behavioral phenomena, i.e., describing
interactions among neurons, columns of neurons, and regional masses of
neurons.

PACS Nos.: 87.10.+e, 05.40.+j, 02.50.+s, 02.70.+d
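
For readers unfamiliar with Very Fast Simulated Re-Annealing, the following
is a minimal sketch (in Python) of its two key ingredients: a per-dimension
generating distribution that narrows as the temperature falls, and the
exponential annealing schedule T(k) = T0*exp(-c*k^(1/D)). The quadratic cost
below is only a placeholder for the SMNI fit to the EEG data; this is an
illustration of the algorithm's shape, not the fitting code used in the paper.

import math, random

def vfsr_sample(x, lo, hi, T):
    """Draw a candidate point: each coordinate moves by
    y = sgn(u - 1/2) * T * ((1 + 1/T)^|2u-1| - 1), scaled to its range."""
    new = []
    for xi, a, b in zip(x, lo, hi):
        while True:
            u = random.random()
            y = math.copysign(1.0, u - 0.5) * T * \
                ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0)
            xt = xi + y * (b - a)
            if a <= xt <= b:          # resample until inside the allowed range
                new.append(xt)
                break
    return new

def vfsr_minimize(cost, lo, hi, n_iter=20000, c=10.0, T0=1.0):
    """Exponential schedule T(k) = T0*exp(-c*k^(1/D)) with Metropolis acceptance."""
    D = len(lo)
    x = [random.uniform(a, b) for a, b in zip(lo, hi)]
    fx = cost(x)
    best, fbest = x[:], fx
    for k in range(1, n_iter + 1):
        T = max(T0 * math.exp(-c * k ** (1.0 / D)), 1e-12)
        cand = vfsr_sample(x, lo, hi, T)
        fc = cost(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand[:], fc
    return best, fbest

# Toy usage: the quadratic below stands in for the real cost function.
print(vfsr_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                    lo=[-5.0, -5.0], hi=[5.0, 5.0]))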

=======================================================================
Report of Referee Manuscript No. AD4564

Over the years, I had several occasions to review papers by Lester
Ingber. However, there was never time enough to fully digest and
comprehend all of the details to convince myself that his efforts of
developing a theoretical basis for describing neocortical brain functions
are in fact sound and not just speculative. This paper dispels all those
reservations and doubts, but unfortunately it is rather lengthy.

This paper, and the research behind it, is pioneering, and it needs to be
published. The question is whether Physical Review A is the appropriate
journal.

Since the paper reviews and presents in a rather comprehensive fashion
the research by Lester Ingber in the area of modeling neocortical brain
functions, I recommend that it be submitted to Review of Modern Physics.
=======================================================================


------------------------------

Subject: Paper available: Synchronization of Spikes
From: Alfred_Nischwitz <alfred@lnt.e-technik.tu-muenchen.de>
Date: Tue, 20 Aug 91 02:26:59 -0800


The following paper has been published in the proceedings of the
International Conference on Artificial Neural Networks '91 in Helsinki:

SYNCHRONIZATION OF SPIKES IN POPULATIONS OF
LATERALLY COUPLED MODEL NEURONS

by Alfred Nischwitz and Peter Klausner
   Lehrstuhl fuer Nachrichtentechnik
   Technische Universitaet Muenchen
   Arcisstrasse 21, D-8000 Muenchen 2, F.R. Germany

and Helmut Gluender
   Institut fuer Medizinische Psychologie
   Ludwig-Maximilians-Universitaet
   Goethestrasse 31, D-8000 Muenchen 2, F.R. Germany

ABSTRACT:

The synchronization of impulse generation in non-oscillating networks of
laterally coupled model neurons is investigated. The influence of network
parameters and of the external stimulation on the quality of
synchronization is quantified. It turns out that synchronization accuracy
on the order of a few tenths of the impulse duration can be achieved
that it is barely influenced by moderate deviations from optimum
parameter values.
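
As an illustration of the kind of simulation summarized above, the following
is a minimal sketch, assuming a chain of leaky integrate-and-fire units with
excitatory coupling to nearest neighbours; the model, time constants and
coupling strength are illustrative assumptions, not the authors' network, and
the synchronization measure is simply the spread of spike times across units.

import random

def simulate(n=20, steps=5000, dt=0.1, tau=10.0, drive=1.2,
             couple=0.05, noise=0.02, refractory=1.0):
    """Chain of leaky integrate-and-fire neurons (threshold 1.0) with lateral
    coupling to nearest neighbours; returns the spike times of each unit."""
    v = [random.random() for _ in range(n)]       # membrane potentials
    lockout = [0.0] * n                           # refractory timers
    spikes = [[] for _ in range(n)]
    for step in range(steps):
        t = step * dt
        fired = [v[i] >= 1.0 for i in range(n)]
        for i in range(n):
            if fired[i]:
                spikes[i].append(t)
                v[i] = 0.0
                lockout[i] = refractory
        for i in range(n):
            if lockout[i] > 0.0:
                lockout[i] -= dt
                continue
            lateral = sum(couple for j in (i - 1, i + 1)
                          if 0 <= j < n and fired[j])
            v[i] += (drive - v[i]) * dt / tau + lateral \
                    + noise * (random.random() - 0.5)
    return spikes

def spread(spikes, k):
    """Crude synchronization measure: spread of the k-th spike time across units."""
    times = [s[k] for s in spikes if len(s) > k]
    return max(times) - min(times) if times else None

spk = simulate()
print("spread of 10th spike (time units):", spread(spk, 10))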

Hardcopies of the paper are available. Please send requests to the
following address in Germany:

Alfred Nischwitz
Department for Communication Science
Technical University of Munich
Arcisstrasse 21, D-8000 Muenchen 2, F.R.Germany
email: alfred@lnt.e-technik.tu-muenchen.de

Alfred Nischwitz

------------------------------

Subject: Preprint - Continuous-time temporal back-prop with time delays
From: shawnd@ee.ubc.ca
Date: Fri, 20 Sep 91 15:11:54 -0800


The following preprint is available by ftp from the neuroprose archive
at archive.cis.ohio-state.edu:


Continuous-Time Temporal Back-Propagation with Adaptable Time Delays

Shawn P. Day
Michael R. Davenport

Departments of Electrical Engineering and Physics
University of British Columbia
Vancouver, B.C., Canada

ABSTRACT

We present a generalization of back-propagation for training multilayer
feed-forward networks in which all connections have time delays as well
as weights. The technique assumes that the network inputs and outputs
are continuous time-varying multidimensional signals. Both the weights
and the time delays adapt using gradient descent, either in ``epochs''
where they change after each presentation of a training signal, or
``on-line'', where they change continuously. Adaptable time delays allow
the network to discover simpler and more accurate mappings than can be
achieved with fixed delays. The resulting networks can be used for temporal
and spatio-temporal pattern recognition, signal prediction, and signal
production. We present simulation results for networks that were trained
on-line to predict future values of a chaotic signal using its present
value as an input. For a chaotic signal generated by the Mackey-Glass
differential-delay equation, networks with adaptable delays typically had
less than half the prediction error of networks with fixed delays.
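
The chain rule behind delay adaptation is easiest to see for a single unit:
with output y(t) = sum_i w_i x(t - d_i) and error e(t) = y(t) - target(t),
the gradients are dE/dw_i = e(t) x(t - d_i) and dE/dd_i = -e(t) w_i x'(t - d_i).
The sketch below trains one such unit on-line to predict a sine wave ahead in
time; the tap count, learning rates and horizon are illustrative assumptions,
and this is not the authors' multilayer algorithm.

import math

def train_adaptive_delays(signal, dt=0.01, n_taps=5, horizon=0.5,
                          lr_w=0.05, lr_d=0.01, max_delay=2.0):
    """Single unit y(t) = sum_i w_i * x(t - d_i) trained to predict
    x(t + horizon); both weights and delays follow the gradient."""
    n = len(signal)
    w = [0.1] * n_taps
    d = [0.1 * (i + 1) for i in range(n_taps)]    # initial delays (seconds)

    def x_at(t):                                   # linear interpolation
        k = min(max(t / dt, 0.0), n - 2)
        i = int(k)
        frac = k - i
        return signal[i] * (1 - frac) + signal[i + 1] * frac

    start = int(max_delay / dt)
    stop = n - int(horizon / dt) - 1
    for step in range(start, stop):
        t = step * dt
        y = sum(w[i] * x_at(t - d[i]) for i in range(n_taps))
        e = y - x_at(t + horizon)                  # prediction error
        for i in range(n_taps):
            xi = x_at(t - d[i])
            dxi = (x_at(t - d[i] + dt) - x_at(t - d[i] - dt)) / (2 * dt)
            w[i] -= lr_w * e * xi                  # dE/dw_i =  e * x(t - d_i)
            d[i] += lr_d * e * w[i] * dxi          # dE/dd_i = -e * w_i * x'(t - d_i)
            d[i] = min(max(d[i], 0.0), max_delay)  # keep delays in a legal range
    return w, d

# Toy usage: predict a noiseless sine wave half a second ahead.
sig = [math.sin(0.5 * k * 0.01) for k in range(5000)]
print(train_adaptive_delays(sig))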

Here's how to get the preprint from neuroprose:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get day.temporal.ps.Z
ftp> quit
unix> uncompress day.temporal.ps.Z
unix> lpr day.temporal.ps (or however you print postscript)

Any questions or comments can be addressed to me at:

Shawn Day
Department of Electrical Engineering
2356 Main Mall
University of British Columbia
Vancouver, B.C., Canada
V6T 1Z4

phone: (604) 264-0024
email: shawnd@ee.ubc.ca


------------------------------

Subject: TR - Non-linear systems analysis using ANN and fuzzy logic
From: Jai Choi <jai@sol.boeing.com>
Date: Tue, 24 Sep 91 15:21:21 -0800

The following technical report is available.
It has been submitted to FUZZ-IEEE '92, San Diego.

Copies are available on request only. Send requests to

Jai J. Choi
Boeing Computer Services
P.O. Box 24346, MS 6C-04
Seattle, WA 98124, USA

or send your surface-mail address to "jai@sol.boeing.com".

Title: Non-linear system diagnosis using neural networks and fuzzy logic

Abstract:

We propose a real-time diagnostic system using a combination of neural
networks and fuzzy logic. This neuro-fuzzy hybrid system performs real-time
processing, prediction and data fusion. A layer of n trained neural networks
processes n independent time series (channels), which can be contaminated
with environmental noise. Each network is trained to predict the future
behavior of one time series. The prediction error and its rate of change
from each channel are computed and sent to a fuzzy logic decision output
stage, which contains n+1 modules. The (n+1)st, final-output module performs
data fusion by combining the n individual fuzzy decisions, which are tuned
to match the domain expert's needs.
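
The channel/fusion layout described above can be sketched in a few lines of
Python. Everything below (membership functions, thresholds, the max-based
fusion rule) is an illustrative assumption rather than the report's design,
and the per-channel predictor networks are represented only by their
predictions.

def fuzzy_alarm(error, d_error, big=1.0, fast=0.5):
    """Per-channel fuzzy module: map the prediction error and its rate of
    change to a degree of fault membership in [0, 1]."""
    mu_big = min(abs(error) / big, 1.0)
    mu_fast = min(abs(d_error) / fast, 1.0)
    return max(mu_big, mu_fast)          # "error is big OR changing fast"

def diagnose(predictions, observations, prev_errors, dt=1.0):
    """(n+1)st fusion module: combine the n per-channel degrees; here simply
    the maximum, so the worst channel dominates the overall alarm."""
    degrees = []
    for pred, obs, prev in zip(predictions, observations, prev_errors):
        err = obs - pred
        degrees.append(fuzzy_alarm(err, (err - prev) / dt))
    return max(degrees), degrees

# Toy usage with three channels (values invented):
print(diagnose(predictions=[0.9, 1.1, 0.5],
               observations=[1.0, 1.1, 2.0],
               prev_errors=[0.05, 0.0, 0.2]))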

------------------------------

Subject: preprint - Uniqueness of Parisi's Scheme for Symmetry Breaking
From: Benny Lautrup <LAUTRUP@nbivax.nbi.dk>
Date: Wed, 02 Oct 91 09:05:00 +0000


New preprint

Uniqueness of Parisi's Scheme for Replica Symmetry Breaking

B. Lautrup
Computational Neural Network Center
The Niels Bohr Institute
Blegdamsvej 17
2100 Copenhagen, Denmark

Abstract:
Replica symmetry breaking in spin glass models is investigated
using elements of the theory of permutation groups. It is shown
how the various types of symmetry breaking give rise to special
algebras and that Parisi's scheme may be uniquely characterized
by two simple conditions on these algebras, namely transposition
symmetry and simple extensibility. An alternative to the Parisi
scheme is shown to be unacceptable.
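
As background for readers who have not seen the construction (standard
textbook material, not taken from the preprint): in the simplest, one-step
version of Parisi's scheme the replicas are grouped into blocks of size m and
the overlap matrix takes the form

    Q_{ab} = \begin{cases}
      0   & a = b, \\
      q_1 & a \neq b, \ a,b \ \text{in the same block}, \\
      q_0 & a,b \ \text{in different blocks},
    \end{cases}
    \qquad 0 \le q_0 \le q_1 ,

and the full scheme iterates this block structure within each block.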


The paper may be retrieved by anonymous ftp from

nbibel.nbi.dk (129.142.100.11)

in the directory pub/neuroprose under the name

lautrup.parisi.ps.Z

It is a compressed postscript file.

Regards

Benny Lautrup


------------------------------

Subject: Reprint - neural robot control
From: Patrick van der Smagt <smagt@fwi.uva.nl>
Date: Wed, 02 Oct 91 09:28:55 +0000

Some time ago I mentioned a paper about neural robot control. By popular
demand, I have now made it available by anonymous ftp from neuroprose.


The following reprint is available by ftp from the neuroprose
archive at archive.cis.ohio-state.edu:

A real-time learning neural robot controller

P. Patrick van der Smagt
Ben J. A. Kr\"ose

Department of Computer Systems
University of Amsterdam
Kruislaan 403, 1098 SJ
Amsterdam, The Netherlands



ABSTRACT

A neurally based adaptive controller for a 6 degrees of freedom (DOF)
robot manipulator with only rotary joints and a hand-held camera is
described. The task of the system is to place the manipulator directly
above an object that is observed by the camera (i.e., 2D hand-eye
coordination). The requirement of adaptivity results in a system which
does not make use of any inverse kinematics formulas or other detailed
knowledge of the plant; instead, it should be self-supervising and adapt
on-line.

The proposed neural system will directly translate the preprocessed
sensory data to joint displacements. It controls the plant in a feedback
loop. The robot arm may make a sequence of moves before the target is
reached, while the network learns from experience in the meantime. The
network is shown to adapt quickly (in only tens of trials) and to form a
correct mapping from the input to the output domain.
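
A minimal runnable caricature of such a self-supervising feedback loop is
sketched below: a hidden 2x2 matrix P stands in for the camera-and-arm plant,
and a linear map W, learned on-line from each (observed image motion,
executed move) pair, stands in for the network. Dimensions, learning rate and
exploration noise are illustrative assumptions, not the authors' controller.

import random

P = [[0.8, 0.2], [-0.3, 1.1]]        # hidden "plant": joint move -> image motion

def plant(dq):
    return [sum(P[i][j] * dq[j] for j in range(2)) for i in range(2)]

W = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]
err = [1.0, -0.5]                    # target offset currently seen by the camera
lr = 0.3

for trial in range(40):
    # control: map the visual error to a joint displacement (plus exploration)
    dq = [sum(W[i][j] * err[j] for j in range(2))
          + 0.05 * (random.random() - 0.5) for i in range(2)]
    moved = plant(dq)                # image motion actually produced by the move
    err = [err[i] - moved[i] for i in range(2)]
    # self-supervised learning: (moved, dq) is a valid training pair for the
    # image-motion -> joint-displacement mapping, so update W by a delta rule
    pred = [sum(W[i][j] * moved[j] for j in range(2)) for i in range(2)]
    for i in range(2):
        for j in range(2):
            W[i][j] += lr * (dq[i] - pred[i]) * moved[j]
    print(trial, [round(e, 3) for e in err])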

Here's how to get the reprint from neuroprose:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get smagt.rtcontrol.ps.Z
ftp> quit
unix> uncompress smagt.rtcontrol.ps.Z
unix> lpr smagt.rtcontrol.ps (or however you print postscript)

Questions or comments can be sent to me at:

Patrick van der Smagt
Department of Computer Systems
University of Amsterdam
Kruislaan 403, 1098 SJ
Amsterdam, The Netherlands
email: smagt@fwi.uva.nl
fax: +31 20 525 7490
phone: +31 20 525 7524


------------------------------

Subject: TR - Stable Control of Nonlinear Systems
From: "E. Tzirkel-Hancock" <et@eng.cam.ac.uk>
Date: Wed, 02 Oct 91 15:31:19 +0000

The following report has been placed in the neuroprose archives at
Ohio State University:

STABLE CONTROL OF NONLINEAR
SYSTEMS USING NEURAL NETWORKS

Eli Tzirkel-Hancock & Frank Fallside

Technical Report CUED/F-INFENG/TR.81

Cambridge University
Engineering Department
Trumpington Street
Cambridge CB2 1PZ
England

Abstract

A neural-network-based direct control architecture is presented that achieves
output tracking for a class of continuous-time nonlinear plants whose
nonlinearities are unknown. The controller employs neural
networks to perform approximate input/output plant linearization. The
network parameters are adapted according to a stability principle. The
architecture is based on a modification of a method previously proposed
by the authors, where the modification comprises adding a sliding control
term to the controller. This modification serves two purposes: first, as
suggested by Sanner and Slotine, sliding control compensates for plant
uncertainties outside the state region where the networks are used, thus
providing global stability; second, the sliding control compensates for
inherent network approximation errors, hence improving tracking
performance.

A complete stability and tracking error convergence proof is given and
the setting of the controller parameters is discussed. It is demonstrated
that as a result of using sliding control, better use of the network's
approximation ability can be achieved, and the asymptotic tracking error
can be made dependent only on inherent network approximation errors and
the frequency range of unmodeled dynamical modes. Two simulations are
provided to demonstrate the features of the control method.
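
To make the combination concrete, here is a scalar sketch, assuming an
unknown plant x' = f(x) + u that must track sin(t): the control cancels a
radial-basis-function estimate of f, adds a feedback term, and adds a small
sliding term k*sgn(e) to cover the residual approximation error, while the
network weights adapt from the tracking error. This is a simplification for
illustration, not the report's controller or its stability proof.

import math

def f_true(x):                        # unknown plant nonlinearity
    return -x ** 3 + 0.5 * math.sin(3.0 * x)

centers = [-2.0 + 0.5 * i for i in range(9)]   # RBF centres over the state range
weights = [0.0] * len(centers)

def phi(x):
    return [math.exp(-4.0 * (x - c) ** 2) for c in centers]

def f_hat(x):                         # network estimate of f
    return sum(w * p for w, p in zip(weights, phi(x)))

dt, lam, k, gamma = 0.001, 2.0, 0.2, 5.0
x = 0.5
for step in range(20000):
    t = step * dt
    xd, xd_dot = math.sin(t), math.cos(t)
    e = x - xd                                    # tracking error
    # control = cancel estimated nonlinearity + feedforward + feedback
    #           + sliding term covering the network's approximation error
    u = -f_hat(x) + xd_dot - lam * e - k * math.copysign(1.0, e)
    # adaptation driven by the tracking error (Lyapunov-style gradient rule)
    for i, p in enumerate(phi(x)):
        weights[i] += gamma * e * p * dt
    x += (f_true(x) + u) * dt                     # integrate the plant
print("final tracking error:", x - math.sin(20000 * dt))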


************************ How to obtain a copy ************************

a) via FTP:

% ftp archive.cis.ohio-state.edu
..
Name (archive.cis.ohio-state.edu): anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get tzirkel.control_tr81.ps.Z
ftp> quit
% uncompress tzirkel.control_tr81.ps.Z
% lp tzirkel.control_tr81.ps

b) via postal mail:

Request a hardcopy from

Eli Tzirkel, et@eng.cam.ac.uk
Speech Laboratory
Cambridge University Engineering Department
Trumpington Street, Cambridge CB2 1PZ
England

------------------------------

Subject: TR - Learning, Behavior, and Evolution
From: stefano nolfi <STIVA%IRMKANT.BITNET@vma.cc.cmu.edu>
Date: Thu, 03 Oct 91 11:41:47 -0500


The following technical report is available.

Send requests to STIVA at IRMKANT.BITNET

DO NOT REPLY TO THIS MESSAGE



Learning, Behavior, and Evolution

Domenico Parisi, Stefano Nolfi, Federico Cecconi
Institute of Psychology
CNR - Rome
e-mail: stiva@irmkant.Bitnet


Abstract

We present simulations of evolutionary processes operating on populations
of neural networks to show how learning and behavior can influence
evolution within a strictly Darwinian framework. Learning can accelerate
the evolutionary process both when the learning tasks are correlated with
the fitness criterion and when random learning tasks are used. Furthermore,
an ability to learn a task can emerge and be transmitted evolutionarily
for both correlated and uncorrelated tasks. Finally, behavior that allows
the individual to self-select the incoming stimuli can influence evolution
by becoming one of the factors that determine the observed phenotypic
fitness on which selective reproduction is based. For all the effects
demonstrated, we advance a consistent explanation in terms of a
multidimensional weight space for neural networks, a fitness surface for
the evolutionary task, and a performance surface for the learning task.
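
The setup can be made concrete with a toy sketch; all tasks, network sizes
and parameters below are illustrative assumptions and the run does not
reproduce the paper's quantitative findings. A population of small weight
vectors evolves on one task, each individual additionally learns during its
lifetime on a second, correlated task, and only the inherited, pre-learning
weights are transmitted, keeping the framework strictly Darwinian.

import random

TARGET_EVOL = [0.7, -0.4, 0.2]    # weights that maximise evolutionary fitness
TARGET_LEARN = [0.6, -0.5, 0.3]   # lifetime learning task, correlated with it

def fitness(w):
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET_EVOL))

def lifetime_learning(w, steps=3, lr=0.1):
    """Gradient learning on the learning task; the genotype itself is untouched."""
    w = w[:]
    for _ in range(steps):
        for i in range(len(w)):
            w[i] -= lr * 2.0 * (w[i] - TARGET_LEARN[i])
    return w

def evolve(generations=50, pop_size=40, mut=0.05):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for g in range(generations):
        # selection acts on phenotypic fitness measured AFTER lifetime learning
        scored = sorted(pop, key=lambda w: fitness(lifetime_learning(w)),
                        reverse=True)
        parents = scored[:pop_size // 5]
        # offspring inherit the pre-learning weights of their parent, mutated
        pop = [[wi + random.gauss(0.0, mut) for wi in random.choice(parents)]
               for _ in range(pop_size)]
    return max(fitness(w) for w in pop)

print("best genotypic fitness after evolution with learning:", evolve())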


This paper will be presented at ECAL-91 - European Conference on Artificial
Life, December 1991, Paris.


------------------------------

End of Neuron Digest [Volume 8 Issue 3]
***************************************
