Neuron Digest	Wednesday,  6 Feb 1991		Volume 7 : Issue 8 

Today's Topics:
Re: Transputers for neural networks?
Simulators
New Book: Neural Networks for Control
Open Link Fault Tolerance Simulations for a Small BP Network
MSc in NEURAL COMPUTATION
Color Vision: BBS Call for Commentators
TR abstract (Constructive Learning by Specialization)
technical report on self-organization is available
Paper announcement: local Backward-Forward
Tech report on 'Awareness"


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Re: Transputers for neural networks?
From: mamisra%pollux.usc.edu@usc.edu (Manavendra Misra)
Date: Fri, 25 Jan 91 16:48:41 -0800

In comp.ai.neural-nets you write:

> Hi netters,

> I am compiling a technical report on scientific/industrial use of
>transputers and similar parallel computers (Intel IPSC, NCube, Cosmic
>Cube, etc...) for neural network research/applications. If you happen
>to be involved both in neural networks and parallel computers, please get
>in touch with me.

Hi Tom, I have just started reading comp.ai.neural-nets so this
might be a delayed response. Anyhow, I am a grad student at USC
and am doing research in parallel implementation of neural networks.
At the Brain Simulation Lab at USC, we have a transputer machine
which we use for face recognition. Although I am not directly
involved in the project, I have used the machine. I shall forward
your mail to Jean-Marc Fellous who is the resident expert on the
machine and he will be delighted to answer your questions, I'm
sure.

I would certainly be interested in the report you compile and would
like a copy. Thanks very much!

Manav.

PS. USC is the University of Southern California, Los Angeles and this
lab houses Prof. Christoph von der Malsburg's group.

------------------------------

Subject: Simulators
From: Bill Mammel <CHRMWCM%engvms.unl.edu@CUNYVM.CUNY.EDU>
Date: Thu, 31 Jan 91 13:57:00 -0500

I'm a senior undergraduate chemical engineer at the University of
Nebraska-Lincoln. I am doing a study using neural networks for chemical
process simulation/optimization. I'm looking at several varieties of the
back-propagation network and Boltzmann Perceptrons. Most of my work
so far has been with the NeuralWorks Explorer simulator. I would welcome
any advice or comments from anyone on this. This work is going to be
presented at a chemical engineering student conference in early April.

Also, I have seen a number of requests for a list of commercial
simulators. I have a list of software simulators and hardware suppliers as
part of my bibliography, and hope to finish checking the compilation
within the next week or so. When I get it done, I'll forward it to the
Digest.

Again, any help would be greatly appreciated as I'm the only person in
the Engineering College using neural networks that I'm aware of.

A lone netter,
Bill Mammel (chrmwcm@engvms.unl.edu)
Department of Chemical Engineering
236 Avery Hall, University of Nebraska-Lincoln
Lincoln, NE 68588-0126
(402) 472-2750

------------------------------

Subject: New Book: Neural Networks for Control
From: Rich Sutton <rich@gte.com>
Date: Fri, 01 Feb 91 14:56:12 -0500

In the final hours of 1990, MIT Press printed a new book that I hope will
be of interest to readers of this mailing list:

NEURAL NETWORKS FOR CONTROL
edited by
W. T. Miller, R. S. Sutton and P. J. Werbos

This is an edited collection with articles by Barto, Narendra, Widrow,
Albus, Anderson, Selfridge, Sanderson, Kawato, Atkeson, Mel, Williams,
and many others.

- Rich Sutton

------------------------------

Subject: Open Link Fault Tolerance Simulations for a Small BP Network
From: Dan Burns <burns@tisss.radc.af.mil>
Date: 01 Feb 91 16:10:00 -0400

Some time ago I was overcome by curiosity about the relative fault
tolerance of ANN and conventional digital circuit forms, so I ftp'ed the
Rochester Connectionist Simulator (RCS) and did some simulations. I would
like to post the results, and perhaps generate some discussion on this
topic. I'm a neophyte in the ANN area, with a background in microcircuit
reliability physics, so if this topic has been hashed and rehashed here,
please excuse and do a find next '-----'. If so, could someone please
point me to a couple of specific, good references? The rest of this
message discusses simulations determining the effects of all possible
single link deletions on the performance of only one small backprop
circuit, for the case of opens developed long after training (eg. due to
long term failure mechanisms).

I chose a simple test case circuit from the RCS package, the 838
encoder/decoder. It encodes 1-of-8 input unit signals to 3 hidden units, and
then decodes these to 1-of-8 output units. A quick estimate of the sizes
of the ANN (21 neurons, 65 synapses) and conventional digital versions
(42 gates) gave me a factor of about 3.5 larger for the ANN (242,000 vs
68,000 sq um). I used estimates based on a neuron summing amplifier
layout from Mead's book, a synapse weight storage cell layout estimated
from a chip photo of Intel's N64, and generic standard cell sizes for the
conventional layout. The penalty might be more using a PLA. I don't know
how this compares with the area penalty for triple redundancy at the
system level (about 3+ ?), or whether an ANN version could match the
fault tolerance of a triply redundant conventional circuit.

There are certainly many possible fault types in both conventional and
ANN circuit forms. Some would be equally fatal to both, for instance an
open power supply bus or shorted output, but others may have very
different effects. I have evaluated only one fault type, a missing link,
which could result from an open circuit in a synapse input or output
conductor or due to electromigration or stress cracking, or a synapse
input or output stuck at zero due to an oxide breakdown, or a light path
blocked by a particle in an optical implementation. (I really chose it
because I happened to learn quickly that there is a delete link statement
in the RCS simulator language).
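(For anyone not using RCS, this fault model is easy to reproduce in any
matrix-based implementation: an open link is just one weight clamped to zero
and held there during any re-training. A minimal sketch in Python/numpy
follows; the weights and the chosen link are made up for illustration, and
this is not the RCS setup used for the results below.)

  import numpy as np

  rng = np.random.default_rng(0)

  # 8-3-8 encoder/decoder: 8 inputs -> 3 hidden -> 8 outputs.
  # (Illustrative random weights; a trained net would be used in practice.)
  W1 = rng.uniform(-0.5, 0.5, size=(8, 3))   # input-to-hidden links
  W2 = rng.uniform(-0.5, 0.5, size=(3, 8))   # hidden-to-output links

  def delete_link(W, mask, i, j):
      # Model an open link: clamp weight (i, j) to zero and record it in a
      # mask so that any later re-training cannot restore it.
      W[i, j] = 0.0
      mask[i, j] = 0.0

  mask1 = np.ones_like(W1)
  mask2 = np.ones_like(W2)
  delete_link(W1, mask1, 4, 2)   # e.g. open the link from input 4 to hidden 2

  # Any subsequent weight update is multiplied by the mask so the deleted
  # link stays open:  W1 = (W1 - lr * dW1) * mask1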

Any open signal line will cause a conventional digital circuit to fail,
except in redundant paths, which are rare, and usually unintentional in
most designs. Note that I am concerned here with long term failures, not
time zero failures. ANN's will certainly yield higher than conventional
for the case of opens present before training, but that is left as an
exercise. How is the ANN version affected by open signal lines which
develop after a time? Sneazy!

For each link in the circuit, I initially did a simulation sequence of
train, delete the link, optionally re-train, and test. These simulations
ran hands off, done with Unix scripts running the RCS, and some small
programs to generate the link list, analyze the test output files and
keep tabs on results. The network sometimes did not train in a set
number of cycles from trial to trial, apparently because of different
random initial weight settings, so I added a test of the network after
training, before doing a link deletion. If the network had not trained
in a set number of cycles, it was started again. The output level
failure criterion I used was (0,1)=(<=300,>=700), relaxed from (0,1000).
The results for the 838 network were:

test name    test sequence                                Pass/Fails
test3.res5   400 training, test, del, test                20P, 48F   29%P
test3.res6   800 training, test, del, test                18P, 50F   26%P
test3.res7   800 training, test, del, 100 training, test  62P,  6F   91%P
test3.res8   800 training, test, del, 100 training, test  64P,  4F   94%P

The network survived almost a third of all possible single link
deletions. With a little re-training after link deletion, it survived
about 90% of single link deletions. The few non-recoverable failures
which did occur were non-repeatable. The last test was a repeat, and
different links failed. The simulations of the failing links were
repeated several times and they passed, perhaps because of different
initial random weight settings and training results. The overall results
were even better for very relaxed output failure criteria, e.g.
(0,1)=(d,highest) or (0,1)=(<=500,>500). Continuous re-training may not
be attractive from a system design point of view, but would appear to
make for a robust system if it can be done without other ill effects
(over-training?). The results for the 848 network (one more hidden unit
than the 838) were:

test name    test sequence                                Pass/Fails
test4.res1   400 training, test, del, test                39P, 46F   46%P
test4.res2   800 training, test, del, test                38P, 47F   46%P
test4.res3   800 training, test, del, 100 training, test  84P,  1F   99%P
test4.res5   800 training, test, del,  50 training, test  83P,  2F   98%P

In general, the 848 network survived almost half of all possible single
link deletions without re-training. Note that pruning out a hidden unit
would degrade fault tolerance with or without re-training. With a
little re-training after link deletion, the 848 survived about 98+% of
single link deletions. Again, the links which caused non-recoverable
failures in the first run were re-run 4 times each, and they all
passed all 4 times.
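
(For anyone who wants to try something like this without RCS, the whole
sequence described above -- train, test, delete one link, optionally
re-train, test -- fits in a short Python/numpy script. The sketch below is
not the RCS setup that produced the numbers above: the training-cycle
counts and the (<=300,>=700) criterion are taken from the text, while the
learning rate, initial weight range, and the handling of nets that fail to
train are placeholders.)

  import numpy as np

  rng = np.random.default_rng(1)
  X = np.eye(8)          # the eight 1-of-8 input patterns
  T = np.eye(8)          # identical 1-of-8 targets (auto-encoding)

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def forward(W1, b1, W2, b2):
      H = sigmoid(X @ W1 + b1)
      return H, sigmoid(H @ W2 + b2)

  def train(W1, b1, W2, b2, m1, m2, cycles, lr=2.0):
      # Plain batch backprop; masked updates keep deleted links open.
      for _ in range(cycles):
          H, Y = forward(W1, b1, W2, b2)
          dY = (Y - T) * Y * (1.0 - Y)
          dH = (dY @ W2.T) * H * (1.0 - H)
          W2 -= lr * (H.T @ dY) * m2;  b2 -= lr * dY.sum(0)
          W1 -= lr * (X.T @ dH) * m1;  b1 -= lr * dH.sum(0)

  def passes(W1, b1, W2, b2):
      # Relaxed criterion from the text, activations scaled 0..1000:
      # targets of 1 must come out >= 700, targets of 0 must come out <= 300.
      _, Y = forward(W1, b1, W2, b2)
      out = 1000.0 * Y
      return np.all(out[T == 1] >= 700) and np.all(out[T == 0] <= 300)

  results = []
  for layer, (rows, cols) in enumerate([(8, 3), (3, 8)]):
      for i in range(rows):
          for j in range(cols):
              W1 = rng.uniform(-0.5, 0.5, (8, 3));  b1 = np.zeros(3)
              W2 = rng.uniform(-0.5, 0.5, (3, 8));  b2 = np.zeros(8)
              m1 = np.ones_like(W1);  m2 = np.ones_like(W2)
              train(W1, b1, W2, b2, m1, m2, cycles=800)
              if not passes(W1, b1, W2, b2):
                  continue   # did not train in 800 cycles (RCS runs restarted)
              # Delete one link (only the inter-unit links are covered here).
              (W1 if layer == 0 else W2)[i, j] = 0.0
              (m1 if layer == 0 else m2)[i, j] = 0.0
              train(W1, b1, W2, b2, m1, m2, cycles=100)   # optional re-train
              results.append(passes(W1, b1, W2, b2))

  print(sum(results), "P /", len(results) - sum(results), "F")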

If periodic re-training increases fault tolerance, how much is optimum?
It probably takes a lot of time and should be minimized. Tolerance to
specific faults depends on the specific training result and in turn,
random initial weight settings. Perhaps there are things which can be
done to increase fault tolerance (e.g. limit weight maximums, reverse
prune). In a critical application without ongoing learning, it might be
wise to produce several training results and test them for fault
tolerance before picking one.
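
(Of these, limiting weight maximums is the easiest to bolt onto any
gradient-based trainer. One simple reading of the idea, offered as a sketch
rather than a recommendation, is to clip after every update so that no
single link ends up carrying too much of the mapping.)

  import numpy as np

  def clipped_update(W, grad, lr=0.5, w_max=2.0):
      # Gradient step followed by clipping: one cheap way to "limit weight
      # maximums".  Whether this actually buys fault tolerance is exactly
      # the open question raised above.
      return np.clip(W - lr * grad, -w_max, w_max)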

This leaves me wondering... Who cares about ANN fault tolerance? Enough
to build in re-training and more hidden units? Can fault tolerance be
predicted from theory, given that it probably won't be possible to simulate
large systems? Is anyone working on testing strategies for complex ANNs?
Is built in self test needed to boost fault coverage, as in conventional
circuit forms? How does one isolate faults in ANN chips for analysis,
based on electrical tests?

I'm willing to share any of the Unix scripts to run the jobs for this
work if you're into the RCS and want to run them overnight on your
favorite net (on your own computer, thank you).

Dan Burns
RL/RBRP, Reliability Physics Branch, Rome Laboratory
Bldg. 3, Room 1080, Griffiss AFB NY 13441-5700
tel 315-330-2868
burns@tisss.radc.af.mil
Branch Motto: Comprehendere Vires, Studeamus Infirmitates,
              Veni ad Veritas, Et Exposum BSum.
Assertions, opinions, and mottos expressed herein are mine alone and may
not be used to form the basis of litigation against the Government [;+)

------------------------------

Subject: MSc in NEURAL COMPUTATION
From: "
Peter J.B. Hancock" <pjh@compsci.stirling.ac.uk>
Date: Mon, 28 Jan 91 16:07:12 +0000


M.Sc. in NEURAL COMPUTATION:
A one-year full time course at the University of Stirling, Scotland,
offered by the Centre for Cognitive and Computational Neuroscience
(CCCN), and the Departments of Psychology, Computing Science, and Applied
Mathematics.

Aims and context:
The course is designed for students entering the field from any of a
variety of disciplines, e.g. Computing, Psychology, Biology, Engineering,
Mathematics, Physics. It aims to combine extensive practical experience
with a concern for basic principles. The study of neural computation in
general is combined with an in-depth analysis of vision.

The first few weeks of the course form an integrated crash course in the
basic techniques and ideas. During the autumn semester, lectures,
seminars, and specified practical exercises predominate. In the spring
and summer, work based on each student's own interests and abilities
predominates. This culminates in a research project that can be submitted
anytime between July 1 and September 1. Where work on the M. Sc. has been
of a sufficiently high standard it can be converted into the first year
of a Ph. D. program.

Courses:
Autumn: 1. Principles of neural computation. 2. Principles of vision.
3. Cognitive Neuroscience. 4. Computational and Mathematical techniques.

Spring and summer: 1. Advanced practical courses, including e.g.
properties, design and use of neurocomputational systems, image
processing, visual psychophysics. 2. Advanced topics in neural
computation, vision, and cognitive neuroscience. 3. Research project.

CCCN: The CCCN is a broadly-based interdisciplinary research centre. It
has a well established reputation for work on vision, neural nets, and
neuropsychology. The atmosphere is informal, friendly, and enthusiastic.
Research meetings are held once or twice a week during semester.
Students, research staff, and teaching staff work closely together. The
centre has excellent lab and office space overlooking lakes and
mountains. The university is located on the most beautiful landscaped
campus in Europe. It has excellent sporting facilities. Some of the
most striking regions of the Scottish highlands are within easy reach.

Eligibility:
Applicants should have a first degree, e.g. B.A., B.Sc., in any of
a variety of disciplines, e.g. Computing, Psychology, Biology,
Mathematics, Engineering, Physics.

For further information and application forms contact:
School Office, School of Human Sciences,
Stirling University, Stirling FK9 4LA, SCOTLAND

Specific enquiries to:
Dr W A Phillips, CCCN, Psychology, Stirling University, Scotland
e-mail: WAP at UK.AC.STIRLING.CS
No deadline for applications is specified.

------------------------------

Subject: Color Vision: BBS Call for Commentators
From: Stevan Harnad <harnad@clarity.Princeton.EDU>
Date: Mon, 04 Feb 91 11:46:22 -0500

[Apologies if you receive this Call more than once; it has been sent to
several lists, some with overlapping subscriberships]

Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international, interdisciplinary
journal that provides Open Peer Commentary on important and controversial
current research in the biobehavioral and cognitive sciences.
Commentators must be current BBS Associates or nominated by a current BBS
Associate. To be considered as a commentator on this article, to suggest
other appropriate commentators, or for information about how to become a
BBS Associate, please send email to:

harnad@clarity.princeton.edu or harnad@pucc.bitnet or write to: BBS, 20
Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

To help us put together a balanced list of commentators, please give some
indication of the aspects of the topic on which you would bring your
areas of expertise to bear if you were selected as a commentator. A
nonfinal draft of the full text is available for inspection by anonymous
ftp according to the instructions following the abstract.
____________________________________________________________________
WAYS OF COLORING
Comparative color vision as a case study for cognitive science

Evan Thompson
Center for Cognitive Studies, Tufts University, Medford, MA. 02155
E-mail: ethompso@pearl.tufts.edu

Adrian Palacios
Department of Biology, Yale University, New Haven, CT. 06511
Institut des Neurosciences (CNRS- Paris VI), 9 Quai St. Bernard, 75005 Paris
E-mail: apalac@yalevm.bitnet

Francisco J. Varela
Institut des Neurosciences (CNRS- Paris VI), 9 Quai St. Bernard, 75005 Paris
CREA, Ecole Polytechnique, 1 rue Descartes, 75005 Paris
E-mail: fv@frunip62.bitnet

ABSTRACT: Different explanations of color vision favor different
philosophical positions: Computational vision is more compatible with
objectivism (the color is in the object), psychophysics and
neurophysiology with subjectivism (the color is in the head).
Comparative research suggests that an explanation of color must be both
experientialist (unlike objectivism) and ecological (unlike
subjectivism). Computational vision's emphasis on optimally "recovering"
prespecified features of the environment (i.e., distal properties,
independent of the sensory-motor capacities of the animal) is
unsatisfactory. Conceiving of visual perception instead as the visual
guidance of activity in an environment that is determined largely by that
very activity suggests new directions for research.

Keywords: adaptation, color vision, comparative vision, computation,
evolution, ecological optics, objectivism, ontology, qualia, sensory
physiology, subjectivism.

To help you decide whether you would wish to comment on this article, a
(nonfinal) draft is retrievable by anonymous ftp from princeton.edu
according to the instructions below. The filename is thompson.bbs. Please
do not prepare a commentary on this draft. Just let us know, from
inspecting it, what relevant expertise you feel you would bring to bear
on what aspect of the article.

-------------------------------------------------------------
To retrieve a file by ftp from a Unix/Internet site, type:
ftp princeton.edu

When you are asked for your login, type:
anonymous

For your password, type:
ident

then change directories with:
cd pub/harnad

To show the available files, type:
ls

Next, retrieve the file you want with (for example):
get thompson.bbs

When you have the file(s) you want, type:
quit

---
The above cannot be done from Bitnet directly, but there is a fileserver
called bitftp@pucc.bitnet that will do it for you. Send it the one line
message

help

for instructions (which will be similar to the above, but will be in the
form of a series of lines in an email message that bitftp will then
execute for you).

------------------------------

Subject: TR abstract (Constructive Learning by Specialization)
From: P.Refenes@cs.ucl.ac.uk
Date: Fri, 18 Jan 91 17:49:39 +0000


CONSTRUCTIVE LEARNING by SPECIALISATION

A. N. REFENES & S. VITHLANI
Department of Computer Science
University College London
Gower Street London
WC1 6BT, UK
ABSTRACT


This paper describes and evaluates a procedure for constructing and
training multi-layered perceptrons by adding units to the network
dynamically during training. The ultimate aim of the learning procedure
is to construct an architecture which is sufficiently large to learn the
problem but small enough to generalise well. Units are added as they
are needed. By showing that the newly added unit makes fewer mistakes
than before, and by training the unit not to disturb the earlier
dynamics, eventual convergence to zero-errors is guaranteed. Other
techniques operate in the opposite direction by pruning the network and
removing "
redundant" connections. Simulations suggest that this method
is efficient in terms of learning speed, the number of free parameters in
the network (i.e. the number of free parameters in the network (i.e.
connections), and it compares well to other methods in terms of
generalisation performance.
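
(The report itself should be consulted for the actual specialisation
procedure. As a rough, generic illustration of the constructive idea only,
the Python/numpy sketch below simply grows the hidden layer whenever
backprop training stalls above an error tolerance; all rates, sizes and
thresholds are placeholders, and the "do not disturb earlier dynamics"
constraint is not implemented.)

  import numpy as np

  rng = np.random.default_rng(0)

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def train_constructive(X, T, max_hidden=10, epochs=5000, lr=1.0, tol=0.01):
      # Start with one hidden unit; whenever training stalls above the
      # error tolerance, add another hidden unit and keep training.
      n_in, n_out = X.shape[1], T.shape[1]
      W1 = rng.normal(0, 0.5, (n_in, 1));  b1 = np.zeros(1)
      W2 = rng.normal(0, 0.5, (1, n_out)); b2 = np.zeros(n_out)
      while True:
          for _ in range(epochs):
              H = sigmoid(X @ W1 + b1)
              Y = sigmoid(H @ W2 + b2)
              dY = (Y - T) * Y * (1.0 - Y)
              dH = (dY @ W2.T) * H * (1.0 - H)
              W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(0)
              W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(0)
          err = np.mean((Y - T) ** 2)
          if err < tol or W1.shape[1] >= max_hidden:
              return W1, W2, err
          # Training stalled: grow the network by one hidden unit.
          W1 = np.hstack([W1, rng.normal(0, 0.5, (n_in, 1))])
          b1 = np.append(b1, 0.0)
          W2 = np.vstack([W2, rng.normal(0, 0.5, (1, n_out))])

  # XOR: one hidden unit is not enough, so at least one unit gets added.
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  T = np.array([[0], [1], [1], [0]], dtype=float)
  W1, W2, err = train_constructive(X, T)
  print(W1.shape[1], "hidden units, final MSE", err)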

------------------------------

Subject: technical report on self-organization is available
From: Erdi Peter <h1705erd@ella.hu>
Date: Mon, 28 Jan 91 11:09:00

[[ Editor's Note: This is from one of the newest subscribers. I'm glad
to see Hungary on the network and active in these areas! -PM ]]

Technical Report on self-organization is available:

SELF-ORGANIZATION IN THE NERVOUS SYSTEM:
NETWORK STRUCTURE AND STABILITY

(KFKI-1990/60; 23 pages, to appear in: Mathematical Approaches to Brain
Functioning Diagnostics, Manchester Univ. Press)

Abstract: Many neurodynamic phenomena can be interpreted in terms
of self-organization. The conceptual and mathematical frameworks
of neural network models are reviewed. The connections between the
structure and dynamics of neural architectures are studied by
qualitative stability analysis. Some criteria for the structural
basis of regular, periodic and chaotic behaviour in neural networks
are given.

Peter Erdi, Biophysics Group
Central Research Institute for Physics
Hungarian Academy of Sciences, H-1525 Budapest P.O. Box 49,
Hungary
e-mail: h1705erd@ella.uucp

------------------------------

Subject: Paper announcement: local Backward-Forward
From: thanasis kehagias <ST401843@brownvm.brown.edu>
Date: Thu, 31 Jan 91 12:41:42 -0500


I just finished writing the following paper and I am considering possible
means of distribution. Sad as it is, our budget is in bad shape and I
cannot afford to send hard copies, so please do not ask. If there is
sufficient interest, I will post a Postscript copy on the ohio-state
archive. In the meantime, anybody who wants a LaTeX file copy, write me
and I will send one electronically.

Thanasis

___________________________________________________________________


Stochastic Recurrent Networks Training by

the Local Backward-Forward Algorithm


by Ath. Kehagias

Division of Applied Mathematics
Brown University
Providence, RI 02912

We introduce Stochastic Recurrent Networks, which are collections of
interconnected finite state units. A unit goes into a new state at every
discrete time step following a probability law that is conditional on the
state of neighboring units at the previous time step. A network of this
type can be trained to learn a time series, where ``training'' is
understood in terms of maximizing the Likelihood function. A new training
(i.e. Likelihood maximization) algorithm is introduced, the Local
Backward-Forward Algorithm. The new algorithm is based on the fast
Backward-Forward Algorithm of Hidden Markov Models training and improves
speed of learning (as compared to Back Propagation) dramatically.

_____________________________________________________________________

By way of further explanation:

The Local Backward Forward algorithm is a modification of the Hidden
Markov Models Backward Forward algorithm. Recall that BF is a greedy
algorithm, just like Back Propagation, but the step size in every
iteration is chosen automatically. So we can have large steps in the
descent direction and the algorithm converges very fast. For the
algorithm to be applicable we need a new type of neural net; let's call it
a Stochastic Recurrent Net. It is a collection of local, finite-state,
probabilistic automata. The network parameters are the local transition
probabilities. There is no specific mention of nonlinear input/output
units (e.g. sigmoids), even though the SRN can be implemented this way.

Given the local transition probabilities, we can compute a global Markov
transition matrix for the SRN. The SRN is essentially a Hidden Markov
Model. But the local BF estimates local transition probabilities rather
than the global state transition matrix. In this way I try to combine the
strong points of connectionism (parallel distributed processing) and HMM
(fast training). Judging from numerical examples, training is indeed
fast.
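
(To make the local-to-global relation concrete: assuming synchronous units
that are conditionally independent given the previous global state -- one
natural reading of such a model, though the paper should be checked for the
exact definitions -- the global 2^N x 2^N transition matrix is just a
product of the local conditional probabilities. A small Python/numpy sketch
with made-up numbers:)

  import numpy as np
  from itertools import product

  N = 3                                  # three binary units (toy size)
  rng = np.random.default_rng(0)
  states = list(product([0, 1], repeat=N))

  # local[i][s]: probability that unit i is ON at the next step, given
  # that the previous global state was s (here each unit is allowed to
  # see the whole previous state; a sparser neighbourhood works the same).
  local = [{s: rng.uniform(0.1, 0.9) for s in states} for i in range(N)]

  # With synchronous, conditionally independent updates the global
  # transition probability factorises into a product of local ones.
  P = np.zeros((len(states), len(states)))
  for a, s in enumerate(states):
      for b, t in enumerate(states):
          p = 1.0
          for i in range(N):
              p_on = local[i][s]
              p *= p_on if t[i] == 1 else 1.0 - p_on
          P[a, b] = p

  assert np.allclose(P.sum(axis=1), 1.0)   # each row is a distribution
  print(P.shape)                           # (8, 8): the 2**N-state chain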


The SRN equations look identical to those of a control system, except that
they describe a finite-state system. It should be possible to carry over
many useful algorithms of stochastic and optimal control and apply them
to connectionist problems.

------------------------------

Subject: Tech report on 'Awareness"

From: Christof Koch <koch%CITIAGO.BITNET@VMA.CC.CMU.EDU>
Date: Sat, 02 Feb 91 13:05:15 -0800

The following paper is available by anonymous FTP from Ohio State
University, in pub/neuroprose. The manuscript, a verbatim copy of the
same-named manuscript which appeared in "Seminars in the Neurosciences",
Volume 2, pages 263-273, 1990, is stored in three files called
koch.awareness1.ps.Z, koch.awareness2.ps.Z and koch.awareness3.ps.Z. For
instructions on how to get these files, see below.




TOWARDS A NEUROBIOLOGICAL THEORY OF CONSCIOUSNESS

Francis Crick and Christof Koch



Visual awareness is a favorable form of consciousness to study
neurobiologically. We propose that it takes two forms: a very fast
form, linked to iconic memory, that may be difficult to study; and a
somewhat slower one involving visual attention and short-term memory.
In the slower form an attentional mechanism transiently binds together
all those neurons whose activity relates to the relevant features of a
single visual object. We suggest that this is done by generating
coherent semi-synchronous oscillations, probably in the 40 Hz range.
These oscillations then activate a transient short-term (working)
memory. We outline several lines of experimental work that might
advance the understanding of the neural mechanisms involved. The
neural basis of very short-term memory especially needs more
experimental study.

Key words: consciousness / awareness / visual attention / 40 Hz
oscillations / short-term memory.


For comments, send e-mail to koch@iago.caltech.edu.

Christof



P.S. And this is how you can FTP and print the file:

unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose (actually, cd neuroprose)
ftp> binary
ftp> get koch.awareness1.ps.Z
ftp> get koch.awareness2.ps.Z
ftp> get koch.awareness3.ps.Z
ftp> quit
unix> uncompress koch.awareness1.ps.Z
unix> lpr koch.awareness1.ps
unix> uncompress koch.awareness2.ps.Z
unix> lpr koch.awareness2.ps
unix> uncompress koch.awareness3.ps.Z
unix> lpr koch.awareness3.ps

Done!


------------------------------

End of Neuron Digest [Volume 7 Issue 8]
***************************************
