NEURON Digest	Wed Jan 20 18:46:14 CST 1988   Volume 3 / Issue 1 
Today's Topics:

Re: Genesis of language (was: Why can't my cat talk, and a bunch of others)
MLNS Announcement
Net Simulators Review for Science Magazine?
An NN Simulator for a Simulation Class
Neural computations based on timing and phase in neural nets
MacBrain
Hopfield Networks
Mathematical Linguistics
Commercial products based on neural nets?
NAMING CONVENTION
Stanford Adaptive Networks Colloquium
Tech Report -- Connectionist Bibliography

----------------------------------------------------------------------

Date: 10 Dec 87 17:00:07 GMT
From: mnetor!utzoo!henry@uunet.uu.net (Henry Spencer)
Subject: Re: Genesis of language (was: Why can't my cat talk, and ...)

> Apropos cell death in brains, the old saw about losing 10,000 neurons every
> day is now being challenged...

Also of note, the other old saw that new neurons are not formed after birth
in higher animals is now known to be untrue. At least some higher animals
(some types of birds) do grow new neurons at times. Last I heard, nobody
yet knows how common this phenomenon is.
--
Those who do not understand Unix are | Henry Spencer @ U of Toronto Zoology
condemned to reinvent it, poorly. | {allegra,ihnp4,decvax,utai}!utzoo!henry

------------------------------

Date: Sun, 10 Jan 88 21:50:43 EST
From: Bob Weidlich <weidlich@ludwig.scc.com>
Subject: MLNS Announcement


A PROPOSAL TO THE NEURAL NETWORK RESEARCH COMMUNITY
TO BUILD A
MULTI-MODELED LAYERED NEURAL NETWORK SIMULATOR TOOL SET (MLNS)

Robert Weidlich

Contel Federal Systems

January 11, 1988


The technology of neural networks is in its infancy. Like all other major new
technologies at that stage, the development of neural networks is slowed by
many impediments along the road to realizing its potential to solve many sig-
nificant real world problems. A common assumption of those on the periphery
of neural network research is that the major factor holding back progress is
the lack of hardware architectures designed specifically to implement neural
networks. But those of us who use neural networks on a day to day basis real-
ize that a much more immediate problem is the lack of sufficiently powerful
neural network models. The pace of progress in the technology will be deter-
mined by the evolution of existing models such as Back Propagation, Hopfield,
and ART, as well as the development of completely new models.

But there is yet another significant problem that inhibits the evolution of
those models: lack of powerful-yet-easy-to-use, standardized, reasonably-
priced toolsets. We spend months of time building our own computer simula-
tors, or we spend a lot of money on the meager offerings of the marketplace;
in either case we find we spend more time building implementations of the
models than applying those models to our applications. And those who lack
sophisticated computer programming skills are cut out altogether.

I propose to the neural network research community that we initiate an
endeavor to build a suite of neural network simulation tools for the public
domain. The team will hopefully be composed of a cross-section of industry,
academic institutions, and government, and will use computer networks, pri-
marily Arpanet, as its communications medium. The tool set, hereinafter
referred to as the MLNS, will ultimately implement all of the significant
neural network models, and run on a broad range of computers.

These are the basic goals of this endeavor:

1. Facilitate the growth and evolution of neural network technology by
building a set of powerful yet simple to use neural network simula-
tion tools for the research community.

2. Promote standardization in neural network tools.

3. Open up neural network technology to those with limited computer
expertise by providing powerful tools with sophisticated graphical
user interfaces. Open up neural network technology to those with
limited budgets.

4. Since we expect neural network models to evolve rapidly, update the
tools to keep up with that evolution.

This announcement is a condensation of a couple of papers I have written
describing this proposed effort. At the end of this announcement, I describe
how to get copies of those documents and how to get involved in the project.

The MLNS tool will be distinctive in that it will incorporate a layered approach
to its architecture, thus allowing several levels of abstraction. In a sense,
it is really a suite of neural net tools, one operating atop the other,
rather than a single tool. The upper layers enable users to build sophisti-
cated applications of neural networks which provide simple user interfaces,
and hide much of the complexity of the tool from the user.
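
As a rough illustration of the layering idea, the Python sketch below shows a
generic network "kernel" layer with a model-specific layer built on top of it.
Every class and method name is invented for illustration only; none of it is
part of any existing MLNS design.

    # Illustration only: MLNS does not exist yet, so every name here is an
    # invented placeholder for the layering idea, not a real interface.
    import math

    class NetworkKernel:
        """Lower layer: generic units, weights, and activation propagation."""
        def __init__(self, n_units):
            self.n = n_units
            self.weights = [[0.0] * n_units for _ in range(n_units)]
            self.activations = [0.0] * n_units

        def propagate(self):
            """One synchronous pass through a logistic squashing function."""
            new = []
            for j in range(self.n):
                net = sum(self.activations[i] * self.weights[i][j]
                          for i in range(self.n))
                new.append(1.0 / (1.0 + math.exp(-net)))
            self.activations = new

    class HopfieldModel:
        """Upper layer: one of several model packages built on the same kernel."""
        def __init__(self, n_units):
            self.kernel = NetworkKernel(n_units)

        def store(self, pattern):
            # Hebbian outer-product storage; the kernel needs no model knowledge.
            for i in range(self.kernel.n):
                for j in range(self.kernel.n):
                    if i != j:
                        self.kernel.weights[i][j] += pattern[i] * pattern[j]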

This tool will implement as many significant neural network models (i.e., Back
Propagation, Hopfield, ART, etc.) as is feasible to build. The first release
will probably cover only 3 or 4 of the more popular models. We will take an
iterative approach to building the tool and we will make extensive use of
rapid prototyping.

I am asking for volunteers to help build the tool. We will rely on computer
networks, primarily Arpanet and those networks with gateways on Arpanet, to
provide our communications utility. We will need a variety of skills - pro-
grammers (much of it will be written in C), neural network "experts", and
reviewers. Please do not be reluctant to help out just because you feel
you're not quite experienced enough; my major motivation for initiating this
project is to round-out my own neural networking experience. We also need
potential users who feel they have a pretty good feel for what is necessary
and desirable in a good neural network tool set.

The tool set will be 100% public domain; it will not be the property of, or
copyrighted by my company (Contel Federal Systems) or any other organization,
except for a possible future non-commercial organization that we may want to
set up to support the tool set.

If you are interested in getting involved as a designer, an advisor, a poten-
tial user, or if you're just curious about what's going on, the next step is
to download the files in which I describe this project in detail. You can do
this by anonymous ftp file transfer. To do that, take the following steps
(a scripted sketch of the same session appears after the file list):

1. Set up an ftp session with my host:

"ftp ludwig.scc.com"
(Note: this is an arpanet address. If you are
on a network other than arpanet with a gateway
to arpanet, you may need a modified address
specification. Consult your local comm network
guru if you need help.)

2. Login with the user name "anonymous"
3. Use the password "guest"
4. Download the pertinent files:

"get READ.ME" (the current status of the files)
"get mlns_spec.doc (the specification for the MLNS)
"
get mlns_prop.doc (the long version of the proposal)
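
For those who prefer to script the session, the sketch below drives the same
four steps with Python's ftplib; the host name is simply copied from step 1
and may of course be unreachable from your site.

    # Scripted version of the manual steps above, using Python's ftplib.
    from ftplib import FTP

    ftp = FTP("ludwig.scc.com")                      # step 1: open the session
    ftp.login(user="anonymous", passwd="guest")      # steps 2 and 3
    for name in ("READ.ME", "mlns_spec.doc", "mlns_prop.doc"):   # step 4
        with open(name, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()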

If for any reason you cannot download the files, then call or write me at the
following address:

Robert Weidlich
Mail Stop P/530
Contel Federal Systems
12015 Lee Jackson Highway
Fairfax, Virginia 22033
(703) 359-7585 (or) (703) 359-7847
(leave a message if I am not available)
ARPA: weidlich@ludwig.scc.com




------------------------------

Date: Fri, 4 Dec 87 16:06:01 EST
From: futrell@corwin.ccs.northeastern.edu
Subject: Net Simulators Review for Science Magazine?

As you may know, Science magazine publishes reviews of
software from time to time. There is some interest at
Science in publishing a review of neural net simulation
systems, with emphasis on ones that run on micros. In
order to help Science arrange for these reviews, I need
to know:

1. Candidate packages for review. These should have
been on the market for a while (not a week!). They do
not have to be commercial, but there has to be some
confidence that the distributor will support them and
answer users' questions in the future. Note that the
packages are to be simulators, not applications which
focus on merely using neural or connectionist learning
technology underneath to solve some other problem.

2. Candidates for the reviewer. The person(s) who writes
the review should be, ideally, a working scientist who has
day-to-day involvement with neuroscience or cognitive
science. The philosophy at Science is to have scientists
write reviews for other scientists.

Science mails about 170,000 issues a week, and its
readership is far larger than that. The thrust of the
magazine is primarily biological, but that is more by
historical accident than by design.

Please contact me, rather than Science (I am one of the
advisors for their software reviews).

Mail to this net or to me at (CSNet):
futrelle@corwin.ccs.northeastern.edu

or: (617)-437-2076

or: Robert P. Futrelle
College of Computer Science 161CN
Northeastern University
360 Huntington Ave.
Boston, MA 02115

------------------------------

Date: Fri, 11 Dec 87 13:04:54 EST
From: "Charles S. Roberson" <csrobe@icase.arpa>
Subject: An NN Simulator for a Simulation Class

The study of Neural Net Simulations is starting to get some attention at
William and Mary and I have been given a task -- Find 'something' from
the Neural Net domain that could be discussed and simulated in a second
semester simulation class. So I decided to ask those most knowledgeable
in the field -- you!

Some background: Simulation II is a graduate-level course which has
Simulation I as a prerequisite. Simulation I is a rather intense class which
teaches the fundamentals of random number generation (from discrete and
continuous distributions), fitting distributions to data, and simulating
discrete events. The professor's primary area of research is Digital
Image Processing, and his curiosity is piqued concerning Neural Nets.

The Proposal: The professor has agreed to introduce a Neural Net simulation
late in the second semester IFF there is a well-written paper that would,
on its own merit, provide enough of a framework to allow its simulation.
It is not necessary that the (graduate) students be able to understand
the whole paper, but the professor must be able to digest enough of the
paper to properly present it to the class.

My Question: Is there such a paper that would fit these restrictions?
All simulations have been of the 'Roll your own' variety (we have written
everything down to our random number (Lehmer) generator), so I would
imagine this philosophy will carry into the second semester. The amount
of time spent on this specific simulation would probably not exceed two
weeks. I know this is limiting, but the ability to perform this
simulation could open up William and Mary to investigating Neural Networks
more thoroughly.
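
For reference, the Lehmer generator mentioned above is usually the Park-Miller
multiplicative congruential scheme; the constants in the sketch below are the
textbook choice and not necessarily the ones used in the class.

    # Minimal Lehmer (multiplicative congruential) generator, x <- (a*x) mod m.
    # a = 16807 and m = 2**31 - 1 are the standard Park-Miller constants; the
    # class's actual parameters are not given in this message.
    class Lehmer:
        def __init__(self, seed=1):
            self.a, self.m, self.x = 16807, 2**31 - 1, seed

        def next(self):
            self.x = (self.a * self.x) % self.m
            return self.x / self.m            # uniform variate in (0, 1)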

Thanks,
Chip Roberson, Graduate Student, College of William and Mary

-------------------------------------------------------------------------
Chip Roberson ARPANET: csrobe@icase.arpa
1105 London Company Way BITNET: $csrobe@wmmvs.bitnet
Williamsburg, VA 23185 UUCP: ...!uunet!pyrdc!gmu90x!wmcs!csrobe
-------------------------------------------------------------------------

------------------------------

Date: Fri, 11 Dec 87 10:28:34 PST
From: Kaiti Riley <kaiti%meridian@ads.arpa>
Subject: Neural computations based on timing and phase in neural nets


Most of the literature on neural networks emphasises spatial
computation in networks by modifying weights. Another way to view neural
computations is to examine the relative timing and phase in neural networks.
As an example, Reichardt's original work on motion detection in fly retinae
looked at the timing difference between two adjacent detectors to
determine direction of motion (updated versions by van Santen and Sperling,
Adelson and Bergen, Watson and Ahumada, etc.).
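
For concreteness, a bare-bones discrete-time sketch of the Reichardt idea
follows: each of two mirror-image arms multiplies one detector's signal by a
delayed copy of its neighbour's, and the difference of the two arms signals
direction. The one-sample delay and the toy stimulus are simplifying
assumptions, not details of the original model.

    # A bare-bones Reichardt-style correlator, sketched in discrete time.
    def reichardt_response(left, right):
        """left, right: equal-length sequences of samples from two adjacent detectors."""
        prev_left, prev_right = 0.0, 0.0
        total = 0.0
        for l, r in zip(left, right):
            # delayed-left * right, minus the mirror arm delayed-right * left
            total += prev_left * r - prev_right * l
            prev_left, prev_right = l, r
        return total   # positive for left-to-right motion, negative for the reverse

    # A bright spot stepping left-to-right across the two detectors:
    left  = [1, 0, 0, 1, 0, 0]
    right = [0, 1, 0, 0, 1, 0]
    print(reichardt_response(left, right))   # > 0, i.e. rightward motion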

I am currently looking for work that specifically examines dynamic
timing and phase-based computations in neural(-like) networks. (Work in
asynchronous parallel automata systems appears to also be related, but
I am also in need of references on this related subject.) I will post
responses to the net.

Thanks in advance!

------------------------------


Date: Tue, 22 Dec 87 16:30:10 est
From: ucbcad!ames.UUCP!ulowell!cg-atla!jmacdon@UCBVAX.BERKELEY.EDU
Subject: MacBrain

I've seen several references to a software package called MacBrain by
Neuronics of Cambridge, Massachusetts. The most recent reference was
on page 103 of the November issue of MacWorld magazine. None of the
references to date have provided sufficient information for ordering
the package or even contacting Neuronics. They are not listed in the
phone directory nor is directory assistance of any assistance. If a
hard pointer to Neuronics is available could some kind soul email it
to me or, if Neuronics is no more, would someone with the package be
willing to provide me with a copy? I would also be interested in
hearing from anyone who has used MacBrain.

Thanks.
Jeff MacDonald
jmacdon@cg-atla
617-658-0200 ext. 5406

------------------------------

Date: Wed 30 Dec 87 13:11:38-PST
From: Rakesh & <MOHAN%DWORKIN.usc.edu@oberon.usc.edu>
Subject: Hopfield Networks


I am using Hopfield Networks for optimization in some Computer
Vision problems. I would like to know of some strategies to set
the various weights. Also, has somebody experimented with
non-symmetric weights and/or self-excitation (of nodes) in
similar networks?


Do the weights depend on the size of the problem? In other words,
if a set of weights is found to work for a given problem, do
the same weights work if the number of nodes changes
by some orders of magnitude?
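
For reference, the standard symmetric formulation these questions start from
is sketched below: symmetric weights, zero diagonal (no self-excitation),
asynchronous threshold updates, and the usual quadratic energy. Relaxing the
symmetry or the zero diagonal is exactly the variation asked about, and
convergence is then no longer guaranteed.

    # Standard Hopfield-style reference sketch, not an answer on weight scaling.
    import random

    def energy(W, b, s):
        """E = -1/2 * sum_ij w_ij s_i s_j - sum_i b_i s_i, with s_i in {-1, +1}."""
        n = len(s)
        quad = sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
        return -0.5 * quad - sum(b[i] * s[i] for i in range(n))

    def relax(W, b, s, sweeps=100):
        """Asynchronous updates; with symmetric W and zero diagonal, energy never rises."""
        n = len(s)
        for _ in range(sweeps):
            for i in random.sample(range(n), n):
                field = sum(W[i][j] * s[j] for j in range(n)) + b[i]
                s[i] = 1 if field >= 0 else -1
        return s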

Thanks in advance,

Rakesh Mohan

mohan@oberon.USC.EDU

------------------------------

Date: Thu, 7 Jan 88 12:16 N
From: FLEUR%HLERUL55.BITNET@cunyvm.cuny.edu
Subject: Mathematical Linguistics

Dear fellow networkers,

At the Department of Experimental Psychology (Leyden, Holland) we did a
simulation based on 'neuronet-like' principles. With our simulation
we intended to explore the field of human mental association of words.
The algorithm of the program was based on Quillian's semantic memory model.
This model can be adapted to neuronets when the semantics of a word is
considered to be represented by the activity of a cluster of parallel active
neurons. In our view the members of the cluster can be situated 'all over the brain'.

We translated mental association into an exchange of 'activation' between
two or more clusters. We described this communication with the aid of
intertwined first-order partial differential equations concerning
the change in time of total (neural) activity in a cluster connected to
other clusters.
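
The message does not give the equations themselves; the sketch below is only
a generic example of coupled first-order equations of this kind, leaky
integration of cluster activity with linear inter-cluster coupling, stepped
with a plain Euler integrator.

    # Generic coupled first-order activity exchange, dA_i/dt = -decay*A_i + sum_j c_ij*A_j.
    def step(activity, coupling, decay=0.5, dt=0.01):
        n = len(activity)
        deriv = [
            -decay * activity[i]
            + sum(coupling[i][j] * activity[j] for j in range(n) if j != i)
            for i in range(n)
        ]
        return [activity[i] + dt * deriv[i] for i in range(n)]

    # Two clusters with mutual excitation; activating one gradually raises the other.
    A = [1.0, 0.0]
    C = [[0.0, 0.3], [0.3, 0.0]]
    for _ in range(100):
        A = step(A, C)
    print(A)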

We would like to contact other workers in this line of research.

Greetings:
Han Geurdes
Erik Fleur

Dep. Exp. Psy
State Univ. Leyden
Hooigracht 15
Leyden

GEURDES@HLERUL55

------------------------------

Date: Thu, 7 Jan 88 05:33 PST
From: nesliwa%nasamail@ames.arc.nasa.gov (NANCY E. SLIWA)
Subject: Commercial products based on neural nets?


I've had a request for information about the existence of any commercial
products based on neural net technology: not products for developing neural
net applications, like the HNC and Sigma neurocomputers, but actual products
that use neuromimetic approaches.

I've heard/read somewhere a while back about two things:
(1) a California-based product for processor board layout
(2) a McLean, VA-based company that has been selling neural-based
products since the '60s

Does anyone know the specifics of these items, and/or especially any
other examples? Please respond to me directly, and I'll summarize to
the list. Thanks!

Nancy Sliwa
MS 152D
NASA Langley Research Center
Hampton, VA 23665-5225
804/865-3871

nesliwa%nasamail@ames.arpa or nancy@grasp.cis.upenn.edu


------------------------------

Date: Thu,17 Dec 87 16:22:49 GMT
From: Julian_Dow@vme.glasgow.ac.uk
Subject: NAMING CONVENTION

I just got registered on the BB, and read with delight the debate on
catchphrases. I too despise "NEURAL NETWORKS", but acknowledge that
the term is here to stay. Why not just adopt a convention that I suspect
is already coming to pass:

NEURONAL NETWORK for a network of neurons, i.e. in biology

NEURAL NETWORK for a network of the things that electronic
engineers imagine to be neurons.

The difference is subtle, but unmistakable.

------------------------------

Date: Wed, 23 Dec 87 13:40:17 PST
From: Mark Gluck <gluck@psych.stanford.edu>
Subject: Stanford Adaptive Networks Colloquium

Stanford University Interdisciplinary Colloquium Series:

ADAPTIVE NETWORKS
AND THEIR APPLICATIONS

Co-sponsored by the Depts. of Psychology and Electrical Engineering

Winter Quarter Schedule
-----------------------

Jan. 12th (Tuesday, 3:15pm)
    Harry Klopf (Wright Aeronautical Labs, U.S. Air Force)
    "The Drive-Reinforcement Neuronal Model: A Real-time Learning
    Mechanism for Unsupervised Learning"

Feb. 9th (Tuesday, 3:15pm)
    Tom Landauer (Bellcore)
    "Trying to Teach a Backpropagation Network to Recognize Elements
    of Continuous Speech"

Feb. 12th (Friday, 1:15pm)
    Yann Le Cun (Dept. of Computer Science, University of Toronto)
    "Pseudo-Newton and Other Variations of Backpropagation"

Mar. 9th (Tuesday, 3:45pm)
    Jeffrey Elman (Dept. of Linguistics, U.C. San Diego)
    "Processing Language Without Symbols? A Connectionist Approach"

Mar. 29th (Tuesday, 3:15pm)
    Dan Hammerstrom (Oregon Graduate Center)
    "Casting Neural Networks in Silicon: Good News and Bad News"



Additional Information
----------------------

Focus: Adaptive networks, parallel-distributed processing,
connectionist models, computational neuroscience, the neurobiology
of learning and memory, and neural models.

Format: Tea will be served 15 minutes prior to the talks, outside
the lecture hall. The talks (including discussion) last about
one hour. Following each talk, there will be a reception in
the fourth floor lounge of the Psychology Dept.

Location: Unless otherwise noted, all talks will be held in room 380-380W,
which can be reached through the lower level courtyard between the
Psychology and Mathematical Sciences buildings.

Technical Level: These talks will be technically oriented and are intended
for persons actively working in related areas. They are not intended
for the newcomer seeking general introductory material.

Information: To be placed on an electronic mail distribution list for
information about these and other adaptive network events in the Stanford area,
send email to netlist@psych.stanford.edu. For additional information,
contact Mark Gluck, Bldg. 420-316; (415) 725-2434 or email to
gluck@psych.stanford.edu

Program Committee: Bernard Widrow (E.E.), David Rumelhart, Misha
Pavel, Mark Gluck (Psychology).

------------------------------

Date: Mon, 28 Dec 87 16:01:08 EST
From: MaryAnne Fox <mf01@gte-labs.csnet>
Subject: Tech Report -- Connectionist Bibliography

SELECTED BIBLIOGRAPHY ON CONNECTIONISM

Oliver G. Selfridge
Richard S. Sutton
Charles W. Anderson

GTE Labs


An annotated bibliography of 38 connectionist works of historical
or current interest.


For copies, reply to this message with your USmail address, or send
to: Mary Anne Fox
GTE Labs MS-44
Waltham, MA 02254
mf01@GTE-Labs.csNet

------------------------------

Date: Wed, 30 Dec 87 18:45:03 est
From: Bob Allen <rba@flash.bellcore.com>

Preprints available of a paper presented at the conference on
Neural Information Processing Systems

Stochastic Learning Networks and their Electronic Implementation
Joshua Alspector, Robert B. Allen, Victor Hu, and Srinagesh Satyanarayana

We describe a family of learning algorithms that operate on a recurrent,
symmetrically connected, neuromorphic network that, like the Boltzmann
machine, settles in the presence of noise. These networks learn by
modifying synaptic connection strengths on the basis of correlations
seen locally by each synapse. We describe a version of the supervised
learning algorithm for a network with analog activation functions. We
also demonstrate unsupervised competitive learning with this approach,
where weight saturation and decay play an important role, and describe
preliminary experiments in reinforcement learning, where noise is used
in the search procedure. We identify the above described phenomena as
elements that can unify learning techniques at a physical microscopic
level.

These algorithms were chosen for ease of implementation in VLSI. We
have designed a CMOS test chip in 2 micron rules that can speed up the
learning about a millionfold over an equivalent simulation on a VAX
11/780. The speedup is due to parallel analog computation for summing
and multiplying weights and activations, and the use of physical
processes for generating random noise. The components of the test chip
are a noise amplifier, a neuron amplifier, and a 300 transistor adaptive
synapse, each of which is separately testable. These components are
also integrated into a 6 neuron and 15 synapse network. Finally, we
point out techniques for reducing the area of the electronic correla-
tional synapse both in technology and design and show how the algorithms
we study can be implemented naturally in electronic systems.
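
For readers unfamiliar with the Boltzmann-machine comparison, the textbook
correlation-based update is sketched below; it is given only as background
and is not necessarily the exact rule used in the paper.

    # Textbook Boltzmann-machine weight update: each weight moves in proportion
    # to the difference between pairwise correlations measured with the outputs
    # clamped and with the network running free.
    def boltzmann_update(weights, clamped_corr, free_corr, lr=0.1):
        """weights, clamped_corr, free_corr: n x n lists; returns updated weights."""
        n = len(weights)
        return [
            [weights[i][j] + lr * (clamped_corr[i][j] - free_corr[i][j])
             for j in range(n)]
            for i in range(n)
        ]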

Reprints of an earlier paper "A Neuromorphic VLSI Learning System"
J. Alspector and R.B. Allen, in: Advanced Research in VLSI, edited by
P. Losleben, are also available.

Contact: Bob Allen, 2A-367, Bell Communications Research,
Morristown, NJ 07960, rba@bellcore.com


------------------------------

End of NEURON-Digest
********************
