NEURON Digest Thu Feb 11 17:26:23 CST 1988 Volume 3 / Issue 5
Today's Topics:
neural-net
Learning analogies in neural nets
Adaline learning rule
Who's doing What in neural net research?
A short definition of Genetic Algorithms
Re: Cognitive System using Genetic Algo
Stanford Adaptive Networks Colloquium
Preprint available on adaptive network models of human learning
----------------------------------------------------------------------
Date: Wed, 03 Feb 88 12:46:07 EST
From: wall <WALL%SBCCVM.BITNET@mitvma.mit.edu>
Subject: Re: NEURON Digest - V3 #3
Does anyone know of any work being done on more biologically motivated
models of neural nets? Let's face it, this PDP stuff is getting pretty far
afield of real neurons, and I just wondered whether recent research in
neurology is being put to use in AI systems. Any help on this would be
greatly appreciated.
Thanks in advance,
wallace marshall
WALL@SBCCVM.BITNET
------------------------------
Date: 5 Feb 88 04:39:58 GMT
From: Francis Kam <nuchat!uhnix1!cosc2mi@uunet.uu.net>
Subject: neural-net
I am working on the learning aspects of the neural net model in computing
and would like to know what's happening in the rest of the neural net
community in the following areas:
1) neural net models
2) neural net learning rules
3) experimental (analog, digital, optical) results of any kind with
figures;
4) neural net machines (commercial, experimental, any kind);
5) any technical reports in these areas;
For information exchange and discussion purposes,
please send mail to mkkam@houston.edu.
Thank you.
------------------------------
Date: 9 Feb 88 04:49:12 GMT
From: Francis Kam <nuchat!uhnix1!cosc2mi@uunet.uu.net>
Subject: Who's doing What in neural net research?
I posted a request for neural net research information which was considered
too general. My objective is to gather information on technical reports,
papers, ongoing research projects, experimental results, etc. I think
information like this will facilitate further research in neural nets or
related computation for the research community as a whole. If there are
already sources of information like these, please let me know. Information
that I collect from responses to this article will be posted to the
bulletin board or sent by email.
To be specific, the areas of interest (to me) are (not meant to be
comprehensive):
1) neural net model (or PDP model) as a general model of parallel
computation -- validity, applicable problem domains, performance
analysis, levels and structures, etc.;
2) neural machines -- experimental and commercial; I'm aware of the Mark III
and Mark IV by TRW (details not known); any work on hypercubes and the
Connection Machine, etc.; in a communication-intensive model like a
neural net, has any successful simulation been done on an Ethernet of
Suns, for example;
3) neural net programming environment -- language of neuronal computation
description, network layout description, monitoring, software development
environment etc.;
4) learning in neural nets -- learning rules, memory capacity, behavior such
as forgetting and relearning, etc.;
5) applications -- optimization problems, AI applications, and anything else
showing the usefulness of this model.
Please send your responses to:
CSNET : mkkam@houston.edu
USmail: Francis Kam
PGH 550, Department of Computer Science
University of Houston
4800 Calhoun,
Houston, TX 77004.
------------------------------
Date: 8 Feb 88 15:17:00 GMT
From: merrill@iuvax.cs.indiana.edu
Subject: Learning analogies in neural nets
I would be interested in hearing from anyone who has been working on
analogy in neural networks, and, more specifically, learning analogies
in that context. Please E-mail any responses, and I'll post a summary
to the network.
--- John Merrill
ARPA: merrill@iuvax.cs.indiana.edu
UUCP: {pyramid, ihnp4, pur-ee, rutgers}!iuvax!merrill
------------------------------
Date: 9 Feb 88 18:47:03 GMT
From: Thomas E Burns <macintos@ee.ecn.purdue.edu>
Subject: Adaline learning rule
Does anyone know of any studies on the learning/recall capacity of
the Adaline learning rule? Also, does anyone know which learning
rule has the highest learning/recall capacity?
I would appreciate any information.
Will at Purdue
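For reference, the Adaline rule in question is the Widrow-Hoff (LMS) rule:
each weight moves in proportion to the error between the linear output and
the target. A minimal sketch in Python follows; the learning rate and the
single training pattern are illustrative only.

    import numpy as np

    def adaline_update(w, x, target, eta=0.1):
        """One Widrow-Hoff (LMS) step: move the weights so as to reduce
        the squared error between the linear output w.x and the target."""
        y = np.dot(w, x)                   # linear (pre-threshold) output
        return w + eta * (target - y) * x

    # Tiny worked example: drive the output toward +1 for one input pattern.
    w = np.zeros(3)
    x = np.array([1.0, 1.0, -1.0])
    for _ in range(20):
        w = adaline_update(w, x, target=1.0)
    print(np.dot(w, x))                    # close to 1.0 after 20 steps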
------------------------------
Date: Fri, 5 Feb 88 10:08:02 PST
From: Rik Belew <rik@sdcsvax.ucsd.edu>
Subject: A short definition of Genetic Algorithms
Mark Goldfain asks:
Would someone do me a favor and post or email a short definition of the
term "Genetic Learning Algorithm" or "Genetic Algorithm" ?
I feel that "genetic algorithms" has two not-quite-distinct meanings
these days. First, there is a particular (class of) algorithm developed
by John Holland and his students. This GA(1) has as its most distinctive
feature the "cross-over" operator, which Holland has gone to some
effort to characterize analytically. Then there is a broader class GA(2)
of genetic algorithms (sometimes also called "simulated evolution") that
bear some loose resemblance to population genetics. These date back
at least to Fogel, Owens and Walsh (1966). Generally, these
algorithms make use of only a "mutation" operator.
The complication comes with work like Ackley's thesis (CMU, 1987),
which refers to Holland's GA(1) but which is most accurately
described as a GA(2).
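To make the distinction concrete, here is a minimal sketch of the two
operators on bit-strings (Python; the representation and the mutation rate
are illustrative, not tied to any particular system). GA(1) is
distinguished by cross-over; many GA(2) schemes use only mutation:

    import random

    def crossover(parent_a, parent_b):
        """One-point cross-over, the hallmark of GA(1): splice a prefix
        of one parent onto the matching suffix of the other."""
        point = random.randint(1, len(parent_a) - 1)
        return parent_a[:point] + parent_b[point:]

    def mutate(genotype, rate=0.05):
        """Bit-flip mutation, the mainstay of mutation-only GA(2) schemes."""
        return [1 - bit if random.random() < rate else bit for bit in genotype]

    a = [0] * 8
    b = [1] * 8
    print(crossover(a, b))      # e.g. [0, 0, 0, 1, 1, 1, 1, 1]
    print(mutate(b))            # b with an occasional bit flipped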
Richard K. Belew
rik@cs.ucsd.edu
Computer Science & Engr. Dept. (C-014)
Univ. Calif - San Diego
San Diego, CA 92093
------------------------------
Date: 5 Feb 88 18:22:21 GMT
From: g451252772ea@deneb.ucdavis.edu
Subject: Re: Cognitive System using Genetic Algo
I offer definitions by (1) aspersion (2) my broad characterization (3) one
of J Holland's shortest canonical characterizations and (4) application.
(1) GA are anything J Holland and/or his students say they are. (But this
_is_ an aspersion on a rich, subtle and creative synthesis of formal systems
and evolutionary dynamics.)
(2) Broadly, GAs are an optimization method for complex (multi-peaked, multi-
dimensional, ill-defined) fitness functions. They reliably avoid local
maxima/minima, and the search time is much less than random search would
require. Production rules are employed, but only as mappings from bit-strings
(with wild-cards) to other bit-strings, or to system outputs. System inputs
are represented as bit-strings. The rules are used stochastically, and in
parallel (at least conceptually; I understand several folks are doing
implementations, too).
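As a toy illustration of that rule representation (my own sketch, not
Holland's code): a rule condition is a string over {0, 1, #}, where '#' is
the wild-card, and it matches an input bit-string when every non-wild-card
position agrees:

    def matches(condition, message):
        """True if a classifier condition (a string over 0/1/#) matches an
        input bit-string; the wild-card '#' matches either bit."""
        return len(condition) == len(message) and all(
            c == '#' or c == m for c, m in zip(condition, message))

    print(matches('1#0#', '1101'))   # True: the '#' positions match anything
    print(matches('1#0#', '1111'))   # False: the third position requires a 0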
A pretty good context paper for perspective (though weak on the definition of
GAs!) is the Nature review 'New optimization methods from physics and
biology' (9/17/87, pp. 215-19). The author discusses neural nets,
simulated annealing, and one example of a GA, all applied to the TSP, but
comments that "... a thorough comparison ... _would be_ very interesting"
(my emphasis).
(3) J. Holland, "Genetic algorithms and adaptation", pp. 317-33 in
ADAPTIVE CONTROL OF ILL-DEFINED SYSTEMS, 1984, Ed. O. Selfridge, E. Rissland,
M. A. Arbib. Page 319 has:
"In brief, and very roughly, a genetic algorithm can be looked
upon as a sampling procedure that draws samples from the set C; each
sample drawn has a value, the fitness of the corresponding genotype.
From this point of view the population of individuals at any time t,
call it B(t), is a _set_ of samples drawn from C. The genetic algo-
rithm observes the fitnesses of the individuals in B(t) and uses
this information to generate and test a new set of individuals,
B(t+1). As we will soon see in detail, the genetic algorithm uses
the familiar "reproduction according to fitness" in combination with
crossing over (and other genetic operators) to generate the new
individuals. This process progressively biases the sampling pro-
cedure toward the use of _combinations_ of alleles associated with
above-average fitness. Surprisingly, in a population of size M, the
algorithm effectively exploits some multiple of M^3 combinations in
exploring C. (We shall soon see how this happens.) For populations
of more than a few individuals this number, M^3, is vastly greater
than the total number of alleles in the population. The correspond-
ing speedup in the rate of searching C, a property called _implicit
parallelism_, makes possible very high rates of adaptation. Moreover,
because a genetic algorithm uses a distributed database (the popu-
lation) to generate new samples, it is all but immune to some of the
difficulties -- false peaks, discontinuities, high-dimensionality,
etc. -- that commonly attend complex problems."
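Read as code, the passage above amounts to a loop roughly like the following
sketch (mine, not Holland's; fitness-proportional selection plus one-point
cross-over and mutation, with a toy "count the 1-bits" fitness just to show
the drift toward fitter combinations):

    import random

    def next_generation(population, fitness, cross_rate=0.9, mut_rate=0.01):
        """One step from B(t) to B(t+1): draw parents in proportion to
        fitness, recombine with one-point cross-over, then mutate."""
        scores = [fitness(g) for g in population]
        new_pop = []
        for _ in range(len(population)):
            mom, dad = random.choices(population, weights=scores, k=2)
            child = list(mom)
            if random.random() < cross_rate:
                point = random.randint(1, len(mom) - 1)
                child = mom[:point] + dad[point:]
            child = [1 - b if random.random() < mut_rate else b for b in child]
            new_pop.append(child)
        return new_pop

    # Toy fitness: count the 1-bits; the population drifts toward all-ones.
    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
    for _ in range(50):
        pop = next_generation(pop, fitness=sum)
    print(max(sum(g) for g in pop))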
Well, _I_ shall soon close here, but first the few examples of applications
that I know of (the situation reminds me of the joke about the two rubes
visiting New York for the first time, getting off the bus with all of
$2.50. What to do? One takes the money, disappears into a drugstore,
and reappears having bought a box of Tampax. Quoth he, "With Tampax,
you can do _anything_!") Anyway:
o As noted, the TSP is a canonical candidate.
o A student of Holland has implemented a control algorithm for
a gas pipe-line center, which monitors and adaptively controls flow
rates based on cyclic usages and arbitrary, even ephemeral, constraints.
o Of course, some students have done some real (biological) population
genetics studies, which I note are a tad more plausible than the usual
haploid, deterministic equations.
o Byte mag. has run a few articles, e.g. 'Predicting International
Events' and 'A bit-mapped Classifier' (both 10/86).
o Artificial animals are being modelled in artificial worlds. (When
will the Vivarium let some of their animated blimps ("fish") be so programmed?)
Finally, I noted above that the production rules take system inputs as
bit-strings. This representation allows for induction, and opens up a
large realm of cognitive science issues, addressed by Holland et al in
their newish book, INDUCTION.
Hope this helps. I really would like to hear about other application
areas; pragmatic issues are still unclear in my mind also, but as is
apparent, the GA model has intrinsic appeal.
Ron Goldthwaite / UC Davis, Psychology and Animal Behavior
'Economics is a branch of ethics, pretending to be a science;
ethology is a science, pretending relevance to ethics.'
------------------------------
Date: Mon, 1 Feb 88 14:13:38 EST
From: "Stephen J. Hanson" <jose@tractatus.bellcore.com>
Subject: please post
Connectionist Modeling and Brain Function:
The Developing Interface
February 25-26, 1988
Princeton University
Lewis Thomas Auditorium
This symposium explores the interface between connectionist modeling
and neuroscience by bringing together pairs of collaborating speakers
or researchers working on related problems. The speakers will consider
the current state and future prospects of four fields in which convergence
between experimental and computational approaches is developing rapidly.
Thursday -- Associative Memory and Learning

 9:00 am  Introductory Remarks
          Professor G. A. Miller
 9:15 am  Olfactory Process and Associative Memory: Cellular and Modeling
          Studies
          Professor A. Gelperin, AT&T Bell Laboratories and Princeton University
10:30 am  Simple Neural Models of Classical Conditioning
          Dr. G. Tesauro, Center for Complex Systems Research
    Noon  Lunch
 1:30 pm  Brain Rhythms and Network Memories: I. Rhythms Drive Synaptic Change
          Professor G. Lynch, University of California, Irvine
 3:00 pm  Brain Rhythms and Network Memories: II. Rhythms Encode Memory
          Hierarchies
          Professor R. Granger, University of California, Irvine
 4:30 pm  General Discussion
 5:30 pm  Reception, Green Hall, Langfeld Lounge

Friday -- Sensory Development and Plasticity

 9:00 am  Preliminaries and Announcements
 9:15 am  Role of Neural Activity in the Development of the Central Visual
          System: Phenomena, Possible Mechanism and a Model
          Professor Michael P. Stryker, University of California, San Francisco
10:30 am  Towards an Organizing Principle for a Perceptual Network
          Dr. R. Linsker, M.D., Ph.D., IBM Watson Research Lab
    Noon  Lunch
 1:30 pm  Biological Constraints on a Dynamic Network: Somatosensory Nervous
          System
          Dr. T. Allard, University of California, San Francisco
 3:00 pm  Computer Simulation of Representational Plasticity in Somatosensory
          Cortical Maps
          Professor Leif H. Finkel, Rockefeller University and The Neuroscience
          Institute
 4:30 pm  General Discussion
 5:30 pm  Reception, Green Hall, Langfeld Lounge
Organizers:
  Stephen J. Hanson, Bellcore & Princeton U. Cognitive Science Laboratory
  Carl R. Olson, Princeton U.
  George A. Miller, Princeton U.

Sponsored by:
  Department of Psychology
  Human Information Processing Group
Travel Information
Princeton is located in central New Jersey, approximately 50 miles
southwest of New York City and 45 miles northeast of Philadelphia. To
reach Princeton by public transportation, one usually travels through
one of these cities. We recommend the following routes:
By Car
From NEW YORK - - New Jersey Turnpike to Exit #9, New Brunswick; Route
18 West (approximately 1 mile) to U.S. Route #1 South, Trenton. From
PHILADELPHIA - - Interstate 95 to U.S. Route #1 North. From
WASHINGTON - - New Jersey Turnpike to Exit #8, Hightstown; Route 571.
Princeton University is located one mile west of U.S. Route #1. It
can be reached via Washington Road, which crosses U.S. Route #1 at the
Penns Neck Intersection.
By Train
Take Amtrak or New Jersey Transit train to Princeton Junction, from
which you can ride the shuttle train (known locally as the "Dinky")
into Princeton. Please consult the Campus Map below for directions on
walking to Lewis Thomas Hall from the Dinky Station.
For any further information concerning the conference please
contact our conference planner:
Ms. Shari Landes
Psychology Department
Princeton University, 08544
Phone: 609-452-4663
Elec. Mail: shari@mind.princeton.edu
------------------------------
Date: Thu, 4 Feb 88 09:30:44 PST
From: Mark Gluck <netlist@psych.stanford.edu>
Subject: Stanford Adaptive Networks Colloquium
Stanford University Interdisciplinary Colloquium Series:
ADAPTIVE NETWORKS AND THEIR APPLICATIONS
**************************************************************************
Feb. 9th (Tuesday, 3:15pm)
Tom Landauer, Bellcore
"Trying to Teach a Backpropagation Network to Recognize Elements of
Continuous Speech"
**************************************************************************
Abstract
We have been trying to get a backpropagation network to learn to spot
elementary speech sounds (usually "demisyllables") in continuous speech.
The input is typically a 150 msec moving window of sound preprocessed
to a spectrotemporal representation resembling the transform
imposed by the ear. Output nodes represent speech sounds; their
goal states are determined by what speech sounds a human
recognizes in the same sound segment. I will describe a variety of
small experiments on training regimens and parameters, feedback variations
and tests of generalization.
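For orientation, the kind of setup described might look roughly like the
sketch below; the window length, frame size, unit counts, and learning rate
are placeholders, not the actual Bellcore configuration. A fixed window of
spectral frames is flattened into the input layer, and each output unit
stands for one speech sound, trained toward whatever sounds a listener
reports in that segment.

    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, n_bins, n_hidden, n_sounds = 15, 20, 40, 60   # placeholder sizes
    n_in = n_frames * n_bins        # one flattened spectrotemporal window

    W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
    W2 = rng.normal(0.0, 0.1, (n_sounds, n_hidden))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(window, targets, lr=0.1):
        """One plain backpropagation step on a single window; 'targets'
        marks (0/1) which speech sounds a listener heard in the segment."""
        global W1, W2
        h = sigmoid(W1 @ window)                  # hidden-layer activations
        y = sigmoid(W2 @ h)                       # one output unit per sound
        delta_out = (y - targets) * y * (1 - y)   # squared-error output delta
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, window)
        return y

    window = rng.random(n_in)                     # stand-in for a real window
    targets = np.zeros(n_sounds)
    targets[3] = 1.0                              # the sound heard here
    train_step(window, targets)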
. . . .
Format: Tea will be served 15 minutes prior to the talk, outside
the lecture hall. The talks (including discussion) last about
one hour. Following the talk, there will be a reception in
the fourth floor lounge of the Psychology Dept.
Location: Room 380-380W, which can be reached through the lower level
between the Psychology and Mathematical Sciences buildings.
Technical Level: These talks will be technically oriented and are intended
for persons actively working in related areas. They are not intended
for the newcomer seeking general introductory material.
Coming up next: Yann Le Cun (Univ. of Toronto) on Friday, Feb. 12th
at 1:15pm in room 420-050.
Co-sponsored by the Depts. of Psychology and Electrical Engineering
------------------------------
Date: Tue, 9 Feb 88 09:37:33 PST
From: Mark Gluck <netlist@psych.stanford.edu>
Subject: Stanford Adaptive Networks Colloquium
Stanford University Interdisciplinary Colloquium Series:
Adaptive Networks and their Applications
NOTE: TWO TALKS THIS WEEK
Tom Landauer & Yann Le Cun
TODAY:
Feb. 9th (Tuesday, 3:15pm)
Tom Landauer, Bellcore
"Trying to Teach a Backpropagation Network to Recognize Elements of
Continuous Speech"
**************************************************************************
FRIDAY:
Feb. 12th (Friday, 1:15pm) -- note time
Yann Le Cun, Dept. of Computer Science, University of Toronto, Canada M5S 1A4
"Pseudo-Newton and Other Variations of Backpropagation"
Abstract
Among all the learning procedures for connectionist networks, the
back-propagation algorithm (BP) is probably the most widely used. However,
little is known about its convergence properties. We propose a new
theoretical framework for deriving BP based on the Lagrangian formalism.
This method is similar to some of the methods used in optimal control theory.
We derive some variations of the basic procedure, including a pseudo-Newton
method that uses the second derivative of the cost function.
We also present some results involving networks with constrained weights.
It is shown that this technique can be used to put some a priori
knowledge into the network in order to improve generalization.
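As a rough illustration of the pseudo-Newton idea (one reading of the
abstract, not Le Cun's code): instead of a single global step size, each
weight's gradient is scaled by an estimate of the corresponding second
derivative of the cost, so flat directions take larger steps and sharply
curved ones take smaller steps.

    import numpy as np

    def pseudo_newton_step(w, grad, second_deriv, lr=1.0, mu=0.1):
        """Scale each weight's gradient by an approximate (diagonal) second
        derivative of the cost; 'mu' keeps the step bounded where the
        curvature estimate is near zero."""
        return w - lr * grad / (np.abs(second_deriv) + mu)

    # Toy quadratic cost 0.5 * sum(h * w**2) with very different curvatures:
    h = np.array([100.0, 1.0])     # exact second derivatives of the toy cost
    w = np.array([1.0, 1.0])
    for _ in range(5):
        grad = h * w               # gradient of the toy cost
        w = pseudo_newton_step(w, grad, second_deriv=h)
    print(w)   # both coordinates shrink toward 0 despite the 100:1 curvature gap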
. . . .
Format: Tea will be served 15 minutes prior to the talk, outside
the lecture hall. The talks (including discussion) last about
one hour. Following the talk, there will be a reception in
the fourth floor lounge of the Psychology Dept.
Location: Room 380-380W, which can be reached through the lower level
between the Psychology and Mathematical Sciences buildings. The
Friday talk will be next door in 050.
Technical Level: These talks will be technically oriented and are intended
for persons actively working in related areas. They are not intended
for the newcomer seeking general introductory material.
Information: For additional information, contact Mark Gluck
(gluck@psych.stanford.edu) 415-725-2434.
Coming up next:
Mar. 9th (Wednesday, 3:45pm)
Jeffrey Elman, Dept. of Linguistics, U.C. San Diego
"Processing Language Without Symbols? A Connectionist Approach"
* * *
Co-Sponsored by: Departments of Electrical Engineering (B. Widrow) and
Psychology (D. Rumelhart, M. Gluck), Stanford Univ.
------------------------------
Date: Fri, 5 Feb 88 06:25:22 PST
From: Mark Gluck <gluck@psych.stanford.edu>
Subject: Preprint available on adaptive network models of human learning
Copies of the following preprint are available to those who
might be interested:
Gluck, M. A., & Bower, G. H. (in press). Evaluating an adaptive
network model of human learning. Journal of Memory and Language.
Abstract
--------
This paper explores the promise of simple adaptive networks as models of
human learning. The least-mean-squares (LMS) learning rule of networks
corresponds to the Rescorla-Wagner model of Pavlovian conditioning,
suggesting interesting parallels in human and animal learning. We
review three experiments in which subjects learned to classify
patients according to symptoms which had differing correlations with
two diseases. The LMS network model predicted the results of these
experiments, comparing somewhat favorably with several competing
learning models. We then extended the network model to deal with some
attentional effects in human discrimination learning, wherein cue
weight reflects attention to a cue. We further extended the model to
include conjunctive features, enabling it to approximate classic
results of the difficulty ordering of learning differing types of
classifications. Despite the well-known limitations of one-layer
network models, we nevertheless promote their use as benchmark models
because of their explanatory power, simplicity, aesthetic grace, and
approximation, in many circumstances, to multilayer network models.
The successes of a simple model suggest that the LMS algorithm is more
accurate than competing learning rules, while its failures inform and
constrain the class of more complex models needed to explain complex
results.
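For concreteness, the LMS (Rescorla-Wagner) rule in the category-learning
setting sketched above can be written as follows; the symptom codings,
probabilities, and learning rate are illustrative, not those used in the
experiments.

    import numpy as np

    def lms_trial(w, symptoms, disease, lr=0.05):
        """One training trial of the LMS / Rescorla-Wagner rule: 'symptoms'
        is a 0/1 cue vector, 'disease' is coded +1 or -1, and every present
        cue shares in the prediction error."""
        prediction = np.dot(w, symptoms)
        return w + lr * (disease - prediction) * symptoms

    rng = np.random.default_rng(1)
    # Illustrative structure: symptom 0 is strongly diagnostic of the "+1"
    # disease, symptom 3 weakly so, symptoms 1 and 2 not at all.
    p_given_plus  = np.array([0.9, 0.5, 0.5, 0.6])
    p_given_minus = np.array([0.1, 0.5, 0.5, 0.4])
    w = np.zeros(4)
    for _ in range(2000):
        disease = 1.0 if rng.random() < 0.5 else -1.0
        p = p_given_plus if disease > 0 else p_given_minus
        symptoms = (rng.random(4) < p).astype(float)
        w = lms_trial(w, symptoms, disease)
    print(w)    # the weight on symptom 0 ends up far larger than the others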
. . . .
For copies, netmail to gluck@psych.stanford.edu or USmail to:
Mark Gluck
Dept. of Psychology
Bldg. 420; Jordan Hall
Stanford Univ.
Stanford, CA 94305
(415) 725-2434
------------------------------
End of NEURON-Digest
********************