NEURON Digest	Thu Jul 23 10:28:10 CDT 1987   Volume 2 / Issue 17 
Today's Topics:

Request for information.
Neural Networks
radical hardware
Submission for comp-ai-digest
neural networks, learning recognition algorithms, connectionist models
OPTICAL COMPUTING
The Stability/Plasticity Dilemma in Connectionist Networks
about other schools
DARPA Program Announcement
Neural Systems program started at CalTech
Conference Announcement - 1988 INNS Annual Meeting
Technical Report: Strategy Learning with Connectionist Networks
Technical Report: Two Problems with Backpropagation...

----------------------------------------------------------------------

Date: Fri, 10 Jul 87 07:17:01 PDT
From: "Susan L. Alderson" <suzie@cod.nosc.mil>
Subject: Request for information.

At the ICNN a very interesting paper was presented on the link between
CHAOS and Neural nets. Since we have to wait until September to get an
actual copy of the paper, I would appreciate some interim assistance.
Has anybody out there taken a look at this and found good references, ideas,
or whatever? If there's sufficient interest, I'll post a summary of the
findings back to the net, but please send individual responses to
me at:

suzie@cod.nosc.mil

thanks-

Suzie Alderson
Naval Ocean Systems Center
Architecture and Applied Research Branch
Code 421
San Diego, CA 92152-5000

------------------------------

Date: Fri, 10 Jul 87 11:04:59 +0200
From: mcvax!idefix.laas.fr!helder@seismo.CSS.GOV (Helder Araujo)
Subject: Neural Networks

I am just starting work on a vision system, for which I am
considering several different architectures. I am interested in studying the
use of a neural network in such a system. My problem is that I lack
information on neural networks. I would be grateful if anyone could
suggest a bibliography and references on neural networks. As I am not
a regular reader of AIlist, I would prefer to receive this information
directly. My address:

mcvax!inria!lasso!magnon!helder

I will select the information and put it on AIlist.

Helder Araujo
LAAS
mcvax!inria!lasso!magnon!helder
7, ave. du Colonel-Roche
31077 Toulouse
FRANCE

------------------------------

Date: Tue, 14 Jul 87 11:11:14 EDT
From: wallace marshall <WALL%SBCCVM.BITNET@wiscvm.wisc.edu>
Subject: radical hardware

hi,
it occurs to me that the real advantages of connectionist type systems
cannot be realized as long as we are working with computers in the current
sense of the word. even "massively" parallel machines with a few hundred
thousand processors won't be able to do the trick. if intelligence is an
emergent property, it may well be that billions of units are needed for it
to emerge. what i'd like to know is, is much work being done to build totally
different machines, maybe based on organic molecules? where can i find out
about such research? i mean, there's so much theory going around these days,
it seems there ought to be lots more technological work. even the rather
radical type of networks which just use op-amps and resistive lattices are much
too huge for really large numbers of them to be made. does anyone know about
this sort of thing? does anyone CARE?
wallace marshall
humble ee undergrad
at suny-stonybrook

------------------------------

From: "K.Newton - Biology" <knewton%watdcsu.waterloo.edu@RELAY.CS.NET>
Date: 14 Jul 87 14:53:27 GMT
Subject: neural networks, learning recognition algorithms, connectionist models

hello,

i realize that someone has already probably posted a similar
request, but as i haven't been on the net for very long, here goes:
i'm looking for references on neural networks, connectionist models
of vision and shape recognition and similar vision-oriented learning
-type algorithms; someone has told me that there is a paper by
Terry Sejnowski at Johns Hopkins which is quite interesting, but
i don't have the complete reference, so i'm having trouble tracking
it down.

thanks in advance,
knewton@watdcsu
k. glen newton

"Are you from the Waste island ?" - J.Logan

------------------------------

Date: Tue, 21 Jul 87 09:29 CDT
From: CS03503%SWTEXAS.BITNET@wiscvm.wisc.edu
Subject: OPTICAL COMPUTING

I would like to know if you have any material regarding optical computers;
if you do, could you please mail it to me?
Thanks.
Luis Roberto Baessa
CS03503@SWTEXAS (Bitnet)

------------------------------

Date: Thu, 16 Jul 87 16:25:17 EDT
From: Larry Hunter <hunter-larry@yale.arpa> via WIMP-MAIL (Version 1.2/1.4)
Subject: The Stability/Plasticity Dilemma in Connectionist Networks

Date: 06 Jul 87 19:01:13 PDT (Mon)
From: creon@orville.arpa
Subject: Relearning and Rumelhart networks

A colleague and I have been experimenting with Rumelhart's "modified
delta rule" for training a network with hidden nodes to learn
arbitrary input/output associations....

Our primary problem is that the learning seems to be too "dramatic."
That is, if we have trained the network to correctly respond to, say,
500 different inputs, and then we have it learn the correct response
to just one more, it "forgets" the first 500, though it can relearn
them with a small amount of review, compared to the amount of
training that it took to learn them initially....

Creon Levit

You have re-discovered a key problem in connectionist theory. To my
knowledge, this problem was first observed in Grossberg's article in
Biological Cybernetics, v. 23, p. 121, 1976. In that article, Grossberg
proves a theorem describing input environments for which competitive learning
networks (like Rumelhart's) can generate temporally stable recognition
codes. The learnable input environments are not all that general. A
description of the kinds of inputs that generate temporal instability
in competitive learning networks appears in Grossberg's Cognitive Science
article (V. 11, n. 1, p. 23, 1987):

[I]t was also shown, through explicit counterexamples, that competitive
learning models cannot learn a temporally stable code in response
to arbitrary input environments.... In addition to showing that
learning could become unstable in response to a complex input
environment, the analysis also showed that learning could all too
easily become unstable due to simple changes in the environment.
Changes in the probabilities of inputs, or in the deterministic
sequencing of inputs could readily wash away prior learning....
Rumelhart and Zipser (1985) were able to ignore this fundamental
issue by considering simple input environments whose probabilistic
rules do not change through time.
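To make the instability concrete, here is a minimal sketch (illustrative only,
not taken from Grossberg's or Rumelhart and Zipser's formulations; Python with
NumPy assumed) of a winner-take-all competitive learning rule. Prototypes track
whichever inputs they win, so a change in the input distribution can drag a
previously learned prototype away from the category it had coded:

import numpy as np

# Winner-take-all competitive learning: the winning prototype moves
# toward each input it wins.  Illustrative toy code only.
rng = np.random.default_rng(0)
W = np.array([[0.9, 0.1],            # prototype 0, started near cluster A
              [0.1, 0.9]])           # prototype 1, started near cluster B
W /= np.linalg.norm(W, axis=1, keepdims=True)

def train(inputs, lr=0.2):
    for x in inputs:
        j = np.argmax(W @ x)         # winner = best-matching prototype
        W[j] += lr * (x - W[j])      # move the winner toward the input
        W[j] /= np.linalg.norm(W[j])

cluster_a = rng.normal([1.0, 0.0], 0.05, size=(200, 2))
cluster_b = rng.normal([0.0, 1.0], 0.05, size=(200, 2))
train(np.vstack([cluster_a, cluster_b]))
print("prototypes after learning clusters A and B:\n", W.round(2))

# Change the environment: present only inputs from a new direction that
# prototype 0 happens to win.  Its code for cluster A is overwritten,
# i.e. the prior learning is washed away.
train(rng.normal([0.8, 0.6], 0.05, size=(400, 2)))
print("prototypes after the environment changes:\n", W.round(2))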


Grossberg proposed an alternative model in 1976 called adaptive resonance
theory, or ART (Biological Cybernetics, V. 23, p. 187; see also the Cognitive
Science article cited above). Roughly, patterns of activation in ART are
compared with previously stored pattern templates. If the new pattern matches
an old one, it is blended into the old pattern. If the new pattern fails to
match any old pattern (at some threshold), then a new template is formed.
Templates, once formed, are stable. Although this scheme has not, to my
knowledge, been programmed or empirically tested against significant problems,
it has many formal advantages over Rumelhart/Hinton style connectionism, and
appeals to more traditional AI techniques in its top-down flavor.
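For concreteness, here is a minimal sketch of the match-or-create scheme just
described (my own toy code, loosely in the spirit of ART-1; binary patterns, a
single vigilance threshold, Python with NumPy assumed; it is not Grossberg's
full model):

import numpy as np

def art_like(patterns, vigilance=0.8):
    """Toy template matcher: each input either refines the best-matching
    stored template (by intersection) or, if no template matches above
    the vigilance threshold, becomes a new template.  Existing templates
    are never erased, so earlier categories remain stable."""
    templates, assignments = [], []
    for x in patterns:
        best, best_score = None, -1.0
        for j, t in enumerate(templates):
            score = np.sum(x & t) / max(np.sum(x), 1)   # fraction of x matched
            if score > best_score:
                best, best_score = j, score
        if best is not None and best_score >= vigilance:
            templates[best] = templates[best] & x       # blend into old template
            assignments.append(best)
        else:
            templates.append(x.copy())                  # form a new template
            assignments.append(len(templates) - 1)
    return templates, assignments

# Two families of 16-bit patterns: ones in the first half vs. the second
# half, plus a little noise.  Later patterns from family B recruit their
# own templates instead of overwriting the templates learned for family A.
rng = np.random.default_rng(1)
base_a = np.array([1] * 8 + [0] * 8)
base_b = np.array([0] * 8 + [1] * 8)
family_a = (rng.random((5, 16)) < 0.1).astype(int) | base_a
family_b = (rng.random((5, 16)) < 0.1).astype(int) | base_b
templates, labels = art_like(np.vstack([family_a, family_b]))
print("templates formed:", len(templates))
print("assignments:", labels)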

Additionally, you might consider the theoretical analysis of storage
in (actual) neural networks proposed by Baron in the SISTM Quarterly/Brain
Theory Newsletter, V. 2, p. 16, 1979. It is elucidated in more detail in
his excellent new book, The Cerebral Computer: An Introduction to the
Computational Structure of the Human Brain (Lawrence Erlbaum Associates,
1987). This analysis, too, addresses the problems of plasticity and
consolidation of memories.

For readers of the Neuron bboard, I would like to close with the
following personal observation: It is no doubt good for the fields
concerned with the mind and brain that connectionist models have generated
so much enthusiasm. On the other hand, dogmatic endorsement of the
Rumelhart/Hinton models has left several other important theorists out
of the picture. Even for those convinced that connectionism is a `whole
new way of looking at thinking,' I would remind you that other existing
analyses of minds and brains are both meritorious and relevant.

Larry Hunter
HUNTER@YALE.ARPA
------------------------------

Date: Fri, 10 Jul 87 12:39:33 EST
From: Manoel F Tenorio <tenorio@ee.ecn.purdue.edu>
Subject: about other schools

In reply to Brian's request, I would like to explain a bit about the program
being developed at Purdue University. At the School of Electrical
Engineering, a very dynamic program involving the areas of robotics,
intelligent manufacturing, and artificial intelligence has been established
for years, and for the past 2 years (and into the next 3) it has been pursuing
a vigorous expansion policy, including the hiring of several faculty members
and a brand new building to be finished in March, which should double our
current space.

The details of Purdue's research potential, labs, and projects are too
numerous to explain here, and I advise you to write to us for further
information: School of Electrical Engineering, Purdue University, W. Lafayette,
IN 47907. Admission to the graduate program is very competitive, and
the 100% increase experienced in applications promises to make it even more
so. The five-year expansion program called for 500 graduate students, but
we are already at that mark in the first year, going on 600 for next year.
Next year the School will be 100 years old.

More specifically about neurocomputing, we have 8 faculty members pursuing
interests in this field, with double that number keeping an eye on getting into
the field. In March, the Special Interest Group in Neurocomputing (SIGN) was
formed with 8 faculty members from Quantitative Psychology and Cognitive
Science and members of the Neurophysiology, Audiology, and Physics Departments,
besides those from EE. The group has conducted over 56 hours of seminars this
summer, and several research projects came out of that. A similar group for
graduate students was formed in January, with weekly meetings, and this fall
Introduction to Neurocomputing will be the first graduate course in a series on
the field. The School is committed to the area, and has provided several
established labs for the use of the group. This summer, we also got the
first equipment for the Neurocomputing Lab, dedicated exclusively to this type
of research. When fully equipped, the lab will have several general-purpose
workstations, 4 neurocomputing workstations, one commercial neurocomputer, and
other Purdue prototypes in the works. The SIGN group is negotiating to join
forces with the much larger Purdue Neuroscience Program.

If you send me mail in advance, we can arrange to show our work to you on
your next visit to the area. Also, next summer we will be offering the first
hands-on series of short courses in neurocomputing for those who can't afford
to stay for the entire graduate program. We plan to teach 50% of it IN OUR
LAB... (send me mail if you are interested).

Good luck in your decision and your graduate studies.

Sincerely,

------------------------------

Date: Tue, 14 Jul 87 08:38:26 CDT
From: GILES@AFSC-HQ.ARPA
Subject: DARPA Program Announcement

=========================
DARPA recently announced a program on adaptive network technology in
the Commerce Business Daily (CBD). I believe the announcement appeared
on Fri, June 19. It was also announced at the San Diego International
Conference on Neural Networks. The reason I'm posting this is that
the time window for short proposals is a short one (only a
couple of months). For more particulars, contact Dr. Peter
Kemmey at DARPA/TTO.

[From a later message - MTG]

A further clarification of the DARPA program announcement on Adaptive
Networks: the Broad Agency Announcement was issued on June 19, 1987;
announcement #1756402; DARPA point of contact Peter Kemmey, 202/694-2974.
Most important: closing date is 15 August 1987.


Lee Giles
AFOSR

------------------------------

Date: Tue, 14 Jul 87 09:37:09 CDT
From: gately@resbld.ti (Michael T. Gately)
Subject: Neural Systems program started at CalTech

[I found this in the 'UPDATE' section of July's IEEE COMPUTER
magazine. - MTG]

The AT&T Foundation has awarded the California Institute of Technology
a three-year, $300,000 grant to help support a new program in computation
and neural systems.
The principle behind neural systems is that they can recognize and
remember by association the same way organic systems do - and at
incredible speeds.
According to AT&T vice president William Clossey, the lightning
speed of neural systems depends on designing computer chips called
neural networks that mimic the way some brain cells retrieve stored
information and solve problems.
Caltech's new program will combine aspects of neurobiology, computation,
information theory, VLSI technology, materials science, and studies of the
richness of complex systems.
One of the world's leading experts in this field is John Hopfield, a
professor of chemistry and biology at CalTech.


------------------------------

Date: Wed, 22 Jul 87 08:53:10 CDT
From: MIKE%BUCASA.BITNET@WISCVM.WISC.EDU
Subject: Conference Announcement - 1988 INNS Annual Meeting

-----FIRST ANNOUNCEMENT-----

INTERNATIONAL NEURAL NETWORK SOCIETY
1988 ANNUAL MEETING

September 6--10, 1988
Boston, Massachusetts

The International Neural Network Society (INNS) is an association of
scientists, engineers, students, and others seeking to learn about and advance
our understanding of the modelling of behavioral and brain processes, and the
application of neural modelling concepts to technological problems. The INNS
invites all those interested in the exciting and rapidly expanding field of
neural networks to attend its 1988 Annual Meeting. The planned conference
program includes plenary lectures, symposia on selected topics, contributed
oral and poster presentations, tutorials, commercial and publishing
exhibits, a placement service for employers and educational institutions,
government agency presentations, and social events.

Individuals from fields as diverse as engineering, psychology, neuroscience,
computer science, mathematics, and physics are now engaged in neural network
research. This diversity is reflected in both the 1988 INNS Annual Meeting
Advisory Committee and in the Editorial Board of the INNS journal, Neural
Networks. In order to enhance the effectiveness of these multidisciplinary
ventures and to inform a wide audience, organization of the INNS Annual
Meeting will be carried out with the active participation of several
professional societies.

Meeting Advisory Committee includes:

Demetri Psaltis---Meeting Chairman
Larry Jackel---Program Chairman
Gail Carpenter---Organizing Chairman

Shun-ichi Amari
James Anderson
Maureen Caudill
Walter Freeman
Kunihiko Fukushima
Lee Giles
Stephen Grossberg
Robert Hecht-Nielsen
Teuvo Kohonen
Christoph von der Malsburg
Carver Mead
Edward Posner
David Rumelhart
Terrence Sejnowski
George Sperling
Harold Szu
Bernard Widrow

CALL FOR ABSTRACTS: The INNS announces an open call for abstracts to be
considered for oral or poster presentation at its 1988 Annual Meeting.
Meeting topics include:

--Vision and image processing
--Speech and language understanding
--Sensory-motor control and robotics
--Pattern recognition
--Associative learning
--Self-organization
--Cognitive information processing
--Local circuit neurobiology
--Analysis of network dynamics
--Combinatorial optimization
--Electronic and optical implementations
--Neurocomputers
--Applications

Abstracts must be typed on the INNS abstract form in camera-ready format.
An abstract form and instructions may be obtained by returning the
enclosed request form to: Neural Networks, AT&T Bell Labs, Room 4G-323,
Holmdel, NJ 07733 USA.

In order to be considered for presentation at the INNS 1988 Annual Meeting,
an abstract must be POSTMARKED NO LATER THAN March 31, 1988. Acceptance
notifications will be mailed by June 30, 1988. An individual may make at
most one oral presentation during the contributed paper sessions. Abstracts
accepted for presentation at the Meeting will be published as a supplement
to the INNS journal, Neural Networks. Published abstracts will be available
to participants at the conference.

***** ABSTRACT DEADLINE: MARCH 31, 1988 *****

CONFERENCE SITE: The 1988 Annual Meeting of the International Neural Network
Society will be held at the Park Plaza Hotel in downtown Boston. A block of
rooms has been reserved for the INNS at the rate of $91 per night plus tax
(single or double). Reservations may be made by contacting the hotel directly.
Be sure to give the reference "Neural Networks". A one-night deposit will be
requested.

HOTEL RESERVATIONS:
Boston Park Plaza Hotel
"Neural Networks"
1 Park Plaza at Arlington Street
Boston, MA 02117 USA
(800) 225-2008 (continental U.S.)
(800) 462-2022 (Massachusetts only)
Telex 940107

INTERNATIONAL RESERVATIONS:
Steigenberger, Utell International
KLM Golden Tulip, British Airways
REF: "Neural Networks"


Please note that other nearby hotel accommodations are typically more expensive
and may also sell out quickly.

CONFERENCE REGISTRATION: To register for the 1988 INNS Annual Meeting, return
the enclosed conference registration form, with registration fee; or contact:
UNIGLOBE---Neural Networks 1988, 40 Washington Street, Wellesley Hills, MA
02181 USA, (800) 521-5144 or (617) 235-7500.

The great interest and attention now being devoted to the field of neural
networks promises to generate a large number of meeting participants.
Conference room size and hotel accommodations are limited. Therefore, early
registration is strongly advised.

For information about INNS membership, which includes a subscription to the
INNS journal, Neural Networks, write: Dr. Harold Szu---INNS, NRL Code 5756,
Washington, DC 20375-5000 USA, (202) 767-1493.

ADVANCE REGISTRATION FEE SCHEDULE

                           INNS Member   Non-member
----------------------------------------------------
Until March 31, 1988          $125          $170*
Until July 31, 1988           $175          $220*
Full-time student              $50           $85*
----------------------------------------------------
* Includes the option of electing one-year INNS membership and subscription
to the INNS journal, Neural Networks, free of charge.

The conference registration fee schedule has been set to cover abstract
handling costs, the book of abstracts, a buffet dinner reception, coffee
breaks, informational mailings, and administrative expenses. Anticipated
financial support by government and corporate sponsors will cover additional
basic meeting costs.

Tutorials and other special programs will require payment of additional fees.

STUDENTS AND VOLUNTEERS: Students are particularly welcome to join the INNS
and to participate fully in its Annual Meeting. Reduced registration and
membership rates are available for full-time students. In addition, financial
support is anticipated for students and meeting volunteers. To apply, please
enclose with the conference registration application a letter of request and a
brief description of interests.


----------------------------------------------------------------------------


-----ABSTRACT REQUEST FORM-----

INTERNATIONAL NEURAL NETWORK SOCIETY
1988 ANNUAL MEETING

September 6--10, 1988
Boston, Massachusetts

Please send an abstract form and instructions to:

Name:
Address:
Telephone(s):

All abstracts must be submitted camera-ready, typed on the INNS abstract form
and postmarked NO LATER THAN March 31, 1988.

MAIL TO:

Neural Networks
AT&T Bell Labs
Room 4G-323
Holmdel, NJ 07733 USA


-----------------------------------------------------------------------------

-----REQUEST FOR INFORMATION-----

INTERNATIONAL NEURAL NETWORK SOCIETY
1988 ANNUAL MEETING

September 6--10, 1988
Boston, Massachusetts

Please send information on the following topics to:

Name:
Address:
Telephone(s):


( ) Placement/Interview service
( ) Employer
( ) Educational institution
( ) Candidate
( ) Hotel accommodations
( ) Travel and discounted fares
Discounts of up to 60% off coach fare can be obtained on conference
travel booked through UNIGLOBE: (800) 521-5144 or (617) 235-7500.
( ) Volunteer and student programs
( ) Proposals for symposia and special programs
( ) Exhibits
( ) Commercial vendor
( ) Publisher
( ) Government agency
( ) Tutorials
( ) Press credentials
( ) INNS membership

MAIL TO:

Center for Adaptive Systems---INNS
Boston University
111 Cummington Street, Room 244
Boston, Massachusetts 02215 USA

ELECTRONIC MAIL TO:

mike@bucasa.bu.edu

------------------------------

Date: Mon, 13 Jul 87 14:22:56 EDT
From: Chuck Anderson <cwa0@gte-labs.csnet>
Subject: Technical Report: Strategy Learning with Connectionist Networks

Strategy Learning with Multilayer Connectionist Representations

Chuck Anderson
(cwa@gte-labs.csnet)

GTE Laboratories Incorporated
40 Sylvan Road
Waltham, MA 02254

Abstract


Results are presented that demonstrate the learning and
fine-tuning of search strategies using connectionist mechanisms.
Previous studies of strategy learning within the symbolic,
production-rule formalism have not addressed fine-tuning behavior.
Here a two-layer connectionist system is presented that develops its
search from a weak to a task-specific strategy and fine-tunes its
performance. The system is applied to a simulated, real-time,
balance-control task. We compare the performance of one-layer and
two-layer networks, showing that the ability of the two-layer network
to discover new features and thus enhance the original representation
is critical to solving the balancing task.

(Also appears in the Proceedings of the Fourth International Workshop on
Machine Learning, Irvine, June, 1987)
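The report itself should be consulted for the balance-control results. As a
purely illustrative stand-in (my own sketch, Python with NumPy assumed, using
XOR rather than the pole-balancing task), the snippet below shows the same
qualitative point made in the abstract: a one-layer network cannot solve a
problem whose solution requires new features, while a two-layer network trained
by gradient descent discovers them:

import numpy as np

# XOR: not solvable by a single layer of weights, but solvable once a
# hidden layer can build intermediate features.  Toy illustration only.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# One-layer network (no hidden units), trained by gradient descent.
w, b = rng.normal(size=(2, 1)), np.zeros(1)
for _ in range(10000):
    err = sigmoid(X @ w + b) - y          # cross-entropy gradient at output
    w -= 0.5 * X.T @ err / len(X)
    b -= 0.5 * err.mean(0)
print("one-layer outputs :", sigmoid(X @ w + b).ravel().round(2))

# Two-layer network with 8 hidden units, trained by backpropagation.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)
    err = sigmoid(h @ W2 + b2) - y
    d_h = (err @ W2.T) * h * (1 - h)      # error propagated to hidden layer
    W2 -= 0.5 * h.T @ err / len(X)
    b2 -= 0.5 * err.mean(0)
    W1 -= 0.5 * X.T @ d_h / len(X)
    b1 -= 0.5 * d_h.mean(0)
print("two-layer outputs :", sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel().round(2))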



------------------------------

Date: Tue, 14 Jul 87 08:35:43 CDT
From: TILDE::"@RELAY.CS.NET:GILES@AFSC-HQ.ARPA" 13-JUL-1987 18:40
Subject: abstract from ICNN

============================
Enclosed is an abstract of a paper presented at the IEEE International
Conference on Neural Networks, San Diego, Ca, June 21-24, 1987. The
paper will appear in the conference proceedings and is also available
from the second author.


Generalization in Neural Networks: The Contiguity Problem


Tom Maxwell
Sachs/Freeman Assoc., 1401 McCormick Dr., Landover, Md. 20785

C Lee Giles
AFOSR, Bldg. 410, Bolling AFB, DC 20332

YC Lee
Energy Research Bldg., U. of Md., College Park, Md. 20742


Abstract

There has been a great deal of theoretical and experimental
work done in computer science, psychology, and philosophy on
inductive inference (or "generalization"), that is, the problem
of inferring general rules or extracting invariances (or
"concepts") from a set of examples. In this paper we will
investigate the problem of constructing a network which will
learn concepts from a training set consisting of some fraction of
the total number of possible examples of the concept. Without
some sort of prior knowledge, this problem is not well defined,
since in general there will be many possible concepts which are
consistent with a given training set. We suggest that one way of
incorporating prior knowledge is to construct the network such
that the structure of the network reflects the structure of the
problem environment. Two types of networks (a two layer slab
trained with back propagation and a single high order slab) are
explored to determine their ability to learn the concept of
contiguity. We find that the high order slab learns and
generalizes contiguity very efficiently, whereas the first order
network learns very slowly and shows little generalization
capability on the small problems that we have examined.
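One way to see why product (high-order) terms make contiguity easy to express:
the number of contiguous blocks of 1s in a binary string equals the sum of the
x_i minus the sum of the adjacent products x_i * x_{i+1}, a quantity that is
linear in first- and second-order terms. A single high-order unit can therefore
threshold it at one block, whereas contiguity is not in general linearly
separable in the raw inputs. A small verification sketch (mine, not from the
paper; Python with NumPy assumed):

from itertools import product
import numpy as np

def n_blocks(x):
    """Count maximal runs of 1s directly (brute force)."""
    blocks, prev = 0, 0
    for bit in x:
        if bit == 1 and prev == 0:
            blocks += 1
        prev = bit
    return blocks

def n_blocks_second_order(x):
    """The same count from first- and second-order terms only:
    number of blocks = sum_i x_i - sum_i x_i * x_{i+1}."""
    x = np.asarray(x)
    return int(x.sum() - (x[:-1] * x[1:]).sum())

# Verify the identity on every 8-bit pattern.  A single unit computing
# this second-order quantity and thresholding it at 1 therefore decides
# contiguity, which no purely first-order (linear) unit can do in general.
for x in product([0, 1], repeat=8):
    assert n_blocks(x) == n_blocks_second_order(x)
print("identity holds for all 256 eight-bit patterns")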

------------------------------

Date: Tue, 14 Jul 87 10:49:16 EDT
From: Rich Sutton <rich@gte-labs.csnet>
Subject: Technical Report: Two Problems with Backpropagation...

This paper appeared in the proceedings of last year's Cognitive Science
Conference and is now being distributed as a technical report. The
paper is directly relevant to the problem of new learning interfering
with old in backpropagation networks (as discussed in the recent posting
by Creon Levit). The report does not offer a solution -- only a
clarification and analysis of this problem and one other.

------------------------------------------------------------------------

TWO PROBLEMS WITH BACKPROPAGATION
AND OTHER STEEPEST-DESCENT LEARNING PROCEDURES FOR NETWORKS

Richard S. Sutton
GTE Laboratories Incorporated
Waltham, MA 02254

This article contributes to the theory of network learning procedures by
identifying and analyzing two problems with the backpropagation
procedure of Rumelhart, Hinton, and Williams (1985) that may slow its
learning. Both problems are due to backpropagation's being a gradient-
or steepest-descent method in the weight space of the network. The
first problem is that steepest descent is a particularly poor descent
procedure for surfaces containing {\it ravines}---places which curve
more sharply in some directions than others---and such ravines are
common and pronounced in performance surfaces arising from networks.
The second problem is that steepest descent results in a high level of
interference between learning with different patterns, because those
units that have so far been found most useful are also those most likely
to be changed to handle new patterns. The same problems probably also
arise with the Boltzmann machine learning procedure (Ackley, Hinton and
Sejnowski, 1985) and with reinforcement learning procedures (Barto and
Anderson, 1985), as these are also steepest-descent procedures.
Finally, some directions in which to look for improvements to
backpropagation based on alternative descent procedures are briefly
considered.
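The ravine problem is easy to reproduce outside of networks. In the
hypothetical sketch below (mine, not from the report; Python with NumPy
assumed), steepest descent on an elongated quadratic must keep its step size
below the stability limit set by the steep direction, so progress along the
shallow direction is very slow:

import numpy as np

# Steepest descent on f(w) = 0.5 * (w0**2 + 100 * w1**2): the curvature
# along w1 is 100 times that along w0, so the largest stable step size
# (just under 2/100) lets w0 shrink by only a factor of 0.981 per step
# while w1 oscillates and decays quickly.
def grad(w):
    return np.array([w[0], 100.0 * w[1]])

w = np.array([10.0, 1.0])
lr = 0.019                        # just under the 2/100 stability limit
for step in range(1, 101):
    w = w - lr * grad(w)
    if step % 25 == 0:
        print(f"step {step:3d}   w0 = {w[0]:7.3f}   w1 = {w[1]: .2e}")
# Raising lr above 0.02 makes the w1 direction diverge, so the sharp
# direction, not the shallow one, dictates the usable step size.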

------------------------------------------------------------------------
Requests for copies should be sent to Rich@GTE-Labs.CSNet


------------------------------

End of NEURON-Digest
********************
