NEURON Digest       26 JAN 1987       Volume 2 Number 2 

Topics in this digest --
Queries - Neural Networks and Genetic Algorithms &
Basic Text for Hacker &
Information about the ICNN
News - Tech Report - Learning to predict
Seminars/Courses - Anderson at TI &
Workshop on NNs and Devices &
Hinton at Univ. of Calif. at Berkeley
(Using fast and slow weights) &
Hinton at Univ. of Calif. at Berkeley
(part whole hierarchies)
Conferences/Call for papers - AI at upcoming conferences

----------------------------------------------------------------------

Date: 21 January 1987
From: GATELY%CRL1@TI-CSL.CSNET
Subject: Neural Networks and Genetic Algorithms

I am interested in getting some opinions on the relationship
between connectionist systems and genetic systems. To make the
comparison more structured, we can choose a one-dimensional
Neural Network using a Widrow-Hoff type algorithm and a
Holland-style Bucket Brigade genetic algorithm.

I understand the similarities at the outer level; both systems:
can learn
have the ability to recall learned information
learn from examples
improve their performance over time
utilize feedback during the learning phase

It is the inner level that interests me. I see two possible
areas which can be compared.

First, it is possible to view the input/output relationship
in both systems as being similar (sort of a hashing algorithm) -
possibly with intermediate states.

Second, the learning is quite different. Changing a weight in a
neural network changes the input/output relationships for all the
i-o pairs, whereas changing a rule in a genetic system
affects no other rules.
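
[Editor's note: for concreteness, the sketch below (in present-day
Python; all names and values are illustrative and not taken from the
message) shows a Widrow-Hoff (LMS) update and why one weight change
alters the predictions for every input/output pair.]

    # Minimal Widrow-Hoff (LMS) sketch for a single linear unit.
    import numpy as np

    def lms_update(w, x, target, lr=0.1):
        """One Widrow-Hoff step on pattern x with desired output target."""
        y = np.dot(w, x)            # prediction for this pattern
        error = target - y          # error for this single pattern
        return w + lr * error * x   # shift every weight touched by x

    # The same weight vector w serves every stored pattern, so this one
    # update changes the outputs for all i-o pairs at once -- unlike a
    # rule change in a classifier system, which leaves other rules intact.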

Any comments? Are these systems the same? At what level?

Regards, Mike
GATELY%CRL1@TI-CSL.CSNET

------------------------------

From: STERN.PASA@XEROX.COM 20-JAN-1987 10:10
To: gately%resbld%ti-csl.csnet@RELAY.CS.NET
Subj: Re: NEURON DIGEST 14-JAN-1987 / VOLUME 2 - NUMBER 1

I need some basic text to read on neural nets, oriented towards someone
(me) who has been hacking code for a long time. I have two basic
questions:

1. What does a neural net do that cannot be emulated on a sequential
machine?

2. Do any of the new neural net machines with N neurons (or whatever)
perform neural net algorithms any faster than N simple machines in
parallel?

Josh

------------------------------

From: PRLB2!BOURLARD@SEISMO.CSS.GOV 19-JAN-1987 21:06
To: NEURON-REQUEST%TI-CSL.CSNET@RELAY.CS.NET
Subj: IEEE FIRST ANNUAL INT. CONF. ON NEURAL NETWORKS, SAN DIEGO, JUNE 1987

Request about IEEE FIRST ANNUAL CONFERENCE ON NEURAL NETWORKS
San Diego, California
June 21-24, 1987

Among the activities of the Philips Research Laboratory in Brussels
(Belgium), a group is working on speech recognition; more precisely,
we are in charge of acoustic-phonetic decoding. Having acquired
considerable experience in Markov modelling of phonemes, we are now
investigating "connectionist" and "neural network" approaches. For the
past year we have examined possible applications of Boltzmann Machines
and multilayer perceptrons to (isolated and connected) speech
recognition and have obtained interesting results.
We would like to present some of these results at the First IEEE Int. Conf.
on Neural Networks (San Diego, June 1987).
Who could tell us what publication material is required before February:
an abstract, an extended abstract, or a full paper?
What is the deadline for submission of the abstract or final paper?
What is the date of notification of acceptance?


H.Bourlard and C.J.Wellekens

Philips Research Laboratory, Brussels
Av. van Becelaere 2, Box 8
B - 1170 Brussels
BELGIUM

Electronic mail : hb@prlb2.UUCP

------------------------------

From: RICH@GTE-LABS.CSNET 20-JAN-1987 09:05
To: neuron@ti-csl
Subj: Tech Report Abstract -- Learning to Predict


LEARNING TO PREDICT
BY THE METHODS OF TEMPORAL DIFFERENCES

Richard S. Sutton
GTE Labs
Waltham, MA 02254
Rich@GTE-Labs.CSNet

This technical report introduces and provides the first formal results
in the theory of TEMPORAL-DIFFERENCE METHODS, a class of statistical
learning procedures specialized for prediction---that is, for using past
experience with an incompletely known system to predict its future
behavior. Whereas in conventional prediction-learning methods the error
term is the difference between predicted and actual outcomes, in
temporal-difference methods it is the difference between temporally
successive predictions. Although temporal-difference methods have been
used in Samuel's checker-player, Holland's Bucket Brigade, and the
author's Adaptive Heuristic Critic, they have remained poorly
understood. Here we prove the convergence and optimality of
temporal-difference methods for special cases, and relate them to
supervised-learning procedures. For most real-world prediction
problems, temporal-difference methods require less memory and peak
computation than conventional methods AND produce more accurate
predictions. It is argued that most problems to which supervised
learning is currently applied are really prediction problems of the sort
to which temporal-difference methods can be applied to advantage.
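
[Editor's note: as a rough illustration of the abstract's key point --
that the error is the difference between successive predictions rather
than between a prediction and the final outcome -- here is a hedged
sketch of a simple temporal-difference update for linear predictions.
The variable names are ours, not the report's.]

    # Sketch of a temporal-difference update with linear predictions
    # P_t = w . x_t.  Illustrative only.
    import numpy as np

    def td_update(w, x_t, p_next, lr=0.1):
        """Move w using the change between successive predictions.
        At the end of a sequence, p_next is replaced by the actual outcome."""
        p_t = np.dot(w, x_t)          # prediction at time t
        td_error = p_next - p_t       # temporally successive difference
        return w + lr * td_error * x_t

    # A conventional supervised method would instead wait for the outcome z
    # and use (z - p_t) as the error for every step of the sequence.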

-----

p.s. Those who have previously requested a paper on "bootstrap learning"
are already on my mailing list and should receive the paper sometime next week.


------------------------------

Date: 12 January 1987
From: PENZ%CRL2@TI-CSL.CSNET
Subject: Special Seminar (TI) - James Anderson (Brown)

Friday - 16 January 1987 - 9:00
Texas Instruments - Central Research Building - Cafeteria
Neural Network Modeling

Prof. Jim Anderson of Brown University will give a two part talk
covering his neural network research. The first part will
introduce the field of neural networks and the general work going
on at Brown's Cognitive Science Department. In the second part,
Dr. Anderson will discuss the application of NN models to
specific pattern recognition problems.

Members of the Metroplex Institute for Neural Dynamics (MIND) and
other non-TIers are invited to attend. Please arrive at the
lobby of the Central Research Building before 8:30.

------------------------------

Date: 21 January 1987
From: GATELY%CRL1@TI-CSL.CSNET
Subject: WORKSHOP ON NEURAL NETWORK DEVICES AND APPLICATIONS

WORKSHOP ON NEURAL NETWORK DEVICES AND APPLICATIONS

Sponsored by:
Strategic Defense Initiative Organization /
Innovative Science and Technology Office

National Aeronautics and Space Administration / Office
of Aeronautics and Space Technology

Defense Advanced Research Projects Agency

Hosted by:
Jet Propulsion Laboratory/California Institute of Technology
Pasadena, California.

February 18-19, 1987
Von Karman Auditorium

Neural networks offer the possibility of breakthrough performance
in several areas of technology. To promote this emerging field,
the JPL Innovative Space Technology Center will host a workshop
on the potential uses of neural network technology, with emphasis
on space and defense applications.

Topics for discussion include neural network devices, VLSI and
optical system implementation, and applications in the areas of
signal processing, automation and robotics, data compression and
coding, tracking and control, and other space processing
functions. SDIO, NASA, and DARPA representatives will present
their aspirations for this technology. All meeting discussions
and material will be unclassified.

Registration and Check-In
All workshop sessions will be held at JPL in von Karman
Auditorium. Registration will begin Wednesday, February 18 at
7:30 a.m. A $10 registration fee will be collected at the door.

For further information, please contact:
JET PROPULSION LABORATORY
4800 Oak Grove Drive
Pasadena, CA 91109

[some of the] Organizing Committee:
Workshop Chairman: Dr. Edward C. Posner (818) 354-6224
Program Chairman: Dr. Robert D. Rasmussen (818) 354-2861
Administration: Pat McLane (818) 354-5556

[n.b. I received a notice about this workshop in the mail. I am
not sure if it is a closed session. I have reproduced it here
simply as general information. If you are interested in
attending, please call one of the above numbers to check
capacities, etc. MTG]


------------------------------

Date: Thu, 15 Jan 87 10:35:57 PST
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science
Program)
Subject: Seminar - Using Fast and Slow Weights (UCB)


BERKELEY COGNITIVE SCIENCE PROGRAM
SPRING - 1987

Cognitive Science Seminar - IDS 237B

Tuesday, January 27, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
2515 Tolman Hall

"Using fast weights to deblur old memories and assimilate new ones."

Geoff Hinton
Computer Science
Carnegie Mellon

Connectionist models usually have a single weight on each connection. Some
interesting new properties emerge if each connection has two
weights -- a slow, plastic weight which stores long-term
knowledge and a fast, elastic weight which stores temporary
knowledge and spontaneously decays towards zero. Suppose that a
network learns a set of associations, and then subsequently
learns more associations. Associations in the first set will
become "blurred", but it is possible to deblur all the associations
in the first set by rehearsing on just a few of them. The
rehearsal allows the fast weights to take on values that cancel
out the changes in the slow weights caused by the subsequent
learning.

Fast weights can also be used to minimize interference by
minimizing the changes to the slow weights that are required to
assimilate new knowledge. The fast weights search for the smallest
change in the slow weights that is capable of incorporating the
new knowledge. This is equivalent to searching for analogies
that allow the new knowledge to be represented as a minor
variation of the old knowledge.
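
[Editor's note: the sketch below is an illustrative reading of the
two-weight idea in modern Python -- a slow, plastic weight plus a fast,
decaying weight on each connection, with the effective weight being
their sum. All constants and names are assumptions, not Hinton's.]

    import numpy as np

    class TwoWeightUnit:
        """Linear unit whose effective weights are slow + fast."""
        def __init__(self, n_inputs, decay=0.9, slow_lr=0.01, fast_lr=0.5):
            self.slow = np.zeros(n_inputs)   # plastic long-term knowledge
            self.fast = np.zeros(n_inputs)   # elastic temporary knowledge
            self.decay, self.slow_lr, self.fast_lr = decay, slow_lr, fast_lr

        def output(self, x):
            return np.dot(self.slow + self.fast, x)

        def learn(self, x, target):
            error = target - self.output(x)
            self.slow += self.slow_lr * error * x   # small, lasting change
            self.fast += self.fast_lr * error * x   # large, temporary change
            self.fast *= self.decay                 # fast weights decay to zero

    # Rehearsing a few old associations lets the fast weights temporarily
    # cancel the damage that later learning did to the slow weights.
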
-----
UPCOMING TALKS
Feb 10: Anne Treisman, Psychology Department, UC Berkeley.
-----
ELSEWHERE ON CAMPUS
Geoff Hinton will speak at the SESAME Colloquium on Monday Jan. 26, in
Tolman 2515 from 4-6.


------------------------------

From: LAWS@SRI-STRIPE.ARPA 24-JAN-1987 00:26
To: cogsci-friends@cogsci.berkeley.edu
Subj: Seminar - Part-Whole Hierarchies in Connectionist Networks (UCB)

This is the abstract for Geoff Hinton's third talk on campus, Tuesday Jan. 27,
2-3:30, in Sibley Auditorium, Bechtel Center:

"Four ways of representing part-whole hierarchies in connectionist networks"

Connectionist models either ignore the problem of representing constituent
structure or they embody naive and inadequate theories. I will describe three
such theories and then discuss a more sophisticated theory that allows truly
recursive representations.

------------------------------

From: LAWS@SRI-STRIPE.ARPA 24-JAN-1987 00:26
To: cogsci-friends@cogsci.berkeley.edu
Subj: Seminar - Additional Talks by Geoff Hinton (UCB)

Geoff Hinton will give two talks on campus in addition to his talk for
the Cognitive Science Program. As previously announced, his Cognitive
Science talk will be Tuesday, Jan. 27, 11-12:30 (followed by a discussion
period), in 2515 Tolman, on "Using fast weights to deblur old memories
and assimilate new ones" (abstract sent previously).

A prior talk, sponsored by the SESAME group, will take place on Monday,
Jan. 26, 4-5:30, in 2515 Tolman. The title and abstract follow:

"Self-supervised back-propagation for learning representations"

A connectionist network can learn useful internal representations by changing
the weights so as to minimize a measure of how badly the network is performing.
In the back-propagation procedure the measure is the squared difference between
the output produced by the network and the output required by a teacher. One
problem with this procedure is that it requires a teacher, so it is not obvious
how to apply it to "perceptual" learning in which there is no obvious teacher.
I shall describe a general way of using back-propagation for perceptual
learning and show how, in a special case, this general method reduces to a
familiar and superficially very different procedure called competitive
learning.
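
[Editor's note: one common way to run back-propagation without an
external teacher is to use the input itself as the target, i.e. to
train the network to reconstruct its input. The sketch below is an
illustrative one-hidden-layer example in modern Python; all sizes and
rates are assumptions, not taken from the talk.]

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid = 8, 3
    W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # encoder weights
    W2 = rng.normal(0.0, 0.1, (n_in, n_hid))   # decoder weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(x, lr=0.5):
        """One back-propagation step with the input as its own target."""
        global W1, W2
        h = sigmoid(W1 @ x)                     # internal representation
        y = sigmoid(W2 @ h)                     # reconstruction of the input
        err = y - x                             # the "teacher" is the input
        d_out = err * y * (1.0 - y)             # output-layer deltas
        d_hid = (W2.T @ d_out) * h * (1.0 - h)  # hidden-layer deltas
        W2 -= lr * np.outer(d_out, h)
        W1 -= lr * np.outer(d_hid, x)
        return float(np.mean(err ** 2))         # reconstruction error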

The third talk, arranged jointly by Professors Stuart Dreyfus and
Shankar Sastry, will take place on Tuesday, Jan. 27, 2-3:30, in the
Bechtel auditorium. The topic of the talk has not yet been announced.


------------------------------

Date: Wed, 10 Oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: AI at upcoming conferences

[Excerpted from AIList Digest V5 #16 - MG]

*** Compcon 87 Cathedral Hill Hotel, San Francisco, February 23-27

Tutorial Number 3 on AI Machines: INstructor David Elliot Shaw of the
Columbia University Non-Von Project
Tutorial Number 7: Managing Knowledge Systems Development: Instructor
Avron Barr

1:30 - 3:00 February 25
A Neural Based Knowledge Processor - J. Vovodsky, Neuro Logic Inc.
Connectionist Symbol Processing in Neural Based Architectures -
D. Touretzky, Carnegie-Mellon Univ.
Drawbacks with Neural Based Architectures - D. Partridge, New Mexico State
University
Timing Dependencies in Sentence Comprehension - H. Gigly, University of
New Hampshire

3:30 - 5:00 Thursday February 26
Lisp Machine Architecture Issues - R. Lim, NASA Ames Research Center
High Level Language LISP Processor - S. Krueger, TI
Kyoto Common LISP - F. Giunchiglia, IBUKI Inc.

Optical Neural Networks - D. Psaltis, California Institute of Technology


------------------------------

End of NEURON Digest
********************
