NEURON Digest	Mon Jul 13 08:38:46 CDT 1987 
Volume 2 / Issue 16
Today's Topics:

Available Tech Report
Tech Report: Mapping Optimization Problems onto ANNs
Proceedings of 1st Neural Networks Conference
Proceedings of ICNN
Information RE: THEORETICAL BIOLOGY
Relearning and Rumelhart networks
Posting from INFO-Futures
new MacArthur fellow
TOC Seminar -- Wednesday, July 8, -- Nick Littlestone
AAAI-87 Workshop: The Engineering of Neural-Based Architectures

----------------------------------------------------------------------

Date: 10 May 87 11:33 PDT
From: huberman.pa@xerox.com
Subject: Available Tech Report

The following report is available from huberman@Xerox.COM:

An Improved Three-Layer, Back Propagation Algorithm

by W. Scott Stornetta and B. A. Huberman

Abstract:

We report results of a modification to the three-layer back-propagation
algorithm originally proposed by Rumelhart. The dynamic range of input,
hidden and output units is made symmetric about 0 (ranging from -1/2 to
1/2) rather than from 0 to 1. Significant improvements in rate of
learning and uniformity of learning are found with the symmetrized range
system.
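
The abstract does not give the transfer function explicitly; a minimal
sketch of the symmetrized range (Python), assuming the usual logistic
squashing function shifted by one half:

    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))   # standard unit: output in (0, 1)

    def symmetric(x):
        return logistic(x) - 0.5             # shifted unit: output in (-1/2, 1/2)

    # In back propagation the change to a weight is proportional to the output
    # of the unit feeding it, so with the 0-to-1 range a unit that outputs 0
    # leaves its outgoing weights untouched; with the symmetric range a "low"
    # unit outputs -1/2 and those weights still adapt. (This intuition is an
    # assumption of the sketch, not a quotation from the abstract.)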


------------------------------

Date: Tue, 12 May 87 11:46:55 PDT
From: Harrison Leong <harrison@icarus.riacs.edu>
Subject: Tech Report: Mapping Optimization Problems onto ANNs


Mapping Optimization Problems onto Artificial Neural Network Systems:
A General Principle and Examples

Harrison MonFook Leong
Research Institute for Advanced Computer Science
NASA Ames Research Center
Moffett Field, CA 94035
4/26/87

ABSTRACT: General formulae for mapping optimization problems into
systems of ordinary differential equations arising from artificial neural
network research are presented. A comparison is made to optimization using
gradient-descent (ascent) methods. Two examples illustrate situations where
artificial neural network methods can improve convergence times.
They suggest that problems whose valid solutions lie at the corners of a
hypercube are indeed well suited to the form of gradient search
implemented by artificial neural network equations.
The general formulae are then inverted to derive the class of functions
that can be optimized using a special case of Pentti Kanerva's
sparse distributed memory (which, for the purposes of the paper, can
be considered a 3-layer network with feedback) and using a network
having non-linear multiple order multiplicative dependencies between
computational units (generalized high order correlation networks).
Preprints of the paper are now available.
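
The general formulae themselves are in the preprint; as a rough
illustration of the kind of system meant, the sketch below (Python)
integrates Hopfield-style network equations of the assumed form
du_i/dt = -u_i/tau + sum_j T_ij*g(u_j) + I_i, with invented weights,
and lets the outputs settle toward a corner of the hypercube:

    import math

    def g(u, gain=5.0):
        # sigmoidal output; high gain pushes valid solutions toward corners
        return 0.5 * (1.0 + math.tanh(gain * u))

    def step(u, T, I, dt=0.01, tau=1.0):
        # one Euler step of du_i/dt = -u_i/tau + sum_j T[i][j]*g(u_j) + I[i]
        V = [g(ui) for ui in u]
        n = len(u)
        return [u[i] + dt * (-u[i] / tau
                             + sum(T[i][j] * V[j] for j in range(n)) + I[i])
                for i in range(n)]

    # Tiny winner-take-all example; T and I are invented for the sketch.
    T = [[0.0, -2.0], [-2.0, 0.0]]
    I = [1.0, 0.8]
    u = [0.01, -0.01]
    for _ in range(2000):
        u = step(u, T, I)
    print([round(g(ui), 3) for ui in u])   # settles near a corner, e.g. [1.0, 0.0]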


------------------------------

Date: Wed 24 Jun 87 16:27:39-PDT
From: Matt Heffron <BEC.HEFFRON@ecla.usc.edu>
Subject: Proceedings of 1st Neural Networks Conference

I was unable to attend, but I think I would like to get a copy of the
proceedings for the Neural Network Conference in San Diego. Can anyone
tell me how to get a copy (publisher, price, etc.)? (Is it worth getting?
My interest in neural networks is, at present, just tracking the
technology; I'm not DOING anything with it.)

Thanks,
Matt Heffron BEC.HEFFRON@ECLA.USC.EDU
Beckman Instruments, Inc. voice: (714)-961-3728
2500 N. Harbor Blvd. MS X-11
Fullerton, CA 92634

[For this and the next message, I believe that copies can be obtained
through IEEE Publications beginning in October. You can probably get
a catalogue by writing to IEEE SERVICE CENTER, 445 Hoes Lane,
Piscataway, NJ 08854 - MTG]

------------------------------

Date: Thu, 2 Jul 87 14:02:54 EDT
From: princeton!iwano@rutgers.edu
Subject: Proceedings of ICNN

I read an article in NEURON Digest about the International Conference on
Neural Networks, which was held from June 21st to June 24th in San Diego.
Could you let me know where/how I can get the conference proceedings, if
they exist?

Kazuo Iwano

------------------------------

Date: Thu, 25 Jun 87 10:10:46 HOE
From: Esteban Vegas Lozano <ZCCBEVL%EB0UB011.BITNET@wiscvm.wisc.edu>
Subject: Information RE: THEORETICAL BIOLOGY

Dear friend:

We are a group of biologists very interested in THEORETICAL BIOLOGY.
This subject is not taught in our University, nor anywhere else in
Barcelona, Spain. Therefore we want to gather information about its
different aspects; for example, we would like to receive material on the
following related topics (software, abstracts, bibliographies, books,
...):

- Dynamics of Systems (Stability Theory, Population
Fluctuations, etc.).
- Information Theory in Biology and Evolution.
- Entropy and Living Systems.
- Models of Evolution, Developmental Genetics, ....
- Stochastic Processes in Biology (Diffusion, Markov Processes,
Poisson Processes, etc.).
- Synergetics.
- Fractals.
- Neurobiology (Models of the Brain, Neural Networks, ...).
- Cellular Automata
- Artificial Intelligence and Biology.
- Vision Systems.
- Knowledge Acquisition.
- Connectivity, Diversity in Biological Systems.
- Self-Organization.
- Chaos (Strange Attractors, etc.).
- Catastrophe Theory.
- Cybernetics.
- Pattern Recognition, Pattern Formation.
- etc., etc.

We certainly have some information on these themes; however, we would
also like to get in contact with other people interested in this subject.

All messages or mail regarding this letter should be sent to

<ZCCBEVL@EB0UB011.BITNET> for electronic mail.



Ricard Vicente Sol
Joan Saldanya Meca
Esteban Vegas Lozano

------------------------------

Date: 06 Jul 87 19:01:13 PDT (Mon)
From: creon@orville.arpa
Subject: Relearning and Rumelhart networks


A colleague and I have been experimenting with Rumelhart's "modified
delta rule" for training a network with hidden nodes to learn
arbitrary input/output associations. (Actually, we are trying to
teach the network to count, so the associations are not "arbitrary".)
We have verified that the algorithm is working -- it can reproduce the
solutions in Rumelhart's paper.

Our primary problem is that the learning seems to be too "dramatic".
That is, if we have trained the network to respond correctly to, say,
500 different inputs, and then have it learn the correct response
to just one more, it "forgets" the first 500, though it can relearn
them with a small amount of review compared to the amount of
training it took to learn them initially.

This behavior persists over all sizes of networks, numbers of hidden
layers, and collections of patterns that we have tried. The learning
rates do not seem to have a qualitative effect on this behavior
either.

This behavior is undesirable, and seems "unphysiological" too, since
you do not forget everything you know each time you learn something
new. Nor do you have to review everything you have ever learned each
time you learn something new. Of course, the brain may do exactly
this at some level, and certainly a child tends to overemphasise the
last thing learned, just like our net. Nevertheless, on a serial
electronic digital machine, this constant need for retraining is
unacceptable.

We believe some possible remedies, though we have not tried them, may
be: 1) feedback, 2) sparse interconnection, 3) limitations on the
number of weights that may be changed during any one learning iteration.
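
None of these remedies is spelled out in the message; as an
illustration of remedy (3) only, a minimal sketch (Python) that applies
just the k largest-magnitude components of the gradient each iteration
and leaves every other weight untouched (the gradient and the value of
k are placeholders):

    def sparse_update(weights, gradient, learning_rate=0.1, k=10):
        # indices of the k entries with the largest |gradient|
        top = sorted(range(len(gradient)),
                     key=lambda i: abs(gradient[i]), reverse=True)[:k]
        new_weights = list(weights)
        for i in top:
            new_weights[i] -= learning_rate * gradient[i]   # others unchanged
        return new_weights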

Does anyone have experience with solving this problem? Does anyone
have concrete information or references on implementation details for
any of our possible remedies?

Creon Levit
MS 258-5
NASA Ames Research Center
Moffett Field, California 94035

(415)-694-4403

creon@ames-nas.arpa (arpanet/milnet)
ihnp4!ames!amelia!creon (uucpnet)

------------------------------

Date: 9 Jul 87 13:46 PDT
From: Tony Wilkie /DAC/ <TLW.MDC@OFFICE-1.ARPA>
Subject: Posting from INFO-Futures

The following was found on INFO-Futures and should perhaps be edited and
posted for thought and response. I personally found the "power measurement"
a bit perplexing.

-----

EXT-info-futures-request-BS1SP 8-Jul-87 Re: Connectionism etc...

From: {Dan Berleant <berleant@SALLY.UTEXAS.EDU>}DDN
To: info-futures@bu-cs.bu.edu
Identifier: EXT-info-futures-request-BS1SP / <8444@ut-sally.UUCP>
References-to: <8707062116.AA18752@crash.CTS.COM>
Sender: info-futures-request@bu-cs.bu.edu
Posted: 8-Jul-87 12:25-PDT Received: 9-Jul-87 13:22-PDT
Organization: U. Texas CS Dept., Austin, Texas
Message:

In article <8707062116.AA18752@crash.CTS.COM> pnet01!jim@RELAY.CS.NET
writes:

> Neural nets have already proven capable of feats that required orders of
> magnitude more effort to accomplish via conventional artificial
> intelligence techniques.

I take it 'neural network' means a highly interconnected set of simple
computing elements, such that at least some components (e.g. the
connections) may have a large number of values (e.g. strengths, in the case
of connections)? This 'analog' requirement would be to distinguish neural
network research from other multi-processor research. Also to distinguish it
from the traditional assumption in AI that symbolic processing is
sufficient.
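
On that reading, a 'simple computing element' is just a weighted sum of
its inputs passed through a squashing function, with real-valued
(analog) connection strengths. A minimal, purely illustrative sketch in
Python:

    import math

    def unit(inputs, weights, bias=0.0):
        net = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-net))   # graded output, not a discrete symbol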

What feats have neural nets been able to do in orders of magnitude
less effort? How is effort to be measured?

Dan Berleant

UUCP: {gatech,ucbvax,ihnp4,seismo...& etc.}!ut-sally!berleant
ARPA: ai.berleant@r20.utexas.edu


------------------------------

Date: 17-JUN-1987 08:44
From: TOURETZKY@C.CS.CMU.EDU
Subject: new MacArthur fellow

The MacArthur Foundation of Chicago released a list of 32 new MacArthur Fellows
yesterday. One of the awardees is Dave Rumelhart. Congratulations, Dave.

According to the New York Times (6/16/87, p. 14), the fellowship carries a
stipend of $150,000 to $375,000 paid out over five years; the exact amount is
based on the recipient's age. It is given to "outstandingly talented and
promising individuals"
from all walks of life. The 233 MacArthur fellows
(including the latest group of 32) include poets, journalists, historians, art
and literary critics, mathematicians, scientists, teachers, and social
activists. The oldest winner this year was 82; the youngest 27.

------------------------------

Date: Thu, 2 Jul 87 16:35:30 edt
From: Sally C. Bemus <bemus@THEORY.LCS.MIT.EDU>
Subject: TOC Seminar -- Wednesday, July 8, -- Nick Littlestone

DATE: Wednesday, July 8, 1987
TIME: Refreshments: 1:15PM
LECTURE: 1:30PM
PLACE: NE43-512A

Learning Quickly When Irrelevant Attributes Abound: A New
Linear-Threshold Algorithm

Prof. Nick Littlestone
Department of Computer and Information Sciences
University of California at Santa Cruz

Valiant and others have studied the problem of learning various classes of
Boolean functions from examples. Here we discuss on-line learning of these
functions. In on-line learning, the learner responds to each example
according to a current hypothesis. Then the learner updates the hypothesis,
if necessary, based on the correct classification of the example. This is
the form of the Perceptron learning algorithm, in which updates to the
weights occur after each mistake. One natural measure of the quality of
learning in the on-line setting is the number of mistakes the learner makes.
For suitable classes of functions, on-line learning algorithms are available
that make a bounded number of mistakes, with the bound independent of the
number of examples seen by the learner. We present one such algorithm, which
learns disjunctive Boolean functions, and variants of the algorithm for
learning other classes of Boolean functions. The algorithm can be expressed
as a linear-threshold algorithm. A primary advantage of this algorithm is
that the number of mistakes that it makes is relatively little affected by
the presence of large numbers of irrelevant attributes in examples; we show
that the number of mistakes grows only logarithmically with the number of
irrelevant attributes. At the same time, the algorithm is computationally
time and space efficient.
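
The abstract does not spell out the update rule; the sketch below
(Python) assumes a multiplicative-update linear-threshold learner of
the kind described (commonly known as Winnow), with illustrative values
for the promotion factor, threshold, and example data:

    def predict(w, x, theta):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

    def update(w, x, y, y_hat, alpha=2.0):
        if y_hat == y:
            return w                 # no mistake, no change
        if y == 1:   # false negative: promote weights of active attributes
            return [wi * alpha if xi else wi for wi, xi in zip(w, x)]
        else:        # false positive: demote weights of active attributes
            return [wi / alpha if xi else wi for wi, xi in zip(w, x)]

    n = 6                                  # Boolean attributes
    theta = n / 2.0                        # threshold
    w = [1.0] * n                          # start with uniform weights
    target = lambda x: x[0] or x[2]        # hypothetical disjunction to learn

    mistakes = 0
    for x in [[0,1,0,1,1,0], [1,0,0,0,1,1], [0,0,1,1,1,0], [0,1,0,1,1,0]]:
        y = 1 if target(x) else 0
        y_hat = predict(w, x, theta)
        if y_hat != y:
            mistakes += 1
            w = update(w, x, y, y_hat)
    print("mistakes:", mistakes, "weights:", w)

Updating the weights multiplicatively, rather than additively as in the
Perceptron rule, is the feature usually credited for the logarithmic
dependence on the number of irrelevant attributes.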

Host: Prof. Ronald L. Rivest

------------------------------

Date: Wed, 8 Jul 87 08:49:40 PDT
From: Pat <wiley!pat@lll-lcc.arpa>
Subject: AAAI-87 Workshop: The Engineering of Neural-Based Architectures


AAAI-87 WORKSHOP:

THE ENGINEERING OF NEURAL-BASED ARCHITECTURES


In recent years there has been a resurgence of work in Neural-Based
Architectures, due to significant progress in solving some of the
more serious problems that limited the neural models of the 1960s. While
progress has been encouraging, nearly all research is currently done
using software simulations of neural networks applied to carefully
chosen problems in restricted domains. A major challenge facing the
field is to build REAL systems which have the power to deal with
REAL-WORLD applications. This workshop will discuss the implementation
of neurocomputers using the technology available today, "virtual"
neurocomputers that will be available soon, and the development of
VLSI and WSI technologies for neurocomputers of the future.

Cochairs: Wun C. Chiou Sr., Lockheed Palo Alto Research Center
Patrick W. Ransil, Lockheed Artificial Intelligence Center
Date: Monday, July 13, 1987, From 8:30 am to 12:30 pm
Place: South Campus Center, Room 316B, University of Washington,
Seattle, WA

Invited Speakers:
Joshua Alspector, Bell Communications Research, Morristown NJ
"A Neuromorphic VLSI Learning System"

John Lambe, Jet Propulsion Laboratories/CalTech, Pasadena CA
"Some Experimental Aspects Of Connective Circuits"

John Denker, AT&T Bell Laboratories, Holmdel, NJ
"A Highly Interconnected Architecture For Learning,
Adaptation, And Generalization"


Dan Hammerstrom, Oregon Graduate Center, Beaverton OR
"Wafer-Scale Integrated Neurocomputing: Do Two Wrongs
Make a Right?"


Douglas Reilly, Nestor, Providence RI
"Nestor Learning Systems: Multi-Modular Neural Networks"

Federico Faggin, Synaptics, San Jose, CA
"VLSI Neural Systems"

Robert Hecht-Nielsen, Hecht-Nielsen Neurocomputer Corporation,
San Diego, CA, "Neurocomputers: Architectures & Applications"


The workshop is open to all attendees of AAAI-87.


------------------------------

End of NEURON-Digest
********************
