NEURON Digest Mon Oct 5 23:35:02 CDT 1987 Volume 2 / Issue 22
Today's Topics:
Researchers in neural/connectionist robotics?
Neural Networks & Unaligned fields
Re: Neural Networks & Unaligned fields
NEURAL NETWORKS SIMULATIONS IN Smalltalk/LISP/(prolog)
Re: NEURAL NETWORKS SIMULATIONS IN Smalltalk/LISP/(prolog)
Need email names
Re: Neural Networks
references wanted
Neural Networks
AI Expert source for Hopfield Nets
Symposium Announcement
Technical report abstract...
----------------------------------------------------------------------
Date: Thu 3 Sep 87 09:32:24-PDT
From: nesliwa%telemail@ames.arpa (NANCY E. SLIWA)
Subject: Researchers in neural/connectionist robotics?
A colleague of mine is attempting to organize a session for the
American Controls Conference in neural/connectionist approaches/applications
to robotics (other than strictly image processing). Could anyone
suggest the names/addresses/phone numbers of researchers in this area?
Particularly other than Kuperstein, Jorgensen, and Pellionisz.
Thanks in advance.
Nancy Sliwa
MS 152D
NASA Langley Research Center
Hampton, VA 23665-5225
(804)865-3871
nancy@grasp.cis.upenn.edu
nesliwa%telemail@orion.arpa
------------------------------
Date: Thu 3 Sep 87 09:33:12-PDT
From: ihnp4!inuxc!iuvax!ndmath!milo@ucbvax.Berkeley.EDU (Greg Corson)
Organization: Math. Dept., Univ. of Notre Dame
Subject: Neural Networks & Unaligned fields
Ok, here's a quick question for anyone who's getting into Neural Networks.
If you set up the type of network described in BYTE this month, or the
type used in the program recently posted to the net, what happens if you
feed it an input image that is not aligned right?
For example, in the Byte article they demonstrate correct recall of an image
corrupted by randomly flipping a number of bits, simulating "noise". What
would happen if they just shifted the input image one or two bits to the left?
Would the network still recognize the pattern?
Greg Corson
...seismo!iuvax!ndmath!milo
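[Moderator's sketch: the question above can be made concrete with a minimal Hopfield-style associative memory. This is a generic Hebbian outer-product net, not the BYTE program itself, and the pattern size and noise level are illustrative choices.]

```python
import numpy as np

# A minimal Hopfield-style associative memory (Hebbian outer-product rule),
# sketching the kind of net under discussion -- not the BYTE program itself.
rng = np.random.default_rng(0)
N = 64
pattern = np.where(rng.random(N) < 0.5, -1, 1)  # one stored bipolar pattern

W = np.outer(pattern, pattern).astype(float)    # Hebbian weights
np.fill_diagonal(W, 0)                          # no self-coupling

def recall(state, steps=10):
    """Synchronous sign-threshold updates toward a stored attractor."""
    state = state.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# "Noise" as in the BYTE demo: flip 8 of the 64 units.
noisy = pattern.copy()
flip = rng.choice(N, size=8, replace=False)
noisy[flip] *= -1
print(np.array_equal(recall(noisy), pattern))   # True: recall corrects the noise

# "Misalignment": rotate the whole pattern two positions.
shifted = np.roll(pattern, 2)
print(pattern @ noisy, pattern @ shifted)
# The noisy copy still overlaps the stored template strongly (48 of 64 units
# agree), so it sits inside the basin of attraction.  The shifted copy has
# only chance-level overlap, so nothing pulls the net toward the shifted
# position -- exactly the failure mode raised in the question.
```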
------------------------------
Date: Thu 3 Sep 87 10:01:52-PDT
From: Ken Laws <Laws@kl.sri.com>
Subject: Re: Neural Networks & Unaligned fields
The current networks will generally fail to recognize shifted patterns.
All of the recognition networks I have seen (including the optical
implementations) correlate the image with a set of templates and then
use a winner-take-all subnetwork or a feedback enhancement to select
the best-matching template. Vision researchers were doing this kind
of matching (for character recognition, with the character known to
be centered in the visual field) back in the 50s and early 60s. Position
independence was then added by convolving the image and template, essentially
performing the match at every possible shift. This was rather expensive,
so Fourier, Hough, and hierarchical matching techniques were introduced.
Then came edge detection, shape description, and many other paradigms.
We don't have all the answers yet, but we've come a long way from the
type of matching currently implemented in neural networks.
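[Moderator's sketch: the correlate-then-winner-take-all scheme, and the convolve-at-every-shift fix, can be illustrated in a few lines. The templates and probe here are arbitrary random bipolar vectors, purely for illustration.]

```python
import numpy as np

# Correlate the input with each stored template and let a winner-take-all
# stage pick the best match; then restore position independence by matching
# at every circular shift -- at the cost of many more comparisons.
rng = np.random.default_rng(1)
N, K = 64, 4
templates = np.where(rng.random((K, N)) < 0.5, -1, 1)  # K bipolar templates

def winner(image):
    """Fixed-alignment match: one correlation per template."""
    return int(np.argmax(templates @ image))

def winner_any_shift(image):
    """Position-independent match: correlate at every circular shift."""
    best_score, best_k = -np.inf, -1
    for k, t in enumerate(templates):
        for s in range(N):
            score = t @ np.roll(image, s)
            if score > best_score:
                best_score, best_k = score, k
    return best_k

print(winner(templates[1]))        # 1: with known alignment, correlation works
probe = np.roll(templates[2], 5)   # template 2, shifted five positions
print(winner(probe))               # usually NOT 2: the aligned match fails
print(winner_any_shift(probe))     # 2: exhaustive-shift matching recovers it
```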
The advantage of the networks, particularly those implemented in analog
hardware, is speed. IF you have a problem for which alignment is known,
or IF you have time or hardware to try all possible alignments, or IF
your network is complex enough to store all templates at a sufficient
number of shifts, neural networks may be able to give you an off-the-shelf
recognizer that bypasses the need to research all of the pattern recognition
literature of the last decade.
I suspect that the above conditions will actually hold in a fair number
of engineering situations. Indeed, many of these applications have already
been identified by the signal processing community. Neural networks offer
a trainable alternative to DSP or acoustic convolution chips. Where rules
and explanations are appropriate, designers will use expert systems; otherwise
they will use neural networks and similar systems. Only the most difficult
and important applications will require development of customized reasoning
systems such as numerical or object-oriented simulations.
-- Ken
------------------------------
Date: Thu 3 Sep 87 23:33:49-PDT
From: hao!boulder!mikek@husc6.harvard.edu (Mike Kranzdorf)
Organization: University of Colorado, Boulder
Subject: Re: Neural Networks & Unaligned fields
I am not familiar with the net in Byte, but I assume it is a two-layer net,
like the one that was posted. If this is the case, shifted patterns will
not be recognized. It takes at least three layers for a net to have an
internal representation of the structure of an input pattern. A good
overview paper describing these kinds of conditions can be found in the
IEEE ASSP (Acoustics, Speech, and Signal Processing) Magazine April 1987,
Volume 4, Number 2, "An Introduction to Computing with Neural Nets" by
Richard P. Lippmann. The article focuses on categorizers, but is
informative about nets in general.
--mike
------------------------------
Date: Thu 3 Sep 87 23:34:12-PDT
From: plx!titn!jordan@sun.com (Jordan Bortz)
Organization: Higher Level Software, Piedmont, CA
Subject: NEURAL NETWORKS SIMULATIONS IN Smalltalk/LISP/(prolog)
Has anyone implemented any neural network simulations in any of the
above languages?
Huh?
Jordan
=============================================================================
Jordan Bortz Higher Level Software 1085 Warfield Ave Piedmont, CA 94611
(415) 268-8948 UUCP: (decvax|ucbvax|ihnp4)!decwrl!sun!plx!titn!jordan
=============================================================================
------------------------------
Date: Thu 3 Sep 87 23:34:32-PDT
From: hao!boulder!mikek@husc6.harvard.edu (Mike Kranzdorf)
Organization: University of Colorado, Boulder
Subject: Re: NEURAL NETWORKS SIMULATIONS IN Smalltalk/LISP/(prolog)
>Has anyone implemented any neural network simulations in any of the
>above languages?
Try P3 from UCSD Institute of Cognitive Science (LISP)
Contact David Zipser
You can find out more about it from the PDP books (Ch. 13 I believe)
--mike
------------------------------
Date: Thu 10 Sep 87 20:39:58-PDT
From: jr@io.att.com (j.ratsaby)
Subject: Need email names
Organization: AT&T, Middletown NJ USA
I would like to get email names of persons who deal with neural nets
in the CS dept of the University of Toronto and/or in the CS dept of Johns
Hopkins University in Baltimore.
thanks in advance,
joel
------------------------------
Date: 14 Aug 87 14:21:01 GMT
From: linus!alliant!sullivan@husc6.harvard.edu (Mike Sullivan)
Subject: Re: Neural Networks
[Excerpt from AIList V5 #199 - MTG]
In article <269@ndmath.UUCP>, milo@ndmath.UUCP (Greg Corson) writes:
> I am looking for some information and/or demo programs on Neural Networks
> and how to simulate them on a computer. Any demo programs would be greatly
> appreciated even if they don't do much.
Neuraltech Inc has a product out for beta test which runs on machines from
PCs to Crays. Dr. John Voevodsky is the developer of this product, which
is modeled on the biological processes of the human brain cell. You may not
be in the market for such a product, but it might help you to know what others
are doing.
for info on PLATO/ARISTOTLE contact:
Dr John Voevodsky
Neuraltech Inc
177 Goya Road
Portola Valley, California 94025
#include <std/disclaimer>
______
/ \ \
Michael J Sullivan / \____\ Alliant
sullivan@alliant.uucp / / \ ComputerSystemsCorporation
/____/_______\
------------------------------
Date: 21 Aug 87 12:47:37 GMT
From: Roel Wieringa <mcvax!botter!roelw@seismo.css.gov>
Organization: VU Informatica, Amsterdam
Subject: references wanted
I am looking for short and accessible references on connection machines,
Boltzmann machines, etc. E.g. published articles, introductions and overviews
in proceedings or books, etc.
Can anyone give me some hints?
Thanks in advance.
Roel Wieringa
Roelw@cs.vu.nl
------------------------------
Date: Fri, 10 Jul 87 11:04:59 +0200
From: mcvax!idefix.laas.fr!helder@seismo.CSS.GOV (Helder Araujo)
Subject: Neural Networks
I am just starting to work on a vision system, for which I am
considering several different architectures. I am interested in studying the
utilization of a neural network in such a system. My problem is that I am
lacking information on neural networks. I would be grateful if anyone could
suggest a bibliography and references on neural networks. As I am not
a regular reader of AIlist I would prefer to receive this information
directly. My address:
mcvax!inria!lasso!magnon!helder
I will select the information and put it on AIlist.
Helder Araujo
LAAS
mcvax!inria!lasso!magnon!helder
7, ave. du Colonel-Roche
31077 Toulouse
FRANCE
------------------------------
Date: 2 Jul 87 20:45:24 GMT
From: ucsdhub!dcdwest!benson@sdcsvax.ucsd.edu (Peter Benson)
Subject: AI Expert source for Hopfield Nets
I am looking for the source mentioned in Bill Thompson's
article on Hopfield Nets in the July, 1987 issue of
AI Expert magazine. At one time, someone was posting all the
sources, but has, apparently, stopped. Could that person,
or some like-minded citizen, post the source for this
Travelling Salesman solution?
Thanks in advance !!
--
Peter Benson | ITT Defense Communications Division
(619)578-3080 | 10060 Carroll Canyon Road
ucbvax!sdcsvax!dcdwest!benson | San Diego, CA 92131
dcdwest!benson@SDCSVAX.EDU |
------------------------------
Date: Tue, 8 Sep 87 15:24:10 CDT
From: FETZ%UWALOCKE@WISCVM.WISC.EDU
Subject: Symposium Announcement
NEURAL NETWORK MODELS AND MECHANISMS OF PARALLEL DISTRIBUTED PROCESSING
A Symposium of the Fall Meeting of the American Physiological Society
8 AM - Noon, Oct. 15, 1987
Town and Country Hotel, San Diego, CA
EBERHARD E. FETZ (U of Wash): Chairman's introduction: Towards principles of
parallel distributed processing in the nervous system.
JAMES A. ANDERSON (Brown U): Behavioral implications of distributed
processing.
JOHN J. HOPFIELD (Cal Tech): Neural computation and model "neural" circuits.
DAVID W. TANK (Bell Labs): Model neural circuits and the detection of
time-varying stimuli.
TERRENCE J. SEJNOWSKI (Johns Hopkins): Studies of distributed information
processing with neural network models.
------------------------------
Date: Wed 9 Sep 87 11:11:51-CDT
From: Jim Anderson <ANDERSON@maximillion.cp.mcc.com>
Subject: Technical report abstract...
___________________________________________________________________________
MCC/EI-259-87
A MEAN FIELD THEORY LEARNING ALGORITHM
FOR NEURAL NETWORKS
Carsten Peterson and James R. Anderson
Microelectronics and Computer Corporation
3500 West Balcones Center Drive
Austin, TX 78759-6509
Abstract:
Based on the Boltzmann Machine concept, we derive a learning algorithm in
which time-consuming stochastic measurements of correlations are replaced
by solutions to deterministic mean field theory equations. The method is
applied to the XOR, encoder, and line symmetry problems with substantial
success. We observe speedup factors ranging from 10 to 30 for these
applications, and significantly better learning performance in general.
Requests for copies should be sent to HINER@MCC.COM
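[Moderator's sketch: the mean-field substitution the abstract describes can be illustrated numerically. This is a guess at the general scheme only, not the report's algorithm; the weights, biases, and network size below are arbitrary illustrative values.]

```python
import numpy as np

# Instead of estimating unit averages by lengthy Gibbs sampling, solve the
# deterministic mean-field equations  m_i = tanh(sum_j w_ij m_j + b_i)
# by fixed-point iteration.
rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n)) * 0.1
W = (W + W.T) / 2                  # symmetric couplings, as in a Boltzmann machine
np.fill_diagonal(W, 0)
b = rng.standard_normal(n) * 0.1

m = np.zeros(n)                    # mean activations, initialized at zero
for _ in range(200):
    m_new = np.tanh(W @ m + b)
    if np.max(np.abs(m_new - m)) < 1e-10:
        m = m_new
        break
    m = m_new

print(np.round(m, 3))
# In a learning algorithm of this kind, the pairwise correlations <s_i s_j>
# that the Boltzmann rule needs would then be approximated by the products
# m_i * m_j, in both the clamped and the free-running phase.
```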
------------------------------
End of NEURON-Digest
********************