Neuron Digest   Tuesday,  7 Feb 1989                Volume 5 : Issue 8 

Today's Topics:
genetic search and neural nets
PDP III code
ART help
Wanted: ART simulator
ART 1/ ART 2 source code from the Center for Adaptive Systems
Back Propagation and ART
Weight decay ... a reply
Re: Neuron Digest V5 #4
Neural Network Evaluation
Hidden Markov Chains + Multi-Layer Perceptrons???
UCSD Cog Sci faculty opening
addendum to UCSD Cog Sci faculty opening


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: genetic search and neural nets
From: Mike Rudnick <rudnick@cse.ogc.edu>
Date: Sat, 14 Jan 89 15:05:27 -0800

I am a phd candidate in computer science at Oregon Graduate Center. My
research interest is in using genetic search to tackle artificial neural
network (ANN) scaling issues. My particular orientation is to view
minimizing interconnections as a central issue, partly motivated by VLSI
implementation issues.

I am starting a mailing list for those interested in applying genetic search
to/with/for ANNs. Mail a request to Neuro-evolution-request@cse.ogc.edu to
have your name added to the list.

A bibliography of work relating artificial neural networks (ANNs) and
genetic search is available. It is organized/oriented for someone familiar
with the ANN literature but unfamiliar with the genetic search literature.
Send a request to Neuro-evolution-request@cse.ogc.edu for a copy. If there
is sufficient interest I will post the bibliography here.
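To give readers unfamiliar with genetic search a feel for the approach, here is a toy sketch (purely illustrative, not Rudnick's actual method): a genetic search over binary connection masks whose fitness rewards task success while penalizing every connection kept, reflecting the VLSI wiring cost mentioned above. The "essential" connection set standing in for task performance is a made-up stand-in.

```python
import random

random.seed(0)

N_CONN = 16          # candidate connections in a small network
POP    = 30
GENS   = 60

# Hypothetical fitness: reward "solving the task" (here, simply keeping a
# fixed set of essential connections) and charge 0.1 per connection kept.
ESSENTIAL = {1, 4, 7, 10}

def fitness(mask):
    solved = all(mask[i] for i in ESSENTIAL)
    return (10.0 if solved else 0.0) - 0.1 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_CONN)      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_CONN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]               # elitism: keep the top half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(sum(best), "connections kept; essential connections present:",
      all(best[i] for i in ESSENTIAL))
```

Because any mask containing the essential connections strictly dominates any mask without them, selection pressure keeps the task solved while the per-connection penalty prunes the rest.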

Mike Rudnick                    CSnet:   rudnick@cse.ogc.edu
Computer Science & Eng. Dept.   ARPAnet: rudnick%cse.ogc.edu@relay.cs.net
Oregon Graduate Center          BITNET:  rudnick%cse.ogc.edu@relay.cs.net
19600 N.W. von Neumann Dr.      UUCP:    {tektronix,verdix}!ogccse!rudnick
Beaverton, OR 97006-1999        (503) 690-1121 x7390

------------------------------

Subject: PDP III code
From: HAHN_K%DMRHRZ11.BITNET@CUNYVM.CUNY.EDU
Date: Mon, 16 Jan 89 16:17:38 +0700

Regarding questions about the PDP III code for Macs etc.:

I've just finished adapting the sources (well, most of them ;-) and
polished them a bit for the Atari ST computer. The programs seem to work
quite well, although I haven't yet done extensive tests. Some little
features don't work yet; some will be fixed in the near future, and some
improvements (perhaps graphics and the like...) will be done in the long
run (read: IFF I find the time). Anyway, if someone wants the code, and if
nobody claims this to be illegal, let me know.

Keep on back-propagating,
Klaus.


Klaus Hahn Bitnet: HAHN_K@DMRHRZ11

Department of Psychology
Gutenbergstr. 18
University of MARBURG
D-3550 Marburg

West-Germany


------------------------------

Subject: ART help
From: pastor@bigburd.PRC.Unisys.COM (Jon Pastor)
Date: 17 Jan 89 17:10:17 +0000

I would like to experiment with ART-1 and ART-2, but am having some
difficulty in figuring out how to implement them (particularly the latter).
I have read all the basic references (e.g., for ART-2, the Applied Optics
article and the material in ICNN-87), and I think that I understand what's
going on -- but going from the equations given in the papers to an
implementation has been a problem. If anyone has any suggestions, I'd
appreciate them. I am looking for any and all of:

- additional references that focus on the operationalization of
  the theory
- an ART simulator, preferably written in a source language with
  which I'm familiar (C, LISP, Prolog, Pascal) so that I can
  analyze the implementation; I have access to SUNs, VAXen,
  and Symbolics, TI, and Xerox LISPMs
- notes, comments, and observations from someone who has successfully
  (or even unsuccessfully) implemented ART
- any other information that you may consider useful or relevant

I would also like to hear from anyone who is in the same situation in which
I find myself; if there is sufficient interest, and sufficient response, I
will either distribute or post whatever information I receive. If you have
information (or code) that you don't want to spread too widely (say, you
have a simulator that you wouldn't mind sharing with one person who promises
not to bug you about bugs, but do not want to distribute), I will gladly
omit it from any subsequent broadcasts -- or refer inquiries directly to you
-- if you so request.

Thanks in advance.
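For others stuck at the same equations-to-code step, a minimal fast-learning ART-1 sketch may help. This is illustrative only: the published ART-1 has separate bottom-up and top-down weight sets and an explicit search/reset cycle, which are collapsed here into a ranked search with a vigilance test over binary prototypes.

```python
import numpy as np

def art1(patterns, rho=0.7):
    """Minimal fast-learning ART-1 sketch for binary inputs.

    Each category is a binary prototype vector. A pattern joins the
    best-matching category that passes the vigilance test; otherwise
    it founds a new category.
    """
    categories = []                        # top-down prototype vectors
    labels = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        # Rank categories by the choice function |p AND w| / (0.5 + |w|)
        order = sorted(range(len(categories)),
                       key=lambda j: -(p & categories[j]).sum()
                              / (0.5 + categories[j].sum()))
        for j in order:
            match = (p & categories[j]).sum() / p.sum()
            if match >= rho:               # vigilance test passed: resonance
                categories[j] = p & categories[j]   # fast learning
                labels.append(j)
                break
        else:
            categories.append(p)           # no resonance: new category
            labels.append(len(categories) - 1)
    return labels, categories

labels, cats = art1([[1, 1, 1, 0, 0],
                     [1, 1, 0, 0, 0],
                     [0, 0, 1, 1, 1],
                     [0, 0, 0, 1, 1]])
print(labels)   # -> [0, 0, 1, 1]: two clusters form
```

Note how the vigilance parameter rho controls granularity: raising it toward 1.0 forces finer categories, one of the properties the Applied Optics papers emphasize.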

------------------------------

Subject: Wanted: ART simulator
From: csrobe@icase.edu (Charles S. [Chip] Roberson)
Date: Thu, 19 Jan 89 11:33:46 -0500

Does anybody know of a working simulator of Grossberg's ART model,
preferably in C or Pascal? Sun or PC based software is acceptable.

Annette Hoag, College of William and Mary
Please reply to csrobe@icase.edu since our mailer is down.

------------------------------

Subject: ART 1/ ART 2 source code from the Center for Adaptive Systems
From: John Merrill <merrill@bucasb.bu.edu>
Date: Fri, 20 Jan 89 11:21:51 -0500

[[ Editor's note: I sent the previous queries on to John, so the ART folks
could reply in kind. Here is the "official" response. -PM ]]

As a general rule, Gail Carpenter and her co-workers on ART do not
distribute their source code. Obviously, they have no objections to its
contents being distributed; my impression is rather that the code is
somewhat less than elegant. The version in use here is written in an
archaic form of FORTRAN that runs on an IBM 3090, and it has grown over the
years in an erratic fashion.

In the past, however, she has been willing to distribute the data she used
in published simulations so that individuals can check their implementations
against her results. She has also been willing to answer questions like "My
routine doesn't work. It seems to break down like this..." (Apparently,
there are two major places where people make mistakes in implementation.
Usually, a phone call will resolve the problems.)

Dr. Carpenter can be reached at:

Dr. Gail Carpenter
Center for Adaptive Systems
111 Cummington Street, Second Floor
Boston University
Boston, Massachusetts 02215

(617) 353-7857

Electronic mail sent to me will also (eventually) reach her, although
I don't guarantee rapid turn-around.

John Merrill | ARPA: merrill@bucasb.bu.edu
Center for Adaptive Systems |
111 Cummington Street |
Boston, Mass. 02215 | Phone: (617) 353-5765

------------------------------

Subject: Back Propagation and ART
From: Jon Ryshpan <jon@nsc.NSC.COM>
Date: Wed, 01 Feb 89 13:58:38 -0800

As (probably) a comparative onlooker to NN theory, I receive interesting and
somewhat conflicting signals from the literature. In particular, I read in
Grossberg's "Neural Networks and Natural Intelligence" a focused critique of
Back Propagation, whereas I also sense that BP R&D is in full bloom.

Grossberg's critiques (in case you don't have the book) are:

1. BP is unstable in complex environments and requires an omniscient teacher.
ART has neither defect.

2. BP cannot self-scale or construct prototypes.
ART has neither defect.

3. Weight transport in BP is physically unrealisable (in the brain).
ART does not back-propagate bottom-up LTM traces.

4. BP matching only changes the LTM traces.
ART deforms exemplars towards prototypes, and can achieve fast processing
related to STM transformations.

5. Associative map learning is not self-stabilising in BP.
ART does not have this defect.

6. BP cannot generalise to a model capable of computing temporal invariances.
ART does not have this defect.

In view of this (to my knowledge) unchallenged and damning critique, why is
it that BP - as opposed to ART - attracts such interest?


Andrew Palfreyman, MS D3969     PHONE: 408-721-4788 work
National Semiconductor                 408-247-0145 home
2900 Semiconductor Dr.          there's many a slip
P.O. Box 58090                  'twixt cup and lip
Santa Clara, CA 95052-8090

DOMAIN: andrew@logic.sc.nsc.com
ARPA:   nsc!logic!andrew@sun.com
USENET: ...{amdahl,decwrl,hplabs,pyramid,sun}!nsc!logic!andrew

------------------------------

Subject: Weight decay ... a reply
From: kanderso@BBN.COM
Date: Thu, 05 Jan 89 16:30:15 -0500

I enjoyed John's summary of weight decay, but it raised a few questions.
Just as John did, I'll be glad to summarize the responses to the group.

1. <hinton@ai.toronto.edu> mentioned that "Weight-decay is a version of
what statisticians call 'Ridge Regression'." What do you mean by "version":
is it exactly the same, or just slightly different? I think I know what
Ridge Regression is, but I don't see an obvious strong connection. I see a
weak one, and after I think about it more maybe I'll say something about it.

The ideas behind Ridge regression probably came from Levenberg and Marquardt
who used it in nonlinear least squares:

Levenberg K., A Method for the solution of certain nonlinear problems
in least squares, Q. Appl. Math, Vol 2, pages 164-168, 1944.

Marquardt, D.W., An algorithm for least squares estimation of
non-linear parameters, J. Soc. Industrial and Applied Math.,
11:431-441, 1963.
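For what it's worth, in the linear single-layer case the connection is exact: gradient descent on squared error plus a quadratic weight penalty (weight decay) has the ridge-regression solution as its unique fixed point. A small numerical check of that claim (an illustrative sketch, not from the discussion above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # inputs
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)   # noisy linear targets

lam = 1.0   # ridge penalty strength == weight-decay coefficient

# Ridge regression closed form: (X'X + lam*I)^-1 X'y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Gradient descent on sum-of-squares error with weight decay:
#   w <- w - eta * (X'(Xw - y) + lam*w)
w = np.zeros(3)
eta = 0.005
for _ in range(5000):
    w -= eta * (X.T @ (X @ w - y) + lam * w)

print(np.allclose(w, w_ridge, atol=1e-6))    # the two solutions coincide
```

For multi-layer nets with nonlinear units the correspondence is no longer exact, which may be what "a version of" was meant to hedge.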

2. John quoted Dave Rumelhart as saying that standard weight decay
distributes weights more evenly over the given connections, thereby
increasing robustness. Why does smearing out large weights increase
robustness? What does robustness mean here, the ability to generalize?

k

------------------------------

Subject: Re: Neuron Digest V5 #4
From: artzi@cpsvax.cps.msu.edu (Ytshak Artzi - CPS)
Organization: Michigan State University, Computer Science Department
Date: Fri, 20 Jan 89 10:31:26 -0500

something striking (psychologically...):

Is it possible that research on the brain can itself cause an excitation of
the brain's neurons? During the last two weeks I read a lot of papers and
tried to derive some algorithms; suddenly I noticed that I started to
remember various events from the past, apparently without any reason. This
made me think that maybe the process of doing research about our brain
causes the brain to... dig into itself.

I am very curious to know whether other people have had the same
experience, or maybe they'll start to notice it now.

If I am right, then maybe the more we think... the smarter we get...

I am looking forward to your comments!!

Izik.

(Artzi Ytshak, Comp. Sci., MSU
artzi@cpsvax.cps.msu.edu)

[[ Editor's Note: I actually collected a series of speculations on this
subject and was saving them for a special issue. However, traffic and my
schedule have been heavy enough to forestall that issue. Despite the aging
of the original articles, I'll still try... -PM ]]

------------------------------

Subject: Neural Network Evaluation
From: rabin@caen.engin.umich.edu (Rabin Andrew Sugumar)
Date: Fri, 20 Jan 89 15:36:48 -0500


I am working on the evaluation of neural networks, trying to answer
questions like how they compare to conventional computers on measures like
time and space utilisation. Most of the papers I have seen on Neural
Networks describe small-scale implementations applied to problems which
conventional computers can handle easily, though they have to be programmed
specifically for doing so. Can someone give references to work done using
big neural networks (I mean networks with thousands or millions of neurons)
applied to problems which are difficult for conventional computers? I feel
problems like character recognition on a 4x4 grid are not in this category.

Any ideas or suggestions are welcome.

E.mail : rabin@caen.engin.umich.edu
Mail : Rabin Sugumar
1841, Shirley lane #7A2
Ann Arbor, MI 48105
Ph : (313)-769-5230

Rabin Sugumar


------------------------------

Subject: Hidden Markov Chains + Multi-Layer Perceptrons???
From: Thanasis Kehagias <ST401843%BROWNVM.BITNET@VMA.CC.CMU.EDU>
Date: Sat, 21 Jan 89 00:06:08 -0500

I was reading through the abstracts of the Boston 1988 INNS conference and
noticed H. Bourlard and C. Wellekens' paper on the relations between Hidden
Markov Models and Multi-Layer Perceptrons. Does anybody have any pointers
to papers on the subject by the same (preferably) or other authors? Or the
e-mail addresses of these two authors?


Thanasis Kehagias

------------------------------

Subject: UCSD Cog Sci faculty opening
From: elman@amos.ling.ucsd.edu (Jeff Elman)
Date: Fri, 27 Jan 89 22:24:24 -0800


ASSISTANT PROFESSOR
COGNITIVE SCIENCE
UNIVERSITY OF CALIFORNIA, SAN DIEGO

The Department of Cognitive Science at UCSD expects to receive
permission to hire one person for a tenure-track position at the
Assistant Professor level. The Department takes a broadly based
approach to the study of cognition, including its neurological
basis, in individuals and social groups, and machine
intelligence. We seek someone whose interests cut across
conventional disciplines. Interests in theory, computational
modeling (especially PDP), or applications are encouraged.

Candidates should send a vita, reprints, a short letter
describing their background and interests, and names and
addresses of at least three references to:

Search Committee
Cognitive Science, C-015-E
University of California, San Diego
La Jolla, CA 92093

Applications must be received prior to March 15, 1989. Salary
will be commensurate with experience and qualifications, and will
be based upon UC pay schedules.

Women and minorities are especially encouraged to apply. The
University of California, San Diego is an Affirmative
Action/Equal Opportunity Employer.


------------------------------

Subject: addendum to UCSD Cog Sci faculty opening
From: norman%cogsci@ucsd.edu (Donald A Norman-UCSD Cog Sci Dept)
Date: Sun, 29 Jan 89 10:36:36 -0800


Jeff Elman's posting of the job at UCSD in the Cognitive Science Department
was legally and technically accurate, but he should have added one important
sentence:

Get the application -- or at least a letter of interest -- to us
immediately.

We are very late in getting the word out, and decisions will have to be made
quickly. The sooner we know of the pool of applicants, the better.
(Actually, I now discover one inaccuracy -- the ad says we "expect to
receive permission to hire..." In fact, we now do have that permission.)

If you have future interests -- say you are interested not now, but in a
year or two or three -- that too is important for us to know, so tell us.

don norman


------------------------------

End of Neurons Digest
*********************
