Neuron Digest Tuesday, 8 Nov 1988 Volume 4 : Issue 19
Today's Topics:
Administrivia
Re: ART source readings?
Re: ART source readings?
Book Announcement
Re: Cyberspace Implementation Issues
Learning with NNs
Re: MacBrain
Neural Network Companies
Re: Neural Network Companies
Re: Neural Network Companies
Neuron resolution
NIPS computer demos
Outlets for theoretical work
Students for a Better NN Class
Suggestions needed...
Re: Wanted: info about GENESIS program
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
------------------------------------------------------------
Subject: Administrivia
From: Neuron-Digest Moderator Peter Marvit
Date: Tue, 8 Nov 88 11:31:27 pst
[[ The digest has been a bit delayed, partially due to the "worm" which
caused our site to be isolated from the Internet until we were sure that it
was under control. I'll try to get up to date in the next week. In the
meantime, any mail you might have sent could have bounced. Please send
it again.
I received one response to my "editorial" from last issue. I welcome
additional feedback as to editorial policy. -PM ]]
------------------------------------------------------------
Subject: Re: ART source readings?
From: demers@beowulf.ucsd.edu (David E Demers)
Organization: EE/CS Dept. U.C. San Diego
Date: 23 Oct 88 00:58:04 +0000
>Could someone please email (or otherwise send) me references to the
>basic papers, books, whatever about Grossberg's ART?
I thought I'd post my response because it may be of wide interest.
Perhaps the best place to start on ART would be a couple of papers from
ICNN-87, found in the proceedings:
Carpenter & Grossberg, ART 2: Self-Organization of stable category
recognition codes for analog input patterns.
Carpenter & Grossberg, Invariant pattern recognition and recall by an
attentive self-organizing ART architecture in a nonstationary world.
as well as some other papers from the same session.
For the beginnings, I think
Grossberg, Adaptive pattern classification and universal recoding, II:
Feedback, expectation, olfaction and illusions. in Biological Cybernetics,
23, (1976) 187-202.
is a good paper showing the underpinnings of ART. I think that this paper
is reprinted in the collection of seminal works put together by James
Anderson, Neurocomputing: Foundations of Research. MIT Press, 1988.
The bibliography at the end of the first two papers, above, will give you
some more to look at.
*Personal caveat* I think Grossberg's work is very important; however, it
takes a long time to read his papers. They are pithy...
Dave DeMers
UCSD Dept. of Computer Science & Engineering
La Jolla, CA 92093
(619) 534-6254
demers@cs.ucsd.edu
------------------------------
Subject: Re: ART source readings?
From: bph@buengc.BU.EDU (Blair P. Houghton)
Organization: Boston Univ. Col. of Eng.
Date: 26 Oct 88 01:47:29 +0000
>>Could someone please email (or otherwise send) me references to the
>>basic papers, books, whatever about Grossberg's ART?
Wot Luck! I was flipping through Kandel and Schwartz just now, and what
should fall out but a library Recall Notice for Grossberg's compendium of
papers, _Neural_Networks_and_Natural_Intelligence_,
call number QP
363.3
N44
--Blair
"Don't recall me, I'll recall you,
if I can get my Natural Intelligence
to adapt and resonate simultaneously..."
------------------------------
Subject: Book Announcement
From: pattis@june.cs.washington.edu (Richard Pattis)
Organization: U of Washington, Computer Science, Seattle
Date: 26 Oct 88 04:20:38 +0000
A new book, "Cognizers: Neural Networks and Machines that Think," about
neural networks and the community that uses them, has just become
available at my local bookstore. The authors are Johnson and Brown; the
publisher is John Wiley and Sons. The title should tell you where the
authors are going.
------------------------------
Subject: Re: Cyberspace Implementation Issues
From: jdb9608@ultb.UUCP (J.D. Beutel )
Organization: South Henrietta Institute of Technology (Info Systems)
Date: 16 Oct 88 20:17:55 +0000
In article <10044@srcsip.UUCP> lowry@srcsip.UUCP () writes:
>
>
>There's been a lot of discussion recently on how something (kind of) like
>c-space might be implemented. The conventional wisdom seems to be that
>you'd need super-high res color monitors and a graphics supercomputer
>providing real-time images.
>
>It seems to me that kind of equipment would only be needed if you were
>going to funnel all the info into an eye. I recall reading somewhere
>that the nerves behind the retina do preprocessing on images before
>sending the data down the optic nerve. If you could "tee" into the
>optic nerve, it seems like you could feed in pre-digested data at
>a much lower rate.
>
>Apologies if this idea/topic has been previously beaten to death.
Beaten to death? Nonsense!
I've heard a lot about neural networks, artificial retinas in particular.
Research is producing, on the dry side, theories about how machines can
see, and conversely, on the wet side, how we ourselves see. All the
theories I've heard of concur that the neurons which react immediately to
light feed, as groups, into other neurons which react to higher forms like
lines and dots and movement.
But, while I think that the resultant information is more useful, I'd also
guess that there is more of that information than there was raw information
from which it was derived.
For example:
the silicon retina that some company (I don't remember the name) is working
on with Carver Mead: every 6 light-sensitive neurons are sampled by 3
edge-sensitive neurons (up:down, (PI/4):(5PI/4), and (3PI/4):(7PI/4)).
However, all the light-sensitive neurons are arranged in a hexagonal
tessellation such that each neuron is part of 3 hexagons. Therefore, as the
number of light-sensitive neurons increases, the ratio of edge-sensitive to
light-sensitive neurons approaches 1. Additionally, there are other higher
forms, like dots and spots and motion in various directions, that will all
be using those same light-sensitive neurons as input.
That's why I think that "pre-digested data" might be 10 times more massive
than the raw visual input. Of course, one could try to digest the data
further, transmitting boxes and circles and motion paths as gestalt instead
of transmitting the lines and corners that make them up. But, the further
you digest the data, the deeper into the brain you must go. Pixels make up
lines; lines make up corners; lines and corners make up squares and
triangles; squares and triangles make up the picture of a house. The
theories I've heard of agree that we are all born with the neurons
pre-wired (or nearly so) to recognize lines, but I've heard of none that
suggest that we are pre-wired with neurons that recognize a box with a
triangle on top. Instead, we've learned to recognize a "house" because
we've seen a lot of them when we were young. The problem is that the way I
learn "house" might be different from the way you learn "house."
So, a video screen is a video screen to two different people, but a "tee
into the optic nerve" would have to be very different for two different
people, depending on how far back into the brain you jacked in. The system
would have to be dynamic, since people learn as they age; what a house is
to you at age 10 is not what a house is to you at age 20. Simstim and
consensual hallucinations are taken for granted in cyberpunk, and I took
them for granted too. The more I think about it, however, the less
probable it seems.
I'm cross-posting my lame followup to the neural-nets group in the hope
that someone there will have some comment or idea on how a computer could
possibly generate a consensual hallucination for its operator, ideally
entirely within the operator's mind, as opposed to controlling holograms and
force fields which the operator would 'really' see around him.
11011011
------------------------------
Subject: Learning with NNs
From: Dario Ringach <dario%TECHUNIX.BITNET@CUNYVM.CUNY.EDU>
Date: Wed, 19 Oct 88 13:36:32 +0200
Has anyone tried to approach the problem of learning in NNs from a
computability-theory point of view? For instance, suppose we use a
multilayer perceptron for classification. What is the class of
discrimination functions learnable from a polynomial number of examples
such that the probability of misclassification will be less than P (using a
given learning algorithm, such as back-prop)?
It seems to me that questions of this type matter if we really
want to compare different learning algorithms and computational
models.
Does anyone have references to such work? Any references will be
appreciated!
Thanks in advance!
Dario.
- -------------------------------------------------------------
BITNET: dario@techunix | "Living backwards!" Alice repeated in great
Dario Ringach | astonishment. "I never heard of such a thing!"
Retner 12/7 |
32819 Haifa, | "--But there's one great advantage in it, that one's
ISRAEL | memory works both ways" The Queen remarked.
- -------------------------------------------------------------
------------------------------
Subject: Re: MacBrain
From: hsg@romeo.cs.duke.edu (Henry Greenside)
Date: 25 Oct 88 19:48:31 +0000
I would recommend against anyone purchasing MacBrain. Several copies were
purchased here at Duke University for evaluation. MacBrain had a
reasonable mouse interface for placing neurons and for linking neurons, and
had built-in rules for learning such as the delta-rule, back-propagation,
and the Boltzmann machine. There was no language that allowed complicated
networks to be set up, so drawing and linking more than about ten neurons
was tedious and impractical. The program is slow, making scaling studies
also impractical.
The worst part of the product was that it was extremely buggy and many
advertised features had not been implemented. Repeated calls to the makers
of MacBrain at Neuronics led to promises that bug-free versions would be
available any day now, but they never arrived. Neuronics also refused to
refund our money.
Try another product from a more trustworthy company.
Henry Greenside
------------------------------
Subject: Neural Network Companies
From: rravula@wright.EDU (R. Ravula)
Organization: Wright State University, Dayton OH, 45435
Date: 25 Oct 88 15:30:59 +0000
About a month ago, I asked for a list of neural network companies. I am
posting the only response I got.
- ------------------------------------------------------
From: ames!ucsd!pnet12.cts.com!bstev (Barry Stevens)
To: pnet101!pnet01!crash!osu-cis!wright!rravula
Subject: neural net companies
Status: RO
HNC, Inc. San Diego CA 619-546-8877 (Dr. Robert Hecht-Nielsen)
hardware - coprocessor board, supporting software
SAIC Science Applications, Inc. San Diego - don't have phone
hardware - coprocessor, supporting software
Nestor, Inc. Providence, Rhode Island - don't have phone
software - neural nets and applications
AI Ware, Cleveland, Ohio - don't have phone
coprocessor board, software
NeuralTech, Portola Valley, CA - don't have phone
software - "English" language for generating nets
Applied AI Systems, Inc. (My company) San Diego
consulting on commercial applications of neural nets, using them in a
company.
UUCP: {crash ncr-sd}!pnet12!bstev
ARPA: crash!pnet12!bstev@nosc.mil
INET: bstev@pnet12.cts.com
- ------------------------------------------------------
Thank you, Mr. Stevens.
- --
- -------------------------- Ramesh Ravula ------------------------
rravula%wright.edu@csnet-relay | Wright State Research Center
...!osu-cis!wright!rravula | 3171 Research Boulevard
(513) 259-1392 | Kettering, OH 45420
------------------------------
Subject: Re: Neural Network Companies
From: rcsmith@anagld.UUCP (Ray Smith)
Organization: Analytics, Inc., Columbia, MD
Date: 27 Oct 88 14:27:23 +0000
In article <360@thor.wright.EDU> rravula@wright.EDU (R. Ravula) writes:
>SAIC Science Applications, Inc. San Diego - don't have phone
> hardware - coprocessor, supporting software
Some info we have on SAIC follows:
Science Applications International Corporation (SAIC)
Sigma Neurocomputer Systems Division
Jennifer Humphrey, Sales Manager
10260 Campus Point Drive (MS 71)
San Diego, CA 92121
Voice: 619-546-6290
FAX: 619-546-6777
Hope this helps.
Ray
- --
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Ray Smith | ...!uunet!mimsy!aplcen!\
Analytics, Inc. | ...!netsys!---anagld!rcsmith
Suite 200 | ...!ethos! /
9891 Broken Land Parkway |
Columbia, MD 21046 | Voice: (301) 381-4300 Fax: (301) 381-5173
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
------------------------------
Subject: Re: Neural Network Companies
From: muscarel@uicbert.eecs.uic.edu
Date: 04 Nov 88 05:45:00 +0000
> /* ---------- "Neural Network Companies" ---------- */
> About a month ago, I asked for a list of neural network companies. I am
> posting the only response I got.
The magazine AI Expert has been running an ongoing series of articles on
neural networks and connectionism over the past year or so. The August 1988
issue (Vol. 3, No. 8) was devoted entirely to this topic. The "Software
Review" article in that issue (pp. 73-85) was entitled "12-Product
Wrap-Up: Neural Networks" and gave descriptions and reviews of a number
of neural network software products. Although the title indicates that 12
products are reviewed, an extensive table comparing features of the
reviewed products has 13 entries. There is also a table of software vendors
listing addresses and phone numbers.
If you can't find this article I could mail you a copy.
Tom Muscarello
Dept EECS
Electronic Mind Control Laboratory
Univ. of IL at Chicago (mc/154)
Chicago, IL 60680
: muscarel@uicbert.eecs.uic.edu
------------------------------
Subject: Neuron resolution
From: rao@enuxha.eas.asu.edu (Arun Rao)
Organization: Arizona State Univ, Tempe
Date: 31 Oct 88 16:27:01 +0000
I'm studying the theoretical capabilities of neural
systems, and I need information concerning the resolution of neurons. For
example, what is the order of variance in measured firing frequencies?
This could be a measure of inherent uncertainty, and hence of resolution.
Also, how accurate are the methods/instruments used to make such
measurements?
None of the work I've read so far addresses these issues, and I
would be grateful if someone could post/e-mail potentially useful
references. I will post a summary if there is sufficient interest.
- - Arun Rao
rao@enuxha.asu.edu
rao%enuxha.asu.edu@relay.cs.net
BITNET: agaxr@asuacvax
------------------------------
Subject: NIPS computer demos
From: jbower@bek-mc.caltech.edu (Jim Bower)
Date: Tue, 01 Nov 88 13:38:34 -0800
Concerning: Software demonstrations at NIPS
Authors presenting papers at NIPS are invited to demo any relevant software
either at the meeting itself, or during the post-meeting workshop. The
organizers have arranged for several IBMs and SUN workstations to be
available. For information on the IBMs contact Scott Kirkpatrick at
Kirk@IBM.COM.
Two SUN 386i workstations will be available. Each will have a 1/4-inch
cartridge tape drive as well as the standard hard and floppy disk drives.
The machines each have 8
MBytes of memory and color monitors. SUN windows as well as X windows
(version 11.3) will be supported. The Caltech neural network simulator
GENESIS will be available.
For further information on the SUN demos contact:
John Uhley (Uhley@Caltech.bitnet)
------------------------------
Subject: Outlets for theoretical work
From: INAM000 <INAM%MCGILLB.BITNET@VMA.CC.CMU.EDU>
Date: Wed, 19 Oct 88 21:18:00 -0500
Department of Psychology,
McGill University,
1205 Avenue Dr. Penfield,
Montreal,
Quebec,
CANADA H3Y 2L2
October 20, 1988
Given the recent resurgence of formal analysis of "neural
networks" (e.g. White, Gallant, Geman, Hanson, Burr), and the difficulty
some people seem to have in finding an appropriate outlet for this
work, I would like to remind researchers of the existence of the
Journal of Mathematical Psychology. This is an Academic Press journal
that has been in existence for over 20 years, and is quite open to all
kinds of mathematical papers in "theoretical" (i.e. mathematical,
logical, computational) "psychology" (broadly interpreted).
If you want further details regarding the Journal, or feedback
about the appropriateness of a particular article, you can contact me
by E-mail or telephone (514-398-6128), or contact the Editor
directly: J. Townsend, Department of Psychology, Pierce Hall, Rm. 365A,
West Lafayette, IN 47907 (E-mail: KC@BRAZIL.PSYCH.PURDUE.EDU; telephone:
317-494-6236). The address for manuscript submission is:
Journal of Mathematical Psychology,
Editorial Office, Seventh Floor,
1250 Sixth Avenue,
San Diego,
CA 92101.
Regards,
Tony
A. A. J. Marley
Professor
Book Review Editor, Journal of Mathematical Psychology
E-MAIL - INAM@MUSICB.MCGILL.CA
------------------------------
Subject: Students for a Better NN Class
From: doug@feedme.UUCP (Doug Salot)
Organization: Feedme Microsystems, Orange County, CA
Date: 28 Oct 88 04:40:53 +0000
I'm currently enrolled in a master's-level (CS) neural-net review course,
and I offered to solicit the net for suggestions concerning successful
approaches and pedagogical tools for such a course. We're currently using
the PDP group's epic trilogy and a single ANZA system for experimentation.
Both have their problems, but I'd specifically like to hear what has worked
for you.
Any experience with Grossberg's "Neural Networks and Natural Intelligence"?
Other comprehensive books? Good simulation packages? (we've got Suns, Macs,
and PCs.) Projects or reading for CS types to understand the properties of
non-linear dynamical systems? Good examples for comparing traditional
solutions with network solutions?
Unfortunately, we're currently studying paradigms that are a couple of
years old and whose limitations are immediately apparent (Hopfield, BAM,
backprop, simple competitive schemes, counterprop). What's showing the
most promise in terms of problem-solving capability, scalability, and
efficiency? (Here, CS types' only concern with biological feasibility is
whether or not their brains can grok the stuff.)
BTW, if any NN gods or daemons are planning on being in/around Orange
County (CA) in the next six weeks or so and feel like charming or
dissuading a small group of neural-netaly disenchanted (but latently
enthusiastic) students, we'd love to hear from you.
Yours for better credit assignment,
- --
Doug Salot || doug@feedme.UUCP || ...{zardoz,dhw68k}!feedme!doug
------------------------------
Subject: Suggestions needed...
From: spam@sun.soe.UUCP (Crunchy Frog,,,)
Date: 24 Oct 88 02:03:16 +0000
I am currently in the "proposal" stage for an undergraduate independent
study course. The way I have designed the work so far, I will be doing one
credit hour each in Cognitive Psych, Philosophy, and AI.
I have had the "traditional" undergrad AI background, and what we did
didn't seem to add up very well. In psych I learned about activation
networks, and this seemed to me to be the approach for real(tm) A.I.
Unfortunately, I don't have much background in this area.
My plan is to a) do reading on the physical aspects of the brain: what
structures exist, what is there "before" anything is learned, and how
things are stored; b) read some of the philosophical arguments
about intelligence and rational thought; and c) implement a
simple network and stick it in an artificial environment.
Questions:
1) Any reading list suggestions?
2) Is there any relatively PD software that I can play with for this?
3) I'm going to be using Turbo-C with 768K available. Is this insufficient
memory? What data structures are used for implementing networks? Should
I allow for an arbitrary # of links, or limit the #?
4) Am I reinventing the wheel? Is research so far ahead in this area that
there is no way to investigate the "frontiers" of this field w/o billions
of years of research, or will I (as I hope) be able to try something
novel with the small goal of simulating some tiny aspect of intelligence?
5) What sort of career opportunities are there in this field? Are Universities
doing most of this sort of research? Can I do *anything* with my Comp
Sci BS degree this May in this area? Am I doomed to writing subroutines
in the bowels of IBM mainframes?
Thanks for any help
- -Roger Gonzalez
Please E-mail responses to...
- --------
Roger Gonzalez spam@clutx.clarkson.edu
Clarkson University spam@clvm.BITNET
(315) 268-3748
------------------------------
Subject: Re: Wanted: info about GENESIS program
From: mesard@bbn.com (Wayne Mesard)
Date: 23 Oct 88 20:29:09 +0000
>From article <15825@agate.BERKELEY.EDU>, by muffy@violet.berkeley.edu:
> I would like some information about the GENESIS neural
> simulation program being developed at Cal Tech.
It's not a NN simulator. It's a genetic algorithm system for function
optimization. Its author is
John J. Grefenstette
Navy Center for Applied Research in Artificial Intelligence
Naval Research Laboratory
Washington, D.C. 20375-5000
<gref@AIC.NRL.NAVY.MIL>
- --
unsigned *Wayne_Mesard(); I never met a stochastic algorithm I
MESARD@BBN.COM didn't like.
BBN, Cambridge, MA
------------------------------
End of Neurons Digest
*********************