Neuron Digest   Monday, 19 Feb 1990                Volume 6 : Issue 14 

Today's Topics:
Administrivia - Clarification
Re: Sci. Am. Debate - Can Programs Think
New Student Society
ART3
Re: ART3
Using ART1 to recognize line patterns
Wanted: ART simulator/code
Re: ART3
Re: ART3
Re: ART3


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Administrivia - Clarification
From: "Neuron-Digest Moderator -- Peter Marvit" <neuron@hplabs.hp.com>
Date: Mon, 19 Feb 90 13:50:37 -0800

In Neuron Digest Vol 6 #13, an announcement was made for a bibliography
for neural networks and defense applications. At the end of the message,
it stated:

Please send your *US MAIL ONLY* address to the above netmail
address or to:

Dr. Mark A. Gluck
Jordan Hall; Bldg. 420
Stanford University
Stanford, CA 94305-2130

Please send e-mail requests to:

gluck@psych.Stanford.EDU (Mark Gluck)

Do *not* send them to me, since I do not have control over distribution!

: Peter Marvit, Neuron Digest Moderator
: Courtesy of Hewlett-Packard Labs in Palo Alto, CA 94304 (415) 857-6646
: neuron-request@hplabs.hp.com OR {any backbone}!hplabs!neuron-request

------------------------------

Subject: Re: Sci. Am. Debate - Can Programs Think
From: chicks@ai.mit.edu (Craig Hicks)
Date: Sat, 17 Feb 90 18:22:34 -0500


I would like to respond to Carl W. David's article about the insuperable
obstacle to creating artificial intelligence, in particular
'consciousness': an obstacle he claims is the unreproducibility of the
principles of pleasure and pain. I would argue that the definition of
consciousness which he gives is not at all clear, and that once we
pinpoint the set of possible definitions, the 'insuperable obstacle' of
'motivation' to intelligence will vanish before our very eyes.

David starts by making the argument that an indication of
intelligence is exhibition of behavior similar to that of human beings.
While this seems true, it is misleading because it misses the question of
whether this is a prerequisite to intelligent behavior, the answer to which
I believe is no. In fact I believe human intelligence is a rather special
case in the set of all types of intelligence. First there are our
precursors, all the 'lower' forms of intelligence of which we find abundant
examples on this planet. Then of course there may be life forms even smarter
(GASP!) than us, the existence of whom is known only to a select few on
the staff of the National Enquirer.

David points out that intelligence without input or output is
meaningless because it displays no intelligent behavior. I suppose there
could be dormant intelligence, but the only way of finding this out would
be by eliciting some kind of intelligent behavior, in which case it would
no longer be dormant. (This poses a real problem for investigators of
life forms which have realized that the most intelligent thing to do is play
dumb.) Anyway, I agree wholeheartedly with his proposition that
intelligent behavior is the best indicator of intelligence.

David then comes to some conclusions as to the minimal set of inputs
and outputs an intelligent system must have to be intelligent and have a
concept of self. For example, he states a system must have at least
two inputs in order to develop the concept of cow.

> If the brain only has one input device, then the nature of its
>perception of the outside world is unimodal, not terribly rich in detail.
>If it has two different inputs, then there can be a correlation between
>them, when the two inputs are simultaneously stimulated under training.
>Thus hearing the word 'cow' and seeing a 'cow' becomes, under training,
>associating a set of learned sounds with a set of learned images.

Helen Keller?

But actually the point I want to make is that what David describes
as one input is actually an incredibly richly integrated system of neural
connections, not a 9600 baud RS-232 connection, and so it is one input
'system', and not one input.

David's conclusion reflects this over-trivialization of the system
interface.

> The number of input and output functions is clearly simple, and will
>become trivial in the future. It is the motivation problem that, were it
>to be addressed and shown to be solvable, might lead to true artificial
>intelligence.

Here I feel he has drifted away from his initial correct idea that
intelligent behavior ought to be the basic criterion for determining
intelligence. I would say that 'pain', 'pleasure', and 'motivation' are
labels we have given to behavior patterns we recognize within ourselves.
Many forms of lower intelligence display these same behavior patterns.
(Mommy, mommy, look what the ant does when I pull his legs off!) Do they
have 'consciousness'? If they do, then 'consciousness' ranges continuously
in complexity from the very simple to the very complex, and the prospect
that the usual 'build, test, analyze' cycle of engineering systems will
provide continuous improvements in intelligent models is bright. If these
life forms are not 'conscious', then perhaps 'consciousness' is not that
important a factor on the path to exhibiting intelligent behavior; perhaps
it will fall out naturally as a result of other advances in creating
artificially intelligent behavior. Some engineer will say, "hey, let's
stick a couple of symbolic accelerators on the front of the main processing
unit" and presto: the first machine identity crisis.


Craig Hicks (chicks@ai.mit.edu)

------------------------------

Subject: New Student Society
From: gaudiano@bucasb.bu.edu
Date: Mon, 19 Feb 90 12:43:00 -0500


This is the first official announcement of the:

Neural Networks Student Society
-------------------------------

The purpose of the Society is to (1) provide a means of
exchanging information among students and young professionals within
the area of Neural Networks; (2) create an opportunity for interaction
between students and professionals from academia and industry; (3)
encourage support from academia and industry for the advancement of
students in the area of Neural Networks; (4) ensure that the interest
of all students in the area of Neural Networks is taken into
consideration by other societies and institutions that promote Neural
Networks; and (5) lay down a solid, UNBIASED foundation upon which the
study of Neural Networks will be developed into a self-contained
discipline.

The society is specifically intended to avoid discrimination
based on age, sex, race, religion, national origin, annual income or
graduate advisor.

An organizational meeting was held at the Washington IJCNN
meeting. We had about 60 students and other interested bodies there,
and later that evening many of us went out to get to know each other
over some fine ales. Many of the participants came from outside of the
US, and the general consensus is that this is a society whose time has
come.

We have many action items on our agenda, including:

1) a quarterly newsletter
2) an e-mail newsgroup
3) a resume exchange service for neograduates in the field
4) summer co-ops with NN companies
5) a database of existing graduate programs in NNs
6) an ftp site for NN simulation code and other info
7) corporate sponsorships to support student travel expenses
.
.
.
n) many, many more activities and ideas

A booth for our Society has been donated by the organizers of
the Paris INNC conference (July 90), and we may also get one at the
San Diego IJCNN conference. We will use the booth to advertise our
society, and to promote student ideas and projects. More details will
be given in the first newsletter, which is scheduled to come out March
21. It will include our official bylaws, and other introductory
information.

WHO TO CONTACT:
--------------

If you'd like to be on the mailing list to receive newsletters
by surface or electronic mail, and you did not already give us your
name at the IJCNN Washington meeting, send a note to:

nnss-request@thalamus.bu.edu (Newsletter requests)

with your name, affiliation, and address. The first issue will contain
all the necessary information to become a member for the rest of the
year. Once the Society becomes official, there will be a nominal
annual fee (about $5) to help with costs for publications and
activities, but for now you can get our info to see what it's all
about at no cost. You will only remain a member if you are interested
in the society and send in the official membership form.

In the meantime, if you are thinking about a job in NNs in the
near future, and would like information about our resume exchange
program, send a message to:

khaines@galileo.ece.cmu.edu

and if you have general questions about the society (other than a
request for the first newsletter, or about the resume service), send
mail to:

nnss@thalamus.bu.edu

Also, if you are willing to volunteer some time to help with the
society, send a note to:

gaudiano@thalamus.bu.edu

We will definitely need some help at the upcoming conferences, and may
also need some assistance with other odds & ends before that time.

Finally, we will soon circulate a proposal for a new USENET
newsgroup, so if you read usenet news keep your eyes open for an
opportunity to vote in the next few weeks.

Karen Haines and Paolo Gaudiano
co-founders, NNSS


------------------------------

Subject: ART3
From: jwhite@SRC.Honeywell.COM (Jim White)
Date: 09 Jan 90 15:46:50 +0000


Can anyone provide a reference to any papers/reports describing ART3 ?


James A. White Phone: (612) 782-7355
Research Fellow Fax: (612) 782-7438
Honeywell Systems and Research Center
Mail Stop MN65-2100
3660 Technology Drive Internet: jwhite@src.honeywell.com
Minneapolis, MN 55418.


------------------------------

Subject: Re: ART3
From: ravula@neuron.uucp (Ramesh Ravula)
Date: 09 Jan 90 17:33:10 +0000


This is in response to Jim White's query about ART3.

I was just reading an article in the Technology section of Electronic
Engineering Times (January 8, 1990) describing ART 3. It does not go
into too much detail, but provides the gist of the concept.

Hope this helps.


--Ramesh Ravula
GE Medical Systems, W-826,
3200 N. Grandview Blvd.,
Waukesha, Wisconsin 53188.

email : {att|mailrus|uunet|phillabs}!steinmetz!gemed!ravula

or

{att|uwvax|mailrus}!uwmcsd1!mrsvr!gemed!ravula

------------------------------

Subject: Using ART1 to recognize line patterns
From: jem97@leah.Albany.Edu (Jim Mower)
Organization: University at Albany, Computer Services Center
Date: 12 Jan 90 01:48:14 +0000

I'm interested in using ART1 to cluster line segments, similar to the
examples shown in Carpenter and Grossberg (1987), "ART 2: Self-organization
of Stable Category Recognition Codes for Analog Input Patterns." The
authors show a number of examples of clusters generated by applying
different 'vigilance' parameters to line patterns, but don't elaborate
upon the dimensions of shape that they used to define the patterns. Does
anyone know how they did this? Also, has anyone out there implemented
ART1? I would be interested in hearing your success or (gulp) horror
stories. Thanks.

Jim Mower
Department of Geography and Planning
University at Albany
jem97@gpsun.albany.edu

------------------------------

Subject: Wanted: ART simulator/code
From: raja@frith.egr.msu.edu
Organization: Michigan State University, Engineering, E. Lansing
Date: 21 Jan 90 20:57:22 +0000


Could someone please let me know where I can get public domain
software for simulating ART networks? Thanks in advance!

Narayan Sriranga Raja.

raja@frith.egr.msu.edu OR
raja%frith.egr.msu.edu@uunet.uu.net

------------------------------

Subject: Re: ART3
From: rosen@cochlea.bu.edu (David B. Rosen)
Organization: Boston University Center for Adaptive Systems
Date: 24 Jan 90 21:41:01 +0000


> What's all this about ART3?

ART 3 uses the same network topology as ART 2, but it uses equations that
model the dynamics of chemical neurotransmitters. This enables it to
produce distributed category representations, and to use input and output
layers that are essentially the same so that it can be embedded in
hierarchies of ART modules where the output layer of one module is the
input layer of another. Also, it can accommodate input patterns that
change continuously in real time, without any external signal needed to
tell it that a new input pattern has arrived. When the input pattern
changes sufficiently, ART 3 internally generates a reset event that
initiates a new search for a recognition code.
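
To make the transmitter idea concrete, here is a toy C sketch (my own
illustration, not code from the paper) of a habituative transmitter gate
of the kind Grossberg has used elsewhere: the transmitter level z
recovers toward 1 but is depleted in proportion to the signal it gates,
so a sustained input gradually weakens its own gated signal S*z.

    /* Toy habituative transmitter gate:
     *   dz/dt = eps*(1 - z) - lambda*S*z
     * integrated with simple Euler steps.  All constants are
     * illustrative, not taken from the ART 3 paper. */
    #include <stdio.h>

    int main(void)
    {
        double z = 1.0;        /* transmitter level, starts full */
        double S = 0.8;        /* sustained presynaptic signal   */
        double eps = 0.05;     /* recovery rate toward z = 1     */
        double lambda = 0.5;   /* depletion rate per unit signal */
        double dt = 0.1;       /* Euler time step                */
        int t;

        for (t = 0; t < 100; t++) {
            double dz = eps * (1.0 - z) - lambda * S * z;
            z += dt * dz;
            if (t % 10 == 0)
                printf("t=%3d  z=%.3f  gated signal=%.3f\n", t, z, S * z);
        }
        return 0;
    }

A sudden change in the input then produces a transient imbalance in the
gated signals, which is the kind of event that can drive an internal
reset.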

A full article on ART 3 by G.A. Carpenter and S. Grossberg is in press (I
think in the journal Neural Networks). This work was described by the
same authors in "ART 3 hierarchical search: chemical transmitters in
self-organizing pattern recognition architectures", International Joint
Conference on Neural Networks, January 15-19, Washington, DC; Hillsdale,
NJ: Lawrence Erlbaum, pp. II-30 - II-33. Some of the ideas behind it
were discussed by the same authors in "Search mechanisms for adaptive
resonance theory (ART) architectures", International Joint Conference on
Neural Networks, June 18-22, 1989; IEEE TAB Neural Network Committee, pp.
I-201 - I-205.

------------------------------

Subject: Re: ART3
From: pa1159@sdcc13.ucsd.edu (Matt Kennel)
Organization: University of California, San Diego
Date: 25 Jan 90 23:38:37 +0000


I have some general questions about the ART[1-3] systems that have been
bugging me for a while. As far as I can discern, an ART network does
unsupervised classification of inputs. First of all, what does the input
domain consist of? I guess there are a # of answers, but I'm thinking of
the general abstract formulation of the model, not any specific neural
example. I.e. just a simple vector of real numbers from -1 to 1?

Is this vector discretely timed, or could it possibly take on any value
over time?

Ok, that's the preliminary question. Now, here's the real one: as ART is
an unsupervised "clustering"/classification algorithm, it must have
built in some idea of a metric, i.e. how does it decide how "similar"
inputs are? What is it? The question now becomes, how is this
good/useful/or realistic? Does it have any provision for an external
signal to somehow modify the classification based on some other
performance parameter?

What is the "output" of such a network? Does it essentially say, "I have
found an example of class #j?" or does it say, "This one is in class #j
with strength .78, class #k with strength .34 and in "anti-class" #l with
strength .82" or something like that or what?

Why is what ART does useful/good/realistic?

Thanks,
Matt Kennel
pa1159@sdcc13.ucsd.edu

------------------------------

Subject: Re: ART3
From: gaudiano@retina.bu.edu (Paolo Gaudiano)
Organization: Boston University Center for Adaptive Systems
Date: 26 Jan 90 21:16:16 +0000


In article <6494@sdcc6.ucsd.edu> pa1159@sdcc13.ucsd.edu (Matt Kennel) writes:

[[ ... various questions ... ]]

Matt> I'm thinking of the general abstract formulation of the model, not
Matt> any specific neural example. I.e. just a simple vector of real
Matt> numbers from -1 to 1?

ART1 takes binary [0-1] vectors as input, ART[2-3] take analog input
vectors. I have run simulations that use normalized [0.0,1.0] analog
vectors, but any range of Reals should be OK. Normalization is carried
out within ART layers anyway.
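
In case a concrete fragment helps, here is a throwaway C sketch (mine,
with made-up data) of the kind of L2 normalization I mean:

    /* L2-normalize an analog input vector -- the kind of
     * normalization the ART input layers perform internally.
     * The data values are made up. */
    #include <stdio.h>
    #include <math.h>

    #define N 4

    int main(void)
    {
        double x[N] = { 0.3, 0.9, 0.1, 0.5 };  /* arbitrary input */
        double norm = 0.0;
        int i;

        for (i = 0; i < N; i++)
            norm += x[i] * x[i];
        norm = sqrt(norm);

        for (i = 0; i < N; i++)
            x[i] /= norm;                      /* now ||x|| = 1 */

        for (i = 0; i < N; i++)
            printf("x[%d] = %.3f\n", i, x[i]);
        return 0;
    }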

Matt> Is this vector discretely timed or could it possibly take on any
Matt> value over time.

All ART architectures are "real time" in the sense that the behavior of
each element (cell,layer...) is described in terms of ordinary diffeq's.
The "grain" of time is thus determined exclusively by your time step
selection, but in principle it is continuous (could be done, for example,
with analog hardware).
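
To illustrate what "real time" means here, a toy Euler integration of a
single shunting cell; the equation form is standard shunting dynamics,
but the constants are mine:

    /* Euler integration of one shunting cell:
     *   dx/dt = -A*x + (B - x)*I
     * The step dt sets the "grain" of simulated time; the
     * underlying equation itself is continuous. */
    #include <stdio.h>

    int main(void)
    {
        double x = 0.0;    /* cell activity               */
        double A = 1.0;    /* passive decay rate          */
        double B = 1.0;    /* excitatory saturation level */
        double I = 0.5;    /* constant input              */
        double dt = 0.01;  /* integration step            */
        int t;

        for (t = 0; t <= 1000; t++) {
            x += dt * (-A * x + (B - x) * I);
            if (t % 200 == 0)
                printf("t=%4d  x=%.4f\n", t, x);
        }
        /* x settles at the equilibrium B*I/(A+I) = 1/3 here */
        return 0;
    }

Shrinking dt refines the approximation; the equations themselves do not
care what dt you pick.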

Matt> [[ ... ]] Does it have any provision for an external signal to modify
Matt> somehow the classification based on some other performance parameter?

The published models use a "matching criterion" based on the L2 norm
(Euclidean distance), although it is suggested that other norms could be
used. Depending on the particular configuration, two vectors are
compared: one representing the input (usually normalized), another
representing the top layer's expectation of what that input should be,
based on prior learning. The exact comparison consists of forming a
vector R, where each element R_i is the normalized sum of the
corresponding elements of the two vectors being compared (input vs.
expectation). The norm of R is then compared to a "vigilance"
parameter. This vigilance parameter (rho)
determines how "picky" the mismatch criterion should be (rho=1 means you
need a perfect match). The vigilance parameter is controlled externally.
For instance, the ART architectures have been used in classical
conditioning models, where the vigilance can be raised by a general
"arousal" signal (hey -- keep your eyes open for tall, dark-haired
women), or it can be raised when a behavioral choice leads to incorrect
or harmful results (not unlike the pole-balancing adaptive critic element
of Sutton and Barto, if you are familiar with that work). Note that the
mismatch rule is such that in computing the "similarity" between two
analog vectors, the system takes into account "magnitude" of match but
also how "parallel" two vectors are. The result is that the system is
capable of treating the same information either as "noise", or as
"relevant" depending solely on context (this is explained much better and
in more detail in the articles I list below).
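
Here is a bare-bones C sketch of the kind of vigilance test just
described. The exact normalizations differ between the published ART
models, so treat the details (equal weighting of the two vectors, plain
L2 norms) as my simplification rather than the published rule:

    /* Simplified vigilance test: form R as the normalized sum of
     * the input and expectation vectors, then compare ||R|| to the
     * vigilance rho.  ||R|| is 1 when the two (unit) vectors are
     * parallel, and shrinks as they diverge in direction. */
    #include <stdio.h>
    #include <math.h>

    #define N 4

    static double l2norm(const double *v)
    {
        double s = 0.0;
        int i;
        for (i = 0; i < N; i++)
            s += v[i] * v[i];
        return sqrt(s);
    }

    int main(void)
    {
        double in[N]  = { 0.5, 0.5, 0.5, 0.5 };  /* normalized input     */
        double top[N] = { 0.7, 0.1, 0.1, 0.7 };  /* top-down expectation */
        double r[N];
        double rho = 0.9;  /* vigilance: 1.0 demands a perfect match */
        double ni = l2norm(in), nt = l2norm(top), nr;
        int i;

        for (i = 0; i < N; i++)
            r[i] = (in[i] / ni + top[i] / nt) / 2.0;
        nr = l2norm(r);

        printf("||R|| = %.3f -> %s\n", nr,
               nr >= rho ? "match (resonance)" : "mismatch (reset)");
        return 0;
    }

With these numbers ||R|| comes out near 0.95, so vigilance 0.9 accepts
the match; raise rho to 0.99 and the same pair triggers a reset.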

Matt> What is the "output" of such a network? Does it essentially say, "I
Matt> have found an example of class #j?" or does it say, "This one is in
Matt> class #j with strength .78, class #k with strength .34 and in
Matt> "anti-class" #l with strength .82" or something like that or what?

The output for ART[1-2] consists of activation of a single node
(representing a category) in the output (top) layer. Note that no 'a
priori' significance is attached to any given node. The final
"clustering" depends on the input history, and on the vigilance parameter
during learning (as well as a bunch of other details). In addition, it is
possible that an input will not give a "satisfactory" match with ANY of
the existing categories (this will happen if all "category" nodes have
been selected by some previous input).
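
As a sketch of the resulting search cycle (the activations and match
scores below are made up; the real match comes from the
input/expectation comparison described earlier):

    /* Toy ART search cycle: repeatedly choose the most active
     * category node, test it against vigilance, and disable it on
     * a mismatch.  Activations and match scores are made up. */
    #include <stdio.h>

    #define CATS 4

    int main(void)
    {
        double act[CATS]   = { 0.9, 0.7, 0.4, 0.2 };  /* bottom-up choice  */
        double match[CATS] = { 0.6, 0.95, 0.5, 0.3 }; /* match w/ template */
        int disabled[CATS] = { 0, 0, 0, 0 };
        double rho = 0.9;  /* vigilance */
        int tries;

        for (tries = 0; tries < CATS; tries++) {
            int j, best = -1;
            for (j = 0; j < CATS; j++)
                if (!disabled[j] && (best < 0 || act[j] > act[best]))
                    best = j;
            if (best < 0)
                break;                 /* every category tried */
            if (match[best] >= rho) {
                printf("input assigned to category %d\n", best);
                return 0;
            }
            printf("category %d reset (match %.2f < rho)\n",
                   best, match[best]);
            disabled[best] = 1;        /* search moves on */
        }
        printf("no satisfactory category found\n");
        return 0;
    }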

The ART3 system differs in that distributed "category" patterns can
exist, i.e., more than one top-level node can be active. Also, ART3 does
not require that a signal be given every time the input vector is
changed. The input can just gradually shift, and at some point an
internal reset will cause a new category to be selected.

Matt> Why is what ART does useful/good/realistic?

Well, in a nutshell, it seems to do a type of categorization that is very
much like the way we (humans) do it. It has a lot of other nifty
characteristics that make it useful. In addition, it is not simply
biologically plausible, but fundamentally "biologically inspired" (this
is such a misused label that I hesitate to use it here). The idea of
Adaptive Resonance Theory was first proposed in Grossberg's 1976 article
"Adaptive Pattern classification and universal recoding, II: Feedback,
expectation, olfaction, and illusion"
(Biological Cybernetics, VOl. 23,
187-202). ART was derived from consideration (over many years) of the
need to stabilize coding by neural populations through competitive
learning systems. Since then much work has been done on ART and with ART.
One interesting aside is to look over some of the predictions he made
about functional and structural properties of the brain. For example, he
suggested the necessity of a recurrent loop between hippocampus and
neocortex, the need for gated learning through non-specific arousal, the
importance of opponent processing, and various other ideas. Some had been
already shown to exist (although typically without a reason why), and
many others were later shown correct (some still waiting). From a
slightly different tack, an interesting article came out in the 1987
special issue of Applied Optics on Neural Networks: Grossberg and J.P.
Banquet report some analytical and experimental findings that provide a
stunning correlation between Event-Related Potentials (ERPs) and many of
the dynamical characteristics of ART.

A lot of Grossberg's articles are difficult to read, and unguided reading
attempts can be quite frustrating and unfruitful. Here are a few articles
that can help you if you want to know more about ART:

S. Grossberg (1976) "Adaptive pattern classification and universal recoding,
I: Parallel development and coding of neural feature detectors". Biological
Cybernetics, Vol. 23, 121-134 (also reprinted in "Studies of Mind and
Brain", Boston: Reidel, 1982 -- a great primer)
*This article can be a bit threatening at times (heavy notation in
*places), but good intro. The Sparse Pattern theorem (sect. 8) is an
*important theorem, but hard to get for non-math types.

S. Grossberg (1976) "Adaptive pattern classification and universal recoding,
II: Feedback, expectation, olfaction, and illusion". Biological
Cybernetics, Vol. 23, 187-202
*Great stuff! One of his most inspiring pieces. Kind of blows you
*through a bunch of ideas, many of which have since developed into
*full models. It's worth reading it even if the math looks bad. Just
*skip the equations and read the ideas!

G. Carpenter and S. Grossberg (1987) "A massively parallel
architecture for a self-organizing neural pattern recognition
machine". Computer Vision, Graphics, and Image Processing, Vol. 37, 54-115.
*everything you always wanted to know about ART1. Not the easiest
*because a lot of the article is devoted to fundamental theorems.
*Sections 1-11 are a good introduction to the ideas of ART.

G. Carpenter and S. Grossberg (1987) "ART 2: self-organization of stable
category recognition codes for analog input patterns". Applied Optics,
Vol. 26-23, 4919-4930.
*A relatively short article, easy if you have ART1 down pat. The math
*is actually quite straightforward if you can handle simple
*calculus-level stuff (easy diffeq's -- and algebra). Again, good intro.

S. Grossberg and J.P. Banquet (1987) "Probing cognitive processes
through the structure of event-related potentials during learning: an
experimental and theoretical analysis". Applied Optics, Vol. 26-23,
4919-4930.
*Very hard to get the whole thing unless you know a lot about ERPs
**and* ART. However, once you know ART, even minor ERP knowledge
*allows you to skim the article and make up your mind about how
*impressive it is.

G. Carpenter and S. Grossberg (1990) "ART 3: hierarchical search using
chemical transmitters in self-organizing pattern recognition
architectures". Neural Networks (not out yet. Maybe next issue?).
*Great new stuff. Many of the "flaky" points of ART2 are resolved here
*(e.g., distributed categories, no need to "reset" between inputs, etc.).


**** The bottom line: for those of you who followed all this --
Grossberg is by no means easy, particularly if you don't know which
one of his 3,675,987 articles to start with. I hope this short list
helps. Note that a bunch of us here at BU have now been through much
of this, including many simulations: ART1, ART2, READ
(Grossberg-Schmajuk 1987), Classical conditioning (Grossberg-Levine,
1987), BCS-FCS (Grossberg-Mingolla), VITE (Grossberg-Bullock), AVITE
(Grossberg-Gaudiano), gated dipoles, shunting nets, etc... If there
are any questions I can answer, I'll be glad to.

Have fun (it's what it should be all about!)

Paolo Gaudiano - Boston University - Cognitive and Neural Systems
e-mail: gaudiano@thalamus.bu.edu

------------------------------

End of Neuron Digest [Volume 6 Issue 14]
****************************************
