Neuron Digest   Wednesday, 22 Jan 1992                Volume 9 : Issue 2 

Today's Topics:
Re: Neural Network Applications in Natural Language Processing (NLP)
selective attention?
Fwd: Scientists Produce Electronic Device To Mimic Brain
Re: Cultural evolution and AI
Re: Fuzzy Logic vs. Neural Networks
Neural Net training methods...
Invariant pattern recognition with ANNs
Graduate & Postdoctoral Study in Cognitive & Neural Bases of Learning at Rutgers Univ.
Update Release of PD LVQ Program Package


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (128.91.2.173). Back issues
requested by mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: Re: Neural Network Applications in Natural Language Processing (NLP)
From: mock%madrone.eecs.ucdavis.edu@ucdavis.ucdavis.edu (Kenrick J. Mock)
Organization: U.C. Davis - Dept of Electrical Engineering and Computer Science
Date: 19 Dec 91 19:33:58 +0000

In response to Angela's post on NLP:

In the July-September 1991 issue of Cognitive Science, there is an
interesting article on using networks for NLP. In particular, their
network performs sentence level processing and also shows a way to do
scripts with a connectionist network. The paper is by Risto Miikkulainen
and Michael Dyer from UCLA.


------------------------------

Subject: selective attention?
From: usenet%kum.kaist.ac.kr%news.hawaii.edu@ames.arc.nasa.gov (Tim Poston)
Organization: POSTECH in Pohang
Date: 20 Dec 91 03:03:14 +0000


Can anybody direct me to papers on artificial neural nets where some part
of the system has fast-moving selective attention?

For instance, following a dot in a visual field, or following one moving
tone against a background of other varying tones (a simplified version of
the Cocktail Party Factor, following one voice). Simplest tracking would
be by continuity; more elaborate trackers could use prediction (where do
you expect the dot/the melody to reappear?) But first and foremost is
what `attention' should m e a n in network terms.

I can think of approaches with fast-changing connection weights, as
opposed to the slowly changing ones in typical learning algorithms; or
with attention units (pay attention to input I_sub_ij when A_sub_ij is
excited), but that one looks like a lot of units.
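To make the second idea a little more concrete, here is a toy sketch in C;
the gating scheme and the update rule (attention simply drifts toward
whichever input is currently strongest) are invented purely for
illustration and are not taken from any published model:

/* Toy sketch: attention activations a[i] gate the inputs x[i]
 * multiplicatively and move on a fast time scale (small tau_a),
 * independently of any slowly learned weights w[i].  Purely
 * illustrative; the update rule is an assumption, not a result.      */

#include <stdio.h>

#define N 8

void attend_step(const double x[N], double a[N], const double w[N],
                 double tau_a, double *out)
{
    int i, best = 0;
    double sum = 0.0;

    for (i = 1; i < N; i++)                 /* where is input strongest? */
        if (x[i] > x[best]) best = i;

    for (i = 0; i < N; i++) {
        double target = (i == best) ? 1.0 : 0.0;
        a[i] += (target - a[i]) / tau_a;    /* fast attention dynamics   */
        sum  += w[i] * a[i] * x[i];         /* attention-gated sum       */
    }
    *out = sum;
}

int main(void)
{
    double x[N] = {0, 0, 0, 1, 0, 0, 0, 0}; /* a "dot" at position 3     */
    double a[N] = {0}, w[N], y;
    int i, t;

    for (i = 0; i < N; i++) w[i] = 1.0;
    for (t = 0; t < 5; t++) {
        attend_step(x, a, w, 2.0, &y);
        printf("t=%d  output=%.3f  a[3]=%.3f\n", t, y, a[3]);
    }
    return 0;
}

Something along these lines tracks a single strong input quickly without
touching the slow weights; whether it scales up, integrates with learning,
or resembles anything biological is exactly what I am asking about.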

Any work out there on what works, what integrates with other needs of a
system, what is efficient, what is biologically plausible?

Timothy Poston, Math Dept, Postech, Korea


------------------------------

Subject: Fwd: Scientists Produce Electronic Device To Mimic Brain
From: dg1v+@andrew.cmu.edu (David Greene)
Organization: Doctoral student, Industrial Administration, Carnegie Mellon, Pittsburgh, PA
Date: 20 Dec 91 11:56:20 +0000

The view from the popular press (from Dow-Jones news without permission)

---------- Forwarded message begins here ----------

Subject: Scientists Produce Electronic Device To Mimic Brain
Date: Thu, 19 Dec 1991 17:58:29 -0500 (EST)


LONDON -- A British neuroscientist and a U.S. computer scientist said
they have collaborated to produce a tiny electronic device that mimics
the behavior of a human brain cell.

They said the development may make it easier to develop an
understanding of how the brain works, and could ultimately lead to
electronic systems with some of the same capabilities as a biological
nervous system.

Until now, attempts to model brain activity have centered on complex
software programs that, in essence, create imaginary networks of brain
cells based on the latest theory of how these cells work. These attempts
have met with some success: A major software effort at International
Business Machines Corp., for example, last year surprised even its
inventors by spontaneously producing the computerized equivalent of
"brain waves."


But such progress, which has been slow, eats up huge amounts of
computer time. In the IBM experiment, several hours on a high-powered
mainframe computer were required to simulate a few seconds of activity in
a small section of a guinea pig's brain. By contrast, a hardware-based
approach might allow faster development of knowledge, since large numbers
of electronic devices can be hooked together so that they operate in
"real time."


In their report, which appears in today's edition of Nature, a weekly
U.K. science magazine, the two scientists stressed that their device is
radically different from existing electronic components used in
"neural-net- work" computers, which are often cited as producing
brainlike results such as pattern recognition. Such electronic components
are always switched either on or off, generating a staccato computer
language of ones and zeros.

The new device, on the other hand, speaks the same variable language as
human nerve cells, sending electrical signals that can fade or grow
stronger. Measured with the same testing methods used to monitor human
nerve cells, the device's response was "essentially the same," reported
the scientists, Rodney Douglas of Oxford University and Misha Mahowald of
the California Institute of Technology.

It isn't yet clear that such behavior would be more useful in modelling
brain structure than the on-off world of conventional computer components
-- or even than slow software-based approaches. At Nestor Inc., a
neural-network company based in Providence, R.I., co-founder Leon
Cooper says the work of Drs. Douglas and Mahowald sounds very
interesting, but that other hardware and software approaches are becoming
mature, and he feels they might bear fruit earlier.

Nonetheless, in a commentary that also appears in today's Nature,
computer scientist Andreas Andreou of Johns Hopkins University says the
new "silicon neuron" could emerge as "the technology of choice for
modelling the nervous system."


------------------------------

Subject: Re: Cultural evolution and AI
From: sw%cs.brown.edu%brunix@uunet.uu.net (Supriya Wickrematillake)
Organization: Brown Computer Science Dept.
Date: 20 Dec 91 20:14:43 +0000


Can someone point me to more literature on the relationship between
evolution in AI systems and human cultural systems?

I have only two references in this area,

`The Evolution of Information: Lineages in Gene, Culture and
Artefact', S. Goonatilake, Pinter Publishers, London, 1991

and

`Human Culture: A Genetic Takeover Underway', H. Moravec, in
Artificial Life, ed. C. Langton, Santa Fe Institute, 1989.


Thanks,

Supriya.


------------------------------

Subject: Re: Fuzzy Logic vs. Neural Networks
From: arms%alberta%aunro@lll-winken.llnl.gov (Bill Armstrong)
Organization: University of Alberta, Edmonton, Canada
Date: 21 Dec 91 12:06:59 +0000

[[ Editor's Note: I'm not sure where this came from. It appears to be
part of an ongoing discussion on comp.ai.neural-nets. However, I think
readers of the Digest might find it interesting. I'm sorry I don't have
the rest of the conversation. The final paragraph is especially "to the
point." -PM ]]

sinster@cse.ucsc.edu (Darren Senn) writes:

>In article <arms.693092850@spedden> arms@cs.UAlberta.CA (Bill Armstrong) writes:
>>sinster@cse.ucsc.edu (Darren Senn) writes:

>>Certainly your prototyping
>>>time is faster on the computer, but your execution time is horrible.
>>
>>Whoa, execution of a complete function in 20 nanoseconds, at least in
>>the case of logic networks, is not horrible!
...
>No, 20ns isn't horrible. But you were talking about hardware, right?

Right.

>I find it impossible to accept that any digital computer, regardless of its
>speed or number of processors could simulate a signal's propagation through
>a neural network within 20ns. That goes for single-neuron networks as
>well: find me a computer whose main memory is significantly faster
>than 20ns access time (by significantly, I mean an order of magnitude).
>... And research machines don't count. :)

The assumptions were that the inputs were already in place, and that the
function-defining parameters were already loaded into the units that
perform the operations, so that the comparison of execution time is fair
for both hardware MLPs and logic networks.

Even under those assumptions, I am not claiming anything near 20ns
time for neural networks that have to do multiplications and
additions, even with special hardware. On the contrary, they can only
do about one multiplication in the time we are talking about (e.g.
they might have a 25MHz clock). This makes them slow, lumbering
beasts compared to combinational logic networks.

>>To overcome your problems with simulation time of MLPs, maybe you
>>should try adaptive logic networks on your digital monoprocessor.
>>They are much faster than backprop type nets because combinational
>>logic can be lazily evaluated, e.g. a 0 entering an AND makes it
>>unnecessary to evaluate (via simulation) the part of the net producing
>>the other input to the AND.

>Yeah, you're probably correct. At this stage, however, I enjoy the
>results of my analog networks too much to switch.

There is a lot of excitement about the potential of analog
neurocomputers. Today there was an article about Carver Mead's work
on the front page of the local paper. I can't blame you for being
interested in analog circuits. Chacun à son goût! (To each his own!)

However, submicron CMOS transistors turn on in 100 ps, and the major
delays in some circuits may be in the interconnects (e.g. 2 ns to go 1
cm). Analog hardware also has to contend with interconnect delays, which
may mean that it isn't *that* much faster. The main advantage would be
in the complexity of the operation realized. But then, if you don't
*need* a complicated operation, e.g. weighted sums of inputs and a
sigmoid, and ALNs show that you don't, then you gain nothing from using
analog hardware. You even lose reproducibility.

Besides, in large systems which use hardware iteratively, you gain so
much speedup by lazy evaluation of boolean logic (factors of hundreds or
thousands that grow with the size of the problem) that no advances in
component technology, either digital or analog, can make up for it. When
they build "artificial brains" that work, cost considerations alone
dictate it will be with devices which allow lazy evaluation.
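
To make the lazy-evaluation point concrete, here is a toy sketch in C
(illustrative only; this is not the ALN software, just the short-circuit
idea): an AND/OR tree is evaluated recursively, and whenever the first
subtree already decides a node, the second subtree is never visited.

/* Toy sketch of lazy (short-circuit) evaluation of a combinational
 * logic tree.  A 0 reaching an AND, or a 1 reaching an OR, decides
 * the node, so the other subtree is skipped entirely.                */

#include <stdio.h>

typedef struct node {
    enum { LEAF, AND, OR } op;
    int leaf_index;                  /* which input bit, if op == LEAF */
    struct node *left, *right;
} node;

static long visits;                  /* count how much work was done   */

int eval(const node *n, const int *inputs)
{
    int v;
    visits++;
    if (n->op == LEAF)
        return inputs[n->leaf_index];

    v = eval(n->left, inputs);
    if (n->op == AND && v == 0) return 0;   /* right subtree skipped */
    if (n->op == OR  && v == 1) return 1;   /* right subtree skipped */
    return eval(n->right, inputs);
}

int main(void)
{
    node x0 = {LEAF, 0, 0, 0}, x1 = {LEAF, 1, 0, 0};
    node x2 = {LEAF, 2, 0, 0}, x3 = {LEAF, 3, 0, 0};
    node o1   = {OR,  0, &x2, &x3};
    node a1   = {AND, 0, &x1, &o1};
    node root = {AND, 0, &x0, &a1};
    int inputs[4] = {0, 1, 1, 0};    /* x0 = 0 decides the whole tree  */
    int value = eval(&root, inputs);

    printf("value = %d, nodes visited = %ld of 7\n", value, visits);
    return 0;
}

On this little tree the 0 at input x0 decides the root immediately, so
only 2 of the 7 nodes are ever visited; the fraction skipped grows with
the size of the tree, which is where the large speedup factors mentioned
above come from.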

>Hell, I haven't
>even read a paper about ALN's yet. :)

There was one in the ICANN-91 International Conference on Artificial
Neural Networks: "Learning and Generalization in Adaptive Logic
Networks," North Holland, pp. 1173-1176. There's a paper available via
anonymous ftp where the ALN software is also located: on
menaik.cs.ualberta.ca [129.128.4.241] in files pub/atree2.ps.Z and
pub/atree2.tar.Z. I can provide other references too.

Program committees (in all fields) are full of people who have vested
interests in keeping attention on the current fad which they happen to be
knowledgeable about. Submitted ALN papers have been shunted off to
poster sessions where they can't do much harm to those interests, and, as
in the case of the Seattle IJCNN, may not even be published. The reason
that ALNs are gaining converts anyway is that it is obvious to people
in the know that a combinational logic circuit is the fastest possible
way to make a decision in a digital system, and the ALN demo software is
an existence proof that it really works.

***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071


------------------------------

Subject: Neural Net training methods...
From: as666%cleveland.Freenet.Edu%usenet.ins.cwru.edu%agate@apple.com (Jonathan Roy)
Organization: Case Western Reserve University, Cleveland, Ohio, (USA)
Date: 22 Dec 91 06:18:06 +0000

[[ Editor's Note: Many readers, like this one, feel that they are
more-or-less beginners. I hope some of you who know the field well will
pause to answer these questions. Perhaps it's time for someone (many
someones?) to submit their favorite survey articles or beginner's books?
I still recommend the PDP series by Rumelhart and McClelland, but I'm
sure there are newer ones which you may like better. -PM ]]

Could someone post (or mail to me) a list of the various training
methods, etc., and a brief description of how they differ from the other
methods... (i.e., back-prop, feed-forward, real-time methods,
(un)supervised, etc.)?

Also, is it possible to use a NN in a situation where it's difficult,
if not impossible, to form training sets?

Thank you.


||| Jonathan Roy (The Ninja) Internet: as666@cleveland.freenet.edu
||| -- BBS: Darkest Realm - (Down for now) - Public UUCP/Usenet --
/ | \ "...make him want to change his name,take him to the cleaners,
devastate him, wipe him out, humiliate him." -CHESS GEnie: J.ROY18


------------------------------

Subject: Invariant pattern recognition with ANNs
From: Papadourakis Giwrgos <papadour@csi.forth.gr>
Date: Mon, 23 Dec 91 11:35:15 +0200


In reply to Randolf Werner's request about translation, rotation, and
scaling invariant pattern recognition with ANNs: in the January 1992
issue of the Pattern Recognition journal we present a paper on this
subject. We use Kohonen and multilayer ANNs, and as inputs we use
centroidal profile, cumulative angular, or curvature representations.
We compare these six implementations with the traditional methods of
Fourier descriptors and invariant moments.
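
For readers who have not met the centroidal-profile representation: it is
simply the sequence of distances from the shape's centroid to points
sampled along its boundary, so it is translation invariant by construction
and becomes scale invariant after normalizing by the mean radius. A rough
illustrative sketch in C (not the code used in the paper):

/* Rough sketch (not the paper's code): centroidal profile of a closed
 * contour given as n boundary points (x[i], y[i]).  Subtracting the
 * centroid removes translation; dividing by the mean radius removes
 * scale.  Rotation and starting-point invariance still have to be
 * handled by the classifier or by a further transform.               */

#include <math.h>
#include <stdio.h>

void centroidal_profile(const double *x, const double *y, int n,
                        double *profile)
{
    double cx = 0.0, cy = 0.0, mean_r = 0.0;
    int i;

    for (i = 0; i < n; i++) { cx += x[i]; cy += y[i]; }
    cx /= n;
    cy /= n;                                   /* centroid of the contour */

    for (i = 0; i < n; i++) {
        double dx = x[i] - cx, dy = y[i] - cy;
        profile[i] = sqrt(dx * dx + dy * dy);  /* distance from centroid  */
        mean_r += profile[i];
    }
    mean_r /= n;

    for (i = 0; i < n; i++)
        profile[i] /= mean_r;                  /* scale normalization     */
}

int main(void)
{
    double x[4] = {0, 2, 2, 0}, y[4] = {0, 0, 2, 2}, p[4];
    int i;

    centroidal_profile(x, y, 4, p);
    for (i = 0; i < 4; i++)
        printf("%.3f ", p[i]);                 /* a square: 1.000 x 4     */
    printf("\n");
    return 0;
}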

George Papadourakis



------------------------------

Subject: Graduate & Postdoctoral Study in Cognitive & Neural Bases of Learning at Rutgers Univ.
From: Mark Gluck <gluck@pavlov.Rutgers.EDU>
Date: Wed, 25 Dec 91 18:40:03 -0500

-> Please Post or Distribute

Graduate and Postdoctoral Training in the:

COGNITIVE & NEURAL BASES OF LEARNING
at the
Center for Molecular & Behavioral Neuroscience
Rutgers University; Newark, NJ

Graduate and postdoctoral positions are available for those
interested in joining our lab to pursue research and training in
the cognitive and neural bases of learning and memory, with a
special emphasis on computational neural-network theories of
learning. Current research topics include:

* Empirical and Computational Studies of Human Learning
Experimental research involves studies of human learning and
judgment -- especially classification learning -- motivated
by a desire to evaluate adaptive network theories of human
learning and better understand the relationship between
animal and human learning. Theoretical (computational) work
seeks to develop and extend adaptive network models of
learning to more accurately reflect a wider range of animal
and human learning behaviors. Applications of these
behavioral models to analyses of the neural bases of animal
and human learning are of particular interest.

* Computational Models of the Neurobiology of Learning & Memory
Understanding the neural bases of learning through
computational models of neural circuits and systems,
especially the cerebellar and hippocampal areas involved in
classical Pavlovian conditioning of motor-reflex learning,
is our primary goal. Related work seeks to understand
hippocampal function in a wider range of animal and human
learning behaviors.

______________________________________________________________________
Other Information:

RESEARCH FACILITIES: A new center for graduate study and research
in cognitive, behavioral, and molecular neuroscience. The
program emphasizes interdisciplinary and integrative analyses of
brain and behavior. Located in the new Aidekman Neuroscience
Research Center, the C.M.B.N. has state-of-the-art communication
facilities, computers, offices, and laboratories.

LOCATION: Newark, New Jersey: 20 minutes from Manhattan but also
close to rural New Jersey countryside. Other nearby
universities and industry research labs with related research
programs include: Rutgers (New Brunswick), NYU, Princeton,
Columbia, Siemens, NEC, AT&T, Bellcore, and IBM.

CURRENT FACULTY: Elizabeth Abercrombie, Gyorgy Buzsaki, Ian
Creese, Mark Gluck, Howard Poizner, Margaret Shiffrar, Ralph
Siegel, Paula Tallal, and James Tepper. Five additional faculty
will be hired.

SUPPORT: The Center has 10 state-funded postdoctoral positions
with additional positions funded from grants and fellowships.
The graduate program is research-oriented and leads to a Ph.D.
in Behavioral and Neural Sciences; all students are fully funded.

SELECTION CRITERIA & PREREQUISITES: Candidates with any (or all)
of the following skills are encouraged to apply:
(1) familiarity with neural-network theories and algorithms, (2)
strong computational and analytic skills, and (3) experience
with experimental methods in cognitive psychology. Evidence of
prior research ability and strong writing skills are critical.

______________________________________________________________________

For more information on graduate or postdoctoral training in
learning and memory at CMBN/Rutgers, please send a letter with a
statement of your research and career interests, and a resume
(C.V.), to:

Dr. Mark A. Gluck Phone: (201) 648-1080 (x3221)
Center for Molecular
& Behavioral Neuroscience
Rutgers University
197 University Ave.
Newark, New Jersey 07102 Email: gluck@pavlov.rutgers.edu



------------------------------

Subject: Update Release of PD LVQ Program Package
From: lvq@cochlea.hut.fi (LVQ_PAK)
Organization: Helsinki University of Technology, Finland
Date: 31 Dec 91 12:46:55 +0000

************************************************************************
*                                                                      *
*                               LVQ_PAK                                *
*                                                                      *
*                                 The                                  *
*                                                                      *
*                     Learning Vector Quantization                     *
*                                                                      *
*                           Program Package                            *
*                                                                      *
*                     Version 1.1 (December, 1991)                     *
*                                                                      *
*                           Prepared by the                            *
*                     LVQ Programming Team of the                      *
*                  Helsinki University of Technology                   *
*            Laboratory of Computer and Information Science            *
*                 Rakentajanaukio 2 C, SF-02150 Espoo                  *
*                               FINLAND                                *
*                                                                      *
*                          Copyright (c) 1991                          *
*                                                                      *
************************************************************************

Public-domain programs for Learning Vector Quantization (LVQ) algorithms
are available via anonymous FTP on the Internet.

This is the announcement of the first update release of the package. If
you have already obtained version 1.0 of the programs, you may use the
'patch' program and the diff listing appended to the end of this mail
to update your version of the package to 1.1.

"What is LVQ?", you may ask --- See the following reference, then: Teuvo
Kohonen. The self-organizing map. Proceedings of the IEEE,
78(9):1464-1480, 1990.

In short, LVQ is a group of methods applicable to statistical pattern
recognition, in which the classes are described by a relatively small
number of codebook vectors, properly placed within each class zone such
that the decision borders are approximated by the nearest-neighbor rule.
Unlike in normal k-nearest-neighbor (k-nn) classification, the original
samples are not themselves used as codebook vectors; instead they are
used to tune them. LVQ is concerned with the optimal placement of these
codebook vectors into class zones.
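
For readers who would like the flavor of the algorithm before fetching the
package, the basic LVQ1 step is easy to state: find the codebook vector
nearest to a training sample, then move it toward the sample if their
classes agree and away from it if they disagree. The following is a
minimal illustrative sketch of one such step (it is not the LVQ_PAK code
itself):

/* Minimal illustrative sketch of one LVQ1 training step; NOT the
 * LVQ_PAK code.  codebook holds n_codebook vectors of length dim,
 * stored row by row; cb_class gives the class of each vector.  Call
 * this in a loop over the training samples while slowly decreasing
 * the gain alpha.                                                    */

void lvq1_step(double *codebook, const int *cb_class,
               int n_codebook, int dim,
               const double *x, int x_class, double alpha)
{
    int i, j, nearest = 0;
    double best = -1.0;

    for (i = 0; i < n_codebook; i++) {       /* nearest codebook vector */
        double d = 0.0;
        for (j = 0; j < dim; j++) {
            double diff = x[j] - codebook[i * dim + j];
            d += diff * diff;
        }
        if (best < 0.0 || d < best) { best = d; nearest = i; }
    }

    for (j = 0; j < dim; j++) {              /* LVQ1 update rule        */
        double *m = &codebook[nearest * dim + j];
        if (cb_class[nearest] == x_class)
            *m += alpha * (x[j] - *m);       /* pull toward the sample  */
        else
            *m -= alpha * (x[j] - *m);       /* push away from it       */
    }
}

LVQ2.1 refines this by updating the two nearest codebook vectors together
when the sample falls near the decision border between a correct-class and
a wrong-class vector; see the package documentation and the Kohonen
reference above for the details.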

This package contains all the programs necessary for the correct
application of certain LVQ algorithms in an arbitrary statistical
classification or pattern recognition task. Two particular variants of
the algorithm, LVQ1 and LVQ2.1, have been selected for this package.

This code is distributed without charge on an "as is" basis. There is no
warranty of any kind by the authors or by Helsinki University of
Technology.

In the implementation of the LVQ programs we have tried to keep the code
as simple as possible. The programs should therefore compile on various
machines without any machine-specific modifications to the code. All
programs have been written in ANSI C. The programs are available in two
archive formats, one for the UNIX environment and the other for MS-DOS.
Both archives contain exactly the same files.

These files can be accessed via FTP as follows:

1. Create an FTP connection from wherever you are to machine
"cochlea.hut.fi". The internet address of this machine is
130.233.168.48, for those who need it.

2. Log in as user "anonymous" with your own e-mail address as password.

3. Change remote directory to "/pub/lvq_pak".

4. At this point you should be able to get a listing of files in this
directory with DIR and fetch the ones you want with GET. (The exact
FTP commands you use depend on your local FTP program.) Remember
to use the binary transfer mode for compressed files.

The lvq_pak program package includes the following files:

- Documentation:
    README               short description of the package
                         and installation instructions
    document.ps          documentation in PostScript format
    document.ps.Z        same as above but compressed
    document.txt         documentation in ASCII format

- Source file archives (which contain the documentation, too):
    lvq_p1r1.exe         Self-extracting MS-DOS archive file
    lvq_pak-1.1.tar      UNIX tape archive file
    lvq_pak-1.1.tar.Z    same as above but compressed


An example of FTP access is given below

unix> ftp cochlea.hut.fi (or 130.233.168.48)
Name: anonymous
Password: <your email address>
ftp> cd /pub/lvq_pak
ftp> binary
ftp> get lvq_pak-1.1.tar.Z
ftp> quit
unix> uncompress lvq_pak-1.1.tar.Z
unix> tar xvfo lvq_pak-1.1.tar

See file README for further installation instructions.

All comments concerning this package should be
addressed to lvq@cochlea.hut.fi.

************************************************************************

If you have obtained the version 1.0 of this package, it is advisable to
use the following diff listing together with the program 'patch', to update
your version to 1.1.

An example of the patch usage is given below (it should be run in the
same directory where the program codes are kept)

unix> cat this.patch.mail | patch

*** lvq_pak.c Tue Dec 31 10:40:01 1991
--- lvq_pak.c Tue Dec 31 08:13:54 1991
***************
*** 185,191 ****
ofree(clab->label);
clab = clab->next;
while (clab != NULL) {
! ofree(clab->next->label);
olab = clab->next;
ofree(clab);
clab = olab;
--- 185,191 ----
ofree(clab->label);
clab = clab->next;
while (clab != NULL) {
! ofree(clab->label);
olab = clab->next;
ofree(clab);
clab = olab;
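
(Reading the listing, the fix appears to be that the label-freeing loop now
frees the current node's label, ofree(clab->label), rather than the next
node's, ofree(clab->next->label).)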

************************************************************************

------------------------------

End of Neuron Digest [Volume 9 Issue 2]
***************************************
