Neuron Digest Saturday, 13 May 1989 Volume 5 : Issue 22
Today's Topics:
Preprints of two recent publications are available
TR available
research reports available
CVPR89 Announcement
TR: Virtual Memories and Massive Generalization
POST-DOC: SPEECH & NEURAL NETS
Peripheral N.S. and Homeostasis: BBS Call for Commentators
ERPs, Memory and Attention: BBS call for Commentators
Neural Networks Symposium
Vision and Image Analysis
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).
------------------------------------------------------------
Subject: Preprints of two recent publications are available
From: Jose Ambros-ingerson <orion.cf.uci.edu!uci-ics!glacier.ics.uci.edu!jose@OBERON.USC.EDU>
Organization: University of California, Irvine - Dept of ICS
Date: 31 Mar 89 07:40:22 +0000
Preprints of two recent publications are available from the Computational
Neuroscience Program at the University of California at Irvine:
==================================================================
DERIVATION OF ENCODING CHARACTERISTICS OF LAYER II CEREBRAL CORTEX
Richard Granger, Jose Ambros-Ingerson, and Gary Lynch
Center for the Neurobiology of Learning and Memory
University of California
Irvine, CA. 92717
Computer simulations of layers I and II of piriform (olfactory) cortex
indicate that this biological network can generate a series of distinct
output responses to individual stimuli, such that different responses encode
different levels of information about a stimulus. In particular, after
learning a set of stimuli modeled after distinct groups of odors, the
simulated network's initial response to a cue indicates only its group or
category, whereas subsequent responses to the same stimulus successively
subdivide the group into increasingly specific encodings of the individual
cue. These sequences of responses amount to an automated organization of
perceptual memories according to both their similarities and differences,
facilitating transfer of learned information to novel stimuli without loss
of specific information about exceptions. Human recognition performance
robustly exhibits multiple levels: a given object can be identified as a
vehicle, as an automobile, or as a Mustang. The findings reported here
suggest that a function as apparently complex as hierarchical recognition
memory, which seems suggestive of higher `cognitive' processes, may be a
fundamental intrinsic property of the operation of this single cortical cell
layer in response to naturally-occurring inputs to the structure. We offer
the hypothesis that the network function of superficial cerebral cortical
layers may simultaneously acquire and hierarchically organize information
about the similarities and differences among perceived stimuli.
Experimental manipulation of the simulation has generated hypotheses of
direct links between the values of specific biological features and
particular attributes of behavior, generating testable physiological and
behavioral predictions.
(Appears in Journal of Cognitive Neuroscience, 1:61-84, 1989).
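As a toy illustration of the successive-readout idea described above (a
sketch only, not the authors' piriform simulation: the stimulus
dimensionality, Gaussian group structure, and nearest-prototype readout
are invented for the example), a first response can name a cue's
category and a later response the individual item:

    # Toy sketch in Python/NumPy; all parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Three hypothetical "odor groups", each a prototype plus noise.
    prototypes = rng.normal(size=(3, 20))
    stimuli = {(g, i): prototypes[g] + 0.2 * rng.normal(size=20)
               for g in range(3) for i in range(4)}

    def respond(cue, memory):
        # First sample: nearest group prototype (category-level code).
        group = int(np.argmin([np.linalg.norm(cue - p) for p in prototypes]))
        # Later sample: with the group component removed, nearest stored
        # exemplar of that group (item-level code).
        residual = cue - prototypes[group]
        items = [kv for kv in memory.items() if kv[0][0] == group]
        key, _ = min(items, key=lambda kv:
                     np.linalg.norm(residual - (kv[1] - prototypes[group])))
        return group, key

    cue = stimuli[(1, 2)] + 0.05 * rng.normal(size=20)
    print(respond(cue, stimuli))   # typically (1, (1, 2)): group, then item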
====================================================
MEMORIAL OPERATION OF MULTIPLE, INTERACTING SIMULATED BRAIN STRUCTURES
Richard Granger, Jose Ambros-Ingerson, Ursula Staubli and Gary Lynch
Center for the Neurobiology of Learning and Memory
University of California
Irvine, CA. 92717
Primary findings from simulations of the superficial layers of olfactory
cortex have been that repeated sampling of stimuli has two major effects:
first, multiple samples greatly increase the information capacity of a
network compared to that for a single sample, and second, the breaking of
the response into distinct samples imposes an organization on the memories
thus read out. It was found that repetitive sampling allows the network to
form and read out a sequence of different representations of a stimulus,
denoting information ranging from the membership of that stimulus in a group
of similar stimuli, to specific information unique to the stimulus itself.
This led us to the hypothesis that the combination of particular cellular
physiological features, anatomical designs, and repetitive sampling
performance allows cortical networks to construct perceptual hierarchies
(Lynch and Granger, 1989)*. Those initial simulation experiments did not
address what is presumably an essential feature of repetitive sampling:
namely, the interaction between the cortex and its inputs. The present
paper reviews both our isolated cortical simulations and our first efforts
to explore the issue of interaction between cortex and peripheral
structures. New findings indicate that the mechanism of repeated sampling
enables active analysis of stimuli into their learned components.
*[Lynch, G. and Granger, R. (1989). Simulation and analysis of a cortical
network. The Psychology of Learning and Motivation, Vol.23 (in press).]
(To appear in: Neuroscience and Connectionist Models, M.Gluck and
D.Rumelhart, Eds., Hillsdale: Erlbaum Associates, 1989.)
===================================================
Send requests for reprints to:
Richard Granger
Computational Neuroscience Program
Bonney Center
University of California
Irvine, California 92717
(granger@ics.uci.edu)
------------------------------
Subject: TR available
From: "Jose del R. MILLAN" <mcvax!fib.upc.es!millan@uunet.UU.NET>
Date: 31 Mar 89 17:09:00 +0800
The following Tech. Report is available. Requests should be sent to
MILLAN@FIB.UPC.ES
________________________________________________________________________
Learning by Back-Propagation:
a Systolic Algorithm and its Transputer Implementation
Technical Report LSI-89-15
Jose del R. MILLAN
Dept. de Llenguatges i Sistemes Informatics
Universitat Politecnica de Catalunya
Pau BOFILL
Dept. d'Arquitectura de Computadors
Universitat Politecnica de Catalunya
ABSTRACT
In this paper we present a systolic algorithm for back-propagation, a
supervised, iterative, gradient-descent, connectionist learning rule. The
algorithm works on feedforward networks where connections can skip layers
and fully exploits spatial and training parallelisms, which are inherent to
back-propagation. Spatial parallelism arises during the propagation of
activity ---forward--- and error ---backward--- for a particular
input-output pair. On the other hand, when this computation is carried out
simultaneously for all input-output pairs, training parallelism is obtained.
In the spatial dimension, a single systolic ring sequentially carries out
the three main steps of the learning rule ---forward pass, backward pass
and weight-increment update. Furthermore, the same pattern of matrix delivery is used
in both the forward and the backward passes. In this manner, the algorithm
preserves the similarity of the forward and backward passes in the original
model. The resulting systolic algorithm is dual with respect to the pattern
of matrix delivery ---either columns or rows. Finally, an implementation of
the systolic algorithm for the spatial dimension is derived that uses a
linear ring of Transputer processors.
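For reference, here is a minimal sequential version of the three steps
the systolic ring pipelines ---forward pass, backward pass and
weight-increment update--- written for a plain layered network (the
skip-layer connections, ring scheduling and Transputer mapping of the
report are not reproduced; the shapes, learning rate and logistic
activation are illustrative assumptions):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_step(W1, W2, X, T, lr=0.5):
        # Forward pass for all input-output pairs at once; this batch
        # dimension is the "training parallelism" of the abstract.
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)
        # Backward pass reuses the same weight matrices, transposed ---
        # the forward/backward similarity the algorithm preserves.
        dY = (Y - T) * Y * (1.0 - Y)
        dH = (dY @ W2.T) * H * (1.0 - H)
        # Weight-increment update, the third step of the ring's cycle.
        W2 -= lr * H.T @ dY
        W1 -= lr * X.T @ dH
        return W1, W2

    rng = np.random.default_rng(1)
    X, T = rng.random((4, 3)), rng.random((4, 2))
    W1 = rng.normal(scale=0.1, size=(3, 5))
    W2 = rng.normal(scale=0.1, size=(5, 2))
    W1, W2 = train_step(W1, W2, X, T)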
------------------------------
Subject: research reports available
From: Richard Zemel <zemel@ai.toronto.edu>
Date: Wed, 05 Apr 89 18:11:27 -0400
The following two technical reports are now available. The first report
describes the main ideas of TRAFFIC. It appeared in the Proceedings of the
1988 Connectionist Summer School, Morgan Kaufmann Publishers, edited by D.S.
Touretzky, G.E. Hinton, and T.J. Sejnowski.
The second report is a revised version of my Master's thesis. It contains a
thorough description of the model, as well as implementation details and
some experimental results. This report is rather long (~75 pages), so if
you are simply curious about the model, ask for the first one; if you
want to plough through the details, ask specifically for the second one.
***************************************************************************
"TRAFFIC: A Model of Object Recognition
Based On Transformations of Feature Instances"
Richard S. Zemel, Michael C. Mozer, Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Technical report CRG-TR-88-7 (Sept. 1988)
ABSTRACT
Visual object recognition involves not only detecting the presence of
salient features of objects, but also ensuring that these features are in the
appropriate relationships to one another. Recent connectionist models
designed to recognize two-dimensional shapes independent of their
orientation, position, and scale have primarily dealt with simple objects,
and they have not represented structural relations of these objects in an
efficient manner. A new model is proposed that takes advantage of the fact
that given a rigid object, and a particular feature of that object, there is
a fixed viewpoint-independent transformation from the feature's reference
frame to the object's. This fixed transformation can be expressed as a
matrix multiplication that is efficiently implemented by a set of weights in
a connectionist network. By using a hierarchy of these transformations,
with increasing feature complexity in each successive layer, a network can
recognize multiple objects in parallel.
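The fixed feature-to-object transformation can be made concrete with a
2-D similarity transform in homogeneous coordinates (an illustrative
parameterization, not necessarily the one used in the reports): because
the map is one constant matrix for a rigid object, it can be stored as
a single set of connection weights.

    import numpy as np

    def frame_transform(theta, scale, tx, ty):
        # Rotation by theta, uniform scaling, then translation (tx, ty),
        # packed into one 3x3 homogeneous matrix.
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[scale * c, -scale * s, tx],
                         [scale * s,  scale * c, ty],
                         [0.0,        0.0,       1.0]])

    # A point in the feature's reference frame, homogeneous [x, y, 1].
    feature_point = np.array([1.0, 0.0, 1.0])
    # Fixed, viewpoint-independent feature->object transform.
    M = frame_transform(np.pi / 4, 2.0, 0.5, -1.0)
    object_point = M @ feature_point   # one matrix multiply = one weight layer
    print(object_point)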
******************************
"TRAFFIC: A Connectionist Model of Object Recognition"
Richard S. Zemel
Department of Computer Science
University of Toronto
Technical report CRG-TR-89-2 (March 1989)
ABSTRACT
Recent connectionist models designed to recognize two-dimensional shapes
independent of their orientation, position, and scale have not represented
structural relations of the objects in an efficient manner. A new model is
described that takes advantage of the fact that given a rigid object, and a
particular feature of that object, there is a fixed viewpoint-independent
transformation from the feature's reference frame to the object's. This
fixed transformation can be expressed as a matrix multiplication that is
efficiently implemented by a set of weights in a connectionist network. The
model, called TRAFFIC (a loose acronym for ``transforming feature
instances''), uses a hierarchy of these transformations, with increasing
feature complexity in each successive layer, in order to recognize multiple
objects in parallel. An implementation of TRAFFIC is described, along with
experimental results demonstrating the network's ability to recognize
constellations of stars in a viewpoint-independent manner.
*************************************************************************
Copies of either report can be obtained by sending an email request to:
INTERNET: carol@ai.toronto.edu
UUCP: uunet!utai!carol
BITNET: carol@utorgpu
------------------------------
Subject: CVPR89 Announcement
From: "Worthy N. Martin" <haven!uvaarpa!virginia!uvacs!wnm@PURDUE.EDU>
Organization: U.Va. CS Department, Charlottesville, VA
Date: 09 Apr 89 22:28:12 +0000
IEEE Computer Society Conference
on
COMPUTER VISION AND PATTERN RECOGNITION
Sheraton Grand Hotel
San Diego, California
June 4-8, 1989
General Chair
Professor Rama Chellappa
Department of EE-Systems
University of Southern California
Los Angeles, California 90089-0272
Program Co-Chairs
Professor Worthy Martin              Professor John Kender
Dept. of Computer Science            Dept. of Computer Science
Thornton Hall                        Columbia University
University of Virginia               New York, New York 10027
Charlottesville, Virginia 22901
Program Committee
Chris Brown Avi Kak Theo Pavlidis
Allen Hansen Rangaswamy Kashyap Alex Pentland
Robert Haralick Joseph Kearney Azriel Rosenfeld
Ellen Hildreth Daryl Lawton Roger Tsai
Anil Jain Martin Levine John Tsotsos
Ramesh Jain David Lowe John Webb
John Jarvis Gerard Medioni
General Conference Sessions will be held
June 6-8, 1989
Conference session topics include:
-- Edge Detection
-- Shape from _____ (Shading, Contour, ...)
-- Feature Extraction
-- Motion
-- Morphology
-- Neural Networks
-- Range Data: Generation and Processing
-- Image and Texture Segmentation
-- Monocular, Polarization Cues
-- Stereo
-- Object Recognition
-- Visual Navigation
-- Preprocessing
-- Applications of Computer Vision
-- Vision Systems and Architectures
Invited Speakers:
June 6: Prof. J. Feldman (ICSI, Berkeley),
        "Time, Space and Form in Computer Vision"
June 7: Prof. V.S. Ramachandran (Univ. Calif., San Diego),
        "Visual Perception in Humans and Machines"
June 8: Prof. M.A. Arbib (Univ. of Southern Calif.),
        "Schemas, Computer Vision and Neural Networks"
Tutorials
June 4, am: 1. Morphology and Computer Vision (R.M. Haralick)
            2. Intermediate and Low Level Vision (M.S. Trivedi)
June 5, am: 3. Robust Methods for Computer Vision (W. Forstner)
            4. Parallel Algorithms and Architectures for
               Computer Vision (V.K.P. Kumar)
June 5, pm: 5. Analog Networks for Computer Vision: Theory and
               Applications (C. Koch)
            6. Model Based Vision (W.E.L. Grimson)
The IEEE Computer Society will also hold a workshop entitled:
Artificial Intelligence in Computer Vision
June 5, 1989
General Chair: Professor Rama Chellappa
Program Co-Chairs: Professor J.K. Aggarwal and Professor A. Rosenfeld
Conference Registration
(for CVPR and Tutorials)
Conference Department
CVPR
IEEE Computer Society
1730 Massachusetts Ave
Washington, D.C. 20036-1903
(202)371-1013
Fees, before May 8
CVPR - $200 (IEEE Members, includes proceedings and banquet)
- $100 (Students, includes proceedings and banquet)
Tutorials - $100 per session (IEEE Members and Students)
Hotel Reservations
Sheraton Grand Hotel on Harbor Island
1590 Harbor Island Drive
San Diego, CA 92101
(619)692-2265
Rooms - $102 per night (single or double)
The Advance Program, with registration forms, etc., will be
mailed from the IEEE offices shortly.
------------------------------
Subject: TR: Virtual Memories and Massive Generalization
From: Paul Smolensky <pauls@boulder.Colorado.EDU>
Date: Thu, 13 Apr 89 09:45:53 -0600
Virtual Memories and Massive Generalization
in Connectionist Combinatorial Learning
Olivier Brousse & Paul Smolensky
Department of Computer Science &
Institute of Cognitive Science
University of Colorado at Boulder
We report a series of experiments on connectionist learning that
addresses a particularly pressing set of objections to the plausibility
of connectionist learning as a model of human learning. Connectionist
models have typically suffered from rather severe problems of
inadequate generalization (where generalizations are significantly
fewer than training inputs) and interference of newly learned items
with previously learned items. Taking a cue from the domains in which
human learning dramatically overcomes such problems, we see that indeed
connectionist learning can escape these problems in *combinatorially
structured domains.* In the simple combinatorial domain of letter
sequences, we find that a basic connectionist learning model trained on
50 6-letter sequences can correctly generalize to over 10,000 novel
sequences. We also discover that the model exhibits over 1,000,000
*virtual memories*: new items which, although not correctly
generalized, can be learned in a few presentations while leaving
performance on the previously learned items intact. Virtual memories
can be thought of as states which are not harmony maxima (energy
minima) but which can become so with a few presentations, without
interfering with existing harmony maxima. Like generalizations,
virtual memories in combinatorial domains are largely novel
combinations of familiar subpatterns extracted from the contexts in
which they appear in the training set. We conclude that, in
combinatorial domains like language, connectionist learning is not as
harmful to the empiricist position as typical connectionist learning
experiments might suggest.
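A toy Hopfield-style energy sketch of the virtual-memory idea
(illustrative only: the authors' model and its harmony function are not
reproduced, and the random 64-unit patterns here lack the combinatorial
structure the abstract emphasizes). A state that is not yet an energy
minimum becomes one after a few Hebbian presentations, while the
energies of previously stored patterns barely move:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 64
    stored = rng.choice([-1, 1], size=(5, n)).astype(float)
    W = (stored.T @ stored) / n          # Hebbian storage of 5 patterns
    np.fill_diagonal(W, 0.0)

    def energy(x, W):
        return -0.5 * x @ W @ x          # harmony maximum = energy minimum

    new = rng.choice([-1, 1], size=n).astype(float)  # candidate "virtual memory"
    print("before:", energy(new, W))
    for _ in range(3):                   # a few presentations suffice
        W += np.outer(new, new) / n
        np.fill_diagonal(W, 0.0)
    print("after: ", energy(new, W))     # energy at `new` drops sharply
    print("stored:", [round(energy(p, W), 1) for p in stored])  # roughly intact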
Submitted to the annual meeting of the Cognitive Science Society.
Please send requests to conn_tech_report@boulder.Colorado.EDU and
request report CU-CS-431-89. These will be available for mailing
shortly.
------------------------------
Subject: POST-DOC: SPEECH & NEURAL NETS
From: Ron Cole <cole@cse.ogc.edu>
Date: Mon, 17 Apr 89 19:08:54 -0700
POST-DOCTORAL POSITION AT OREGON GRADUATE CENTER
Speech Recognition with Neural Nets
A post-doctoral position is available at the Oregon Graduate Center to study
connectionist approaches to computer speech recognition, beginning Summer or
Fall, 1989.
The main requirements are
(1) a strong background in the theory and application of neural networks,
and
(2) willingness to learn about the wonderful world of speech.
Knowledge of computer speech recognition is helpful but not required; the PI
has extensive experience in the area and is willing to teach the necessary
skills.
The goal of our research is to develop speech recognition algorithms that
are motivated by research on hearing, acoustic phonetics and speech
perception, and to compare performance of algorithms that use neural network
classifiers to more traditional techniques. In the past year, our group has
applied neural network classification to several problem areas in
speaker-independent recognition of continuous speech: pitch and formant
tracking, segmentation, broad phonetic classification and fine phonetic
discrimination. In addition, we have recently demonstrated the feasibility
of using multi-layered networks to identify languages on the basis of their
temporal characteristics (preprint available from vincew@cse.ogc.edu).
OGC provides an excellent environment for research in speech recognition and
neural networks, with state-of-the-art speech processing software (including
Dick Lyon's cochleogram, a representation based on a computational model of
the auditory system), speech databases, and simulation tools. The
department has a Sequent Symmetry multiprocessor, Intel Hypercube and Cogent
Research XTM parallel workstations, and the speech project has several
dedicated Sun4 and Sun3 workstations. The speech group has close ties with
Dan Hammerstrom's Cognitive Architecture Project at OGC, and with Les Atlas
and his group at the University of Washington.
OGC is located ten miles west of Portland on a spacious campus in the heart
of Oregon's technology corridor. Nearby companies include Sequent, Intel,
Tektronix, Cogent Research, Mentor Graphics, BiiN, NCUBE, and FPS Computing.
The cultural attractions of Portland are close by, and the Columbia River
Gorge, Oregon Coast and Cascade Mts (skiing through September) are less than
90 minutes away. Housing is inexpensive and quality of life is excellent.
Please send resume to:
Ronald Cole
Computer Science and Engineering
Oregon Graduate Center
19600 N.W. Von Neumann Drive
Beaverton, OR 97006-1999
503 690 1159
------------------------------
Subject: Peripheral N.S. and Homeostasis: BBS Call for Commentators
From: harnad@Princeton.EDU (Stevan Harnad)
Date: Fri, 21 Apr 89 23:12:06 -0400
Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international, interdisciplinary
journal that provides Open Peer Commentary on important and controversial
current research in the biobehavioral and cognitive sciences. Commentators
must be current BBS Associates or nominated by a current BBS Associate. To
be considered as a commentator on this article, to suggest other appropriate
commentators, or for information about how to become a BBS Associate, please
send email to:
harnad@confidence.princeton.edu or harnad@pucc.bitnet or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]
____________________________________________________________________
BETA-AFFERENTS:
A FUNDAMENTAL DIVISION OF THE NERVOUS SYSTEM MEDIATING HOMEOSTASIS?
James C. Prechtl & Terry L. Powley
Laboratory of Regulatory Psychobiology
Department of Physiological Sciences
Purdue University
West Lafayette, IN 47907
Keywords: autonomic nervous system; capsaicin; dorsal root ganglion;
neuroimmunology; nerve growth factor; nociception; sympathetics; substance P;
sensory neurons; tachykinins; visceral afferents
The peripheral nervous system (PNS) has classically been subdivided into a
somatic division composed of both afferent and efferent pathways and an
autonomic division containing only efferents. Langley, who codified this
asymmetrical plan at the beginning of the 20th century, considered different
afferents, including visceral ones, as candidates for inclusion in his
concept of the "autonomic nervous system" (ANS), but he finally excluded all
candidates for lack of any distinguishing histological markers. Langley's
classification has been enormously influential in shaping modern ideas about
both the structure and the function of the PNS. Here we survey modern
information about the PNS and argue that many of the sensory neurons
designated as "visceral" and "somatic" are in fact part of a histologically
distinct group of afferents dedicated to autonomic function. These afferents
have traditionally been known as "small dark" neurons or B-neurons. In this
target article we outline an association between autonomic and B-neurons
based on ontogeny, cell phenotype and functional relations, grouping them
together as part of a common reflex system involved in homeostasis. This
more parsimonious classification of the PNS, provided by the identification
of a group of afferents associated primarily with the ANS, avoids a number
of confusions produced by the classical orientation. It may also have
practical implications for our understanding of nociception, homeostatic
reflexes and the evolution of the nervous system.
------------------------------
Subject: ERPs, Memory and Attention: BBS call for Commentators
From: harnad@Princeton.EDU (Stevan Harnad)
Date: Fri, 21 Apr 89 23:17:58 -0400
Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international, interdisciplinary
journal that provides Open Peer Commentary on important and controversial
current research in the biobehavioral and cognitive sciences. Commentators
must be current BBS Associates or nominated by a current BBS Associate. To
be considered as a commentator on this article, to suggest other appropriate
commentators, or for information about how to become a BBS Associate, please
send email to:
harnad@confidence.princeton.edu or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]
____________________________________________________________________
THE ROLE OF ATTENTION IN AUDITORY INFORMATION PROCESSING
AS REVEALED BY EVENT-RELATED POTENTIALS
Risto Naatanen
Department of Psychology
University of Helsinki
Helsinki, Finland
Keywords: selective attention, echoic memory, cortical localization,
audition, orienting response, automatic processing
This target article examines the roles of attention and automaticity in
auditory processing as revealed by event-related potential (ERP) research.
An ERP component called the "mismatch negativity" indicates that physical
and temporal features of auditory stimuli are fully processed whether or not
they are attended. It also suggests that there exists a mechanism of passive
attention switching with changes in repetitive input. ERPs also reveal some
of the cerebral mechanisms by which acoustic stimulus events produce and
control conscious perception. The "processing negativity" component
implicates a mechanism for attending selectively to stimuli defined by
certain physical features. Stimulus selection occurs in the form of a
matching process in which each input is compared to the "attentional trace,"
a voluntarily maintained representation of the task-relevant features of the
stimulus to be attended.
------------------------------
Subject: Neural Networks Symposium
From: ukrainec@maccs.McMaster.CA (Andy Ukrainec)
Organization: McMaster U., Hamilton, Ont., Can.
Date: Mon, 24 Apr 89 20:48:58 +0000
-----------------
| Neural Networks |
-----------------
Communications Research Laboratory / TRIO One-Day Symposium
June 26th, 1989
8:30 a.m. - 5:00 p.m.
Pre-registration by mail: $100.00 ($25.00 for students), Canadian funds
Deadline for registration: June 1st, 1989
For further information contact: Anne Myers
CRL, McMaster University
1280 Main St. West
Hamilton, Ontario, Canada
L8S 4K1
(416) 525-9140 ext. 4085
Guest Speakers:
- Dr. Isabelle Guyon, AT&T Bell Labs
- Mr. Gary Josin, Neural Systems Inc., B.C.
- Dr. Paul J. Werbos, Nat. Sc. Foundation, Washington, D.C.
- Dr. B. Widrow, Stanford University, California
! Andrew Ukrainec ukrainec@maccs.mcmaster.ca "I am what I am!" !
! Communications Research Laboratory, McMaster University !
! Hamilton, Ontario, Canada L8S 4K1 !
------------------------------
Subject: Vision and Image Analysis
From: j daugman <daugman%charybdis@harvard.harvard.edu>
Date: Tue, 02 May 89 10:56:56 -0400
Request for Technical Reports and Papers
(Second Request)
In preparation for upcoming Reviews and Tutorials at 1989 Conferences, I
would be grateful to receive copies of any papers or technical reports
pertaining to applications of neural nets to vision and image analysis.
(This repeats an earlier request sent out in February.)
Please send any material to the following address. Thank you in advance.
John Daugman
950 William James Hall
Harvard University
Cambridge, Mass. 02138
------------------------------
End of Neuron Digest
*********************