Neuron Digest   Thursday, 11 Feb 1993                Volume 11 : Issue 11 

Today's Topics:
Re: What are neural columns?
Re: Neural Columns
Re: Neural Columns
5th edition of Neural Network Introduction book
correction: 5th edition of neural network intro book
visual tracking


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: Re: What are neural columns?
From: wcalvin@stein.u.washington.edu (William Calvin)
Organization: University of Washington
Date: 19 Jan 93 18:58:46 +0000

[[ Editor's Note: This message and the next two are from a Neuroscience
mailing list. I thought readers of this Digest would find the following
discussion and references interesting, especially since many people are
moving connectionist models from the "brain inspired" to the "brain
plausible." -PM ]]

tbrannon@CSEE.Lehigh.Edu (tbrannon) writes:
>Can someone provide a brief overview of what neural columns are?

The newer parts of cerebral cortex seem to have two levels of "modular"
organization: about 300 minicolumns (about 30-50 microns in diameter,
possibly corresponding to those lined-up cells you see in Nissl stains,
radiating up from the white matter) nestled inside a macrocolumn (maybe
0.5 mm in size).

Physiologically, they are defined by clustering of response types: a
typical minicolumn is the orientation column of visual cortex, all of
whose neurons seem to be interested in lines/edges at about the same
orientation (within 10 degrees). A typical macrocolumn is the ocular
dominance column, best defined in layer IVc, whose cells are responsive
to only one eye; move laterally and you find another 0.5 mm wide group
whose cells are interested in the other eye.

Some other collections have been called "columns", such as Hubel &
Wiesel's hypercolumns (essentially two ocular dominance columns, left
and right, taken together, each of which has a full range of orientation
columns embedded in it). Macrocolumns were first discovered by
Mountcastle in 1957 in somatosensory cortex, where modalities (skin vs.
joint sensation, for instance) cluster. They have been seen in
several of the secondary visual areas, specializing in various kinds of
feature detection. There is no general theory for what association
cortex might be doing with this kind of organization, though I find
myself working on one (and NO, there is nothing on this in my books such
as CEREBRAL SYMPHONY or ASCENT OF MIND; it's all happened recently, e.g.,
Soc. Neurosci. Abstr. 18:214.18, 1992).

William H. Calvin
University of Washington NJ-15
Psychiatry & Behavioral Sciences
Seattle, Washington 98195
WCalvin@U.Washington.edu
FAX: 1-206-720-1989

------------------------------

Subject: Re: Neural Columns
From: ddoherty@ics.uci.edu (Donald Doherty)
Organization: Univ. of Calif., Irvine, Info. & Computer Sci. Dept.
Date: 23 Jan 93 01:27:38 +0000

In article <1jfig7INNf8o@shelley.u.washington.edu> wcalvin@stein.u.washington.edu (William Calvin) writes:
>erwin@trwacs.fp.trw.com (Harry Erwin) writes:
>
>>I'm looking for a good reference on the functions and dynamics of neural
>>columns.
>
>>--
>>Harry Erwin
>>Internet: erwin@trwacs.fp.trw.com
>
>Vernon Mountcastle's chapter in THE NEUROSCIENCES FOURTH STUDY PROGRAM
>(MIT Press 1979) is a good summary. Shaw, Harth and Scheibel in
>Experimental Neurology 77:324-358 (1982) is a good theoretical take on the
>subject.
> William H. Calvin WCalvin@U.Washington.edu
>

Hi Harry,

Dr. Calvin's suggestion of Vernon Mountcastle's chapter is your best bet
(Mountcastle is one of the originators of the idea). FYI, the same
chapter is contained in a book titled "The Mindful Brain: Cortical
Organization and the Group-Selective Theory of Higher Brain Function,"
by Gerald Edelman and Vernon Mountcastle (1978), MIT Press.

Also, what you are referring to as neural columns is more commonly
referred to as "cortical columns." And it is good to keep in mind that
cortical columns are an idea that may have positive heuristic value but
may or may not actually exist.

Don

-------------------------------------------------------------------------------
Donald Doherty
Dept. of Psychobiology
University of California
Irvine, CA 92717-4550
U.S.A.
Email: ddoherty@ics.uci.edu
CompuServe: 76646,1321
FAX: (714) 725-2447
Voice: (714) 856-1776

------------------------------

Subject: Re: Neural Columns
From: falex@coue.loria.fr (Frederic Alexandre)
Organization: Crin-Inria-Lorraine
Date: 25 Jan 93 13:55:34 +0000

We have been working for six years on a model of the cortical column as
a basic unit of connectionist networks. The basic idea is to define a
functional micro-circuit of neurons (a cortical column) as a basic unit
with specific input/output, learning, and functioning rules. These rules
and this architecture have been defined with neurobiological
considerations in mind.

From a computer science point of view, we have applied this model to
speech processing, image interpretation, and robotics.
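
To make the "column as a processing unit" idea concrete, here is a
minimal Python sketch. This is NOT the model from the papers below: the
class, the winner-take-all output, and the Hebbian-style update are all
assumed details, meant only to show a network whose basic units are
small circuits rather than single neurons.

    import numpy as np

    class ColumnUnit:
        """Toy 'cortical column': a few neurons sharing one input."""
        def __init__(self, n_inputs, n_neurons=4, lr=0.1, seed=0):
            rng = np.random.default_rng(seed)
            # one weight row per neuron inside the column
            self.w = rng.normal(size=(n_neurons, n_inputs))
            self.lr = lr
            self.winner = 0

        def forward(self, x):
            a = self.w @ x                   # each neuron's response
            self.winner = int(np.argmax(a))  # column settles on one neuron
            return a[self.winner]            # single output for the unit

        def learn(self, x):
            # nudge the winning neuron's weights toward the input
            self.w[self.winner] += self.lr * (x - self.w[self.winner])

    # a small "network" whose units are columns, not single neurons
    columns = [ColumnUnit(n_inputs=8, seed=i) for i in range(3)]
    x = np.ones(8)
    outputs = [c.forward(x) for c in columns]
    for c in columns:
        c.learn(x)

The point of the sketch is only the packaging: each unit has its own
input/output convention and its own learning rule, which is the level at
which the papers below define their column model.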

If you are interested, you may have a look at:

Alexandre, F., Burnod, Y., Guyot, F., Haton, J.P. (1991) "The
Cortical Column: a new processing unit for multilayered networks",
Neural Networks, vol. 4, no. 1, pp. 15-25.

Guyot, F., Alexandre, F., Haton, J.P. (1990) "Principles and
applications of the cortical column symbolic neural model", IJCNN'90,
San Diego.

Alexandre, F., Guyot, F., Haton, J.P. (1990) "A connectionist
network with two complementary visual processing systems for x-ray
image interpretation", INNC'90, Paris.

I hope this helps.

alex

E-mail: falex@loria.fr

------------------------------

Subject: 5th edition of Neural Network Introduction book
From: Patrick van der Smagt <smagt@fwi.uva.nl>
Date: Fri, 29 Jan 93 16:19:31 +0100

The fifth edition of the neural network introductory text

An Introduction to Neural Networks

Ben Kr\"ose and Patrick van der Smagt
Dept. of Computer Systems
University of Amsterdam

is now available by anonymous ftp. This text is in use at
our department for an introductory neural network course,
given by the authors.

This version differs from the previous (1991) one in several
aspects:
- many corrected errors & prettified figures
- the chapter on Unsupervised Learning is rewritten & expanded
- the chapter on Robot Control is adapted
- the chapter on Vision is expanded
- the chapter on simulators has been removed
- the complete list of references (which are also available
per chapter) has been removed

The book consists of 131 pages.

Comments on its contents, additions, corrections, and flames are
very much appreciated at smagt@fwi.uva.nl.

If you want to use this manuscript for your courses, please get in
touch with me.

Patrick van der Smagt

-------------------------------------------------------------------------------
TABLE OF CONTENTS

Preface 9


I FUNDAMENTALS 11


1 Introduction 13


2 Fundamentals 15
2.1 A framework for distributed representation 15
2.1.1 Processing units 16
2.1.2 Connections between units 16
2.1.3 Activation and output rules 17
2.2 Network topologies 17
2.3 Training of artificial neural networks 18
2.3.1 Paradigms of learning 18
2.3.2 Modifying patterns of connectivity 18
2.4 Notation and terminology 19
2.4.1 Notation 19
2.4.2 Terminology 20



II THEORY 23


3 Adaline and Perceptron 25
3.1 The adaptive linear element (Adaline) 25
3.2 The Perceptron 26
3.3 Exclusive-or problem 27
3.4 Multi-layer perceptrons can do everything 28
3.5 Perceptron learning rule and convergence theorem 30
3.6 The delta rule 31


4 Back-Propagation 33
4.1 Multi-layer feed-forward networks 33
4.2 The generalised delta rule 34
4.3 Working with back-propagation 36
4.4 Other activation functions 37
4.5 Deficiencies of back-propagation 38
4.6 Advanced algorithms 39
4.7 Applications 42


5 Self-Organising Networks 45

5.1 Competitive learning 46
5.1.1 Clustering 46
5.1.2 Vector quantisation 49
5.1.3 Using vector quantisation 49
5.2 Kohonen network 52
5.3 Principal component networks 55
5.3.1 Introduction 55
5.3.2 Normalised Hebbian rule 56
5.3.3 Principal component extractor 56
5.3.4 More eigenvectors 57
5.4 Adaptive resonance theory 58
5.4.1 Background: Adaptive resonance theory 58
5.4.2 ART1: The simplified neural network model 58
5.4.3 ART1: The original model 61
5.5 Reinforcement learning 63
5.5.1 Associative search 63
5.5.2 Adaptive critic 64
5.5.3 Example: The cart-pole system 65

6 Recurrent Networks 69
6.1 The Hopfield network 70
6.1.1 Description 70
6.1.2 Hopfield network as associative memory 71
6.1.3 Neurons with graded response 72
6.2 Boltzmann machines 73


III APPLICATIONS 77

7 Robot Control 79
7.1 End-effector positioning 80
7.1.1 Camera-robot coordination is function approximation 81
7.2 Robot arm dynamics 86
7.3 Mobile robots 88
7.3.1 Model based navigation 88
7.3.2 Sensor based control 90

8 Vision 93
8.1 Introduction 93
8.2 Feed-forward types of networks 94
8.3 Self-organising networks for image compression 94
8.3.1 Back-propagation 95
8.3.2 Linear networks 95
8.3.3 Principal components as features 96
8.4 The cognitron and neocognitron 97
8.4.1 Description of the cells 97
8.4.2 Structure of the cognitron 98
8.4.3 Simulation results 99
8.5 Relaxation types of networks 99
8.5.1 Depth from stereo 99
8.5.2 Image restoration and image segmentation 101
8.5.3 Silicon retina 101



IV IMPLEMENTATIONS 105


9 General Purpose Hardware 109
9.1 The Connection Machine 110
9.1.1 Architecture 110
9.1.2 Applicability to neural networks 111
9.2 Systolic arrays 112

10 Dedicated Neuro-Hardware 115
10.1 General issues 115
10.1.1 Connectivity constraints 115
10.1.2 Analogue vs. digital 116
10.1.3 Optics 116
10.1.4 Learning vs. non-learning 117
10.2 Implementation examples 118
10.2.1 Carver Mead's silicon retina 118
10.2.2 LEP's LNeuro chip 120


Author Index 123

Subject Index 126


-------------------------------------------------------------------------------
To retrieve the document by anonymous ftp :

Unix> ftp galba.mbfys.kun.nl (or ftp 131.174.82.73)
Name (galba.mbfys.kun.nl <yourname>) anonymous
331 Guest login ok, send ident as password.
Password <your login name>
ftp> bin
ftp> cd neuro-intro
ftp> get neuro-intro.400.ps.Z
150 Opening BINARY mode data connection for neuro-intro.400.ps.Z (xxxxxx bytes).
ftp> bye
Unix> uncompress neuro-intro.400.ps.Z
Unix> lpr -s neuro-intro.400.ps    (optional)

-------------------------------------------------------------------------------
The file neuro-intro.400.ps.Z is the manuscript for 400 dpi printers.
If you have a 300 dpi printer, get neuro-intro.300.ps.Z instead. The
1991 version is still available as neuro-intro.1991.ps.Z (here 1991 is
the year, not the number of dots per inch; we don't have such good
printers here).

Do preview the manuscript before you print it, since otherwise 131
pages of virginal paper are wasted.

Some systems cannot handle the large PostScript file (around 2 MB).
On Unix systems it helps to give lpr the "-s" flag, so that the
PostScript file is not spooled but linked (see man lpr). On others,
you may have no choice but to extract (chunks of) pages manually and
print them separately. Unix filters like pstops, psselect, and psxlate
(the source code of the latter is available from various ftp sites)
can be used to select pages to be printed. Alternatively, print from
your previewer. Better still, don't print at all!
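
For example, to extract and print only a range of pages with psselect
(part of the psutils package; the page range here is only illustrative):

Unix> psselect -p25-32 neuro-intro.400.ps part.ps
Unix> lpr -s part.ps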

Enjoy!


------------------------------

Subject: correction: 5th edition of neural network intro book
From: Patrick van der Smagt <smagt@fwi.uva.nl>
Date: Sun, 31 Jan 93 13:42:57 +0100

Apologies!

It appears that galba's ftp manager (the chap with the password) changed
the ftp layout; previously, when you logged in on galba, a "cd pub" was
done automatically. This has changed for now, so that the instructions
for getting

The fifth edition of the neural network introductory text

An Introduction to Neural Networks

Ben Kr\"ose and Patrick van der Smagt
Dept. of Computer Systems
University of Amsterdam

are now:
-------------------------------------------------------------------------------
To retrieve the document by anonymous ftp :

Unix> ftp galba.mbfys.kun.nl (or ftp 131.174.82.73)
Name (galba.mbfys.kun.nl <yourname>) anonymous
331 Guest login ok, send ident as password.
Password <your login name>
ftp> bin
ftp> cd pub/neuro-intro
ftp> get neuro-intro.400.ps.Z
150 Opening BINARY mode data connection for neuro-intro.400.ps.Z (xxxxxx bytes).
ftp> bye
Unix> uncompress neuro-intro.400.ps.Z
Unix> lpr -s neuro-intro.400.ps    (optional)

-------------------------------------------------------------------------------
There is a possibility that the previous state (where "cd neuro-intro"
instead of "cd pub/neuro-intro" must be done) will be restored in the
future. Be forewarned.

(The notes from the previous announcement about the 400/300 dpi and
1991 versions, about previewing before printing, and about handling the
large PostScript file apply unchanged.)

Enjoy!

Patrick

PS: The lengths of some chapters reflect the focus of the research in
our group. E.g., chapter 6 is ridiculously short (as was brought to my
attention) and needs improvement. Next time.


------------------------------

Subject: visual tracking
From: Denis Mareschal <denis@psy.ox.ac.uk>
Date: Wed, 03 Feb 93 15:47:36 +0000

Hi,

A couple of months ago I sent around a request for further information
concerning higher level connectionist approaches to the development of
visual tracking. I received a number of replies spanning the broad range of
fields in which neural network research is being conducted.

I also received a significant number of requests for the resulting
compiled list of references. I am thus posting a list of references resulting
directly and indirectly from my original request. I have also included a few
relevant psychology review papers.

Thanks to all those who replied. Clearly this list is not exhaustive,
and if anyone reading it notices an omission which may be of interest, I
would greatly appreciate hearing from them.

Cheers,

Denis Mareschal
Department of Experimental Psychology
South Parks Road
Oxford University
Oxford OX1 3UD
maresch@black.ox.ac.uk



REFERENCES:


Allen, R. B. (1988), Sequential connectionist networks for answering simple
questions about a microworld. In: Proceedings of the Tenth Annual
Conference of the Cognitive Science Society, pp. 489-495, Hillsdale,
NJ: Erlbaum.

Baloch, A. A. & Waxman, A. M. (1991). Visual learning, adaptive expectations
and behavioral conditioning of the mobile robot MAVIN, Neural Networks,
vol. 4, pp. 271-302.

Buck, D. S. & Nelson, D. E. (1992). Applying the abductory induction mechanism
(AIM) to the extrapolation of chaotic time series. In: Proceedings of
the National Aerospace Electronics Conference (NAECON), 18-22 May,
Dayton, Ohio, vol. 3, pp 910-915.

Bremner, J. G. (1985). Object tracking and search in infancy: A review of data
and a theoretical evaluation, Developmental Review, 5, pp. 371-396

Carpenter, G. A. & Grossberg, S. (1992). Neural Networks for Vision and Image
Processing, Cambridge, MA: MIT Press.

Cleeremans, A., Servan-Schreiber, D. & McClelland, J. L. (1989). Finite state
automata and simple recurrent networks, Neural Computation, 1, pp.
372-381.

Deno, D. C., Keller, E. L. & Crandall, W. F. (1989). Dynamical neural network
organization of the visual pursuit system, IEEE Transactions on
Biomedical Engineering, vol. 36, pp. 85-91.

Dobnikar, A., Likar, A. & Podbregar, D. (1989). Optimal visual tracking with
artificial neural network. In: First I.E.E. International Conference
on Artificial Neural Networks (conf. Publ. 313), pp 275-279.

Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14, pp.
179-211.

Ensley, D. & Nelson, D. E. (1992). Applying Cascade-correlation to the
extrapolation of chaotic time series. Proceedings of the Third
Workshop on Neural Networks: Academic/Industrial/NASA/Defense;
10-12 February, Auburn, Alabama.

Fay, D. A. & Waxman, A. M. (1992). Neurodynamics of real-time image velocity
extraction. In: G. A. Carpenter & S. Grossberg (Eds), Neural Networks
for Vision and Image Processing, pp 221-246, Cambridge, MA: MIT Press.

Gordon, Steele, & Rossmiller (1991). Predicting trajectories using recurrent
neural networks. In: Dagli, Kumara, & Shin (Eds), Intelligent Systems
Through Artificial Neural Networks, ASME Press. (Sorry that's the best
I can do for this reference)

Grossberg, S. & Rudd, M. E. (1989). A neural architecture for visual motion
perception, Neural Networks, 2, pp. 421-450.

Koch, C. & Ullman, S. (1985). Shifts in selective visual attention: towards
the underlying neural circuitry. Human Neurobiology, 4, pp. 219-227.

Lisberger, S. G., Morris, E. J. & Tychsen, L. (1987). Visual motion processing
and sensory-motor integration for smooth pursuit eye movements,
Annual Review of Neuroscience, 10, pp. 97-129.

Lumer, E. D. (1992). The phase tracker of attention. In: Proceedings of the
Fourteenth Annual Conference of the Cognitive Science Society, pp
962-967, Hillsdale, NJ: Erlbaum.

Neilson, P. D., Neilson, M. D. & O'Dwyer, N. J. (1993, in press). What limits
high speed tracking performance?, Human Movement Science, 12.

Nelson, D. E., Ensley, D. D. & Rogers, S. K. (1992). Prediction of chaotic time
series using Cascade Correlation: Effects of number of inputs and
training set size. In: The Society for Optical Engineering (SPIE),
Proceedings of the Applications of Artificial Neural Networks III
Conference, 21-24 April, Orlando, Florida, vol. 1709, pp 823-829.

Marshall, J. A. (1990). Self-organizing neural networks for perception of
visual motion, Neural Networks, 3, pp. 45-74.

Martin, W. N. & Aggarwal, J. K. (Eds) (1988). Motion Understanding: Robot
and Human Vision. Boston: Kluwer Academic Publishers.

Metzgen, Y. & Lehmann, D. (1990). Learning temporal sequences by local synaptic
changes, Network, 1, pp. 271-302.

Nakayama, K. (1985). Biological image motion processing: A review. Vision
Research 25, pp 625-660.

Parisi, D., Cecconi, F. & Nolfi, S. (1990). Econets: Neural networks that learn
in an environment, Network, 1, pp. 149-168.

Pearlmutter, B. A. (1989). Learning state space trajectories in recurrent
networks, Neural Computation, 1, pp. 263-269.

Regier, T. (1992). The acquisition of lexical semantics for spatial terms:
A connectionist model of perceptual categorization. International
Computer Science Institute (ICSI) Technical Report TR-92-062, Berkeley.

Schmidhuber, J. & Huber, R. (1991). Using adaptive sequential neurocontrol
for efficient learning of translation and rotation invariance. In:
T. Kohonen, K. Makisara, O. Simula & J. Kangas (Eds), Artificial
Neural Networks, pp 315-320, North Holland: Elsevier Science.

Schmidhuber, J. & Huber, R. (1991). Learning to generate artificial foveal
trajectories for target detection. International Journal of Neural
Systems, 2, pp. 135-141.

Schmidhuber, J. & Wahnsiedler, R. (1992). Planning simple trajectories using
neural subgoal generators. Second International Conference on
Simulations of Adaptive Behavior (SAB92). (Available by ftp from Jordan
Pollack's Neuroprose Archive).

Sereno, M. E. (1986). Neural network model for the measurement of visual
motion. Journal of the Optical Society of America A, 3, p. 72.

Sereno, M. E. (1987). Implementing stages of motion analysis in neural
networks. Program of the Ninth Annual Conference of the Cognitive
Science Society, pp. 405-416, Hillsdale, NJ: Erlbaum.

Servan-Schreiber, D., Cleeremans, A. & McClelland, J. L. (1991). Graded state
machines: The representation of temporal contingencies in simple
recurrent networks, Machine Learning, 7, pp. 161-193.

Shimohara, K., Uchiyama, T. & Tokunaya, Y. (1988). Back propagation networks for
event-driven temporal sequence processing. In: IEEE International
Conference on Neural Networks (San Diego), vol. 1, pp. 665-672, NY:
IEEE.

Sutton, R. S. (1988). Learning to predict by the methods of temporal
differences, Machine Learning, 3, pp 9-44.

Tolg, S. (1991). A biologically motivated system to track moving objects
by active camera control. In: T. Kohonen, K. Makisara, O. Simula & J.
Kangas (Eds), Artificial Neural Networks, pp 1237-1240, North Holland:
Elsevier Science.

Wechsler, H. (Ed) (1991). Neural Networks for Human and Machine Perception,
New York: Academic Press.







------------------------------

End of Neuron Digest [Volume 11 Issue 11]
*****************************************
