Neuron Digest Tuesday, 23 Mar 1993 Volume 11 : Issue 19
Today's Topics:
Request for software written in C++??
Very Fast Simulated Reannealing update v7.6
King's College London Neural Networks MSc and PhD courses
Beta sites for MUME
New Postdoc Position in Neural Modelling
Neural, fuzzy, rough systems
neural net applications to fixed-income security markets
A NEURAL COMPUTATION course reading list
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Request for software written in C++??
From: "Michael Soper" <soper@next2.nwac.sea06.navy.mil>
Date: Wed, 24 Feb 93 08:27:28 -0800
I am also trying to find neural network software written in C++. We
currently use the ASPIRIN/MIGRAINE software from the MITRE Corp., but as
we move more toward OOP we want to move the neural network code to C++.
If you know of any good backpropagation systems that are available on the
Internet I would appreciate any information. Thank you for your time in
addressing these issues.
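[[ Editor's Note: for readers new to the area, here is a minimal sketch
of what such a backpropagation system computes -- a toy 2-2-1 network
learning XOR, in plain C (C rather than C++, since the Digest's code
discussions are C-based). All sizes, names and constants here are
illustrative assumptions and are unrelated to ASPIRIN/MIGRAINE. ]]

#include <math.h>
#include <stdio.h>

#define ETA 0.5                         /* learning rate (assumed) */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    /* weights: w1[j][0..1] = hidden unit j from inputs, w1[j][2] = bias;
       w2[0..1] = output from hidden units, w2[2] = output bias */
    double w1[2][3] = { {0.5, -0.4, 0.1}, {-0.3, 0.6, -0.2} };
    double w2[3]    = { 0.4, -0.5, 0.2 };
    double x[4][2]  = { {0,0}, {0,1}, {1,0}, {1,1} };
    double t[4]     = { 0, 1, 1, 0 };   /* XOR targets */

    for (int epoch = 0; epoch < 20000; epoch++) {
        for (int p = 0; p < 4; p++) {
            /* forward pass */
            double h[2], y;
            for (int j = 0; j < 2; j++)
                h[j] = sigmoid(w1[j][0]*x[p][0] + w1[j][1]*x[p][1] + w1[j][2]);
            y = sigmoid(w2[0]*h[0] + w2[1]*h[1] + w2[2]);

            /* backward pass: error terms use the sigmoid derivative y(1-y) */
            double dy = (t[p] - y) * y * (1.0 - y);
            for (int j = 0; j < 2; j++) {
                double dh = dy * w2[j] * h[j] * (1.0 - h[j]);
                w1[j][0] += ETA * dh * x[p][0];
                w1[j][1] += ETA * dh * x[p][1];
                w1[j][2] += ETA * dh;           /* bias sees input 1 */
                w2[j]    += ETA * dy * h[j];
            }
            w2[2] += ETA * dy;                  /* output bias */
        }
    }
    /* report the learned mapping (plain backprop on XOR usually, though
       not always, converges from a small asymmetric initialization) */
    for (int p = 0; p < 4; p++) {
        double h0 = sigmoid(w1[0][0]*x[p][0] + w1[0][1]*x[p][1] + w1[0][2]);
        double h1 = sigmoid(w1[1][0]*x[p][0] + w1[1][1]*x[p][1] + w1[1][2]);
        printf("%g %g -> %.3f (target %g)\n",
               x[p][0], x[p][1], sigmoid(w2[0]*h0 + w2[1]*h1 + w2[2]), t[p]);
    }
    return 0;
}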
------------------------------
Subject: Very Fast Simulated Reannealing update v7.6
From: ingber@umiacs.umd.edu (Lester Ingber)
Organization: UMIACS, University of Maryland, College Park, MD 20742
Date: 25 Feb 93 20:56:17 +0000
*****************
VFSR Update v7.6
*****************
Very Fast Simulated Reannealing (VFSR) is a very powerful global
optimization algorithm, coded in C, especially useful for nonlinear
and/or stochastic systems.
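[[ Editor's Note: for orientation, a minimal generic simulated-annealing
loop in C (Metropolis acceptance, geometric cooling). This sketches the
family of methods VFSR belongs to, not VFSR itself; VFSR's distinguishing
features -- a parameter-dependent fat-tailed generating distribution and
periodic reannealing of the temperatures -- are exactly what this toy,
with its invented objective, omits. ]]

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double cost(double x)            /* toy multimodal objective */
{
    return x * x + 10.0 * sin(3.0 * x);
}

int main(void)
{
    double x = 5.0, fx = cost(5.0), T = 10.0;
    srand(12345);
    while (T > 1e-4) {
        /* propose a uniform random step; VFSR instead draws from a
           temperature-dependent, fat-tailed distribution */
        double xn = x + ((double) rand() / RAND_MAX - 0.5) * 2.0;
        double fn = cost(xn);
        /* Metropolis rule: always accept downhill moves; accept uphill
           moves with probability exp(-dE/T) to escape local minima */
        if (fn <= fx || (double) rand() / RAND_MAX < exp(-(fn - fx) / T)) {
            x = xn;
            fx = fn;
        }
        T *= 0.999;                     /* geometric cooling schedule */
    }
    printf("minimum near x = %.4f, cost = %.4f\n", x, fx);
    return 0;
}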
I am still depositing the latest version at uunet, which can be
accessed via anonymous ftp:
ftp ftp.uu.net
cd tmp
binary
get vfsr.Z
Also in this directory is a review article on simulated annealing by
Lester Ingber, sarev.ps.Z. However, uunet regularly destroys old files
in tmp, and so I cannot guarantee these files always will be there.
After retrieving vfsr.Z, `uncompress vfsr.Z` will leave vfsr.
`sh vfsr` will leave directory VFSR.DIR with the code. (If you
receive VFSR via email, first `uudecode mailfile` to get vfsr.Z,
and then follow the previous directions.)
The latest version in Netlib (ftp research.att.com to the opt
directory, login as netlib, binary, get vfsr.Z) and Statlib (ftp
lib.stat.cmu.edu to the general directory, login as statlib, get vfsr)
is v6.35.
The latest version in the UMIACS archive (ftp ftp.umiacs.umd.edu to
the pub/ingber directory, binary, get vfsr.Z) is v6.38. This archive
also contains some relevant (p)reprints.
Another reprint comparing VFSR to other algorithms is
rosen.advsim.ps.Z, by Bruce Rosen, coauthor of VFSR. This PostScript
paper can be obtained via anonymous ftp from ringer.cs.utsa.edu
in the pub/rosen directory. VFSR updates are placed there more often
than in Netlib, Statlib or UMIACS.
If this is not convenient, let me know and I will send you the code
or papers you require via email. Sorry, I cannot assume the task of
mailing out hardcopies.
The feedback of many users has been very important, and will continue
to be important, in maintaining and improving VFSR. Especially if
you have discovered solutions to any problems in implementing VFSR
on specific architectures, e.g., PCs, please contact me, as this
information should be shared with other users in the NOTES file.
Lester
Highlights of CHANGES since v7.1, released on 1 Feb 93:
========================================================================
14 Feb 93
readme.ms:
In the Makefile section, more explicit information was inserted on
the three alternative methods of passing the Program Options.
========================================================================
13 Feb 93
TESTS/v..., readme.ms, user.h, user.c, vfsr.c, vfsr.h:
Added flexibility to parameter_type[]. Most users need only
be concerned that real-valued parameters must be designated by
negative integers and integer-valued parameters must be designated
by positive integers.
Some users may find it useful to include additional information. If
the absolute value of the parameter_type[] is +-2, +-4, +-6, +-8, then
no derivatives are calculated for this parameter, effectively excluding
it from being reannealed or having its curvature[] calculated. (This
only affects integer-valued parameters if INCLUDE_INTEGER_PARAMETERS is
set to TRUE.) If the absolute value is greater than 9, that value is
used to multiply DELTA_X for that variable; therefore, we have set the
cast on *parameter_type to be LONG_INT. This flexibility can be used
to exclude discontinuous functions from derivative operations, to have
DELTA_X for integer-valued functions be integral, etc. Odd numbers +-5
and +-7 are reserved for specific user-defined options, as are even
numbers +-6 and +-8, which require bypassing derivative calculations.
(We will keep the other numbers for future VFSR enhancements.)
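[[ Editor's Note: a hypothetical illustration of the conventions just
described. The array contents are invented and the LONG_INT typedef is
an assumption; consult vfsr.h for the actual definitions. ]]

typedef long int LONG_INT;  /* assumed stand-in; see vfsr.h */

LONG_INT parameter_type[4] = {
    -1,  /* real-valued parameter, derivatives calculated           */
     1,  /* integer-valued parameter, derivatives calculated        */
    -2,  /* real-valued; absolute value 2, 4, 6 or 8 => no
            derivatives, so no reannealing and no curvature[]       */
    10   /* integer-valued; absolute value > 9, so 10 multiplies
            DELTA_X for this parameter                              */
};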
The utility of having some user-defined flexibility was highlighted
by some correspondence with Bob Goldstein <u09872@uicvm.uic.edu>
and Brooke Paul Anderson <brooke@cco.caltech.edu>.
vfsr.c:
Corrected calculation of diagonal curvatures.
========================================================================
5 Feb 93
Makefile, readme.ms, user.[ch], vfsr.[ch], TESTS:
Added CURVATURE_0=FALSE to DEFINE_OPTIONS. When CURVATURE_0=TRUE,
then only a one-dimensional array, curvature[0], need be passed to
the vfsr module. This can help when dealing with very large parameter
spaces where the curvature calculations are not required. curvature[0]
is still maintained as a hook for USER_INITIAL_COST_TEMP=TRUE in any case.
The general problem was emphasized by Bob Goldstein
<u09872@uicvm.uic.edu>.
========================================================================
3 Feb 93
NOTES:
Comments on setting some VFSR options in case premature convergence
is suspected. Thanks to Wu Kenong <wu@mcrcim.mcgill.edu> for checking
over some of the details.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Makefile, options_file, user.c, user.h:
Added options_file and OPTIONS_FILE=FALSE. If both
ADAPTIVE_OPTIONS=TRUE and OPTIONS_FILE=TRUE, then the Program Options
are read in from options_file. This permits more efficient testing
of various Program Options.
This feature was in the original VFSR code, and was added here at
the suggestion of Davyd Norris <daffy@physics.monash.edu.au>.
========================================================================
|| Prof. Lester Ingber [10ATT]0-700-L-INGBER ||
|| Lester Ingber Research Fax: 0-700-4-INGBER ||
|| P.O. Box 857 Voice Mail: 1-800-VMAIL-LI ||
|| McLean, VA 22101 EMail: ingber@alumni.caltech.edu ||
------------------------------
Subject: King's College London Neural Networks MSc and PhD courses
From: Mark Plumbley <mark@dcs.kcl.ac.uk>
Date: Fri, 26 Feb 93 13:25:01 +0000
Fellow Neural Networkers,
Please post or forward this announcement about our M.Sc. and Ph.D. courses
in Neural Networks to anyone who might be interested.
Thanks,
Mark Plumbley
-------------------------------------------------------------------------
Dr. Mark D. Plumbley Tel: +44 71 873 2241
Centre for Neural Networks Fax: +44 71 873 2017
Department of Mathematics/King's College London/Strand/London WC2R 2LS/UK
-------------------------------------------------------------------------
CENTRE FOR NEURAL NETWORKS
and
DEPARTMENT OF MATHEMATICS
King's College London
Strand
London WC2R 2LS, UK
M.Sc. AND Ph.D. COURSES IN NEURAL NETWORKS
---------------------------------------------------------------------
M.Sc. in INFORMATION PROCESSING and NEURAL NETWORKS
---------------------------------------------------
A ONE YEAR COURSE
CONTENTS
Dynamical Systems Theory
Fourier Analysis
Biosystems Theory
Advanced Neural Networks
Control Theory
Combinatorial Models of Computing
Digital Learning
Digital Signal Processing
Theory of Information Processing
Communications
Neurobiology
REQUIREMENTS
First Degree in Physics, Mathematics, Computing or Engineering
NOTE:
For 1993/94 we have 3 SERC quota awards for this course.
---------------------------------------------------------------------
Ph.D. in NEURAL COMPUTING
-------------------------
A 3-year Ph.D. programme in NEURAL COMPUTING is offered to applicants
with a First degree in Mathematics, Computing, Physics or Engineering
(others will also be considered). The first year consists of courses
given under the M.Sc. in Information Processing and Neural Networks
(see attached notice). Second and third year research will be
supervised in one of the various programmes in the development and
application of temporal, non-linear and stochastic features of neurons
in visual, auditory and speech processing. There is also work in
higher level category and concept formation and episodic memory
storage. Analysis and simulation are used, both on PCs, SUNs and
mainframe machines, and there is a programme on the development and use
of adaptive hardware chips in VLSI for pattern and speech processing.
This work is part of the activities of the Centre for Neural Networks
in the School of Physical Sciences and Engineering, which has over 40
researchers in Neural Networks. It is one of the main centres of the
subject in the U.K.
---------------------------------------------------------------------
For further information on either of these courses please contact:
Postgraduate Secretary
Department of Mathematics
King's College London
Strand
London WC2R 2LS, UK
MATHS@OAK.CC.KCL.AC.UK
------------------------------
Subject: Beta sites for MUME
From: Marwan Jabri <marwan@sedal.su.OZ.AU>
Date: Wed, 03 Mar 93 01:14:36 +1100
We are seeking university beta sites for testing a new release (0.6) of
a multi-net, multi-algorithm connectionist simulator (MUME) on the
following platforms:
- HPs 9000/700
- SGIs
- DEC Alphas
- PC DOS (with DJGCC)
If interested, please send your name, email, affiliation, address and fax
number to my email address below.
Note that starting with release 0.6, MUME (including source code) will be
made available to universities through FTP, but only after signature of a
license protecting the University of Sydney and the authors.
Marwan
Marwan Jabri Email: marwan@sedal.su.oz.au
Senior Lecturer Tel: (+61-2) 692-2240
SEDAL, Electrical Engineering, Fax: 660-1228
Sydney University, NSW 2006, Australia Mobile: (+61-18) 259-086
------------------------------
Subject: New Postdoc Position in Neural Modelling
From: "James A. Reggia" <reggia@cs.UMD.EDU>
Date: Tue, 02 Mar 93 09:31:31 -0500
Post-Doctoral Position in Neural Modelling
A new post-doctoral position in computational neuroscience will
be available at the University of Maryland, College Park, MD starting
between June 1 and Sept. 1, 1993. This research position will center
on modelling neocortical self-organization and plasticity. Requirements
are a PhD in computer science, neuroscience, applied math, or a related
area by the time the position starts, experience with neural modelling,
and familiarity with the language C. The position will last one or
(preferably) two years. The University of Maryland campus is located
just outside of Washington, DC.
If you would like to be considered for this position, please
send via regular mail services a cover letter expressing your interest
and desired starting date, a copy of your cv, the names of two possible
references (with their address, phone number, fax number, and email
address), and any other information you feel would be relevant to
James Reggia
Dept. of Computer Science
A. V. Williams Bldg.
University of Maryland
College Park, MD 20742
or send this information via FAX at (301)405-6707. (Applications
will NOT be accepted via email.) Closing date for receipt of
applications is March 26, 1993.
If you have questions about the position please send email
to reggia@cs.umd.edu .
------------------------------
Subject: Neural, fuzzy, rough systems
From: "Dr. Roman Swiniarski" <rswiniar%saturn@sdsu.edu>
Date: Tue, 02 Mar 93 09:34:39 -0800
Dear Madam/Sir/Professor,
I would like to provide information about the short course below.
We will be very happy to introduce the distinguished world-class
scientists and our friends: Professor L.K. Hansen, Professor W. Pedrycz and
Professor A. Skowron.
Best regards,
Roman Swiniarski,
NEURAL NETWORKS, FUZZY AND ROUGH SYSTEMS.
THEORY AND APPLICATIONS.
Friday, April 2, 1993, room BAM 341
A short course sponsored by the Interdisciplinary Research Center for
Scientific Modeling and Computation at the Department of Mathematical
Sciences, San Diego State University.
8:15-11:30 Professor L. K. Hansen, Technical University of Denmark, Denmark
1. Introduction to neural networks.
2. Neural Networks for Signal Processing, Prediction and Image Processing.
11:30-1:00 pm R. Swiniarski, San Diego State University
1. Application of neural networks to systems, adaptive control,
and genetics.
Break
2:00-4:00 Professor W. Pedrycz, University of Manitoba, Canada
1. Introduction to Fuzzy Sets.
2. Application of Fuzzy Sets:
- knowledge based computations and logic-oriented neurocomputing
- fuzzy modeling
- models of fuzzy reasoning
4:00-6:30 Professor A. Skowron, University of Warsaw, Poland
1. Introduction to Rough Sets and Decision Systems.
2. Applications of Rough Sets and Decision Systems.
3. Neural Networks, Fuzzy Systems, Rough Sets and Evidence Theory.
There will be an $80 (students $40) preregistration fee. To register,
please send your name and affiliation along with a check to
Interdisciplinary Research Center
Department of Mathematical Sciences
San Diego State University.
San Diego, California 92182-0314, U.S.A.
The check should be made out to SDSU Interdisciplinary Research Center.
The registration fee after March 18 will be $100. The number of
participants is limited.
Should you need further information, please contact Roman Swiniarski
(619) 594-5538 rswiniar@saturn.sdsu.edu or Jose Castillo (619) 594-7205
castillo@math.sdsu.edu.
You are cordially invited to participate in the short course.
------------------------------
Subject: neural net applications to fixed-income security markets
From: P.Refenes@cs.ucl.ac.uk
Date: Wed, 03 Mar 93 11:13:58 +0000
You might find the following of interest.
NEURAL NETWORK SYSTEM FOR TACTICAL ASSET ALLOCATION IN THE
GLOBAL BONDS MARKETS
M. AZEMA-BARAC, A. N. REFENES,
Department of Computer Science,
University College London,
Gower Street,
London WC1E 6BT, UK.
ABSTRACT
Asset allocation is a critical decision in fund management
and is having a profound effect on many industries. Domestic
and/or global asset allocation requires an understanding of
the linkages between the global economies, and between
fundamentals of the capital markets. Modeling of such
systems has traditionally been done in partial equilibrium.
Such models have failed to explain many empirical financial
anomalies.
Because of their inductive nature, dynamical systems such as
neural networks can bypass the step of theory formulation,
and they can infer complex non-linear relationships between
input and output variables. This paper presents a neural
network system for tactical asset allocation in the seven
major bond markets. The system consists of several networks
each designed to optimise a local portfolio (local bonds
plus US cash). These are subsequently integrated into a
global portfolio management system which imposes financial
constraints on asset allocation. The portfolio yields
returns in excess of 100% over three years, which compares
favourably with industry benchmarks returning 34% over the
same period. Intervals of values for the parameters that
influence network performance, over which this performance
is persistent, are identified.
REF: Proc. IEE Third Int. Conf. on ANNs, 25-27 May 1993, Brighton, UK.
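[[ Editor's Note: purely to make the integration step concrete, a hedged
sketch in C of combining locally optimised allocations under a single
global budget constraint. The names, the seven-market setup taken from
the abstract, and the simple normalisation rule are illustrative
assumptions, not the authors' method. ]]

#include <stdio.h>

#define N_MARKETS 7   /* the seven major bond markets of the abstract */

/* Each local network proposes a weight for its market (local bonds vs.
   US cash); the global stage rescales the weights so total exposure
   meets a budget constraint.  The paper's financial constraints are
   certainly richer than this single normalisation. */
static void integrate(const double local_w[N_MARKETS],
                      double global_w[N_MARKETS], double budget)
{
    double total = 0.0;
    for (int i = 0; i < N_MARKETS; i++)
        total += local_w[i];
    for (int i = 0; i < N_MARKETS; i++)
        global_w[i] = (total > 0.0) ? budget * local_w[i] / total
                                    : budget / N_MARKETS;  /* equal split */
}

int main(void)
{
    double local[N_MARKETS] = { 0.9, 0.2, 0.5, 0.7, 0.1, 0.4, 0.6 };
    double global[N_MARKETS];
    integrate(local, global, 1.0);      /* fully invested portfolio */
    for (int i = 0; i < N_MARKETS; i++)
        printf("market %d weight: %.3f\n", i, global[i]);
    return 0;
}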
-------------------------------------------------
NEUROFORECASTING CLUB
The UK's Department of Trade & Industry has recently
announced a Neural Computing Technology Transfer Programme.
The NEUROFORECASTING Club, part of the DTI's programme, aims
to establish an awareness of neural network technology
amongst IT suppliers, banks, multinational companies, and
financial institutions whose needs for efficient asset
management are of similar type.
The NEUROFORECASTING Club applies neural techniques to these
areas of financial decision making and forecasting and is
building application demonstrators in:
- foreign exchange & interest rate prediction,
- bond and stock valuation and trading,
- commodity price prediction,
- tactical asset allocation in the global capital markets,
- risk and liability management in the above areas.
The management consortium for the NEUROFORECASTING Club is a
collaboration between London Business School and University
College London. The Club has established a centralised unit
at LBS which acts as a centre for developing the application
demonstrators and as a place for members to second staff into
the application development cycle; the unit also maintains a
"hands-on" area, provides training facilities, organises
seminars, etc.
Participation is open to any UK Company, Organisation, or
Institution which is a potential user of neural network
systems, subject to signing the Club Agreement and paying
the agreed contribution towards Club activities.
The NEUROFORECASTING Club is receiving DTI support at 50% of
total cost, approximately $1 million over two years.
Matching funds will be raised from membership contributions.
Further Information: Prof. Derek Bunn, Department of
Decision Science, London Business School, Tel: (++44 71) 262
50 50, or Dr Paul Refenes, Department of Computer Science,
University College London, Tel: (++44 71) 380 73 29.
-----------------------------------------------------------
RESEARCH POSITIONS AT LBS
The Decision Science Faculty at the London Business School
welcomes applications from individuals for PhD Research or
Postdoctoral Fellowships. Areas of particular interest to
the group include decision analysis, optimisation,
forecasting, simulation and the application of neural
networks to investment and finance. PhD scholarships are
available from a number of corporate sponsors, and we are
currently keen to support researchers interested in applying
(1) neural networks to time-series forecasting and (2)
optimisation and decision analytic methods to electricity
capacity planning. Potential candidates are invited to
write, explaining research interests, together with a CV and
the addresses of two references, to Professor Derek Bunn,
London Business School, Sussex Place, Regent's Park, London
NW1 4SA.
------------------------------
Subject: A NEURAL COMPUTATION course reading list
From: unni@krusty.eecs.umich.edu (K. P. Unnikrishnan)
Organization: University of Michigan EECS Dept., Ann Arbor, MI
Date: 20 Feb 93 20:51:48 +0000
Here is the reading list for a course I taught last semester.
Unnikrishnan
-----------------------
READING LIST FOR THE COURSE "NEURAL COMPUTATION"
EECS-598-6 (FALL 1992), UNIVERSITY OF MICHIGAN
INSTRUCTOR: K. P. UNNIKRISHNAN
-----------------------------------------------
A. COMPUTATION AND CODING IN THE NERVOUS SYSTEM
1. Hodgkin, A.L., and Huxley, A.F. A quantitative description of membrane
current and its application to conduction and excitation in nerve. J. Physiol.
117, 500-544 (1952).
2a. Del Castillo, J., and Katz, B. Quantal components of the end-plate
potential. J. Physiol. 124, 560-573 (1954).
2b. Del Castillo, J., and Katz, B. Statistical factors involved in neuromuscular
facilitation and depression. J. Physiol. 124, 574-585 (1954).
3. Rall, W. Cable theory for dendritic neurons. In: Methods in neuronal
modeling (Koch and Segev, eds.) pp. 9-62 (1989).
4. Koch, C., and Poggio, T. Biophysics of computation: neurons, synapses and
membranes. In: Synaptic function (Edelman, Gall, and Cowan, eds.) pp.
637-698 (1987).
B. SENSORY PROCESSING IN VISUAL AND AUDITORY SYSTEMS
1. Werblin, F.S., and Dowling, J.E. Organization of the retina of the mudpuppy,
Necturus maculosus: II. Intracellular recording. J. Neurophysiol. 32, 339-355
(1969).
2a. Barlow H.B., and Levick, W.R. The mechanism of directionally selective
units in rabbit's retina. J. Physiol. 178, 477-504 (1965).
2b. Lettvin, J.Y., Maturana, H.R., McCulloch, W.S., and Pitts, W.H. What the
frog's eye tells the frog's brain. Proc. IRE 47, 1940-1951 (1959).
3. Hubel, D.H., and Wiesel, T.N. Receptive fields, binocular interaction and
functional architecture in the cat's visual cortex. J. Physiol. 160, 106-154
(1962).
4a. Suga, N. Cortical computational maps for auditory imaging. Neural Networks,
3, 3-21 (1990).
4b. Simmons, J.A. A view of the world through the bat's ear: the formation
of acoustic images in echolocation. Cognition, 33, 155-199 (1989).
C. MODELS OF SENSORY SYSTEMS
1. Hecht, S., Shlaer, S., and Pirenne, M.H. Energy, quanta, and vision. J. Gen.
Physiol. 25, 819-840 (1942).
2. Julesz, B., and Bergen, J.R. Textons, the fundamental elements in
preattentive vision and perception of textures. Bell Sys. Tech. J. 62,
1619-1645 (1983).
3a. Harth, E., Unnikrishnan, K.P., and Pandya, A.S. The inversion of sensory
processing by feedback pathways: a model of visual cognitive functions.
Science 237, 184-187 (1987).
3b. Harth, E., Pandya, A.S., and Unnikrishnan, K.P. Optimization of cortical
responses by feedback modification and synthesis of sensory afferents. A model
of perception and REM sleep. Concepts Neurosci. 1, 53-68 (1990).
3c. Koch, C. The action of the corticofugal pathway on sensory thalamic
nuclei: A hypothesis. Neurosci. 23, 399-406 (1987).
4a. Singer, W. et al., Formation of cortical cell assemblies. In: CSH Symposia
on Quant. Biol. 55, pp. 939-952 (1990).
4b. Eckhorn, R., Reitboeck, H.J., Arndt, M., and Dicke, P. Feature linking via
synchronization among distributed assemblies: Simulations of results from
cat visual cortex. Neural Comp. 2, 293-307 (1990).
5. Reichardt, W., and Poggio, T. Visual control of orientation behavior in
the fly. Part I. A quantitative analysis. Q. Rev. Biophys. 9, 311-375 (1976).
D. ARTIFICIAL NEURAL NETWORKS
1a. Block, H.D. The perceptron: a model for brain functioning. Rev. Mod. Phy.
34, 123-135 (1962).
1b. Minsky, M.L., and Papert, S.A. Perceptrons. pp. 62-68 (1988).
2a. Hornik, K., Stinchcombe, M., and White, H. Multilayer feedforward
networks are universal approximators. Neural Networks 2, 359-366 (1989).
2b. Lapedes, A., and Farber, R. How neural nets work. In: Neural Info. Proc.
Sys. (Anderson, ed.) pp. 442-456 (1987).
3a. Ackley, D.H., Hinton, G.E., and Sejnowski, T.J. A learning algorithm for
Boltzmann machines. Cog. Sci. 9, 147-169 (1985).
3b. Hopfield, J.J. Learning algorithms and probability distributions in
feed-forward and feed-back networks. PNAS, USA. 84, 8429-8433 (1987).
4. Tank, D.W., and Hopfield, J.J. Simple neural optimization networks:
An A/D converter, signal decision circuit, and linear programming circuit.
IEEE Tr. Cir. Sys. 33, 533-541 (1986).
E. NEURAL NETWORK APPLICATIONS
1. LeCun, Y., et al., Backpropagation applied to handwritten zip code
recognition. Neural Comp. 1, 541-551 (1990).
2. Lapedes, A., and Farber, R. Nonlinear signal processing using neural
networks. LA-UR-87-2662, Los Alamos Natl. Lab. (1987).
3. Unnikrishnan, K.P., Hopfield, J.J., and Tank, D.W. Connected-digit
speaker-dependent speech recognition using a neural network with time-delayed
connections. IEEE Tr. ASSP. 39, 698-713 (1991).
4a. De Vries, B., and Principe, J.C. The gamma model - a new neural model for
temporal processing. Neural Networks 5, 565-576 (1992).
4b. Poddar, P., and Unnikrishnan, K.P. Memory neuron networks: a prolegomenon.
GMR-7493, GM Res. Labs. (1991).
5. Narendra, K.S., and Parthasarathy, K. Gradient methods for the optimization
of dynamical systems containing neural networks. IEEE Tr. NN 2, 252-262 (1991).
F. HARDWARE IMPLEMENTATIONS
1a. Mahowald, M.A., and Mead, C. Silicon retina. In: Analog VLSI and neural
systems (Mead). pp. 257-278 (1989).
1b. Mahowald, M.A., and Douglas, R. A silicon neuron. Nature 354, 515-518
(1991).
2. Mueller, P. et al. Design and fabrication of VLSI components for a
general purpose analog computer. In: Proc. IEEE workshop VLSI neural sys.
(Mead, ed.) pp. xx-xx (1989).
3. Graf, H.P., Jackel, L.D., and Hubbard, W.E. VLSI implementation of
a neural network model. Computer 21, 41-49 (1988).
G. ISSUES ON LEARNING
1. Geman, S., Bienenstock, E., and Doursat, R. Neural networks and the
bias/variance dilemma. Neural Comp. 4, 1-58 (1992).
2. Brown, T.H., Kairiss, E.W., and Keenan, C.L. Hebbian synapses: Biophysical
mechanisms and algorithms. Ann. Rev. Neurosci. 13, 475-511 (1990).
3. Haussler, D. Quantifying inductive bias: AI learning algorithms and
Valiant's learning framework. AI 36, 177-221 (1988).
4. Reeke, G.N. Jr., and Edelman, G.M. Real brains and artificial intelligence.
Daedalus 117, 143-173 (1988).
5. White, H. Learning in artificial neural networks: a statistical
perspective. Neural Comp. 1, 425-464 (1989).
----------------------------------------------------------------------
SUPPLEMENTAL READING
Neher, E., and Sakmann, B. Single channel currents recorded from membrane
of denervated frog muscle fibers. Nature 260, 779-781 (1976).
Rall, W. Core conductor theory and cable properties of neurons. In: Handbook
Physiol. (Brookhart, Mountcastle, and Kandel, eds.) pp. 39-97 (1977).
Shepherd, G.M., and Koch, C. Introduction to synaptic circuits. In: The
synaptic organization of the brain (Shepherd, ed.) pp. 3-31 (1990).
Junge, D. Synaptic transmission. In: Nerve and muscle excitation (Junge)
pp. 149-178 (1981).
Scott, A.C. The electrophysics of a nerve fiber. Rev. Mod. Phy. 47, 487-533
(1975).
Enroth-Cugell, C., and Robson, J.G. The contrast sensitivity of retinal
ganglion cells of the cat. J. Physiol. 187, 517-552 (1966).
Felleman, D.J., and Van Essen, D.C. Distributed hierarchical processing in the
primate cerebral cortex. Cerebral Cortex, 1, 1-47 (1991).
Julesz, B. Early vision and focal attention. Rev. Mod. Phy. 63, 735-772 (1991).
Sejnowski, T.J., Koch, C., and Churchland, P.S. Computational neuroscience.
Science 241, 1299-1302 (1988).
Churchland, P.S., and Sejnowski, T.J. Perspectives on Cognitive Neuroscience.
Science 242, 741-745 (1988).
McCulloch, W.S., and Pitts, W. A logical calculus of ideas immanent in
nervous activity. Bull. Math. Biophy. 5, 115-133 (1943).
Hopfield, J.J. Neural networks and physical systems with emergent
collective computational abilities. PNAS, USA. 79, 2554-2558 (1982).
Hopfield, J.J. Neurons with graded responses have collective computational
properties like those of two-state neurons. PNAS, USA. 81, 3088-3092 (1984).
Hinton, G.E., and Sejnowski, T.J. Optimal perceptual inference. Proc. IEEE
CVPR. 448-453 (1983).
Rumelhart, D.E., Hinton, G.E., and Williams, R.J. Learning representations
by back-propagating errors. Nature 323, 533-536 (1986).
Unnikrishnan, K.P., and Venugopal, K.P. Learning in connectionist networks
using the Alopex algorithm. Proc. IEEE IJCNN. I-926 - I-931 (1992).
Cowan, J.D., and Sharp, D.H. Neural nets. Quart. Rev. Biophys. 21, 365-427
(1988).
Lippmann, R.P. An introduction to computing with neural nets. IEEE ASSP
Mag. 4, 4-22 (1987).
Sompolinsky, H. Statistical mechanics of neural networks. Phy. Today 41, 70-80
(1988).
Hinton, G.E. Connectionist learning procedures. Art. Intel. 40, 185-234 (1989).
------------------------------
End of Neuron Digest [Volume 11 Issue 19]
*****************************************