Neuron Digest   Wednesday, 27 Feb 1991                Volume 7 : Issue 11 

Today's Topics:
Kohonen's Network
Intro and Help
Alopex algorithm and ANNs
New M.A. Course in the U.K.
Student Conference Sponsorships
Limited precision implementations (updated posting)
Special message to SF Bay Area Neural Net Researchers
Looking for Phoneme Data
Postdoc & Research Assistant openings in the COGNITIVE & NEURAL BASES OF LEARNING
summer position
neural net position available
Question regarding Back-propagation Rule........
RE: Transputers for neural networks?


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Kohonen's Network
From: harish@enme.ucalgary.ca (Anandarao Harish)
Date: Tue, 05 Feb 91 15:11:32 -0700


I was wondering whether any of you had some information for me on
Kohonen's network. I am a mechanical engineering student working in the
area of computer-aided manufacturing (specifically, machine cell design)
and am thinking of using Kohonen's network for the problem of machine
cell grouping. I would really appreciate it if you could point me to any
public-domain simulation package you know of (along the lines of the
backpropagation code by Donald R. Tveter).

Thank you,
harish

Harish A. Rao
Dept. of Mechanical Engr.
The University of Calgary
Calgary, Canada
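
(For readers less familiar with the algorithm, here is a minimal sketch
of the Kohonen self-organizing map update, illustrative only; the
variable names and parameters below are arbitrary and not taken from any
particular package.)

import numpy as np

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0):
    """Minimal 1-D Kohonen self-organizing map (illustrative sketch)."""
    rng = np.random.default_rng(0)
    weights = rng.random((n_units, data.shape[1]))     # one weight vector per map unit
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / epochs), 0.5)  # shrinking neighbourhood
        for x in data:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            d = np.abs(np.arange(n_units) - winner)    # distance along the map
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))   # neighbourhood function
            weights += lr * h[:, None] * (x - weights) # pull neighbours toward x
    return weights

# For machine cell grouping, each row of `data` could be a binary
# part/machine incidence vector; parts that map to the same or nearby
# units form candidate cells.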

------------------------------

Subject: Intro and Help
From: JHDCI@canal.crc.uno.edu
Date: Tue, 12 Feb 91 21:38:00 -0600

I am a doctoral student in education at the University of New Orleans and
am planning to write my dissertation on some form of simulation as it
applies to education, e.g., classroom simulation, curriculum simulation,
trend simulation, etc., ad nauseam. I have been avidly reading everything
I can get my hands on concerning neural nets, complexity, genetic
algorithms, cellular automata, emergence, and anything else that appears
nonlinear. There appears to be more than enough literature. I have been
especially impressed with "Genetic Algorithms..." by David Goldberg,
although I am having a little trouble reading the Pascal code (I,
unfortunately, write in QuickBASIC). Chris Langton's writings have also
influenced my thinking and given me a mad desire to create an "Artificial
School."

What I am after is a reasonably simple computer simulation program
(public domain if possible) that can be run on a PC. I would also like
to do more reading on GAs and artificial life. I have already copied the
digests from ALife, Neuron, and Simulator and am plowing through them as
fast as possible. It would also be nice to see some more genetic
algorithm code (even in Pascal, which I can read at about the same speed
as I read Urdu).
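
(For what it is worth, here is a toy generational GA sketched in Python,
my own illustration rather than a translation of Goldberg's Pascal; it
simply maximizes the number of 1s in a bit string.)

import random

def fitness(ind):
    """Toy fitness: count the 1s in the bit string."""
    return sum(ind)

def genetic_algorithm(bits=20, pop_size=30, generations=40,
                      p_cross=0.7, p_mut=0.01):
    """Toy generational GA that maximizes the number of 1s in a bit string."""
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            c1, c2 = p1[:], p2[:]
            if random.random() < p_cross:                  # one-point crossover
                cut = random.randrange(1, bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):                         # bit-flip mutation
                for i in range(bits):
                    if random.random() < p_mut:
                        child[i] = 1 - child[i]
                new_pop.append(child)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

print(genetic_algorithm())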

Any info that any of you could send me, FTP addresses, product names, people
to contact would be greatly appreciated.

Thank you all for your help.

Jack DeGolyer [JHDCI@UNO.EDU]
University of New Orleans

------------------------------

Subject: Alopex algorithm and ANNs
From: venu%sea.oe.fau.EDU@CUNYVM.CUNY.EDU
Date: Wed, 13 Feb 91 22:40:25 -0500

Hi

I am trying to gather information about ongoing work on the Alopex
algorithm as applied to ANNs. I know of some of the work going on at
Syracuse Univ. (under Prof. Harth, who developed it) and at some other
places. We are working on a digital hardware implementation of the
algorithm as well as on some modifications and applications.

I would appreciate it if fellow researchers would respond to this request.

Thanks,

K. P. Venugopal                     venu@sea4.oe.fau.edu
Dept. of Electrical Engineering     Ph. 407-367-2731
Florida Atlantic University
Boca Raton, FL 33431.




------------------------------

Subject: New M.A. Course in the U.K.
From: Andy Clark <andycl@syma.sussex.ac.uk>
Date: Mon, 18 Feb 91 20:25:47 +0000

Dear People, Here is a short ad concerning a new course which may be of
interest to you or your students.

NOTICE OF NEW M.A.COURSE BEGINNING OCT. 1991

UNIVERSITY OF SUSSEX, BRIGHTON, ENGLAND
SCHOOL OF COGNITIVE AND COMPUTING SCIENCES

M.A. in the PHILOSOPHY OF COGNITIVE SCIENCE

This is a one-year taught course which examines issues relating to
computational models of mind. A specific focus concerns the significance
of connectionist models and the role of rules and symbolic representation
in cognitive science. Students combine work towards a 20,000-word
philosophy dissertation with subsidiary courses introducing aspects of
A.I. and the other cognitive sciences. For information about this new
course, contact Dr Andy Clark, School of Cognitive and Computing Sciences,
University of Sussex, Brighton, BN1 9QH, U.K. E-mail:
andycl@uk.ac.sussex.syma

------------------------------

Subject: Student Conference Sponsorships
From: issnnet@park.bu.edu (Student Society Account)
Date: Fri, 22 Feb 91 12:54:24 -0500


---- THIS NOTE CONTAINS MATERIAL OF INTEREST TO ALL STUDENTS ----
(and some non-students)

This message is a brief update of the International Student Society for
Neural Networks (ISSNNet), and also an ANNOUNCEMENT OF AVAILABLE STUDENT
SPONSORSHIP AT UPCOMING NNets CONFERENCES.

----------------------------------------------------------------------
NOTE TO ISSNNet MEMBERS: If you have joined the society but did not
receive our second newsletter, or have not heard from us in the recent
past, please send e-mail to <issnnet@park.bu.edu>.
----------------------------------------------------------------------

1) We had a problem with our Post Office Box (in the USA), which was
inadvertently shut down for about one month between Christmas and the end
of January. If you sent us surface mail which was returned to you, please
send it again. Our apologies for the inconvenience.

2) We are about to distribute the third newsletter. This is a special
issue that includes our bylaws, a description of the various officer
positions, and a call for nominations for upcoming elections. In
addition, a complete membership list will be included. ISSNNet MEMBERS:
If you have not sent us a note about your interests in NNets, you must do
so by about the end of next week to ensure it appears in the membership
list. Also, all governors should send us a list of their members if they
have not done so already.

3) STUDENT SPONSORSHIPS AVAILABLE: We have been in contact with the IEEE
Neural Networks Council. We donated $500 (about half of our savings to
this point) to pay the registration for students who are presenting
articles (posters or oral presentations) at the Helsinki (ICANN), Seattle
(IJCNN), or Singapore (IJCNN) conferences. The IEEE Neural Networks
Council has augmented our donation with an additional $5,000 to be
distributed between the two IJCNN conferences, and we are currently in
contact with the International Neural Networks Society (INNS) regarding a
similar donation. We are hoping to pay for registration and proceedings
for an approximately equal number of students at each of the three
conferences. Depending on other donations and on how many people are
eligible, only a limited number of sponsorships may be available.

The details of eligibility will be officially published in our next
newsletter and on other mailing lists, but generally you will need to be
the person presenting the paper (if co-authored), and you must receive
only partial or no support from your department. Forms will be included
with the IJCNN paper acceptance notifications. It is not necessary to be
a member of any of these societies, although we hope this will encourage
future student support and increased society membership.

IF YOU HAVE SUBMITTED A PAPER TO ONE OF THE IJCNN CONFERENCES, YOU
WILL RECEIVE DETAILS WITH YOUR NOTIFICATION FROM IJCNN. Details on the
ICANN conference will be made available in the near future. For other
questions send us some e-mail <issnnet@park.bu.edu>.

ISSNNet, Inc.
PO Box 557, New Town Branch
Boston, MA 02258

Sponsors will be officially recognized in our future newsletters, and
will be mentioned by the sponsored student during the presentations and
posters.

6) We are considering the possibility of including abstracts from papers
written by ISSNNet members in future newsletters. Depending on how many
ISSNNet papers are accepted at the three conferences, we may be able to
publish the abstracts in the fourth newsletter, which should come out
before the ICANN conference. This would give the presenters some
additional publicity, and would give ISSNNet members a sneak preview of
what other people are doing.

MORE DETAIL ON THESE TOPICS WILL BE INCLUDED IN OUR NEXT NEWSLETTER,
WHICH WE EXPECT TO PUBLISH AROUND THE END OF THIS MONTH.

For more details on ISSNNet, or to receive a sample newsletter, send
e-mail to <issnnet@park.bu.edu>. You need not be a student to become a
member!

------------------------------

Subject: Limited precision implementations (updated posting)
From: MURRE%rulfsw.LeidenUniv.nl@BITNET.CC.CMU.EDU
Date: Mon, 25 Feb 91 16:57:00 +0700

Connectionist researchers,

Here is an updated posting on limited-precision implementations of
neural networks. It is my impression that research in this area is still
fragmentary. This is surprising, because the literature on analog and
digital implementations is growing very fast. There is a wide range of
possibly applicable rules of thumb. Claims about sufficient precision
range from a single bit to 20 bits or more for certain models. Hard
problems may need higher precision. There may be a trade-off between few
weights (nodes) with high-precision weights (activations) versus many
weights (nodes) with low-precision weights (activations). The precise
relation between precision in weights and activations remains unclear, as
does the relation between the effect of precision on learning and on
recall.

Thanks for all comments so far.

Jaap


Jacob M.J. Murre
Unit of Experimental and Theoretical Psychology
Leiden University
P.O. Box 9555
2300 RB Leiden
The Netherlands


General comments by researchers

By Soheil Shams:

As far as the required precision for neural computation is concerned,
the precision needed is directly proportional to the difficulty of the
problem you are trying to solve. For example, in training a
back-propagation network to discriminate between two very similar
classes of inputs, you will need high-precision values and arithmetic to
effectively find the narrow region of the space in which the separating
hyperplane has to be drawn. I believe that the lack of analytical
information in this area is due to this relationship between the
specific application and the required precision. At the NIPS90 workshop
on massively parallel implementations, some people indicated they have
determined, EMPIRICALLY, that for most problems 16-bit precision is
required for learning and 8-bit precision for recall with
back-propagation.
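
(As a concrete illustration of what such a bit budget means, here is a
small sketch of my own, not taken from any of the works cited below: a
weight matrix can be simulated at b-bit fixed-point resolution by
rounding it to a uniform grid.)

import numpy as np

def quantize(w, bits, w_max=1.0):
    """Round values to a signed fixed-point grid with `bits` bits of resolution."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable magnitudes for 8 bits
    step = w_max / levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)

# e.g. simulate 8-bit recall weights and 16-bit training weights:
#   W8  = quantize(W, 8)           # used in the forward pass at recall time
#   W16 = quantize(W + dW, 16)     # accumulated during learning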

By Roni Rosenfeld:

Santosh Venkatesh (of Penn State, I believe, or is it U. Penn?) did
some work a few years ago on how many bits are needed per weight. The
surprising result was that 1 bit/weight did most of the work, with
additional bits contributing surprisingly little.

By Thomas Baker:

... We have found that for backprop learning, between twelve and
sixteen bits are needed. I have seen several other papers with the same
results. After learning, we have been able to reduce the weights to four
to eight bits with no loss in network performance. I have also seen
others report similar results. One method that optical and analog
engineers use is to compute the error by running the feedforward
calculations with limited precision, while learning the weights at a
higher precision; the weights are quantized and updated during training.

I am currently collecting a bibliography on limited precision papers.
... I will try to keep in touch with others that are doing research in
this area.
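
(A rough sketch of the scheme Thomas Baker describes, again my own
illustration rather than anyone's actual implementation; it reuses the
quantize() helper sketched above for a single tanh layer trained with
the delta rule.)

import numpy as np

def train_step_mixed_precision(W_hi, x, target, lr=0.1,
                               forward_bits=8, weight_bits=16):
    """One update: low-precision forward pass, higher-precision weight store."""
    W_lo = quantize(W_hi, forward_bits)           # limited-precision copy for the forward pass
    y = np.tanh(W_lo @ x)                         # error is computed from this output
    error = target - y
    dW = lr * np.outer(error * (1 - y ** 2), x)   # delta-rule gradient through tanh
    return quantize(W_hi + dW, weight_bits)       # weights kept at higher precision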



References

Brause, R. (1988). Pattern recognition and fault tolerance in non-linear
neural networks. Biological Cybernetics, 58, 129-139.

Hollis, P.W., J.S. Harper, J.J. Paulos (1990). The effects of precision
constraints in a backpropagation learning network. Neural Computation,
2, 363-373.

Holt, J.L., & J-N. Hwang (in prep.). Finite precision error analysis of
neural network hardware implementations. Univ. of Washington, FT-10,
WA 98195.
(Comments by the authors:
We are in the process of finishing up a paper which gives a theoretical
(systematic) derivation of finite precision neural network computation.
The idea is a nonlinear extension of the "general compound operators"
widely used for error analysis of linear computation. We derive several
mathematical formulas for both the retrieving and the learning phases of
neural networks. The finite precision error in the retrieving phase can
be written as a function of several parameters, e.g., number of bits for
the weights, number of bits for multiplication and accumulation, size of
the nonlinear table look-up, truncation/rounding or jamming approaches,
etc. We are then able to extend this retrieving-phase error analysis to
iterative learning to predict the necessary number of bits. This can be
shown using a ratio between the finite precision error and the (floating
point) back-propagated error. Simulations have been conducted and match
the theoretical predictions quite well.)

Hong, J. (1987). On connectionist models. Tech. Rep., Dept. Comp. Sci.,
Univ. of Chicago, May 1987.
(Demonstrates that a network of perceptrons needs only finite-precision
weights.)

Jou, J., & J.A. Abraham (1986). Fault-tolerant matrix arithmetic and
signal processing on highly concurrent computing structures. Proceedings
of the IEEE, 74, 732-741.

Kampf, F., P. Koch, K. Roy, M. Sullivan, Z. Delalic, & S. DasGupta
(1989). Digital implementation of a neural network. Tech. Rep. Temple
Univ., Philadelphia PA, Elec. Eng. Div.

Moore, W.R. (1988). Conventional fault-tolerance and neural computers.
In: R. Eckmiller, & C. Von der Malsburg (Eds.). Neural Computers. NATO
ASI Series, F41, (Berlin: Springer-Verlag), 29-37.

Nadal, J.P. (1990). On the storage capacity with sign-constrained
synaptic couplings. Network, 1, 463-466.

Nijhuis, J., & L. Spaanenburg (1989). Fault tolerance of neural
associative memories. IEE Proceedings, 136, 389-394.

Rao, A., M.R. Walker, L.T. Clark, & L.A. Akers (1989). Integrated
circuit emulation of ART networks. Proc. First IEEE Conf. Artificial
Neural Networks, 37-41, Institution of Electrical Engineers, London.

Rao, A., M.R. Walker, L.T. Clark, L.A. Akers, & R.O. Grondin (1990).
VLSI implementation of neural classifiers. Neural Computation, 2, 35-43.
(The paper by Rao et al. gives an equation for the number of bits of
resolution required for the bottom-up weights in ART 1:

t = (3 log N) / log(2)  [i.e., t = 3 log2(N)],

where N is the size of the F1 layer in nodes.)
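
(For illustration, my own arithmetic rather than the authors': an F1
layer of N = 256 nodes gives t = 3 x 8 = 24 bits, and N = 1024 nodes
gives t = 3 x 10 = 30 bits of bottom-up weight resolution.)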


------------------------------

Subject: Special message to SF Bay Area Neural Net Researchers
From: "Andras Pellionisz" <pellioni@pioneer.arc.nasa.gov>
Date: Mon, 25 Feb 91 09:51:01 -0800

"Special message to Neural Net researchers in the San Francisco Region:

The San Francisco Bay Area Special Interest Group of the International
Neural Network Society (SF-SIGINNS) has been established. If interested
in free enrollment, please contact Dr. Pellionisz with your name, mailing
and/or E-mail address.

Andras J. Pellionisz can be reached at:
NASA Ames Research Center
Neurocomputer Laboratory, Bldg. 261-3
Moffett Field, CA 94035-1000
Voice/messages: (415) 604-4821
Fax: (415) 604-0046
E-mail: pellioni@pioneer.arc.nasa.gov

Please join in!
A. Pellionisz"


------------------------------

Subject: Looking for Phoneme Data
From: gunhan@otis.hssc.scarolina.edu (The Young Turk)
Date: Mon, 25 Feb 91 13:40:54 -0500

I am looking for speech data that I can use as input to a phoneme
recognition neural network. I am working on alternative neural network
models with improved training times, and I need to test these algorithms
on the speech data used with traditional implementations of neural
network based speech recognition packages. Any information on where and
how to get such speech input data would be greatly appreciated. Thanks.

Gunhan H. Tatman
Computer Engineering Dept.
The University of South Carolina
Columbia, SC 29201

e-mail: gunhan@otis.hssc.scarolina.edu
(gunhan@129.252.1.2)


------------------------------

Subject: Postdoc & Research Assistant openings in the COGNITIVE & NEURAL BASES OF LEARNING (Rutgers, NJ)
From: gluck%psych@Forsythe.Stanford.EDU (Mark Gluck)
Date: Mon, 25 Feb 91 11:03:40 -0800

Postdoctoral & Research/Programming Positions in:

THE COGNITIVE & NEURAL BASES OF LEARNING
----------------------------------------------------------------------------
Rutgers University
Center for Molecular & Behavioral Neuroscience
195 University Avenue
Newark, NJ 07102

Postdoctoral Positions in:
--------------------------
1. EMPIRICAL STUDIES OF HUMAN LEARNING:
Including: designing and conducting studies
of human learning and decision making,
especially categorization learning. These
are primarily motivated by a desire to
evaluate and refine adaptive network models
of learning and memory (see, e.g., the
experimental studies described in Gluck
& Bower, 1988a; Pavel, Gluck, & Henkle, 1988).
This work requires a familiarity with psychological
methods of experimental design and data
analysis.

2. COMPUTATIONAL MODELS OF ANIMAL & HUMAN LEARNING:
Including: developing and extending current network
models of learning to more accurately reflect a
wider range of animal and human learning
behaviors. This work requires
strong programming skills, familiarity with
adaptive network theories and methods, and
some degree of mathematical and analytic
training.

3. COMPUTATIONAL MODELS OF THE NEUROBIOLOGY OF LEARNING & MEMORY:
Including: (1) Models and theories of the neural
bases of classical and operant conditioning;
(2) Neural mechanisms for human associative learning;
(3) Theoretical studies which seek to
form links, behavioral or biological,
between animal and human learning (see, e.g.,
Gluck, Reifsnider, & Thompson (1989), in Gluck &
Rumelhart (Eds.), Neuroscience and Connectionist Theory).


Full or Part-Time Research & Programming Positions:
---------------------------------------------------
These positions are ideal for someone who has just graduated with an
undergraduate degree and would like a year or two of "hands on"
experience in research before applying to graduate school in one of the
cognitive sciences (e.g., neuroscience, psychology, computer science). We
are looking for two types of people: 1) a RESEARCH PROGRAMMER with strong
computational skills (especially with C/Unix and SUN systems) and
experience with PDP models and theory, and (2) an EXPERIMENTAL RESEARCH
ASSISTANT to assist in running and designing human learning experiments.
Some research experience is required (familiarity with Apple Macs a plus).

Application Procedure:
----------------------
For more information on learning research at the CMBN/Rutgers or to
apply for these positions, please send a cover letter with a statement of
your research interests, a CV, copies of relevant preprints, and the
names & phone numbers of references to:

Dr. Mark A. Gluck                               Phone: (415) 725-2434
Dept. of Psychology  <-[Current address to 4/91]  FAX: (415) 725-5699
Jordan Hall; Bldg. 420
Stanford University                             email: gluck@psych.stanford.edu
Stanford, CA 94305-2130


------------------------------

Subject: summer position
From: giles@fuzzy.nec.com (Lee Giles)
Date: Mon, 25 Feb 91 14:08:40 -0500


NEC Research Institute in Princeton, N.J. has available a 3 month summer
research and programming position. The research emphasis will be on
exploring the computational capabilities of recurrent neural networks.
The successful candidate will have a background in neural networks and
strong programming skills in the C/Unix environment. Computer science
background preferred. Interested applicants should send their resumes by
mail, fax, or email to the address below.

The application deadline is March 25, 1991. Applicants must show
documentation of eligibility for employment. Because this is a summer
position, the only expenses to be paid will be salary. NEC is an equal
opportunity employer.

C. Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
USA

Internet: giles@research.nj.nec.com
UUCP: princeton!nec!giles
PHONE: (609) 951-2642
FAX: (609) 951-2482




------------------------------

Subject: neural net position available
From: Andras Pellionisz -- SL <pellioni@pioneer.arc.nasa.gov>
Date: Mon, 25 Feb 91 11:44:46 -0800



Neural Network Research Position Available Effective March 1, 1991

Place: Nuclear Science Division
Lawrence Berkeley Laboratory

Area: Neural Network Computing with Application to
Complex Pattern Recognition Problems
in High Energy and Nuclear Physics



Description:

Experiments in high energy and nuclear physics are confronted with
increasingly difficult pattern recognition problems, for example tracking
charged particles and identifying jets in very high multiplicity and
noisy environments. In 1990, a generic R&D program was initiated at LBL
to develop new computational strategies to address such problems. The
emphasis is on developing and testing artificial neural network
algorithms for applications to experimental physics. Last year we
developed a new Elastic Network type tracking algorithm that is able to
track at densities an order of magnitude higher than conventional Road
Finding algorithms and even Hopfield Network type algorithms.

This year we plan a number of follow-up studies and extensions of that
work, as well as beginning research on jet-finding algorithms. Jets are formed
through the fragmentation of high energy quarks and gluons in rare
processes in high energy collisions of hadrons or nuclei. The problem of
identifying such jets via calorimetric or tracking detectors is greatly
complicated by the very high multiplicity of fragments produced via other
processes. The research will involve developing new preprocessing
strategies and network architectures to be trained by simulated Monte
Carlo data.


Required Qualifications: General understanding of basic neural computing
algorithms, such as multilayer feedforward and recurrent nets, and of a
variety of training algorithms. Experience and proficiency in programming
in Fortran and C on a variety of systems (VAX/VMS and/or Sparc/UNIX).
Physics background preferred.


For more information and applications please contact:

Miklos Gyulassy
Mailstop 70A-3307
Lawrence Berkeley Laboratory
Berkeley, CA 94720

E-mail: GYULASSY@LBL.Bitnet

Telephone: (415) 486-5239


------------------------------

Subject: Question regarding Back-propagation Rule........
From: ELEE6NY@jetson.uh.edu
Date: Mon, 25 Feb 91 19:00:00 -0600

Dear Sir,

I am a graduate student at the University of Houston working on neural
network applications, in particular on some recurrent networks. I am
using the Rochester Connectionist Simulator (RCS) for the net, but the
version of the simulator I have does not include a recurrent
backpropagation algorithm, so I am having trouble applying the
backpropagation rule to this kind of recurrent net. I would appreciate a
modified backpropagation algorithm that handles recurrent nets, and I
would also welcome any suggestions from you. I look forward to your
reply. Thanks in advance.

pratish
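
(One common workaround, sketched below in illustrative Python rather
than RCS code, is backpropagation through time: unroll the recurrent net
over a few time steps and then apply ordinary backpropagation to the
unrolled network. For simplicity the hidden state itself is treated as
the output, and all names are arbitrary.)

import numpy as np

def bptt_step(Wx, Wh, xs, targets, lr=0.1):
    """One pass of backpropagation through time for a tanh recurrent layer.
    xs, targets: lists of input / target vectors over T time steps."""
    T = len(xs)
    h = [np.zeros(Wh.shape[0])]
    for t in range(T):                              # forward: unroll the recurrence
        h.append(np.tanh(Wx @ xs[t] + Wh @ h[-1]))
    dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
    dh_next = np.zeros(Wh.shape[0])
    for t in reversed(range(T)):                    # backward through the unrolled net
        dh = (h[t + 1] - targets[t]) + dh_next      # local error + error from later steps
        dz = dh * (1 - h[t + 1] ** 2)               # through the tanh nonlinearity
        dWx += np.outer(dz, xs[t])
        dWh += np.outer(dz, h[t])
        dh_next = Wh.T @ dz                         # error passed to the previous step
    return Wx - lr * dWx, Wh - lr * dWh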

------------------------------

Subject: RE: Transputers for neural networks?
From: rolf@pallas.neuroinformatik.ruhr-uni-bochum.de (ROLF WUERTZ)
Date: Wed, 27 Feb 91 17:16:09 +0100

Dear Tom, let me answer your questions briefly and use the opportunity to
advertise our work a little bit.

We have set up a system that is capable of recognizing human faces from
video camera images. It can handle a fair range of realistic conditions.
For more details I refer you to our publications, copies of which I will
gladly supply.

The system as it currently stands runs on 23 T800 transputers, one of
which serves as a special interface to a video camera and a graphics
display. The configuration is hardwired as a tree.

The application is completely written in occam under the multitool
environment (a slightly modified TDS by PARSYTEC/Aachen). We have
developed three pieces of generally useful software:

1) an efficient farming system that will support any tree of
   transputers;
2) an implementation of remote procedure calls for the use of host devices
   (keyboard, screen, clock, files, etc.) from a remote transputer;
3) a set of assembler routines to speed up pieces of code that are crucial
   for the scalability of the tree size.

The goal of the system is not so much practical or commercial use as the
illustration of a new neural network paradigm called the "Dynamic Link
Architecture" (DLA), developed by Christoph von der Malsburg. This
concept proposes correlation-based learning on a very short time scale as
a solution to conceptual problems of neural networks such as binding and
higher-order objects. Our simulations, therefore, fall in the category
"something more dynamic". Again, I will provide further details on
request.

Parallelism is handled in the simplest way possible. In the recognition
algorithm, incoming data is matched to a stored object by a random
process which models the above-mentioned dynamics. All network dynamics
happen on a single transputer, but in parallel for the persons stored in
the database, so each transputer matches the incoming image to one stored
person at a time. Implementation details are discussed in
\cite{bonas,kosko}.
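
(In spirit, and much simplified, the farming scheme amounts to the
following sketch, written here in Python rather than occam; each worker
plays the role of one transputer holding one stored person, and the toy
match_score() stands in for the stochastic graph-matching dynamics.)

import numpy as np
from multiprocessing import Pool

def match_score(args):
    """Toy stand-in for matching the incoming image to one stored model."""
    image_features, stored_features = args
    return float(np.sum((image_features - stored_features) ** 2))

def recognize(image_features, gallery):
    """Farm the per-person matches out to workers and return the best match."""
    with Pool() as pool:
        scores = pool.map(match_score,
                          [(image_features, g) for g in gallery])
    best = int(np.argmin(scores))
    return best, scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = [rng.random(64) for _ in range(86)]   # 86 stored persons, as in the database below
    probe = gallery[42] + 0.01 * rng.random(64)
    print(recognize(probe, gallery))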

We do like the performance. The latest figures are around 30 secs for
recognizing a person out of a database of 86 with acceptable reliability.
For details on computation time, see \cite{icnc90,kosko}. For a detailed
analysis of the reliability of the recognition see \cite{transcomp}.

No fine-grained parallel system maps nicely onto a coarse-grained MIMD
structure, so we use the transputers simply because they are fast or,
rather, because of their price/performance ratio. However, we appreciate
the didactic lessons about parallelism in general.

No sales, no price, no clients.

The following are the relevant publications in BibTeX format.
\cite{icnc90} is very brief, but it is currently available in print. The
others are expected to become available within 1991.

@incollection{kosko,
  author="J. Buhmann and J. Lange and C. {v.\,d.\,}Malsburg
          and J. C. Vorbr{\"u}ggen and R. P. W{\"u}rtz",
  editor="B. Kosko",
  booktitle="Neural Networks: A Dynamical Systems Approach to
             Machine Intelligence",
  title="Object Recognition in the Dynamic Link Architecture ---
         Parallel Implementation on a Transputer Network",
  year="1990", note="In print",
  publisher="Prentice Hall, New York"}

@incollection{icnc90,
  author="R. P. W{\"u}rtz and J. C. Vorbr{\"u}ggen and C. {v.\,d.\,}Malsburg",
  editor="R. Eckmiller and G. Hartmann and G. Hauske",
  booktitle="Parallel Processing in Neural Systems and Computers",
  title="A Transputer System for the Recognition of Human Faces by
         Labeled Graph Matching",
  pages="37--41",
  year="1990",
  publisher="North Holland, Amsterdam"}

@inproceedings{bonas,
  author={Rolf P. W{\"u}rtz and Jan C. Vorbr{\"u}ggen and
          Christoph von der Malsburg and J{\"o}rg Lange},
  title={A Transputer-based Neural Object Recognition System},
  year=1990,
  booktitle={From Pixels to Features II -- Parallelism in Image Processing},
  publisher={Elsevier},
  editor={H. Burkhardt and J.C. Simon} }

@unpublished{transcomp,
  author={Martin Lades and Jan C. Vorbr{\"u}ggen and Joachim Buhmann and
          Wolfgang Konen and Christoph v.d. Malsburg and Rolf P. W{\"u}rtz},
  title={Distortion Invariant Object Recognition in the Dynamic Link
         Architecture},
  year=1991,
  note={Submitted to IEEE Transactions on Computers}}

Rolf P. W"urtz                          rolf@bun.neuroinformatik.ruhr-uni-bochum.de
Institut f"ur Neuroinformatik, ND03-33  phone: +49 234 700-7996 or
Ruhr-Universit"at Bochum                             -7997 (dept. secr.)
Postfach 102148                         fax:   +49 234 700-7995
D-4630 Bochum 1, Germany

------------------------------

End of Neuron Digest [Volume 7 Issue 11]
****************************************
