Neuron Digest   Wednesday, 21 Feb 1990                Volume 6 : Issue 15 

Today's Topics:
A comment on the Searle discussion
More neural consciousness
ART vs. Back-prop vs. Grossberg's prior claims
ANN fault tol.
tech report available
Neural Computation, Vol. 1, No. 4
Abstract from J. Exp. and Theoretical AI vol.2(1)
TR announcement: W. Levelt, Multilayer FF Nets & Turing machines
preprint announcement


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------


Subject: A comment on the Searle discussion
From: "Tom Vogl" <TVP@CU.NIH.GOV>
Date: Tue, 20 Feb 90 08:04:50 -0500

The ongoing discussion of Searle's article brings to mind a quotation
brought back by a time traveler in the mid-1960's that has, apparently,
again been lost. It is, I believe, relevant to the discussion.

"In the 15th century, the question was: 'How many angels
can dance on the head of a pin?'; in the 20th century
the question was: 'Can machines really think?' The
question had not changed, only the wording had been
altered to confuse the innocent."


Historian Horace Haight (2074 - 2183 AD)


------------------------------

Subject: More neural consciousness
From: codelab@psych.purdue.edu (Coding Lab Wasserman)
Date: Tue, 20 Feb 90 14:32:22 -0500


I have been listening to the consciousness debate that has been
provoked by Searle. It seems to me that this debate goes to the core
issue: what important results can be produced by studying neural
networks? Such positions matter and so I wish to offer some comments
intended to illuminate them.

The heart of the disagreement seems to me to be determined by
where the investigator stands with respect to a simple dichotomy: Either
(position A) we already know enough to understand the physical basis of
consciousness or (position B) we still have something essential to learn.

Position A reduces to the view that the brain is a physical
system and that we already know enough about the physical properties of
the brain (i.e., its structure and function) to be able to infer its
fundamental properties. Those inferences make it possible for us to
attempt to create consciousness in an alternative physical medium. All
that stands in our way are some obstacles (such as scale) that are
formidable in practice but tractable in principle. Taking position A
predicates research that has a problem-solving character: How can I use
what I already know about neural networks to construct a network that
will successfully accomplish some interesting task?

Position B grants that the brain is certainly a physical system
and that we already know a lot about it. However, what we know now just
gives a fairly complete and quite satisfactory explanation of mindless
behavior, such as that exhibited by small-brained critters. (Brain size
is, of course, taken relative to body weight.) But large-brained animals
exhibit something else and the larger their brain, the more willing we
are to acknowledge the existence of this additional characteristic
(unless we are radical behaviorists). This something else does not go
away with any small lesion of any part of the brain; such small lesions
only produce alterations in performance. (Note that lesions of the
reticular arousal system do not produce mindless behavior; they merely
produce temporary somnolence and a corresponding absence of behavior.)
However, the something else does go away when large parts of the brain
are damaged, as in terminal cases of senile dementia.

Nothing we now know about the nervous system explains this
additional characteristic of big brains in the way that (say) our
knowledge of the shape of DNA and RNA molecules explains life. Unless
one believes this to be nothing more than a matter of scale, there
remains something to be learned. Taking position B therefore predicates
research that tends to be experimental in nature: How can I learn more
about the organization of the brain?

The great thing about artificial neural networks, from the
position B point of view, is that they provide a platform on which
experiments can readily be mounted which compare one neural organizing
principle with another. By contrast, while work on natural nervous
systems is often quite productive of ideas about the organization of the
natural nervous system, it can be very difficult to determine if these
ideas are valid because so many factors covary in natural nervous
systems. Neural networks make it easy to hold everything constant while
one property of the network is varied. Such work tells one something
truly fundamental.
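
As an illustration of the kind of controlled comparison described above,
here is a minimal Python sketch (my own construction, not from the digest):
two small backprop networks that differ only in the number of hidden units
are trained on the same task, with the data, seed, learning rate, and number
of epochs all held fixed, so any difference in outcome can be attributed to
the one property that was varied.

import numpy as np

def train_xor(n_hidden, seed=0, epochs=5000, lr=0.5):
    rng = np.random.default_rng(seed)            # same seed in both conditions
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                     # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # backprop of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)
    return float(np.mean((out - y) ** 2))        # final error for this condition

# Everything identical except the property under study:
for n_hidden in (2, 8):
    print(n_hidden, "hidden units -> final MSE", train_xor(n_hidden))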

So Searle's argument should not be seen as a counsel of despair
but rather as a prod that should influence network research. There may be
something very important to be learned from experiments on artificial
neural networks that would be overlooked by an exclusive concentration on
successful demonstrations.

Jerry Wasserman
Purdue University




------------------------------

Subject: ART vs. Back-prop vs. Grossberg's prior claims
From: hplabs!bradley!pwh (Pete Hartman)
Date: Mon, 19 Feb 90 20:58:07 -0600

I noticed, while ftp-ing the home site of this digest, that there was some
software based on a book by Rumelhart and McClelland (I think some of the
local faculty have been using the book as a text and have the PC
software...). In my reading of what sources I could get hold of (through
interlibrary loan etc) I've mostly followed the Adaptive Resonance Theory
papers written by Grossberg and Carpenter. In one particular article
they went to great pains to point out how McClelland and Rumelhart (M&R)
were (in an earlier paper) merely retracing steps that Grossberg had
already taken in the early-mid 70's.

Does anyone have any comment about the differences between Adaptive
Resonance and Back Propagation? Not just the (apparent) conflict between
these two "camps" but also technical aspects (pros and cons) of the two
approaches. Are there any other "camps" out there? (forgive the naivete
of this question, but given the large lists of references with each of
the papers I have read, it's been very difficult to sort out which are
necessary articles/papers, and which are simply embellishments)

Perhaps someone could also suggest a good introduction to Adaptive
Resonance? I still struggle with the few details that are in the papers
I've read. (obviously the McClelland & Rumelhart book would be an intro
to Back Propagation...are there any better?)

thanks....
----------------------------------------------------------------------------
                  ...uiucdcs\
Pete Hartman      ......noao >!bradley!pwh
                  ......cepu/

INTERNET: pwh@bradley.edu
ARPA:     cepu!bradley!pwh@seas.ucla.edu


Post Script: All the recent articles about Adaptive Resonance were great.
I especially appreciated the person who suggested starting articles to
read--I really did fall into trouble trying to make sense of Grossberg's
articles without any guidance.

[[ Editor's Note: As Pete explained to me, he made do with what little
literature he could find at the time; there is obviously an enormous body
which he hadn't touched. The question of how a "beginner" should approach
the field still remains acute. I suggest the PDP books (Rumelhart and
McClelland) as still the best comprehensive overviews available, although
they are getting a tiny bit creaky around the edges (gasp, published in
1986!). Back issues of Neuron Digest have other suggestions. -PM ]]



------------------------------

Subject: ANN fault tol.
From: Mike Rudnick <rudnick@cse.ogi.edu>
Date: Tue, 16 Jan 90 09:51:51 -0800

Below is a synopsis of the references/material I received in response to
an earlier request for pointers to work on the fault tolerance of
artificial neural networks. Although there has been some work done
relating directly to ANN models, most of the work appears to have been
motivated by VLSI implementation and fault tolerance concerns.

Apparently, and this is speculation on my part, the folklore that
artificial neural networks are fault tolerant derives mostly from the
fact that they resemble biological neural networks, which generally don't
stop working when a few neurons die here and there.

Although it looks like I'm not going to be doing ANN fault tolerance as
my dissertation topic, I can't help but feel this line of research
contains a number of outstanding Ph.D. topics.

Mike Rudnick                              Computer Science & Eng. Dept.
Domain: rudnick@cse.ogi.edu               Oregon Graduate Institute (was OGC)
UUCP: {tektronix,verdix}!ogicse!rudnick   19600 N.W. von Neumann Dr.
(503) 690-1121 X7390 (or X7309)           Beaverton, OR. 97006-1999

- -----

From: platt@synaptics.com (John Platt)

Well, one of the original papers about building a neural network in
analog VLSI had a chip where about half of the synapses were broken, but
the chip still worked. Look at

``VLSI Architectures for Implementation of Neural Networks'' by Massimo
A. Sivilotti, Michael Emerling, and Carver A. Mead, in ``Neural Networks
for Computing'', AIP Conference Proceedings 151, John S. Denker, ed., pp.
408-413

- -----

From: Jonathan Mills <rutgers!iuvax.cs.indiana.edu!jwmills>

You might be interested in a paper submitted to the 20th Symposium on
Multiple-Valued Logic titled "Lukasiewicz Logic Arrays", describing work
done by M. G. Beavers, C. A. Daffinger and myself. These arrays (LLAs
for short) can be used with other circuit components to fabricate neural
nets, expert systems, fuzzy inference engines, sparse distributed
memories and so forth. They are analog circuits, massively parallel,
based on my work on inference cellular automata, and are inherently
fault-tolerant.

In simulations I have conducted, the LLAs produce increasingly noisy
output as individual processors fail, or as groups of processors randomly
experience stuck-at-one and/or stuck-at-zero faults. While we have much
more work to do, it does appear that with some form of averaging the
output of an LLA can be preserved without noticeable error with up to
one-third of the processors faulty (as long as paths exist from some
inputs to the output). If the absolute value of the output is taken, a
chain of pulses results so that a failing LLA will signal its graceful
degradation.
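
The following is a toy fault-injection experiment in the spirit of what is
described above; it is not Mills' actual LLA design (the number of redundant
units, the fault model, and the use of a plain average are my own
assumptions). Many redundant analog units compute the same Lukasiewicz
implication, a fraction of them are forced stuck-at-0 or stuck-at-1, and the
array output is the average over all units. In this crude model the averaged
output drifts only gradually toward 0.5 as the fault rate grows.

import numpy as np

def lukasiewicz_imp(a, b):
    return np.minimum(1.0, 1.0 - a + b)          # a -> b in Lukasiewicz logic

def array_output(a, b, n_units=64, fault_rate=0.0, rng=None):
    rng = rng or np.random.default_rng(0)
    out = np.full(n_units, lukasiewicz_imp(a, b))
    faulty = rng.random(n_units) < fault_rate    # which units fail
    out[faulty] = rng.integers(0, 2, faulty.sum())   # stuck-at-0 or stuck-at-1
    return out.mean()                            # averaging masks the faults

a, b = 0.7, 0.4
truth = lukasiewicz_imp(a, b)
for rate in (0.0, 0.1, 0.33, 0.5):
    err = abs(array_output(a, b, fault_rate=rate) - truth)
    print("fault rate %.2f -> error %.3f" % (rate, err))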

VLSI implementations of LLAs are described in the paper, with an example
device submitted to MOSIS, and due back in January 1990. We are aware of
the work of Alspector et al. and Graf et al., which is specific to
neural architectures. Our work is more general in that it arises from a
logic with both algebraic and logical semantics, lending the dual
semantics (and its generality) to the resulting device.

LLAs can also be integrated with the receptor circuits of Mead, leading
to a design project here for a single circuit that emulates the first
several levels of the visual system, not simply the retina. This is
almost necessary because I can put over 2,000 processors on a single
chip, but haven't the input pins to drive them! Thus, a chip that uses
fewer processors with the majority of inputs generated on chip is quite
attractive -- especially since even with faults I'll still get a valid
result from the computational part of the device.

Sincerely,

Jonathan Wayne Mills
Assistant Professor
Computer Science Department
Indiana University
Bloomington, Indiana 47405
(812) 331-8533

- -----

From: risto@CS.UCLA.EDU (Risto Miikkulainen)

I did a brief analysis of the fault tolerance of distributed
representations. In short, as more units are removed from the
representation, the performance degrades linearly. This result is
documented in a paper I submitted to Cognitive Science a few days ago:

Risto Miikkulainen and Michael G. Dyer (1989). Natural Language
Processing with Modular Neural Networks and Distributed Lexicon.

Some preliminary results are mentioned in:

@InProceedings{miikkulainen:cmss,
  author    = "Risto Miikkulainen and Michael G. Dyer",
  title     = "Encoding Input/Output Representations in Connectionist
               Cognitive Systems",
  booktitle = "Proceedings of the 1988 Connectionist Models Summer School",
  year      = "1989",
  editor    = "David S. Touretzky and Geoffrey E. Hinton and Terrence J. Sejnowski",
  publisher = KAUF,
  address   = KAUF-ADDR,
}
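
As a concrete illustration of the kind of measurement involved in the
degradation result mentioned above, here is a minimal sketch of graceful
degradation under unit removal. It is my own construction using a simple
Hebbian linear associator, not Miikkulainen and Dyer's model: random pattern
pairs are stored in a distributed weight matrix, a growing fraction of the
input units is lesioned (zeroed), and retrieval accuracy is tracked.

import numpy as np

rng = np.random.default_rng(1)
n_units, n_pairs = 200, 10
keys = rng.choice([-1.0, 1.0], (n_pairs, n_units)) / np.sqrt(n_units)
vals = rng.choice([-1.0, 1.0], (n_pairs, n_units))
W = vals.T @ keys                                # Hebbian outer-product storage

for frac in (0.0, 0.1, 0.3, 0.5):
    dead = rng.random(n_units) < frac            # "lesioned" units
    recalled = (keys * ~dead) @ W.T              # retrieve with damaged keys
    acc = np.mean(np.sign(recalled) == vals)     # fraction of bits recovered
    print("%.0f%% units removed -> %.2f of output bits correct" % (100 * frac, acc))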

- -----

"Implementation of Fault Tolerant Control Algorithms Using Neural
Networks"
, systematix, Inc., Report Number 4007-1000-08-89, August 1989.

- -----

From: kddlab!tokyo-gas.co.jp!hajime@uunet.uu.net

> " A study of high reliable systems
> against electric noises and element failures "

>
> -- Apllication of neural network systems --
>
> ISNCR '89 means "International Symposium on Noise and Clutter Rejection
> in Radars and Image Processing in 1989"
.
> It was held in Kyoto, JAPAN from Nov.13 to Nov.17.

Hajime FURUSAWA JUNET: hajime@tokyo-gas.co.jp
Masayuki KADO JUNET: kado@tokyo-gas.co.jp

Research & Development Institute
Tokyo Gas Co., Ltd.
1-16-25 Shibaura, Minato-Ku
Tokyo 105
JAPAN

- -----

From: <MJ_CARTE@UNHH.BITNET> Mike Carter

"Operational Fault Tolerance of CMAC Networks", NIPS-89, by Mikeael J.
Carter, Frank Rudolph, and Adam Nucci, University of New Hampshire

Mike Carter also says he wrote a non-technical overview of NN fault
tolerance some time ago; it contains references to papers with some
connection to fault tolerance, although only one of them has fault
tolerance as its focus.

- -----

From: Martin Emmerson <mde@ecs.southampton.ac.uk>

I am working on simulating faults in neural networks using a program
running on a Sun (Unix and C).

I am particularly interested in qualitative methods for assessing the
performance of a network, and also in faults that might occur in a real
VLSI implementation.

------------------------------

Subject: tech report available
From: Ajay.Jain@ANJ.BOLTZ.CS.CMU.EDU
Date: Mon, 18 Dec 89 14:17:38 -0500


A CONNECTIONIST ARCHITECTURE FOR SEQUENTIAL SYMBOLIC DOMAINS

Ajay N. Jain
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3890

Technical Report CMU-CS-89-187
December, 1989

Abstract:

This report describes a connectionist architecture specifically
intended for use in sequential domains requiring symbol manipulation.
The architecture is based on a network formalism which differs from
other connectionist networks developed for use in temporal/sequential
domains. Units in this formalism are updated synchronously and retain
partial activation between updates. They produce two output values:
the standard sigmoidal function of the activity and its velocity.
Activation flowing along connections can be gated by units.
Well-behaved symbol buffers which learn complex assignment behavior
can be constructed using gates. Full recurrence is supported. The
network architecture, its underlying formalism, and its performance on
an incremental parsing task requiring non-trivial dynamic behavior are
presented.

This report discusses a connectionist parser built for a smaller task
than was discussed at NIPS.
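
A rough, hypothetical rendering of the unit formalism sketched in the
abstract may make it easier to picture; the retention (decay) constant and
the reading of "velocity" as the per-step change in the sigmoid output are
my own assumptions, not taken from the report.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(act, net_input, decay=0.8):
    """One synchronous update; 'decay' is an assumed retention factor."""
    old_out = sigmoid(act)
    act = decay * act + net_input                # retain partial activation
    out = sigmoid(act)                           # first output value
    velocity = out - old_out                     # second output: its change
    return act, out, velocity

# Gating: a gate unit's output scales the activation flowing on a connection.
act, w, gate_out, source_out = 0.0, 1.5, 0.9, 0.6
act, out, vel = step(act, w * source_out * gate_out)
print(out, vel)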

- ----------------------------------------------------------------------

TO ORDER COPIES of this tech report: send electronic mail to
copetas@cs.cmu.edu, or write the School of Computer Science at the
address above.

Those of you who requested copies of the report at NIPS a couple of
weeks ago need not make a request (your copies are in the mail).


****** Do not use your mailer's "reply" command. ******


------------------------------

Subject: Neural Computation, Vol. 1, No. 4
From: Terry Sejnowski <terry%sdbio2@ucsd.edu>
Date: Sat, 06 Jan 90 17:42:07 -0800

Neural Computation
Volume 1, Number 4

Reviews:

Learning in artificial neural networks
Halbert White

Notes:

Representation properties of networks: Kolmogorov's theorem is irrelevant
Federico Girosi and Tomaso Poggio

Sigmoids distinguish more efficiently than Heavisides
Eduardo D. Sontag

Letters:

How cortical interconnectedness varies with network size
Charles F. Stevens

A canonical microcircuit for neocortex
Rodney J. Douglas, Kevan A. C. Martin, and
David Whitteridge

Synthetic neural circuits using current-domain signal representations
Andreas G. Andreou and Kwabena A. Boahen

Random neural networks with negative and positive signals
and product form solution
Erol Gelenbe

Nonlinear optimization using generalized Hopfield networks
Athanasios G. Tsirukis, Gintaras V. Reklaitis, and
Manoel F. Tenorio

Discrete Synchronous neural algorithm for minimization
Hyuk Lee

Approximation of boolean functions by sigmoidal networks:
Part I: XOR and other two-variable functions
E. K. Blum

Backpropagation applied to handwritten zip code recognition
Y. LeCun, B. Boser, J. S. Denker, D. Henderson,
R. E. Howard, W. Hubbard, and L. D. Jackel

A subgrouping strategy that reduces complexity and
speeds up learning in recurrent networks
David Zipser

Unification as constraint satisfaction in structured connectionist networks
Andreas Stolcke


SUBSCRIPTIONS: This will be the last opportunity to receive all
issues for volume 1. Subscriptions for volume 2 will not receive
back issues for volume 1. Single back issues will, however, be
available for $25 each.

              Volume 1       Volume 2

Student       ______ $35.    ______ $40.
Individual    ______ $45.    ______ $50.
Institution   ______ $90.    ______ $100.

Add $9 for surface-mail postage outside the USA and Canada,
or $17 for air mail.

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
(617) 253-2889.



------------------------------

Subject: Abstract from J. Exp. and Theoretical AI vol.2(1)
From: cfields@NMSU.Edu
Date: Fri, 26 Jan 90 14:02:10 -0700

The following are abstracts of papers appearing in the fourth issue
of the Journal of Experimental and Theoretical Artificial
Intelligence, which appeared in November, 1989. The next issue, 2(1),
will be published in March, 1990.

For submission information, please contact either of the editors:

Eric Dietrich Chris Fields
PACSS - Department of Philosophy Box 30001/3CRL
SUNY Binghamton New Mexico State University
Binghamton, NY 13901 Las Cruces, NM 88003-0001

dietrich@bingvaxu.cc.binghamton.edu cfields@nmsu.edu

JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia

_________________________________________________________________________

Problem solving architecture at the knowledge level.

Jon Sticklen, AI/KBS Group, CPS Department, Michigan State University,
East Lansing, MI 48824, USA

The concept of an identifiable "knowledge level" has proven to be
important by shifting emphasis from purely representational issues to
implementation-free descriptions of problem solving. The knowledge level
proposal enables retrospective analysis of existing problem-solving
agents, but sheds little light on how theories of problem solving can
make predictive statements while remaining aloof from implementation
details. In this report, we discuss the knowledge level architecture, a
proposal which extends the concepts of Newell and which enables
verifiable prediction. The only prerequisite for application of our
approach is that a problem solving agent must be decomposable to the
cooperative actions of a number of more primitive subagents.
Implications for our work are in two areas. First, at the practical
level, our framework provides a means for guiding the development of AI
systems which embody previously-understood problem-solving methods.
Second, at the foundations of AI level, our results provide a focal point
about which a number of pivotal ideas of AI are merged to yield a new
perspective on knowledge-based problem solving. We conclude with a
discussion of how our proposal relates to other threads of current
research.

With commentaries by:

William Clancey: "Commentary on Jon Sticklen's 'Problem solving
architecture at the knowledge level'".

James Hendler: "Below the knowledge level architecture".

Brian Slator: "Decomposing meat: A commentary on Sticklen's 'Problem
solving architecture at the knowledge level'".

and Sticklen's response.

__________________________________________________________________________

Natural language analysis by stochastic optimization: A progress
report on Project APRIL

Geoffrey Sampson, Robin Haigh, and Eric Atwell, Centre for Computer
Analysis of Language and Speech, Department of Linguistics &
Phonetics, University of Leeds, Leeds LS2 9JT, UK.

Parsing techniques based on rules defining grammaticality are difficult
to use with authentic natural-language inputs, which are often
grammatically messy. Instead, the APRIL system seeks a labelled tree
structure which maximizes a numerical measure of conformity to
statistical norms derived from a sample of parsed text. No distinction
between legal and illegal trees arises: any labelled tree has a value.
Because the search space is large and has an irregular geometry, APRIL
seeks the best tree using simulated annealing, a stochastic optimization
technique. Beginning with an arbitrary tree, many randomly-generated
local modifications are considered and adopted or rejected according to
their effect on tree-value: acceptance decisions are made
probabilistically, subject to a bias against adverse moves which is very
weak at the outset but is made to increase as the random walk through the
search space continues. This enables the system to converge on the global
optimum without getting trapped in local optima. An early version of the
APRIL system has been yielding analyses of authentic inputs
with a mean accuracy of 75%, using a schedule which increases processing
linearly with sentence length; modifications currently being implemented
should eliminate many of the remaining errors.
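
For readers unfamiliar with the technique, here is a generic
simulated-annealing skeleton of the kind the abstract describes; the propose
and score functions below are placeholders, not APRIL's actual tree
operations or statistical scoring, and the linear cooling schedule is only an
assumption.

import math, random

def anneal(state, score, propose, steps=10000, t0=1.0, t_min=0.01):
    best = current = state
    for i in range(steps):
        t = max(t_min, t0 * (1 - i / steps))     # bias against bad moves grows as t falls
        candidate = propose(current)
        delta = score(candidate) - score(current)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate                  # probabilistic acceptance
            if score(current) > score(best):
                best = current
    return best

# Toy usage: maximize -(x - 3)^2 by random local moves.
print(anneal(0.0, lambda x: -(x - 3) ** 2,
             lambda x: x + random.uniform(-0.5, 0.5)))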

_________________________________________________________________________

On designing a visual system
(Towards a Gibsonian computational model of vision)

Aaron Sloman, School of Cognitive and Computing Sciences, University
of Sussex, Brighton, BN1 9QN, UK

This paper contrasts the standard (in AI) "modular" theory of the nature
of vision with a more general theory of vision as involving multiple
functions and multiple relationships with other subsystems of an
intelligent system. The modular theory (e.g. as expounded by Marr)
treats vision as entirely, and permanently, concerned with the production
of a limited range of descriptions of visual surfaces, for a central
database; while the "labyrinthine" design allows any output that a visual
system can be trained to associate reliably with features of an optic
array and allows forms of learning that set up new communication
channels. The labyrinthine theory turns out to have much in common with
J. J. Gibson's theory of affordances, while not eschewing information
processing as he did. It also seems to fit better than the modular
theory with neurophysiological evidence of rich interconnectivity within
and between subsystems in the brain. Some of the trade-offs between
different designs are discussed in order to provide a unifying framework
for future empirical investigations and engineering design studies.
However, the paper is more about requirements than detailed designs.



------------------------------

Subject: TR announcement: W. Levelt, Multilayer FF Nets & Turing machines
From: Jeff Elman <elman@amos.ucsd.edu>
Date: Fri, 26 Jan 90 20:37:00 -0800


I am forwarding the following Tech Report announcement on behalf of
Pim Levelt, Max-Planck-Institute for Psycholinguistics. Note that
requests for reprints should go to 'pim@hnympi51.bitnet' -- not to me!

Jeff

--------------------------------------------------------------------

Jeff Elman suggested I announce the existence of the following
new paper:

Willem J.M. Levelt, ARE MULTILAYER FEEDFORWARD NETWORKS EFFECTIVELY
TURING MACHINES?

(Paper for conference on "Domains of Mental Functioning: Attempts at a
synthesis". Center for Interdisciplinary Research, Bielefeld, December
4-8, 1989. Proceedings to be published in Psychological Research, 1990).

Abstract:

Can connectionist networks implement any symbolic computation? That would
be the case if networks have effective Turing machine power. It has been
claimed that a recent mathematical result by Hornik, Stinchcombe and
White on the generative power of multilayer feedforward networks has that
implication. The present paper considers whether that claim is correct.
It is shown that finite approximation measures, as used in Hornik et
al.'s proof, are not adequate for capturing the infinite recursiveness of
recursive functions. Therefore, the result is irrelevant to the issue at
hand.
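
For context, the Hornik, Stinchcombe and White result at issue is (stated
informally, from memory rather than from Levelt's paper) a density theorem:
single-hidden-layer feedforward networks can approximate any continuous
function arbitrarily well, but only on a compact set and only to within some
epsilon:

\[
  \forall f \in C(K),\ \forall \varepsilon > 0,\ \exists\, N,\
  \{\alpha_j, w_j, \theta_j\}_{j=1}^{N}:\quad
  \sup_{x \in K}\Bigl| f(x) - \sum_{j=1}^{N} \alpha_j\,
  \sigma\bigl(w_j^{\top} x + \theta_j\bigr) \Bigr| < \varepsilon ,
\]

where $K \subset \mathbb{R}^{n}$ is compact and $\sigma$ is a fixed squashing
(sigmoidal) function. The guarantee is approximation within epsilon on a
bounded domain, which, as the abstract notes, is a different thing from exact
computation of a recursive function over an unbounded domain of inputs.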

Willem Levelt, Max Planck Institute for Psycholinguistics, Nijmegen,
The Netherlands, e-mail: PIM@HNYMPI51 (on BITNET).



------------------------------

Subject: preprint announcement
From: DENBY%FNAL.BITNET@UICVM.uic.edu
Date: Wed, 31 Jan 90 15:05:00 -0600

Preprint Announcement


NEURAL NETWORKS FOR TRIGGERING
------------------------------

Fermilab-Conf-90/20

(Presented by B. Denby at the 1989 IEEE Nuclear Science
Symposium, San Francisco, California, January 15-19, 1990)

B. Denby                Fermi National Accelerator Lab.
M. Campbell             Univ. of Michigan
F. Bedeschi             INFN Sezione di Pisa
N. Chriss, C. Bowers    Univ. of Chicago
F. Nesti                Scuola Normale Superiore, Pisa

ABSTRACT

Two types of neural network trigger architectures for detecting particles
containing beauty quarks have been simulated in the environment of the
CDF experiment at the Fermilab proton antiproton collider. The triggers
involve identification of electrons within jets in a calorimeter and
recognition of secondary vertices in a microvertex detector,
respectively. The efficiencies obtained for detection of beauty events
and rejection of background are very good. A hardware implementation of
the electron identification architecture will be tested soon. If these
tests are successful the trigger will be installed for evaluation in the
1991 run of the CDF experiment.

------------------------------


End of Neuron Digest [Volume 6 Issue 15]
****************************************
