
Neuron Digest   Saturday,  3 Mar 1990                Volume 6 : Issue 18 

Today's Topics:
What is Dr. Stephen Gallant's method
Dr. Stephen Gallant's method -- THE ANSWER IS...
Chaos in the brain
Computational Metabolism on a Connection Machine and Other Stories...
Re: N.N. & CW!
Re: Boltzmann v. Cauchy
Fujitsu "Super Computer"
Logic Based Systems Using Neural Nets
Two positions
Announcement: Workshop on Adapt. Neural Nets & Pattern Recog.
NN conference Indiana_Purdue Ft Wayne


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: What is Dr. Stephen Gallant's method
From: giant@lindy.Stanford.EDU (Buc Richards)
Organization: Stanford University
Date: 05 Jan 90 01:25:58 +0000


I've read that Dr. Stephen Gallant has developed a technique for deriving
rules from neural networks. Does anyone know what this technique is or
have references to papers that describe it? Also, is it known whether he will
be presenting at IJCNN in Washington, DC?

Thanks.

Buc Richards @ @
Supercomputer Support Staff >
Stanford University -

------------------------------

Subject: Dr. Stephen Gallant's method -- THE ANSWER IS...
From: giant@lindy.Stanford.EDU (Buc Richards)
Organization: Stanford University
Date: 25 Jan 90 23:34:00 +0000

The only reference for the method that I received is the following:

"Connectionist Expert Systems"
in Communications of the ACM
February 1988 - Volume 31 - number 2
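
[[ Editor's note: For readers who cannot get at the CACM paper right away:
it describes expert systems built from linear discriminant (threshold)
cells, where conclusions can be drawn even before all inputs are known.
A rough Python sketch of that early-conclusion idea (my illustration, not
Gallant's code; the two-of-three toy cell is made up):

    def conclude(weights, bias, inputs):
        """inputs holds +1 (true), -1 (false) or None (unknown)."""
        known = sum(w * x for w, x in zip(weights, inputs) if x is not None)
        slack = sum(abs(w) for w, x in zip(weights, inputs) if x is None)
        total = known + bias
        if total - slack > 0:
            return +1      # positive no matter how the unknowns turn out
        if total + slack < 0:
            return -1      # negative no matter how the unknowns turn out
        return None        # undecided: must ask for another input

    # Toy cell that fires when at least two of three findings hold:
    print(conclude([1.0, 1.0, 1.0], -0.5, [+1, +1, None]))    # +1, early
    print(conclude([1.0, 1.0, 1.0], -0.5, [+1, None, None]))  # None
]]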

------------------------------

Subject: Chaos in the brain
From: ssingh@watserv1.waterloo.edu ($anjay "lock-on" $ingh - Indy Studies)
Organization: U of Waterloo, Ontario
Date: 15 Jan 90 16:19:08 +0000


Here are some references to chaos-theoretic descriptions of the
brain. They are from Neural and Brain Modelling. I also have the Fortran
files for the simulation programs contained therein; they are the correct
ones (someone mentioned a few months ago that the book contained errors). I
hope this proves useful to some of you, given the recent talk about chaos
and brains.

Chay, T.R. Abnormal discharges and chaos in a neuronal model system.
_Biological Cybernetics_. 50, 301-311

Abstract: Using the mathematical model of the pacemaker neuron formulated
by Chay, we have investigated the conditions in which a neuron can
generate chaotic signals in response to variation in temperature, ionic
compositions, chemicals, and the strength of the depolarizing current.

Choi, M.Y., and Huberman, B.A. Dynamic Behaviour of nonlinear networks.
_Phys. Rev. A_. 28, 1204-1206.

Abstract: We study the global dynamics of nonlinear networks made up of
synchronous threshold elements. By writing a master equation for the
system, we obtain an expression for the time dependence of its activity
as a function of parameter values. We show that with both excitatory and
inhibitory couplings, a network can display collective behaviour which
can be either multiply periodic or deterministically chaotic, a result that
appears to be quite general.

Grondin, R.O., et al. Synchronous and Asynchronous Systems of Threshold
Elements. _Biological Cybernetics_. 49, 1-7.

Abstract: The role of synchronism in systems of threshold elements (such
as neural networks) is examined. Some important differences between
synchronous and asynchronous systems are outlined. In particular,
important restrictions on limit cycles are found in asynchronous systems
along with multi-frequency oscillations which do not appear in
synchronous systems. The possible role of deterministic chaos in these
systems is discussed.

Guevara, M.R., Glass, L., Mackey, M.C., Shrier, A. Chaos in Neurobiology.
_IEEE Transactions on Systems, Man, and Cybernetics_. 13, 790-798.

Abstract: Deterministic mathematical models can give rise to complex
aperiodic ("chaotic") dynamics in the abscence of stochastic fluctuations
("noise") in the variables or parameters of the model or in the inputs to
the system. We show that chaotic dynamics are expected in nonlinear
feedback systems possessing time delays such as are found in recurrent
inhibition and from the periodic forcing of neural oscillators. The
implications of the possible occurrence of chaotic dynamics for
experimental work and mathematical modelling of normal and abnormal
function in neurophysiology are mentioned.
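
[[ Editor's note: The delayed-feedback chaos this abstract mentions is easy
to see numerically. A hedged sketch (mine, not from the paper) using the
classic Mackey-Glass delay equation, dx/dt = b*x(t-tau)/(1+x(t-tau)^10) -
g*x, which becomes chaotic for tau around 17 and above:

    def mackey_glass(b=0.2, g=0.1, n=10, tau=17.0, dt=0.1, steps=5000):
        delay = int(tau / dt)
        x = [1.2] * (delay + 1)        # constant history as initial condition
        for _ in range(steps):         # simple Euler integration
            xd = x[-delay - 1]         # the delayed value x(t - tau)
            x.append(x[-1] + dt * (b * xd / (1.0 + xd ** n) - g * x[-1]))
        return x

    series = mackey_glass()
    print(min(series), max(series))    # bounded but aperiodic orbit
]]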

Holden, A.V., Winlow, W., and Hayden, P.G. The Induction of Periodic
and Chaotic Activity in a Molluscan Neurone. _Biological Cybernetics_.
43, 169-173.

Abstract: During prolonged exposure to extracellular 4-aminopyridine
(4AP) the periodic activity of the somatic membrane of an identified
molluscan neurone passes from a repetitive regular discharge of >90 mV
amplitude action potentials, through double discharges to <50 mV
amplitude oscillations. Return to standard saline causes the growth of
parabolic amplitude-modulated oscillations that develop, through chaotic
amplitude-modulated oscillations, into regular oscillations. These
effects are interpreted in terms of the actions of 4AP on the dynamics of
the membrane excitation equations.


$anjay "lock-on" $ingh
ssingh@watserv1.waterloo.edu

"A modern-day warrior, mean mean stride, today's Tom Sawyer, mean mean pride."

------------------------------

Subject: Computational Metabolism on a Connection Machine and Other Stories...
From: marek@iuvax.cs.indiana.edu (Marek Lugowski)
Date: 23 Jan 90 16:23:27 +0000

[[ Editor's note: This is a bit afield, and certainly past the
presentation date, but I thought readers might be interested in one of
the connectionist analogs for a different purpose. -PM ]]

Indiana University Computer Science Departmental Colloquium

Computational Metabolism on a Connection Machine and Other Stories...
---------------------------------------------------------------------
Elisabeth M. Freeman, Eric T. Freeman & Marek W. Lugowski
graduate students, Computer Science Department
Indiana University

Wednesday, 31 January 1990, 7:30 p.m.
Ballantine Hall 228
Indiana University campus, Bloomington, Indiana


This is work in progress, to be shown at the Artificial Life Workshop II,
Santa Fe, February 5-9, 1990. The Connection Machine (CM) is a supercomputer
for massive parallelism; Computational Metabolism (ComMet) is such a computation.
ComMet is a tiling whose tiles swap places with neighbors or change their
state when noticing their neighbors: a programmable digital liquid.
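
[[ Editor's note: For readers new to ComMet, a toy illustration (my own
sketch, not the authors' code) of a tiling whose tiles swap places under a
purely local rule; with this particular rule the tiles settle into sorted
bands, faintly Dutch-flag-like:

    import random

    def step(tiles):
        i = random.randrange(len(tiles) - 1)      # look at one adjacent pair
        if tiles[i] > tiles[i + 1]:               # local swap rule
            tiles[i], tiles[i + 1] = tiles[i + 1], tiles[i]

    tiles = [random.choice("RWB") for _ in range(30)]
    for _ in range(5000):
        step(tiles)
    print("".join(tiles))     # bands emerge, e.g. BBB...RRR...WWW
]]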

Reference: M. Lugowski, "Computational Metabolism: Towards Biological
Geometries for Computing", in Artificial Life, C. Langton, ed., pp. 343-368,
Addison-Wesley, Reading, MA: 1989, ISBN 0-201-09356-1 (paperbound).

Emergent mosaics:
----------------
This class of ComMet instances arises from generalizing the known
ComMet solution of Dijkstra's Dutch Flag problem. This has implications
for cryptology and noise-resistant data encodings. We observed deterministic
and indeterministic behavior intertwined, apparently a function of geometry.

A preliminary computational theory of metaphor:
----------------------------------------------
We are working on a theory of metaphor as transformations within ComMet.
Metaphor is loosely defined as expressing one entity in terms of another,
and so it must underlie categorization and perception. We postulate that
well-defined elementary events capable of spawning an emergent computation
are needed to encode the process of metaphor. We use ComMet to effect this.

A generalization of Prisoner's Dilemma (PD) for computational ethics:
---------------------------------------------------------------------
The emergence of cooperation in iterated PD interactions is known.
We propose a further generalization of PD into a communication between two
potentially complex agents that are not necessarily aware of each other. These
agents are expressed as initial configurations of ComMet spatially arranged
to allow communication through tile propagation and tile state change.
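
[[ Editor's note: The "emergence of cooperation in iterated PD" refers to
results in the Axelrod line of work, where strategies like tit-for-tat
sustain cooperation in repeated play. A minimal sketch (mine; the payoffs
are the usual 3/5/1/0 values):

    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opp_history):              # cooperate, then mirror
        return opp_history[-1] if opp_history else 'C'

    def always_defect(opp_history):
        return 'D'

    def play(a, b, rounds=20):
        ha, hb, sa, sb = [], [], 0, 0
        for _ in range(rounds):
            ma, mb = a(hb), b(ha)              # each sees the other's moves
            pa, pb = PAYOFF[(ma, mb)]
            ha.append(ma); hb.append(mb)
            sa += pa; sb += pb
        return sa, sb

    print(play(tit_for_tat, tit_for_tat))      # (60, 60): cooperation holds
    print(play(tit_for_tat, always_defect))    # (19, 24): defection contained
]]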

Connection Machine (CM) implementation:
--------------------------------------
We will show a video animation of our results, obtained on a 16k-processor CM,
including emergent mosaics, confirming what we had predicted theoretically.
Our CM program computes in 3 minutes what took 7 days to do on
a Lisp Machine. Our output is a 128x128 color pixel map. Our code will
run in virtual mode, if need be, with up to 32 ComMet tiles per CM processor,
yielding a 2M-tile tiling (over 2 million tiles) on a 64k-processor CM.

------------------------------

Subject: Re: N.N. & CW!
From: GMoretti@massey.ac.nz (Giovanni Moretti)
Organization: Massey University, Palmerston North, New Zealand
Date: 21 Feb 90 02:23:02 +0000

>> Are neural nets suitable for decoding CW (morse code)?

Sylvan,
I won't venture to offer an opinion on the above question (I started
reading about NN four days ago) but refer you to a couple of readable
articles in BYTE August 1989.

The second of these is "Building Blocks for Speech" (page 235), which deals
with the use of nets to recognise the "B" and "D" consonants, and builds
up from there. It's a very readable article about the use of nets on
input data that's time varying (there's probably a jargon term for this
:-).

I've also found a very simple backpropagation program in Turbo Pascal
(about one and a half pages), written by David Parker and included in the
Dr. Dobb's Journal article "Programming Paradigms" by Michael Swaine, DDJ
October 1989, p. 112 (listing starts p. 146). The source is available from
SIMTEL-20 in the DDJMAG (I think) directory.
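
[[ Editor's note: Parker's listing is in Turbo Pascal; for flavour, here is
a comparably small backpropagation network in Python (my sketch, not the
DDJ listing): one hidden layer of sigmoid units trained on XOR. It usually
converges with these settings; if it sticks in a local minimum, change the
seed:

    import math, random

    random.seed(1)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))

    n_in, n_hid, lr = 2, 3, 0.5
    W1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
          for _ in range(n_hid)]                   # +1 slots are biases
    W2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    for _ in range(10000):
        for x, t in data:
            xb = x + [1.0]
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in W1] + [1.0]
            y = sig(sum(w * v for w, v in zip(W2, h)))
            dy = (t - y) * y * (1 - y)             # delta at the output unit
            dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
            for j in range(n_hid + 1):
                W2[j] += lr * dy * h[j]
            for j in range(n_hid):
                for i in range(n_in + 1):
                    W1[j][i] += lr * dh[j] * xb[i]

    for x, t in data:
        xb = x + [1.0]
        h = [sig(sum(w * v for w, v in zip(row, xb))) for row in W1] + [1.0]
        print(x, t, round(sig(sum(w * v for w, v in zip(W2, h))), 2))
]]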

These articles aren't exactly high-brow stuff but they're a very good
introduction, especially the interview with Dave Parker.

I can mail you the Turbo Pascal source if you like.

Cheers
Giovanni (ZL2BOI)

--
-------------------------------------------------------------------------------
| GIOVANNI MORETTI, Consultant       | EMail: G.Moretti@massey.ac.nz           |
| Computer Centre, Massey University | Ph 64 63 69099 x8398, FAX 64 63 505607  |
| Palmerston North, New Zealand      | QUITTERS NEVER WIN, WINNERS NEVER QUIT  |
-------------------------------------------------------------------------------

------------------------------

Subject: Re: Boltzmann v. Cauchy
From: munnari!cluster.cs.su.oz.au!ray@uunet.UU.NET
Date: Mon, 26 Feb 90 22:32:38 +1100

In Neuron Digest, Volume 6 : Issue 16 (Friday, 23 Feb 1990) andrew@dtg.nsc.com
writes ...

> Phil Wassermann came to speak at our plant recently, and mentioned that
> Cauchy was often superior to Boltzmann in that a large jump out of an
> extended local minimum area was more likely, and thus a global minimum
> more likely to be successfully found ...

A nice property of wandering through an energy landscape by classic
simulated annealing is that the relative probabilities of two global
states is simply a function of the energies of the two states i.e. it is
independent of the path(s) to those states. This is particulary useful
for Boltzmann Machines. I don't think this property holds when you use
"fast" simulated annealing.

Raymond Lister
Basser Department of Computer Science
University of Sydney
NSW 2006
AUSTRALIA

Internet: ray@cs.su.oz.AU
CSNET: ray%cs.su.oz@RELAY.CS.NET
UUCP: {uunet,hplabs,pyramid,mcvax,ukc,nttlab}!munnari!cs.su.oz.AU!ray
JANET: munnari!cs.su.oz.AU!ray@ukc (if you're lucky)

------------------------------

Subject: Fujitsu "Super Computer"
From: Mark Dzwonczyk <Mark_Dzwonczyk@qmlink.draper.com>
Date: 28 Feb 90 16:19:54 -0800

[[ Editor's note: I took the liberty of attempting to format this
article and clean it up a bit. Editorial confusions/comments are marked,
as always with [[x]]. -PM ]]


Here's an article regarding IJCNN-Wash which mentions the Fujitsu
machine: There may be some typos; the scanner isn't perfect. Probably
needs HNC's OCR system...

Mokhoff, N., "Neural nets making the leap out of lab", Electronic
Engineering Times, January 22, 1990, pp. 1 & 122.

Washington. Neural networks are beginning to make good on their
long-promised potential, with individual chips, boards and development
tools having poured into the marketplace over the past two years. Last
week's International Joint Conference on Neural Networks made it clear
that neurocomputers are poised to take their place as actual partners
with digital computers in solving complex problems with unprecedented
precision by the year 2000.

Among the signposts at IJCNN, aside from the traditional product
rollouts, were the debut of a neural ASIC business (see related story,
page 122) and a host of presentations on multiprocessor system
architectures, most of which are at the prototype stage. Among the
developments are:

A multiprocessor architecture from Fujitsu Laboratories for
connecting up to 1,024 TMS320C30 floating-point digital signal
processors to generate a billion connection updates per second.

A toroidal lattice architecture implemented with 17 Inmos T800
floating-point Transputers expandable to large-scale networks
using digital signal processors from Sharp Corp.

A waferscale systolic array comprising more than 900,000
transistors capable of a billion connections per second in a
system implementation from Siemens.

From SAIC, a concurrent processing architecture that eliminates
sequential processing requirements (a throughput limiter) and can
support implementations of both loosely coupled networks and tightly
coupled multiple-instruction, multiple-data (MIMD) processors fabricated
in VLSI chips.

Sandy/8, the TMS320C30 based architecture from Fujitsu Laboratories
(Kawasaki, Japan), may be the first such system architecture geared
toward use in today's system environment. When fully implemented, Sandy/8
will comprise VME boards populated with custom chips that will be scaled
replicas of the current prototype board (see photo). Today, one board
represents one neuron. The prototype Sandy/8, which will use 256
processing elements when completed in the fall, will eventually be
upgraded to the Sandy/10, able to house 1,024 processing elements (PEs).

Fujitsu researchers Hideki Kato and Kazuo Asakawa developed the
architecture to resolve the thorny problem of how to divide tasks in a
multiprocessing system and then assign the divided tasks to PEs without
affecting throughput. The basic architecture consists of PEs and "trays"
(see diagram on page 1). The trays, which are connected by a ring network
that serves as a cyclic shift register, function as container/routers that
allow tasks to be divided and assigned to the PEs. Each tray is
implemented in a 20,000-gate Hitachi gate array and is connected to two
neighbors.

Each PE has a floating-point multiplier, an adder and some
local memory (SRAM) to store the weight vectors and program code needed
to run the system's backpropagation algorithm.

The layers required to implement the various connection patterns are
configured in software. "This provides us the flexibility for changing
the number of layers to be simulated as required, since the architecture
has no physical layer structure,"
said Asakawa, section manager of the
Computer-based System Laboratory.

For example, to implement a fully connected three-layer neural network
that has four neurons in the input layer, three in the hidden layer and
two in the output layer, four trays and three PEs are needed with no need
for any physical layer structure.
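
[[ Editor's note: My reading of the tray scheme, as a sketch (the indexing
convention is my assumption, not from the article): the trays circulate
the input activations past the PEs, and each PE accumulates one unit's
weighted sum, so a 4-input, 3-unit layer takes four trays, three PEs and
four shifts:

    def ring_layer(W, x):
        """W[j][i] = weight from input i to PE j; one tray per input."""
        n_pe, n_tray = len(W), len(x)
        acc = [0.0] * n_pe
        for s in range(n_tray):              # one ring shift per step
            for j in range(n_pe):
                i = (j + s) % n_tray         # input now at PE j's tray
                acc[j] += W[j][i] * x[i]
        return acc

    W = [[1, 2, 3, 4], [0, 1, 0, 1], [1, 0, 0, 0]]   # 3 PEs x 4 inputs
    x = [1.0, 0.5, -1.0, 2.0]
    print(ring_layer(W, x))                                    # ring result
    print([sum(w * v for w, v in zip(row, x)) for row in W])   # direct check
]]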

In addition to the PEs, trays and SRAMs, three other elements implement
the architecture: a variable FIFO memory connected to a host Sun-3
computer, short bridges linking the PEs to the ring network of trays, and
a host common bus through which the host can access all of the SRAM
associated with the PEs.

FIFOs and SRAMs are needed to store the vectors, which are used
repeatedly in learning cycles to get the best performance and the most
accurate results. That's because the bandwidth of the ring network of
trays is 67 Mbytes/s, much faster than the VME bus connected to the host.

Applications other than neurocomputing for which the Fujitsu
architecture is expected to be useful are 2-D image processing and
conventional vector processing. A 2-D version of Sandy for image
processing will be ready in two years, using custom VLSI PEs, said senior
researcher Kato.

Among the ready-for-market products shown was the most commercial,
turnkey, end-user neural network product. Hecht-Nielsen Neurocomputer
(San Diego) introduced IDEPT, the Image Document Entry Processing
Terminal, for recognizing a wide variety of images. Armed with Oscar,
HNC's proprietary neural network recognition software, IDEPT is claimed
to "achieve on a digit-to-digit basis a greater accuracy m handwritten
numbers and alphabetic characters than any other image character
recognition system,"
said Robert Hecht-Nielsen, HNC's chairman. For
$39,500, the IDEPT workstation includes an AT-compatible personal
computer, an HNC Anza neurocomputing coprocessor, an optical scanner with
document feeder, a color VGA monitor and Oscar. IDEPT can operate as a
standalone unit or in a networked configuration, where it can recognize
characters on forms that have been stored on a remote image data system
or scanned at another location.

HNC also incorporated its General Applications Architecture (GAA) into
NetsetII, a software package that allows users to define and create
applications without programming. GAA elements are represented by icons
on the screen and include the following: file processor, data processor,
pipe processor, display processor and a neural network processor that
allows access to all 19 neural network paradigms supplied by HNC. "GAA
is the first modular CASE environment built by a neural network company,"
said Hecht-Nielsen.

Meanwhile, Lucid Inc. (Menlo Park, Calif.) said it will supply Plexi, a
software-based neural net system for Unix platforms. Offered for the
Sun-3, Sun-4 and Sparcstation in the summer, Plexi is a general
development tool for neural network simulations that was developed by
Plexi Software Inc. (San Francisco). Lucid, a leading supplier of
Common Lisp software products, will distribute Plexi development and
delivery products and plans to port Plexi to all Lucid Common Lisp
platforms. The product is menu driven, and networks can be constructed
and modified via mouse actions using the Network Editor and Pattern
Editor. Several conventional neural network paradigms (Hopfield memory,
backpropagation, and competition) are provided with Plexi.



------------------------------

Subject: Logic Based Systems Using Neural Nets
From: arun1@mars.njit.edu (arun maskara spec lec cis)
Date: Thu, 01 Mar 90 14:12:46 -0500




"Logic Based Systems Using Neural Nets"

For the last nine months I have been thinking about the idea of
implementing an inference engine using a neural network. I have written a
simulation program which implements an inference engine and works on a
simple set of rules.
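
[[ Editor's note: Not Arun's program, but a hedged sketch of the general
idea: each rule becomes a threshold unit that fires when all its
antecedents are active (weight 1 per antecedent, threshold equal to the
antecedent count), and forward chaining is repeated network update until
nothing new fires. The toy rules are made up:

    RULES = {                        # consequent: list of antecedents
        "mammal": ["has_fur"],
        "carnivore": ["mammal", "eats_meat"],
    }

    def infer(facts, rules, max_passes=10):
        active = set(facts)
        for _ in range(max_passes):
            fired = {head for head, body in rules.items()
                     if sum(1 for a in body if a in active) >= len(body)}
            if fired <= active:
                return active        # stable state: the net has settled
            active |= fired
        return active

    print(sorted(infer({"has_fur", "eats_meat"}, RULES)))
    # ['carnivore', 'eats_meat', 'has_fur', 'mammal']
]]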

I want to get feedback from other people who have thought of this idea or
are working on something related.


Thank You all.

Arun Maskara

arun1@mars.njit.edu

------------------------------

Subject: Two positions
From: marwan@extro.ucc.su.oz.au (Marwan Jabri)
Organization: University Computing Service, Uni. of Sydney, Australia.
Date: 24 Feb 90 01:52:59 +0000

[[ Editor's note: I assume the salaries listed are in Australian
dollars; convert accordingly. -PM ]]

Sydney University Electrical Engineering

Systems Engineering and Design Automation Laboratory

(Two Positions)

---------------------------------------------------------------

Research Fellow

Reference No. 08/01

Microelectronic Implementation of Neural
Networks based Devices for the Analysis and
Classification of Medical Signals


Applications are invited from enthusiastic persons to work on an advanced
neural network application project in the medical area. The project is
being funded jointly by the Australian Government and a high-technology
manufacturer of medical products.

The project involves the research and development of different network
architectures to be implemented on ASICs. The chips are to be used for the
analysis and classification of medical signals. The successful applicant
is expected to play a leading role in system specification, design and
implementation, and functional and circuit-level simulation.

Applicants should have an electrical engineering degree or equivalent,
and either a PhD degree or substantial experience in a related field.

The appointees may apply for enrollment towards a postgraduate degree
(part-time).

Preference will be given to applicants who have experience in artificial
neural networks or in MOS analog or digital integrated circuit design.

The appointment is initially for one year, with the possibility of renewal.
Salary range according to qualifications.

---------------------------------------------------------------

Professional Assistant
Grade I/II

Reference No. 08/04

Microelectronic Implementation of Neural
Networks based Devices for the Analysis and
Classification of Medical Signals


Applications are invited from enthusiastic persons to work on an advanced
neural network application project in the medical area. The project is
being funded jointly by the Australian Government and a high-technology
manufacturer of medical products.

The project involves the research and development of different network
architectures to be implemented on ASICs. The chips are to be used for the
analysis and classification of medical signals. The successful applicant
is expected to contribute to architecture design and implementation, and
to carry out MOS circuit design, modelling, and simulation.

Applicants should have an electrical engineering degree or equivalent.
Preference will be given to applicants who have experience in artificial
neural networks or in MOS analog or digital integrated circuit design.

The appointees may apply for enrollment towards a postgraduate degree
(part-time).

The appointment is initially for one year, with the possibility of renewal.
Salary range according to qualifications.

---------------------------------------------------------------

Salaries: Research Fellow                  $32,197-$41,841 p.a.
          Professional Assistant Grade II  $33,842-$35,928 p.a.
          Professional Assistant Grade I   $23,878-$32,053 p.a.

Method of application: Applications, quoting ref. no. and including a
curriculum vitae, list of publications and the names, addresses and fax
nos. of three referees, to the Registrar, Staff Office, University of
Sydney, NSW 2006, from whom general information is available. The
University reserves the right not to proceed with any appointment for
financial or other reasons.

No smoking in the workplace is University policy.

---------------------------------------------------------------

For further information please contact:

Dr M.A. Jabri
Sydney University Electrical Engineering
NSW 2006 Australia
Tel: +61-2-692 2240
Fax: +61-2-692 3847
Email: marwan@ee.su.oz.au

Marwan Jabri E-mail: marwan@ee.su.oz
Systems Engineering and Design Automation Laboratory Fax: (+61-2) 692 3847
Sydney University Electrical Engineering
NSW 2006 Australia

------------------------------

Subject: Announcement: Workshop on Adapt. Neural Nets & Pattern Recog.
From: flynn@pixel.cps.msu.edu (Patrick J. Flynn)
Date: Tue, 27 Feb 90 08:24:06 -0500

Workshop on
Artificial Neural Networks & Pattern Recognition

Sponsored by
The International Association for Pattern Recognition (IAPR)

Sands Hotel
Atlantic City, New Jersey
June 17, 1990


Recent developments in artificial neural networks (ANN's) have caused a
great deal of excitement in the academic, industrial, and defense
communities. Current ANN research owes much to several decades of work
in statistical pattern recognition (SPR); indeed, many fundamental
concepts from SPR have recently found new life as research topics when
placed into the framework of an ANN model.

The aim of this one-day workshop is to provide a forum for interaction
between leading researchers from the SPR and ANN fields. As pattern
recognition practitioners, we seek to address the following issues:

**In what ways do artificial neural networks differ from the well-known
paradigms of statistical pattern recognition? Are there concepts in ANN
for which no counterpart in SPR exists (and vice versa)?

**What benefits can come out of interaction between ANN and SPR researchers?

**What advantages, if any, do ANN techniques have over SPR methods in
dealing with real world problems such as object recognition, pattern
classification, and visual environment learning?

Tentative Program

8:00 Registration
8:30 Issues in ANN and SPR, Laveen Kanal, University of Maryland
9:15 Links Between ANN's & SPR, Paul Werbos, National Science Foundation
10:00 Coffee Break
10:30 Generalization & Discovery in Adaptive Pattern Recognition, Y. Pao,
Case Western Reserve University
11:15 Character Recognition, Henry Baird, AT&T Bell Labs

12:00 LUNCH

1:30 Target Recognition, Steven Rogers, U.S. Air Force
2:15 Connectionist Models for Speech Recognition, Renato DeMori, McGill
University
3:00 Coffee Break
3:30 Panel Discussion, Moderators: Anil Jain, Michigan State University &
Ishwar Sethi, Wayne State University

Registration Information:
Advance Registration (by 5/15/90): $100
Late Registration: $120

Contact: Ms. Cathy Davison (Workshop on ANN and PR)
Department of Computer Science, A-714 Wells Hall
Michigan State University, East Lansing, MI 48824
Tel. (517)355-5218, email: davison@cps.msu.edu, FAX: (517)336-1061

------------------------------

Subject: NN conference Indiana_Purdue Ft Wayne
From: Samir Sayegh <sayegh@ed.ecn.purdue.edu>
Date: Thu, 01 Mar 90 18:47:15 -0500


Third Conference on Neural Networks and Parallel Distributed Processing
Indiana-Purdue University

A conference on NN and PDP will be held April 12, 13 and 14, 1990 on the
common campus of Indiana and Purdue University at Ft Wayne. The emphasis
of this conference will be Vision and Robotics, although all contributions
are welcome. People from the Midwest are particularly encouraged
to attend and contribute, especially since the "major" NN conferences seem
to oscillate between the East and West Coasts!

Send abstracts and inquiries to:

Dr. Samir Sayegh
Physics Department
Indiana Purdue University
Ft Wayne, IN 46805

email: sayegh@ed.ecn.purdue.edu
sayegh@ipfwcvax.bitnet

FAX : (219) 481-6800
Voice: (219) 481-6157


------------------------------

End of Neuron Digest [Volume 6 Issue 18]
****************************************
