Neuron Digest   Monday,  8 May 1989                Volume 5 : Issue 21 

Today's Topics:
What is the size of LARGE nets ??
Re: Removing noise from EKG signals
Gaussian elimination on sparse matrices (used as an associative memory)
Re: Gaussian elimination on sparse matrices (used as an associative memory)
Sorting
Re: Sorting
E. coli & gradient search
Neural Nets in Medical Imaging
Re: Neuron Digest V5 #17
Parallel Simulated Annealing / References and Are You Doing It?
Share hotel room at IJCNN?
Volunteers Wanted for IJCNN
Rochester Connectionist Simulator: answers to common questions
rochester connectionist simulator available on uunet.uu.net


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: What is the size of LARGE nets ??
From: Rene Bach <mcvax!cernvax!ascom!rene@uunet.uu.net>
Organization: Ascom Tech AG, Solothurn, CH
Date: 31 Mar 89 07:10:29 +0000

Hi,
I am fairly new to this domain. I am curious about the common sizes
of useful nets and what size is considered LARGE. For LARGE,
implemented nets, I would like to know:

a) size of input and output layer
b) number of nodes (broken down as nodes/hidden layers) and number
of connections/node.
c) size of training set
d) CPU and elapsed training time (and on which hardware)
e) a rough measure of performance for that training time and size of
training set.

If there is interest, I'll summarize to the net. And do not hesitate to let
me know if this is a silly or irrelevant question.

Cheers
Rene Bach
Ascom Tech
Switzerland

rene@ascom.uucp

------------------------------

Subject: Re: Removing noise from EKG signals
From: demers@beowulf.ucsd.edu (David E Demers)
Organization: EE/CS Dept. U.C. San Diego
Date: Fri, 31 Mar 89 17:23:44 +0000

Doug Palmer and Duane DeSieno have a paper, "Removing Random Noise from
EKG Signals Using a Backpropagation Network". I don't know where it has
been published; I have a preprint. Doug Palmer is at HNC, Inc. and Duane
DeSieno is at Logical Designs Consulting, Inc., both in the San Diego area.

They trained a net to filter out noise and achieved better performance than
a finite impulse response filter.
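
A minimal sketch of the idea (illustrative only; not Palmer and DeSieno's
actual network): a one-hidden-layer backpropagation net trained to map a
window of noisy samples onto the clean sample at the window's center. The
signal, window width, and learning rate are all arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8 * np.pi, 4000)
    clean = np.sin(t) + 0.5 * np.sin(3 * t)      # stand-in for a clean EKG trace
    noisy = clean + 0.3 * rng.standard_normal(t.size)

    W = 21                                       # input window width (arbitrary)
    X = np.array([noisy[i:i + W] for i in range(t.size - W)])
    y = clean[W // 2 : t.size - W + W // 2]      # clean sample at window center

    H = 16                                       # hidden units (arbitrary)
    W1 = rng.normal(0, 0.1, (W, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.1, H);      b2 = 0.0
    lr = 0.01

    for epoch in range(20):                      # plain online backpropagation
        for i in rng.permutation(len(X)):
            h = np.tanh(X[i] @ W1 + b1)
            err = h @ W2 + b2 - y[i]             # output error on this window
            gh = err * W2 * (1.0 - h * h)        # backprop through tanh
            W2 -= lr * err * h;            b2 -= lr * err
            W1 -= lr * np.outer(X[i], gh); b1 -= lr * gh

    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    print("MSE of noisy input:", np.mean((X[:, W // 2] - y) ** 2))
    print("MSE of net output :", np.mean((pred - y) ** 2))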

Dave DeMers
demers@cs.ucsd.edu

------------------------------

Subject: Gaussian elimination on sparse matrices (used as an associative memory)
From: Mike Rynolds <myke@GATECH.EDU>
Organization: School of Information and Computer Science, Georgia Tech, Atlanta
Date: 04 Apr 89 16:38:29 +0000

If A represents a series of input state vectors and B is a corresponding
list of output state vectors, then in the equation AX = B, X is a neural
net which can be trained simply by setting it equal to A^(-1) * B. Since
A and B consist of 1's and 0's, and mostly 0's, large matrices can be
made manageable if they are treated as sparse.
I have only been able to find Gaussian elimination algorithms for sparse
systems of linear equations of the form Ax = b, where x and b are vectors.
Can anyone direct me to a Gaussian elimination algorithm for sparse
systems of the form AX = B?
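
(For concreteness: any Gaussian-elimination routine for Ax = b extends to
AX = B by solving for one column of B at a time -- column j of X is the
solution of A x = B[:, j]; if A is not square, the least-squares solution
plays the same role. A minimal dense sketch follows; a real implementation
would use a sparse solver, e.g. scipy.sparse.linalg.spsolve, to exploit
the zeros.)

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 6, 3
    # A 0/1 matrix that is guaranteed invertible (unit upper-triangular):
    A = np.eye(n) + np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
    B = (rng.random((n, m)) < 0.3).astype(float)   # 0/1 output patterns

    # Solve AX = B one column at a time:
    X = np.column_stack([np.linalg.solve(A, B[:, j]) for j in range(m)])
    print(np.allclose(A @ X, B))                   # True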

Mike Rynolds
School of Information & Computer Science, Georgia Tech, Atlanta GA 30332
uucp: ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!myke
Internet: myke@gatech.edu

------------------------------

Subject: Re: Gaussian elimination on sparse matrices (used as an associative memory)
From: hwang@taipei.Princeton.EDU (Jenq-Neng Hwang)
Organization: Princeton University, Princeton NJ
Date: Thu, 06 Apr 89 13:50:08 +0000

Instead of Gaussian-elimination-type algorithms for solving sparse
matrices, row-action methods have been proposed: iterative procedures
suitable for solving linear systems without any structural assumptions
about sparseness. One famous example is the Kaczmarz projection method,
which can be used to interpret the dynamic behavior of the later stages
of back-propagation learning and has been widely used in image
reconstruction applications. A good tutorial paper is:

Yair Censor, "Row-Action Methods for Huge and Sparse Systems and Their
Applications," SIAM Review, Vol. 23, No. 4, pp. 444-466, October 1981.
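
A minimal sketch of the Kaczmarz iteration on a toy dense system; the
same update applies row by row when A is sparse, touching only the
nonzeros of each row:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 8
    A = rng.normal(size=(n, n))
    b = A @ rng.normal(size=n)           # a consistent right-hand side

    x = np.zeros(n)
    for sweep in range(200):             # cycle through the rows
        for i in range(n):
            a = A[i]
            # Project x onto the hyperplane a . x = b[i] (one row-action step):
            x += (b[i] - a @ x) / (a @ a) * a
    print("residual:", np.linalg.norm(A @ x - b))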

J. N. Hwang

------------------------------

Subject: Sorting
From: Ytshak Artzi - CPS <mailrus!shadooby!accuvax.nwu.edu!tank!eecae!cps3xx!cpsvax!artzi@TUT.CIS.OHIO-STATE.EDU>
Organization: Michigan State University, Computer Science Department
Date: 11 Apr 89 03:14:24 +0000


Please comment on the following NAIVE algorithm:

Let N1, N2, ..., Nk be neurons in a network.
For every two connected neurons (Ni, Nj) we define Ni to be "LEFT"
and Nj to be "RIGHT".
The neurons are initially assigned numeric values (the numbers we wish
to sort).
We activate the network;
we let the neurons exchange information among themselves "freely"
until EVERY "LEFT" contains a smaller value than its "RIGHT".

We define this state to be a SORTED NETWORK.
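
For concreteness, assuming a one-dimensional chain of neurons (one
possible topology, in the spirit of the NOTE below), the exchange rule
amounts to odd-even transposition sort, with disjoint LEFT/RIGHT pairs
comparing and exchanging in parallel. A minimal sketch:

    def chain_sort(values):
        v = list(values)
        k = len(v)
        for phase in range(k):                    # k phases always suffice
            for i in range(phase % 2, k - 1, 2):  # disjoint pairs, "in parallel"
                if v[i] > v[i + 1]:               # LEFT > RIGHT: exchange
                    v[i], v[i + 1] = v[i + 1], v[i]
        return v

    print(chain_sort([5, 1, 4, 2, 3]))            # [1, 2, 3, 4, 5]

On this topology the computing time is predictable: at most k parallel
phases are needed, which bears on question 1 below.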


Questions:
1. Is the computing time predictable?
2. How can we evaluate the performance of the algorithm (criteria)?
3. Has anyone done it before?
4. Does anyone know of a sorting algorithm?

NOTE: if it makes it easier for you, you may assume any Network model,
with or without feedback, etc.

Thanks.

Itzik Artzi, CPS, Michigan State University
artzi@cpsvax.cps.msu.edu

------------------------------

Subject: Re: Sorting
From: andrew <nsc!andrew@DECWRL.DEC.COM>
Organization: National Semiconductor, Santa Clara
Date: 11 Apr 89 08:03:56 +0000

> Please comment on the following NAIVE algorithm:
> ...

For a fully-connected network (as opposed to a 1-D string):
1. The network is sorted as soon as the numbers are assigned, due to
the flexibility in the definitions of "LEFT" and "RIGHT". Ans: ZERO time.
2. We don't need to bother.
3. Yes, because it's a null operation, and this is occasionally done.
4. Yes. I read Knuth some time ago.

Andrew Palfreyman USENET: ...{this biomass}!nsc!logic!andrew
National Semiconductor M/S D3969, 2900 Semiconductor Dr., PO Box 58090,
Santa Clara, CA 95052-8090 ; 408-721-4788 there's many a slip
'twixt cup and lip

------------------------------

Subject: E. coli & gradient search
From: Alex Kirlik <prism!hms2!kirlik@GATECH.EDU>
Organization: Center for Human-Machine Systems Research - Ga Tech
Date: 11 Apr 89 04:31:41 +0000


The following is taken from Stephen Jay Gould's _Hen's Teeth and Horse's
Toes_, W.W. Norton & Co., New York, 1984 (p. 161):

"After Berg had modified his microscope to track individual bacteria, he
noted that an E. coli moves in two ways. It may "
run," swimming steadily for
a time in a straight or slightly curved path. Then it stops abruptly and
jiggles about--a "
twiddle" in Berg's terminology. After twiddling, it runs
off again in another direction. Twiddles last a tenth of a second and occur
on an average of once a second. The timing of twiddles and the directions
of new runs seem to be random unless a chemical attractant exists at high
concentration at one part of the medium. A bacterium will then move
up-gradient toward the attractant by decreasing the probability of twiddling
when a random run carries it in the right direction. When a random run moves
in the wrong direction, twiddling frequency remains at its normal, higher
level. The bacteria therefore drift toward an attractant by increasing the
lengths of runs in favored directions."


Not exactly simulated annealing, but close enough for patent infringement? :-)
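
A minimal sketch of the run-and-twiddle strategy as a biased random walk
on a line (the twiddle probabilities and attractant profile are
illustrative choices, not Berg's measurements):

    import random

    random.seed(0)
    def attractant(x):                   # concentration peaks at x = 50
        return -abs(x - 50.0)

    x, direction = 0.0, 1
    for step in range(2000):
        before = attractant(x)
        x += direction                   # "run" one unit in the current direction
        improving = attractant(x) > before
        p_twiddle = 0.1 if improving else 0.5   # suppress twiddles on good runs
        if random.random() < p_twiddle:
            direction = random.choice([-1, 1])  # "twiddle": new random direction
    print("final position:", x)          # tends to hover near the peak at 50

No gradient is ever computed; the drift comes entirely from modulating
the twiddle probability, just as the quoted passage describes.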

Alex Kirlik

UUCP: kirlik@chmsr.UUCP
{backbones}!gatech!chmsr!kirlik
INTERNET: kirlik@chmsr.gatech.edu

------------------------------

Subject: Neural Nets in Medical Imaging
From: bruce raoul parnas <pasteur!sim!brp@UCBVAX.BERKELEY.EDU>
Organization: University of California, Berkeley
Date: 11 Apr 89 22:00:11 +0000

Can anyone send me information about, or references to, the use of neural
networks in image processing, specifically medical imaging? I'm interested
in what sorts of things NNs can do in recognizing patterns in medical
images where there is some a priori knowledge of the scene to be surveyed.

if people would prefer to send e-mail, i can post results to the net.

thanx,

bruce
brp@sim.berkeley.edu

------------------------------

Subject: Re: Neuron Digest V5 #17
From: ian@shire.cs.psu.edu (Ian Parberry)
Organization: Penn State University
Date: Wed, 12 Apr 89 14:02:57 +0000

Bruce, thanks for your interesting reply. I have been away from the net for
a while (system installation), sorry if I am a bit out-of-date.

>Subject: Re: Re: bottom-up (was Re: NN Question)
>From: brp@sim.uucp (bruce raoul parnas)
>Date: Wed, 15 Mar 89 17:28:45 +0000

I think we are basically agreed that a statement like "the world is
discrete" or "the world is analog" gives us little reason to model neural
networks as discrete or analog.

>>Real numbers, continuous functions etc., are abstractions which help
>>us deal with the fact that the number of discrete units is larger
>>than we can deal with comfortably.
>
>right. and in most physical systems we may, for our understanding, treat them
>as essentially analog since we simply can't deal with the complexity presented
>by the true (?) discrete nature.

I'm not convinced. Computational complexity theory gives us tools for
dealing with discrete resources (time, memory, hardware) which are too large
to handle individually. There is no need to treat them as continuous.

>>There are (at least) two objections to the classical automata-
>>theoretic view of neural systems. One is that neural systems
>>are not clocked (I presume that this is what you mean by
>>"continuous time"), and that neurons have analog behaviour.
>
>that is precisely what i meant. neurons each evolve on their own, independent
>of system clocks.

Yes? I didn't think the evidence was in on that. I recently heard of a
paper that claimed a large amount of synchronicity in neuron firings. I
don't remember the author. I'll send you email if I remember.

>i believe that a system clock would be more of a hindrance than a help.
>studies with central pattern generators and pacemaker activity (re: the heart)
>show clearly that system clocks are not unavailable. if evolution had found
>a neural system clock advantageous, one could have been created. i feel,
>however, that the continuous-time evolution of neural systems imbues them
>with their remarkable properties.

You are entitled to your opinion. You are reasoning by analogy here. Could
there REALLY be a wetware system clock? You may be missing implementation
details that make it impossible. For example, could the correct period
(milliseconds) be achieved? And could it be communicated reliably and in
small hardware to all neurons? I think the remarkable properties of neural
networks come from other sources; or perhaps we have different definitions
of "remarkable".

Here is another way of looking at it. When one neuron fires and its
neighbour is not receptive (building up charge) there is a fault. Faults
are relatively infrequent (receptive time is larger than nonreceptive time).
The architecture is fault-tolerant. That's why we observe that the brain is
fault-tolerant when some of its neurons are destroyed. It has to be in
order to get around the lack of system clock. Neural architectures are
better at fault-tolerance than von-Neumann ones (at least, we can prove this
when the thresholding is physically separated from the summation of weights,
as seems to be the case for biological neurons).

>I don't think that such a fine level of precision is necessary in neural
>function, i.e. six places would likely be enough. but since digital circuitry
>is made actually from analog circuit elements limited to certain regions of
>operation, why go to this trouble in real neural systems when analog seems
>to work just fine?

If six decimal places is enough, then we can model everything as integers.
Why do this? It is easier to analyze. Combinatorics is easier than
analysis (despite Hecht-Nielsen's claim at the first San Diego NN conference
that the opposite is true). I don't care if the real neural systems seem to
behave in an analog fashion. If it seems that the _computationally
important_ things going on are really discrete (and you seem to have agreed
that this is the case), then our model should reflect this. I'm not
necessarily saying that we should _build_ them that way. That's another
question. But perhaps we ought to _think_ of them that way. To use an
analogy, we don't usually think of a computer as having infinite memory, but
it certainly helps to program them as if it were the case. For a complexity
theorist, infinite means "adequate for day-to-day use". This is where the
classical attack on theoretical computer science (my TRaSh-80 is not a
Turing machine) breaks down.

I think that, despite the bad press that theoretical computer science gets
from some NN researchers (I've heard many unprofessional statements made in
conference presentations by people who should know better), complexity
theory has something to contribute. So do other disciplines. I'm just a
little tired of people closing doors in my face. It has become fashionable
to disparage TCS (following the bad examples mentioned three sentences ago).
Sorry if my knee-jerk reaction to your posting was a little harsh.

Ian Parberry
"The bureaucracy is expanding to meet the needs of an expanding bureaucracy"
ian@theory.cs.psu.edu ian@psuvax1.BITNET ian@psuvax1.UUCP (814) 863-3600
Dept of Comp Sci, 333 Whitmore Lab, Penn State Univ, University Park, Pa 16802

------------------------------

Subject: Parallel Simulated Annealing / References and Are You Doing It?
From: "Dan R. Greening" <squid!dgreen@LANAI.CS.UCLA.EDU>
Organization: UCLA Computer Science Department
Date: 13 Apr 89 23:14:42 +0000

Simulated annealing is often used for combinatorial optimization and other
hard problems. It uses thermodynamic properties as a metaphor for solving
these problems. Simulated annealing is used for nearly every VLSI CAD
problem under the sun: placement, routing, logic optimization, circuit
delay. Plus some vision problems, neural network weight-setting, a huge
collection of NP-complete problems, etc. That's why the crossposting list
is so big. I probably missed a few, too :-).
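
For readers new to the technique, a minimal sketch of the annealing loop
on a toy one-dimensional objective (always accept improvements, accept
uphill moves with probability exp(-delta/T), and cool T slowly; the
objective and schedule are illustrative):

    import math, random

    random.seed(0)
    def cost(x):                       # toy objective with several local minima
        return x * x + 10.0 * math.sin(3.0 * x)

    x, T = 8.0, 5.0
    while T > 1e-3:
        x_new = x + random.uniform(-1.0, 1.0)  # propose a random move
        delta = cost(x_new) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = x_new                          # accept (always, if downhill)
        T *= 0.999                             # slow geometric cooling
    print("found x = %.3f, cost = %.3f" % (x, cost(x)))

The occasional uphill acceptances are what let the search escape local
minima; keeping those acceptances statistically consistent across
processors is part of what makes parallelization tricky.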

I am looking for references AND people who have implemented simulated
annealing applications on parallel processors. There are some difficult and
interesting problems in extending simulated annealing to parallel
processors.

If you send parallel simulated annealing references to me via e-mail, I will
compile a bibliography and share it with all contributors. I'm also
interested in hearing from people who are actively WORKING in parallel
simulated annealing, and will set up a mailing list if there is enough
interest.

Any leads you can give me to parallel simulated annealing researchers who
may not read news (but have e-mail connections) would also be greatly
appreciated.

Dan Greening dgreen@cs.ucla.edu 213-472-4898 308 Westwood Plaza, #117
Los Angeles, CA 90024-1647

------------------------------

Subject: Share hotel room at IJCNN?
From: mv10801@uc.msc.umn.edu
Date: Wed, 03 May 89 14:25:17 -0500


I would like to find a roommate to share a hotel room with at the
IJCNN conference in Washington DC in June. Male non-smoker preferred.
If you're interested, please contact me by e-mail or by phone. Thanks!

-Jonathan Marshall mv10801@uc.msc.umn.edu
Center for Research in Learning, Perception, and Cognition
205 Elliott Hall
University of Minnesota 612-331-6919 (eve/weekend/msg)
Minneapolis, MN 55455 612-626-1565 (office)


------------------------------

Subject: Volunteers Wanted for IJCNN
From: neilson%cs@ucsd.edu (Robert Hecht-Nielsen)
Date: Wed, 03 May 89 15:43:55 -0700


Request for volunteers for the upcoming International
Joint Conference on Neural Networks (IJCNN)

June 18 - June 22


Requirements: In order to receive full admission to the conference and the
proceedings, you are required to work June 19 - June 22, one shift each day.
On June 18 there will be tutorials presented all day. In order to see a
tutorial, you must work that tutorial. See the information below on which
tutorials are being presented.

Shifts: There are 3 shifts: Morning, afternoon and evening. It is best that
you work the same shift each day. Volunteers are organized into groups and
you will, more than likely, be working with the same group each day. This
allows a great deal of flexibility for everyone. If there is a paper being
presented at the time of your shift, you can normally work it out with your
group to see it. Last year I had no complaints from any of the volunteers
regarding missing a paper which they wanted to view.

Tutorials: The following tutorials are being presented:

1) Pattern Recognition - Prof. David Casasent
2) Adaptive Pattern Recognition - Prof. Leon Cooper
3) Vision - Prof. John Daugman
4) Neurobiology Review - Dr. Walter Freeman
5) Adaptive Sensory Motor Control - Prof. Stephen Grossberg
6) Dynamical Systems Review - Prof. Morris Hirsch
7) Neural Nets - Algorithms & Microhardware - Prof. John Hopfield
8) VLSI Technology and Neural Network Chips - Dr. Larry Jackel
9) Self-Organizing Feature Maps - Prof. Teuvo Kohonen
10) Associative Memory - Prof. Bart Kosko
11) Optical Neurocomputers - Prof. Demetri Psaltis
12) Starting a High-Tech Company - Peter Wallace
13) LMS Techniques in Neural Networks - Prof. Bernard Widrow
14) Reinforcement Learning - Prof. Ronald Williams

If you want to work the tutorials, please send me your preferences from
1 to 14 (1 being the one you want to see the most).

Housing: Guest housing is available at the University of Maryland. It is
about 30 minutes away from the hotel, but Washington, D.C. has a great "metro"
system to get you to and from the conference. The cost of housing per night
is $16.50 per person for a double room, or $22.50 for a single room. I will
be getting more information on this, but you need to sign up as soon as
possible as these prices are quite reasonable for the area and the rooms
will go quickly.

General Meeting: A general meeting is scheduled at the hotel on Saturday,
June 17, around 6:00 pm. You must attend this meeting! If you cannot
make the meeting, I need to know about it.

When you contact me to commit yourself officially, I will need from you the
following:

1) shift preference
2) tutorial preferences
3) housing preference (University Housing?)

To expedite things, I can be contacted at work at (619) 573-7391 during
7:00am-2:00pm west coast time. You may also leave a message on my home phone
(619) 942-2843.

Thank you,
Karen G. Haines
IJCNN Tutorials Chairman
neilson%cs@ucsd.edu


------------------------------

Subject: Rochester Connectionist Simulator: answers to common questions
From: rochester!bukys@RUTGERS.EDU
Organization: U of Rochester, CS Dept, Rochester, NY
Date: 07 Apr 89 17:41:57 +0000

From: bukys

===============================================================================

ANSWERS TO COMMON QUESTIONS
REGARDING
THE ROCHESTER CONNECTIONIST SIMULATOR

Fri Apr 7 12:34:10 EDT 1989

Liudvikas Bukys
<bukys@cs.rochester.edu>

===============================================================================

WHAT IS THE SIMULATOR?

The Rochester Connectionist Simulator is a tool that allows you to
build and experiment with connectionist networks. It provides the
basic mechanism for running a simulation (iterate through all
units, call functions, update values). It also provides a graphic
interface that lets you examine the state of a network. It also
provides convenient facilities for defining and manipulating your
network: names for units, set manipulation, etc. It also has a
dynamic loading facility, so you can compile and load new functions
on the fly, and customize the simulator by adding your
own commands. There is also a library to help you implement
back-propagation networks.

The Simulator does come with a few simple canned examples, but does
not provide a lot of the latest greatest gizmos that researchers
have dreamed up. You should think of the simulator as a network
programmer's tool -- you have the tool, but you have to know what
to do with it.

WHAT MACHINES DOES IT RUN ON?

We have run it on VAX and SUN-3 (SunOS 3.x) systems here. It
has been made to run on SUN-4 systems without too much trouble.

It's a little boring without the graphic interface, though, and the
graphic interface only runs on Sun workstations under the SunView
window system.

WHAT OPERATING SYSTEM DEPENDENCIES ARE THERE?

The code is generally pretty generic C, so running it on strange
machines shouldn't be too much trouble, except for the dynamic
loading modules.

The Simulator should feel at home on most Berkeley Unix (BSD
4.2/4.3) based operating systems. In particular, if your system
has an a.out.h file in /usr/include, it'll probably install
easily. If your machine is based on AT&T System V, your object
files are probably COFF format, and some changes need to be made.

We are not running SunOS 4.0 here yet, so it is only known to run
on SunOS 3.x systems. Again, the dynamic loading routines need a
few changes to run under this operating system.

If you have the simulator running on either System V or SunOS 4.0,
please send me your patches, so that I can share them with everyone
else.

DO YOU HAVE AN X VERSION OF THE GRAPHIC INTERFACE?

Someone here worked on a version of the graphic interface using X11
and the HP toolkit. It was close to working, but not quite
debugged (as far as I can tell). Someone else (who will remain
nameless for the moment) ported that code to X11 and the Athena
toolkit. This code will probably be available soon. Stay tuned to
this mailing list.

IS THERE A VERSION AVAILABLE FOR MULTI-PROCESSORS?

Well, yes and no. Version 4.0 was ported to the BBN Butterfly I.
Unfortunately, this was one of the first things we did with the
Butterfly, at a time when the programming environment was quite
crude. (For those who know the Butterfly: this was before the
existence of the Uniform System or the Streams library.) Getting it
to run now is still possible, but it would require digging up a
pile of little utility daemons and libraries and so forth, and,
chances are, unless you are really serious about it, you'll give
up! Perhaps only the original author (Mark Fanty) or I (Liudi
Bukys) would really be able to do it, and we're both doing better
things right now. If, after reading all this, you still want to
try, contact me, and I'll gather the package up and send it to
you. P.S. The back-propagation library doesn't run under this old
Butterfly version.

Of course, the current environment (Mach) on modern Butterflies is
much better, so it's possible that someone here will port it there
some time.

To my knowledge, no one has ported a parallelized version to any of
the other common multiprocessors (Sequent, Alliant, Encore,
Inmos/Transputer, Intel/HyperCube).

===============================================================================

Liudvikas Bukys
<simulator-request@cs.rochester.edu>

------------------------------

Subject: rochester connectionist simulator available on uunet.uu.net
From: Liudvikas Bukys <rochester!bukys@CU-ARPA.CS.CORNELL.EDU>
Organization: U of Rochester, CS Dept, Rochester, NY
Date: 21 Apr 89 14:39:56 +0000

A number of people have asked me whether the Rochester Connectionist
Simulator is available by uucp. I am happy to announce that uunet.uu.net
has agreed to be a redistribution point of the simulator for their uucp
subscribers. It is in the directory ~uucp/pub/rcs on uunet:

-rw-r--r-- 1 8 11 2829 Jan 19 10:07 README
-rw-r--r-- 1 8 11 527247 Jan 19 09:57 rcs_v4.1.doc.tar.Z
-rw-r--r-- 1 8 11 9586 Jul 8 1988 rcs_v4.1.note.01
-rw-r--r-- 1 8 11 589 Jul 7 1988 rcs_v4.1.patch.01
-rw-r--r-- 1 8 11 1455 Apr 19 19:18 rcs_v4.1.patch.02
-rw-r--r-- 1 8 11 545 Aug 8 1988 rcs_v4.1.patch.03
-rw-r--r-- 1 8 11 837215 May 19 1988 rcs_v4.1.tar.Z

These files are copies of what is available by FTP from the directory
pub/rcs on cs.rochester.edu (192.5.53.209). We will still send you, via
U.S. Mail, a tape and manual for $150 or just a manual for $10.

If you are interested in obtaining the simulator via uucp, but you aren't a
uunet subscriber, I can't help you, because I don't know how to sign up.
Maybe a note to postmaster@uunet.uu.net would get you started.

Liudvikas Bukys
<simulator-request@cs.rochester.edu>

------------------------------

End of Neurons Digest
*********************
