Neuron Digest   Friday, 25 May 1990                Volume 6 : Issue 35 

Today's Topics:
ISSNNet meeting/dinner at IJCNN
Korean Neural Network Activities
Re: Intel N64
Re: Intel N64
Connections among nodes within a layer
Re: Connections among nodes within a layer
Re: Connections among nodes within a layer
Re: Connections among nodes within a layer
Re: Connections among nodes within a layer
Is there any quantitative measure of accuracy of a bp network?
Back Propagation for Training Every Input Pattern with Multiple Output
Re: Back Propagation for Training Every Input Pattern with Multiple Output
servo systems
Re: servo systems
feature extraction in pictures (In English)
Back-propagation/NN benchmarks
Re: Back-propagation/NN benchmarks
Re: Back-propagation/NN benchmarks
Re: Back-propagation/NN benchmarks


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: ISSNNet meeting/dinner at IJCNN
From: issnnet@bucasb.bu.edu
Date: Fri, 25 May 90 13:29:35 -0400

The International Student Society for Neural Networks (ISSNNet) will hold
a meeting on Monday night, June 18, during the IJCNN conference in San
Diego. All interested parties are welcome to join us.

We are planning to organize a (cheap) quick meal right before or after
the meeting, so participants may attend the evening plenary talks. We
also expect to get a lot of people together after the plenaries and head
over to some local establishment (you do not need to be a member to join
us there :-).

Exact details will be available at registration or at the ISSNNet booth
during the conference. For more information send email to:
issnnet@bucasb.bu.edu



------------------------------

Subject: Korean Neural Network Activities
From: Soo Young Lee <sylee%eekaist.kaist.ac.kr@RELAY.CS.NET>
Date: Fri, 25 May 90 11:40:42 -0700


[[ Editor's Note: I am always pleased to have reports of activities from
different labs and (!) countries. I hope readers of this Digest will
take advantage of Soo-Young's offer to act as intermediary between Korean
researchers and Neuron Digest readers!

For my own records, I usually ask new subscribers what they do.
However, I'd like to invite *current* subscribers to submit a short
description of their own or their groups' projects to be published in future
Digests. I'm sure the general readership will be interested in what *you*
have to say and what *you* are doing. -PM ]]

Dear Moderator:

I have some information about neural network activities in Korea.
Please distribute the following to the Neuron Digest group.
Thank you in advance.

Soo-Young Lee
KAIST

- -----------------------------------------------------------------

Subject: Annual Meeting of Neural Network Study Group in Korea
From: sylee%eekaist.kaist.ac.kr@relay.cs.net
Date: May 24th, 1990

On May 19th, 1990, there was a big event on neural networks in Korea:
the ANNUAL MEETING of the Neural Network Study Group (NNSG) of Korea.
The meeting was also supported by the Neural Network Committee of the
Korean Institute of Communication Sciences (KICS), the Special Interest
Group - Artificial Intelligence of the Korean Information Science
Society (KISS), and the Special Interest Group - Korea of the
International Neural Network Society (SIGINNS-Korea). A book of
summaries of the technical presentations is available on request to:

sylee%eekaist.kaist.ac.kr@relay.cs.net

or

Prof. Soo-Young Lee
Dept. of Electrical Engineering
Korea Advanced Institute of Science and Technology
P.O.Box 150 Chongryangni
Seoul, 130-650 Korea.

Also, if you want information on Korean neural network activities, the
NNSG of Korea, the Neural Network Committee of KICS, or SIGINNS-Korea,
I can provide that for you. As Secretary of the NNSG, Chairman of the
Neural Network Committee of KICS, and also Co-chairman of SIGINNS-Korea,
I have volunteered to serve as a point of contact between Korean neural
network researchers and the Neuron Digest group.

Best regards,

Soo-Young Lee


FOREWORD of the Digest

The Neural Network Study Group (NNSG) was established in
August 1988, and has organized regular bimonthly seminars to
provide a forum for information exchange among neural network
researchers and to promote the research environment in Korea.
The NNSG initially started with about 30 participants, but
now has more than 80 professors and industry researchers on
its distribution list. We also understand there are many
other neural network researchers in Korea who may join the
NNSG in the near future. As an attempt to summarize all the
neural network research activities in Korea, an ANNUAL
MEETING is organized by the NNSG in cooperation with the
Neural Network Committee of the Korean Institute of
Communication Sciences, the Special Interest Group -
Artificial Intelligence of the Korean Information Science
Society, and the Special Interest Group - Korea of the
International Neural Network Society (SIGINNS-Korea).

Twenty-eight papers were presented at the Technical
Presentation Sessions. Although only summaries are included
in this digest, we hope it gives you some idea of the
on-going neural network research activities in Korea.


May 19th, 1990
Secretaries
Neural Network Study Group

TABLE OF CONTENTS

SESSION I Theory

13:00-13:20 Computation of Average Vector in Primate Saccadic Eye Movement
System
Choonkil Lee, Kyung-Hee Yoon, and Kye-Hyun Kyung,
Seoul National University

13:20-13:40 Computer Simulation of Central Pattern Generation in Invertebrates
Jisoon Ihm, Seoul National University

13:40-14:00 Harmony Theory
Koo-Chul Lee, Seoul National University

14:00-14:20 Hamiltonian of the Willshaw Model with Local Inhibition
Gyoung Moo Shim and Doochul Kim, Seoul National University

14:20-14:40 Modular Neural Networks: Combining the Coulomb Energy Network
Algorithm and the Error Back Propagation Algorithm
Won Don Lee, Chungnam National University

14:40-15:00 Dynamic Model of Neural Networks
M.Y. Choi, Seoul National University


SESSION II Application (I)

13:00-13:20 Neural Network Control for Robotics and Automation
Se-Young Oh, Pohang Institute of Science and Technology

13:20-13:40 A Modified Hopfield Network: Design, Application and Implementation
Chong Ho Lee, Sung Soo Dong, Inha University
Kye Hyun Kim, Bo Yeon Kim, Seoul National University

13:40-14:00 Frequency Analysis of Speech Signal using Laplacian of Gaussian
Operator
Yang-Sung Lee, Jae-Chang Kim, Ui-Yul Park, Tae-Hoon Yoon,
Yong-Guen Jung, Pusan National University, and
Jong-Hyek Lee, Kyungsung University

14:00-14:20 Korea Phoneme Classification using the Modified LVQ2 Algorithm
Hong K. Kim, Hwang S. Lee, Soo Y. Lee, and Chong K. Un,
Korea Advanced Institute of Science and Technology

14:20-14:40 User Authentification by a Keyboard Typing Patterns using
a Neural Net Algorithm
Jaeha Kim, Joonho Lee, and Kangsuk Lee,
Samsung Advanced Institute of Technology

14:40-15:00 An Artificial Neural Net Approach to Time Series Modeling:
ARMA Model Identification
Sung Joo Park and Jin Seol Yang,
Korea Advanced Institute of Science and Technology


SESSION III Implementation

15:20-15:40 Neural Networks on Parallel Computers
Hyunsoo Yoon and Seung R. Maeng
Korea Advanced Institute of Science and Technology

15:40-16:00 Automatic Design of Neural Circuits by the Single Layer Perceptron
and Unidirectional Type Models
Ho Sun Chung, Kyungpook National University

16:00-16:20 Design of Neural Chip for Conversion of Binary Dither Image to
Multilevel Image
Jin Kyung Ryeu, Kyungpook National University

16:20-16:40 The Design of Neural Circuits for the Preprocessing of the Character
Recognition
Ji-Hwan Ryeo, Taegu University

16:40-17:00 Optical Associative Memory Based on Inner Product Neural Network
Model
Jong-Tae Ihm, Sang-Keun Gil, Han-Kyu Park, Yonsei University
Seung-Woo Chung, ETRI, and Ran-Sook Kim, KTA

17:00-17:20 Optical Holographic Heteroassociative Memory System
Seung-Hyun Lee, Woo-Sang Lee, and Eun-Soo Kim, Kwangwoon University

17:20-17:40 Optical Implementation of Multilayer Neural Networks
Sang-Yung Shin, Korea Advanced Institute of Science and Technology

17:40-18:00 Re-training Neural Networks for Slightly Modified Input Patterns
and its Optical Implementation for Large Number of Neurons
Soo-Young Lee, Korea Advanced Institute of Science and Technology


SESSION IV Application (II)

15:20-15:40 Efficient Image Labeling Using the Hopfield Net and Markov Random
Field
Hyun S. Yang, Korea Advanced Institute of Science and Technology

15:40-16:00 On the Improvements for Applying Neural Network to Real World
Problems
Sung-Bae Cho and Jin H. Kim,
Korea Advanced Institute of Science and Technology

16:00-16:20 An Image Reinforcement Algorithm, ALOPEX, Conducted on Connectivity
by Using Moment Invariant as a Feature Extraction Method
Tae-Soo Chon, Pusan National University, and
Evangelia Tzanakou, Rutgers University

16:20-16:40 Recognition of Printed Hanguel Characters Using Neocognitron
Approach
S.Y. Bang, Pohang Institute of Science and Technology

16:40-17:00 The Recognition of Korean Characters by Neural Networks
Chong-Ho Choi, Yun-Ho Jeon, and Myung-Chan Kim,
Seoul National University

17:00-17:20 Character Recognition Using Iterative Autoassociation with
the Tagged Classification Fields
Sung-Il Chien, Kyungpook National University

17:20-17:40 Sejong-Net: A Neural Net Model for Dynamic Character Pattern
Recognition
Hyukjin Cho, Jun-Ho Kim, and Yillbyung Lee, Yonsei University

17:40-18:00 Hangul Recognition Using Neocognitron with Selective Attention
Eun Jin Kim and Yillbyung Lee, Yonsei University


------------------------------

Subject: Re: Intel N64
From: steinbac@hpl-opus.HP.COM (Gunter Steinbach)
Organization: HP Labs, High Speed Electronics Dept., Palo Alto, CA
Date: 12 Apr 90 23:18:28 +0000


I don't have info on the N64 handy, but I just clipped an article from
Electronic Engineering Times (Apr. 9, 1990, p. 2) with an even wilder claim.
It is short enough to type in - without permission, of course:

Intel, Nestor plan fastest neural chip

Providence, R.I. - Nestor Inc. and Intel Corp (Santa Clara) have landed
a $1.2 million contract from Darpa to fabricate the world's fastest
neural network microchip. The target speed for the N1000 is 150 billion
interconnections per second. The N1000, to be fabricated in Intel's
EEPROM memory operation, will have over 1000 neurons, using 250,000 EEPROM
cells for its synaptic weights and bias signals.
It will be a single, standalone chip custom-tailored to realize
Nestor's patented neural model, called Restricted Coulomb Energy (RCE).
A special version of its development system will control a state machine
that allows the chip to learn by programming its EEPROM.

[[ For us Europeans, that's 150*10^9 interconnections/s. ]]

Guenter Steinbach gunter_steinbach@hplabs.hp.com

------------------------------

Subject: Re: Intel N64
From: samsung!sdd.hp.com!elroy.jpl.nasa.gov!aero!aerospace.aero.org!plonski@think.com (Mike Plonski)
Organization: The Aerospace Corporation
Date: 16 Apr 90 23:22:50 +0000

Intel released the details for a neural network chip at the
1989 IJCNN. You may want to look at the following paper.

"An Electrically Trainable Artificial Neural Network with 10240 'Floating Gate'
Synapses," by Mark Holler et al., IJCNN, June 18-22, 1989, Vol. II,
pages 191-196.

---------------------------------------------------------------------------
 .   .   .__.                        The opinions expressed herein are solely
 |\./|   !__!   Michael Plonski      those of the author and do not represent
 |   |          "plonski@aero.org"   those of The Aerospace Corporation.
_______________________________________________________________________________

------------------------------

Subject: Connections among nodes within a layer
From: eepgszl@EE.Surrey.Ac.UK (LI S Z)
Organization: University of Surrey, Guildford, Surrey, UK. GU2 5XH
Date: 15 Apr 90 12:39:06 +0000

It seems to me that a multilayer NN is generally modeled like this:

Layer 3    O    O
           ^\  /^
           | \/ |
           | /\ |
           |/  \|
Layer 2    O    O
           ^\  /^
           | \/ |
           | /\ |
           |/  \|
Layer 1    O    O

There are connections among nodes between layers,
but no connections among nodes WITHIN a layer. Why?
Is this for simplicity, or is it actually so neurologically?
If we connect nodes within each layer, it becomes like this:

Layer 3    O<-->O
           ^\  /^
           | \/ |
           | /\ |
           |/  \|
Layer 2    O<-->O
           ^\  /^
           | \/ |
           | /\ |
           |/  \|
Layer 1    O<-->O

This model includes the former one as a special case, and should be more
powerful. Why is it not used?

Can anyone explain or tutor a bit, from computational viewpoints
and/or from neuro-bio-physio-anatomo-logy viewpoints in particular?
Thanks a lot.

Stan

------------------------------

Subject: Re: Connections among nodes within a layer
From: usenet@nlm-mcs.arpa (usenet news poster)
Organization: National Library of Medicine, Bethesda, Md.
Date: 15 Apr 90 19:14:48 +0000

>There are connections among nodes between layers,
>but no connections among nodes WITHIN a layer. Why?

One reason for excluding intra-layer connections is that you lose
deterministic behavior of the net. A flip-flop can be created as a
simple net with two nodes on the same layer each inhibiting the other.
The system has two stable states (1,0) and (0,1) which are equally good.
In more complex nets you could end up with race conditions where results
depended on the order of evaluation etc.

Depending on your application, such non-determinism may not be all bad.
It is, for example, a way of building memory into a net after the
training is complete.
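
[[ Editor's note: a minimal sketch of the flip-flop David describes, added
purely for illustration (the weight and bias values are arbitrary): two
sigmoid units on the same layer, each inhibiting the other, settle into one
of two stable states depending on their initial activations.

    import numpy as np

    def settle(a, b, w_inhibit=-10.0, bias=5.0, steps=30):
        """Synchronously update two mutually inhibitory sigmoid units."""
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
        for _ in range(steps):
            a, b = sigmoid(bias + w_inhibit * b), sigmoid(bias + w_inhibit * a)
        return a, b

    print(settle(0.9, 0.1))   # settles near (1, 0)
    print(settle(0.1, 0.9))   # settles near (0, 1), the other stable state

The inhibition has to be strong enough relative to the bias for the
symmetric state (0.5, 0.5) to be unstable; with weak inhibition both units
simply settle at the same intermediate value. ]]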

David States

------------------------------

Subject: Re: Connections among nodes within a layer
From: demers@odin.ucsd.edu (David E Demers)
Organization: University of California, San Diego
Date: 16 Apr 90 17:19:29 +0000

[... more deleted]

If you look into the literature, you will find just about every topology
possible. Competitive learning models generally have mutually inhibitory
connections along a layer, so that the "winner" eventually drives the
others down. See Kohonen, for example. Also Grossberg.
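
[[ Editor's note: to make the lateral-inhibition idea concrete, here is a
small illustrative sketch (mine, not the poster's) of a MAXNET-style
winner-take-all layer: each unit is inhibited by the sum of the others'
activations, so the largest initial activation eventually drives the rest
to zero. The inhibition constant has to be small (roughly less than 1/N
for N units), or everything can be driven to zero.

    import numpy as np

    def winner_take_all(activations, epsilon=0.15, steps=50):
        a = np.asarray(activations, dtype=float)
        for _ in range(steps):
            inhibition = epsilon * (a.sum() - a)   # inhibition from the other units
            a = np.maximum(0.0, a - inhibition)    # activations cannot go negative
        return a

    print(winner_take_all([0.5, 0.9, 0.7, 0.3]))   # only unit 1, the winner, survives
]]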

Connections within layers have been used in semi-recurrent nets, for
example Mike Jordan's nets.

The question of architecture is highly problem dependent. For pattern
associations, there does not seem to be much advantage to connections
within layers (though I haven't done a solid search of research results).
Essentially, for feedforward nets, the task intended is to develop a
mapping between the input vectors and their corresponding patterns. A
feedforward network with one hidden layer can approximate any mapping
(Cybenko; Hornik, Stinchcombe & White; others). The number of units,
connections, and training time needed for learning the mapping are known
only to within very loose bounds, but much work is being done in the area.
Expect to see a few papers at ICNN in San Diego in June.
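
[[ Editor's note: for readers new to the terminology, the one-hidden-layer
mapping referred to above is just y = W2*sigma(W1*x + b1) + b2. The sketch
below is an editorial illustration with arbitrary shapes, not code from the
poster; the cited results say that with enough hidden units this family can
approximate any continuous mapping on a bounded domain.

    import numpy as np

    def feedforward(x, W1, b1, W2, b2):
        """One hidden layer of sigmoid units, linear output units."""
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        hidden = sigmoid(W1 @ x + b1)
        return W2 @ hidden + b2

    # e.g. 3 inputs, 5 hidden units, 2 outputs:
    rng = np.random.default_rng(0)
    y = feedforward(rng.random(3), rng.random((5, 3)), rng.random(5),
                    rng.random((2, 5)), rng.random(2))
]]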

Dave

------------------------------

Subject: Re: Connections among nodes within a layer
From: orc!bu.edu!bucasb!slehar@decwrl.dec.com (Lehar)
Organization: Boston University Center for Adaptive Systems
Date: 17 Apr 90 13:57:28 +0000


Interconnections between neurons within a layer fall into the same
category as recurrent networks of the form...

Layer 2  ---O    O---
         |  ^\  /^  |
         |  | \/ |  |
         |  | /\ |  |
         |  |/  \|  |
Layer 1  -->O    O<--

Such topologies are unpopular among many network modellers because
they introduce a lot of complexity into the system. Specifically, the
state of the system depends on its own past state as well as on the
inputs. That means that you cannot compute the values of the nodes in
one iteration, but must take many little time steps and compute the
whole network at each step. Nodal activations will build up and decay
much like voltages on capacitors and inductors. The accuracy of the
simulation depends critically on the size of the time steps, and the
simulation is best done using a differential equation solving
algorithm such as Runge-Kutta, as is often employed for simulations of
analog electronic circuits.
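
[[ Editor's note: a tiny sketch, added for illustration, of the kind of
time-stepped simulation described above: a two-unit layer with lateral
inhibition, integrated with the simplest possible (Euler) method. The
differential equation used here is generic, not a specific published model.
The settled activations depend on the step size dt, which is why a
Runge-Kutta style solver is normally preferred for accuracy.

    import numpy as np

    def simulate(inputs, w_lateral=0.5, dt=0.01, t_end=10.0):
        """Integrate da/dt = -a + input - w_lateral * (other unit's activation)."""
        a = np.zeros(2)
        for _ in range(int(t_end / dt)):
            da = -a + inputs - w_lateral * a[::-1]  # leak + drive + lateral inhibition
            a = a + dt * da                         # one Euler step
        return a

    print(simulate(np.array([1.0, 0.6])))   # settles to roughly (0.93, 0.13)
]]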

Recurrent networks, however, are very common in the brain, and just
about every neural pathway has a complementary pathway going in the
other direction, and just about all neural layers have lateral
connections. For example, the optic nerve from the retina to the
lateral geniculate body is uni-directional, but from the geniculate to
the visual cortex (V1) there is a bi-directional path, with as much
information going towards your eye as there is going towards your
brain. Also, the cells in the retina, the geniculate and the cortex
are all richly interconnected among themselves.

Certain neural modellers (most notably Grossberg) make use of lateral
and recurrent pathways using dynamic system simulations. The
principal advantage of such simulations is that complex and subtle
behavior can be elicited from very simple and elegant architectures.
The power of dynamic and recurrent architectures will only be fully
realized when we liberate ourselves from digital simulation and can
build directly in analog hardware. In the meantime, such systems are
still the best way to model the brain directly where your priorities
are not speed and efficiency, but rather modelling accuracy of the
biological system.

(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)
(O)((O))((( slehar@bucasb.bu.edu )))((O))(O)
(O)((O))((( Steve Lehar Boston University Boston MA )))((O))(O)
(O)((O))((( (617) 424-7035 (H) (617) 353-6425 (W) )))((O))(O)
(O)((O))(((O)))((((O))))(((((O)))))(((((O)))))((((O))))(((O)))((O))(O)

------------------------------

Subject: Re: Connections among nodes within a layer
From: elroy.jpl.nasa.gov!aero!robert (Bob Statsinger)
Date: 17 Apr 90 19:52:06 +0000

>There are connections among nodes between layers,
>but no connections among nodes WITHIN a layer. Why?

Look at Grossberg's Adaptive Resonance Theory. It DOES
use connections within a layer to implement competitive learning.

>For simplicity or neurologically it is so?

Simplicity and computational tractability.

Bob

------------------------------

Subject: Is there any quantitative measure of accuracy of a bp network?
From: joshi@wuche2.wustl.edu (Amol Joshi)
Organization: Washington University, St. Louis MO
Date: 15 Apr 90 21:49:21 +0000


hi folks!

i have to clarify my thoughts about how to interpret outputs obtained
from a back-propagation neural network and also learn what the common
practice is. so, if you neural network gurus let me know your opinions
about what follows, i would greatly appreciate it.

this is a typical problem, i am sure most of you have encountered it at
least once:

say a back prop nn is to be used as a pattern classifier. i train a net
so that my network gives values close to 0.995 to the desired nodes. (let
us assume that the patterns are discrete - i.e. not more than one output
node is required to flag any pattern in the domain).

now, i use this trained network on noisy data and obtain values which are
not `perfect'. let's say that in case I, the network flags the node
indicative of a particular pattern with an output value of 0.45. the rest
of the nodes have values which are *much less* than 0.45 (how much is
*much less* is vague, but say any value about two orders of magnitude
smaller is *much less* - i.e. anything less than 0.0045). in my
interpretation i call this performance 100% accurate. however, i say
that the network is only 45% certain of its own performance. what i am
saying is that as long as the output node value is dominating other
values in the output layer, my network is performing well if not
perfectly.

on the other hand, say in case II, if my network assigns values of 0.90
and 0.85 to two nodes (where only one of them is supposed to be flagged
with a high value), even though the values are close to 1.0, i don't
consider my network producing `accurate' results (but at the same time i
have no way of quantifying the accuracy of this network).

what is the common practice in interpreting the output values? is there
any quantitative measure for accuracy of the network?
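
[[ Editor's note: there is no single agreed-upon measure, but one simple
heuristic, sketched here purely as an illustration, is to take the winning
output unit as the classification and the margin between the two largest
outputs as a per-pattern confidence score; accuracy is then the fraction of
test patterns whose winner matches the desired node. This separates "was
the answer right" (case I above) from "how sure was the net" (case II).

    import numpy as np

    def classify_with_confidence(outputs):
        """outputs: 1-D array of output-unit activations for one test pattern."""
        order = np.argsort(outputs)[::-1]          # unit indices, largest first
        winner, runner_up = order[0], order[1]
        return winner, outputs[winner] - outputs[runner_up]

    # Case I: clear winner, modest absolute activation -> large margin.
    print(classify_with_confidence(np.array([0.45, 0.004, 0.002])))
    # Case II: two outputs near 1.0 -> right winner, almost no margin.
    print(classify_with_confidence(np.array([0.90, 0.85, 0.10])))
]]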

could you please let me know your opinions?

thanx.

:amol

-----------------------------------------------------------
Amol Joshi | joshi@wuche2.wustl.edu
Department of Chemical Engineering |
Washington University in St. Louis.|

------------------------------

Subject: Back Propagation for Training Every Input Pattern with Multiple Output
From: cs.utexas.edu!uwm.edu!mrsvr.UUCP!neuron.uucp!ravula@tut.cis.ohio-state.edu (Ramesh Ravula)
Date: 17 Apr 90 17:13:35 +0000




Has anyone used the back-propagation algorithm for training where every
input pattern has to be associated with multiple output patterns? If so,
what version (conventional, recurrent, etc.) of the back-prop algorithm
did you use? I would be interested in knowing the results, especially the
learning times. I would also appreciate it if anyone could point me to
any publications in this area. Please reply to my e-mail address given
below.


Ramesh Ravula
GE Medical Systems
Mail W-826
3200 N. Grandview Blvd.
Waukesha, WI 53188.

email: {att|mailrus|uunet|phillabs}!steinmetz!gemed!ravula
or
{att|uwvax|mailrus}!uwmcsd1!mrsvr!gemed!ravula

------------------------------

Subject: Re: Back Propagation for Training Every Input Pattern with Multiple Output
From: thomasp@lan.informatik.tu-muenchen.dbp.de (Patrick Thomas)
Organization: Inst. fuer Informatik, TU Muenchen, W. Germany
Date: 20 Apr 90 16:43:45 +0000


I don't have it at hand, but I remember a net architecture by Jordan
described in the 3rd PDP book, "Explorations in Parallel Distributed
Processing", McClelland/Rumelhart, MIT Press, which would fit your needs.

It's called something with SEQUENTIAL and backprop-trained a net to
associate a sequence of output patterns (actually four of them, I think)
with an input pattern. This gave you a kind of "command pattern" as input
which initiated a sequence of "action patterns" as output.

The trick was to divide the input layer and to have feedback connections
from the output layer to one part of the so-called input layer. This part
with the feedback connections and NO external input finally learned a
representation of the output sequence.
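
[[ Editor's note: a rough reconstruction of the idea Patrick describes,
written for illustration only -- it is not the PDP-book code, the weight
shapes are arbitrary, and training is omitted. A fixed "command" (plan)
pattern plus state units fed a decayed copy of the previous output generate
a short sequence of output patterns.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def jordan_sequence(plan, W_plan, W_state, W_out, n_steps=4, decay=0.5):
        """Generate n_steps output patterns from a single command pattern."""
        state = np.zeros(W_out.shape[1])           # state units start at zero
        outputs = []
        for _ in range(n_steps):
            hidden = sigmoid(plan @ W_plan + state @ W_state)
            out = sigmoid(hidden @ W_out)
            outputs.append(out)
            state = decay * state + out            # feedback from output to state
        return outputs

    # e.g. a 3-unit command pattern, 5 hidden units, 2 output units:
    rng = np.random.default_rng(0)
    seq = jordan_sequence(rng.random(3), rng.random((3, 5)),
                          rng.random((2, 5)), rng.random((5, 2)))
]]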

This was by far the most interesting model simulated with the PDP
software, but it was a long time ago and I don't remember the details.
The reference in the PDP book should be around "Jordan (1986)" or so.

Patrick

------------------------------

Subject: servo systems
From: voder!dtg.nsc.com!andrew@ucbvax.Berkeley.EDU (Lord Snooty @ The Giant Poisoned Electric Head )
Organization: National Semiconductor, Santa Clara
Date: 17 Apr 90 23:32:21 +0000

I am interested in compiling a few references on the application of NNs
to motion control systems - robotics and the like. If there's sufficient
interest, I'll post the results.
thanks in advance, andrew

...........................................................................
Andrew Palfreyman andrew@dtg.nsc.com Albania during April!

------------------------------

Subject: Re: servo systems
From: plonski@aerospace.aero.org (Mike Plonski)
Organization: The Aerospace Corporation
Date: 19 Apr 90 18:58:26 +0000

The latest IEEE Control Systems Magazine is a special issue on Neural Nets.

%\def\Zzz{Special Issue on Neural Networks}
%J |IEECSM|
%A \Zzz
%V 10
%N 3
%D |APR| 1990
%K Special Issue on Neural Networks in Control Systems
%X {\bf Table of Contents:}
Neural Networks in Control Systems;
Associative Memories via Artificial Neural Networks;
Neural Networks for Self-Learning Control Systems;
Modeling Chemical Process Systems via Neural Computations;
Neural Networks for System Identification;
A Comparison Between CMAC Neural Network Control and Two Traditional Adaptive
Control Systems;
Back-Propagation Neural Networks for Nonlinear Self-Tuning Adaptive Control;
Use of Neural Networks for Sensor Failure Detection in a Control System;
Learning Algorithms for Perceptrons Using Back-Propagation with Selective
Updates;
Neuromorphic Pitch Attitude Regulation of an Underwater Telerobot;
Mobile Robot Control by a Structured Hierarchical Neural Network;
Integrating Neural Networks and Knowledge-Based Systems for Intelligent Robotic
Control;

------------------------------------------------------------------------------
 .   .   .__.                        The opinions expressed herein are solely
 |\./|   !__!   Michael Plonski      those of the author and do not represent
 |   |          "plonski@aero.org"   those of The Aerospace Corporation.
_______________________________________________________________________________

------------------------------

Subject: feature extraction in pictures (In English)
From: andrew@karc.crl.go.jp (Andrew Jennings)
Organization: Communications Research Laboratory
Date: 15 May 90 02:56:38 +0000

The problem:

Given a set of pictures (possibly quite large) I want to break up each
picture into a set of objects, later to be used for retrieval. So if for
example a picture features a baseball cap, this is stored as a feature
that can be used for later retrieval.


This seems to me to be a good area to apply self-organising networks. I
am interested in pointers to the literature, and contacting others who
are interested in this problem.

Thanks.


Andrew Jennings        Kansei Advanced Research Center,        1990: year of
                       Communications Research Laboratory,     the demons
andrew@crl.go.jp       Kobe, Japan

------------------------------

Subject: Back-propagation/NN benchmarks
From: voder!nsc!taux01!cyusta@ucbvax.Berkeley.EDU ( Yuval Shachar)
Organization: National Semiconductor (IC) Ltd, Israel
Date: 15 May 90 08:23:19 +0000


The following issues have been on my mind for a while now, so I
thought I may as well lighten the burden a little. It seems to me that
about 80% of the population of the galaxy are, at least in their
spare time, trying to improve the back-propagation algorithm, be it
propagation, percolation or imitation :-). It also seems that most of
the methods succeed to some extent, and that is where I'm a little
confused:

1. There are no strict performance criteria for bp networks

Many papers quote the number of cycles. This is meaningless when
comparing with conjugate gradient techniques for example, since there
the number of cycles is greatly reduced, but each cycle involves line
minimizations that require many evaluations of the error function (i.e.
feed-forward passes). Also, very small networks (e.g. the 2-2-1
classic for the XOR problem) seem to be missing the point a little.
The same goes for random starting points in error-space, unless the
points' locations in error-space are well understood.

2. There are no standard benchmarks for (bp) networks

This is very similar to the first point. A benchmark should consider a
moderately sized network (i.e. input-space and output-space dimensions),
and even more important, a moderately sized problem which is WELL
UNDERSTOOD, i.e. the error landscape is known, has some local minima
(sigh), narrow valleys and all the other features we have learned to
love. (I even recall reading a paper by R. Hecht-Nielsen mentioning
that they ran a complete scan of the error-space of a NN for more than
a week on a Cray before they had the full picture.) The benchmark
should include several starting points in error-space and a
well-defined stopping criterion. It should also have a large set of
vectors to test the network after convergence, both for learning
precision and for extrapolation (generalization) ability.

I would even go further and expect some sort of a standard
specification format for neural network input, structure and weight
state, etc., so that anyone writing a simulator would be able to use
these benchmarks with no difficulty.
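
[[ Editor's note: no such standard specification format existed at the time
of writing; the dictionary below is only a sketch of what one might look
like, with every field name and file name invented for illustration.

    xor_benchmark = {
        "name": "xor-2-2-1",
        "topology": [2, 2, 1],                  # units per layer, input to output
        "activation": "sigmoid",
        "initial_weight_sets": ["xor-start-01.dat", "xor-start-02.dat"],
        "training_set": [([0, 0], [0]), ([0, 1], [1]),
                         ([1, 0], [1]), ([1, 1], [0])],
        "test_set": "same-as-training",
        "stopping_criterion": {"max_abs_output_error": 0.1},
        "cost_metric": "error-function evaluations",  # not epochs, for fair comparison
    }
]]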

It seems to me that Minsky and Papert's Perceptrons is still one of the
most important books written in the field, and its conclusions are
still ignored by many researchers. (I know I sin by oversimplifying and
generalizing things a little, but, hey, I'm a neural network myself
:-))

Finally, I am currently at a stage where such a benchmark would help
me check my own contribution to bp improvements (yes, I'm also a part
of these 80% ..). If anybody out there can contribute any kind of a nn
benchmark, or provide a pointer to one, I will appreciate it.
Otherwise I guess I will spend some time on doing just that, but will
be glad to share it with whoever is interested. Any inputs are
welcome.

------------------------------

Subject: Re: Back-propagation/NN benchmarks
From: ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Date: 15 May 90 20:46:08 +0000


Well, there are some "standard" benchmarks.

XOR.....................to make sure the network method has some hope of working
                        (similar to hooking an Op-Amp up as a follower before you
                        begin testing it to make sure it has some hope of working)

Binary Encoders.........A binary number of n digits is applied as input. The
                        network has m hidden units s.t. n>m, often m=log2(n),
                        and the output is trained to replicate the input
                        pattern with n output units.

Chaotic Time Series.....Used by Lapedes and Farber (for BP I think), and
                        Moody and Darken (for Locally Receptive Fields). The
                        network is trained to predict future values of the
                        series based on a few values from the series in the
                        past.

Learning to Oscillate...for recurrent networks, train the network to be a
                        sine or squarewave generator.

Two Spirals Problem.....The network is trained to tell whether a given set
                        of x,y coordinates lies on spiral #1 or spiral #2.
                        The two spirals are intertwined. Very difficult for
                        backpropagation. Doable in quickprop, even
                        reasonable with cascade-correlation.
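
[[ Editor's note: for readers who want to try the two-spirals task, here is
an illustrative sketch of generating the data, following the usual
description of the benchmark; the constants match the commonly circulated
CMU version but should be treated as approximate rather than canonical.

    import numpy as np

    def two_spirals(n_per_spiral=97):
        """Return (points, labels) for two intertwined spirals, labelled 0 and 1."""
        i = np.arange(n_per_spiral)
        radius = 6.5 * (104 - i) / 104         # radius shrinks toward the centre
        angle = i * np.pi / 16.0
        x, y = radius * np.cos(angle), radius * np.sin(angle)       # spiral #1
        points = np.concatenate([np.stack([x, y], axis=1),
                                 np.stack([-x, -y], axis=1)])       # spiral #2
        labels = np.concatenate([np.zeros(n_per_spiral), np.ones(n_per_spiral)])
        return points, labels
]]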

-Thomas Edwards


------------------------------

Subject: Re: Back-propagation/NN benchmarks
From: golds@fjcnet.GOV (Rich Goldschmidt)
Organization: Federal Judicial Center, Washington, D.C.
Date: 16 May 90 12:54:17 +0000

One of the standard benchmarks for neural networks is the concentric
spirals problem, originally proposed by Alexis Wieland. It is described
in the Byte article by Touretzky and (?). The problem is to classify
which of the two adjacent spirals you are on. There are some nice
examples of the representations that hidden units take on in learning to
solve this problem in that Byte article.

Rich Goldschmidt
uunet!fjcp60!golds or golds@fjcp60.uu.net

------------------------------

Subject: Re: Back-propagation/NN benchmarks
From: cyusta@taux01.UUCP ( Yuval Shachar)
Organization: National Semiconductor (IC) Ltd, Israel
Date: 17 May 90 06:49:30 +0000

>Well, there are some "standard" benchmarks.
>
> ...
>
>XOR ... >Binary Encoders ... >Chaotic Time Series ...

Yes, these are all good examples of classic NN problems. It takes a lot
more to make benchmarks out of them, however. If in addition we had
a set of initial weight-states and a matching number for each one
specifying the performance of the net for that initial state, and if
these starting points were interesting enough, and if the performance
criteria were well defined, etc., we would have something like a
benchmark. The fact is that many researchers go through analyzing these
problems before they can use them on their own network/algorithm. Apart
from the fact that often this extra work could be spared, it would also
grant more meaning to posted results.


Yuval Shachar cyusta@taux01.nsc.com cyusta@nsc.nsc.com
shachar@taurus.bitnet shachar@math.tau.ac.il
National Semiconductor (Israel) P.O.B. 3007, Herzlia 46104, Israel
Tel. +972 52 522310 TWX: 33691, fax: +972-52-558322

------------------------------

End of Neuron Digest [Volume 6 Issue 35]
****************************************
