Neuron Digest   Monday,  8 Jul 1991                Volume 7 : Issue 39 

Today's Topics:
Aspirin/MIGRAINES v4.0 Users
Re: Boeing plant tours for IJCNN-91-Seattle
Neuron Digest V7 #38
Neural Chess: Presentation of Finding
Information about ...
Requesting Email Address
RE: Neuron Digest V7 #38
TR - Phase-locking without oscillations
Thesis/TR - Soft Competitive Adaptation
TSP paper available
New Paper: A Fast and Robust Learning Algorithm for Feedforward NN
two new papers on back-prop available from neuroprose


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

----------------------------------------------------------------------

Subject: Aspirin/MIGRAINES v4.0 Users
From: Russell Leighton <russ@oceanus.mitre.org>
Date: Fri, 28 Jun 91 10:47:09 -0400

Aspirin/MIGRAINES v4.0 Users

Could those groups presently using the Aspirin/MIGRAINES v4.0 neural
network simulator from MITRE please reply to this message. A brief
description of your motivation for using this software would be useful
but not necessary.

We are compiling a list of users so that we may more easily distribute
the next release of software (Aspirin/MIGRAINES v5.0).

Thank you.

Russell Leighton

INTERNET: russ@dash.mitre.org

Russell Leighton
MITRE Signal Processing Lab
7525 Colshire Dr.
McLean, Va. 22102
USA



------------------------------

Subject: Re: Boeing plant tours for IJCNN-91-Seattle
From: Don Wunsch <dwunsch@atc.boeing.com>
Date: Tue, 02 Jul 91 10:37:08 -0700


In a recent note, I wrote:

> IJCNN-91-Seattle is almost upon us! Come
> ...
> Boeing will be offering a limited number of free tours
> ...
> first-served basis. Families may also be accommodated, but
> only if space is available. See the 747 and 767 being built,
> ...
Forgot to mention: the minimum age for the Boeing plant tour is
twelve.

Hope to see you there!

Don


------------------------------

Subject: Neuron Digest V7 #38
From: aj3u@larch.cs.Virginia.EDU
Date: Tue, 02 Jul 91 22:17:16 -0400

I am looking for references to the conjugate-gradient algorithm for
training feedforward networks. Any help would be appreciated.
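
For readers new to the method, here is a minimal sketch in Python of what
conjugate-gradient training of a small feedforward network can look like.
The network size, the toy data, and the use of scipy.optimize are my own
illustrative assumptions, not taken from any particular reference.

import numpy as np
from scipy.optimize import minimize

# Toy data and a one-hidden-layer network; all sizes here are arbitrary.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 2))               # inputs
T = (X[:, 0] * X[:, 1] > 0).astype(float)      # XOR-like targets
n_in, n_hid = 2, 5

def unpack(w):
    # Split the flat parameter vector into layer weights and biases.
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def error(w):
    # Sum-of-squares error of the network over the whole training set.
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((y - T) ** 2)

w0 = rng.normal(scale=0.1, size=n_in * n_hid + 2 * n_hid + 1)
result = minimize(error, w0, method='CG')      # conjugate-gradient search
print("final error:", result.fun)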

Asim Jalis.



------------------------------

Subject: Neural Chess: Presentation of Finding
From: David Kanecki <kanecki@vacs.uwp.edu>
Date: Tue, 02 Jul 91 21:57:27 -0500

[[ Editor's Note: Readers will remember that Kanecki published
information about a chess-playing program using neural networks several
months ago. It's a pleasure to see he is continuing the effort. As most
of you know, chess was one of the "holy grails" of AI in the 60's, though
now it's brute force which seems to succeed. The sequential nature of
play makes chess a challenge for ANN architectures, although the general
case uses board "states". Lashley's conundrum still stands. -PM ]]


Basis: Applied Game Theory
Advanced Logic Test Using Reversed Black King and Queen Position


Author/Creator:
David H. Kanecki, Bio. Sci., A.C.S.
kanecki@vacs.uwp.wisc.edu

Introduction
------------



A chess program was developed using neural network methods. The program
does not use a move database, as conventional chess programs do. Instead,
the program thinks its way to a solution and can resolve various traps by
using its neural network.

To test the program I used the standard chess board starting position for
every piece except the black king and queen. The black king was put at
square E8 as opposed to D8, and the black queen was put at square D8 as
opposed to E8. I did this to test the ability of the program to think its
way to a solution and to see how it would work in an unconventional setting.


Data
----

In this game the computer program used the white pieces while I used the
black pieces. The white pieces are initially located in rows 1 and 2 and
the black pieces are initially located in rows 7 and 8. Below is a
transcript of a game where the computer checkmated a human opponent:

White (Computer) Black (Myself)
---------------- --------------
1. E2-E3 B7-B6
2. F1-B5 E7-E6
3. H2-H3 F8-C4
4. B5xC4 C7-C6

The symbol "x" is used to indicate that a piece was captured. The first
position indicates the position of the attacking piece. The second
position indicates the square where an opposing piece was captured.

5. E3-E4 B6-B5
6. C5-D4 G8-F6
7. E4-E5 F6-D5
8. E1-E4 C8-A6
9. H3-H4 C6-C5
10. G1-F3 D8-C8
11. H4-H5 ------

Start of multistage attack and planning.

11. ------ C5-C4
12. D3-E2 B8-C6
13. E3-H4 ------

Reinforcement of plan

13. ------ C8-C7
14. A2-A3 C6xE5
15. H4-G5 F7-F6
16. G5xG7 ------

This is the first chess trap encountered in the program. The white queen
has presented herself and left black with four options to resolve. This
trap was derived purely from play and not from a pre-programmed strategy
or database! Thus, black can stop only one of the four options.

16. ------ H8-F8
17. H5-H6 F8-F7
18. G7-G6 check ------

Again, a good move by the white queen, since it forces black to consider
different decisions.

18. ------ E8-E7
19. G8xA8 E5xF3
20. E2xF3 C7-F4
21. A8-G8 B5-B4
22. H1-H5 C4-C3
23. A3xB4 A6-C4

Black is trying a new strategy to change the state of the game.

24. B2xC3 ------

White chose a good option to escape black's gambit.

24. ------ F4-H2

Start of secondary black gambit.

25. A1-A7 H2-H1 check

The gambit was a success for black.

26. H5xH1 ------

White is responding with its own gambit to begin the second phase of the
game.

26. ------ E6-H5
27. G8-B8 D5-F4
28. F3-G4 F4xG2
29. A7-B7 ------

White solved black's third gambit and used it to black's
disadvantage.

29. ------ G2-F4
30. G4-F5 F4-E2 check
31. D1-E1 E6-E5
32. B8-D8 F7-E7
33. B7-B6 D6-D5

Beginning of run by black king.

34. D8-C7 E2-F4
35. B6xF6 E5-E4
36. C1-B2 E7-E5
37. C7xB7 checkmate of black king


Conclusion
----------

1) The game above shows that even with reversed pieces, advanced logic
can be simulated and studied in a normal chess game.

2) Advanced knowledge may be difficult to study because information may
not be available or may have only limited distribution. The chess program
addresses both limitations.

3) This program is the result of 15 years' work in chess, programming (SR-56
through microcomputer), computer science, and biological science
training, with shared technology given for the benefit of all. However,
disclosure and logic transfer are based on fair equity and return by
users to the developer and creator.

4) The program has the ability to form a strategy, develop a trap, and
solve a trap using neural networks.


Closing Comments
----------------

From time to time I will report additional findings. Also, the work
on the current project is growing at an exponential rate due to better
resources.

If anyone has a chess layout I will be happy to simulate it on a
correspondence basis ( e-mail or regular mail ).

Finally, any pro or con comments on the above experiment and results
would be appreciated.


David H. Kanecki, Bio. Sci., A.C.S.
kanecki@vacs.uwp.wisc.edu


------------------------------

Subject: Information about ...
From: Nestor <FJIMENEZ%ANDESCOL@CUNYVM.CUNY.EDU>
Date: Tue, 02 Jul 91 16:41:27 -1100

My name is Nestor, and I am working on neural networks using RAM (random
access memories). I am interested in a bibliography on this topic, and I
need to locate Mr Igor Aleksander at Brunel University or Imperial College
London in the UK. If somebody knows his BITNET address, please send it to me.

I also need to find the following paper: "Annealing in RAM-based Learning
Networks" by D.K. Milligan (1987).

I am interested in participating in a discussion about this topic; if
anyone else is interested, please contact me. I am at fjimenez@ANDESCOL.

Thanks in advance. Nestor.


------------------------------

Subject: Requesting Email Address
From: bahrami@spectrum.cs.unsw.oz.au (Mohammad Bahrami)
Date: Wed, 03 Jul 91 16:17:06 +0600

[[ Editor's Note: I know there is at least one USF reader. Can you be
of assistance? On the other hand, I would expect a postcard might be a
surer way of directly reaching the authors in question... even a
postcard from "down under." -PM ]]

Hi,

Will you please give me the e-mail address of Steve G. Romaniuk or
Lawrence O. Hall, both at the Department of Computer Science and
Engineering, University of South Florida, Tampa, FL 33620?

I want to get more information about the paper they presented at
IJCNN'89.

Thanks
bahrami@spectrum.cs.unsw.oz.au



------------------------------

Subject: RE: Neuron Digest V7 #38
From: avlab::mrgate::"a1::raethpg"%avlab.dnet@wrdc.af.mil
Date: Fri, 05 Jul 91 14:05:46 -0400

From: NAME: Major Peter G. Raeth
FUNC: WRDC/AAWP-1
TEL: AV-785 513-255-7854 <RAETHPG AT A1 AT AVLAB>
To: NAME: VMSMail User "neuron <"neuron@hplpm.hpl.hp.com"@LABDDN@MRGATE>



Some readers asked for a longer abstract on the surface interpolation
neural network mentioned in the last issue.


Technical Note Available:
Hammering Neural Networks For Surface Interpolation

For a copy of this technical note contact:
Peter Raeth
University of Dayton Research Institute
Kettering Laboratory, KL-463
Dayton, Ohio 45469-0140 USA


Abstract:

A hyper-surface hammering algorithm has been developed that generalizes
data from processes that have one output variable and one or more input
variables. This algorithm achieves several desirable properties by
combining standard methods in a novel manner. These methods include the
use of linear least squares, Gaussian radial basis functions, and
diagonally dominant matrices. An easily visualized physical model of the
algorithm ensures that this combination of methods is appropriate and
practical. The model has natural potential for parallel implementation
and representation as a neural network. It may be valuable for solid
modeling in CAD/CAM in that it could significantly decrease the amount of
data required to represent a given two-dimensional surface. It also has
potential for classification and other pattern recognition tasks as well
as static-process extrapolation.
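
As a very rough illustration of two ingredients named in the abstract
(Gaussian radial basis functions fitted by linear least squares), and
emphatically not the hammering algorithm itself, a Python sketch might look
like the following; the centers, the width, and the data are invented for
the example.

import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (200, 2))                         # scattered (x, y) samples
z = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])  # surface values

centers = pts[::10]      # every 10th sample used as a basis-function center
width = 0.5              # Gaussian width (assumed)

def design(points):
    # Gaussian RBF design matrix: one column per center.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Linear least squares gives the basis-function weights.
weights, *_ = np.linalg.lstsq(design(pts), z, rcond=None)

test = np.array([[0.25, -0.4]])
print("interpolated:", design(test) @ weights)
print("true value:  ", np.sin(np.pi * 0.25) * np.cos(np.pi * -0.4))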





------------------------------

Subject: TR - Phase-locking without oscillations
From: Christof Koch <koch@CitIago.Bitnet>
Date: Thu, 27 Jun 91 03:12:08 -0700

The following paper is available by anonymous FTP from Ohio State
University, from pub/neuroprose. The file is called "koch.syncron.ps.Z".




A SIMPLE NETWORK SHOWING BURST
SYNCHRONIZATION WITHOUT FREQUENCY-LOCKING


Christof Koch and Heinz Schuster


ABSTRACT: The dynamic behavior of a network model consisting of
all-to-all excitatory coupled binary neurons with global inhibition is
studied analytically and numerically. It is shown that for random
input signals, the output of the network consists of synchronized
bursts with apparently random intermissions of noisy activity. We
introduce the fraction of simultaneously firing neurons as a measure
for synchrony and prove that its temporal correlation function
displays, besides a delta peak at zero indicating random processes,
strongly damped oscillations. Our results suggest that synchronous
bursts can be generated by a simple neuronal architecture which
amplifies incoming coincident signals. This synchronization process is
accompanied by damped oscillations which, by themselves, however, do
not play any constructive role in this and can therefore be considered
to be an epiphenomenon.

Key words: neuronal networks / stochastic activity / burst
synchronization / phase-locking / oscillations
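
A toy Python sketch of the kind of quantity the abstract studies, namely
the fraction of simultaneously firing binary neurons and its temporal
autocorrelation. The update rule and the parameters J_exc, g_inh and theta
below are my own guesses for illustration; they are not meant to reproduce
the paper's bursting behaviour.

import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 2000
J_exc, g_inh, theta = 1.5, 1.0, 0.9    # coupling, inhibition, threshold (assumed)

s = rng.integers(0, 2, N)              # binary neuron states
frac = np.empty(T)                     # fraction of simultaneously firing neurons
for t in range(T):
    a = s.mean()                                      # network activity
    drive = J_exc * a - g_inh * a + rng.random(N)     # excitation - inhibition + noise
    s = (drive > theta).astype(int)
    frac[t] = s.mean()

# Temporal autocorrelation of the synchrony measure.
f = frac - frac.mean()
ac = np.correlate(f, f, mode='full')[len(f) - 1:]
ac /= ac[0]
print("autocorrelation at lags 0..5:", np.round(ac[:6], 3))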


For comments, send e-mail to koch@iago.caltech.edu.

Christof


P.S. And this is how you can FTP and print the file:


unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose (actually, cd neuroprose)
ftp> binary
ftp> get koch.syncron.ps.Z
ftp> quit
unix> uncompress koch.syncron.ps.Z
unix> lpr koch.syncron.ps

Read and be illuminated.








------------------------------

Subject: Thesis/TR - Soft Competitive Adaptation
From: "
Steven J. Nowlan" <nowlan@helmholtz.sdsc.edu>
Date: Thu, 27 Jun 91 11:38:58 -0700

[[ Editor's Note: Please read the directions for obtaining this thesis
carefully. There is a charge for copies, and the requesting address is
different from the poster's. -PM ]]

The following technical report version of my thesis is now available from
the School of Computer Science, Carnegie Mellon University:

-----------------------------------------------------------------------------

Soft Competitive Adaptation:
Neural Network Learning Algorithms
based on Fitting Statistical Mixtures

CMU-CS-91-126

Steven J. Nowlan
School of Computer Science
Carnegie Mellon University


ABSTRACT

In this thesis, we consider learning algorithms for neural networks which
are based on fitting a mixture probability density to a set of data.

We begin with an unsupervised algorithm which is an alternative to the
classical winner-take-all competitive algorithms. Rather than updating
only the parameters of the ``winner'' on each case, the parameters of all
competitors are updated in proportion to their relative responsibility
for the case. Use of such a ``soft'' competitive algorithm is shown to
give better performance than the more traditional algorithms, with little
additional cost.
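
In case a concrete picture helps, here is a minimal Python sketch (my own
notation, not code from the thesis) of how this differs from winner-take-all:
every unit is updated in proportion to its responsibility for the current
case. The learning rate and variance are arbitrary choices.

import numpy as np

def soft_competitive_step(centers, x, lr=0.05, var=0.1):
    # Responsibilities: relative probability that each unit "owns" x.
    d2 = ((centers - x) ** 2).sum(axis=1)
    p = np.exp(-d2 / (2.0 * var))
    r = p / p.sum()
    # All units update, weighted by responsibility (not just the winner).
    return centers + lr * r[:, None] * (x - centers)

rng = np.random.default_rng(3)
centers = rng.normal(size=(4, 2))
for x in rng.normal(size=(500, 2)):
    centers = soft_competitive_step(centers, x)
print(centers)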

We then consider a supervised modular architecture in which a number of
simple ``expert'' networks compete to solve distinct pieces of a large
task. A soft competitive mechanism is used to determine how much an
expert learns on a case, based on how well the expert performs relative
to the other expert networks. At the same time, a separate gating network
learns to weight the output of each expert according to a prediction of
its relative performance based on the input to the system. Experiments
on a number of tasks illustrate that this architecture is capable of
uncovering interesting task decompositions and of generalizing better
than a single network with small training sets.
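
Similarly, a much-simplified Python sketch of the modular idea, with linear
experts and a softmax gating network; the piecewise-linear task, learning
rate, and variance are arbitrary assumptions, and this is not the thesis
implementation.

import numpy as np

rng = np.random.default_rng(4)
n_in, n_experts, lr, var = 1, 2, 0.05, 0.25

W_exp = rng.normal(size=(n_experts, n_in))   # one linear expert per row
W_gate = rng.normal(size=(n_experts, n_in))  # gating network weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Piecewise task: two linear regimes the experts can split between them.
X = rng.uniform(-1, 1, (2000, 1))
Y = np.where(X[:, 0] < 0, 2.0 * X[:, 0], -3.0 * X[:, 0])

for x, y in zip(X, Y):
    g = softmax(W_gate @ x)                   # gating probabilities
    o = W_exp @ x                             # each expert's prediction
    lik = g * np.exp(-(y - o) ** 2 / (2 * var))
    h = lik / lik.sum()                       # responsibilities
    W_exp += lr * (h * (y - o))[:, None] * x  # experts learn in proportion to h
    W_gate += lr * (h - g)[:, None] * x       # gate moves toward responsibilities

print("expert weights:", W_exp.ravel())       # ideally near the two slopes 2 and -3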

Finally, we consider learning algorithms in which we assume that the
actual output of the network should fall into one of a small number of
classes or clusters. The objective of learning is to make the variance of
these classes as small as possible. In the classical decision-directed
algorithm, we decide that an output belongs to the class it is closest to
and minimize the squared distance between the output and the center
(mean) of this closest class. In the ``soft'' version of this algorithm,
we minimize the squared distance between the actual output and a weighted
average of the means of all of the classes. The weighting factors are the
relative probability that the output belongs to each class. This idea may
also be used to model the weights of a network, to produce networks which
generalize better from small training sets.

-------------------------------------------------------------------------

Unfortunately there is NOT an electronic version of this TR. Copies may
be ordered by sending a request for TR CMU-CS-91-126 to:

Computer Science Documentation
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA
15213
USA

There will be a charge of $10.00 U.S. for orders from the U.S., Canada or
Mexico and $15.00 U.S. for overseas orders to cover copying and mailing
costs (the TR is 314 pages in length). Checks and money orders should be
made payable to Carnegie Mellon University. Note that if your institution
is part of the Carnegie Mellon Technical Report Exchange Program there
will be NO charge for this TR.

REQUESTS SENT DIRECTLY TO MY E-MAIL ADDRESS WILL BE FILED IN /dev/null.


- Steve

(P.S. Please note my new e-mail address is nowlan@helmholtz.sdsc.edu).

------------------------------

Subject: TSP paper available
From: "
B. Fritzke" <fritzke@immd2.informatik.uni-erlangen.de>
Date: Wed, 03 Jul 91 09:22:20 +0700

I just placed a paper in the Neuroprose Archive, which has been
submitted to IJCNN-91 Singapore.

The filename is: fritzke.linear_tsp.ps.Z

And here's the abstract:

FLEXMAP -- A Neural Network For The Traveling Salesman Problem
With Linear Time And Space Complexity

Bernd FRITZKE and Peter WILKE

We present a self-organizing ''neural'' network for the traveling
salesman problem. It is partly based on the model of Kohonen.
Our approach differs from former work in this direction in that no
ring structure with a fixed number of elements is used. Instead,
a small initial structure is enlarged during a distribution process.
This allows us to replace the central search step, which
normally needs time O(n), by a local procedure that needs time
O(1). Since the total number of search steps we have to perform
is O(n), the runtime of our model scales linearly with problem size.
This is better than every known neural or conventional algorithm.
The path lengths of the generated solutions are less than 9 percent
longer than the optimum solutions of solved problems from
the literature.
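
To make the flavour of this approach concrete, here is a crude Python sketch
of a ring that grows while being pulled toward the cities. It is a guess at
the general idea only: the growth schedule and learning rates are invented,
and it uses a naive O(n) winner search rather than the O(1) local procedure
the paper describes.

import numpy as np

rng = np.random.default_rng(5)
cities = rng.random((30, 2))

ring = rng.random((3, 2))                      # small initial ring
for step in range(6000):
    c = cities[rng.integers(len(cities))]
    d = ((ring - c) ** 2).sum(axis=1)
    w = d.argmin()                             # winning node (naive search)
    for offset, lr in ((0, 0.2), (-1, 0.05), (1, 0.05)):
        j = (w + offset) % len(ring)
        ring[j] += lr * (c - ring[j])          # move winner and ring neighbours
    if step % 200 == 0 and len(ring) < 2 * len(cities):
        # Grow: insert a new node next to the current winner.
        new = (ring[w] + ring[(w + 1) % len(ring)]) / 2.0
        ring = np.insert(ring, w + 1, new, axis=0)

# Read off a tour: each city is assigned to its nearest ring node.
order = np.argsort([((ring - c) ** 2).sum(axis=1).argmin() for c in cities])
tour = cities[order]
length = np.sqrt(((tour - np.roll(tour, -1, axis=0)) ** 2).sum(axis=1)).sum()
print("tour length:", round(length, 3))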


The described network is based on:

> Fritzke, B., "
Let it grow - self-organizing feature maps with
> problem dependent cell structure," Proc. of ICANN-91, Helsinki,
> 1991, pp. 403-408.

(see the previously placed file fritzke.cell_structures.ps.Z)

Furthermore related work will be presented next week in Seattle:

> Fritzke, B., "
Unsupervised clustering with growing cell struc-
> tures," to appear in: Proc. of IJCNN-91, Seattle, 1991.

(see the previously placed file fritzke.clustering.ps.Z
and the Poster No. W19 in Seattle)


See you in Seattle,
Bernd

Bernd Fritzke ----------> e-mail: fritzke@immd2.informatik.uni-erlangen.de
University of Erlangen, CS IMMD II, Martensstr. 3, 8520 Erlangen (Germany)



------------------------------

Subject: New Paper: A Fast and Robust Learning Algorithm for Feedforward NN
From: WEYMAERE@lem.rug.ac.be
Date: Wed, 03 Jul 91 16:09:00 +0100


The following paper has appeared in "Neural Networks", Vol. 4, No. 3 (1991),
pp. 361-369:

--------------------------------------------------------------------------

A Fast and Robust Learning Algorithm
for Feedforward Neural Networks


Nico WEYMAERE and Jean-Pierre MARTENS
Laboratorium voor Elektronika en Meettechniek
Rijksuniversiteit Gent
Gent, Belgium



ABSTRACT

The back-propagation algorithm caused a tremendous breakthrough in the
application of multilayer perceptrons. However, it has some important
drawbacks: long training times and sensitivity to the presence of local
minima. Another problem is the network topology: the exact number of
units in a particular hidden layer, as well as the number of hidden
layers, needs to be known in advance. A lot of time is often spent
finding the optimal topology. In this paper, we consider multilayer
networks with one hidden layer of Gaussian units and an output layer of
conventional units. We show that for this kind of network, it is
possible to perform a fast dimensionality analysis by analyzing only a
small fraction of the input patterns. Moreover, as a result of this
approach, it is possible to initialize the weights of the network before
starting the back-propagation training. Several classification problems
are taken as examples.
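
The paper's dimensionality analysis is not reproduced here, but a minimal
Python sketch of the general initialization idea, with all choices (subset
size, Gaussian width, least-squares output layer) being my own assumptions,
could look like this:

import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 2))
T = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(float)   # toy 2-class problem

centers = X[rng.choice(len(X), 12, replace=False)]      # small fraction of patterns
width = 0.8                                             # assumed Gaussian width

# Gaussian hidden-layer activations and a least-squares output layer.
H = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * width ** 2))
W_out, *_ = np.linalg.lstsq(np.c_[H, np.ones(len(X))], T, rcond=None)

pred = np.c_[H, np.ones(len(X))] @ W_out
print("initial training accuracy:", ((pred > 0.5) == (T > 0.5)).mean())
# These centers, widths, and output weights would then serve as the
# starting point for back-propagation training.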


Unfortunately, there is not an electronic version of this paper. Reprint
requests should be sent to:

Weymaere Nico
Laboratorium voor Elektronika en Meettechniek
St. Pietersnieuwstraat 41 - B9000 Gent
Phone: +32-91-23.38.21/ext. 2493
Fax: +32-91-33.22.19
email: weymaere@lem.rug.ac.be



------------------------------

Subject: two new papers on back-prop available from neuroprose
From: Wray Buntine <wray@ptolemy.arc.nasa.gov>
Date: Fri, 05 Jul 91 11:19:46 -0700




The following two reports are currently under journal review and have
been made available on the "/pub/neuroprose" archive. Those unable to
access this should send requests to the address below.

Both papers are intended as a guide for the "theoretically-aware
practitioner/algorithm-designer intent on building a better algorithm".


Wray Buntine
NASA Ames Research Center phone: (415) 604 3389
Mail Stop 244-17
Moffett Field, CA, 94035 email: wray@ptolemy.arc.nasa.gov




Bayesian Back-Propagation

by Wray L. Buntine and Andreas S. Weigend

available as
/pub/neuroprose/buntine.bayes1.ps.Z (pages 1-17)
/pub/neuroprose/buntine.bayes2.ps.Z (pages 1-34)

Connectionist feed-forward networks, trained with back-propagation,
can be used both for non-linear regression and for (discrete
one-of-$C$) classification, depending on the form of training. This
paper works through approximate Bayesian methods to both these
problems. Methods are presented for various statistical components
of back-propagation: choosing the appropriate cost function and
regularizer (interpreted as a prior), eliminating extra weights,
estimating the uncertainty of the remaining weights, predicting for
new patterns (``out-of-sample''), estimating the uncertainty in the
choice of this prediction (``error bars''), estimating the
generalization error, comparing different network structures, and
adjustments for missing values in the training patterns. These
techniques refine and extend some popular heuristic techniques
suggested in the literature, and in most cases require at most a
small additional factor in computation during back-propagation, or
computation once back-propagation has finished. The paper begins
with a comparative discussion of Bayesian and related frameworks for
the training problem.
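
One ingredient mentioned in the abstract, the regularizer interpreted as a
prior, can be written down in a few lines. The following Python sketch uses
my own notation and a toy linear model rather than anything from the paper;
noise_var and prior_var are assumed values.

import numpy as np

def neg_log_posterior(w, forward, X, T, noise_var=0.1, prior_var=1.0):
    # -log p(w | data) up to an additive constant, for a Gaussian noise
    # model and a Gaussian prior on the weights.
    err = forward(w, X) - T
    data_term = 0.5 * np.sum(err ** 2) / noise_var     # -log likelihood
    prior_term = 0.5 * np.sum(w ** 2) / prior_var      # -log prior (weight decay)
    return data_term + prior_term

# Toy linear "network" just to make the function runnable.
forward = lambda w, X: X @ w
rng = np.random.default_rng(7)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
T = X @ w_true + 0.1 * rng.normal(size=50)
print(neg_log_posterior(w_true, forward, X, T))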

Contents:
1. Introduction
2. On Bayesian methods
3. Multi-Layer networks
4. Probabilistic neural networks
4.1. Logistic networks
4.2. Cluster networks
4.3. Regression networks
5. Probabilistic analysis
5.1. The network likelihood function
5.2. The sample likelihood
5.3. Prior probability of the weights
5.4. Posterior analysis
6. Analyzing weights
6.1. Cost functions
6.2. Weight evaluation
6.3. Minimum encoding methods
7. Applications to network training
7.1. Weight variance and elimination
7.2. Prediction and generalization error
7.3. Adjustments for missing values
8. Conclusion

-----------------------

Calculating Second Derivatives on Feed-Forward Networks

by Wray L. Buntine and Andreas S. Weigend

available as /pub/neuroprose/buntine.second.ps.Z

Recent techniques for training connectionist feed-forward networks
require the calculation of second derivatives to calculate error
bars for weights and network outputs, and to eliminate weights, etc.
This note describes some exact algorithms for calculating second
derivatives. They require in the worst case approximately $2K$
back/forward-propagation cycles, where $K$ is the number of nodes in
the network. For networks with two hidden layers or fewer,
computation can be much quicker. Three previous approximations
(ignoring some components of the second derivative, numerical
differentiation, and scoring) are also reviewed and compared.
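
For comparison with the exact algorithms, here is a short Python sketch of
the numerical-differentiation approach mentioned in the abstract: estimating
the diagonal second derivatives of an error function by central differences.
The toy quadratic error surface is my own choice so the answer is known.

import numpy as np

def hessian_diagonal(loss, w, eps=1e-4):
    # d2E/dw_i^2 via central differences, one weight at a time.
    diag = np.empty_like(w)
    base = loss(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        diag[i] = (loss(w + e) - 2.0 * base + loss(w - e)) / eps ** 2
    return diag

# Toy error surface E = sum(a_i * w_i^2), whose diagonal Hessian is 2*a.
a = np.array([1.0, 2.0, 3.0])
loss = lambda w: np.sum(a * w ** 2)
print(hessian_diagonal(loss, np.array([0.3, -0.1, 0.7])))   # approx. [2, 4, 6]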


------------------------------

End of Neuron Digest [Volume 7 Issue 39]
****************************************
