Neuron Digest   Wednesday,  9 Jan 1991                Volume 7 : Issue 2 

Today's Topics:
reference list on parallel GA/SA


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: reference list on parallel GA/SA
From: Yu Shen <shen@iro.umontreal.ca>
Date: Wed, 12 Dec 90 18:17:19 -0500

[[ Editor's Note: While some may consider genetic algorithms to be a bit
afield, I feel this set of references will be quite useful to a large
number of researchers since hybrid systems offer great promise for "real"
engineered systems. Further, the simulated annealing portion is
certainly applicable to a number of neural network models -- in addition
to the more traditional field of circuit layout! -PM ]]


Here is the list of references on parallel genetic algorithms and parallel
simulated annealing, compiled from the kind responses of people on the
mailing list and from whatever other sources I ran into. Thanks again for
all the help!

I have done a report on the impact of parallelism on genetic algorithms
and simulated annealing. Here are some of the major points:

Assuming a constant neighborhood size in a spatially distributed
population, a parallel genetic algorithm (PGA) that selects mating
partners locally can usually achieve linear speedup over the sequential
version. But the PGA's main demonstrated advantage is in improving the
quality of the search, since the parallel paradigm increases the chance
of crossing-over of genes, which is believed to be the most important
mechanism of optimization. Even with linear speedup, PGA applications are
still very slow compared with some conventional heuristics, for example
on the Traveling Salesman Problem.
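As a toy illustration of the local-selection idea (this is not code from the
report; the ring topology, neighborhood radius, and one-max fitness below are
illustrative assumptions): each position mates with the fittest member of a
fixed-size neighborhood, and because every position can be updated
concurrently, the loop over positions is where the linear speedup would come
from.

```python
import random

def local_pga_step(pop, fitness, radius=2, p_mut=0.01):
    """One generation of a spatially distributed GA on a ring.

    Every position picks its mate from a fixed-size local neighborhood;
    since all positions can be updated concurrently, the sequential
    loop below parallelizes with (ideally) linear speedup.
    """
    n = len(pop)
    new_pop = []
    for i in range(n):
        # Local selection: fittest individual within `radius` on the ring.
        hood = [pop[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
        mate = max(hood, key=fitness)
        # One-point crossover with the locally selected mate.
        cut = random.randrange(1, len(pop[i]))
        child = pop[i][:cut] + mate[cut:]
        # Bit-flip mutation.
        child = [b ^ (random.random() < p_mut) for b in child]
        # Keep the child only if it does not hurt this position.
        new_pop.append(child if fitness(child) >= fitness(pop[i]) else pop[i])
    return new_pop

if __name__ == "__main__":
    random.seed(0)
    onemax = sum  # toy fitness: count of 1-bits
    pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
    for _ in range(60):
        pop = local_pga_step(pop, onemax)
    print(max(map(onemax, pop)))
```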

Parallel simulated annealing is not equivalent to parallelization of
Metropolis relaxation, which is where the mainstream of practice
concentrates. Metropolis relaxation is basically a sequential approach;
more significant speedup is usually achieved by alternatives that move
away from it. It is possible to devise simulation techniques better
suited to the parallel paradigm, because the working mechanism of
simulated annealing is the Boltzmann distribution, not the particular
generating method, Metropolis relaxation.
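For concreteness, a minimal sketch of the sequential baseline (the toy energy
function and geometric cooling schedule are illustrative assumptions, not
taken from any paper listed here): a move that raises the energy by dE is
accepted with probability exp(-dE/T), which is exactly what makes the chain
sample the Boltzmann distribution at each temperature, and is the property
any alternative, more parallel generating method must preserve.

```python
import math
import random

def metropolis_sa(energy, neighbor, x0, t0=10.0, alpha=0.95, steps=2000):
    """Sequential simulated annealing with the Metropolis acceptance rule.

    A move raising the energy by dE is accepted with probability
    exp(-dE / T); at fixed T the chain therefore samples the Boltzmann
    distribution.
    """
    x = x0
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        d = energy(y) - energy(x)
        if d <= 0 or random.random() < math.exp(-d / t):
            x = y
        t *= alpha  # geometric cooling schedule (an illustrative choice)
    return x

if __name__ == "__main__":
    random.seed(1)
    f = lambda x: (x - 3.0) ** 2                    # toy energy, minimum at 3
    step = lambda x: x + random.uniform(-0.5, 0.5)  # local move generator
    print(metropolis_sa(f, step, 0.0))
```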

The report is being revised. It will be available to interested parties
in January.

The reference list consists of two parts: the first on genetic
algorithms, the second on simulated annealing. For D. Greening's very
good bibliography on SA, the reader is referred to the original source to
save space.

By the way, I have profited a lot by asking questions which seem simple
to some BIG profs B-]. I earnestly want to make up for the noise I made
to them. I therefore suggest to my fellow practitioners-to-be: let's try
to make as little noise as possible. For example, we can keep the subject
line as concise as possible, so uninterested people can delete a message
before reading it. And when we ask such a "simple" question, we can put
an X: in front of the subject line. X: may mean that it might annoy some
people but not others, e.g. X: Relationship between the cinema and the
connectionist mailing list 8-)

Yu Shen

PhD Student
Dept. d'Informatique et Recherche Operationnelle
University de Montreal
C.P. 6128 Succ. A.
Montreal, Que.
Canada
H3C 3J7

(514) 342-7089 (H)
shen@iro.umontreal.ca
---------------------PGA----------------------------------------------------

@TechReport{Goldberg90:Boltzman-Tournament,
author = "David E. Goldberg",
title = "A Note on Boltzmann Tournament Selection for Genetic
Algorithms and Population-oriented Simulated Annealing",
institution = "University of Alabama",
year = "1990",
OPTtype = "",
OPTnumber = "90003",
OPTaddress = "Tuscaloosa, AL 35487",
OPTmonth = "May",
OPTnote = "distribution across population"
}



@TechReport{ga:Goldberg90:messy,
author = "David E. Goldberg",
title = "An Investigation of Messy Genetic Algorithms",
institution = "Dept. of Engineering Mechanics, Univ. of Alabama",
year = "1990",
OPTtype = "",
OPTnumber = "90005",
OPTaddress = "",
OPTmonth = "May",
OPTnote = ""
}


@Article{ga:Goldberg89:messy,
author = "David E. Goldberg",
title = "Messy Genetic Algorithms: Motivation, Analysis, and
First Results",
journal = "Complex Systems",
year = "1989",
OPTvolume = "",
OPTnumber = "3",
OPTpages = "493-530",
OPTmonth = "",
OPTnote = ""
}


@TechReport{ga:Goldberg90:selection,
author = "David E. Goldberg",
title = "A Comparative Analysis of Selection Schemes Used in
Genetic Algorithms",
institution = "Dept. of Engineering Mechanics, Univ. of Alabama",
year = "1990",
OPTtype = "",
OPTnumber = "90007",
OPTaddress = "",
OPTmonth = "July",
OPTnote = ""
}

- -------------
Date: Tue, 30 Oct 90 10:56:47 -0600
From: Mike Rudnick <rudnick@uxh.cso.uiuc.edu>
Subject: Re: request for parallel implementation of GA references

FOGA-90 (Foundations of Genetic Algorithms) workshop
Indiana University, Bloomington. Summer 1990.
Gregory Rawlins, a faculty member there.

There were a few presentations on parallel GA algorithms.

- -------------------

Date: Tue, 30 Oct 90 11:57:14 -0100
From: Heinz Muehlenbein <iro.umontreal.ca!relay.eu.net!gmdzi!muehlen@IRO.UMontreal.CA>
Subject: parallel genetic algorithm


We have implemented, starting in 1987, a parallel genetic algorithm. It
is the most powerful random search method in:
traveling salesman
quadratic assignment
m-graph partitioning
autocorrelation
n-dimensional multimodal function optimization

References: "Evolution Algorithms in Combinatorial Optimization",
Parallel Computing 7, 65-88 (1988); papers by Muehlenbein (quadratic
assignment) and by Gorges-Schleuter in the 3rd Conference on Genetic
Algorithms.

Recent papers (not yet published) give a survey:

parallel genetic algorithms and combinatorial optimization ( SIAM)
Evolution in space and time - the parallel genetic algorithm ( FOGA)

Heinz Muehlenbein

- ----------

From: pattie@ai.mit.edu (Pattie Maes)
Date: Mon, 29 Oct 90 10:42:58 EST
Subject: Parallel Implementation of Genetic Algorithm and Simulated Annealing



Piet Spiessens and Bernard Manderick from the University of Brussels
in Belgium have done work on parallel implementations of GAs. I think
this was published in one of the conferences on GAs. Contact them
for the exact reference:

piet@arti.vub.ac.be
bernard@arti.vub.ac.be

- ---------------

Date: Mon, 29 Oct 90 22:19:56 -0500
From: marcap@concour.cs.concordia.ca

There has been much work done in this area. Your best bet is to look at
the first, second, and third conferences on genetic algorithms.

McGill has the second conference. CISTI has all three. The third
conference's title is:

Proceedings of the Third International Conference on Genetic Algorithms
George Mason University
June 4-7, 1989

Editor: J. David Schaffer

ISBN 1-55860-066-3

Library of Congress: QA 402.5.I512 1989


Hope that helps, MARC


Date: Mon, 29 Oct 90 17:55:02 EST
From: lesher@ncifcrf.gov
Subject: // impl GA



@techreport{Robertson:87,
author = "G. G. Robertson",
title = "Parallel Implementation of Genetic Algorithms in a Classifier
System",
type = "Technical Report Series",
number = "RL87-5",
institution = "Thinking Machines Corporation",
address = "245 First Street, Cambridge, MA 01242-1214",
month = "May",
year = "1987"}

Sarah Lesher


Date: Mon, 29 Oct 1990 10:49-EST
From: Hiroaki.Kitano@a.nl.cs.cmu.edu
Subject: Re: Parallel Implementation of Genetic Algorithm and Simulated Annealing


We are currently working on massively parallel implementations of GAs
and classifier systems. By the end of this year, we will (I hope) have a
paper for distribution.


Hiroaki Kitano
Center for Machine Translation
Carnegie Mellon University,
Pittsburgh, PA 15213 U.S.A.

----------------------
%from the bibliography of parallel and supercomputing references,
%icarus.riacs.edu (128.102.16.8) in the /pub/bib directory
- -------------GA
%A Elizabeth J. O'Neill
%A Craig G. Shaefer
%T The ARGOT Strategy III: The BBN Butterfly Multiprocessor
%E Joanne L. Martin
%E Stephen F. Lundstrom
%B Science and Applications: Supercomputing'88
%V II
%I IEEE
%C Orlando, FL
%D 1988
%P 214-227
%K structural analysis, genetic algorithms, applications,


%A Chrisila C. Pettey
%A Michael R. Leuze
%Z CS Dept., Vanderbilt U.
%T Parallel Placement of Parallel Processes
%J 3rd Conference on Hypercube Concurrent Computers and Applications
%V I, Architecture, Software, Computer Systems and General Issues
%I ACM
%C Pasadena, CA
%D January 1988
%P 232-238
%K load balancing and decomposition,
parallel genetic algorithms (PGA/GA), simulated annealing,


%A Chrisila C. Pettey
%A Michael R. Leuze
%A John J. Grefenstette
%T Genetic Algorithms on a Hypercube Multiprocessor
%B Hypercube Multiprocessors 1987
%I SIAM
%C Philadelphia
%D 1987
%P 333-341

- ----------classifier

%A Stephanie Forrest
%Z Teknowledge
%T A Study of Parallelism in the Classifier System and its
Application to Classification in KL-One Semantic Networks
%R PhD thesis
%I Logic of Computer Group, University of Michigan
%C Ann Arbor, MI, USA
%D 1985
%X Received from PARSYM.


%A Stephanie Forrest
%Z Teknowledge
%T The Classifier System: A Computational Model that Supports
Machine Intelligence
%J Proceedings of the 1986 International Conference on Parallel Processing
%I IEEE
%D August 1986
%P 711-716
%K Artificial intelligence, KL-one, simulation, pattern recognition,

- ------------------PSA----------------------------------------
Date: Tue, 30 Oct 90 16:26:01 +1100
From: Raymond Lister <ray@cs.su.oz.au>
Subject: Re: Parallel Implementation of Genetic Algorithm and Simulated Annealing

I have published a paper on a parallel simulated annealing approach to the
Travelling Salesman Problem. A second paper is to appear ...

Lister, R (1990), "Segment Reversal and the Traveling Salesman Problem",
International Joint Conference on Neural Networks (IJCNN-90-WASH DC),
Washington, January 1990.

Lister, R (1990), Response to a review by Van den Bout, D., "A Parallel,
Simulated Annealing Solution to the Traveling Salesman Problem", to
appear, Neural Network Review.

I'll send you a copy of both papers.

I suggest you check the ftp repository set up by Dan Greening (see
details below). You may want to talk to Dan directly, as he has written
a review paper on parallel simulated annealing (which is in the ftp
repository in the file "greening.physicad.ps") ...

Greening, D "Parallel Simulated Annealing Techniques" Physica D: Nonlinear
Phenomena, Vol. 42, pp 293-306, 1990.


And, of course, I'd like to see the results of your survey.

Raymond Lister
Department of Computer Science, F09
University of Sydney
NSW 2006
AUSTRALIA

Internet: ray@cs.su.oz.AU

- ---

>From dgreen%cs.ucla.edu@munnari.cs.mu.oz Sat Oct 13 07:50:51 1990
>Date: Fri, 12 Oct 90 14:01:19 pdt
>From: Dan R. Greening <dgreen%cs.ucla.edu@munnari.oz>
>To: anneal@cs.ucla.edu, glb@ecegriff.ncsu.edu
>Subject: Anonymous FTP Site Available for Annealing Papers.

I have set up squid.cs.ucla.edu as an anonymous FTP site for distributing
papers on simulated annealing, spin-glass theory, etc. To get the widest
audience possible, I would encourage you to put your paper(s) there.

Please include a full reference on the first page which indicates where the
paper appeared, if it has been published already, such as

Appeared in Physica D: Nonlinear Phenomena, vol. 42, pp. 293-306, 1990.

If the paper has not yet been published, include some reference which would
allow a reader to locate the paper, such as

Submitted for publication to the Conference on Advanced Research in VLSI,
1990. Available as IBM Technical Report RC-23456.

You may also wish to include electronic mail information in your paper.
You may want to announce its availability to the mailing list, by sending a
message to anneal@cs.ucla.edu.

HOW TO PUT A PAPER ON SQUID:

REASONABLE PAPER FORMATS:

1. POSTSCRIPT. Almost everyone has access to a PostScript printer, so
unless you absolutely have no other choice, please supply your paper in
PostScript form. Append ".ps" to the filename to indicate postscript.

2. TROFF. You should only include troff if you have set up your troff
file so that it requires NO COMMAND OPTIONS. Preprocess the troff input
so all pic, tbl, eqn stuff is already included. Any macro packages
should be included in the file itself. In short, someone should be
able to produce your paper using only the file you provide.
Append ".troff" to the filename to indicate troff.

3. DVI. You should only include a dvi file if it DOES NOT INCLUDE
ENCAPSULATED POSTSCRIPT files (presumably if you have such files, you
can generate the whole paper in postscript). Append ".dvi" to the
filename to indicate a dvi file.

Let's say that you formatted your paper, and have created a postscript file
called paper.ps. Furthermore, suppose the first author is "Josie Smith" and
you have submitted this paper to IEEE Transactions on CAD. By
convention, the paper should be stored on squid as "smith.ieeetcad.ps".
You can embellish the name as you wish, however, there is a maximum of
255 characters in a filename.

Here goes (from UNIX):

% ftp 131.179.96.44 (or ftp squid.cs.ucla.edu)
login: anonymous
password: anything
ftp> cd anneal
ftp> binary
ftp> put paper.ps smith.ieeetcad.ps
ftp> quit

Now your paper will be sitting around for anyone to read! You might get
famous!

HOW TO GET A PAPER FROM SQUID:

OK, suppose someone announces the availability of a paper on squid, called
"smith.stoc1989.ps". Let's get a copy.

Here goes (from UNIX):

% ftp 131.179.96.44 (or ftp squid.cs.ucla.edu)
login: anonymous
password: anything
ftp> cd anneal
ftp> ls -l (might as well look around while we're here...)
ftp> binary
ftp> get smith.stoc1989.ps
ftp> quit

Now, just print out smith.stoc1989.ps and you discover something new!
Hooray!

I put a couple of my papers on there, already, as well as a paper by
Dannie Durand. If you guys are nice (i.e., if you make me feel fulfilled by
putting your papers there), maybe I'll put my much-discussed simulated
annealing bibliography there, too.
% I happen to have a copy of the bibliography; I think it is the most complete
% one. There may be some overlap between here and there. Yu Shen

Happy Annealing (or Tabu Searching or Spinning or Conducting or whatever it
is you're doing).

- --

Dan Greening | NY 914-784-7861 | 12 Foster Court
dgreen@cs.ucla.edu | CA 213-825-2266 | Croton-on-Hudson, NY 10520
- ------------


Date: Mon, 29 Oct 90 14:33:51 -0800
From: sorkin@ravel.berkeley.edu

You should get in touch with two people who have done recent work
in the area:

Dan Greening (dgreen@cs.ucla.edu) and Dannie Durand (durand@cs.columbia.edu).

Both did at least some of their work while at IBM, collaborating
with Steve White (white@ibm.com) and Frederica Darema (darema@ibm.com).

- -- Greg Sorkin (sorkin@ic.berkeley.edu)
- ----------------

Some of the work by Harold Szu on fast simulated annealing compares
Cauchy and Boltzmann machines. References follow in bib form.

a:~ (52) tiblook szu cauchy
%A Harold H. Szu
%T Fast Simulated Annealing
%J |AIP151|
%P 420-426
%K Cauchy Machine

%T Fast Simulated Annealing
%A Harold H. Szu
%A Ralph Hartley
%J |PHYLA|
%V 122
%N 3,4
%P 157-162
%D |JUN| 8, 1987
%K Cauchy Machine

%T Nonconvex Optimization by Fast Simulated Annealing
%A Harold H. Szu
%A Ralph L. Hartley
%J |IEEPro|
%V 75
%N 11
%D |NOV| 1987
%K Cauchy Machine

%T Design of Parallel Distributed Cauchy Machines
%A Y. Takefuji
%A Harold H. Szu
%J |IJCNN89|

ba:~ (53) tibabb phyla
D PHYLA Phys. Lett. A
D PHYLA Physics Letters. A
ba:~ (54) tibabb IEEPro
D IEEPro IEEE Proc.
D IEEPro Institute of Electrical and Electronics Engineers. Proceedings
ba:~ (55) tibabb IJCNN89
D IJCNN89 International Joint Conference on Neural Networks
ba:~ (56)
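The essential contrast behind Szu's fast annealing can be stated in one line
per schedule: classical Boltzmann annealing requires a logarithmically slow
cooling schedule, T(k) = T0/log(1+k), to guarantee convergence, while the
heavy-tailed Cauchy visiting distribution permits the much faster
T(k) = T0/(1+k). A minimal sketch (the constants are arbitrary, and this is
only the schedule comparison, not an implementation of the Cauchy machine):

```python
import math

# Cooling schedules (for step index k >= 1): classical (Boltzmann)
# annealing needs a logarithmically slow schedule to guarantee
# convergence, while Szu's Cauchy visiting distribution permits an
# inverse-linear one, because occasional long jumps remain possible
# even at low temperature.
def boltzmann_t(t0, k):
    return t0 / math.log(1 + k)

def cauchy_t(t0, k):
    return t0 / (1 + k)

if __name__ == "__main__":
    for k in (10, 100, 1000):
        print(k, boltzmann_t(100.0, k), cauchy_t(100.0, k))
```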

- ---------------------------------------------------------------------------
. . .__. The opinions expressed herein are solely
|\./| !__! Michael Plonski those of the author and do not represent
| | | "plonski@aero.org" those of The Aerospace Corporation.
_____________________________________________________________________________


@begin(refstyle)
Ackley, D. H. (1984, December).
@i(Learning evaluation functions in stochastic parallel
networks.) CMU Thesis Proposal.

- ----------psa
%A Emile H. L. Aarts
%A Jan H. M. Korst
%T Computation in Massively Parallel Networks based on the Boltzmann Machine:
A Review
%J Parallel Computing
%V 9
%N 2
%D January 1989
%P 129-145
%K PARLE: conference on parallel architectures and languages -- Europe,
Boltzmann machines, simulated annealing, combinatorial optimization,
connectionist networks, neural networks, learning,


%A Prithviraj Banerjee
%A Mark Jones
%T A Parallel Simulated Annealing Algorithm for Standard Cell
Placement on Hypercube Computer
%J NASA Review of ICLASS: Illinois Computer Laboratory for Aerospace
Systems and Software
%I University of Illinois
%C Urbana-Champaign, IL
%D October 23, 1986


%A Prithviraj Banerjee
%A Mark Howard Jones
%A Jeff S. Sargent
%T Parallel Simulated Annealing Algorithms for Cell Placement on
Hypercube Multiprocessors
%J IEEE Transactions on Parallel and Distributed Systems
%V PDS-1
%N 1
%D January 1990
%P 91-106
%K cell placement, error control, hypercube multiprocessor,
parallel processing, performance measurements, simulated annealing,
VLSI layout,


%A Valmir C. Barbosa
%A Eli Gafni
%T A Distributed Implementation of Simulated Annealing
%J Journal of Parallel and Distributed Computing
%V 6
%N 2
%D April 1989
%P 411-434
%K special issue: neural computing, research notes,


%A S. Wayne Bollinger
%A Scott F. Midkiff
%Z VA Poly, Blacksburg, VA
%T Processor and Link Assignment in Multicomputers
using Simulated Annealing
%J Proceedings of the 1988 International Conference on Parallel Processing
%V I, Architecture
%I Penn State
%C University Park, Penn
%D August 1988
%P 1-7
%K distributed systems, mapping problem, process annealing,
connection annealing, hypercube traffic,


%A Ken W. Bosworth
%A G. S. Stiles
%A Rick Pennington
%T A Parallel Nonlinear Integer Programming Algorithm Based on Branch
and Bound and Simulated Annealing
%E Garry Rodrigue
%B Parallel Processing for Scientific Computing
%I SIAM
%C Philadelphia, PA
%D 1989
%P 126-130
%K numerical methods,


%A Casotto
%A et al.
%T A parallel simulated annealing algorithm for the
placement of macro-cells
%J ICCAD, 1986


%A A. Casotto
%A A. Sangiovanni-Vincentelli
%T Placement of Standard Cells Using Simulated Annealing on a
Connection Machine
%I Thinking Machines Corp.
%R TMC TP CAD86-3
%D Dec. 1986
%K computer aided design,


%A F. Darema-Rogers
%A S. Kirkpatrick
%A V. A. Norton
%Z IBM T. J. Watson Research Center, Yorktown Heights, NY
%T Simulated Annealing on Shared Memory Parallel Systems
%J IBM J. on Res. & Dev.
%D 1987
%r RC 12195 (#54812)
%d October 1986


%A Nigel Dodd
%Z Royal Radar, Malvern
%T Graph matching by stochastic optimisation applied to the implementation
of multi-layer perceptrons on transputer networks
%J Parallel Computing
%V 10
%N 2
%D April 1989
%P 135-142
%K graph matching, graph isomorphism, transputer, parallel processing,
optimization, stochastic optimization, simulated annealing, MLP,


%A F. Ercal
%A J. Ramanujam
%A P. Sadayappan
%Z CIS, OSU
%T Task Allocation onto a Hypercube by Recursive Mincut Bipartitioning
%J 3rd Conference on Hypercube Concurrent Computers and Applications
%V I, Architecture, Software, Computer Systems and General Issues
%I ACM
%C Pasadena, CA
%D January 1988
%P 210-221
%K load balancing and decomposition, mapping problem,
ARM (Kernighan/Lin), simulated annealing (SA),


%A R. Fiebrich
%A C. Wong
%Z Thinking Machines Corp.
%T Simulated Annealing-Based Circuit Placement Algorithm on
the Connection Machine
%J IEEE Intl. Conference on Computer Design
%C Rye Brook, NY
%D October 1987
%r TMC TP CAD87-2


%A G. C. Fox
%A W. Furmanski
%Z Caltech
%T Load Balancing Loosely Synchronous Problems with a Neural Network
%J 3rd Conference on Hypercube Concurrent Computers and Applications
%V I, Architecture, Software, Computer Systems and General Issues
%I ACM
%C Pasadena, CA
%D January 1988
%P 241-278
%r cccp-363B
%d February 1988
%K load balancing and decomposition, particle analogy, graph
partitioning,
simulated annealing, statistical physics, Monte Carlo,
Hopfield & Tank and Bold networks,


%A Grover
%T A new simulated annealing algorithm for standard cell placements
%J ICCAD
%D Nov. 1986


%A Harold M. Hastings
%A Stefan Waner
%Z Hofstra
%T Neural Nets on the MPP
%J Frontiers of Massively Parallel Scientific Computation
%R Conference Proc. 2478
%I NASA Goddard Space Flight Center
%C Greenbelt, MD
%D Sept. 1986
%D 1987
%P 69-73
%K computer science, neural network, annealing system,
stochastic cellular automaton,


%A Huang
%A et al.
%T An efficient general cooling algorithm for simulated annealing
%J ICCAD
%D 1986

%T Mapping Partitioned Program Modules Onto Multicomputer Nodes Using
Simulated Annealing
%P II-292--II-293
%A Kai Hwang
%A Jian Xu
%Z USC, LA
%J Proceedings of the 1990 International Conference on Parallel Processing
%V II, Software
%I Penn State U. Press
%C University Park, Penn
%D August 1990
%K poster session, Parallel Software Management,


%A Daniel G. Jablonski
%T Simulated Annealing, Reversible Computation and Neural Nets
%I Supercomputing Research Center, IDA
%C Lanham, MD
%R SRC-TR-88-15
%D July 1988


%A Jepsen
%A Gelatt
%T Macro placement by Monte Carlo annealing
%J ICCD Conf.
%D Oct. 1983
%P 495-498


%A A. Kashko
%A H. Buxton
%A B. F. Buxton
%A D. A. Castelow
%T Parallel Matching and Reconstruction Algorithms in Computer Vision
%J Parallel Computing
%V 8
%N 1-3
%D October 1988
%P 3-17
%K Proc. Intl. Conf. on Vector and Parallel Processors in Computational
Science, III [VAPP III], Aug. 1987, Liverpool, England. [Cover typo.]
computer vision, SIMD, DAP, correlation, relaxation, simulated annealing,
graduated non-convexity,

%T Parallel Simulated Annealing Using Speculative Computation
%P III-286--III-290
%A E. E. Witte
%A R. D. Chamberlain
%A M. A. Franklin
%J Proceedings of the 1990 International Conference on Parallel Processing
%V III, Algorithms and Applications
%I Penn State U. Press
%C University Park, Penn
%D August 1990
%K parallel algorithms,

- ----------------------
Sharpening

Date: Mon, 29 Oct 90 11:06:38 -0500 (EST)
From: Javier Movellan <jm2z+@andrew.cmu.edu>
Subject: Re: Parallel Implementation of Genetic Algorithm and Simulated Annealing

Yu,

There is a third related procedure called "sharpening". My first contact
with the sharpening procedure was through the work of Akiyama et al. (see
reference) on what they called Gaussian machines (continuous Hopfield
networks with Gaussian noise injected into the net inputs).

Sharpening is also used in Mean Field Networks (the continuous Hopfield
model with the Contrastive Hebbian Learning algorithm). Sharpening may be
seen as a deterministic, continuous approximation to annealing in
stochastic Boltzmann Machines. It works by starting to settle using
logistic activations with very low gain and increasing the gain as
settling progresses. Sharpening, contrary to "true annealing", is
deterministic and thus may be faster. A similar procedure is used with
elastic networks solving the TSP.
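A minimal sketch of the gain-schedule idea described above (the network,
weights, and schedule are illustrative assumptions, not from the cited
papers): settle a continuous Hopfield-style net with logistic units, raising
the gain between settling passes.

```python
import math

def settle_with_sharpening(W, b, x, gains=(0.1, 0.5, 1.0, 5.0, 25.0), iters=20):
    """Settle a continuous Hopfield-style net while raising the gain.

    Units start with a very low logistic gain (an almost linear, smooth
    energy landscape) and the gain is increased between settling passes,
    a deterministic stand-in for lowering the temperature.
    """
    n = len(x)
    for g in gains:                       # the "sharpening" schedule
        for _ in range(iters):
            for i in range(n):            # asynchronous (in-place) updates
                net = b[i] + sum(W[i][j] * x[j] for j in range(n))
                x[i] = 1.0 / (1.0 + math.exp(-g * net))
    return x

if __name__ == "__main__":
    # Hypothetical two-unit winner-take-all net: mutual inhibition, with
    # unit 0 given the slightly larger bias, so it should win as the
    # gain sharpens.
    W = [[0.0, -2.0], [-2.0, 0.0]]
    b = [1.0, 0.9]
    print(settle_with_sharpening(W, b, [0.5, 0.5]))
```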

References:

Peterson, C & Anderson J (1987): A mean field theory learning algorithm
for neural networks. Complex Systems, 1, 995-1019.

Akiyama Y, Yamashita A, Kajiura M, Aiso H (1989) Combinatorial
Optimization with Gaussian Machines. Proceedings of the IJCNN, 1,
533-540.

Hinton G E (1989) Deterministic Boltzmann Learning Performs Steepest
Descent in Weight Space, Neural Computation, 1, 143-150.

Galland C, & Hinton G (1989) Deterministic Boltzmann Learning in
Networks with Asymmetric Connectivity. University of Toronto, Department
of Computer Science Technical Report CRG-TR-89-6.

Movellan J R (1990) Contrastive Hebbian Learning in the Continuous
Hopfield Model. Proceedings of the 1990 Connectionist Summer School.


Javier

------------------------------

End of Neuron Digest [Volume 7 Issue 2]
***************************************
