Neuron Digest   Monday,  3 Sep 1990                Volume 6 : Issue 52 

Today's Topics:
Mactivation - new info
Mactivation ftp location
re: common ANN vocabulary
Protein folding schemes - a reply
Questions
PSYCHOLOGY / NEUROSCIENCE / AI POST-DOC IN SAN DIEGO, CA, USA
NN + Image Processing (again)
ICANN International Conference on Artificial Neural Networks
JNNS'90 Program Summary (long)
Final Call AISB'91
Rutgers' CAIP Neural Network Workshop
TR announcement (hardcopy and ftp)


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Mactivation - new info
From: Mike Kranzdorf <mikek@boulder.Colorado.EDU>
Date: Mon, 27 Aug 90 12:22:01 -0600


I have recently moved and thought I should update folks as well as
provide the info for new readers concerning my Macintosh neural network
simulator called Mactivation (now version 3.3):

Mactivation is an introductory neural network simulator which runs on all
Macintoshes. A graphical interface provides direct access to units,
connections, and patterns. Basic concepts of associative memory and
network operation can be explored, with many low level parameters
available for modification. Back-propagation is not supported. A user's
manual containing an introduction to connectionist networks and program
documentation is included on one 800K Macintosh disk. The current version
is 3.3, which fixes a very obscure system error and changes some menu
names for clarity (I hope).

Mactivation is available from the author, Mike Kranzdorf. The program may
be freely copied, including for classroom distribution. To obtain a
copy, send your name and address and a check payable to Mike Kranzdorf
for $5 (US). International orders should send either an international
postal money order for five dollars US or ten (10) international postal
coupons.

Mactivation 3.2 is available via anonymous ftp from boulder.colorado.edu.
Please don't ask me how to deal with ftp - that's why I offer it via
snail mail. I will post 3.3 there sometime (anonymous ftp here is in a
state of flux).

Mike Kranzdorf
P.O. Box 1379
Nederland, CO 80466-1379

internet: mikek@boulder.colorado.edu
uucp:{ncar|nbires}!boulder!mikek
AppleLink: OBLIO

------------------------------

Subject: Mactivation ftp location
From: Mike Kranzdorf <mikek@wasteheat.colorado.edu>
Date: Tue, 28 Aug 90 10:24:52 -0600

Sorry I forgot to include the ftp specifics:

Machine: boulder.colorado.edu
Directory: /pub
File Name: mactivation.3.2.sit.hqx.Z

I really will try to put version 3.3 there soon. Please send me comments
if you use Mactivation. I am very responsive to good suggestions and will
add them when possible. Back-prop will come in version 4.0, but that's a
complete re-write. I can add smaller things to 3.3.

--mike

internet: mikek@boulder.colorado.edu
uucp:{ncar|nbires}!boulder!mikek

------------------------------

Subject: re: common ANN vocabulary
From: dank@moc.Jpl.Nasa.Gov (Dan Kegel)
Date: Wed, 29 Aug 90 12:23:35 -0700

Gary Fleming wrote:
> [There should be] a common artificial neural network (ANN) vocabulary to
> be shared by all of the researchers and practitioners who represent many
> diverse fields. [The current situation is an] intellectual Tower of Babel...
> There are several members of the INNS/SIG Washington interested in a
> resolution to this problem.

I have noticed that many people even disagree on the definition of
'learning'!

Another useful thing would be a good set of canonical benchmark problems
beyond the XOR problem. These two things, a standard vocabulary and a
standard benchmark suite, would go well together, because there is often
confusion about what an NN architecture is good for.

With a standard benchmark suite, you could say, "ART5 has achieved
acceptable scores on the XOR and handwritten-letter-recognition tasks,
but nobody has been able to apply it to the spoken-digit-recognition
task," and people would know exactly what you were talking about.

I'm sure this has gone on privately; I just haven't seen anyone mention
such a benchmark suite in this newsgroup.
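
To make the idea concrete, here is a minimal sketch in Python of what a
shared benchmark registry might look like (the registry and scoring
function are hypothetical illustrations, not an existing suite):

XOR_TASK = {
    "inputs":  [(0, 0), (0, 1), (1, 0), (1, 1)],
    "targets": [0, 1, 1, 0],
}

# Further canonical tasks (handwritten letters, spoken digits, ...)
# would be registered under agreed-upon names.
BENCHMARKS = {"xor": XOR_TASK}

def score(model, task):
    """Fraction of the task's patterns that the model gets right."""
    pairs = zip(task["inputs"], task["targets"])
    hits = sum(1 for x, t in pairs if model(x) == t)
    return hits / len(task["targets"])

# e.g. score(lambda x: x[0] ^ x[1], BENCHMARKS["xor"]) returns 1.0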

- Dan Kegel (dank@moc.jpl.nasa.gov)

[[ Editor's Note: In fact, a small email group, headed by Scott Fahlman,
has collected a number of ANN benchmarks and gone about discussing some
of the problems inherent in standardizing the measurement process.
Send mail to nn-bench-request@cs.cmu.edu for more information about
what's collecting bit-dust in their archives. -PM ]]

------------------------------

Subject: Protein folding schemes - a reply
From: smuskal%calv01.hepnet@Csa2.LBL.Gov
Date: Fri, 31 Aug 90 14:19:22 -0700

Doug,

One of the encoding schemes you described is exactly the one people
have been using: a bit vector of length 20 (actually 21, to handle
situations where the window overlaps the end of the amino acid sequence)
with all 0's and a single 1 representing the amino acid in question.
This, given a protein of length 100 or so, would result in a dimension
of 2,000, as you pointed out. I am unclear, however, whether one amino
acid can be described as "independent" of another; therefore, I cannot
see how one could code each amino acid with a single bit. Furthermore,
while many have described matrices or indices of amino acid similarity,
I am not sure how one can convert those to Hamming distances. Can you
clarify?
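
As an illustration, here is a minimal sketch in Python of this 1-of-21
window encoding (the helper names are mine, and the choice of padding
symbol is an assumption):

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues
ALPHABET = AMINO_ACIDS + "-"           # 21st symbol: positions that fall
                                       # off either end of the sequence

def encode_residue(aa):
    """Return a 21-element 0/1 vector with a single 1 for the residue."""
    vec = [0] * len(ALPHABET)
    vec[ALPHABET.index(aa)] = 1
    return vec

def encode_window(seq, center, width):
    """Concatenate 1-of-21 codes for a window of residues around
    'center'; out-of-range positions get the padding symbol."""
    half = width // 2
    out = []
    for i in range(center - half, center + half + 1):
        out.extend(encode_residue(seq[i] if 0 <= i < len(seq) else "-"))
    return out   # length = width * 21

Note, incidentally, that any two distinct 1-of-21 codes differ in
exactly two bit positions, so this encoding puts every pair of different
amino acids at the same Hamming distance; that is one way of seeing the
difficulty with similarity indices raised above.
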
Regarding your question, "What is associated with the amino acid
sequence?", I can simply answer, "the protein's structure." That is,
we are trying to map amino acid sequence to protein structure. The
reason we try to reduce the dimensionality of the amino acid sequence is
the huge dimensionality of protein structure. For 100 amino acids, one
(x,y,z) coordinate for each amino acid would be a nice goal. This,
though, is problematic because of translational and rotational
variations. So a descriptive set of distances between each pair of
amino acids would be the next best thing. Here, a distance is an actual
Cartesian distance between the (x,y,z)'s of two amino acids.
People often represent proteins in "diagonal plots." These plots are
matrices of 0's and 1's: if two amino acids are close in 3-D space (say
within 5 angstroms), a 1 is placed; otherwise a 0. If a network can be
trained to map some representation of the entire amino acid sequence to
this type of distance matrix, then distance geometry can be used to
generate a set of (x,y,z)'s for each amino acid. Bohr et al. (1990) did
this sort of thing with "windows" of amino acids, but because of the
dimensionality in both amino acid sequence and protein structure, they
could not feasibly use the entire amino acid sequence.
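
For concreteness, here is a minimal sketch in Python of building such a
matrix from per-residue (x,y,z) coordinates with the 5-angstrom cutoff
mentioned above (my own illustration, not the procedure of Bohr et al.):

import math

def contact_map(coords, cutoff=5.0):
    """coords: a list of (x,y,z) tuples, one per amino acid. Returns an
    NxN matrix of 0's and 1's, with a 1 wherever two amino acids lie
    within 'cutoff' angstroms of each other in 3-D space."""
    n = len(coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                cmap[i][j] = 1
    return cmap

Such a matrix is unchanged by translations and rotations of the protein,
which is why it sidesteps the coordinate-frame problem mentioned above.
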
It is difficult to further reduce the dimensionality of the protein
structure (have any ideas?), so that is why we are trying to reduce the
dimensionality of the protein sequence.

Steve

P.S. It should be noted that we know the structures of about 300
proteins, many of which are very similar. Not only do there exist
proteins with similar sequences (and, as a result, similar structures),
but there also exist proteins with very different sequences and very
similar structures. These issues, as you know, complicate things
greatly. If you are interested in the work done with neural networks
and protein structure prediction, here is a short list of references:


1) Qian and Sejnowski, J. Mol. Biol. 202, 865-884 (1988)
2) Holley and Karplus, PNAS USA 86, 152-156 (1989)
3) Friedrichs and Wolynes, Science 246, 371-373 (1989)
4) McGregor et al., Protein Engineering 2, 521-526 (1989)
5) Bohr et al., FEBS Lett. 241, 223-228 (1988)
6) Bohr et al., FEBS Lett. 261, 43-46 (1990)
7) Holbrook et al., Protein Engineering 3(8), 659-665 (1990)
8) Muskal et al., Protein Engineering 3(8), 667-672 (1990)

------------------------------

Subject: Questions
From: qian@icopen.ICO.OLIVETTI.COM (DA QUN QIAN)
Date: Tue, 28 Aug 90 11:23:42 +0100

I would like to pose the following questions and hope to hear your
opinions or to receive references bearing on them.

1. I have found that many connectionist models (neural networks) are
implemented in the form of software, instead of hardware. Therefore, I
think it is easy to realize the function of each node by programming.
However, I do not know why the functions of nodes in current neural
networks are so limited. May I define more complex transfer functions
for nodes, and introduce more logic functions into nodes? Would such
models still be neural networks?
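
Mechanically, at least, this is easy in software: a node is just a
function of its weighted input sum, so any transfer function can be
plugged in, as in this minimal Python sketch (the example transfer
functions are illustrative only):

import math

def make_node(weights, transfer):
    """Build a node: a weighted sum of the inputs passed through an
    arbitrary transfer function."""
    def node(inputs):
        s = sum(w * x for w, x in zip(weights, inputs))
        return transfer(s)
    return node

sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))   # the usual smooth choice
step_and = lambda s: 1.0 if s >= 2.0 else 0.0    # a logic-gate-like node

unit1 = make_node([0.5, -0.3], sigmoid)
unit2 = make_node([1.0, 1.0], step_and)          # acts as AND on 0/1 inputs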

I do not know whether there exist general approaches and conclusions,
not dependent on some specific kind of neural network, which can be used
in analysing the following characteristics of neural networks; if such
do not exist, survey papers on the analysis and synthesis of neural
networks would also be welcome.

At first, some symbols are defined as follows:
S(t): state vector of a neural network at time t.
S: a given state vector of a neural network.
W(t): weight vector of a neural network at time t.
W: a given weight vector of a neural network.
S(n,t): state vector of nodes at the top layer (output layer) at time t.
S(n): a given state vector of nodes at output layer.
S(1,t): state vector of nodes at input layer at time t.
S(1): a given state vector of nodes at input layer.
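
One natural reading of "converges" in the questions below (my gloss, for
definiteness) is the usual limit:

    X(t) converges to X  iff  \lim_{t \to \infty} \| X(t) - X \| = 0

so that question 8, for instance, asks: given S(n) and W, does there
exist an input trajectory S(1,t) with \lim_{t \to \infty} S(n,t) = S(n)?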

The following characteristics are required to be analysed:

2. S(t)-convergence.

3. W(t)-convergence, which means W(t) converges to some equilibrium.

4. Stability of S(n,t).

5. Stability of W(t).

6. Stability of S(t).

7. The capacity of a neural network, i.e., how many stable states
can be implemented by the neural network.

8. For any S(n) and W, whether there exists S(1,t) such that S(n,t) will
converge to S(n).

9. For any S(n), whether there exist S(1,t) and W(t) such that S(n,t)
will converge to S(n).

10. For S(n), whether there exists S(1,t) satisfying some restrictions
such that S(n,t) will converge to S(n).

11. For any W, whether there exists S(1,t) such that W(t) will converge to W.

12. For any W, whether there exists S(1,t) satisfying some restrictions
such that W(t) will converge to W.

13. For any S, whether there exists S(1,t) such that S(t) will converge to S.

14. For any S, whether there exists S(1,t) satisfying some restrictions
such that S(t) will converge to S.

15. For any S(1) and W, whether there exists S(n,t) such that S(n,t)
will converge to the S(n) corresponding to S(1).

16. For any S(1), whether there exist S(n,t) and W(t) such that S(n,t)
will converge to the S(n) corresponding to S(1).

17. For any S(1), whether there exist S(n,t) satisfying some
restrictions and W(t) such that S(n,t) will converge to the S(n)
corresponding to S(1).

18. For any S(1), whether there exists W(t) such that W(t) will converge
to the W corresponding to S(1).

19. For any S(1), whether there exists W(t) satisfying some restrictions
such that W(t) will converge to the W corresponding to S(1).

20. For any S(1), whether there exists S(t) such that S(t) will converge
to the S corresponding to S(1).

21. For any S(1), whether there exists S(t) satisfying some restrictions
such that S(t) will converge to the S corresponding to S(1).


My email address: qian@icopen.ico.olivetti.com

My address: Qian Da Qun
Artificial Intelligence Center
Olivetti Nuova ICO 3 Piano
Via Jervis 77, 10015 Ivrea (TO)
Italy.


------------------------------

Subject: PSYCHOLOGY / NEUROSCIENCE / AI POST-DOC IN SAN DIEGO, CA, USA
From: trejo@nprdc.navy.mil (Leonard J. Trejo)
Organization: Navy Personnel R & D Center
Date: 29 Aug 90 15:53:18 +0000


POSTDOCTORAL POSITION IN EXPERIMENTAL PSYCHOLOGY
____________ ________ __ ____________ __________

The Training Systems Department of the Navy Personnel Research
and Development Center (NPRDC), in San Diego, is looking for a
postdoctoral fellow to study cognitive, psychophysical, or
electrophysiological bases of human performance. The primary emphasis
is on the enhancement of human performance in operational military
tasks. However, considerable latitude will be given to the candidate in
developing a research proposal that combines basic and applied research
goals. Current research includes decomposition of human performance in
complex tasks such as radar and sonar monitoring, aircraft navigation,
and weapons firing. Current methods of analysis include reaction times,
accuracy, ROC analysis, evoked potentials, eye movements, and neural
networks. Planned work includes prediction of performance using neural
networks, and enhancing adaptive decision aiding systems for advanced
cockpits by using combined behavioral and electrophysiological measures.

Excellent laboratory facilities are available, including two
Concurrent/Masscomp computer systems, several 80386 PC systems, a
Macintosh SE, and extensive hardware and software for data acquisition
and analysis. Access privileges to VAX 11/780, IBM 4341, and SUN 4
systems, and the INTERNET network are also available.

The position is available through the Postdoctoral Fellowship
Program funded by the U.S. Navy Office of Naval Technology (ONT) and
administered by the American Society for Engineering Education (ASEE).
The appointment is for one year and may be renewed for up to two
additional years. Stipends range from $34,000 to $38,000 per annum,
depending upon experience. A relocation allowance may be negotiated;
the amount is based on the personal situation of the participant.
Funds will be available for limited professional travel.

The successful candidate will have the following qualifications:

1. U. S. Citizenship
2. Ph.D., Sc.D., or equivalent in psychology or neuroscience,
awarded within the past 7 years

Additional experience desired:
1. Training in cognitive or physiological psychology
2. Experience in artificial intelligence/neural networks
3. Proficiency with UNIX and C programming

The application deadline is October 1, 1990, for a term beginning
in December-January 1990-1991. For information about the ONT
Postdoctoral Fellowship Program and an application form, please contact:

American Society for Engineering Education
Projects Office
11 Dupont Circle, Suite 200
Washington, DC 20036
(202) 293-7080

For information about the research program at NPRDC, please contact:

Dr. Leonard J. Trejo
Neuroscience Division, Code 141
Navy Personnel Research and Development Center
San Diego, CA 92152-6800
(619) 553-7711


============================================================================
USENET : trejo@nprdc.navy.mil UUCP: ucsd!nprdc!trejo

U.S. Mail: Leonard J. Trejo, Ph. D. Phone: (619) 553-7711
Neurosciences Division (AV) 553-7711
NPRDC, Code 141
San Diego, CA 92152-6800
(The opinions expressed here are my own, are unofficial, and do not
necessarily reflect the views of the Navy Department.)

------------------------------

Subject: NN + Image Processing (again)
From: daft@debussy.crd.ge.com (Chris Daft)
Date: Wed, 29 Aug 90 16:34:52 -0400


Some time ago I posted a request for references on neural networks and
image processing/image understanding. I got a lot of useful mail, and
here are the results of that and a literature search.

Carver Mead's work on the silicon retina (described in his book 'Analog
VLSI and Neural Systems') is a classic. Another very impressive body of
work is John Daugman's on the computation of image transforms with
non-orthogonal basis functions using neural networks, which is important
for texture recognition (see, for example, IJCNN '88, "Relaxation neural
network for non-orthogonal image transforms," pp. 547-560).

Other references which reflect my interests in medical imaging are:

A. Visa, A texture classifier based on neural network principles

J.M. Boone, Neural networks in radiology

W.T. Katz, Translation-invariant aorta segmentation

N.R. Dupaguntla, A neural net architecture for texture segmentation

N.H. Farhat, Echo inversion and target shape estimation by
neuromorphic processing

C. Obellianne, Connectionist models for image processing

E. Barnard, Image processing for image understanding with neural nets

If you would like to have a complete reference for any of these, please
email me. What I would really like to find are references on
applications of Daugman's methods. If you know of any of these, please
post or mail them.

Conspicuously absent from my list is any mention of Grossberg's work. I
have to write a review paper for a non-neural-net audience out of this
material. I feel that he may well have the best ideas in the field, but
I have yet to read applications of his methods that I understand well
enough to put in a review paper!

Chris Daft, GE Corporate Research and Development Center.


------------------------------

Subject: ICANN International Conference on Artificial Neural Networks
From: Pasi Koikkalainen <pako@neuronstar.it.lut.fi>
Date: Thu, 30 Aug 90 12:05:47 +0300



ICANN-91
INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS

Helsinki University of Technology
Espoo, Finland, June 24-28, 1991




Conference Chair:
Teuvo Kohonen (Finland)

Program Chair:
Igor Aleksander (England)

Conference Committee:
Bernard Angeniol (France)
Eduardo Caianiello (Italy)
Rolf Eckmiller (FRG)
John Hertz (Denmark)
Luc Steels (Belgium)

CALL FOR PAPERS
===================


THE CONFERENCE:
===============
Theories, implementations, and applications of Artificial Neural Networks
are progressing at a growing speed, both in Europe and elsewhere.
The first commercial hardware for neural circuits and systems is emerging.
This conference will be a major international contact forum for experts
from academia and industry worldwide. Around 1000 participants are expected.

ACTIVITIES:
===========
- Tutorials
- Invited talks
- Oral and poster sessions
- Prototype demonstrations
- Video presentations
- Industrial exhibition

-------------------------------------------------------------------------

Complete papers of at most 6 pages are invited for oral or poster
presentation in one of the sessions given below:

1. Mathematical theories of networks and dynamical systems
2. Neural network architectures and algorithms
(including organizations and comparative studies)
3. Artificial associative memories
4. Pattern recognition and signal processing (especially vision and speech)
5. Self-organization and vector quantization
6. Robotics and control
7. "Neural" knowledge data bases and non-rule-based decision making
8. Software development
(design tools, parallel algorithms, and software packages)
9. Hardware implementations (coprocessors, VLSI, optical, and molecular)
10. Commercial and industrial applications
11. Biological and physiological connection
(synaptic and cell functions, sensory and motor functions, and memory)
12. Neural models for cognitive science and high-level brain functions
13. Physics connection (thermodynamical models, spin glasses, and chaos)

--------------------------------------------------------------------------

Deadline for submitting manuscripts is January 15, 1991. The Conference
Proceedings will be published as a book by Elsevier Science Publishers B.V.
Deadline for sending final papers on the special forms is March 15, 1991.
For more information and instructions for submitting manuscripts, please
contact:

Prof. Olli Simula
ICANN-91 Organization Chairman
Helsinki University of Technology
SF-02150 Espoo, Finland
Fax: +358 0 451 3277
Telex: 125161 HTKK SF
Email (internet): icann91@hutmc.hut.fi

---------------------------------------------------------------------------

In addition to the scientific program, several social occasions will be
included in the registration fee. Pre- and post-conference tours and
excursions will also be arranged. For more information about registration
and accommodation, please contact:

Congress Management Systems
P.O.Box 151
SF-00141 Helsinki, Finland
Tel.: +358 0 175 355
Fax: +358 0 170 122
Telex: 123585 CMS SF


------------------------------

Subject: JNNS'90 Program Summary (long)
From: Hideki KAWAHARA <kawahara@av-convex.ntt.jp>
Date: Fri, 31 Aug 90 23:43:46 +0900

The first annual conference of the Japan Neural Network Society (JNNS'90)
will be held from 10 to 12 September, 1990. The following is the program
summary, together with related information on JNNS. There are 2 invited
presentations, 23 oral presentations and 53 poster presentations.
Unfortunately, a list of the presentation titles in English is not
available yet, because many authors did not provide English titles for
their presentations (the official languages for the proceedings were
Japanese and English, but only two articles were written in English). I
will try to compile the English list by the end of September and will
circulate it then.

If you have any questions or comments, please e-mail to the following
address. (Please *DON'T REPLY*.)

kawahara@nttlab.ntt.jp
----------------------------------------------
Hideki Kawahara
NTT Basic Research Laboratories
3-9-11, Midori-cho
Musashino, Tokyo 180, JAPAN
Tel: +81 422 59 2276, Fax: +81 422 59 3393
----------------------------------------------

JNNS'90
1990 Annual Conference of
Japan Neural Network Society

September 10-12, 1990

Tamagawa University,
6-1-1 Tamagawa-Gakuen
Machida, Tokyo 194, Japan

Program Summary

Monday, 10 September 1990
12:00 Registration
13:00 - 16:00 Oral Session O1: Learning
16:00 - 18:00 Poster session P1: Learning, Motion and Architecture
18:00 Organization Committee

Tuesday, 11 September 1990
9:00 - 12:00 Oral Session O2: Motion and Architecture
13:00 - 13:30 Plenary Session
13:30 - 15:30 Invited Talk;
"Brain Codes of Shapes: Experiments and Models" by
Keiji Tanaka
"Theories: from 1980's to 1990's" by
Shigeru Shinomoto
15:30 - 18:30 Oral Session O3: Vision I
19:00 Reception

Wednesday, 12 September 1990
9:00 - 12:00 Oral Session O4: Vision II, Time Series and Dynamics
13:00 - 15:00 Poster Session P2: Vision I, II, Time Series and Dynamics
15:00 - 16:45 Oral Session O5: Dynamics

Room 450 is for Oral Session, Plenary Session and Invited talk.
Rooms 322, 323, 324, 325 and 350 are for Poster Session.

Registration Fees for Conference
Members 5000 yen
Student members 3000 yen
Non-members 8000 yen

Reception
19:00 Tuesday, 11 September 1990
Sakufuu-building
Fee: 5000 yen


JNNS Officers and Governing board

Kunihiko Fukushima Osaka University
President
Shun-ichi Amari University of Tokyo
International Affairs
Secretary
Minoru Tsukada Tamagawa University
Takashi Nagano Hosei University

Publication
Shiro Usui Toyohashi University of Technology
Yoichi Okabe University of Tokyo
Sei Miyake NHK Science and Technical Research Labs.

Planning
Yuichiro Anzai Keio University
Keisuke Toyama Kyoto Prefectural School of Medicine
Nozomu Hoshimiya Tohoku University

Treasurer
Naohiro Ishii Nagoya Institute of Technology
Hideaki Saito Tamagawa University

Regional Affair
Ken-ichi Hara Yamagata University
Hiroshi Yagi Toyama University
Eiji Yodogawa ATR
Syozo Yasui Kyushu Institute of Technology

Supervisor
Noboru Sugie Nagoya University


Committee members

Editorial Committee (Newsletter and mailing list)
Takashi Omori Tokyo University of Agriculture and Technology
Hideki Kawahara NTT Basic Research Labs.
Itirou Tsuda Kyushu Institute of Technology

Planning Committee
Kazuyuki Aihara Tokyo Denki University
Shigeru Shinomoto Kyoto University
Keiji Tanaka The Institute of Physical and Chemical Research


JNNS'90 Conference Organizing Committee

Sei Miyake NHK Science and Technical Research Labs.
General Chairman
Keiji Tanaka The Institute of Physical and Chemical Research
Program Chairman
Shigeru Shinomoto Kyoto University
Publicity Chairman
Program
Takayuki Ito NHK Science and Technical Research Labs.
Takashi Omori Tokyo University of Agriculture and Technology
Koji Kurata Osaka University
Kenji Doya University of Tokyo
Kazuhisa Niki Electrotechnical Laboratory
Ryoko Futami Tohoku University

Publicity
Kazunari Nakane ATR

Publication
Hideki Kawahara NTT Basic Research Labs.
Mahito Fujii NHK Science and Technical Research Labs.

Treasurer
Shin-ichi Kita University of Tokyo
Manabu Sakakibara Toyohashi University of Technology

Local Arrangement
Shigeru Tanaka Fundamental Research Labs., NEC
Makoto Mizuno Tamagawa University


For more details, please contact:

Japan Neural Network Society Office
Faculty of Engineering, Tamagawa University
6-1-1 Tamagawa-Gakuen
Machida, Tokyo 194, Japan
Telephone: +81 427 28 3457
Facsimile: +81 427 28 3597


------------------------------

Subject: Final Call AISB'91
From: B M Smith <bms@dcs.leeds.ac.uk>
Date: Fri, 24 Aug 90 13:31:19 +0100


FINAL CALL FOR PAPERS

AISB'91

8th SSAISB CONFERENCE ON ARTIFICIAL INTELLIGENCE

University of Leeds, UK
16-19 April, 1991

The Society for the Study of Artificial Intelligence and Simulation of
Behaviour (SSAISB) will hold its eighth biennial conference at Bodington
Hall, University of Leeds, from 16 to 19 April 1991. There will be a
Tutorial Programme on 16 April followed by the full Technical Programme.
The Programme Chair will be Luc Steels (AI Lab, Vrije Universiteit
Brussel).

Scope:
Papers are sought in all areas of Artificial Intelligence and Simulation of
Behaviour, but especially on the following AISB'91 special themes:

* Emergent functionality in autonomous agents
* Neural networks and self-organisation
* Constraint logic programming
* Knowledge level expert systems research

Papers may describe theoretical or practical work but should make a
significant and original contribution to knowledge about the field of
Artificial Intelligence.

A prize of 500 pounds for the best paper has been offered by British
Telecom Computing (Advanced Technology Group). It is expected
that the proceedings will be published as a book.

Submission:
All submissions should be in hardcopy, in letter-quality print, written
in 12 point or pica typewriter face on A4 or 8.5" x 11" paper, and
should be no longer than 10 sides, single-spaced.
Each paper should contain an abstract of not more than 200 words and a
list of up to four keywords or phrases describing the content of the
paper. Five copies should be submitted. Papers must be written in
English. Authors should give an electronic mail address where possible.
Submission of a paper implies that all authors have obtained
all necessary clearances from the institution and that an author will
attend the conference to present the paper if it is accepted. Papers
should describe work that will be unpublished on the date of the
conference.

Dates:
Deadline for Submission: 1 October 1990
Notification of Acceptance: 7 December 1990
Deadline for camera ready copy: 16 January 1991

Location: Bodington Hall is on the edge of Leeds, in 14 acres of private
grounds. The city of Leeds is two and a half hours by rail from London,
and there are frequent flights to Leeds/Bradford Airport from London
Heathrow, Amsterdam and Paris. The Yorkshire Dales National Park is close
by, and the historic city of York is only 30 minutes away by rail.

Information:
Papers and all queries regarding the programme should be sent to
Judith Dennison. All other correspondence and queries regarding the
conference should be sent to the Local Organiser, Barbara Smith.

Ms. Judith Dennison
Cognitive Sciences
University of Sussex
Falmer
Brighton BN1 9QN
UK
Tel: (+44) 273 678379
Email: judithd@cogs.sussex.ac.uk

Dr. Barbara Smith
Division of AI
School of Computer Studies
University of Leeds
Leeds LS2 9JT
UK
Tel: (+44) 532 334627
FAX: (+44) 532 335468
Email: aisb91@ai.leeds.ac.uk


------------------------------

Subject: Rutgers' CAIP Neural Network Workshop
From: ananth sankar <sankar@caip.RUTGERS.EDU>
Date: Fri, 24 Aug 90 17:19:35 -0400

Rutgers University

CAIP Center

CAIP Neural Network Workshop

15-17 October 1990

A neural network workshop will be held during 15-17 October 1990 in
East Brunswick, New Jersey under the sponsorship of the CAIP Center of
Rutgers University. The theme of the workshop will be

"Theory and impact of Neural Networks on future technology"

Leaders in the field from government, industry and academia will
present the state-of-the-art theory and applications of neural
networks. Attendance will be limited to about 100 participants.

A partial list of speakers and panelists includes:

J. Alspector, Bellcore
A. Barto, University of Massachusetts
R. Brockett, Harvard University
L. Cooper, Brown University
J. Cowan, University of Chicago
K. Fukushima, Osaka University
D. Glasser, University of California, Berkeley
S. Grossberg, Boston University
R. Hecht-Nielsen, HNC, San Diego
J. Hopfield, California Institute of Technology
L. Jackel, AT&T Bell Labs.
S. Kirkpatrick, IBM, T.J. Watson Research Center
S. Kung, Princeton University
F. Pineda, JPL, California Institute of Technology
R. Linsker, IBM, T.J. Watson Research Center
J. Moody, Yale University
E. Sontag, Rutgers University
H. Stark, Illinois Institute of Technology
B. Widrow, Stanford University
Y. Zeevi, CAIP Center, Rutgers University and The
Technion, Israel

The workshop will begin with registration at 8:30 AM on Monday, 15
October and end at 7:00 PM on Wednesday, 17 October. There will be
dinners on Tuesday and Wednesday evenings followed by special-topic
discussion sessions. The $395 registration fee ($295 for participants
from CAIP member organizations) includes the cost of the dinners.

Participants are expected to remain in attendance throughout the entire
period of the workshop. Proceedings of the workshop will subsequently
be published in book form.

Individuals wishing to participate in the workshop should fill out the
attached form and mail it to the address indicated.

If there are any questions, please contact

Prof. Richard Mammone
Department of Electrical and Computer Engineering
Rutgers University
P.O. Box 909
Piscataway, NJ 08854
Telephone: (201)932-5554
Electronic Mail: mammone@caip.rutgers.edu
FAX: (201)932-4775
Telex: 6502497820 mci


------------------------------ CUT HERE -------------------------------

Rutgers University

CAIP Center

CAIP Neural Network Workshop

15-17 October 1990


I would like to register for the Neural Network Workshop.


Title:________ Last:_________________ First:_______________ Middle:__________

Affiliation _________________________________________________________

Address _________________________________________________________

______________________________________________________

Business Telephone: (___)________ FAX:(___)________

Electronic Mail:_______________________ Home Telephone:(___)________




I am particularly interested in the following aspects of neural networks:

_______________________________________________________________________

_______________________________________________________________________

Fee enclosed $_______
Please bill me $_______

Please complete the above and mail this form to:

Neural Network Workshop
CAIP Center, Rutgers University
Brett and Bowser Roads
P.O. Box 1390
Piscataway, NJ 08855-1390 (USA)



------------------------------

Subject: TR announcement (hardcopy and ftp)
From: Nici Schraudolph <schraudo%cs@ucsd.edu>
Date: Fri, 24 Aug 90 15:18:46 -0700

The following technical report is now available in print:


Dynamic Parameter Encoding for Genetic Algorithms
-------------------------------------------------

Nicol N. Schraudolph Richard K. Belew


The selection of fixed binary gene representations for real-valued
parameters of the phenotype required by Holland's genetic algorithm (GA)
forces either the sacrifice of representational precision for efficiency
of search or vice versa. Dynamic Parameter Encoding (DPE) is a mechanism
that avoids this dilemma by using convergence statistics derived from the
GA population to adaptively control the mapping from fixed-length binary
genes to real values. By reducing the length of genes, DPE causes the GA
to focus its search on the interactions between genes rather than the
details of allele selection within individual genes. DPE also highlights
the general importance of the problem of premature convergence in GAs,
explored here through two convergence models.
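
As a rough illustration of the flavor of this mechanism, here is a toy
sketch in Python (my own simplification for exposition, not the
algorithm from the report): genes decode into a current interval, and
the interval is halved once the population's high-order bits have
converged, so the same gene length buys finer resolution.

def decode(bits, lo, hi):
    """Map a fixed-length bit string to a real value in [lo, hi]."""
    k = int("".join(str(b) for b in bits), 2)
    return lo + (hi - lo) * k / (2 ** len(bits) - 1)

def maybe_zoom(population, lo, hi):
    """Shrink the decoding interval when every gene in the population
    agrees on its high-order bit; convergence statistics thus control
    the gene-to-real mapping."""
    first_bits = [bits[0] for bits in population]
    if all(b == 1 for b in first_bits):
        return (lo + hi) / 2.0, hi     # population sits in upper half
    if all(b == 0 for b in first_bits):
        return lo, (lo + hi) / 2.0     # population sits in lower half
    return lo, hi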

--------

To obtain a hardcopy, request technical report LAUR 90-2795 via e-mail
from office%bromine@LANL.GOV, or via plain mail from

Technical Report Requests
CNLS, MS-B258
Los Alamos National Laboratory
Los Alamos, NM 87545
USA

--------

As previously announced, the report is also available in compressed
PostScript format for anonymous ftp from the Artificial Life archive
server. To obtain a copy, use the following procedure:

$ ftp iuvax.cs.indiana.edu % (or 129.79.254.192)
login: anonymous
password: <anything>
ftp> cd pub/alife/papers
ftp> binary
ftp> get schrau90-dpe.ps.Z
ftp> quit
$ uncompress schrau90-dpe.ps.Z
$ lpr schrau90-dpe.ps

--------

The DPE algorithm is an option in the GENESIS 1.1ucsd GA simulator, which
will be ready for distribution (via anonymous ftp) shortly. Procedures
for obtaining 1.1ucsd will then be announced on this mailing list.

--------

Nici Schraudolph, C-014 nschraudolph@ucsd.edu
University of California, San Diego nschraudolph@ucsd.bitnet
La Jolla, CA 92093 ...!ucsd!nschraudolph

------------------------------

End of Neuron Digest [Volume 6 Issue 52]
****************************************
