NEURON Digest       20 MAR 1987       Volume 2 Number 8 

Topics in this digest --
Queries - Genetic Algorithms &
NASA ANS Survey
News - Newsletters for Neural Networks
Seminars/Courses - The CONE Computational Network Environment (IBM) &
A Stochastic Genetic Search Method (CMU) &
Creative Analogies in Scientific Progress (UPenn) &
On The Connectionist Model (MIT) &
An Emerging Framework for Connectionist Visual Recognition (Berkeley) &
Fluid Concepts and Creative Analogies
Conferences/Call for papers - Meeting Announcement & Call for Papers

----------------------------------------------------------------------

Date: 11 Mar 87 00:57:31 GMT
From: amdcad!amd!intelca!mipos3!omepd!uoregon!hp-pcd!hpcvlo!karen@ucbvax.Berkeley.EDU (Karen Helt)
Subject: Genetic Algorithms

I am investigating genetic algorithms as they relate to
machine learning and in particular classifier systems.
I hope to do my master's thesis in this area. I am
trying to locate literature in this area. Does anyone
know how I can get a copy of the "Proceedings of an International
Conference on Genetic Algorithms and Their Applications, 1985"?

Also, it appears that a lot of work on genetic algorithms
has been done at the University of Michigan. There are a number
of Ph.D. theses of Univ. of Michigan students referenced in the
articles I have found. Is Univ. of Michigan on the net? Will
someone there please contact me and tell me how I can get copies
of some of the theses? I would appreciate any help and information
anyone can give me.

Thanks.

Karen Helt
Hewlett-Packard Company
Corvallis Workstation Operation
Corvallis, Oregon
part-time graduate student at Oregon State University
hplabs!hp-pcd!karen

------------------------------

Date: 12-MAR-1987
From: GATELY%CRL1@TI-CSL.CSNET
Subject: Newsletters for Neural Networks

This message is meant simply to inform the reader of two
newsletters which seem to be focusing on neural networks. The
first is named "Intelligence," is edited by Edward Rosenfeld, and
is available for $295 per year (published monthly). The address
for more information (and perhaps a free copy) is POBox 20008,
New York, NY 10025, (212) 749-8048.

The second newsletter is titled "Neurocomputers," is edited by
Derek F. Stubbs, and is available (on a new member basis?) for
US$24 (USA, Canada, and Mexico) or US$32 (all other countries) per
year (published bi-monthly). The address is: NEUROCOMPUTERS,
Gallifrey Publishing, POBox 155, Vicksburg, Michigan 49097.

Intelligence seems to be an older (seasoned) newsletter, dealing
with all aspects of AI - but focusing on neural networks. The
issue of Neurocomputers that I have (V1 #1) has a wide variety
of NN items (news, books, results).

I have no ties with either of these newsletters!

------------------------------

Date: 13-MAR-1987 19:47
From: VERACSD@A.ISI.EDU
Subject: NASA ANS Survey

Neural Network Researchers:

NASA (Johnson Space Center) is sponsoring a survey of recent work in
Artificial Neural Systems. The primary interest is in work on automatic
perception for NASA mission planning and flight control, but the survey
is also aimed at assessing the potential of current research in Artificial
Neural Systems in general. Professor Terrence Smith of the Department
of Computer Science of the University of California at Santa Barbara, and
Dan Greenwood and Cris Kobryn of VERAC, Inc., are conducting the survey
for NASA.

It would be greatly appreciated if you would send summaries or preprints
of your current (within the last year) work in Artificial Neural Networks.
Please send any relevant material to:

Dan Greenwood
VERAC, Inc.
9605 Scranton Road, Suite 500
San Diego, CA 92121-1771

ARPA: VERACSD@A.ISI.EDU
Phone: (619)457-5550

Please also indicate if you wish to receive a copy of the survey results
in about five months.

Dan Greenwood
Senior Scientist

------------------------------

Date: 6-MAR-1987 15:28
From: not(LAWS@SRI-STRIPE.ARPA)
Subject: Seminar - The CONE Computational Network Environment (IBM)


IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099

CALENDAR

March 9 - 13, 1987


THE COMPUTATIONAL NETWORK ENVIRONMENT (CONE):
A TOOL FOR FLOW-OF-ACTIVATION PROCESSING
C. A. Cruz, IBM Palo Alto Scientific Center

Computer Science Seminar Mon., Mar. 9 11:00 A.M. Room: B2-307

Since late 1981, a group at IBM's Palo Alto Scientific Center has been
studying a neural-network-based computational mechanism called
"flow-of-activation networks," or "FAN." A FAN net processes
information through the collective parallel computation of a large
number of simple processors ("nodes"), communicating with each other
through directed point-to-point channels ("links"). As a means for
exploring various FAN applications, such as knowledge-based systems,
robotic vision and distributed real-time control, we have developed
numerous hardware and software tools for network design, debugging, and
delivery. This "toolkit" supports the following functions: (1) A FAN
net must be designed to perform the processing required by a given
application. Much of our recent effort has been in the design and
implementation of a "Generalized Network Language" compiler, which
reduces the abstract network description to a generic intermediate
network description. This "NETSPEC" description is then "assembled"
into an executable form. The resulting executable network can then be
loaded into a compatible network emulation engine. (2) The executable
network must be exercised to verify its correct operation. This
requires the use of a network "development system" analogous to that
commonly used for microprocessor system design. Our "Interactive
Execution Program," or "IXP," supports rapid interactive emulation of
large FAN nets. This system also drives a network under test with
stimulus signals, and captures and displays the evolving network
state. (3) Useful FAN nets are often very large. The computing
capabilities of a FAN may not be attractive unless the network can be
run very quickly (especially in real-time applications). The IXP can
make use of a parallel processor (the Network Emulation Processor, or
"NEP") which greatly speeds up the processing of large networks. We
will describe the CONE system, and will sketch some early applications
of the FAN mechanism.
Host: D. Petkovic

[...]
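
The node-and-link mechanism described above lends itself to a very small sketch. The following is a hypothetical illustration of synchronous flow-of-activation processing, not CONE, GNL, or IXP code; every name in it is invented:

```python
# Toy flow-of-activation net: simple nodes ("processors") exchange
# activation over directed point-to-point links, updated in lockstep.

class Node:
    def __init__(self, activation=0.0):
        self.activation = activation

def step(nodes, links):
    """One synchronous parallel step: each node sums weighted incoming
    activation computed from the previous state of the network."""
    incoming = {name: 0.0 for name in nodes}
    for src, dst, weight in links:
        incoming[dst] += weight * nodes[src].activation
    for name, total in incoming.items():
        nodes[name].activation = max(0.0, total)   # simple rectifying node

nodes = {"a": Node(1.0), "b": Node(0.0), "c": Node(0.0)}
links = [("a", "b", 0.5), ("b", "c", 2.0)]         # directed channels
step(nodes, links)
step(nodes, links)
print(nodes["c"].activation)     # 1.0: activation has flowed a -> b -> c
```

A network emulation engine such as the NEP would evaluate all nodes of a step genuinely in parallel; the sequential loop here only mimics that.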

------------------------------

Date: Sun 1 Mar 87 17:49:12-EST
From: Dave Ackley <David.Ackley@C.CS.CMU.EDU>
Subject: Seminar - A Stochastic Genetic Search Method (CMU)


David H. Ackley
Carnegie Mellon Computer Science doctoral dissertation defense
Tuesday, February 24, 1987 at 1pm
Wean Hall 5409

"Stochastic iterated genetic hillclimbing"

Abstract

In the "black box function optimization" problem, a search strategy is
required to find an extremal point of a function without knowing the
structure of the function or the range of possible function values.
Solving such problems efficiently requires two abilities. On the one
hand, a strategy must be capable of "learning while searching": It must
gather global information about the space and concentrate the search in
the most promising regions. On the other hand, a strategy must be
capable of "sustained exploration": If a search of the most promising
region does not uncover a satisfactory point, the strategy must redirect
its efforts into other regions of the space.

This dissertation describes a connectionist learning machine that
produces a search strategy called "stochastic iterated genetic
hillclimbing" (SIGH). Viewed over a short period of time, SIGH displays
a coarse-to-fine searching strategy, like simulated annealing and
genetic algorithms. However, in SIGH the convergence process is
reversible. The connectionist implementation makes it possible to
"diverge" the search after it has converged, and to recover
coarse-grained information about the space that was suppressed during
convergence. The successful optimization of a complex function by SIGH
usually involves a series of such converge/diverge cycles.

SIGH can be viewed as a generalization of a genetic algorithm and a
stochastic hillclimbing algorithm, in which genetic search discovers
starting points for subsequent hillclimbing, and hillclimbing biases the
population for subsequent genetic search. Several search
strategies---including SIGH, hillclimbers, genetic algorithms, and
simulated annealing---are tested on a set of illustrative functions and
on a series of graph partitioning problems. SIGH is competitive with
genetic algorithms and simulated annealing in most cases, and markedly
superior in a function where the uphill directions usually lead \away/
from the global maximum. In that case, SIGH's ability to pass
information from one coarse-to-fine search to the next is crucial.
Combinations of genetic and hillclimbing techniques can offer dramatic
performance improvements over either technique alone.
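
The hybrid idea in the closing paragraph -- genetic search discovering starting points, hillclimbing refining them -- can be sketched in a few lines. This is a crude illustration on a toy objective, not SIGH itself (SIGH is a connectionist machine with reversible convergence); all names and parameters here are invented:

```python
import random

def onemax(bits):                       # toy black-box objective
    return sum(bits)

def hillclimb(bits, f, tries=50):
    """Flip single random bits, keeping any improvement."""
    bits = bits[:]
    for _ in range(tries):
        cand = bits[:]
        cand[random.randrange(len(bits))] ^= 1
        if f(cand) >= f(bits):
            bits = cand
    return bits

def genetic_step(pop, f):
    """Recombine the two fittest members; the child replaces the worst."""
    pop = sorted(pop, key=f, reverse=True)
    cut = random.randrange(1, len(pop[0]))
    child = pop[0][:cut] + pop[1][cut:]
    return pop[:-1] + [child]

random.seed(0)
n = 20
pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(6)]
for _ in range(10):
    pop = genetic_step(pop, onemax)               # global information
    pop = [hillclimb(b, onemax) for b in pop]     # local refinement
best = max(pop, key=onemax)
print(onemax(best))
```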

------------------------------

Date: Tue, 3 Mar 87 19:11 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Creative Analogies in Scientific Progress (UPenn)


SPECIAL JOINT COLLOQUIUM
Computer Science, Psychology and Physics
University of Pennsylvania

THE ROLE OF CREATIVE ANALOGIES IN SCIENTIFIC PROGRESS: COMPUTER MODELING

Professor Douglas R. Hofstadter, University of Michigan

2:30 p.m. Wednesday, March 4, 1987
Tea served at 2:00 in the Faculty Lounge (2E17)

David Rittenhouse Lab - Auditorium A1

The Copycat project is a computer model of analogical thought processes,
particularly ones in which a creative or daring leap is made of the sort that
when done in science often postulates new theoretical constructs or objects
(genes, particles, etc.). Examples of such analogies in science will be
presented and the copycat model will be discussed.

------------------------------

Date: 13-MAR-1987 19:45
From: Raul.Valdes-Perez@b.gp.cs.cmu.edu
Subject: Seminar - On The Connectionist Model (MIT)

I came across this talk, and haven't seen it reported in AILIST,
so I'm posting it here. I think the speaker means `connectionist'
machines, not `connection' machines.

The place is: Theory of Computation Seminar, MIT LCS.

------------------------------------------------------------------------------
Thursday, 5 March 3:00pm Room: NE43-512A


TOC SEMINAR

On The Connectionist Model


Prof. JIA-WEI HONG
University of Chicago
and
Peking University



"The one area where it seems that there may be a breakthrough
is that of the connectionist machines. Very little is known,
as yet, about the computational potential and limitations of
such machines." -- See Bobrow: Where are we?

This comment belies the understanding in complexity theory of systematic
relationships between many computational models -- TM, VM, RAM,.... For
example, the present author has established that these models all need
essentially the same (viz., polynomially related) parallel time,
sequential time, and space simultaneously.

A prima facie contrast between connection machines and other models is
that connection machines are specified by n by n real matrices, and the
information content of an arbitrary real number is potentially infinite. On
the other hand, finite information is sufficient to specify a machine of the
other models. So it is natural to ask: Does the computational power of
connectionist machines differ fundamentally from other known models?

We show the answer is NO -- the connectionist model has essentially the
same computational power as the others. In particular, any connectionist
machine defined by an n by n real matrix can be simulated by an
aggregate of O(n^3 log(n)) gates having information complexity
O(n^2 log(n)), with an O(log n) time slow-down factor. The idea of the
simulation comes from the "gap theory" which the author developed recently
for proving theorems in geometry by testing examples.

Host: Prof. Albert R. Meyer
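
As a minimal picture of the model in question -- assuming that a connectionist machine "defined by an n by n real matrix" means the threshold iteration x' = sign(Wx) -- the following toy runs such a machine. The abstract's result is that the real entries of W can be replaced by finitely many gates with only an O(log n) slow-down; this sketch shows only the model, not the simulation:

```python
import numpy as np

# A "connectionist machine" as an n x n real matrix W driving the
# threshold update x' = sign(W x) on a vector of +/-1 node states.
rng = np.random.default_rng(1)
n = 8
W = rng.standard_normal((n, n))          # real-valued wiring matrix
x = np.where(rng.standard_normal(n) >= 0, 1.0, -1.0)

for _ in range(10):                      # ten parallel steps
    x = np.where(W @ x >= 0, 1.0, -1.0)

print(x)                                 # a vector of +/-1 states
```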


------------------------------

Date: 18-MAR-1987 23:59
From: admin%cogsci.Berkeley.EDU@berkeley.edu
Subject: An Emerging Framework for Connectionist Visual Recognition (Berkeley)

SPEAKER: Dr. Daniel Sabbah, IBM Yorktown Heights
TITLE: An Emerging Framework for Connectionist Visual Recognition
WHERE: Hogan Room, 5th floor Cory Hall
WHEN: Thursday March 19, 2:00-3:30 pm

ABSTRACT: Several years ago, Ballard proposed the notion of Parameter Spaces
as a possible framework for vision and visual recognition. Since then, little
effort has been made to show whether this notion has any merit in building
complex vision systems. In this talk, we lay the foundation for a vision
system based on parameter networks called QCV (Quadric Connectionist Vision)
to recognize objects which are intersecting quadric surfaces of revolution.
QCV generalizes Sabbah's origami world vision system (which used intersecting
planar patches). A large portion of man-made objects can be either modeled
exactly, or at least well approximated by a number of patches of such surfaces.

A major goal is to test whether this connectionist formulation of the
recognition problem can 'scale' to larger worlds/scenes (ca. 100s of complex
objects in a model base). We develop the basic ideas that reduce the
`indexing' problem: feature and resolution hierarchies, combinatorial
implosion via parallel search, shape extraction and representation using
differential geometry. Some first experiments in shape extraction illustrate
QCV's behavior.
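
Ballard's parameter-space notion is easiest to see in its simplest instance, the Hough transform for lines: image features vote for the parameters of every shape that could have produced them, and peaks in parameter space index the recognized shape. The toy below votes collinear points into discretized (rho, theta) cells; QCV applies the same voting idea to patches of quadric surfaces rather than lines:

```python
import math
from collections import Counter

points = [(x, x + 2) for x in range(10)]    # points on the line y = x + 2
votes = Counter()
for x, y in points:
    for t in range(180):                    # angle discretized in degrees
        theta = math.radians(t)
        rho = x * math.cos(theta) + y * math.sin(theta)
        votes[(round(rho, 1), t)] += 1      # one cell of parameter space

(rho, t), count = votes.most_common(1)[0]
print(count, t)       # 10 135: all ten points agree at theta = 135
```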
---------------------------------------------------------------------------
Dr. Sabbah will be in Berkeley from about 10:30 that day, so if anyone would
like to meet with him, please send a message to malik@ernie.
-------

------------------------------

Date: 9-MAR-1987 19:06
From: MM%FARG.UMICH.EDU@UMIX.CC.UMICH.EDU
Subject: Talk: Fluid Concepts and Creative Analogies


WEEKLY AI SEMINAR, UNIVERSITY OF MICHIGAN, ANN ARBOR
SPEAKER: Melanie Mitchell, EECS Dept., University of Michigan
DATE: Tuesday, March 17
TIME: 4:30 pm
PLACE: 1303 EECS Building (North Campus)
TITLE: "Fluid Concepts and Creative Analogies:
A Theory and its Computer Implementation"

Abstract

This talk is based on research done by Douglas R. Hofstadter,
Melanie Mitchell, and Robert M. French. We describe the principles
of Copycat, a computer model of how humans use concepts fluidly in
order to create analogies. Our model is centered on the Slipnet, a
network of overlapping concepts whose shapes are determined dynamically
by the situations faced by the program. Reciprocally, the state of the
Slipnet controls how Copycat perceives situations. The heart of what
Copycat does, given two situations, is to produce a worlds-mapping: a
coarse-grained mental correspondence between the situations, involving
two interdependent and mutually consistent facets: an object-to-object
mapping realized in structures called bridges, and a concept-to-concept
mapping realized in structures called pylons. Each pylon expresses a
so-called conceptual slippage, borrowed from the slipnet. Taken together,
the slippages constitute a recipe for translating actions in one situation
into their analogues in the other. Through the "coattails effect",
slippages can induce closely related slippages, allowing deeper and more
subtle analogies to be produced than would otherwise be possible.
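
The vocabulary of the previous paragraph can be pictured with a toy data structure. This is only an illustration of the terminology, not Copycat's implementation; the sample slippages echo the letter-string analogies (e.g. "abc" -> "abd") that the group studies:

```python
# A worlds-mapping, very crudely: bridges pair objects across the two
# situations, pylons pair concepts, and each pylon carries a conceptual
# slippage. Translating an action applies the slippages term by term.

bridges = {"a": "z", "c": "x"}            # object-to-object mapping
pylons = {"rightmost": "leftmost",        # concept-to-concept slippages
          "successor": "predecessor"}

def translate(action, bridges, pylons):
    slippages = {**bridges, **pylons}
    return [slippages.get(word, word) for word in action]

action = ["replace", "rightmost", "letter", "by", "successor"]
print(" ".join(translate(action, bridges, pylons)))
# replace leftmost letter by predecessor -- the action carried across
```

(This particular action exercises only the pylons; the bridges would come into play for actions that name specific objects.)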

For copies of a paper describing this research, send messages to
mm@farg.umich.EDU

------------------------------

Date: 12-MAR-1987 9:24
From: GATELY%CRL1@TI-CSL.CSNET
Subject: Meeting Announcement & Call for Papers

MEETING ANNOUNCEMENT
CALL FOR PAPERS

IEEE Conference on:

NEURAL INFORMATION PROCESSING
SYSTEMS - NATURAL AND SYNTHETIC

November 8-12, 1987 (Sun - Thurs)
Boulder, Colorado

Sponsors:
IEEE Information Theory Group (Organizing Sponsor)
IEEE Sponsoring Societies:
Acoustics, Speech, and Signal Processing
Computer (expected)
Circuits and Systems
Systems, Man and Cybernetics (expected)
Non-IEEE Co-Sponsors:
American Physical Society
Society for Neuroscience (requested)

Host Institution:
University of Colorado, Boulder

Supporting Agencies
NASA; SDIO; ONR; others expected

Organizing Committee:
Edward C. Posner, CIT - EE Dept and JPL (General Chairman)
Yaser Abu-Mostafa, Caltech (Program Chairman)
Charles Butler, BDM Corp. (Treasurer)
Clifford Lau, ONR (Publicity Chairman)
Howard Wachtel, U of Colorado (Local Arrangements Chairman)
Larry Jackel, Bell Labs (Physics Liaison)
James Bower, Caltech (Neurobiology Liaison)

Technical Theme:
Electronic and optical realizations of neural networks have been
shown to be able to perform various and surprising processing
functions such as associative recall, combinatorial optimization,
source encoding and channel decoding, pattern recognition, and
others. Conversely, simulation and perhaps soon experiment have
shown that actual neural networks may sometimes work in a similar
fashion to engineering realizations inspired by neurobiology.
The collaborative behavior of neural networks has many properties
closely related to that of a spin glass in statistical physics,
and to random coding in information theory. It is the purpose of
this conference to bring together researchers from engineering,
physics, and biology to provide for wide research interactions in
this rapidly evolving field, including circuit and system
builders, information theorists, computational complexity and
optimization researchers, experimental and theoretical
neurobiologists, and statistical and device physicists. Both the
information processing industry and neurobiology are expected to
benefit.
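
One of the processing functions listed above, associative recall, has a standard small demonstration in a Hopfield-style network, whose energy function is also the source of the spin-glass connection mentioned in the theme. The sketch below is illustrative only, with invented patterns and sizes:

```python
import numpy as np

# Store two orthogonal +/-1 patterns with Hebbian outer-product weights,
# then recall the first one from a corrupted probe.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = patterns.T @ patterns / n            # Hebbian weight matrix
np.fill_diagonal(W, 0)                   # no self-connections

probe = patterns[0].copy()
probe[:2] *= -1                          # corrupt two bits
x = probe.astype(float)
for _ in range(10):                      # synchronous sign updates
    x = np.where(W @ x >= 0, 1.0, -1.0)

print(np.array_equal(x, patterns[0]))    # True: the stored pattern is recalled
```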

Submission of contributed papers:
Original research contributions of direct relevance to the
technical theme above are solicited and will be refereed by
several experts representing all disciplines involved. Both
significance and appeal will be criteria for acceptance. Only a
limited number of papers will be accepted for oral presentation
in a maximum of two parallel sessions. Authors should send six
copies of a 500-word summary and one copy of a 50-100 word
abstract clearly stating their results to the Program Chairman:
Professor Yaser S. Abu-Mostafa, Caltech 116-81, Pasadena, CA
91125. The deadline for receiving the abstract and summary is
May 1, 1987. Earlier submissions are encouraged.

Post-meeting mountain retreat and/or workshops:
If preliminary interest is sufficient, we will arrange group
travel and lodging in Summit County, Colorado, for the period
Thursday, November 12 through Sunday, November 15 for the purpose
of holding workshops and/or skiing and other mountain
recreations. Three major ski resorts (Keystone, Breckenridge,
and Copper Mountain) which provide complimentary transportation
are close by and expect to be in operation at that time. Bus
transportation and three nights lodging (double occupancy) will
probably cost less than $100, and ski lift tickets will be
available to us at approximately half the regular season price.
Please indicate your potential interest along with suggestions
you may have for workshop topics.

Expression of Interest:
Mail a message to E.C. Posner
Neural Networks Meeting
Caltech 116-81
Pasadena, CA 91125
Indicate:
O I'll probably be attending. Please
put me on the mailing list.
O I don't know yet if I will attend,
but please put me on the mailing
list anyway.
O I expect to submit a paper titled
________________________________
O I'm probably interested in the post-
meeting mountain retreat and/or
workshop. The following are my
suggestions for topics:
________________________________

[The only e-mail address that I have for any of the above
folks is that for Prof. Abu-Mostafa: yaser@csvax.caltech.edu
I do not know whether he wants to receive a flood of
mail about this, but...MTG]

------------------------------

End of NEURON Digest
********************
