Neuron Digest Monday, 25 Jun 1990 Volume 6 : Issue 41
Today's Topics:
Research Interest
Re: Protein Structure Prediction
Facial Feature Detector
Picture filter?
Research Positions Available
Cascade-Correlation simulator in C
A GA Tutorial and a GA Short Course
Re: Neuron Digest V6 #40
Independent Synaptic Rules
Re: Independent, again
report offered
$$ for TR
THIRD INTERNATIONAL SYMPOSIUM ON AI IN MEXICO
tech report on story comprehension
Call for Papers - connectionist analogy etc.
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).
------------------------------------------------------------
Subject: Research Interest
From: "Drago T. Indjic, +38 (11) 322 633 " <indjic@ijs.ac.mail.yu>
Date: 14 Jun 90 00:18:00 +0200
It has been a pleasure to read Neuron Digest over the last few months.
I am an EE student at the Univ. of Beograd, Yugoslavia. I am
interested in Soviet research in neural networks and advanced computing,
and I own over fifty books on this topic, most of them published during
the last few years (in Russian, of course).
Are there any other researchers monitoring Soviet research
from unclassified sources? I am currently compiling a small bibliography
and am looking for help.
(a) A few days ago I received the final program of the NEURONET 90
conference, to be held in Prague (Czechoslovakia) in September. There will
be more than 60 papers by Soviet researchers, the largest Eastern
participation at a neural network conference (as far as I know). If the
NEURONET 90 mailing did not reach a large number of USA and Western
European researchers, I will submit the registration information to the
Digest.
(b) Recently I contacted the IEEE Computer Society Technical Committee on
Computer Languages, and I am interested in helping them with neural
network description languages. If anyone is interested in joining, please
contact me or the TC CL (bb@cs.tulane.edu).
(c) At least several dozen times I have been asked to name at least
one real product using VLSI chips implementing neural networks. However,
I have heard only of the SAIC bomb detector. What about the (many) other
products (on the market) using fancy neurochips?
Drago Indjic
27. marta 39/9
11000 Beograd
Yugoslavia
phone: +38 (11) 322 633
fax: +38 (11) 623 169
e-mail: indjic@ijs.ac.mail.yu
------------------------------
Subject: Re: Protein Structure Prediction
From: Benny Lautrup <LAUTRUP@nbivax.nbi.dk>
Date: Thu, 14 Jun 90 13:38:00 +0200
In answer to russo@caip.rutgers.edu: The following two references are
concerned with protein structure prediction with neural networks.
Bohr et al., FEBS Letters 241, 223-228 (1988)
Bohr et al., FEBS Letters 261, 43-46 (1990)
The second paper deals with aspects of the tertiary structure, whereas the
first deals with secondary structure.
------------------------------
Subject: Facial Feature Detector
From: MRE1%VMS.BRIGHTON.AC.UK@CORNELLC.cit.cornell.edu
Date: Thu, 14 Jun 90 14:49:00 +0100
I would like to know if anyone has or knows of a database of images of
human faces or portrait shots of humans. I require the images to form
training and test sets for a neural network for facial feature detection.
This is part of my PhD program and not a commercial development. I would
greatly appreciate any help.
Mark Evans
mre1@uk.ac.bton.vms
------------------------------
Subject: Picture filter?
From: "Bill Ross" <ross@cgl.ucsf.edu>
Date: Thu, 14 Jun 90 11:40:04 -0700
An idea for a nn application: a "picture filter" which would transmit a
picture through the network, displaying a modified version. One would
train it by selecting preferred regions for reinforcement and perhaps by
penalizing other areas. After iterating on a single picture, it would be
interesting to see how the next picture was treated. Could we derive an
"eye of the artist" which would be like a living painting? Taking it a
step further, motion effects could be filtered by a network and also
modified.
Alternatively, the input could be a set of photos of scenes painted by a
given artist, and the network would be trained to reproduce the
corresponding paintings. Would the style carry over to new paintings?
What if the input to a nn is an actor mimicking presidential speeches,
and the reinforcement is the presidential original? Could anything the
actor said be mapped to a simulated presidential utterance?
What is the computational complexity of these problems? The first idea
in particular might allow arbitrary simplification, e.g. to the point of
being dumber than feature detection applications. What methods/hardware
would be most suitable? Can I buy a PC or Y-MP & get to work? Will
political PACs fund the creation of a "presidentializing" filter that
will paint a candidate with the qualities of chosen predecessors?
Bill Ross
------------------------------
Subject: Research Positions Available
From: "Dr. Thaddeus F. Pawlicki" <pawlicki@Kodak.COM>
Date: Fri, 15 Jun 90 14:30:19 -0400
The Signal Processing Research Group of Eastman Kodak has ongoing
research in the areas of Image Processing and Neural Networks. The
focus of this work is document understanding and pattern recognition. We
are actively recruiting individuals who are interested in these areas.
Serious inquirers should contact (by hardcopy or phone):
Dr. Roger Gaborski
Eastman Kodak Company
901 Elmgrove Road
Rochester, New York, 14653-5722
(716) 726-4169
------------------------------
Subject: Cascade-Correlation simulator in C
From: Scott.Fahlman@SEF1.SLISP.CS.CMU.EDU
Date: Sun, 17 Jun 90 01:02:35 -0400
Thanks to Scott Crowder, one of my graduate students at Carnegie Mellon,
there is now a C version of the public-domain simulator for the
Cascade-Correlation learning algorithm. This is a translation of the
original simulator that I wrote in Common Lisp. Both versions are now
available by anonymous FTP -- see the instructions below. Before anyone
asks, we are *NOT* prepared to make tapes and floppy disks for people.
Since this code is in the public domain, it is free and no license
agreement is required. Of course, as a matter of simple courtesy, we
expect people who use this code, commercially or for research, to
acknowledge the source. I am interested in hearing about people's
experience with this algorithm, successful or not. I will maintain an
E-mail mailing list of people using this code so that I can inform users
of any bug-fixes, new versions, or problems. Send me E-mail if you want
to be on this list.
If you have questions that specifically pertain to the C version, contact
Scott Crowder (rsc@cs.cmu.edu). If you have more general questions about
the algorithm and how to run it, contact me (fahlman@cs.cmu.edu). We'll
try to help, though the time we can spend on this is limited. Please use
E-mail for such queries if at all possible. Scott Crowder will be out of
town for the next couple of weeks, so C-specific problems might have to
wait until he returns.
The Cascade-Correlation algorithm is described in S. Fahlman and C.
Lebiere, "The Cascade-Correlation Learning Architecture," in D. S.
Touretzky (ed.), _Advances_in_Neural_Information_Processing_Systems_2_,
Morgan Kaufmann Publishers, 1990. A tech report containing essentially
the same information can be obtained via FTP from the "neuroprose"
collection of PostScript files at Ohio State. (See instructions below.)
Enjoy,
Scott E. Fahlman
School of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
- ---------------------------------------------------------------------------
To FTP the simulation code:
For people (at CMU, MIT, and soon some other places) with access to the
Andrew File System (AFS), you can access the files directly from
directory "/afs/cs.cmu.edu/project/connect/code". This file system uses
the same syntactic conventions as BSD Unix: case sensitive names, slashes
for subdirectories, no version numbers, etc. The protection scheme is a
bit different, but that shouldn't matter to people just trying to read
these files.
For people accessing these files via FTP:
1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu".
2. Log in as user "anonymous" with no password. You may see an error
message that says "filenames may not have /.. in them" or something like
that. Just ignore it.
3. Change remote directory to "/afs/cs/project/connect/code". Any
subdirectories of this one should also be accessible. The parent
directories may not be.
4. At this point FTP should be able to get a listing of files in this
directory and fetch the ones you want. The Lisp version of the
Cascade-Correlation simulator lives in the file "cascor1.lisp"; the C
version lives in "cascor1.c".
If you try to access this directory by FTP and have trouble, please
contact me.
The exact FTP commands you use to change directories, list files, etc.,
will vary from one version of FTP to another.
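For illustration, a session with a typical BSD-style client might look
like the following (your client's prompts may differ):
unix> ftp pt.cs.cmu.edu
Name: anonymous
ftp> cd /afs/cs/project/connect/code
ftp> get cascor1.c
ftp> quit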
- ---------------------------------------------------------------------------
To access the postscript file for the tech report:
unix> ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get fahlman.cascor-tr.ps.Z
ftp> quit
unix> uncompress fahlman.cascor-tr.ps.Z
unix> lpr fahlman.cascor-tr.ps (use flag your printer needs for Postscript)
- ---------------------------------------------------------------------------
------------------------------
Subject: A GA Tutorial and a GA Short Course
From: Clay Bridges <clay@CS.CMU.EDU>
Date: Tue, 19 Jun 90 12:35:16 -0400
A tutorial entitled "Genetic Algorithms and Classifier Systems" will be
presented on Wednesday afternoon, August 1, at the AAAI conference in
Boston, MA by David E. Goldberg (Alabama) and John R. Koza (Stanford).
The course will survey GA mechanics, power, applications, and advances
together with similar information regarding classifier systems and other
genetics-based machine learning systems. For further information
regarding this tutorial, write to AAAI-90, 445 Burgess Drive, Menlo Park,
CA 94025, (415) 328-3123.
A five-day short course entitled "Genetic Algorithms in Search,
Optimization, and Machine Learning" will be presented at Stanford
University's Western Institute in Computer Science on August 6-10 by
David E. Goldberg (Alabama) and John R. Koza (Stanford). The course
presents in-depth coverage of GA mechanics, theory and application in
search, optimization, and machine learning. Students will be encouraged
to solve their own problems in hands-on computer workshops monitored by
the course instructors. For further information regarding this course
contact Joleen Barnhill, Western Institute in Computer Science, PO Box
1238, Magalia, CA 95954, (916)873-0576.
------------------------------
Subject: Re: Neuron Digest V6 #40
From: Javier Movellan <jm2z+@andrew.cmu.edu>
Date: Tue, 19 Jun 90 17:34:16 -0400
In response to: " I want to use the Hopfield network [1] with graded
response for classification purposes, but I have trouble finding an
appropriate learning rule. Are there any articles on this? Does anyone
have experience? "
Contrastive Hebbian Learning works very well, especially when the
activation function ranges from -1.0 to 1.0. Here are some references:
Peterson C and Anderson JR (1987) A mean field theory learning algorithm
for neural networks. Complex Systems, 1, 995-1019.
Hinton GE (1989) Deterministic Boltzmann learning performs steepest
descent in weight-space. Neural Computation, 1, 143-150.
Movellan J (1990) Contrastive Hebbian Learning in interactive networks,
submitted to Neural Computation. Preprint in the neuroprose archive at
Ohio State.
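For readers who have not seen the rule: the weight update is the
difference between the Hebbian products measured in a "clamped" phase
(inputs and desired outputs fixed) and a "free" phase (inputs only). A
minimal sketch in C follows; the names and learning rate are
illustrative, not taken from the papers above, and the settling of the
two phases is assumed to happen elsewhere.

/* Contrastive Hebbian weight update (sketch).
   plus[i]  = activation of unit i after settling with inputs and
              outputs clamped ("plus" phase)
   minus[i] = activation of unit i after settling with inputs only
              ("minus" phase)
   w        = symmetric weight matrix over n units                   */
void chl_update(double **w, const double *plus, const double *minus,
                int n, double lrate)
{
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < i; j++) {
            double dw = lrate * (plus[i]*plus[j] - minus[i]*minus[j]);
            w[i][j] += dw;
            w[j][i] = w[i][j];      /* keep the weights symmetric */
        }
}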
------------------------------
Subject: Independent Synaptic Rules
From: Patrick Thomas <thomasp@lan.informatik.tu-muenchen.dbp.de>
Date: 21 Jun 90 14:01:23 +0200
I wonder who is currently supporting the idea of INDEPENDENT
(non-Hebbian) rules for synaptic plasticity apart from Finkel & Edelman
(1). They formulated a synaptic plasticity mechanism based on a
PRESYNAPTIC RULE (the efficacy of the presynaptic terminal depends only
on the activity of the presynaptic neuron; no postsynaptic firing or
above-threshold depolarization is necessary, and ALL presynaptic
terminals are affected) and on a POSTSYNAPTIC RULE, which is a
heterosynaptic modification rule similar to that of Changeux, Alkon and
others.
There is general agreement that Hebb's original notion of postsynaptic
FIRING as a condition for synaptic weight increase is inappropriate.
Usually a postsynaptic DEPOLARIZATION is said to be needed, with a further
refinement preventing unbounded weight increase, namely some kind of
ANTI-HEBB condition which decreases synaptic weight in the absence of
correlated conditions of "activity" (cf. Stent 1973, Palm and others). Of
course there are numerous other variations of Hebb rules not to be
considered here (cf. Brown, 1990, Ann. Rev. Neurosci.).
But, what shall we do with the following two facts:
1) No mechanism is known to detect coincidence of pre-/postsynaptic
"activity". The NMDA-receptor complex is currently en vogue, but the
available data are inconclusive.
2) There is a growing amount of data on heteroassociative interactions
LOCAL to the dendritic tree, between neighbouring synapses. So why not
redefine our models based on these observations?
All of the heteroassociative effects of synaptic plasticity, of course,
rely on some kind of "postsynaptic activity". But this is not meant in
the Hebb sense of involving the postsynaptic neuron as a functional
whole, but rather in the context of local depolarization affecting
neighbouring membrane channel properties, for example. Alkon therefore
simulates with "neurons" having distinct patches for incoming
signals.
In addition to a postsynaptic/heterosynaptic mechanism there is ample
evidence for homo/multisynaptic facilitation and depression which is
independent of postsynaptic activity.
Edelman's DUAL RULES MODEL sketched earlier could therefore well be an
appropriate starting point for the investigation of new learning laws to
be applied within the context of Artificial Neural Networks (though it
needs some refinements).
Could someone provide references to work either crushing the idea of
independent modification rules or supporting it?
Thanks in advance.
Patrick Thomas
Computer Science, Munich Technical University
(1) "Synaptic Function", Edelman/Gall/Cowan (eds), Wiley, 1987.
PS: Bad timing. I bet everybody is in San Diego.
PPS: The Kelso (1986) and Bonhoeffer (1989) results are admittedly a
challenge to non-Hebbian rules. Hopefully a moderate one.
------------------------------
Subject: Re: Independent, again
From: Terry Sejnowski <tsejnowski@UCSD.EDU>
Date: Thu, 21 Jun 90 12:50:15 -0700
There are two types of LTP in the hippocampus: one in area CA1 (and
elsewhere) that depends on the NMDA receptor (and is blocked by AP5), and
another in area CA3 that is not blocked by AP5. The latter appears not
to be associative and may not be Hebbian (but the experimental evidence
is not yet definitive on this point).
In addition to heterosynaptic depression (postsynaptic activity in the
absence of presynaptic activity) there is also evidence for homosynaptic
depression (presynaptic activity in the absence of postsynaptic
activity). For a review of these mechanisms, see Sejnowski et al.,
Induction of synaptic plasticity by Hebbian covariance in the
hippocampus. In: R. Durbin, C. Miall and G. Mitchison (Eds.), The
Computing Neuron, Addison-Wesley, 1989. Incidentally, this collection of
papers is one of the best on the interface of biology with computational
models. Another good recent collection specifically on biologically
relevant connectionist models is Connectionist Modeling and Brain
Function, Hanson and Olson (Eds.), MIT Press, 1990.
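Both kinds of depression fall out of the Hebbian covariance rule cited
above: a synapse is strengthened when pre- and postsynaptic activity are
both above their baseline levels and weakened when one is above and the
other below. A minimal sketch in C, with illustrative names (in practice
the baselines would be running averages of activity):

/* Hebbian covariance weight change (sketch).
   pre above baseline, post below -> dw < 0 (homosynaptic depression)
   pre below baseline, post above -> dw < 0 (heterosynaptic depression)
   both above baseline            -> dw > 0 (potentiation)            */
double covariance_dw(double pre, double post,
                     double pre_base, double post_base, double lrate)
{
    return lrate * (pre - pre_base) * (post - post_base);
}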
The emerging evidence from neurobiologists is that there is a
multiplicity of mechanisms for plasticity at synapses. Furthermore,
there are mechanisms that can change the excitability of a neuron, such
as changing the density or voltage dependence of ion-selective channels
in the membrane. This is similar to changing the threshold and shape of
the nonlinearity, except that the change may be specific to a dendritic
branch, not the whole neuron. This gives Nature (and modelers) a much
richer palette of mechanisms to work with.
Terry
------------------------------
Subject: report offered
From: FRANKLINS%MEMSTVX1.BITNET@VMA.CC.CMU.EDU
Date: Fri, 15 Jun 90 12:44:00 -0500
What follows is the abstract of a technical report, really more of a
position paper, by Max Garzon and myself. It deals more with neural
networks as computational tools than as models of cognition. It was
motivated by more technical work of ours on the outer reaches of neural
computation under ideal conditions. An extended abstract of this work
appeared as "Neural computability II", in Proc. 3rd Int. Joint. Conf. on
Neural Networks, Washington, D.C. 1989 I, 631- 637.
*******************************************************
When does a neural network solve a problem?
Stan Franklin and Max Garzon
Institute for Intelligent Systems
Department of Mathematical Sciences
Memphis State University
Memphis, TN 38152 USA
Abstract
Reproducibility, scalability, controllability, and
physical realizability are characteristic features of conventional
solutions to algorithmic problems. Their desirability for neural network
approaches to computational problems is discussed. It is argued that
reproducibility requires the eventual stability of the network at a
correct answer, scalability requires consideration of successively larger
finite (or just infinite) networks, and the other two features require
discrete activations. A precise definition of solution with these
properties is offered. The importance of the stability problem in
neurocomputing is discussed, as well as the need for study of infinite
networks.
*******************************************************
A hard copy of the position paper (report 90-11) and/or a full version of
"Neural computability II" may be requested from
franklins@memstvx1.bitnet. We would greatly appreciate your comments.
Please do not REPLY to this message.
Stan Franklin
------------------------------
Subject: $$ for TR
From: white@cs.rochester.edu
Date: Mon, 18 Jun 90 12:49:19 -0400
>The following technical report is now available:
>
>
> LEARNING TO PERCEIVE AND ACT
>
> Steven D. Whitehead and Dana H. Ballard
>
> Technical Report # 331 (Revised)
> Department of Computer Science
> University of Rochester
> Rochester, NY 14627
>
>ABSTRACT: This paper considers adaptive control architectures that
>integrate active sensory-motor systems with decision systems based
>on reinforcement learning. One unavoidable consequence of active perception
>is that the agent's internal representation often confounds external world
>states. We call this phenomenon perceptual aliasing and show that it
>destabilizes existing reinforcement learning algorithms with respect
>to the optimal decision policy. We then describe a new decision system
>that overcomes these difficulties for a restricted class of decision
>problems. The system incorporates a perceptual subcycle within the overall
>decision cycle and uses a modified learning algorithm to suppress the effects
>of perceptual aliasing. The result is a control architecture that learns not
>only how to solve a task but also where to focus its attention in order to
>collect necessary sensory information.
>
>
>The report can be obtained by sending requests to either peg@cs.rochester.edu
>or white@cs.rochester.edu. Be sure to mention TR331(revised) in your request.
I failed to mention that the TR costs $2.00. If you have already
requested the TR and NO LONGER WANT IT, PLEASE MAIL ME. Otherwise, I'll
just send the TR (along with the bill). My original intention was to
bypass our standard billing procedure and distribute the TR freely;
however, the overwhelming number of requests has made that impractical.
I apologize for the hassle.
-Steve
------------------------------
Subject: THIRD INTERNATIONAL SYMPOSIUM ON AI IN MEXICO
From: "Centro de Inteligencia Artificial(ITESM)" <ISAI%tecmtyvm.mty.itesm.mx@RELAY.CS.NET>
Organization: Instituto Tecnologico y de Estudios Superiores de Monterrey
Date: Mon, 18 Jun 90 17:23:40 -0600
To whom it may concern:
Below you will find information concerning the
"THIRD INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE".
Please post it on your bulletin board.
Thank you very much in advance. We look forward to your
participation. Sincerely,
The Symposium Publicity Committee.
====================================================================
THIRD INTERNATIONAL SYMPOSIUM ON
ARTIFICIAL INTELLIGENCE:
APPLICATIONS OF ENGINEERING DESIGN & MANUFACTURING IN
INDUSTRIALIZED AND DEVELOPING COUNTRIES
OCTOBER 22-26, 1990
ITESM, MEXICO
The Third International Symposium on Artificial Intelligence will
be held in Monterrey, N.L. Mexico on October 22-26, 1990.
The Symposium is sponsored by the ITESM (Instituto Tecnologico y
de Estudios Superiores de Monterrey) in cooperation with the
International Joint Conferences on Artificial Intelligence Inc.,
the American Association for Artificial Intelligence, the Sociedad
Mexicana de Inteligencia Artificial and IBM of Mexico.
GOALS:
* Promote the development and use of AI technology in the
solution of real-world problems.
* Analyze the state of the art of AI technology in different countries.
* Evaluate efforts made in the use of AI technology in all countries.
FORMAT:
ISAI consists of a tutorial and a conference.
Tutorial.- Oct. 22-23
Set of seminars on relevant AI topics given in two days.
Topics covered in the Tutorial include:
"Expert Systems in Manufacturing"
Mark Fox, Ph.D., Carnegie Mellon University, USA
"A.I. as a Software Development Methodology"
Randolph Goebel, Ph.D., University of Alberta, Canada
Conference.- Oct. 24-26
Set of lectures given during three days. It consists of
invited papers and selected papers from the "Call for Papers"
invitation. Areas of application include: computer aided product
design, computer aided product manufacturing, use of industrial
robots, process control and ES, automatic process inspection and
production planning.
Confirmed guest speakers:
Nick Cercone, Ph.D, Simon Fraser University, Canada
Alan Mackworth, Ph.D, University of British Columbia, Canada
IMPORTANT:
Computer manufacturers, AI commercial companies,
universities, and authors of selected papers with working
programs may present products and demonstrations during the
conference.
In order to encourage an atmosphere of friendship and exchange
among participants, some social events are being organized.
For your convenience we have arranged a free shuttle bus
service between the hotel zone and the ITESM during the
three-day conference.
FEES: (Valid before August 31)
Tutorial.-
Professionals $ 250 USD + Tx(15%)
Students $ 125 USD + Tx(15%)
Conference.-
Professionals $ 180 USD + Tx(15%)
Students $ 90 USD + Tx(15%)
Simultaneous Translation $ 7 USD
Formal dinner $ 25 USD *
*(Includes dinner, open bar, music (Oct 26))
Tutorial fee includes:
Tutorial material.
Welcoming cocktail party (Oct.22)
Conference fee includes:
Proceedings.
Welcoming cocktail party (Oct.24)
Cocktail party. (Oct.25)
HOTELS:
Call one of the hotels listed below and mention that you
are attending the 3rd ISAI. Published rates are for single or
double rooms.
HOTEL PHONE* RATE
Hotel Ambassador 42-20-40 $85 USD + Tx(15%)
Gran Hotel Ancira 42-48-06 $75 USD + Tx(15%)
91(800) 83-060
Hotel Monterrey 43-51-(20 to 29) $60 USD + Tx(15%)
Hotel Rio 44-90-40 $48 USD + Tx(15%)
* The area code for Monterrey is (83).
REGISTRATION PROCEDURE:
Send personal check payable to "I.T.E.S.M." to:
"Centro de Inteligencia Artificial,
Attention: Leticia Rodriguez,
Sucursal de Correos "J", C.P. 64849,
Monterrey, N.L. Mexico"
INFORMATION:
CENTRO DE INTELIGENCIA ARTIFICIAL, ITESM.
SUC. DE CORREOS "J", C.P. 64849 MONTERREY, N.L. MEXICO.
TEL. (83) 58-20-00 EXT.5132 or 5143.
TELEFAX (83) 58-07-71, (83) 58-89-31,
NET ADDRESS:
ISAI AT TECMTYVM.BITNET
ISAI AT TECMTYVM.MTY.ITESM.MX
------------------------------
Subject: tech report on story comprehension
From: "Mark St. John" <stjohn%cogsci@ucsd.edu>
Date: Tue, 19 Jun 90 13:44:00 -0700
The Story Gestalt
Text Comprehension by Cue-based Constraint Satisfaction
Mark F. St. John
Department of Cognitive Science, UCSD
Abstract:
Cue-based constraint satisfaction is an appropriate algorithm for many
aspects of story comprehension. Under this view, the text is seen to
contain cues that are used as evidence to constrain a full interpretation
of a story. Each cue can constrain the interpretation in a number of
ways, and cues are easily combined to produce an interpretation. Using
this algorithm, a number of comprehension tasks become natural and easy.
Inferences are drawn and pronouns are resolved automatically as an
inherent part of processing the text. The developing interpretation of a
story is revised as new information becomes available. Knowledge learned
in one context can be shared in new contexts. Cue-based constraint
satisfaction is naturally implemented in a recurrent connectionist
network where the weights encode the constraints. Propositions are
processed sequentially to add constraints to refine the story
interpretation. Each of the processes mentioned above is seen as an
instance of a general constraint satisfaction process. The model learns
its representation of stories in a hidden unit layer called the Story
Gestalt. Learning is driven by asking the model questions about a story
during processing. Errors in question answering are used to modify the
weights in the network via Back Propagation.
------------
The report can be obtained from the neuroprose database by the
following procedure.
unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): neuron
ftp> cd pub/neuroprose
ftp> type binary
ftp> get
(remote-file) stjohn.story.ps.Z
(local-file) foo.ps.Z
ftp> quit
unix> uncompress foo.ps.Z
unix> lpr -P(your_local_postscript_printer) foo.ps
------------------------------
Subject: Call for Papers - connectionist analogy etc.
From: Keith J Holyoak <holyoak@cognet.ucla.edu>
Date: Thu, 21 Jun 90 10:41:57 -0700
[[ Editor's Note: This looks like it's troff format with -me macros. If
necessary, just ignore the lines which start with a ".". -PM ]]
.ll 7i
.nr LL 7i
.ps 11
.nr PS 11
.nr VS 13
.nf
.ta 1.5i
.UL "Information for inclusion in a CALL FOR PAPERS"
SERIES: Advances in Connectionist and Neural Computation Theory
SERIES EDITOR: John A. Barnden
VOLUME: 2
VOLUME TITLE: \fIConnectionist Approaches to Analogy, Metaphor and
Case-Based Reasoning\fR.
.ta 2i
VOLUME EDITORS: Keith J. Holyoak
Department of Psychology
University of California
Los Angeles, CA 90024.
(213) 206-1646
holyoak@cognet.ucla.edu
John A. Barnden
Computer Science Department
& Computing Research Laboratory
Box 30001/3CRL
New Mexico State University
Las Cruces, NM 88003.
(505) 646-6235
jbarnden@nmsu.edu
.fi
.LP
DESCRIPTION
.PP
Connectionist capabilities such as associative retrieval, approximate
matching, soft constraint handling and adaptation hold considerable
promise for supporting analogy-based reasoning, case-based reasoning and
metaphor processing. At the same time, these three strongly related
forms of processing traditionally involve complex symbol structures, and
connectionism continues to have difficulty in providing the benefits
normally supplied by such structures. Recently, some connectionist
approaches to metaphor, analogy and case-based reasoning have begun to
appear, and the purpose of our volume is to encourage further work and
discussion in this area.
.PP
The volume will include both invited and submitted peer-reviewed
articles. We are seeking submissions from researchers in any relevant
field \*- artificial intelligence, psychology, philosophy, linguistics,
and others. Articles can be positive, neutral or negative on the
applicability of connectionism to analogy/metaphor/case-based processing.
They can be of any type, including subfield reviews, general discussions,
critiques, detailed presentations of models or supporting mechanisms,
formal theoretical analyses, empirical studies, and methodological
studies.
.LP
SUBMISSION PROCEDURE
Submissions may be sent to either editor by 20 November 1990. The
suggested length is 7,000-20,000 words excluding figures, references,
abstract and so on. Format details, etc. will be supplied on request.
Authors are strongly encouraged to discuss ideas for possible submissions
with the editors.
.sp 3
.LP
((ADVERTISEMENT FOR VOLUME 1, TO BE INSERTED BY ABLEX))
------------------------------
End of Neuron Digest [Volume 6 Issue 41]
****************************************