Neuron Digest Sunday, 30 Jul 1989 Volume 5 : Issue 32
Today's Topics:
Administrivia
Abstracts from Journal of Experimental and Theoretical AI
Book Reviews, Journal of Mathematical Psychology
Report available
sort of connectionist:
Subcognition and the Limits of the Turing Test
Technical Report Available
TR: CONNECTIONISM AND COMPOSITIONAL SEMANTICS
TR announcement
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205)
------------------------------------------------------------
Subject: Administrivia
From: "Neuron-Digest Moderator -- Peter Marvit" <neuron@hplabs.hp.com>
Date: Sun, 30 Jul 89 21:21:30 -0700
Greetings,
My department moved last week. My system's name remains the same, but its
Internet address has changed. The machine was, in any case, unavailable for
most of that time. Things have now stabilized, and you can retrieve Digest
back issues via anonymous ftp from hplpm.hpl.hp.com (15.255.176.205).
I'm interested in any trip reports or reactions from IJCNN last month. What
did people see, hear, or think?
-Peter Marvit
Your Immoderator
------------------------------
Subject: Abstracts from Journal of Experimental and Theoretical AI
From: cfields@NMSU.Edu
Date: Sun, 09 Apr 89 15:56:55 -0600
_________________________________________________________________________
The following are abstracts of papers appearing in the second issue of the
Journal of Experimental and Theoretical Artificial Intelligence, to appear
in April, 1989.
For submission information, please contact either of the editors:
Eric Dietrich
PACSS - Department of Philosophy
SUNY Binghamton
Binghamton, NY 13901
dietrich@bingvaxu.cc.binghamton.edu

Chris Fields
Box 30001/3CRL
New Mexico State University
Las Cruces, NM 88003-0001
cfields@nmsu.edu
JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia
_________________________________________________________________________
Generating plausible diagnostic hypotheses with self-processing causal
networks
Jonathan Wald, Martin Farach, Malle Tagamets, and James Reggia
Department of Computer Science, University of Maryland
A recently proposed connectionist methodology for diagnostic problem solving
is critically examined for its ability to construct problem solutions. A
sizeable causal network (56 manifestation nodes, 26 disorder nodes, 384
causal links) served as the basis of experimental simulations. Initial
results were discouraging, with less than two-thirds of simulations leading
to stable solution states (equilibria). Examination of these simulation
results identified a critical period during simulations, and analysis of the
connectionist model's activation rule during this period led to an
understanding of the model's nonstable oscillatory behavior. Slower
decrease in the model's control parameters during the critical period
resulted in all simulations reaching a stable equilibrium with plausible
solutions. As a consequence of this work, it is possible to more rationally
determine a schedule for control parameter variation during problem solving,
and the way is now open for real-world experimental assessment of this
problem solving method.
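As a rough illustration of the remedy described above (and emphatically not
the Reggia et al. model itself: the network sizes, activation rule, and decay
rates below are all invented), a settling loop whose control parameter is
decreased more slowly during a critical window of the run might look like
this in Python:

# A generic, minimal sketch: disorder nodes compete to explain observed
# manifestations, and the control parameter scaling each update step is
# decreased more slowly while the run is inside a "critical" window.
# Everything here (sizes, rule, rates) is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_dis, n_man = 8, 16
causes = (rng.random((n_man, n_dis)) < 0.3).astype(float)  # hypothetical causal links
observed = (rng.random(n_man) < 0.5).astype(float)         # hypothetical findings

def make_schedule(steps, window=(0.3, 0.6), slow_factor=4.0, base_rate=0.004):
    """Control parameter decaying from 0.9 toward 0.1, with a slower decay
    rate while the elapsed fraction of the run lies inside the window."""
    vals = [0.9]
    for t in range(1, steps):
        frac = t / steps
        rate = base_rate / slow_factor if window[0] <= frac <= window[1] else base_rate
        vals.append(max(0.1, vals[-1] - rate))
    return np.array(vals)

def settle(control, steps=300):
    a = np.full(n_dis, 0.5)                     # disorder-node activations
    for t in range(steps):
        # each observed finding divides its support among the disorders that
        # can cause it, in proportion to their current activations
        expl = causes * a
        share = expl / (expl.sum(axis=1, keepdims=True) + 1e-9)
        target = share.T @ observed
        target = target / (target.max() + 1e-9)
        if np.max(np.abs(target - a)) < 1e-3:
            return a, t                         # stable solution state reached
        a = a + control[t] * (target - a)       # relaxed synchronous update
    return a, steps

control = make_schedule(300)
solution, n_iter = settle(control)
print("settled after", n_iter, "iterations:", np.round(solution, 2))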
_________________________________________________________________________
Organizing and integrating edge segments for texture discrimination
Kenzo Iwama and Anthony Maida
Department of Computer Science, Pennsylvania State University
We propose a psychologically and psychophysically motivated texture
segmentation algorithm. The algorithm is implemented as a computer program
which parses visual images into regions on the basis of texture. The
program's output matches human judgements on a very large class of stimuli.
The program and algorithm offer very detailed hypotheses of how humans might
segment stimuli, and also suggest plausible alternative explanations to
those presented in the literature. In particular, contrary to Julesz and
Bergen (1983), the program does not use crossings as textons and does use
corners as textons. Nonetheless, the program is able to account for the
same data. The program accounts for much of the linking phenomena of Beck,
Prazdny, and Rosenfeld (1983). It does so by matching structures between
feature maps on the basis of spatial overlap. These same mechanisms are
also used to account for the feature integration phenomena of Treisman
(1985).
----------------------------------------------------------------------------
Towards a paradigm shift in belief representation methodology
John Barnden
Computing Research Laboratory, New Mexico State University
Research programs must often divide issues into manageable sub-issues. The
assumption is that an approach developed to cope with a sub-issue can later
be integrated into an approach to the whole issue - possibly after some
tinkering with the sub-approach, but without affecting its fundamental
features. However, the present paper examines a case where an AI issue has
been divided in a way that is, apparently, harmless and natural, but is
actually fundamentally out of tune with the realities of the issue. As a
result, some approaches developed for a certain sub-issue cannot be extended
to a total approach without fundamental modification. The issue in question
is that of modeling people's beliefs, hopes, intentions, and other
``propositional attitudes'', and/or interpreting natural language sentences
that report propositional attitudes. Researchers have, quite
understandably, de-emphasized the problem of dealing in detail with nested
attitudes (e.g. hopes about beliefs, beliefs about intentions about
beliefs), in favor of concentrating on the sub-issue of nonnested attitudes.
Unfortunately, a wide variety of approaches to attitudes are prone to a deep
but somewhat subtle problem when they are applied to nested attitudes. This
problem can be very roughly described as an AI system's unwitting imputation
of its own arcane ``theory'' of propositional attitudes to other agents.
The details of this phenomenon have been published elsewhere by the author:
the present paper merely sketches it, and concentrates instead on the
methodological lessons to be drawn, both for propositional attitude research
and, more tentatively, for AI in general. The paper also summarizes an
argument (presented more completely elsewhere) for an approach to attitude
representation based in part on metaphors of mind that are commonly used by
people. This proposed new research direction should ultimately coax
propositional attitude research out of the logical armchair and into the
psychological laboratory.
----------------------------------------------------------------------------
The graph of a boolean function
Frank Harary
Department of Computer Science, New Mexico State University
(Abstract not available)
------------------------------
Subject: Book Reviews, Journal of Mathematical Psychology
From: INAM000 <INAM%MCGILLB.BITNET@VMA.CC.CMU.EDU>
Date: Sat, 01 Apr 89 12:40:00 -0500
The purpose of this mailing is to (re)draw your attention to the
fact that the Journal of Mathematical Psychology, published by Academic
Press, publishes reviews of books in the general area of mathematical
(social, biological,....) science. For instance, in a forthcoming issue, a
review of the revised edition of Minsky and Papert's PERCEPTRONS will appear
(written by Jordan Pollack). The following is a partial list of books that
we have recently received that I would like to get reviewed for the Journal
-- those most relevant to this group are marked by *s. As you will see, most
of them are edited readings, which are hard to review. However, if you are
interested in reviewing one or more of the books, I would like to hear from
you. Our reviews are additions to the literature, not "straight" reviews, so
writing a review for us gives you an opportunity to express your views on a
field of research. I would also like to be kept informed of new books in
this general area that you think we should review (or at least list in our
Books Received section). And, of course, one reward for writing a review is
that you receive a complimentary copy of the book.
(SELECTED) Books Received
The following books have been received for review. We encourage readers to
volunteer themselves as reviewers. We consider our reviews contributions to
the literature, rather than "straight" reviews, and thus reviewers have
considerable freedom in terms of format, length, and content of their
reviews. Readers who would like to review any of these or previously listed
books should contact A. A. J. Marley, Department of Psychology, McGill
University, 1205 Avenue Dr. Penfield, Montreal, Quebec H3A 1B1, Canada.
(Email address: inam@musicb.mcgill.ca on BITNET).
*Amit, D. J. Modelling Brain Function: The World of Attractor Neural
   Networks. Cambridge, England: Cambridge University Press, 1989. Pp. 500.
Collins, A. and Smith, E. E. Readings in Cognitive Science: A Perspective
   from Psychology and Artificial Intelligence. San Mateo, California:
   Morgan Kaufmann, 1988. 661pp.
*Cotterill, R. M. J. (Ed). Computer Simulation in Brain Sciences. New York,
   New York: Cambridge University Press, 1988. 576pp. $65.00.
*Grossberg, S. (Ed). Neural Networks and Natural Intelligence. Cambridge,
   Massachusetts: MIT Press, 1988. 637pp. $35.00.
Hirst, W. The Making of Cognitive Science: Essays in Honor of George A.
   Miller. New York, New York: Cambridge University Press, 1988. 288pp.
   $29.95.
Laird, P. D. Learning from Good and Bad Data. Norwell, Massachusetts:
   Kluwer Academic, 1988. 211pp.
*MacGregor, R. J. Neural and Brain Modeling. San Diego, California:
   Academic Press, 1987. 643pp. $95.50.
Ortony, A., Clore, G. L. and Collins, A. The Cognitive Structure of
   Emotions. New York, New York: Cambridge University Press, 1988. 175pp.
   $24.95.
*Richards, W. (Ed). Natural Computation. Cambridge, Massachusetts:
   Bradford/MIT Press, 1988. 561pp. $25.00.
Shrobe, H. E. and the American Association for Artificial Intelligence
   (Eds). Exploring Artificial Intelligence: Survey Talks from the National
   Conferences on Artificial Intelligence. San Mateo, California: Morgan
   Kaufmann, 1988. 693pp.
Vosniadou, S. and Ortony, A. Similarity and Analogical Reasoning. New York,
   New York: Cambridge University Press, 1988. 410pp. $44.50.
Wilkins, D. E. Practical Planning: Extending the Classical AI Planning
   Paradigm. San Mateo, California: Morgan Kaufmann, 1988. 205pp.
------------------------------
Subject: Report available
From: Catherine Harris <harris%cogsci@ucsd.edu>
Date: Tue, 09 May 89 20:51:02 -0700
CONNECTIONIST EXPLORATIONS IN COGNITIVE LINGUISTICS
Catherine L. Harris
Department of Psychology and Program in Cognitive Science
University of California, San Diego
Abstract:
Linguists working in the framework of cognitive linguistics have suggested
that connectionist networks may provide a computational formalism well
suited for the implementation of their theories. The appeal of these
networks includes the ability to extract the family resemblance structure
inhering in a set of input patterns, to represent both rules and exceptions,
and to integrate multiple sources of information in a graded fashion. The
possible matches between cognitive linguistics and connectionism were
explored in an implementation of the Brugman and Lakoff (1988) analysis of
the diverse meanings of the preposition "over." Using a gradient-descent
learning procedure, a network was trained to map patterns of the form
"trajector verb (over) landmark" to feature-vectors representing the
appropriate meaning of "over." Each word was identified as a unique item,
but was not further semantically specified. The pattern set consisted of a
distribution of form-meaning pairs that was meant to be evocative of
English usage, in that the regularities implicit in the distribution spanned
the spectrum from rules, to partial regularities, to exceptions. Under
pressure to encode these regularities with limited resources, the network
used one hidden layer to recode the inputs into a set of abstract
properties. Several of these categories, such as dimensionality of the
trajector and vertical height of the landmark, correspond to properties B&L
found to be important in determining which schema a given use of "over"
evokes. This abstract recoding allowed the network to generalize to
patterns outside the training set, to activate schemas to partial patterns,
and to respond sensibly to "metaphoric" patterns. Furthermore, a second
layer of hidden units self-organized into clusters which capture some of the
qualities of the radial categories described by B&L. The paper concludes by
describing the "rule-analogy continuum". Connectionist models are
interesting systems for cognitive linguistics because they provide a
mechanism for exploiting all points of this continuum.
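As a purely illustrative sketch of the kind of mapping described above (not
the network reported in the paper: the vocabulary, feature set, layer sizes,
and training pairs below are all invented), localist "trajector verb
landmark" patterns can be mapped to meaning-feature vectors by a small
network trained with gradient descent:

# Hypothetical vocabulary and schema features; each word is a one-hot code,
# so the network sees only form, not hand-coded semantics.
import numpy as np

words = ["bird", "plane", "fence", "hill", "fly", "jump"]
feats = ["above", "contact", "path", "tall-landmark"]

def encode(trajector, verb, landmark):
    """Concatenate three one-hot word vectors into one input pattern."""
    x = np.zeros(3 * len(words))
    for slot, w in enumerate((trajector, verb, landmark)):
        x[slot * len(words) + words.index(w)] = 1.0
    return x

# invented form-meaning pairs for "X verb over Y"
data = [
    (("bird", "fly", "fence"),  [1, 0, 1, 0]),
    (("plane", "fly", "hill"),  [1, 0, 1, 1]),
    (("bird", "jump", "fence"), [1, 1, 1, 0]),
]
X = np.array([encode(*form) for form, _ in data])
Y = np.array([meaning for _, meaning in data], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.5, (X.shape[1], 6))   # input -> hidden recoding
W2 = rng.normal(0, 0.5, (6, len(feats)))   # hidden -> meaning features
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):                  # plain gradient descent on squared error
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ dO
    W1 -= 0.5 * X.T @ dH

# generalization to a pattern outside the (tiny) training set
test = encode("plane", "jump", "fence")
print(dict(zip(feats, np.round(sigmoid(sigmoid(test @ W1) @ W2), 2))))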
A short version of this paper will be published in The Proceedings of the
Fifteenth Annual Meeting of the Berkeley Linguistics Society, 1989.
Send requests to: harris%cogsci.ucsd.edu
------------------------------
Subject: sort of connectionist:
From: James Hendler <hendler@icsib9.Berkeley.EDU>
Date: Wed, 03 May 89 14:29:29 -0700
CALL FOR PAPERS
CONNECTION SCIENCE
(Journal of Neural Computing, Artificial
Intelligence and Cognitive Research)
Special Issue --
HYBRID SYMBOLIC/CONNECTIONIST SYSTEMS
Connectionism has recently seen a major resurgence of interest among both
artificial intelligence and cognitive science researchers. The spectrum of
connectionist approaches is quite large, ranging from structured models, in
which individual network units carry meaning, through distributed models of
weighted networks with learning algorithms. Very encouraging results,
particularly in ``low-level'' perceptual and signal processing tasks, are
being reported across the entire spectrum of these models. Unfortunately,
connectionist systems have had more limited success in those ``higher
cognitive'' areas where symbolic models have traditionally shown promise:
expert reasoning, planning, and natural language processing.
While it may not be inherently impossible for purely connectionist
approaches to handle complex reasoning tasks someday, it will require
significant breakthroughs for this to happen. Similarly, getting purely
symbolic systems to handle the types of perceptual reasoning that
connectionist networks perform well would require major advances in AI. One
approach to the integration of connectionist and symbolic techniques is the
development of hybrid reasoning systems in which differing components can
communicate in the solving of problems.
This special issue of the journal Connection Science will focus on the state
of the art in the development of such hybrid reasoners. Papers are
solicited which focus on:
Current artificial intelligence systems which use
connectionist components in the reasoning tasks they
perform.
Theoretical or experimental results showing how symbolic
computations can be implemented in, or augmented by,
connectionist components.
Cognitive studies which discuss the relationship between
functional models of higher level cognition and the ``lower
level'' implementations in the brain.
The special issue will give particular consideration to papers sharing the
primary emphases of the Connection Science Journal, which include:
1) Replicability of Results: results of simulation models
should be reported in such a way that they are repeatable by
any competent scientist in another laboratory.
The journal will be sympathetic to the problems that
replicability poses for large complex artificial intelligence
programs.
2) Interdisciplinary research: the journal is by nature
multidisciplinary and will accept articles from a variety of
disciplines such as psychology, cognitive science, computer
science, language and linguistics, artificial intelligence,
biology, neuroscience, physics, engineering and philosophy.
It will particularly welcome papers which deal with issues
from two or more subject areas (e.g. vision and language).
Papers submitted to the special issue will also be considered for
publication in later editions of the journal. All papers will be refereed.
The expected publication date for the special issue is Volume 2(1), March,
1990.
DEADLINES:
Submission of papers June 15, 1989
Reviews/decisions September 30, 1989
Final rewrites due December 15, 1989.
Authors should send four copies of the article to:
Prof. James A. Hendler
Associate Editor, Connection Science
Dept. of Computer Science
University of Maryland
College Park, MD 20742
USA
Those interested in submitting articles are welcome to contact the editor
via e-mail (hendler@brillig.umd.edu - US Arpa or CSnet) or in writing at the
above address.
------------------------------
Subject: Subcognition and the Limits of the Turing Test
From: Bob French <french@cogsci.indiana.edu>
Date: Tue, 23 May 89 11:41:17 -0500
A pre-print of an article on subcognition and the Turing Test
to appear in MIND:
"Subcognition and the Limits of the Turing Test"
Robert M. French
Center for Research on Concepts and Cognition
Indiana University
Ostensibly a philosophy paper (to appear in MIND at the end
of this year), this article is of special interest to connectionists.
It argues that:
i) as a REAL test for intelligence, the Turing Test is
inappropriate in spite of arguments by some philosophers to the
contrary;
ii) only machines that have experienced the world as we
have could pass the Test. This means that such machines would
have to learn about the world in approximately the same way that we
humans have -- by falling off bicycles, crossing streets, smelling
sewage, tasting strawberries, etc. This is not a statement about
the inherent inability of a computer to achieve intelligence; it
is rather a comment about the use of the Turing Test as a means of
testing for that intelligence;
iii) (especially for connectionists) the physical,
subcognitive and cognitive levels are INEXTRICABLY interwoven and it
is impossible to tease them apart. This is ultimately the reason why
no machine that had not experienced the world as we had could ever
pass the Turing Test.
The heart of the discussion of these issues revolves around
humans' use of a vast associative network of concepts that operates,
for the most part, below cognitive perceptual thresholds and that
has been acquired over a lifetime of experience with the world. The
Turing Test tests for the presence or absence of this HUMAN
associative concept network, which explains why it would be so
difficult -- although not theoretically impossible -- for any machine
to pass the Test. This paper shows how a clever interrogator could
always "peek behind the screen" to unmask a computer that had not
experienced the world as we had by exploiting human abilities based
on the use of this vast associative concept network, for example, our
abilities to analogize and to categorize.
This paper is short and non-technical but nevertheless focuses
on issues that are of significant philosophical importance to AI
researchers, and to connectionists in particular.
If you would like a copy, please send your name and address
to:
Helga Keller
C.R.C.C.
510 North Fess
Bloomington, Indiana 47401
or send an e-mail request to helga@cogsci.indiana.edu
-- Bob French
french@cogsci.indiana.edu
------------------------------
Subject: Technical Report Available
From: <THEPCAP%SELDC52.BITNET@VMA.CC.CMU.EDU>
Date: Wed, 17 May 89 13:00:00 +0200
LU TP 89-1
A NEW METHOD FOR MAPPING OPTIMIZATION PROBLEMS ONTO NEURAL NETWORKS
Carsten Peterson and Bo Soderberg
Department of Theoretical Physics, University of Lund
Solvegatan 14A, S-22362 Lund, Sweden
Submitted to International Journal of Neural Systems
ABSTRACT:
A novel modified method for obtaining approximate solutions to difficult
optimization problems within the neural network paradigm is presented. We
consider the graph partition and the travelling salesman problems. The key
new ingredient is a reduction of solution space by one dimension by using
graded neurons, thereby avoiding the destructive redundancy that has plagued
these problems when using straightforward neural network techniques. This
approach maps the problems onto Potts glass rather than spin glass theories.
A systematic prescription is given for estimating the phase transition
temperatures in advance, which facilitates the choice of optimal parameters.
This analysis, which is performed for both serial and synchronous updating
of the mean field theory equations, makes it possible to consistently avoid
chaotic behaviour.
When exploring this new technique numerically we find the results very
encouraging; the quality of the solutions is on a par with those obtained
by using optimally tuned simulated annealing heuristics. Our numerical
study, which extends to 200-city problems, exhibits an impressive level of
parameter insensitivity.
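For readers unfamiliar with graded (Potts) neurons, the idea is that each
problem variable is represented by a single K-state unit updated by a
softmax-style mean field rule, so the one-of-K constraint holds by
construction instead of being enforced by a penalty term; this is the
reduction of the solution space the abstract refers to. The toy
graph-partition sketch below illustrates only that encoding. The energy
terms, cooling schedule, and parameters are assumptions; the report's actual
prescription estimates the transition temperature analytically rather than
guessing a schedule.

# Toy Potts mean-field sketch for partitioning a random graph into K parts.
import numpy as np

rng = np.random.default_rng(2)
N, K, T = 20, 2, 1.0                     # nodes, parts, starting temperature
W = rng.random((N, N)) < 0.2             # random undirected adjacency matrix
W = np.triu(W, 1)
W = (W | W.T).astype(float)
alpha = 1.0                              # strength of the balance term

# v[i, a] = mean-field probability that node i sits in part a: one K-state
# graded neuron per node instead of K binary neurons per node.
v = rng.random((N, K))
v /= v.sum(axis=1, keepdims=True)

for sweep in range(200):                 # serial updates under slow cooling
    for i in range(N):
        # local field: the cut term pulls node i toward parts holding its
        # neighbours; the balance term pushes it away from crowded parts
        u = W[i] @ v - alpha * (v.sum(axis=0) - v[i])
        e = np.exp((u - u.max()) / T)    # softmax (Potts) mean-field update
        v[i] = e / e.sum()
    T *= 0.98                            # assumed geometric cooling schedule

parts = v.argmax(axis=1)
cut = sum(W[i, j] for i in range(N) for j in range(i + 1, N)
          if parts[i] != parts[j])
print("partition sizes:", np.bincount(parts, minlength=K), "cut edges:", int(cut))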
For copies of this report send a request to THEPCAP@SELDC52 [don't forget
to give your mailing address].
------------------------------
Subject: TR: CONNECTIONISM AND COMPOSITIONAL SEMANTICS
From: Dave.Touretzky@B.GP.CS.CMU.EDU
Date: Wed, 31 May 89 21:53:16 -0400
CONNECTIONISM AND COMPOSITIONAL SEMANTICS
David S. Touretzky
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3890
Technical report CMU-CS-89-147
May 1989
Abstract:
Quite a few interesting experiments have been done applying neural networks
to natural language tasks. Without detracting from the value of these early
investigations, this paper argues that current neural network architectures
are too weak to solve anything but toy language problems. Their downfall is
the need for ``dynamic inference,'' in which several pieces of information
not previously seen together are dynamically combined to derive the meaning
of a novel input. The first half of the paper defines a hierarchy of
classes of connectionist models, from categorizers and associative memories
to pattern transformers and dynamic inferencers. Some well-known
connectionist models that deal with natural language are shown to be either
categorizers or pattern transformers. The second half examines in detail a
particular natural language problem: prepositional phrase attachment.
Attaching a PP to an NP changes its meaning, thereby influencing other
attachments. So PP attachment requires compositional semantics, and
compositionality in non-toy domains requires dynamic inference. Mere
pattern transformers cannot learn the PP attachment task without an
exponential training set. Connectionist-style computation still has many
valuable ideas to offer, so this is not an indictment of connectionism's
potential. It is an argument for a more sophisticated and more symbolic
connectionist approach to language.
An earlier version of this paper appeared in the Proceedings of the 1988
Connectionist Models Summer School.
================
TO ORDER COPIES of this tech report: send electronic mail to
copetas@cs.cmu.edu, or write the School of Computer Science at the address
above.
------------------------------
Subject: TR announcement
From: eric@mcc.com (Eric Hartman)
Date: Wed, 17 May 89 14:29:47 -0500
The following technical report is now available. Requests may be sent to
eric@mcc.com or via physical mail to the MCC address below.
---------------------------------------------------------------
MCC Technical Report Number:
ACT-ST-146-89
Optoelectronic Implementation of Multi-Layer Neural Networks
in a Single Photorefractive Crystal
Carsten Peterson*, Stephen Redfield,
James D. Keeler, and Eric Hartman
Microelectronics and Computer Technology Corporation
3500 W. Balcones Center Dr.
Austin, TX 78759-6509
Abstract:
We present a novel, versatile optoelectronic neural network architecture for
implementing supervised learning algorithms in photorefractive materials.
The system is based on spatial multiplexing rather than the more commonly
used angular multiplexing of the interconnect gratings. This simple,
single-crystal architecture implements a variety of multi-layer supervised
learning algorithms including mean-field-theory, back-propagation, and
Marr-Albus-Kanerva style algorithms. Extensive simulations show how beam
depletion, rescattering, absorption, and decay effects of the crystal are
compensated for by suitably modified supervised learning algorithms.
*Present Address: Department of Theoretical Physics,
University of Lund, Solvegatan 14A, S-22362 Lund, Sweden.
------------------------------
End of Neuron Digest
*********************