AIList Digest Friday, 19 Sep 1986 Volume 4 : Issue 185
Today's Topics:
Query - Connectionist References,
Cognitive Psychology - Connectionist Learning,
Review - Notes on AAAI '86
----------------------------------------------------------------------
Date: 21 Aug 86 12:11:25 GMT
From: lepine@istg.dec.com@decwrl.dec.com (Normand Lepine 225-6715)
Subject: Connectionist references
I am interested in learning about the connectionist model and would appreciate
any pointers to papers, texts, etc. on the subject. Please mail references to
me and I will compile and post a bibliography to the net.
Thanks for your help,
Normand Lepine
uucp: ...!decwrl!cons.dec.com!lepine
ARPA: lepine@cons.dec.com
lepine%cons.dec.com@decwrl.dec.com (without domain servers)
------------------------------
Date: 22 Aug 86 12:04:30 GMT
From: mcvax!ukc!reading!brueer!ckennedy@seismo.css.gov (C.M.Kennedy )
Subject: Re: Connectionist Expert System Learning
The following is a list of the useful replies received so far:
Date: Wed, 30 Jul 86 8:56:08 BST
From: Ronan Reilly <rreilly%euroies@reading.ac.uk>
Sender: rreilly%euroies@reading.ac.uk
Subject: Re: Connectionist Approaches To Expert System Learning
Hi,
What you're looking for, effectively, are attempts to implement
production systems within a connectionist framework. Researchers
are making progress, slowly but surely, in that direction. The
most recent paper I've come across in the area is:
Touretzky, D. S. & Hinton, G. E. (1985). Symbols among the neurons:
details of a connectionist inference architecture. In
Proceedings IJCAI '85, Los Angeles.
I've a copy of this somewhere, so if the IJCAI proceedings don't come
to hand, I'll post it on to you.
There are two books which are due to be published this year, and they
are set to be the standard reference books for the area:
Rumelhart, D. E. & McClelland, J. L. (1986). Parallel distributed
processing: Explorations in the microstructure of cognition.
Vol. 1: Foundations. Cambridge, MA: Bradford Books.
Rumelhart, D. E. & McClelland, J. L. (1986). Parallel distributed
processing: Explorations in the microstructure of cognition.
Vol. 2: Applications. Cambridge, MA: Bradford Books.
Another good source of information on the localist school of
connectionism is the University of Rochester technical report series.
They have one report which lists all their recent connectionist
reports. The address to write to is:
Computer Science Department
The University of Rochester
Rochester, NY 14627
USA
I've implemented a version of the Rochester ISCON simulator in
Salford Lisp on our Prime 750. The simulator is a flexible system
for building and testing connectionist models. You're welcome to
a copy of it. Salford Lisp is a Maclisp variant.
Regards,
Ronan
...mcvax!euroies!rreilly
Date: Sat, 2 Aug 86 09:33:46 PDT
From: Mike Mozer <mozer%ics.ucsd.edu@reading.ac.uk>
Subject: Re: Connectionist Approaches To Expert System Learning
I've just finished a connectionist expert system paper, which I'd be glad
to send you if you're interested (need an address, though).
Here's the abstract:
RAMBOT: A connectionist expert system that learns by example
Expert systems seem to be quite the rage in Artificial Intelligence, but
getting expert knowledge into these systems is a difficult problem. One
solution would be to endow the systems with powerful learning procedures
which could discover appropriate behaviors by observing an expert in action.
A promising source of such learning procedures
can be found in recent work on connectionist networks, that is, massively
parallel networks of simple processing elements. In this paper, I discuss a
Connectionist expert system that learns to play a simple video game by
observing a human player. The game, Robots, is played on a two-dimensional
board containing the player and a number of computer-controlled robots. The
object of the game is for the player to move around the board in a
manner that will force all of the robots to collide with one another
before any robot is able to catch the player. The connectionist system
learns to associate observed situations on the board with observed
moves. It is capable not only of replicating the performance of the
human player, but of learning generalizations that apply to novel
situations.
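[To make the abstract concrete: below is a minimal sketch of
situation-to-move learning in Python. The binary board encoding, the
one-layer network, and the delta-rule training are assumptions made
for illustration; the paper describes the actual architecture.]

import random

N_FEATURES = 16   # assumed size of the board-situation encoding
N_MOVES = 8       # assumed move set: eight compass directions
RATE = 0.1        # learning rate

# weights[m][f] connects input feature f to move unit m.
weights = [[0.0] * N_FEATURES for _ in range(N_MOVES)]

def net_input(situation):
    """Activation of each move unit for a given situation vector."""
    return [sum(w * x for w, x in zip(row, situation)) for row in weights]

def train(situation, expert_move):
    """Delta rule: push the observed move toward 1, the rest toward 0."""
    acts = net_input(situation)
    for m in range(N_MOVES):
        err = (1.0 if m == expert_move else 0.0) - acts[m]
        for f in range(N_FEATURES):
            weights[m][f] += RATE * err * situation[f]

def choose(situation):
    """Replicate the player: take the most active move unit."""
    acts = net_input(situation)
    return acts.index(max(acts))

# Stand-in observations of a human player (random data here).
random.seed(1)
games = [([random.randint(0, 1) for _ in range(N_FEATURES)],
          random.randrange(N_MOVES)) for _ in range(40)]
for _ in range(25):
    for s, m in games:
        train(s, m)
print("network move:", choose(games[0][0]), "expert move:", games[0][1])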
Mike Mozer
mozer@nprdc.arpa
Date: Fri, 8 Aug 86 18:53:57 edt
From: Tom Frauenhofer <tfra%ur-tut@reading.ac.uk>
Subject: Re: Connectionist Approaches To Expert System Learning
Organization: U. of Rochester Computing Center
Catriona,
I am (slightly) familiar with a thesis by Gary Cottrell of the U of R here
that dealt with a connectionist approach to language understanding. I believe
he worked closely with a psychologist to figure out how people understand
language and words, and then tried to model the behavior in a connectionist
framework. You should be able to get a copy of the thesis from the Computer
Science Department here. It's not expert systems, but it is fascinating.
- Tom Frauenhofer
...!seismo!rochester!ur-tut!tfra
Date: Fri, 8 Aug 86 11:38:43 CDT
From: Pete Sandon <sandon%ai.wisc.edu@reading.ac.uk>
Subject: Connectionist Learning
Hi,
You may have already received this information, but I will pass it
along anyway. Steve Gallant, at Northeastern University, has done some
work on using a modified perceptron learning algorithm for expert
system knowledge acquisition. He has written a number of tech reports
in the last few years. His email address is: sig@northeastern.csnet.
His postal address is: Steve Gallant
College of Computer Science
Northeastern University
Boston, MA 02115
--Pete Sandon
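[Gallant's modification is not described in this message, so for
concreteness here is only the unmodified perceptron rule, applied to
learning a yes/no "expert rule" from classified cases; the case data
below are invented.]

def perceptron(cases, epochs=100):
    """cases: list of (feature vector, expert verdict +1 or -1)."""
    n = len(cases[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in cases:
            out = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if out != label:            # misclassified: adjust weights
                w = [wi + label * xi for wi, xi in zip(w, x)]
                b += label
    return w, b

# Each case: (symptoms present/absent, expert's yes/no diagnosis).
w, b = perceptron([([1, 0, 1], 1), ([0, 1, 0], -1),
                   ([1, 1, 1], 1), ([0, 0, 0], -1)])
print(w, b)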
------------------------------
Date: 17 Aug 86 22:08:30 GMT
From: ix133@sdcc6.ucsd.EDU (Catherine L. Harris)
Subject: Q: How can structure be learned? A: PDP
[Excerpted from the NL-KR Digest by Laws@SRI-STRIPE.]
[Forwarded from USENET net.nlang]
[... The following portion discusses connectionist learning. -- KIL]
One Alternative to the Endogenous Structure View
Jeffrey Goldberg says (in an immediately preceding article) [in net.nlang -B],
> Chomsky has set himself up asking the question: "How can children,
> given a finite amount of input, learn a language?" The only answer
> could be that children are equipped with a large portion of language to
> begin with. If something is innate then it will show up in all
> languages (a universal), and if something is unlearnable then it, too,
> must be innate (and therefore universal).
The important idea behind the nativist and language-modularity
hypotheses is that language structure is too complex, time is too
short, and the form of the input data (i.e., parents' speech to
children) is too degenerate for the target grammar to be learned.
Several people (e.g., Steven Pinker of MIT) have bolstered this
argument with formal "learnability" analyses: you make an estimate of
the power of the learning mechanism, make assumptions about factors in
the learning situation (e.g., no negative feedback) and then
mathematically prove that a given grammar (a transformational grammar,
or a lexical functional grammar, or whatever) is unlearnable.
My problem with these analyses -- and with nativist assumptions in
general -- is that they aren't considering a type of learning mechanism
that may be powerful enough to learn something as complex as a grammar,
even under the supposedly impoverished learning environment a child
encounters. The mechanism is what Rumelhart and McClelland (of UCSD)
call the PDP approach (see their just-released Parallel
Distributed Processing: Explorations in the Microstructure of
Cognition, from MIT Press).
The idea behind PDP (and other connectionist approaches to explaining
intelligent behavior) is that inputs from hundreds/thousands/millions
of information sources jointly combine to specify a result. A
rule-governed system is, according to this approach, best represented
not by explicit rules (e.g., a set of productions or rewrite rules) but
by a large network of units: input units, internal units, and output
units. Given any set of inputs, the whole system iteratively "relaxes"
to a stable configuration (e.g., the soap bubble relaxing to
a parabola, our visual system finding one stable interpretation of
a visual illusion).
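[The relaxation idea in miniature: a toy constraint-satisfaction
network in the Hopfield style, with symmetric weights, threshold
units, and asynchronous updates. The four-unit size and particular
weights are arbitrary assumptions for illustration.]

import random

WEIGHTS = {(0, 1): 1.0, (0, 2): -1.0, (0, 3): 1.0,
           (1, 2): 1.0, (1, 3): -1.0, (2, 3): 1.0}

def w(i, j):
    """Symmetric connection strength between units i and j."""
    return WEIGHTS.get((min(i, j), max(i, j)), 0.0)

def relax(state, sweeps=20):
    """Update units one at a time until none wants to change --
    a stable configuration, the network's 'interpretation'."""
    n = len(state)
    for _ in range(sweeps):
        changed = False
        for i in random.sample(range(n), n):
            net = sum(w(i, j) * state[j] for j in range(n) if j != i)
            s = 1 if net >= 0 else -1
            if s != state[i]:
                state[i], changed = s, True
        if not changed:
            break
    return state

random.seed(0)
print(relax([random.choice([-1, 1]) for _ in range(4)]))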
While many/most people accept the idea that constraint-satisfaction
networks may underlie phenomena like visual perception, they are more
reluctant to see their application to language processing or language
acquisition. There are currently (in the Rumelhart and McClelland
work -- and I'm sure you cognitive science buffs have already rushed
to your bookstore/library!) two convincing PDP models on language,
one on sentence processing (case role assignment) and the other on
children's acquisition of past-tense morphology. While no one has yet
tried to use this approach to explain syntactic acquisition, I see this
as the next step.
For people interested in hard empirical, cross-linguistic data that
supports a connectionist, non-nativist, approach to acquisition, I
recommend *Mechanisms of Language Acquisition*, Brian MacWhinney, Ed.,
in press.
I realize I rushed so fast over the explanation of what PDP is that
people who haven't heard about it before may be lost. I'd like to see
a discussion on this -- perhaps other people can talk about the brand
of connectionism they're encountering at their school/research/job and
what they think its benefits and limitations are -- in
explaining the psycholinguistic facts or just in general.
Cathy Harris "Sweating it out on the reaction time floor -- what,
when you could be in that ole armchair theo-- ? Never mind;
it's only til 1990!"
------------------------------
Date: 21 Aug 86 11:28:53 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.Berkeley.EDU (B.KORT)
Subject: Notes on AAAI '86
Notes on AAAI
Barry Kort
Abstract
The Fifth Annual AAAI Conference on Artificial Intelligence
was held August 11-15 at the Philadelphia Civic Center.
These notes record the author's personal impressions of the
state of AI, and the business prospects for AI technology.
The views expressed are those of the author and do not
necessarily reflect the perspective or intentions of other
individuals or organizations.
* * *
The American Association for Artificial Intelligence held
its Fifth Annual Conference during the week of August 11,
1986, at the Philadelphia Civic Center.
Approximately 5000 attendees were treated to the latest
results of this fast-growing field. An extensive program of
tutorials enabled the naive beginner and technical
professional alike to rise to a common baseline of
understanding. Research and Science Sessions concentrated on
the theoretical underpinnings, while the complementary
Engineering Sessions focused on reduction of theory to
practice.
Dr. Herbert Schorr of IBM delivered the Keynote Address.
His message was simple and straightforward: AI is here
today, it's real, and it works. The exhibit floor was a sea
of high-end workstations, running flashy applications
ranging from CAT scan imagery to automated fault diagnosis,
to automated reasoning, to 3-D scene animation, to
iconographic model-based reasoning. Symbolics, TI, Xerox,
Digital, HP, Sun, and other vendors exhibited state of the
art hardware, while Intellicorp, Teknowledge, Inference,
Carnegie-Mellon Group, and other software houses offered
knowledge engineering power tools that make short work of
automated reasoning.
Knowledge representation schemata include the ubiquitous tree,
as well as animated iconographic models of dynamic systems.
Inductive and deductive reasoning and goal-directed logic
appear in the guise of forward and backward chaining
algorithms which seek the desired chain of nodes linking
premise to predicted conclusion or hypothesis to observed
symptoms. Such schemata are especially well adapted to
diagnosis of ills, be it human ailment or machine
malfunction.
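[A bare-bones illustration of the forward-chaining half of this:
rules fire whenever all their premises hold, growing the set of
conclusions until nothing new follows. The rules and facts are
invented for the example.]

RULES = [({"fever", "cough"}, "flu"),
         ({"flu", "fatigue"}, "rest_advised")]

def forward_chain(facts, rules):
    """Fire rules until quiescence; each firing adds a node to the
    chain linking premises to conclusions."""
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}, RULES))

[Backward chaining would instead start from a goal such as
"rest_advised" and work back toward the observed symptoms.]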
Natural Language understanding remains a hard problem, due
to the inscrutable ambiguity of most human-generated
utterances. Nevertheless, silicon can diagram sentences as
well as a precocious fifth grader. In limited-domain
vocabularies, the semantic content of such diagrammatic
representations can be reliably extracted.
Robotics and vision remain challenging fields, but advances
in parallel architectures may clear the way for notable
progress in scene recognition.
Qualitative reasoning, model-based reasoning, and reasoning
by analogy still require substantial human guidance, perhaps
because of the difficulty of implementing the interdomain
pattern recognition which humans know as analogy, metaphor,
and parable.
Interesting philosophical questions abound when AI moves
into the fields of automated advisors and agents. Such
systems require the introduction of Value Systems, which may
or may not conflict with individual preferences for
benevolent ethics or hard-nosed business pragmatics. One
speaker chose the provocative title, "Can Machines Be
Intelligent If They Don't Give a Damn?" We may be on the
threshold of Artificial Intelligence, but we have a long way
to go before we arrive at Artificial Wisdom. Nevertheless,
some progress is being made in reducing to practice such
esoteric concepts as Theories of Equity and Justice, leading
to the possibility of unbiased Jurisprudence.
AI goes hand in hand with Theories of Learning and
Instruction, and the field appears to be paying dividends in
the art and practice of knowledge exchange, following the
strategy first suggested by Socrates some 2500 years ago.
The dialogue format abounds, and mixed initiative dialogues
seem to capture the essence of mutual teaching and
mirroring. Perhaps sanity can be turned into an art form
and a science.
Belief Revision and Truth Maintenance enable systems to
unravel confusion caused by the injection of mutually
inconsistent inputs. Nobody's fool, these systems let the
user know that there's a fib in there somewhere.
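[A minimal illustration of that flagging behavior; a real truth
maintenance system also tracks justifications and retracts beliefs,
which this sketch does not attempt.]

def check(assertions):
    """assertions: (proposition, truth value) pairs in arrival order."""
    beliefs = {}
    for prop, value in assertions:
        if prop in beliefs and beliefs[prop] != value:
            return "fib detected: '%s' asserted both ways" % prop
        beliefs[prop] = value
    return "consistent"

print(check([("valve_open", True), ("pump_on", True),
             ("valve_open", False)]))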
Psychology of computers becomes an issue, and the Silicon
Syndrome of Neuroses can be detected whenever the machines
are not taught how to think straight. Machines are already
sapient. Soon they will acquire sentience, and maybe even
free will (nothing more than a random number generator
coupled with a value system). Perhaps by the end of the
Millennium (just 14 years away), the planet will see its
first Artificial Sentient Being. Perhaps von Neumann knew
what he was talking about when he wrote his cryptic volume
entitled Theory of Self-Reproducing Automata.
There were no Cybernauts in Philadelphia this year, but many
of the piece parts were in evidence. Perhaps it is just a
matter of time until the Golem takes its first step.
In the meantime, we have entered the era of the Competent
System, somewhat short on world-class expertise, but able to
hold its own in today's corporate culture. It learns about
as fast as its human counterpart, and is infinitely
clonable.
Once upon a time it was felt that machines should work and
people should think. Now that machines can think, perhaps
people can take more time to enjoy the state of being called
Life.
* * *
Lincroft, NJ
August 17, 1986
------------------------------
End of AIList Digest
********************