AIList Digest Thursday, 30 Oct 1986 Volume 4 : Issue 240
Today's Topics:
Queries - PD Parser for Simple English & Public Domain Prolog &
Faculty Compensation & Hierarchical Constraints &
Model-Based Reasoning & Monotonic Reasoning,
Neural Networks - Simulation & Nature of Computation
----------------------------------------------------------------------
Date: 26 Oct 86 19:10:57 GMT
From: trwrb!orion!heins@ucbvax.Berkeley.EDU (Michael Heins)
Subject: Seeking PD parser for simple English sentences.
I am looking for public domain software which I can use to help me parse
simple English sentences into some kind of standardized representation.
I guess what I am looking for would be a kind of sentence diagrammer
which would not have to have any deep knowledge of the meanings of the
nouns, verbs, adjectives, etc.
The application is for a command interface to a computer, for use by
novice users. C routines would be ideal. Also, references to published
algorithms would be useful. Thanks in advance.
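Not the public-domain package being sought, but a minimal sketch of what such a diagrammer's core might look like: a recursive-descent parser (in Python rather than the requested C, with made-up word lists) for commands of the form VERB [DET] NOUN [PREP [DET] NOUN]. Everything here — the grammar, the word sets, the output shape — is a hypothetical illustration.

```python
# Illustrative sketch: a tiny recursive-descent parser for simple
# imperative commands, e.g. "copy the file to disk".  The word lists
# are placeholders; any word not in them is treated as a noun.

VERBS = {"copy", "delete", "print", "move"}
DETS = {"a", "an", "the"}
PREPS = {"to", "from", "onto", "in"}

def parse_command(sentence):
    """Return a nested-tuple 'diagram' of a simple imperative
    sentence, or raise ValueError if it does not fit the grammar."""
    words = sentence.lower().strip(".").split()
    pos = 0

    def peek():
        return words[pos] if pos < len(words) else None

    def take(wordset, label):
        nonlocal pos
        if peek() in wordset:
            pos += 1
            return words[pos - 1]
        raise ValueError(f"expected {label}, got {peek()!r}")

    def noun_phrase():
        nonlocal pos
        if peek() in DETS:
            pos += 1                    # determiner is optional
        if peek() is None:
            raise ValueError("expected a noun")
        noun = words[pos]               # no lexicon: accept any word
        pos += 1
        return ("NP", noun)

    tree = [("V", take(VERBS, "a verb")), noun_phrase()]
    if peek() in PREPS:
        tree.append(("PP", take(PREPS, "a preposition"), noun_phrase()))
    if pos != len(words):
        raise ValueError(f"trailing words: {words[pos:]}")
    return tuple(tree)

print(parse_command("copy the file to disk"))
# (('V', 'copy'), ('NP', 'file'), ('PP', 'to', ('NP', 'disk')))
```

A real command interface would want a noun lexicon and error recovery, but the flat tuple output shows the kind of standardized representation the query describes.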
--
...!hplabs!sdcrdcf!trwrb!orion!heins
We are a way for the universe to know itself. -- Carl Sagan
------------------------------
Date: 27 Oct 86 14:26:33 GMT
From: ihnp4!drutx!mtuxo!mtune!mtunf!mtx5c!mtx5d!mtx5a!mtx5e!mtx5w!drv@
ucbvax.Berkeley.EDU
Subject: NEED PUBLIC DOMAIN PROLOG
A friend of mine needs a copy of a public domain
Prolog that will run on a VAX 11/780 under Unix.
If such a program exists, please contact me and
I will help make arrangements to get it sent to
him.
Dennis R. Vogel
AT&T Information Systems
Middletown, NJ
(201) 957-4951
------------------------------
Date: Tue, 28 Oct 86 09:31 EST
From: Norm Badler <Badler@cis.upenn.edu>
Subject: request for information
If you are a faculty member or a researcher at a University, I would like
to have a BRIEF response to the following question:
Do you have an "incentive" or "reward" or "benefit" plan that returns
to you some amount of your (external) research money for your own
University discretionary use?
If the answer is NO, that information would be useful. If YES, then a brief
account would be appreciated. If you don't want to type much, send me
your phone number and I will call you for the information.
Thanks very much!
Norm Badler
Badler@cis.upenn.edu
(215)898-5862
------------------------------
Date: Mon, 27 Oct 86 11:47:52 EST
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: seeking info on multi-linear partial orderings
I recently had a paper rejected from a conference. It discussed, among other
things, using a set of hierarchical networks for constraint propagation (i.e.,
propagating the information through several levels of network simultaneously).
One of the reviewers said "they apply a fairly standard AI technique..."
and I wonder about this. I thought I was up on the various constraint
propagation techniques, but does anyone have a pointer to work (preferably in
a qualitative reasoning system) that discusses the use of multi-layer
constraint propagation?
thanks much
Jim Hendler
Ass't Professor
U of Md
College Park, Md. 20742
[I would check into the MIT work (don't have the reference handy, but
some of it's in the two-volume AI: An MIT Perspective) on modeling
electronic circuits. All but the first papers used multiple views of
subsystems to permit propagation of constraints at different granularities.
Subsequent work on electronic fault diagnosis (e.g., Randy Davis) goes
even further. Other work in "pyramidal" parsing (speech, images, line
drawings) has grown from the Hearsay blackboard architecture. -- KIL]
------------------------------
Date: 27 Oct 86 14:13 EST
From: SHAFFER%SCOVCB.decnet@ge-crd.arpa
Subject: Model-based Reasoning
Hello:
I am looking for articles and books which describe the
theory of Model-Based Reasoning (MBR). Here at GE we
have an interest in MBR for our next generation of KEE-based
ESEs. I will publish a summary of my findings sometime
in the future. I would also be interested in any
topics related to MBR and its uses.
Thanks,
Earl Shaffer
GE - VFSC - Bld 100
Po Box 8555
Phila, PA 19101
------------------------------
Date: 28 Oct 86 09:52 EST
From: SHAFFER%SCOVCB.decnet@ge-crd.arpa
Subject: Monotonic Reasoning
I am somewhat new to AI and I am confused about the
definition of "non-monotonic" reasoning, as used in the
documentation for Inference's ART system. It says that
certain features allow for non-monotonic reasoning, but
it does not say what that type of reasoning is, or how it
differs from monotonic reasoning, if there is such a
thing.
Earl Shaffer
GE - VFSC
Po box 8555
Phila , Pa 19101
[Monotonic reasoning is a process of logical inference using only
true axioms or statements. Nonmonotonic reasoning uses statements
believed to be true, but which may later prove to be false. It is
therefore necessary to keep track of all chains of support for each
conclusion so that the conclusion can be revoked if its basis
statements are revoked. Default reasoning is a common form of
nonmonotonic reasoning, and truth maintenance is the bookkeeping
machinery that supports it. -- KIL]
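The chains-of-support bookkeeping in the note above can be sketched in a few lines of Python. This is a hypothetical toy, not any particular truth-maintenance system; the class name and the classic bird/fly default example are illustrative.

```python
# Minimal sketch of support tracking: each derived conclusion records
# the assumptions it rests on, so retracting an assumption revokes
# every conclusion supported by it.

class SupportDB:
    def __init__(self):
        self.support = {}            # fact -> set of supporting assumptions

    def assume(self, fact):
        self.support[fact] = set()   # an assumption has no further basis

    def derive(self, conclusion, *premises):
        # The conclusion inherits every premise and the premises' own support.
        basis = set()
        for p in premises:
            basis |= self.support[p] | {p}
        self.support[conclusion] = basis

    def retract(self, assumption):
        # Revoke the assumption and everything resting on it.
        for fact in list(self.support):
            if fact == assumption or assumption in self.support[fact]:
                del self.support[fact]

    def holds(self, fact):
        return fact in self.support

db = SupportDB()
db.assume("tweety-is-a-bird")
db.derive("tweety-flies", "tweety-is-a-bird")   # default inference
db.retract("tweety-is-a-bird")                  # later found to be false
print(db.holds("tweety-flies"))                 # False
```

Monotonic reasoning never needs the `retract` step: once derived, a conclusion stays derived.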
------------------------------
Date: 28 Oct 86 21:05:49 GMT
From: uwslh!lishka@rsch.wisc.edu (a)
Subject: Re: simulating a neural network
I just read an interesting short blurb in the most recent BYTE issue
(the one with the graphics board on the cover)...it was in Bytelines or
something. Now, since I skimmed it, my info is probably a little sketchy,
but here's about what it said:
Apparently Bell Labs (I think) has been experimenting with neural
network-like chips, with resistors replacing bytes (I guess). They started
out with about 22 'neurons' and have gotten up to 256 or 512 (can't
remember which) 'neurons' on one chip now. Apparently these 'neurons' are
supposed to run much faster than human neurons...it'll be interesting to see
how all this works out in the end.
I figured that anyone interested in the neural network program might
be interested in the article...check Byte for actual info. Also, if anyone
knows more about this experiment, I would be interested, so please mail me
any information at the below address.
--
Chris Lishka                     lishka@uwslh.uucp
Wisconsin State Lab of Hygiene   lishka%uwslh.uucp@rsch.wisc.edu
                                 {seismo, harvard, topaz, ...}!uwvax!uwslh!lishka
------------------------------
Date: 27 Oct 86 19:50:58 GMT
From: yippee.dec.com!glantz@decwrl.dec.com
Subject: Re: Simulating neural networks
*********************
Another good reference is:
Martin, R., Lukton, A. and Salthe, S.N., "Simulation of
Cognitive Maps, Concept Hierarchies, Learning by Simile, and
Similarity Assessment in Homogeneous Neural Nets," Proceedings
of the 1984 Summer Computer Simulation Conference, Society for
Computer Simulation, vol. 2, 808-821.
In this paper, Martin discusses (among other things) simulating
the effects of neurotransmitters and inhibitors, which can
generate goal-seeking behavior, a capacity closely linked to the
ability to learn.
Mike Glantz
Digital Equipment Centre Technique Europe
BP 29 Sophia Antipolis
06561 Valbonne CEDEX
France
My employer is not aware of this message.
*********************
------------------------------
Date: 27 Oct 86 17:36:23 GMT
From: zeus!berke@locus.ucla.edu (Peter Berke)
Subject: Glib "computation"
In article <1249@megaron.UUCP> wendt@megaron.UUCP writes:
>Anyone interested in neural modelling should know about the Parallel
>Distributed Processing pair of books from MIT Press. They're
>expensive (around $60 for the pair) but very good and quite recent.
>
>A quote:
>
>Relaxation is the dominant mode of computation. Although there
>is no specific piece of neuroscience which compels the view that
>brain-style computation involves relaxation, all of the features
>we have just discussed have led us to believe that the primary
>mode of computation in the brain is best understood as a kind of
>relaxation system in which the computation proceeds by iteratively
>seeking to satisfy a large number of weak constraints. Thus,
>rather than playing the role of wires in an electric circuit, we
>see the connections as representing constraints on the co-occurrence
>of pairs of units. The system should be thought of more as "settling
>into a solution" than "calculating a solution". Again, this is an
>important perspective change which comes out of an interaction of
>our understanding of how the brain must work and what kinds of processes
>seem to be required to account for desired behavior.
>
>(Rumelhart & McClelland, Chapter 4)
>
Isn't 'computation' a technical term? Do R&Mc prove that PDP is
equivalent to computation? Would Turing agree that "settling into
a solution" is computation? Some people have tried to show that
symbols and symbol processing can be represented in neural nets,
but I don't think anyone has proved anything about the problems
they purportedly "solve," at least not to the extent that Turing
did for computers in 1936, or Church in the same year for lambda
calculus.
Or are R&Mc using 'computing' to mean 'any sort of machination whatever'?
And is that a good idea?
Church's Thesis, that computing and lambda-conversion (or whatever he
calls it) are both equivalent to what we might naturally consider
calculable, could be extended to say that neural nets "settle" into
the same solutions for the same class of problems. Or, one could
maintain, as neural netters tend to implicitly, that "settling" into
solutions IS what we might naturally consider calculable, rather than
being merely equivalent to it. These are different options.
The first adds "neural nets" to the class of formalisms which can
express solutions equivalent to each other in "power," and is thus
a variant on Church's thesis. The second actually refutes Church's
Thesis, by saying this "settling" process is clearly defined and
that it can realize a different (or non-comparable) class of problems,
in which case computation would not be (provably) equivalent to it.
Of course, if we could show BOTH that:
(1) "settling" is equivalent to "computing" as formally defined by Turing,
and (2) that "settling" IS how brains work,
then we'd have a PROOF of Church's Thesis.
Until that point it seems a bit misleading or misled to refer to
"settling" as "computation."
Peter Berke
------------------------------
End of AIList Digest
********************