AIList Digest Friday, 30 May 1986 Volume 4 : Issue 134
Today's Topics:
Query - MIT Research on Symbolic/Numeric Processing,
AI Tools - Functional Programming and AI & Common LISP Style,
References - Neural Networks & Lenat's AM,
Linguistics - 'Xerox' vs. 'xerox',
Psychology - Doing AI Backwards & Learning
----------------------------------------------------------------------
Date: Wed, 28 May 86 14:34:04 PDT
From: SERAFINI%FAE@ames-io.ARPA
Subject: MIT research on symbolic/numeric processing
>>AIList Digest Volume 4 : Issue 133
>>From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
>>Subject: Spang Robinson Report, Volume 2 No. 5
>>MIT has started a project to explore the relationship
>>between symbolic and numeric computing, called Mixed Computing.
Does anybody have more info about this project?
Reply to serafini%far@ames-io.ARPA
Thanks.
------------------------------
Date: 29 May 86 11:32:00 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Functional programming and AI
> Date: 21 May 86 13:14:00 EST
> From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
> Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
>
> Do working AI programs really exploit these features a lot?
> Eg, do "learning" programs construct unforeseen rules, perhaps
> based on generalization from examples, and then use the rules?
> Or is functional programming just a trick that happens to be
> easy to implement in an interpreted language?
I think this is a slightly odd characterization of `functional
programming.' Maybe I'm confused, but I always thought a `functional
language' meant (in a nutshell) that there are no side effects. In
contrast, the one important `side effect' you're talking about here is
constructing a function at runtime and squirreling it away in a
knowledge base, to be run later. In theory you could do the
squirreling by passing around the whole state of the world and
non-destructively modifying that data structure as you go, but that's
orthogonal to what you seem to be talking about (besides being
painful).
Whatever it's called -- this indistinguishability between code and
data -- it's true that it's a ``trick,'' but I think it's an important
one. In fact as I think about it now, every AI program I've ever seen
_at_some_point_ passes functions around, sticks them in places like on
property lists as demons, and/or mashes together portions of bodies of
different functions and sticks the resulting lambda-expression
somewhere to run later (Well, maybe Mycin didn't (but Teiresias did)).
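As a minimal sketch of that property-list idiom -- the symbol BLOCK-37,
the ON-MOVE-DEMON property, and the message text are invented here purely
for illustration, not taken from any of the systems mentioned:
;; Construct a closure at run time and hang it on a property list.
;; (All names here are invented for illustration.)
(setf (get 'block-37 'on-move-demon)
      #'(lambda (new-position)
          (format t "BLOCK-37 moved to ~a~%" new-position)))
;; Later, when the relevant event occurs, fetch the demon and run it.
(funcall (get 'block-37 'on-move-demon) '(3 4))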
As far as learning programs that construct functions go, it's all in the
eyes of the interpreter. A rule that is going to be run by a rule
interpreter counts as a kind of function (it's just not necessarily in
LISP per se). So, since Tom Mitchell's LEX (for example) builds and
modifies the bodies of heuristic rules which later get applied to the
integration problem, it falls in this category. Tom Dietterich's EG
does something like this too. I'm sure there are jillions of other
examples but I'm not that deep into machine learning.
And of course there's always AM (which by now should be familiar to
all readers of AiList) which (among other things) did random structure
modifications to LISP functions, then ran them to see what they did.
For example, it might start with the following definition of EQUAL:
(defun EQUAL (a b)
  (cond ((eq a b) t)
        ((and (consp a) (consp b))
         (and (EQUAL (car a) (car b))
              (EQUAL (cdr a) (cdr b))))
        (t nil)))
To generalize the function, it drops one of the conjuncts (the recursion
on the cars) and changes the function's name (including the recursive call):
(defun SOME-NEW-FUNCTION (a b)
  (cond ((eq a b) t)
        ((and (consp a) (consp b))
         (SOME-NEW-FUNCTION (cdr a) (cdr b)))
        (t nil)))
Lo and behold, SOME-NEW-FUNCTION is a new predicate meaning
something like "same length list." So there's an existence
proof at least.
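A quick illustration of that claim (these example calls are not from AM
itself, just what the new predicate does on two short lists):
(SOME-NEW-FUNCTION '(1 2 3) '(a b c))   ; => T   (same length)
(SOME-NEW-FUNCTION '(1 2 3) '(a b))     ; => NIL (different lengths)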
Walter Hamscher
------------------------------
Date: 15 May 86 17:42:18 GMT
From: tektronix!uw-beaver!ssc-vax!bcsaic!michaelm@ucbvax.berkeley.edu
(michael maxwell)
Subject: Re: Common LISP style standards.
In article <3787@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley Shebs) writes:
>Sequence functions and mapping functions are generally preferable to
>handwritten loops, since the Lisp wizards will probably have spent
>a lot of time making them both efficient and correct (watch out though;
>quality varies from implementation to implementation).
I'm in a little different boat, since we're using Franz rather than Common
Lisp, so perhaps the issues are a bit different when you're using Monster, I
mean Common, Lisp... so at the risk of rushing in where angels etc.:
A common situation we find ourselves in is the following. We have a long list,
and we wish to apply some test to each member of the list. However, at some
point in the list, if the test returns a certain value, there is no need to
look further: we can jump out of processing the list right there, and thus
save time. Now you can jump out of a do loop with "(return <value>)", but you
can't jump out of a mapc (mapcar etc.) with "return." So we wind up using
"do" a lot of places where it would otherwise be natural to use "mapcar". I
suppose I could use "catch" and "throw", but that looks so much like "goto"
that I feel sinful if I use that solution...
Any style suggestions?
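One hedged sketch of the usual Common Lisp answers, with MY-TEST and
LONG-LIST as placeholder names: the sequence functions SOME, EVERY,
FIND-IF, and POSITION-IF already stop as soon as the answer is determined,
and DOLIST wraps its body in an implicit NIL block, so RETURN works inside it:
;; Return the first non-NIL value of MY-TEST, stopping at the first hit.
(some #'my-test long-list)
;; Return the first element satisfying MY-TEST, or NIL if there is none.
(find-if #'my-test long-list)
;; DOLIST establishes an implicit NIL block, so RETURN exits early.
(dolist (x long-list)
  (when (my-test x)
    (return x)))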
--
Mike Maxwell
Boeing Artificial Intelligence Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 27 May 86 21:37:58 GMT
From: ulysses!mhuxr!mhuxn!mhuxm!mhuxf!mhuxi!mhuhk!mhuxt!houxm!mtuxo!mtfmt!brian@ucbvax.berkeley.edu (B.CASTLE)
Subject: Neural Networks
For those interested in some historical references on
neural network function, the following may be of interest :
Dynamics:
NUNEZ, P.L. (1981). ELECTRIC FIELDS OF THE BRAIN. The
Neurophysics of EEG. Oxford University Press, NY.
This book contains a pretty good overview of EEG,
and also contains an interesting model of brain
dynamics based on neural network connectivity.
Learning:
OJA, E. (1983). SUBSPACE METHODS OF PATTERN RECOGNITION.
Research Studies Press, Ltd. Letchworth, Hertfordshire,
England. (John Wiley and Sons, Inc., New York.)
(For those with a PR background, and those having read
and understood Kohonen).
KOHONEN, T.
(1977) - ASSOCIATIVE MEMORY. A System-Theoretical
Approach. Springer-Verlag, Berlin.
(1980) - CONTENT ADDRESSABLE MEMORIES. Springer-
Verlag, Berlin.
(1984) - SELF-ORGANIZATION AND ASSOCIATIVE MEMORY.
Springer Series in Info. Sci. 8.
Springer-Verlag, New York.
These works provide a basic introduction to the
nature of CAM systems (frame-based only), and
the basic philosophy of self-organization in such
systems.
SUTTON, R.S. and A.G. BARTO (1981). "Toward A Modern Theory
of Adaptive Networks: Expectation and Prediction."
Psychological Review 88(2):135.
This article provides an overview of the 'tuning'
of synaptic parameters in self-organizing systems,
and a reasonable bibliography.
Classic:
MINSKY, M. and S. PAPERT (1969). PERCEPTRONS. An Introduction
to Computational Geometry. MIT Press, Cambridge, MA.
This book should be read by all neural network
enthusiasts.
In a historical context, the Hopfield model is important insofar
as it uses Monte Carlo methods to generate the network behavior.
There are many other synchronous and asynchronous neural network
models in the literature on neuroscience, biophysics, and cognitive
psychology, as well as computer and electrical engineering. I have
amassed a list of over a hundred books and articles, which I will
be glad to distribute, if anyone is interested. However, keep in
mind that the connection machines and chips are still very far
from approaching neural networks in functional capability and
diversity.
brian castle @ att (MT 2D-217 middletown, nj, 07748)
(...!allegra!orion!brian)
(...!allegra!mtfmt!brian)
------------------------------
Date: Thu, 29 May 1986 01:07 EDT
From: "David D. Story" <FTD%MIT-OZ @ MC.LCS.MIT.EDU>
Subject: Need Ref for "Automated Mathematician" by Doug Lenat
Discussion of "Automated Mathematician"
His thesis appears in "Knowledge Based Systems on Artful Dumbness"
(McGraw-Hill, 1982, ISBN 0-07-015557-7).
Wrong again... Oh well, try this one. The price is twenty-odd bucks.
Sorry; I called it "Artful Dumbness" because it had to rediscover the
primes. In fact it is quite a study. Does anyone have the source?
The working papers are not referenced in the thesis, so the searcher is
on his own; I'm sure they must exist someplace. There is a nice
bibliography in the back of the thesis.
------------------------------
Date: Thu, 8 May 86 21:01:58 cdt
From: ihnp4!uiucdcs!ccvaxa!aglew@seismo.CSS.GOV (Andy Glew)
Subject: 'Xerox' vs. 'xerox'?
>It's interesting to note that at one time, "frigidaire" (no caps) was
>considered to be a synonym for "refrigerator." Frigidaire, the
>company, fought this in order not to lose trademark status. How often
>does one hear this usage these days?
>
>Rich Alderson
>Alderson@Score.Stanford.EDU (=SU-SCORE.ARPA)
Do you speak French? Could common usage in another language lead to the loss
of trademark status?
Andy "Krazy" Glew. Gould CSD-Urbana. USEnet: ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801 ARPAnet: aglew@gswd-vms
------------------------------
Date: 29 May 86 10:55:41 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Doing AI backwards (from machine to man)
> Date: 18 May 86 05:39:39 GMT
> From: ernie.Berkeley.EDU!tedrick@ucbvax.berkeley.edu (Tom Tedrick)
>
> More on Barry Kort's "Problem of the right-hand tail"
> (ie social persecution of those with high intelligence).
My heart bleeds for those unfortunate people on the right-hand tail.
How about a Take a Genius to Lunch Week. Maybe we could get some rock
stars to do a ``Brain Aid.''
I take it this problem is distinct from the ``problem of the left-hand
tail'' and the ``problem of the right-hand tail against the big hump
in the middle''.
(* * *)
> Thus we see again how women have a
> brilliant gift for asking seemingly innocent favors which
> are really enormously costly. The subtle nature of the problem
> makes it difficult to pin down the real poison in their approach.
And it's a good thing you pointed this out. We men better watch out
for those seemingly innocent favors, *especially* from women! Hmm,
poison, you say...
Speaking of favors, please do us all a favor; keep your grim and
pathetic misogyny to yourself. Or send your ravings to bandykin.
(* * *)
> I am studying how machines function in order to find better
> ways for humans to function.
Why not study how machines live in order to find better ways for
humans to live. Or how machines laugh in order to find better ways
for humans to laugh. Or how machines get over their insecurities in
order to find better ways for humans to get over their insecurities.
(* * *)
> (Since it is only recently that the need for rigorous treatment
> of models of computation has induced us to really make some
> progress in understanding these things.)
Yes, I'm sure there's a `cybernetic' explanation for all of this.
Walter Hamscher
------------------------------
Date: 9 May 86 05:02:09 GMT
From: ihnp4!ltuxa!ttrdc!levy@ucbvax.berkeley.edu (Daniel R. Levy)
Subject: Re: "The Knowledge"
In article <5500032@uiucdcsb>, schraith@uiucdcsb.CS.UIUC.EDU writes:
> It seems to me that if AI researchers wish to build a system which
> has any versatility, it will have to be able to learn, probably
> in a similar manner to the taxicab drivers. Bierre states this problem:
> "Organize a symbolic recording of an ongoing stream of fly-by
> sensory data, on the fly, such that at any given time as much as
> possible can be quickly remembered of the entire stream."
> Surely computer professionals have better things to do, ultimately,
> than spoonfeed all the knowledge to a computer it will ever need.
Speaking as nothing but an interested observer in this discussion (I am
in no wise an AI guru, so please forgive me if I bumble), your observation
indeed makes sense to me: an A.I. system could well do better by
"learning" than by having all its "smarts" hardcoded in beforehand.
But it also seems possible that once a computer system HAS been
"trained" in this way, it should be quite easy to mass produce as
many equally capable copies of that system as desired; just dump its
"memory" and reload it on other systems.
Any comments? Does a "learning" system (or one that knows how to teach
itself) indeed hold more promise than distilling expert human knowledge
and hardcoding it in? Perhaps I've answered my own question: the system
that can "learn" is better able to adapt to new developments in the area
it is supposed to be "intelligent" in than one which is static. Maybe the
best of both worlds could apply: distilled human knowledge coded in as a
solid base, with the system free to expand on that base as it "learns"
more and more?
--
------------------------------- Disclaimer: The views contained herein are
| dan levy | yvel nad | my own and are not at all those of my em-
| an engihacker @ | ployer or the administrator of any computer
| at&t computer systems division | upon which I may hack.
| skokie, illinois |
-------------------------------- Path: ..!{akgua,homxb,ihnp4,ltuxa,mvuxa,
vax135}!ttrdc!levy
------------------------------
End of AIList Digest
********************