AIList Digest            Tuesday, 20 May 1986     Volume 4 : Issue 125 

Today's Topics:
Queries - Neural Networks & Inside and Outside & EURISKO &
Strength of Chess Computers & Conway's Game of LIFE & Prolog nth,
Replies - Prolog nth & Neural Networks & Shape & Conway's Game of LIFE,
AI Tools - PCLS Common Lisp

----------------------------------------------------------------------

Date: 11 May 86 09:02:16 GMT
From: tektronix!uw-beaver!bullwinkle!rochester!seismo!gatech!akgua!whuxlm!whuxl!houxm!mtuxo!orsay@ucbvax.berkeley.edu (j.ratsaby)
Subject: Re: neural networks

>
> Stephen Grossberg has been publishing on neural networks for 20 years.
> He pays special attention to designing adaptive neural networks that
> are self-organizing and mathematically stable. Some good recent
> references are:


I would like to ask you the following:
From all the books that you have read, was there any machine built, or
simulation run, that actually learned by adapting its inner structure?

If so, what type of information was learned by the machine, and in what
quantities? What action was taken to ask the machine to "remember" and
retrieve information? And finally, where do we stand today? That is, to
your knowledge, which machine behaves the closest to the biological
brain?
I would very much appreciate reading some of your thoughts on the above.

Thanks in advance.
Joel Ratsaby
!mtuxo!orsay

------------------------------

Date: Wed 14 May 86 17:36:44-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: inside and outside

Dr. Who's Tardis seems to have a larger interior than exterior. People find
this not outrageously unintuitive, and I am trying to understand why. Which
of the following 'explanations' do people find intuitively satisfying?

1. The inside is just larger than the outside, that's all.
2. There is a different kind of space inside the Tardis, so more can
be fitted into it.
3. The 'interior' isn't inside the police box at all; it's somewhere else,
and the door is a transporter device.
4. The door changes sizes, shrinking things on the way in and magnifying
them on the way out, and the interior is built on a small scale. (As in
Disney's 'Fantastic Voyage'.)
5. Something else (what?).

This particular idea recurs in folklore and children's fantasy, whereas other
equally impossible concepts are met with less often (something being in two
places at once, for example). This suggests that it might illustrate a
natural separation between different parts of our spatial intuition.

Send intuitions, explanations, comments to PHAYES@SRI-KL. Thanks.
Pat Hayes
SPAR

------------------------------

Date: 15 May 86 07:45:26 GMT
From: ingres.Berkeley.EDU!grady@ucbvax.berkeley.edu (Steven Grady)
Subject: AI in IAsfm

In the June '86 issue of IAsfm, there's a fascinating article on AI and
common sense. In this article the author mentions a program called
Eurisko, which I had heard about briefly before, and of which I'm now
reminded. Do people have references for it? How can I find out
more about it?

Steven Grady
...!ucbvax!grady
grady@ingres.berkeley.edu


[I've sent Steven the list of Lenat references that appeared in
#2/117, 12 Sep 1984. The most pertinent is

D. B. Lenat, "EURISKO: A Program that Learns New Heuristics and
Domain Concepts," Artificial Intelligence, March 1983.
Also available as Report HPP-82-26, Heuristic Programming Project,
Dept. of Computer Science and Medicine, Stanford University, Stanford,
California, October 1982.

-- KIL]

------------------------------

Date: Fri, 16 May 86 16:50:48 PDT
From: cracraft@isi-venera.arpa
Subject: strength of commercially available chess computers

This message is primarily addressed to Tony Marsland, AILIST member,
but is of general interest to the rest of the list as well.

Tony, for our readers, what are the three strongest available
chess machines according to the Swedish article in the most
recent issue of the ICCA Journal?

I was told today by a third party (THE PLAYERS company in
Los Angeles) that the Novag Constellation Expert is
approximately 2100 in chess strength. I find this impossible
to believe, because it runs on a tiny 8-bit processor at
1/50,000th the speed of a Belle, Cray Blitz, or Hitech,
machines which themselves barely pass 2100 in chess strength.
It should be noted that Hitech's rating is based on very few
tournament games. The same is true of Cray Blitz. Only
Belle has a sufficient base of games to qualify its 2200 rating claim.

Well, Tony, what are they? Take care.

Stuart Cracraft

------------------------------

Date: 16 May 86 15:21:15 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!seismo!mcvax!ukc!reading!onion.cs.reading.AC.UK!scm@ucbvax.berkeley.edu (Stephen Marsh)
Subject: Conway's game of LIFE


Here's an enquiry about John Conway's game of LIFE,
a simulation of the birth, life, and death of cells placed on
a grid. It was devised around 1970 and was based on the theory
of cellular automata. It became of great interest to a large
number of people after it was discussed by Martin Gardner
in Scientific American (Oct 1970 - Mar 1971).
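
For anyone who hasn't seen it, the whole game is a single update rule
applied to every cell at once. A minimal sketch of that rule in Prolog
(the 0/1 representation and the predicate name are mine, purely for
illustration):

% next_state(+State, +LiveNeighbours, -NextState), with cell states 0/1.
next_state(1, 2, 1).            % a live cell with 2 live neighbours survives
next_state(1, 3, 1).            % a live cell with 3 live neighbours survives
next_state(0, 3, 1).            % a dead cell with exactly 3 is born
next_state(_, N, 0) :- N < 2.   % fewer than 2: death by isolation
next_state(_, N, 0) :- N > 3.   % more than 3: death by overcrowding
next_state(0, 2, 0).            % a dead cell with 2 stays dead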

I would like to know if anyone has done, or knows of,
any investigation into aspects of the LIFE simulation since
the outburst of interest in 1970. If you have, or know of
any book that contains a (not too theoretical) run-down of
cellular automata, perhaps with reference to LIFE, could you let
me know?

Many thanks
Steve Marsh

scm@onion.cs.reading.uk
Steve Marsh,
Department of Computer Science,
PO BOX 220,
University of Reading,
Whiteknights,
READING UK.

------------------------------

Date: 14 May 86 09:00:11 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!topaz!lll-crg!booter@ucbvax.berkeley.edu (Elaine Richards)
Subject: HELP!!!!!

I have been given this silly assignment to do in Prolog, a language which
is rapidly losing my favor. We are to do the following:
Define a Prolog predicate len(N,L) such that N is the length of list L.

Define a Prolog predicate nth(X,N,L) such that X is the Nth element of
list L.

I cannot seem to instantiate N past the value 0 on any of these.

My code looks like this:

len(N,[]) :- 0 !.
len(N,[_|Y] :- N! is N + 1,len(N1,L].

It gives me an error message indicating that "_6543etc" or some such ghastly
number/variable refuses to take the arithmetic operation.

The code for nth is similar and gives a similar error message.

Please send replies to {whateverthepathis}lll-crg!csuh!booter.

E
*****

------------------------------

Date: 16 May 86 19:48:00 GMT
From: pur-ee!uiucdcs!uiucdcsb!mozetic@ucbvax.berkeley.edu
Subject: Re: HELP!!!!!


% How about the following:

len( 0, [] ).
len( N, [_ | L] ) :- len( N0, L ), N is N0 + 1.

nth( X, 1, [X | _] ).
nth( X, N, [_ | L] ) :- N > 1, N0 is N - 1, nth( X, N0, L ).
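
% Example queries, with the bindings I would expect (not a transcript):
%
%   ?- len(N, [a,b,c]).       N = 3
%   ?- nth(X, 2, [a,b,c]).    X = b
%
% The posted version failed because N was still unbound when `is' tried
% to evaluate N + 1; the "_6543" in the error message is the print name
% of an unbound variable. Here the recursive call binds N0 first.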

------------------------------

Date: 14 May 86 20:44:09 GMT
From: tektronix!uw-beaver!bullwinkle!rochester!seismo!lll-crg!topaz!harvard!bu-cs!jam@ucbvax.berkeley.edu (Jonathan A. Marshall)
Subject: Re: neural networks

In article <1583@mtuxo.UUCP> orsay@mtuxo.UUCP (j.ratsaby) writes:
> In article <538@bu-cs.UUCP> jam@bu-cs.UUCP (Jonathan Marshall) writes:
>>
>> Stephen Grossberg has been publishing on neural networks for 20 years.
>> He pays special attention to designing adaptive neural networks that
>> are self-organizing and mathematically stable. ...
>
> I would like to ask you the following:
> From all the books that you have read, was there any machine built, or
> simulation run, that actually learned by adapting its inner structure?

TRW is building a chip called the MARK-IV, which implements some of
Grossberg's earlier adaptive neural networks. The chip basically acts
as an adaptive pattern recognizer.

Also, Grossberg's group, the Center for Adaptive Systems, has
simulated some of his parallel learning algorithms in software. In
particular, "masking fields" have been applied to speech recognition,
the "boundary contour system" has been applied to visual pattern
segmentation, and other networks have been applied to symbolic
pattern recognition.

> If so, what type of information was learned by the machine, and in what
> quantities? What action was taken to ask the machine to "remember" and
> retrieve information? And finally, where do we stand today? That is, to
> your knowledge, which machine behaves the closest to the biological
> brain?
> I would very much appreciate reading some of your thoughts on the above.
> Thanks in advance. Joel Ratsaby !mtuxo!orsay

The network simulations learned to discriminate patterns based on
arbitrary similarity measures. They also performed associative
learning tasks that explain psychological data such as "inverted U,"
"overshadowing," "attentional priming," "speed-accuracy trade-off,"
and more. The networks learned and remembered spatial patterns of
neural activity, and later retrieved the patterns, using them as
"expectation templates" to match against newer patterns. The
degree of match or mismatch determined whether (1) the newer patterns
were represented as instances of the "expected" pattern, or (2) a fast
parallel search was initiated for another matching template, or (3)
the new pattern was allocated its own separate representation as an
unfamiliar pattern.
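
To make that cycle concrete, here is a toy Prolog sketch of the
match/search/create logic just described. The overlap measure, the 0.75
threshold, and all predicate names are invented here for illustration;
they are not Grossberg's actual dynamics.

% match_degree(+Pattern, +Template, -Degree): fraction of agreeing
% positions, for patterns given as equal-length lists of 0/1 features.
match_degree(P, T, D) :-
    agreements(P, T, A),
    length(P, L),
    D is A / L.

agreements([], [], 0).
agreements([X|Ps], [X|Ts], A) :- agreements(Ps, Ts, A0), A is A0 + 1.
agreements([X|Ps], [Y|Ts], A) :- X \== Y, agreements(Ps, Ts, A).

% classify(+Pattern, +Templates, -Result)
classify(P, [T|_], instance_of(T)) :-      % (1) close enough: an instance
    match_degree(P, T, D), D >= 0.75.
classify(P, [T|Ts], Result) :-             % (2) mismatch: search further
    match_degree(P, T, D), D < 0.75,
    classify(P, Ts, Result).
classify(P, [], new_category(P)).          % (3) nothing matched: allocate
                                           %     a new representation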

One of Grossberg's main contributions to learning theory has been the
design of self-organizing associative learning networks. His networks
function more robustly than most other designs because they are
self-scaling (big patterns get processed just as effectively as small
patterns), self-tuning (the networks dynamically adjust their own
capacities to simultaneously prevent saturation and suppress noise),
and self-organizing (learning occurs within the networks to produce
finer or coarser pattern discriminations, as required by experience).
Grossberg's mathematical analyses of "mass-action" systems enabled him
to design networks with these properties.

In addition, his networks are physiologically realistic and unify a
great deal of otherwise fragmented psychological data. Read one or
two of his latest papers to see his claims.

The question of which _machine_ behaves closest to the biological
brain is not yet appropriate. The candidates I know of are all
software simulations, with the possible exception of the TRW Mark-IV,
which is quite limited in capacity. Other schemes, such as Hopfield
nets, are not mass-action (in the technical sense) simulations, and
hence fail to observe certain kinds of local-global tradeoffs that
characterize biological systems.

However, the situation is hopeful today. More AI researchers have
been recognizing the importance of studying biological systems in
detail, to gain intuition and insight for designing adaptive neural
networks.

------------------------------

Date: Sat, 17 May 86 18:43:44 pdt
From: John B. Nagle <jbn@su-glacier.arpa>
Subject: Geometry-oriented AI


There are some ideas worth pursuing here. There is a class of
problems for which solid geometric modeling, rather than predicate
calculus, seems an appropriate underlying model. The hook and ring
problem seems to be of this type. Alex Pentland at SRI has done some
work on concise mathematical representations of the physical universe,
and I suspect that a system that could manipulate objects in Pentland's
representation, calculating interferences and contacts, driven by various
search strategies, would be an appropriate way to attack the hook and
ring problem.
One can dimly imagine a solid geometric modelling system with
approximate representations a la Pentland ("fuzzy solid modelling?")
enhanced by some notions of force, strength of materials, and inertia,
as a base for working on such problems. Unlike the Blocks World and
its successors, where the geometric information was transformed to
expressions in predicate calculus as soon as possible, I'm suggesting
that we stay in the 3D geometric domain and work there. We might even
want to take problems that are not fundamentally geometric and construct
geometric analogues of them so that geometric problem solving techniques
can be applied. (Please, no flames from the right brain/left brain
crowd). Has anyone been down this road yet and actually implemented
something?
Interesting thought: could the new techniques for performing
optimization calculations being developed by the neural-nets people be
applied to the computationally intensive tasks in solid geometric
modelling? I suspect so, especially if we are willing to accept
approximate answers ("Will the couch fit through the door?" might
return "Can't be sure, to within a .25-inch error tolerance"); some of
the closed-loop feedback analog techniques proposed may be applicable.
The big bottleneck in solid geometric modelling is usually performing
the interference calculations to decide what is running into what. The
brain is good at this, and probably doesn't do it by number-crunching.
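
For what it's worth, the simplest instance of such an interference test,
on axis-aligned boxes with an error tolerance, fits in a few lines of
Prolog. This is only an illustrative toy, and every name in it is mine:

% interfere(+Box1, +Box2, +Tol): a box is box(Xmin,Xmax, Ymin,Ymax,
% Zmin,Zmax); two boxes interfere iff their extents overlap, to within
% tolerance Tol, on all three axes.
interfere(box(XA1,XB1, YA1,YB1, ZA1,ZB1),
          box(XA2,XB2, YA2,YB2, ZA2,ZB2), Tol) :-
    overlap(XA1, XB1, XA2, XB2, Tol),
    overlap(YA1, YB1, YA2, YB2, Tol),
    overlap(ZA1, ZB1, ZA2, ZB2, Tol).

overlap(Min1, Max1, Min2, Max2, Tol) :-
    Min1 < Max2 + Tol,
    Min2 < Max1 + Tol.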


John Nagle
415-856-0767

------------------------------

Date: 19 May 86 09:05:16 GMT
From: brahms!weemba@ucbvax.berkeley.edu (Matthew P. Wiener)
Subject: Re: Conway's game of LIFE

I'm directing followups to net.games only.

A good reference to LIFE:

Berlekamp, Elwyn R.; Conway, John H.; and Guy, Richard K.,
Winning Ways II: Games in Particular, Academic Press, 1982.

The last chapter is devoted to the proof that LIFE is universal.
The rest of the book is worth reading anyway. You will learn why
E. R. Berlekamp is the world's greatest Dots-and-Boxes player, for
example.

A good reference to cellular automata:

Farmer, Doyne; Toffoli, Tommaso; and Wolfram, Stephen (editors),
Cellular Automata: Proceedings, North-Holland, 1984.

The latter is a reprint of Physica D, Volume 10D (1984), Nos. 1-2.
Mostly technical, with an interest in physical applications, but the
article by Gosper on how to compute LIFE at high speed is quite
intriguing and readable.

Also, Martin Gardner occasionally had an update after his original
article. His newest book, "Wheels, Life and Other Mathematical
Amusements", reprints the latest.

ucbvax!brahms!weemba Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

------------------------------

Date: Thu, 15 May 86 16:56:12 MDT
From: shebs@utah-cs.arpa (Stanley Shebs)
Subject: PCLS Common Lisp Available

This is to announce the availability of the Portable Common Lisp Subset
(PCLS), a Common Lisp subset developed at the University of Utah which
runs in Portable Standard Lisp (PSL).

PCLS is a large subset which implements about 550 of the 620+ Common Lisp
functions. It lacks lexical closures, ratios, and complex numbers. Streams
and characters are actually small integers, some of the special forms
are missing, and a number of functions (such as FORMAT) lack many of the
more esoteric options. PCLS does include a fully working package system,
multiple values, lambda keywords, lexical scoping, and most data types
(including hash tables, arrays, structures, pathnames, and random states).
The PCLS compiler is the PSL compiler, which produces very efficient
code, augmented by a front end that does a number of optimizations
specific to Common Lisp. Gabriel benchmarks and others show that PCLS
programs can be made to run as fast as their PSL counterparts: almost
all uses of lambda keywords are optimized away, and a type
declaration/inference optimizer replaces many function calls with
efficient PSL equivalents.
PCLS has been used at Utah and elsewhere for about 6 months, and a number
of programs have been ported both to and from PCLS and other Common Lisps.

PCLS is being distributed along with an updated version of PSL (3.2a).
We require that you sign a site license agreement. The distribution fee
is $250 US for nonprofit institutions, plus a $750 license fee for
corporations. Full sources to both PSL and PCLS are included along with
documentation on the internals and externals of the system. At present,
we are distributing PCLS for 4.2/4.3 BSD Vax Un*x and for Vax VMS.
Releases for Apollo and Sun are anticipated soon, and versions for other
PSL implementations are likely. If interested, send your USnail address to:

Loretta Cruse
Computer Science Department, 3160 MEB
University of Utah
Salt Lake City UT 84112

cruse@utah-20.ARPA {seismo, ihnp4, decvax}!utah-cs!cruse.UUCP

Technical questions about PCLS, flames about the absence of closures, etc.,
may be directed to shebs@utah-cs.ARPA, loosemore@utah-20.ARPA, or
kessler@utah-cs.ARPA.

------------------------------

End of AIList Digest
********************
