AIList Digest Sunday, 23 Feb 1986 Volume 4 : Issue 35
Today's Topics:
Games - Computer Othello & Computer Chess,
Automata - Self-Replication and Artificial Life,
Methodology - A Thought on Thinking,
Humor - AI Koans & The Naive Dog Physics Manifesto
----------------------------------------------------------------------
Date: 16 Feb 86 21:34:13 EST
From: Kai-Fu.Lee@SPEECH2.CS.CMU.EDU
Subject: Computer Othello (Bill)
[Forwarded from the CMU bboard by Laws@SRI-AI.]
In the recent North American Computer Othello Championship Tournament,
our department's entry, BILL, placed second in a field of 11. The
final standings were:
1. Aldaron (C. Heath) 7.5 - 0.5
2. Bill (K. Lee & S. Mahajan) 7 - 1
3. Brand (A. Kierulf) 5 - 3
3. Fort Now (?) 5 - 3
Bill's only loss was to Aldaron, the defending champion and the program
that should have beaten Iago in 1981. That loss, however, came down to
the choice of colors in the game with Aldaron: in an unofficial rematch
with the colors reversed, Bill won. Furthermore, Bill soundly defeated
the program that tied Aldaron.
With the many improvements that we have in mind and the enthusiastic
participation this year, we expect an exciting championship next year.
If anyone is interested in more information about Bill, this tournament,
or the game transcripts, please send mail to kfl@speech2 or mahajan@h.
------------------------------
Date: 17 February 1986 1954-EST
From: Hans Berliner@A.CS.CMU.EDU
Subject: computer chess (final)
[Forwarded from the CMU bboard by Laws@SRI-AI.]
The Eastern Team championship is essentially over. Hitech won and
drew today, producing a final score of 5.5 - 0.5. It played
remarkably well. Apart from falling into an opening trap due to a
deficiency in its book, and being outplayed a little in game four before
recovering when the opponent made an error, its play was above
criticism. It played mainly against expert-level players, a class
that is almost extinct in Pittsburgh, and beat every one of them. It
drew its final game with a strong master rated nearly equal to Hitech
(2291). It had black in four games and white in two, a noticeable
disadvantage. Mike Valvo, who directs the ACM tournaments, played on
board one for the team and finished with a score of 4.5 - 1.5.
Hitech played on board two, and Belle played on board three. Belle
apparently has had a hardware overhaul, and played much better than
it had recently. For comparison, however, Belle scored 5 - 1, losing in
the last round, with four whites and two blacks, and it played against
slightly weaker opponents than Hitech. The fourth
board human on the team was a catastrophe, scoring less than 50%.
The crucial match came in the fifth round: both computers won and both
humans lost, making that match a draw and ruining our chances of winning
the title (the team had won all its previous matches). In the final
round there are still some unfinished games, but the team should do no
worse than draw, giving a team record of 5 - 1 (two drawn matches).
Overall, it is safe to say
that on our team the species robot sapiens far outperformed the
species homo sapiens.
------------------------------
Date: Wed, 19 Feb 86 10:20:20 EST
From: Chris_Langton%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: Artificial Life
I read in AIList a series of comments on the size of self-reproducing
systems (ARPA.AIList, Volume 3, Issue 71, 06/01/85 - starting with the
message from zim@mitre of 05/24/85).
I have published an article wherein I exhibit a self-reproducing configuration
embedded in a cellular automaton which occupies a mere 10x15 cell rectangle.
The construction is based on a modification of one of Codd's components
(see Codd: Cellular Automata) in his simplification of von Neumann's
self-reproducing machine. My article, entitled 'Self-Reproduction in
Cellular Automata', is published in Physica 10D (1984), North-Holland, pp. 135-144.
Basically, this configuration consists of a looped pathway with a construction
arm extending out from one corner. Signals cycling around the loop cause
the construction arm to be extended by a certain amount and then cause a
90-degree corner to be built. As this sequence is executed four times
(due to the same signal sequence cycling around the loop four times), the four
sides of an offspring loop are built. When the extended construction arm
runs into itself, the resulting collision causes the two loops to detach
from each other and also triggers the construction of a new construction
arm on each loop. The new arm on the parent loop is located at the
next corner 'downstream' (in the sense of signal flow) from the original
site. Thus, the parent loop will go on to build another loop in a new
direction. Meanwhile, when the offspring was formed, a copy of the signal
sequence that serves as the description was trapped inside it when the
two detached from one another, thus it, too, goes on to build offspring.
The result is a growing colony of loops that expands out into the array,
consisting of a reproductive outer fringe surrounding a growing 'dead'
core, in the manner of a coral reef or the cross section of a tree.
Details are to be found in the article. Although this construction is
not capable of universal construction or computation, it clearly
reproduces itself in a non-trivial manner, unlike the reproduction under
modulo addition rules, of which Fredkin's reproducing system is an example.
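As a minimal sketch of the "trivial" case for comparison (illustrative only,
not from the article): a modulo-addition parity rule of the kind referred to
above can be simulated in a few lines of Python. Each cell becomes the sum,
mod 2, of its four orthogonal neighbors; because the rule is linear over
GF(2), after 2^n steps any seed pattern simply reappears as four displaced
copies of itself, copying by arithmetic, with no stored description or
construction arm involved.

  from collections import Counter

  def parity_step(live):
      """One update of the parity rule on a set of live (x, y) cells."""
      counts = Counter()
      for x, y in live:
          for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              counts[(x + dx, y + dy)] += 1
      # A cell is live next step iff an odd number of its neighbors were live.
      return {cell for cell, n in counts.items() if n % 2 == 1}

  seed = {(0, 0), (1, 0), (0, 1), (0, 2)}      # any small pattern will do
  live = seed
  for _ in range(4):                           # 4 = 2**2 steps
      live = parity_step(live)

  # After 4 steps the configuration is exactly the seed copied to four new
  # sites, displaced by 4 cells along each axis.
  copies = {(x + dx, y + dy) for (x, y) in seed
            for dx, dy in ((4, 0), (-4, 0), (0, 4), (0, -4))}
  assert live == copies
  print(len(live), "live cells: four displaced copies of the 4-cell seed")

Running this prints 16 live cells after four steps, four copies of the
four-cell seed, exactly as the linearity argument predicts; the loops
described above reproduce by executing a stored description instead.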
I am also working on cellular automaton simulations of insect colonies and
artificial biochemistries. I have another article coming out in the proceedings
of the conference on 'Evolution, Games, & Learning' held at the Los Alamos
National Labs last May. It is entitled 'Studying Artificial Life with
Cellular Automata'. There will be a videotape available soon from Aerial Press in
Santa Cruz which illustrates the self-reproducing loops as well as the
artificial insect colony simulations and other examples of `artificial life'.
I would be very interested in hearing from anybody who is working on anything
which might fall under the general heading 'artificial life'. I would also
like to try to get together a workshop, with computer support, where people
who have been working in this area could get together and have a 'jam session'
of sorts, and see each other's stuff. Any proceedings from such a workshop
would benefit greatly from having a video published along with it. If anybody
is interested in helping to organize such a workshop, send me a message. I
can be reached at: CGL%UMICH-MTS@MIT-MULTICS.ARPA
USPS: Christopher G. Langton / EECS Dept. / University of Michigan /
Ann Arbor MI 48109
MA-BELL (now divorced from PA-ATT) 313-763-6491
------------------------------
Date: Sat, 15 Feb 86 15:09:11 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: A Thought
From Vol 4 # 26:- "The idea is
to get kids to be more thoughtful about thinking by getting them to
try to think about how animals think, and by taking the results of
these contemplations and actually building animal-like creatures that
work." Alan Kay.
From Vol 3 # ??:- Date: Tue, 12 Mar 85
"Just as man had to study birds, and was able to derive the underlying
mechanism of flight, and then adapt it to the tools and materials
at hand, man must currently study the only animal that thinks
in order to derive the underlying principles there also." Frank Ritter
I am struck by two (or more?) very different uses of the word "think"!
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...mcvax!ukc!kcl-cs!qmc-ori!gcj
------------------------------
Date: Thu, 13 Feb 86 00:08:02 PST
From: "Douglas J. Trainor" <trainor@LOCUS.UCLA.EDU>
Subject: a cuppla ai koans
from <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
One day SIMON was going to the cafeteria when he met WEIZENBAUM, who
said: "I have a problem for you to solve." SIMON replied, "tell me more
about your problem," and walked on.
===================================================================
from <Kelley.pa@Xerox.COM>
How long would a simulation of its own lifetime survive?
What is the rate of change of all metaphors for the viability of that rate?
===================================================================
someone re-sent me Gabriel's old '83 koan <robins@usc-isib>:
A famous Lisp Hacker noticed an Undergraduate sitting in front of a
Xerox 1108, trying to edit a complex Klone network via a browser.
Wanting to help, the Hacker clicked one of the nodes in the network
with the mouse, and asked "what do you see?" Very earnestly, the
Undergraduate replied "I see a cursor." The Hacker then quickly pressed
the boot toggle at the back of the keyboard, while simultaneously
hitting the Undergraduate over the head with a thick Interlisp Manual.
The Undergraduate was then Enlightened.
------------------------------
Date: Wed, 19 Feb 86 14:38 PST
From: Cottrell@NPRDC
Subject: The Naive Dog Physics Manifesto
From: Leslie Kaelbling <Kaelbling@SRI-AI.ARPA>
From: MikeDixon.pa@Xerox.COM
From: haynes@decwrl.DEC.COM (Charles Haynes)
SEMINAR
From PDP to NDP through LFG:
The Naive Dog Physics Manifesto
Garrison W. Cottrell
Department of Dog Science
Condominium Community College of Southern California
The Naive Physics Manifesto (Hayes, 1978) was a seminal paper in
extending the theory of knowledge representation to everyday phenomena.
The goal of the present work is to extend this approach to Dog Physics,
using the connectionist (or PDP) framework to encode our everyday,
commonsense knowledge about dogs in a neural network[1]. However,
following Hayes, the goal is not a working computer program. That is in
the province of so-called performance theories of Dog Physics (see, for
example, my 1984 Modelling the Intentional Behavior of the Dog). Such
efforts are bound to fail, since they must correspond to empirical data,
which is always changing. Rather, we will first try to design a
competence theory of dog physics[2], and, as with Hayes and Chomsky, the
strategy is to continually refine that, without ever getting to the
performance theory.
The approach taken here is to develop a syntactic theory of dog
actions which is constrained by Dog Physics. Using a variant of
Bresnan's Lexical-Functional Grammar, our representation will be a
context-free action grammar, with associated s-structures (situation
structures). The s-structures are defined in terms of Situation
Dogmatics[3], and are a partial specification of the situation of the
dog during that action.
Here is a sample grammar which generates strings of action
predicates corresponding to dog days[4] (nonterminals are capitalized):
Day -> Action Day | Sleep
Action -> Sleep | Eat | Play | leavecondo Walk
Sleep -> dream Sleep | deaddog Sleep | wake
Eat -> Eat chomp | chomp
Play -> stuff(Toy, mouth) | hump(x,y) | getpetted(x,y)
Toy -> ball | sock
Walk -> poop Walk | trot Walk | sniff Walk | entercondo
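A minimal sketch of how strings can be sampled from such an action grammar
is given below; the toy random-derivation procedure and its expansion
probabilities are invented purely for illustration and carry no
Dog-Theoretic weight.

  import random
  import re

  # Toy random derivation for the dog-day action grammar above; the
  # expansion probabilities are made up for illustration only.
  GRAMMAR = {
      "Day":    [(["Action", "Day"], 0.7), (["Sleep"], 0.3)],
      "Action": [(["Sleep"], 0.25), (["Eat"], 0.25), (["Play"], 0.25),
                 (["leavecondo", "Walk"], 0.25)],
      "Sleep":  [(["dream", "Sleep"], 0.3), (["deaddog", "Sleep"], 0.2),
                 (["wake"], 0.5)],
      "Eat":    [(["Eat", "chomp"], 0.4), (["chomp"], 0.6)],
      "Play":   [(["stuff(Toy, mouth)"], 0.4), (["hump(x,y)"], 0.3),
                 (["getpetted(x,y)"], 0.3)],
      "Toy":    [(["ball"], 0.5), (["sock"], 0.5)],
      "Walk":   [(["poop", "Walk"], 0.25), (["trot", "Walk"], 0.25),
                 (["sniff", "Walk"], 0.25), (["entercondo"], 0.25)],
  }

  def derive(symbol):
      """Expand a symbol; nonterminals are capitalized, terminals are not."""
      if symbol in GRAMMAR:
          rules = GRAMMAR[symbol]
          rhs, = random.choices([r for r, _ in rules], [w for _, w in rules])
          return " ".join(derive(s) for s in rhs)
      # Terminal action predicate: expand any embedded nonterminal,
      # e.g. the Toy inside stuff(Toy, mouth).
      return re.sub(r"[A-Z]\w*", lambda m: derive(m.group()), symbol)

  print(derive("Day"))
  # e.g. "leavecondo sniff trot poop entercondo chomp dream wake"

Note that, as in the grammar, poop can only be generated between leavecondo
and entercondo, so the grammaticality constraint discussed below is
preserved.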
Several regularities are captured by the syntax. For example,
these rules have the desirable property that pooping in the condo is
ungrammatical. Obviously such grammatical details are not innate in the
infant dog. This brings us to the question of rule acquisition and
Universality. These context-free action rules are assumed to be learned
by a neural network with "hidden" units[5] using the bark propagation
method (see Rumelhart & McClelland, 1985; Cottrell 1985). The beauty of
this is that Dogmatic Universality is achieved by assuming neural
networks to be innate[6].
The above rules generate some impossible sequences, however. This
is the job of the situation equation annotations. Some situations are
impossible, and this acts as a filter on the generated strings. For
example, an infinite string of stuff(Toy, mouth)'s is prohibited by the
constraint that the situated dog can only fit one ball and one sock in
her mouth at the same time. One of the goals of Naive Dog Physics is to
determine these commonsense constraints. One of our major results is
the discovery that dog force (df) is constant. Since df = mass *
acceleration, this means that smaller dogs accelerate faster, and dogs
at rest have infinite mass. This is intuitively appealing, and has been
borne out by my dogs.
____________________
[1]We have decided not to use FOPC, as this has been proven by Schank
(personal communication) to be inadequate, in a proof too loud to fit in
this footnote.
[2]The use of competence theories is a standard trick first introduced
by Chomsky, which avoids the intrusion of reality on the theory.
An example is Chomsky's theory of light bulb changing, which begins by
rotating the ceiling...
[3]Barwoof & Peppy (1983). Situation Dogmatics (SD) can be regarded
as a competence theory of reality. See previous footnote. Using SD is a
departure from Hayes, who exhorts us to "understand what [the
representation] means." In the Gibsonian world of Situation Dogmatics, we don't
know what the representation means. That would entail information in
our heads. Rather, following B&P, the information is out there, in the
dog. Thus, for example, the dog's bark means there are surfers walking
behind the condo.
[4]Of course, a less ambitious approach would just try to account for
dog day afternoons.
[5]It is never clear in these models where these units are hidden, or
who hid them there. The important thing is that you can't see them.
[6]Actually this assumption may be too strong when applied to the
dogs under consideration. However, this is much weaker than Pinker's
assumption that the entirety of Joan Bresnan's mind is innate in the
language learner. It is instructive to see how his rules would work
here. We assume hump(x,y) is innate, and x is bound by the default
s-function "Self". The first time the puppy is humped, the mismatch
causes a new Passive humping entry to be formed, with the associated
redundancy rule. Evidence for the generalization to other predicates is
seen in the puppy subsequently trying to stuff her mouth into the ball.
------------------------------
End of AIList Digest
********************