AIList Digest Monday, 16 Jul 1984 Volume 2 : Issue 91
Today's Topics:
Chess - Group Play,
Psychology - Limits to Intelligence,
AI Tools - Small Computer Lisp,
AI Books - Reference Books & How to Get a Ph.D.,
Business - Softwar,
Humor - The Laws of Robotics,
Brain Theory - Simulation,
Intelligence - Turing Test
----------------------------------------------------------------------
Date: 14 Jul 84 10:53-PDT
From: mclure @ Sri-Unix.arpa
Subject: Delphi Experiment: group play against 8-ply machine
I would like to conduct a Delphi Experiment with this list. The
format of the experiment is as follows. All interested chess players
will vote for their choice of move in an on-going game between them
(the group) and the Fidelity Prestige which will be set to search a
minimum of 8-ply deep (like Belle and Cray Blitz). This Prestige has
the ECO opening modules (80,000 variations).
The move with the most votes will be chosen above the others
and made in the current position. A couple of days will be given for
gathering the votes. In the event of a tie between two or more moves,
the move will be selected randomly.
The resulting position will then be handed to Prestige 8-ply which
will conduct a brute-force search to at least 8-ply. Its move will be
reported (the search usually takes about 3-15 hours) to the players and
another move vote will be solicited. This process will continue until
the Prestige mates the group or the group mates the Prestige or a draw
is declared.
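The fixed-depth brute-force search described above can be sketched as a
generic depth-limited negamax. This is only an illustrative sketch, not
the Prestige's actual program; the state/move callbacks are hypothetical
placeholders for a real chess representation.

```python
def negamax(state, depth, evaluate, moves, apply_move):
    """Depth-limited brute-force search: examine every line of play
    down to `depth` plies and return (score, best_move) for the side
    to move.  `evaluate` scores a position from the mover's viewpoint."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for m in legal:
        score, _ = negamax(apply_move(state, m), depth - 1,
                           evaluate, moves, apply_move)
        score = -score  # the opponent's best reply is our worst case
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```

An 8-ply search is simply `depth=8`; the cost grows roughly as the
branching factor to the 8th power, which is why the Prestige's move can
take hours.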
The moves, as they are made, will be reported to this list.
Please include the move number and the move in either Algebraic
or English notation.
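The voting rule described above (plurality choice, random tie-break)
amounts to the following minimal sketch; the function name and the
move-string representation are assumptions for illustration only.

```python
import random
from collections import Counter

def select_group_move(votes):
    """Return the move with the most votes, breaking ties at random.

    `votes` is one move string per participant, e.g. ["e5", "c5", "e5"].
    """
    tally = Counter(votes)
    top = max(tally.values())
    tied = [move for move, count in tally.items() if count == top]
    return random.choice(tied)  # random selection among tied moves
```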
>>>>>>>>> Prestige 8-ply will play White.
>>>>>>>>> Prestige 8-ply moves 1. e4 (P-K4)
BR BN BB BQ BK BB BN BR
BP BP BP BP BP BP BP BP
-- ** -- ** -- ** -- **
** -- ** -- ** -- ** --
-- ** -- ** WP ** -- **
** -- ** -- ** -- ** --
WP WP WP WP -- WP WP WP
WR WN WB WQ WK WB WN WR
Your move, please?
Replies to Arpanet: mclure@sri-unix or Usenet: sri-unix!mclure.
DO NOT SEND REPLIES TO THE ENTIRE LIST! Just send them to one of
the above addresses.
[Unless large numbers of people choose to participate, I would prefer that
a separate mailing list be used for communicating the state of the game.
Since this is to be an experiment, however, discussion of the purpose
and predicted outcome of the experiment would be interesting topics for
AIList. This seems to be a study of group intelligence, but with the
group dynamics largely removed. (See the following message from Richard
Brandau for speculation about group intelligence.) Is such a group
doomed to unimaginative play? A true Delphi experiment would circulate
initial suggestions and arguments (anonymously) before taking the final
vote; how much would that advance the group's intelligence? What can be
learned here? -- KIL]
------------------------------
Date: 14 Jul 1984 14:43-EDT
From: LCELEC@USC-ISI.ARPA
Subject: Proposed Limit of Intelligence
An assumption in 'armchair AI' is that some lower threshold of
information content and processing is required for 'intelligence' to
manifest itself. Debates abound on the subject of what that threshold
must be -- in other words, what operational definition of minimal
intelligence is to be accepted.
The Turing test is an example of this lower bound, a threshold set at
the level of human-like behavior. This test is sometimes criticized
as not being stringent enough, as missing parts of human experience
like 'emotion,' 'intent' or 'consciousness.' Still, the ultimate
object of comparison is human intelligence. Indeed, the comparison is
restricted to the intelligence of an individual human, rather than the
collective intelligence of a group, or of the species.
Expert system practitioners encounter difficulties in trying to
represent the combined expertise of multiple individuals. It is
generally assumed, I believe, that these difficulties are principally
technical, that they could be surmounted if we simply knew more about
how to represent and process diverse knowledge. This may be true, but
these technical difficulties may not be mere technicalities. Rather,
there may be a profound problem at root here, and an issue of
practical significance for the future of AI.
Might there exist an upper limit on the concentration of intelligence?
Beyond this hypothetical ceiling, information capacities and/or
processing abilities would have to be partitioned and distributed
among separate intelligent agents.
I do not seriously propose that the appropriate location of this upper
bound is at the level of intelligence possessed by an individual
human. Rather, I propose that some such ceiling exists, above, at, or
below the level of intelligence possessed by an individual.
If the ceiling lies at or above the level of human intelligence, then
it is not necessary to be concerned with it when modelling the
intelligent behavior of a single human. In other words, development
of AI programs can continue without regard for some higher-level
macrostructure of intelligence, as would be demanded if the ceiling
were below human-level.
This is not to say that a human or super-human ceiling can never be
important in the development of artificial intelligence. Indeed, we
humans (limited by a de facto rather than a theoretical ceiling) are
often involved in systems -- such as professional organizations,
communications networks, and committee meetings -- the cumulative
intelligence of which may surpass the level of any of the individual
human participants. These systems each possess a structure for the
organization of their constituent intelligent agents. In order to
model the BEHAVIOR of these systems, their organizational structures
must be modeled. If the proposed ceiling exists, it will be necessary
to model some such structure, just to obtain the level of INTELLIGENCE
possessed by these or more advanced systems.
The existence of these organizational structures raises the
possibility that the structures themselves possess intelligence. This
may not seem intuitive (or at least not parsimonious) when considering
the last committee meeting you've attended, but a lower-level example
may be more appealing. Lewis Thomas, in _The_Medusa_and_the_Snail_
(if I remember correctly) proposes that social insects such as ants
possess an intelligence AS A GROUP, and can be said to THINK as a
group, although the constituents of the group appear to lack anything
like intelligence. Presumably, something in the colony's
organizational structure is responsible for this societal
intelligence.
Humans would presumably prefer to think of themselves as INDIVIDUALLY
intelligent. Perhaps the human neuron fills the role of building-
block to human intelligence, in the way that the individual ant plays
a role in ant-colony intelligence. Both roles are clearly the product
of evolution; the role of individual humans in organizations can also
be seen as a product of evolution; the organizations themselves are a
product of a kind of evolution. Might these organizational structures
possess an intelligence of their own?
This raises the possibility that the ceiling on concentration of
intelligence lies below the level of human intelligence. Obviously,
if this is the case, human intelligence is just another "structural"
or "organizational" intelligence. This may be relevant to the limits
of human attention and task-multiplexing which are studied by
experimental psychologists. If so, the existence of such a ceiling
has profound significance for even modest advances in the state of the
AI art.
Regardless of the level (or levels) at which the proposed ceiling
exists, its very existence would have significance to our
understanding of the nature of intelligence. It may also prove
important for future system designers. After all, we would not want
them to spend time trying to build a machine whose existence can be
known to be impossible.
-- Richard Brandau
------------------------------
Date: 12 Jul 1984 08:56:33-EDT
From: sde@Mitre-Bedford
Subject: Simulation, limits to
According to Information Mechanics (if I understood the relevant part),
it is impossible to totally simulate anything in less than the mass of
the thing to be simulated. For a more elaborate response, the person who
should be commenting on this is Fred Kantor, the author of the monograph.
Unfortunately, I don't think he is on Arpanet.
David sde@mitre-bedford
------------------------------
Date: 11 Jul 84 5:21:10-PDT (Wed)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!eneevax!phaedrus @ Ucb-Vax.arpa
Subject: Re: Small Computer Lisps?
Article-I.D.: eneevax.146
This month's BYTE (July) has an article on LISP for the IBM PC. It is
a review of Integral Quality's IQLISP and The Software House's muLISP.
It is on page 281 and the authors are Jordan Bortz and John Diamant.
Without hallucinogens, life itself would be impossible.
ARPA: phaedrus%eneevax%umcp-cs@CSNet-Relay
UUCP: {seismo,allegra,brl-bmd}!umcp-cs!eneevax!phaedrus
------------------------------
Date: 6 Jul 84 20:57:06-PDT (Fri)
From: sun!idi!kiessig @ Ucb-Vax.arpa
Subject: AI Reference Books
Article-I.D.: idi.210
I received the following suggestions for reference/text books
on AI in response to my article posted a while ago:
AI Handbook by Feigenbaum et al.
AI Journal (pretty technical)
AI Magazine
Artificial Intelligence by Elaine Rich (a textbook)
(several people thought this was a good intro book)
Artificial Intelligence by Patrick Winston (2nd ed.)
Artificial Intelligence and Natural Man by Margaret Boden
(less technical, more historical & quite thick)
Expert Systems by Hayes-Roth, Waterman, et al.
Fifth Generation by Feigenbaum et al.
Problem Solving Methods in Artificial Intelligence by
Nils J. Nilsson (1971)
If you know of any others, I'd like to hear about them,
or if you've read any of these and have comments (good or bad),
that would be useful, too.
Rick Kiessig
{decvax, ucbvax}!sun!idi!kiessig
{akgua, allegra, amd70, burl, cbosgd, dual, ihnp4}!idi!kiessig
Phone: 408-996-2399
------------------------------
Date: Fri 13 Jul 84 15:45:20-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: How to get a Ph.D. in AI
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
Alan Bundy, Ben du Boulay, Jim Howe, and Gordon Plotkin have written a
chapter in O'Shea and Eisenstadt's new book Artificial Intelligence
(Q335.A788 1984). Chapter five is titled "How to Get a Ph.D. in AI."
Anybody out there need some advice?
------------------------------
Date: 29 Jun 84 13:47:51-PDT (Fri)
From: ihnp4!mgnetp!burl!ulysses!unc!mcnc!ecsvax!hes @ Ucb-Vax.arpa
Subject: Re: Softwar
Article-I.D.: ecsvax.2814
In the good old days, SAS only ran on IBM mainframes (360 & offspring)
and so there was the operating system (OS!) to ask for the date. Most
large corporations use the date for all sorts of operations, and so
probably wouldn't want to set the wrong date at IPL (OS load) in order
to avoid paying lease costs. (I believe the disappearing act works.)
Also SAS sells a lot of (quite good) tutorial and technical manuals
and does a fair amount of answering bug requests over the phone and
sending out newsletters, updates, etc. -- none of which would be
readily available if you weren't making lease payments (I assume they
would be suspicious ...)
--henry schaffer genetics ncsu
------------------------------
Date: 11 Jul 84 8:18:53-PDT (Wed)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ecsvax!dgary @ Ucb-Vax.arpa
Subject: Re: The Law
Article-I.D.: ecsvax.2903
The Three Laws of Robotics for the 1980s
(originally developed by the author and J. W. Godwin)
1. Never give a sucker an even break.
2. Never draw to an inside straight.
3. Don't get caught.
D Gary Grady
Duke University Computation Center, Durham, NC 27706
(919) 684-4146
USENET: {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary
------------------------------
Date: Thursday, 12 July 1984 18:11:35 EDT
From: Purvis.Jackson@cmu-cs-cad.arpa
Subject: Tests & Poems
Regarding the Turing Test . . .
Perhaps a more appropriate test of intelligence would be to have the
machine play the part of the interrogator. If it could distinguish
properly between a monkey and a business administration major, then
it would clearly exhibit intelligence. But on second thought, this
wouldn't be a very good test, for it would be entirely possible for
an intelligent human to fail to distinguish them.
Hurrah for artificial intelligence,
I think it be due time
To off this unnatural diligence
For activities more sublime.
Methinks with rapid development,
Applications become quite close,
My mind entertains the President,
Who surely could use a dose.
------------------------------
Date: 9 Jul 84 16:13:55-PDT (Mon)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ecsvax!dgary @ Ucb-Vax.arpa
Subject: Re: The Turing Test - machines vs. people
Article-I.D.: ecsvax.2879
Kilobaud magazine (now Microcomputing) ran an article ~5 years ago on AI and
"humanlike conversation" in which the author concluded that humanlike dialog
had little to do with intelligence, artificial or genuine. To accurately
simulate human dialog required, among other things, WOM (write only memory)
which was used to store anything not of direct immediate interest to the
speaker. You could do a pretty good simulation of Eddy Murphie on the other
end of a Turing test with a very simple algorithm.
D Gary Grady
Duke University Computation Center, Durham, NC 27706
(919) 684-4146
USENET: {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary
------------------------------
Date: 12 Jul 84 10:24:53-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!mcnc!ecsvax!dgary @ Ucb-Vax.arpa
Subject: Re: The Turing Test - machines vs. people
Article-I.D.: ecsvax.2926
Someone took issue with a recent posting I made:
>From: ags@pucc-i (Seaman) Tue Jul 10 10:38:42 1984
>> ...You could do a pretty good simulation of Eddy Murphie on the other
>> end of a Turing test with a very simple algorithm.
>
>Anyone who believes this either doesn't understand the Turing test or has
>a very low opinion of his own intelligence. Are you seriously claiming ...
From the kidding tone of the rest of my posting, I assumed the :-) was
quite unnecessary. Evidently I was wrong. So I retract my insult
to Messrs Turing and Murphy, and suggest that a simple algorithm could
substitute for "Cheech" Marin. OK, what about Marcel Marceau...
:-) :-) :-) <-- Please note!!
D Gary Grady
Duke University Computation Center, Durham, NC 27706
(919) 684-4146
USENET: {decvax,ihnp4,akgua,etc.}!mcnc!ecsvax!dgary
------------------------------
Date: 11 Jul 84 12:37:51-PDT (Wed)
From: ihnp4!hlexa!bev @ Ucb-Vax.arpa
Subject: Re: Re: The Turing Test - machines vs. p - (nf)
Article-I.D.: hlexa.2559
Understanding?
If a human passes a calculus test it means they can calculate
correct answers to (some percentage of) the questions asked.
If a computer does the same it means the same, but that's all.
------------------------------
Date: 11 Jul 84 16:58:47-PDT (Wed)
From: decvax!mit-athena!yba @ Ucb-Vax.arpa
Subject: Re: Re: The Turing Test - machines vs. p - (nf)
Article-I.D.: mit-athe.206
If a program passes a test in calculus the best we can grant it is that
it can pass tests. In the famous program ANALOGY (Bobrow's I think)
the computer "passes" geometric analogy tests. It does not seem to understand
either geometry or analogy outside of this limited domain of discourse.
We make the same mistaken assumption about humans--that is that because
you can pass a "test" you understand a subject.
The Turing test was a "blind" test; in that respect the Colonel is
wrong--someone reading this over the net or receiving a note from the bank
cannot just "go look". The idea was to tell via dialog only in a blind situation
(maybe even a double-blind if there are some control situations where
two humans taking the Turing test face each other).
The question of how to evaluate the performance of an AI system has become
an important question. I am not sure that the question of "understanding"
should even enter into it. In any case, let's not trivialize it.
yba%mit-heracles@mit-mc.ARPA UUCP: decvax!mit-athena!yba
------------------------------
End of AIList Digest
********************