AIList Digest           Thursday, 22 Oct 1987     Volume 5 : Issue 242 

Today's Topics:
Comments - The Success of AI
Representation - Lenat's AM Program

----------------------------------------------------------------------

Date: 19 Oct 87 09:54:22 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: The Success of AI

Date: 18 Oct 87 01:39:46 GMT
From: PT.CS.CMU.EDU!SPICE.CS.CMU.EDU!spe@cs.rochester.edu (Sean
Engelson)

Given a sufficiently powerful computer, I could, in theory, simulate
the human body and brain to any desired degree of accuracy. * * *

Don't forget to provide all the sensory input that comes from being
in, moving around in, and affecting the world. Otherwise you'll
be simulating a catatonic.

Do the terminally catatonic have minds?

------------------------------

Date: 19 Oct 87 10:27:22 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: The Success of AI

Date: 17 Oct 87 22:09:05 GMT
From: cbmvax!snark!eric@rutgers.edu (Eric S. Raymond)

* * *

I never heard of this line of research being followed up by anyone but
Doug Lenat himself, and I've never been able to figure out why. He later
wrote a program called EURISKO that (among other things) won that year's
Trillion-Credit Squadron tournament (this is a space wargame related to
the _Traveller_ role-playing game) and designed an ingenious fundamental
component for VLSI logic. I think all this was in '82.

See Lenat & J.S. Brown in AI Journal volume 23 #3, 1984: "Why AM
and EURISKO Appear to Work". The punchline of the article
(briefly) is that AM seems to have succeeded in elementary set
theory because its own representation structures (i.e., lists)
were particularly well suited to reasoning about sets. It
started breaking down at exactly the places where its
representation was inadequate for the concepts. For example,
there was no obvious way to move from its representation of the
number n as a list of length n, to a positional representation
that would make it more likely to discover things like
logarithms. Furthermore, its operations on procedures involved
local modifications to procedures expressed as list structures,
and as long as the procedures were compact these "mutations"
were likely to produce interesting new behavior, but as the
procedures got more complex, arbitrary random local
modifications had a vanishingly low success ratio. Hence it
would seem that the direction to go from this insight is to make
programs that can learn new representations. There are probably
not enough people working on that. But anyway this is getting
off the subject, which is whether AI has had any successes.
Whether you want to count AM as a success is a half-empty /
half-full issue; the field surely learned something from it, but
it surely hasn't learned enough.
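
To make the representational point concrete, here is a small sketch in
Python -- purely illustrative, and of course not AM's actual Lisp code --
contrasting AM's unary, list-of-length-n representation of numbers with a
positional one. Addition and doubling are one-step operations on the unary
lists, while something like a logarithm is essentially already present in
the positional form and has no obvious counterpart in the unary one.

def unary(n):
    "AM-style representation: the number n as a list of n tokens."
    return ["T"] * n

def add_unary(a, b):
    "Addition is just list concatenation -- a cheap, local discovery."
    return a + b

def double_unary(a):
    "Doubling is self-concatenation -- another cheap discovery."
    return a + a

def positional(n, base=10):
    "Positional representation: a list of digits, least significant first."
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits or [0]

# In the positional form, 'length' is already a logarithm:
# len(positional(n, base)) == floor(log_base(n)) + 1 for n >= 1.
# There is no comparably short step from lists-of-length-n to that concept,
# which is roughly the representational limit described above.
print(len(add_unary(unary(2), unary(3))))   # 5    -- addition by appending
print(len(unary(1000)))                     # 1000 -- length tracks n itself
print(len(positional(1000)))                # 4    -- length tracks log10(n), plus one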

------------------------------

Date: 19 Oct 87 17:47:45 GMT
From: brian@sally.utexas.edu (Brian H. Powell)
Subject: Re: The Success of AI

In article <228@snark.UUCP>, eric@snark.UUCP (Eric S. Raymond) writes:
> Doug Lenat's Amateur Mathematician program was a theorem prover equipped with
> a bunch of heuristics about what is 'mathematically interesting',
> [...]
>
> After n hours of chugging through a lot of nontrivial but already-known
> mathematics, it 'conjectured' and then proved a bunch of new results on the
> [...]

I feel compelled to challenge this, but not necessarily the rest of your
article.
AM wasn't a theorem prover. From the July, 1976 dissertation:

7.2.2 Current Limitations

[...]
AM has no notion of proof, proof techniques, formal validity, heuristics for
finding counterexamples, etc. Thus it never really establishes any conjecture
formally.
---end of excerpt---

The dissertation goes on to briefly suggest ways of adding this
capability, but as I understand it, no one ever has. Lenat himself, as I
recall, thought it was more interesting to do further work on heuristics
than on proving. EURISKO was the result of that. (i.e., you might get more
power if you could spend part of your time conjecturing heuristics in addition
to conjecturing about particular problems.)
AM is a neat program, but by many accounts it's overrated. It's great that
it conjectures all these neat theorems, but my impression is that it does
quite a bit of floundering to find them. I think it also spends a lot of time
floundering without finding anything useful. (A program run isn't
guaranteed to think of something clever.) Finally, it's not clear that the
program is really intelligent enough to realize that it's just conjectured
something intelligent. (I would bet that there are a lot of things AM has
considered uninteresting that humans consider interesting, and vice-versa.)
A human can monitor AM and modify the priority of certain tasks if the
human feels AM is studying the wrong thing. A human is practically required
for this purpose if AM is to do something especially clever. This turns AM
more into a search tool than an autonomous program, and I don't think a tool
is what Lenat had in mind.
If you read the summaries of AM, you think it's powerful. Once you read
the entire dissertation, you realize it's not quite as great a program as you
had thought, but you still think it's good research.

Brian H. Powell
UUCP: ...!uunet!ut-sally!brian
ARPA: brian@sally.UTEXAS.EDU

------------------------------

Date: 20 Oct 87 06:30:06 GMT
From: mcvax!cernvax!ethz!srp@uunet.uu.net (Scott Presnell)
Subject: Re: The Success of AI

In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:

>
>Given a sufficiently powerful computer, I could, in theory, simulate
>the human body and brain to any desired degree of accuracy. This

Horse shit. The problem is you don't even know exactly what you are
simulating! I suppose you could say it's all a problem of definition;
however, even with your assumption that the mind is an integral part of
the body, I still claim that you don't know what you're simulating. For
instance, dreams: are they logical? Do they fall in a pattern? A computer
has got to have them to be a real simulation of a body/mind, but you
cannot simulate what you cannot accurately describe.

Let's get down to a specific case:
I propose that, given any amount of computing power, you could not presently,
and probably never will be able to, simulate me: Scott R. Presnell.
My wife can be the judge.

This may sound reactionary; that's because that's the way I responded
internally to the first statement. I apologize if I've jumped into the
discussion too quickly; I don't have time to read the previous discussions
right now.

Scott Presnell Organic Chemistry
Swiss Federal Institute of Technology (ETH-Zentrum)
CH-8092 Zurich, Switzerland.
uucp:seismo!mcvax!cernvax!ethz!srp (srp@ethz.uucp); bitnet:Benner@CZHETH5A

------------------------------

Date: 21 Oct 87 05:30:58 GMT
From: ucsdhub!jack!man!crash!gryphon!tsmith@sdcsvax.ucsd.edu (Tim
Smith)
Subject: Re: The Success of AI

In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
+=====
| Given a sufficiently powerful computer, I could, in theory, simulate
| the human body and brain to any desired degree of accuracy. This
| gedanken-experiment is the one which put the lie to the biological
| anti-functionalists, as, if I can simulate the body in a computer, the
| computer is a sufficiently powerful model of computation to model the
| mind. I know, for example, that serial computers are inherently as
| powerful computationally as parallel computers, though not as
| efficient, as I can simulate parallel processing on essentially serial
| machines. So we see, that if the assumption that the mind is an
| inherent property of the body is accepted, we must also accept that a
| computer can have a mind, if only by the inefficient expedient of
| simulating a body containing a mind.
| -Sean-
+=====
My claim is, specifically, that you cannot simulate a human
being (body and mind) with a digital computer, either in theory
or practice. Not a few people with whom I am in basic agreement
would claim that, well, it just *might* be conceivable in
theory, but you could never do it in practice.

It's not clear what is meant by "in theory" here. It sounds like
an unacceptable hedge. You might, for example, claim that with a
very large number of computers, all just at the edge of the
speed boundaries dictated by the laws of physics in the most
advanced materials imaginable, you could simulate a human body
and mind--but not in real time. But the simulation would have to
be in real time, because humans live in real time, doing things
that are critically time dependent (perceiving speech, for
example).

Similarly, humans think the way they do partially because of
their size, because of the environment they live in, because of
the speed at which they move, live, and think.

One of the consistent failings of AI researchers is to vastly
underestimate the intricacy and complexity of the kinds of
things they are trying to model (of course most of the other
cognitive scientists in this century have also underestimated
these things). Playing chess is nothing compared with natural
language understanding. We take language understanding for
granted, because, after all, we all do it. Yet we consider a
chess grand master brilliant, because we cannot match his
skills. But in fact, becoming a chess grand master is not more
difficult than learning to speak and write English. It's easier.
We learn language because we start early, spend *lots* and
*lots* of time doing it, and it's fun (watch children playing
word games sometime). We recognize that it's learn to speak or
perish, in a sense. Many fewer people are motivated (at the
early age required) to learn to play chess at the GM level.

The trouble with the kind of naive (if you'll pardon the
expression) reductionism inherent in your position is that it
seems to assume that any set of physical interactions that can
be expressed mathematically can be scaled up to a full-scale
simulation, and that this simulation would be indistinguishable
from the original thing.

Leaving aside AI for a moment, consider weather simulations.
Meteorologists have developed computerized simulations of
phenomena such as hurricanes. Based on lots of data from past
storms, they can predict, with some accuracy, how a developing
storm might behave. This is obviously an extremely useful
capability. But to claim that a computer simulation of a
hurricane is exactly the same as the real thing would probably
sound like a very poor joke to someone who has experienced a
hurricane first-hand.

It seems to me that any intelligent person would say "how could
you ever truly simulate a hurricane, and why would you want to?"

Well, I have the same reaction to those who say that they want
to simulate human intelligence, or even some essential part of
it such as natural language understanding. How, and for God's
sake, *why*? To study human intelligence, using computers and
any other tools available, is a fascinating thing to do. I have
spent a number of years doing so. But to say that we are
approaching an era when human intelligence will be simulated
seems to be just about like saying that from the puff of air
generated by the wave of a hand it is only a few short steps to
a full-scale realistic simulation of a hurricane.

Know what it is you are trying to simulate!


--
Tim Smith
INTERNET: tsmith@gryphon.CTS.COM
UUCP: {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP: {philabs, trwrb}!cadovax!gryphon!tsmith

------------------------------

Date: 21 Oct 87 05:35:49 GMT
From: ucsdhub!jack!man!crash!gryphon!tsmith@sdcsvax.ucsd.edu (Tim
Smith)
Subject: Re: The Success of AI

In article <228@snark.UUCP> eric@snark.UUCP (Eric S. Raymond) writes:
+====
| In article <1922@gryphon.CTS.COM>, tsmith@gryphon.CTS.COM (Tim Smith) writes:
| > Computers do not process natural language very well, they cannot
| > translate between languages with acceptable accuracy, they
| > cannot prove significant, original mathematics theorems.
|
| I am in strong agreement with nearly everything else you say in this article,
| especially your emphasis on a need for a new paradigm of mind. But you are,
| I think, a little too dismissive of some real accomplishments of AI in at
| least one of these difficult areas.
|
| Doug Lenat's Amateur Mathematician program was a theorem prover equipped with
| a bunch of heuristics about what is 'mathematically interesting', essentially
| methods for grinding out interesting generalizations and combinations of known
| theorems.
| [...]
|
| So at least one of your negative assertions is incorrect.
+=====
OK, I'll accept your word on this (I'm a linguist, not a
mathematician).

+=====
| I think AI has the same negative-definition problem that "natural
| philosophy" did when experimental science got off the ground -- that
| once people get a handle on some "AI" problem (like, say, playing
| master-level chess or automated proof of theorems) there's a tendency
| to say "oh, now we understand that; it's *just* computation, it's not
| really AI" and write it out of the field (it would be interesting to
| explore the hidden vitalist premises behind such thinking).
+=====
Well, scientific (and philosophical) fields do progress, and there is
a normal tendency to discard the old and no longer interesting. But
there is an interesting aspect to what you are saying, I believe. Let
me try to develop it a bit, using chess as an example.

Chess: I am at a disadvantage here in one sense--I don't play the
game very well. In my limited understanding of it, it is a very
difficult game to play at a high level. It requires years of study,
usually starting at a young age, to become a grand master. It
requires peculiar abilities of concentration and nervous resources to
play chess at a competitive level. Nevertheless, I don't think of
chess as being a particularly intellectual game. It seems much more like
tennis to me (and I don't play that either). This is not a put-down!
I think of chess as being a sedentary sport--a sport for the mind.

Now here's the interesting point. If you were to come to me and say--
"Smith, you have a year to develop an automaton that will play some
kind of major sport at a championship level, competing against humans.
Money is no object, and you can have access to all the world's
experts in AI and robotics, but you must design a robot that plays
championship X in a year's time. What is X?"
I would say, without a
moment's hesitation, "tennis".

Why? Of all the sports, tennis is the most bounded. It is played within
a very restricted area (unlike golf or even baseball), it is a
one-against-one sport (unlike football or soccer), the playing surfaces
(aside from Wimbledon) are the truest of all the major sports, and it
is indubitably the most boring of all the sports to watch (if not to
play). A perfect candidate for automation.

Chess? It is tennis for the mind. And so a perfect candidate for
initial attempts at AI. But if computers have conquered chess (as
they seem about to), does this mean that "real" artificial
intelligence is not far behind? No, it just means that chess was
over-rated as an intellectual exercise! On a scale of 1 to 10, in
terms of intellectual effort involved in playing the game, chess
seems to rate at about .002. In terms of skill, concentration
ability, depth of understanding of the game, etc. it is difficult.
But then, so is multiplying two 37 digit numbers in your head
difficult. Unless you're an "idiot savant", or a computer!
--
Tim Smith
INTERNET: tsmith@gryphon.CTS.COM
UUCP: {hplabs!hp-sdd, sdcsvax, ihnp4, ....}!crash!gryphon!tsmith
UUCP: {philabs, trwrb}!cadovax!gryphon!tsmith

------------------------------

Date: Wed, 21 Oct 87 09:50:51 PDT
From: Tom Dietterich <tgd@orstcs.cs.orst.edu>
Subject: Lenat's AM program

The exact reasons for the success of AM (and for its eventual failure
to continue making new discoveries) have not been established. In
Lenat's dissertation, he speculated that the source of power was the
search heuristics, and that the eventual failure was caused by the
inability of the system to generate new heuristics.

Then, in a paper by Lenat and Brown, a different reason is given:
namely that the representation of concepts was the critical factor.
There is a close relationship between mathematical concepts and lisp,
so that mathematical concepts can be represented very concisely as
lisp functions. Simple syntactic mutation operations, when applied to
these concise functions, yield other interesting mathematical
concepts. In new domains, such as those tackled by Eurisko, it was
necessary to engineer the concept representation so that the concepts
were concisely representable.
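
To give the flavor of that claim, here is a hedged sketch in Python (not
Eurisko's or AM's actual mutation machinery): a concept is stored as a tiny
expression tree, and "mutation" swaps a single operator. When the expression
is only a few symbols long, a blind one-symbol mutation quite often lands on
another recognizable concept; applied to a long, complex procedure, the same
local edit almost never would.

import random

# A concept is a tiny expression tree over one variable x.
OPS = {"add": lambda a, b: a + b,
       "mul": lambda a, b: a * b,
       "sub": lambda a, b: a - b,
       "max": max}

def evaluate(expr, x):
    "Evaluate an expression tree at the point x."
    if expr == "x":
        return x
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(expr):
    "Syntactic mutation: replace the top-level operator with a random one."
    op, left, right = expr
    return (random.choice(list(OPS)), left, right)

double = ("add", "x", "x")               # f(x) = 2x
mutant = mutate(double)                  # e.g. ("mul", "x", "x") = x^2
print([evaluate(double, x) for x in range(5)])
print(mutant, [evaluate(mutant, x) for x in range(5)])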

Finally, in a paper published this year by Lenat and Feigenbaum, yet
another explanation of AM's (and Eurisko's) success and failure is
given: "The ultimate limitation was not what we expected (cpu time),
or hoped for (the need to learn new representations), but rather
something at once surprising and daunting: the need to have a massive
fraction of consensus reality already in the machine."


The problem with all of these explanations is that they have not been
subjected to rigorous experimental and analytical tests, so at the
present time, we still (more than ten years after AM) do not
understand why AM worked!

I have my own pet hypothesis, which I am currently subjecting to an
experimental test. The hypothesis is this: AM succeeded because its
concept-creation operators generated a space that was dense in
interesting mathematical concepts. This hypothesis contradicts each
of the preceding ones. It claims that heuristics are not important
(i.e., a brute force search using the concept-creation operators would
be only polynomially--not exponentially--more expensive). It claims
that the internal representation of the concepts (as lisp functions)
was also unimportant (i.e., any other representation would work as
well, because mutation operators are very rarely used by AM).
Finally, it claims that additional world knowledge is irrelevant
(because it succeeds without such knowledge).

There is already some evidence in favor of this hypothesis. At CMU, a
student named Weimin Shen has developed a set of operators that can
rediscover many of AM's concepts. The operators are applied in brute
force fashion and they discover addition, doubling, halving,
subtraction, multiplication, squaring, square roots, exponentiation,
division, logarithms, and iterated exponentiation. All of these are
discovered without manipulating the internal representation of the
starting concepts.
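
Shen's actual operator set is given in the report cited below; what follows
is only a hypothetical sketch, in Python, of the flavor of such functional
transformations. Starting from the single concept successor(x) = x + 1, two
transformations applied by brute force -- iterating a function and taking
its diagonal -- already reach addition, doubling, multiplication, squaring,
and exponentiation, without ever inspecting how the starting concept is
represented internally. (The inverse concepts -- halving, subtraction,
division, logarithms -- would need an additional inversion transformation,
omitted here.)

def successor(x):
    return x + 1

def iterate(f):
    "From a one-argument concept f, build g(x, y) = f applied y times to x."
    def g(x, y):
        for _ in range(y):
            x = f(x)
        return x
    return g

def fold(f, unit):
    "From a two-argument concept f, build g(x, y) = f(x, f(x, ... f(x, unit)))."
    def g(x, y):
        acc = unit
        for _ in range(y):
            acc = f(x, acc)
        return acc
    return g

def diagonal(f):
    "From a two-argument concept, build the one-argument concept f(x, x)."
    return lambda x: f(x, x)

add    = iterate(successor)   # add(x, y)  = x + y
double = diagonal(add)        # double(x)  = 2x
mul    = fold(add, 0)         # mul(x, y)  = x * y
square = diagonal(mul)        # square(x)  = x^2
power  = fold(mul, 1)         # power(x,y) = x ** y

assert add(3, 4) == 7 and mul(3, 4) == 12 and power(2, 5) == 32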

AM is a "success" of AI in the sense that interesting and novel
behavior was exhibited. However, it is a methodological failure of
AI, because follow-up studies were not conducted to understand the causes
of the successes and failures of AM. AM is not unique in this regard.
Follow-up experimentation and analysis is critical to consolidating
our successes and extracting lessons for future research. Let's get
to work!

Tom Dietterich
Department of Computer Science
Oregon State University
Corvallis, OR 97331
tgd@cs.orst.edu
OR tgd%cs.orst.edu@relay.cs.net


References:

\item Lenat, D. B. (1980). AM: An artificial intelligence approach to
discovery in mathematics as heuristic search. In Davis, R., and Lenat,
D. B., {\it Knowledge-based systems in Artificial Intelligence}.

\item Lenat, D. B., and Brown, J. S. (1984). Why AM and EURISKO appear to work.
{\it Artificial Intelligence}, 23(3), 269--294.

\item Lenat, D. B., and Feigenbaum, E. A. (1987). On the thresholds of
knowledge. In {\it IJCAI87, The Proceedings of the Tenth
International Joint Conference on Artificial Intelligence}, Milan. Los
Altos, CA: Morgan Kaufmann.

\item Shen, W. (1987). Functional transformations in AI discovery
systems. Technical Report CMU-CS-87-117, Carnegie-Mellon University,
Department of Computer Science.

------------------------------

End of AIList Digest
********************
