AIList Digest             Monday, 7 Nov 1988      Volume 8 : Issue 121 

Philosophy:

AI as CS and the scientific epistemology of the common sense world
Lightbulbs and Related Thoughts
Re: Bringing AI back home
Computer science as a subset of artificial intelligence
Re: Limits of AI -or- The Kaleidoscope of Ideas

----------------------------------------------------------------------

Date: 31 Oct 88 2154 PST
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: AI as CS and the scientific epistemology of the common sense
world

[In reply to message sent Mon 31 Oct 1988 20:39-EST.]

Intelligence can be studied

(1) through the physiology of the brain,

(2) through psychology,

(3) through studying the tasks presented in the achievement of
goals in the common sense world.

No one of the approaches can be excluded by a priori arguments,
and I believe that all three will eventually succeed, but one
will succeed more quickly than the other two and will help mop up
the other two. I have left out sociology, because I think its
contribution will be peripheral.

AI is the third approach. It proceeds mainly in computer science
departments, and many of its methods are akin to other computer
science topics. It involves experimenting with computer programs
and sometimes hardware and rarely includes either psychological
or physiological experiments with humans or animals. It isn't
further from other computer science topics than they are from
each other and there are more and more hybrids of AI with
other CS topics all the time.

Perhaps Simon doesn't like the term AI, because his and Newell's
work involves a hybrid with psychology and has involved psychological
experiments as well as experimental computer programming. Surely
some people should pursue that mixture, which has sometimes
been fruitful, but most AI researchers stick to experimental
programming and also AI theory.

In my opinion the core of AI is the study of the common sense world
and how a system can find out how to achieve its goals. Achieving
goals in the common sense world involves a different kind of
information situation than science has had to deal with previously.
This fact causes most scientists to make mistakes in thinking about
it. Some pick an aspect of the world that permits a conventional
mathematical treatment and retreat into it. The result is that
their results often have only a metaphorical relation to intelligence.
Others demand differential equations and spend their time rejecting
approaches that don't have them.

Why does the common sense world demand a different approach? Here are
some reasons.

(1) Only partial information is available. It is partial not merely
quantitatively but also qualitatively. We don't know all the
relevant phenomena. Nevertheless, humans can often achieve goals
using this information, and there is no reason humans can't understand
the processes required to do it well enough to program them in computers.

(2) The concepts used in common sense reasoning have a qualitatively
approximate character. This is treated in my paper ``Ascribing
Mental Qualities to Machines.''

(3) The theories that can be obtained will not be fully predictive
of behavior. They will predict only when certain conditions are
met. Curiously, while many scientists demand fully predictive theories,
when they build digital hardware, they accept engineering specifications
that aren't fully predictive. For example, consider a flip-flop with
a J input, a K input and a clock input. The manufacturer specifies
what will happen if the clock is turned on for long enough and then
turned off provided exactly one of the J and K inputs remains high
during this period and the other remains low. The specifications
do not say what will happen if both are high or both are low or
if they change while the clock is turned on. The manufacturer
doesn't guarantee that all the flip-flops he sells will behave
in the same way under these conditions or that he won't change
without notice how they behave. All he guarantees is what
will happen when the flip-flop is used in accordance with the
``design rules''. Computer scientists are also quite properly
uninterested in non-standard usage. This contrasts with linear
circuit theory which in principle tells how a linear circuit will
respond to any input function of time. Newtonian and
non-relativistic quantum mechanics tell how particles respond to
arbitrary forces. Quantum field theory seems to be more picky.
Many programs have specified behavior only for inputs meeting
certain conditions, and some programming languages refrain
from specifying what will happen if certain conditions aren't
met. The implementor makes the compiler do whatever is convenient,
or may not even figure out what will happen.
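
As an illustrative sketch (not part of the original message), the spirit of
this flip-flop specification can be put into a few lines of Python: behaviour
is defined only when exactly one of J and K is held high through the clock
pulse, and nothing at all is promised otherwise. The class and names are
invented for the example.

class UnspecifiedBehavior(Exception):
    """The inputs violate the design rules; the specification says nothing."""

class JKFlipFlop:
    def __init__(self, q=0):
        self.q = q                    # current output

    def clock_pulse(self, j, k):
        # Only the cases where exactly one of J and K is held high through
        # the pulse are specified; both-high, both-low, or changing inputs
        # fall outside the design rules.
        if (j, k) == (1, 0):
            self.q = 1                # specified: output is set high
        elif (j, k) == (0, 1):
            self.q = 0                # specified: output is reset low
        else:
            raise UnspecifiedBehavior("no guarantee for J=%s, K=%s" % (j, k))
        return self.q

ff = JKFlipFlop()
print(ff.clock_pulse(1, 0))           # 1: within the design rules
print(ff.clock_pulse(0, 1))           # 0: within the design rules
# ff.clock_pulse(1, 1)                # outside the design rules: undefined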

What we can learn about the common sense world is like what is
specified about the flip-flop, only even more limited.
Therefore, some people regard the common sense world as unfair
and refuse to do science about it.

------------------------------

Date: 2 Nov 88 13:55:48 GMT
From: pitstop!sundc!rlgvax!tony@sun.com (Tony Stuart)
Subject: Lightbulbs and Related Thoughts

On the way into work this morning I was stopped at a light
near an office building. They were washing the windows using
a large crane. This led me to think about the time that a
light was out in CCI's sign and they used a crane to replace
it. I began to wonder whether they replace all the lights while
they have the crane up or just the one that is out. Maybe it
depends on how close the lights are to the end of their rated
life. This got me thinking about the lights in the vanity at
home. Two of the four have blown in the last couple of weeks.
I remarked to Anne how it was interesting that lightbulbs do
start to blow out at around the same time. This led me to
suggest that we remember to replace the blown out lightbulbs.

The point is that an external stimulus, seeing the men wash
the windows of the building, led to a problem to solve, replacing
the lights in the vanity. I have no doubt that if I had replaced
those lights already then the train of thought would have
continued until I encountered a problem that needed attention.
The mind seems optimized for problem solving and perhaps one
reason for miscellaneous ramblings is that they uncover problems.

On a similar track, I have often thought that once we find a
solution to a problem it is much more difficult to search for
another solution. Over evolutionary history it is likely that
life was sufficiently primitive that a single good solution was
sufficient. The brain might be optimized such that the first
good solution satisfies the problem-seeking mode, and to go
beyond that solution requires conscious effort. This is an
argument for not resorting to a textbook as the first line of
problem solving. The textbook is sure to give a good solution
but perhaps not the best. With the textbook solution in mind
it may be much more difficult to come up with an original
solution that is better than the textbook one. For this reason
it is best to try to solve the problem internally before going
to some external device.

There may also be some insight here into how to make computers
think. Let's say I designed my computer to follow trains of
thought and at each thought it looked for unresolved questions.
If there were no unresolved questions it would continue on to
the next linked thought. Otherwise it would look for the
solution to the problem. If the search did not turn up the
information in memory, it would result in the formation of
a question. Anne suggests that these trains of thought are
often triggered by external stimuli that a computer would
not have. She says that we live in a sea of stimuli.
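
As an illustrative sketch (not part of the original message), here is the
train-of-thought loop just described in Python, assuming a linked chain of
thoughts and a dictionary standing in for memory; the structure and names
are made up for the example.

class Thought:
    def __init__(self, content, questions=(), next_thought=None):
        self.content = content                 # what the thought is about
        self.questions = list(questions)       # unresolved questions, if any
        self.next_thought = next_thought       # link to the next thought

def follow_train(thought, memory):
    """Walk the linked thoughts.  Questions answered from memory are
    resolved; the rest are returned as newly formed questions."""
    formed = []
    while thought is not None:
        for q in thought.questions:
            if q in memory:
                print("resolved:", q, "->", memory[q])
            else:
                formed.append(q)               # no answer found: form a question
        thought = thought.next_thought         # continue on to the next thought
    return formed

memory = {"why did the bulb blow?": "it reached the end of its rated life"}
vanity = Thought("the vanity lights", ["should both bulbs be replaced?"])
crane = Thought("window washers on a crane", ["why did the bulb blow?"], vanity)
print(follow_train(crane, memory))             # unanswered questions remain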

I've often wondered about the differences between short term
and long term memory. Here's a computer model for it. Assume
that short term memory is information stored as sentences and
long term memory is information stored in data structures with
organized field name/field value/relationship links. Information
is initially stored in the sentence based short term memory.
In a background process, or when our minds are otherwise idle,
a task searches through the short term memory for data that
might resolve questions (holes) in the long term memory. (Which
is searched, I don't really know.) This information in the short
term memory is then appropriately cataloged in the long term
memory. Another task is responsible for purging sentences
from the short term memory. It could use a first-in, first-out
or, more likely, a least-frequently-used algorithm.
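
As an illustrative sketch (not part of the original message), the two-store
model above might look like this in Python: sentences for short-term memory,
(entity, field) records with holes for long-term memory, a consolidation pass,
and least-frequently-used purging. The details are assumptions made for the
example.

from collections import Counter

short_term = ["the capital of France is Paris",
              "two vanity bulbs blew out this week"]
use_counts = Counter()                    # how often each sentence was consulted

# Long-term memory: (entity, field) -> value; None marks a hole (question).
long_term = {("France", "capital"): None,
             ("bulb", "rated life"): "about 1000 hours"}

def consolidate():
    """Background task: scan short-term sentences for data that fills
    holes in long-term memory, and catalog it there."""
    for (entity, field), value in long_term.items():
        if value is not None:
            continue                      # no hole to fill for this record
        for sentence in short_term:
            use_counts[sentence] += 1
            if entity in sentence and field in sentence:
                # Crude extraction: take the sentence's last word as the value.
                long_term[(entity, field)] = sentence.split()[-1]
                break

def purge(keep=1):
    """Purge task: evict the least-frequently-used sentences."""
    short_term.sort(key=lambda s: use_counts[s], reverse=True)
    del short_term[keep:]

consolidate()
print(long_term[("France", "capital")])   # "Paris", once the hole is filled
purge()
print(short_term)                          # only the most-used sentence remains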

A side effect of this model is that information in short
term memory cannot be used unless there is a hole in the long
term memory. This leads to problems in bootstrapping the
process, but assuming there is a solution to that problem, it
also models behavior that is present in humans: the feeling that
one hears a word or phrase a lot once one knows what it means.
Another part of the side effect is
that one cannot use information that he has unless it fits.
This means that it must be discarded until the long term
memory is sufficiently developed to accept it.

--

Anthony F. Stuart, {uunet|sundc}!rlgvax!tony
CCI, 11490 Commerce Park Drive, Reston, VA 22091

------------------------------

Date: 2 Nov 88 15:54:37 GMT
From: umix!umich!itivax!dhw@uunet.UU.NET (David H. West)
Subject: Re: Bringing AI back home


In article <1776@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>
>In a previous article, Ray Allis writes:
>>If AI is to make progress toward machines with common sense, we
>>should first rectify the preposterous inverted notion that AI is
>>somehow a subset of computer science,
>Nothing preposterous at all about this. AI is about applications of
>computers, and you can't sensibly apply computers without using computer
>science.

All that this shows is that AI has a non-null intersection with CS,
not that it is a subset of it.

> Many people would be happy if AI boy scouts came down
>from their technological utopian fantasies and addressed the sensible
>problem of optimising human-computer task allocation in a humble,
>disciplined and well-focussed manner.

Many people would have been happier (in the short term) if James
Watt had stopped his useless playing with kettles and gone out and got
a real job.

>There are tasks in the world. Computers can assist some of these
>tasks, but not others. Understanding why this is the case lies at the
>heart of proper human-machine system design. The problem with hard AI is
>that it doesn't want to know that a real division between automatable
>and unautomatable tasks does exist in practice.

You seem to believe that this boundary is fixed. Well, it will be
unless people work on moving it.

> Why are so
>many AI workers so damned ignorant of the problems with
>operationalising definitions of intelligence, as borne out by nearly a
>century of psychometrics here?

There was a time when philosophers concerned themselves with
questions such as whether magnets move towards each other out
of love or against their will. Why were those who wanted instead to
measure forces so damned ignorant of the problems with the
philosophical approach? Maybe they weren't, and that's *why* they
preferred to measure forces.

>Common sense is a labelling activity
>for beliefs which are assumed to be common within a (sub)culture.

Partially.

>Such social constructs cannot have a machine embodiment, nor can any

Cannot? Why not? "Do not at present" I would accept.

>The minute words like common sense and intelligence are used, the
>relevant discipline becomes the sociology of knowledge.

*A* relevant discipline. AI is at present more concerned with the
structure and machine implementation of common sense than with its
detailed content.

-David West dhw%iti@umix.cc.umich.edu
{uunet,rutgers,ames}!umix!itivax!dhw
CDSL, Industrial Technology Institute, PO Box 1485,
Ann Arbor, MI 48106

------------------------------

Date: Wed, 2 Nov 88 14:55:01 pst
From: Ray Allis <ray@BOEING.COM>
Subject: Computer science as a subset of artificial intelligence


In <1776@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:

>In a previous article, Ray Allis writes:
>>If AI is to make progress toward machines with common sense, we
>>should first rectify the preposterous inverted notion that AI is
>>somehow a subset of computer science,
>Nothing preposterous at all about this. AI is about applications of
>computers,

I was disagreeing with that too-limited definition of AI. *Computer
science* is about applications of computers; *AI* is about the creation
of intelligent artifacts. I don't believe digital computers, or rather
physical symbol systems, can be intelligent. It's more than difficult;
it's not possible.

>> or call the research something other than "artificial intelligence".
>Is this the real thrust of your argument? Most people would agree,
>even Herb Simon doesn't like the term and says so in "Sciences of the
>Artificial".

No, "I said what I meant, and I meant what I said". The insistence that
"artificial intelligence research" is subsumed under computer science
is preposterous. That position precludes the development of intelligent
machines.

>> Computer science has nothing whatever to say about much of what we call
>> intelligent behavior, particularly common sense.
>Only sociology has anything to do with either of these, so to
>place AI within CS is to lose nothing.

Only the goal.

>Intelligence is a value judgement, not a definable entity.

"Intelligence" is not a physical thing you can touch or put in a bottle,
like water or carbon dioxide. "Intelligent" is an adjective, usually
modifying the noun "behavior", and it does describe something measurable:
a quality of behavior an organism displays in coping with its environment.
I think intelligent behavior is defined more objectively than, say, the
quality of an actor's performance in A Midsummer Night's Dream, which IS
a value judgement.

> Common sense is a labelling activity
>for beliefs which are assumed to be common within a (sub)culture.
>
>Such social constructs cannot have a machine embodiment, nor can any
>academic discipline except sociology sensibly address such woolly
>epiphenomena. I do include cognitive psychology within this exclusion,
>as no sensible cognitive psychologist would use terms like common sense
>or intelligence. The mental phenomena which are explored
>computationally by cognitive psychologists tend to be more basic and
>better defined aspects of individual behaviour. The minute words like
>common sense and intelligence are used, the relevant discipline becomes
>the sociology of knowledge.

Common sense does not depend on social consensus. I mean by common sense
those behaviors which nearly everyone acquires in the course of existence,
such as reluctance to put your hand into the fire. I contend that symbol
systems in general, digital computers in particular, and therefore computer
science, are inadequate for artifacts which "have common sense".

Formal logic is only a part of human intellectual being; computer science
is about the mechanization of that part; AI is (or should be) about the
entire intellect. Hence AI is something more ambitious than CS, and not
a subcategory. That's why I used the word "inverted".

>Gilbert Cockton, Department of Computing Science, The University, Glasgow
> gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

Ray Allis
Boeing Computer Services, Seattle, Wa.
ray@boeing.com bcsaic!ray

------------------------------

Date: 3 Nov 88 15:12:29 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Limits of AI -or- The Kaleidoscope of Ideas

In article <2211@datapg.MN.ORG> sewilco@datapg.MN.ORG writes:
> Life, and thus evolution, is merely random exceptions to entropy.

There is an emerging theory on the evolution of complex stable systems.
(See for example Ilya Prigogine's book, _Order out of Chaos_.)

The mathematical theory of fixed points, and the related system-theoretic
idea of eigenfunctions and eigenvalues suggest that stable, recurring
modes or patterns may emerge naturally from any system when "the outputs
are shorted to the inputs".

Consider, for instance, the map whose name is "The Laws of Physics and
Chemistry". Plug some atoms and molecules into this map (or
processor) and you get out atoms and molecules. By the Fixed Point
Theorem, one would expect there to exist a family of atoms and
molecules which remain untransformed by this map. And this family
could have arbitrarily complex members. DNA comes to mind. (Crystals
are another example of self-replicating, self-healing structures.)

So the "random exceptions to entropy" may not be entirely random.
They may be the eigenvalues and eigenfunctions of the system. The
Mandelbrot Set has shown us how exquisitely beautiful and complex
structures can arise out of simple recursion and feedback loops.
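
A small numerical illustration (not part of the original message) of both
points: iterating a map with its output fed back to its input until it
settles on a fixed point, and the Mandelbrot recursion as the same kind of
feedback loop. The function names are invented for the example.

import math

def fixed_point(f, x, iterations=100):
    """Iterate x := f(x); if the sequence settles down, the limit is a
    fixed point of f -- a pattern the feedback loop leaves untransformed."""
    for _ in range(iterations):
        x = f(x)
    return x

# The map x -> cos(x) has a single attracting fixed point near 0.739.
print(fixed_point(math.cos, 1.0))

# The Mandelbrot recursion z -> z^2 + c is the same kind of feedback loop;
# c belongs to the set when the orbit of 0 stays bounded.
def in_mandelbrot(c, iterations=100):
    z = 0j
    for _ in range(iterations):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(-1 + 0j))   # True: the orbit cycles between 0 and -1
print(in_mandelbrot(1 + 0j))    # False: the orbit escapes to infinity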

--Barry Kort

------------------------------

End of AIList Digest
********************
