AIList Digest            Monday, 31 Oct 1983       Volume 1 : Issue 85 

Today's Topics:
Intelligence
----------------------------------------------------------------------

Date: Fri 28 Oct 83 13:43:21-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness

From: MINSKY@MIT-OZ

That's what you get for trying to define things too much.

what do i get for trying to define what too much??

though obviously, even asking that question is trying to define
your intent too much, & i'll only get more of whatever i got for
whatever it was i got it for.

-=*=-

------------------------------

Date: 28 Oct 1983 12:02-PDT
From: ISAACSON@USC-ISI
Subject: Re: Parallelism & Consciousness


From Minsky:
That's what you get for trying to define things too much.

Coming, as it does, out of the blue, your comment appears to
negate the merits of this discussion. The net effect might
simply be to bring it to a halt. I think that it is, inadvertent
though it might be, unkind to the discussants, and unfair to the
rest of us who are listening in.

I agree. The level of confusion is not insignificant and
immediate insights are not around the corner. However, in my
opinion, we do need serious discussion of these issues. I.e.,
questions of subcognition vs. cognition; parallelism,
"autonomy", and epiphenomena; algorithmic programability vs.
autonomy at the subcognitive and cognitive levels; etc. etc.

Perhaps it would be helpful if you give us your views on some of
these issues, including your views on a good methodology for
discussing them.

-- JDI

------------------------------

Date: 30 Oct 83 13:27:11 EST (Sun)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Subject: Re: Parallelism & Consciousness


From: BUCKLEY@MIT-OZ
-- of what relevance is the issue of time-behavior of an
algorithm to the phenomenon of intelligence, i.e., can
there be in principle such a beast as a slow,
super-intelligent program?

From: RICKL%MIT-OZ@mit-mc
gracious, isn't this a bit chauvinistic? suppose that ai is
eventually successful in creating machine intelligence,
consciousness, etc. on nano-second speed machines of the
future: we poor humans, operating only at rates measured in
seconds and above, will seem incredibly slow to them. will
they engage in debate about the relevance of our time- behavior
to our intelligence? if there cannot in principle be such a
thing as a slow, super-intelligent program, how can they avoid
concluding that we are not intelligent? -=*=- rick

It seems to me that the issue isn't the 'appearance' of intelligence of
one being to another--after all, a very slow thinker may nonetheless
think very effectively and solve a problem the rest of us get nowhere
with. Rather I suggest that intelligence be regarded as effectiveness,
namely, as coping with the environment. Then real-time issues clearly
are significant.

A supposedly brilliant algorithm that 'in principle' could decide what
to do about an impending disaster, but which is destroyed by that
disaster long before it manages to grasp that there is a disaster, or
what its dimensions are, perhaps should not be called intelligent (at
least on the basis of *that* event). And if all its potential behavior
is of this sort, so that it never really gets anything settled, then it
could be looked at as really out of touch with any grasp of things,
hence not intelligent.

Now this can be looked at in numerous contexts; if for instance it is
applied to the internal ruminations of the agent, eg as it tries to
settle Fermat's Last Theorem, and if it still can't keep up with its
own physiology, ie, its ideas form and pass by faster than its
'reasoning mechanisms' can keep track of, then there too it will fail,
and I doubt we would want to say it 'really' was bright. It can't even
be said to be trying to settle Fermat's Last theorem, for it will not
be able to keep that in mind.

This is in a sense an internal issue, not one of relative speed to the
environment. But considering that the internal and external events are
all part of the same physical world, I don't see a significant
difference. If the agent *can* keep track of its own thinking, and
thereby stick to the task, and eventually settle the theorem, I think
we would call it bright indeed, at least in that domain, although
perhaps a moron in other matters (not even able to formulate questions
about them).

------------------------------

Date: Sun 30 Oct 83 16:59:12-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness

[...]

From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
It seems to me that the issue isn't the 'appearance' of intelligence of
one being to another....Rather I suggest that intelligence be regarded
as effectiveness, namely, as coping with the environment....

From this & other recent traffic on the net, the question we are really
discussing seems to be: ``can an entity be said to be intelligent in and
of itself, or can an entity only be said to be intelligent relative to some
world?''. I don't think I believe in "pure, abstract intelligence, divorced
from the world". However, a consequence of the second position seems to
be that there should be possible worlds in which we would consider humans
to be un-intelligent, and I can't readily think of any (can anyone else?).

Leaving that question as too hard (at least for now), another question we
have been chasing around is: ``can intelligence be regarded as survivability,
(or more generally as coping with an external environment)?''. In the strong
form this position equates the two, and this position seems to be too
strong. Amoebas cope quite well and have survived for unimaginably longer
than we humans, but are generally acknowledged to be un-intelligent (if
anyone cares to dispute this, please do). Survivability and coping with
the environment, alone, therefore fail to adequately capture our intuitions
of intelligence.
-=*=- rick

------------------------------

Date: 30 Oct 1983 18:46:48 EST (Sunday)
From: Dave Mankins <dm@BBN-UNIX>
Subject: Re: Intelligence and Competition

By the survivability/adaptability criteria the cockroach must be
one of the most intelligent species on earth. There's obviously
something wrong with those criteria.

------------------------------

Date: Fri 28 Oct 83 14:19:36-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Definition of Intelligence

I like the idea that the intelligence of an organism should be
measured relative to its goals (which usually include survival, but
not in the case of "smart" bombs and kamikaze pilots). I don't think
that goal-satisfaction criteria can be used to establish the "relative
intelligence" of organisms with very different goals. Can a fruit fly
be more intelligent than I am, no matter how well it satisfies its
goals? Can a rock be intelligent if its goals are sufficiently
limited?

To illustrate this in another domain, let us consider "strength". A
large bulldozer is stronger than a small one because it can apply more
brute force to any job that a bulldozer is expected to do. Can we
say, though, that a bulldozer is "stronger" than a pile driver, or
vice versa?

Put another way: If scissors > paper > rock > scissors ..., does it
make any sense to ask which is "best"? I think that this is the
problem we run into when we try to define intelligence in terms of
goals. This is not to say that we can define it to be independent of
goals, but goal satisfaction is not sufficient.
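The scissors/paper/rock point can be made precise: a cyclic dominance relation admits no consistent ranking, so no item can be "best". A minimal sketch (the relation and the cycle-walk below are illustrative, not from the digest):

```python
# Hypothetical sketch: a cyclic "beats" relation admits no consistent ranking.
beats = {
    "scissors": "paper",
    "paper": "rock",
    "rock": "scissors",
}

def has_consistent_ranking(relation):
    """Return True if the items can be totally ordered so that each item
    outranks everything it beats, i.e., the relation contains no cycle."""
    for start in relation:
        seen = set()
        node = start
        while node in relation:
            if node in seen:
                return False  # cycle found: no item can be "best"
            seen.add(node)
            node = relation[node]
    return True

print(has_consistent_ranking(beats))  # False: no overall "best" exists
```

An acyclic relation (say, bulldozer beats dirt) would rank fine; it is the cycle that defeats any goal-based total ordering.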

Instead, I would define intelligence in terms of adaptability or
learning capability in the pursuit of goals. An organism with hard-
wired responses to its environment (e.g. a rock, a fruit fly, MACSYMA)
is not intelligent because it does not adapt. I, on the other hand,
can be considered intelligent even if I do not achieve my goals as
long as I adapt to my environment and learn from it in ways that would
normally enhance my chances of success.
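The hard-wired/adaptive distinction can be illustrated with a toy sketch (agents, rewards, and the environment below are all invented for illustration): one agent always emits the same response, the other shifts its value estimates toward observed reward and so changes behavior when the environment rewards something else.

```python
# Toy illustration (all names invented): a hard-wired responder vs. a
# learner that adapts its responses from feedback, per the definition above.

class HardWired:
    """Fixed response to the environment; never adapts."""
    def act(self):
        return "A"
    def learn(self, action, reward):
        pass  # does not adapt

class Learner:
    """Keeps a value estimate per action; shifts it toward observed reward."""
    def __init__(self):
        self.value = {"A": 1.0, "B": 1.0}  # optimistic start so both get tried
    def act(self):
        return max(self.value, key=self.value.get)
    def learn(self, action, reward):
        self.value[action] += 0.5 * (reward - self.value[action])

def environment(action):
    return 1.0 if action == "B" else 0.0  # only "B" pays off

results = {}
for agent in (HardWired(), Learner()):
    total = 0.0
    for _ in range(20):
        a = agent.act()
        r = environment(a)
        agent.learn(a, r)
        total += r
    results[type(agent).__name__] = total

print(results)  # the learner accumulates far more reward
```

By Laws's criterion, only the second agent counts as intelligent, and it would still count even if the environment later changed to reward "A", so long as it re-adapted.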

Whether speed of response must be included as a measure of
intelligence depends on the goal, but I would say that, in general,
rapid adaptation does indicate greater intelligence than the same
response produced slowly. Multiple choice aptitude tests, however,
exercise such limited mental capabilities that a score of correct
answers per minute is more a test of current knowledge than of ability
to learn and adapt within the testing period. Knowledge relative to
age (IQ) is a useful measure of learning ability and thus of
intelligence, but cannot be used for comparing different species. I
prefer unlimited-time "power" tests for measuring both competence and
intelligence.

The Turing test imposes a single goal on two organisms, namely the
goal of convincing an observer at the other end of the tty that he/it is
the true human. This will clearly only work for organisms capable
of typing at human speed and capable of accepting such a goal. These
conditions imply that the organism must have a knowledge of human
psychology and capabilities, or at least a belief (probably incorrect)
that it can "fake" them. Given such a restricted situation, the
nonhuman organism is to be judged intelligent if it can appropriately
modify its own behavior in response to questioning at least as well as
the human can. (I would claim that a nonadapting organism hasn't a
chance of passing the test, and that this is just what the observer
will be looking for.)

I do not believe that a single test can be devised which can determine
the relative intelligences of arbitrary organisms, but the public
wants such a test. What shall we give them? I would suggest the
following procedure:

For two candidate organisms, determine a goal that both are capable
of accepting and that we consider related to intelligence. For an
interesting test, the goal must be such that neither organism is
specially adapted or maladapted for achieving it. The goal might be
absolute (e.g., learn 100 nonsense syllables) or relative (e.g.,
double your vocabulary). If no such goal can be found, the
organisms cannot be relatively ranked. If a goal is found, we can rank them
along the dimension of the indicated behavior and we can infer a
similar ranking for related behaviors (e.g., verbal ability). The
actual testing for learning ability is relatively simple.
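The procedure above can be sketched mechanically (a hypothetical illustration: the candidate organisms, goal sets, and scores are all invented; only the find-a-shared-goal-then-rank logic comes from the text):

```python
# Hypothetical sketch of the ranking procedure above: find a goal both
# candidates can accept, then rank them by measured learning on that goal.

def shared_goals(a_goals, b_goals):
    """Goals both candidates are capable of accepting."""
    return sorted(a_goals & b_goals)

def rank(candidates, goal, measure):
    """Order candidates by measured performance on the shared goal."""
    return sorted(candidates, key=lambda c: measure[c][goal], reverse=True)

goals_human = {"learn 100 nonsense syllables", "double your vocabulary"}
goals_chimp = {"learn 100 nonsense syllables"}

common = shared_goals(goals_human, goals_chimp)
if not common:
    print("candidates cannot be ranked")
else:
    goal = common[0]
    # invented scores: syllables retained after a fixed study period
    measure = {"human": {goal: 92}, "chimp": {goal: 17}}
    print(rank(["human", "chimp"], goal, measure))
```

The interesting-test proviso lives outside the sketch: one must also check that neither candidate is specially adapted or maladapted for the chosen goal.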

How can we test a computer for intelligence? Unfortunately, a computer
can be given a wide variety of sensors and effectors and can be made
to accept almost any goal. We must test it for human-level adaptability
in using all of these. If it cannot equal human ability on nearly all
measurable scales (e.g., game playing, verbal ability, numerical
ability, learning new perceptual and motor skills, etc.), it cannot
be considered intelligent in the human sense. I know that this is
exceedingly strict, but it is the same test that I would apply to
decide whether a child, idiot savant, or other person were intelligent.
On the other hand, if I could not match the computer's numerical and
memory capabilities, it has the right to judge me unintelligent by
computer standards.

The intelligence of a particular computer program, however, should
be judged by much less stringent standards. I do not expect a
symbolic algebra program to learn to whistle Dixie. If it can
learn, without being programmed, a new form of integral faster
than I can, or if it can find a better solution than I can in
any length of time, then I will consider it an intelligent symbolic
algebra program. Similar criteria apply to any other AI program.

I have left open the question of how to measure adaptability,
relative importance of differing goals, parallel satisfaction of
multiple goals, etc. I have also not discussed creativity, which
involves autonomous creation of new goals. Have I missed anything,
though, in the basic concept of intelligence?

-- Ken Laws

------------------------------

Date: 30 Oct 1983 1456-PST
From: Jay <JAY@USC-ECLC>
Subject: Re: Parallelism & Consciousness

From: RICKL%MIT-OZ@MIT-MC.ARPA

...
the question we are really discussing seems to be: ``can an entity be
said to be intelligent in and of itself, or can an entity only be said
to be intelligent relative to some world?''. I don't think I believe
in "pure, abstract intelligence, divorced from the world".
...
another question we have been chasing around is: ``can intelligence be
regarded as survivability, (or more generally as coping with an
external environment)?''. [...]

I believe intelligence to be the ability to cope with CHANGES in the
environment. Take desert tortoises: although they are quite young
compared to amoebae, they have been living in the desert some
thousands, if not millions, of years. Does this mean they are
intelligent? NO! Put a freeway through their desert and the tortoises
are soon dying. Increase the rainfall and they may become unable to
compete with the rabbits (which will take full advantage of the
increase in vegetation and produce an increase in rabbit-ation). The
ability to cope with a CHANGE in the environment marks intelligence.
All a tortoise need do is not cross a freeway, or kill baby rabbits,
and then they could begin to claim intelligence. A similar argument
could be made against intelligent amoebae.

A possible problem with this view is that biospheres can be counted
intelligent, in the desert an increase in rainfall is handled by an
increase in vegetation, and then in herbivores (rabbits) and then an
increase in carnivores (coyotes). The end result is not the end of a
biosphere, but the change of a biosphere. The biosphere has
successfully coped with a change in its environment. Even more
ludicrous, an argument could be made for an intelligent planet, or
solar system, or even galaxy.

Notice, an organism that does not change when its environment
changes, perhaps because it does not need to, has not shown
intelligence. This is, of course, not to say that that particular
organism is un-intelligent. Were the world to become unable to
produce rainbows, people would change little, if at all.

My behavioralism is showing,
j'

------------------------------

Date: Sun, 30 Oct 1983 18:11 EST
From: JBA%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness

From: RICKL%MIT-OZ at MIT-MC.ARPA
However, a consequence of the second position seems to
be that there should be possible worlds in which we would consider humans
to be un-intelligent, and I can't readily think of any (can anyone else?).

Read the Heinlein novel entitled (I think) "Have Spacesuit, Will
Travel." Somewhere in there a race tries to get permission to
kill humans wantonly, arguing that they're basically stupid. Of
course, a couple of adolescent humans who happen to be in the neighborhood
save the day by proving that they're smart. (I read this thing a long
time ago, so I may have the story and/or title a little wrong.)

Jonathan

[Another story involves huge alien "energy beings" taking over the earth.
They destroy all human power sources, but allow the humans to live as
"cockroaches" in their energy cities. One human manages to convince an
alien that he is intelligent, so the aliens immediately begin a purge.
Who wants intelligent cockroaches? -- KIL]

------------------------------

Date: Sun 30 Oct 83 15:41:18-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Intelligence and Competition

From: RICKL%MIT-OZ@MIT-MC.ARPA
I don't think I believe in "pure, abstract intelligence, divorced
from the world". However, a consequence of the second position seems to
be that there should be possible worlds in which we would consider humans
to be un-intelligent, and I can't readily think of any (can anyone else?).

From: Jay <JAY@USC-ECLC>
...Take desert tortoises, [...]

Combining these two comments, I came up with this:

...Take American indians, although they are quite young compared
to amoeba, they have been living in the desert some thousands of years.
Does this mean they are intelligent? NO! Put a freeway (or some barbed
wire) through their desert and they are soon dying. Increase cultural
competition and they may be unable to compete with the white man (which
will take full advantage of their lack of guns and produce an
increase in white-ation). The ability to cope with CHANGE in the
environment marks intelligence.

I think that the stress on "adaptability" makes for some rather strange
candidates for intelligence. The indians were developing a cooperative
relationship with their environment, rather than a competitive one; I cannot
help but think that our cultural stress on competition has biased us
towards competitive definitions of intelligence.

Survivability has many facets, and competition is only one of them, and
may not even be a very large one. Perhaps before judging intelligence by
how systems cope with change, we should ask how they
cope with stasis. While it is popular to think about how the great thinkers
of the past arose out of great trials, I think that more of modern knowledge
came from times of relative calm, when there was enough surplus to offer
a group of thinkers time to ponder.

David

------------------------------

End of AIList Digest
********************
