AIList Digest Thursday, 20 Oct 1988 Volume 8 : Issue 108
Intelligence / Consciousness Test for Machines (5 Messages)
----------------------------------------------------------------------
Date: 11 Oct 88 19:19:01 GMT
From: hp-sde!hpcuhb!hpindda!kmont@hplabs.hp.com (Kevin Montgomery)
Subject: Re: Intelligence / Consciousness Test for Machines
(Neural-Nets)???
In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> When can a machine be considered a conscious entity?
May I suggest that this discussion be moved to talk.philosophy?
While it has many implications for AI (as do most of the more
philosophical arguments which take place in comp.ai), it has a
broader scope and should have a broader reader base. There are
a number of implications of a definition of consciousness: the
rights and responsibilities of anything deemed conscious, and whether
mere consciousness is a sufficient criterion for personhood. (If
non-biological entities can be deemed conscious, if consciousness is
sufficient for personhood, and if constitutional rights are bestowed
upon persons "born" in a country, then such entities have all the
rights of the constitution in this country.)
The implication of this example would be that if machines (or animals,
or any non-human or non-biological entity) have rights, then one may
be arrested for murder if one should halt the "life process" of
such an entity either by killing an animal or by removing power
from a machine.
Moreover, the question of when humans are conscious (and thus are
arguably persons) has implications for abortion, euthanasia, human
rights, and other areas.
For these reasons, I suggest we drop over to talk.philosophy (VERY
low traffic over there, anyway), resolve these questions (if possible,
but doubtful), and post a response to the interested newsgroups (comp.ai,
talk.abortion, etc.).
Rather than attacking all questions at once and getting quite confused
in the process, I suggest that we start with the question of whether
consciousness is a necessary and sufficient criterion for personhood.
In other words, in order to have rights (such as the right to life),
does something have to have consciousness? Perhaps we should start
with definitions of consciousness and personhood, and revise these
as we see fit (would someone with a reputable dictionary handy post
some there?).
Note that there are implications about whether things such as
anencephalic babies (born with only the medulla; no higher brain
areas), commissurotomy (split-brain) patients, and even people we
consider to be knocked unconscious (or even sleeping!) have personhood
(and therefore rights).
kevin
------------------------------
Date: 13 Oct 88 12:20:12 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Intelligence / Consciousness Test for Machines
(Neural-Nets)???
The Turtling Test
Barry Kort
(With apologies to Douglas Hofstadter)
Achilles: Good morning Mr. T!
Tortoise: Good day Achilles. What a wonderful day for
touring the computer museum.
Achilles: Yes, it's quite amazing to realize how far our
computer technology has come since the days of von
Neumann and Turing.
Tortoise: It's interesting that you mention Alan Turing, for
I've been doing some biographical research on him.
He is a most interesting and enigmatic character.
Achilles: Biographical research? That's a switch. Usually
people like to talk about his Turing Test, in
which a human judge tries to distinguish which of
two individuals is the human and which is the
computer, based on their answers to questions
posed by the judge over a teletype link. To tell
you the truth, I'm getting a little tired of
hearing people talk about it so much.
Tortoise: You have a fine memory, my friend, but I'm afraid
you'll be disappointed when I tell you that the
Turing Test does come up in my work.
Achilles: In that case, don't tell me.
Tortoise: Fair enough. Perhaps you would be interested to
know what Alan Turing would have done next if he
hadn't died so tragically in his prime.
Achilles: That's an interesting idea, but of course it's
impossible to say.
Tortoise: If you mean we'll never know for sure, I would
certainly agree. But I have just come up with a
way to answer the question anyway.
Achilles: Really?
Tortoise: Really. You see, I have just constructed a model
of Alan Turing's brain, based on a careful
examination of everything he read, saw, did, or
wrote about during his tragic career.
Achilles: Everything?
Tortoise: Well, not quite everything -- just the things I
know about from the archives and from his notes
and effects. That's why it's just a model and not
an exact duplicate of his brain. It would be a
perfect model if I could discover everything he
ever saw, learned, or discovered.
Achilles: Amazing!
Tortoise: Since Turing had a very logical mind, I merely
start with his accumulated knowledge and reason
logically to what he would have investigated next.
Interestingly, this leads to a possible hypothesis
explaining why Turing committed suicide.
Achilles: Fantastic! Let's hear your theory.
Tortoise: A logical next step after devising the Turing Test
would be to give the formal definition of a Turing
Machine to computer `A' (which, since it's a
computer, happens to be a Turing Machine itself)
and ask it to decide if another system (call it
machine `B') is a Turing Machine.
Achilles: I don't get it. What is machine `A' supposed to
do to decide the question?
Tortoise: Why, it merely devises a test which only a Turing
Machine could pass, such as a computation that a
lesser beast would choke on. Then it administers
the Test to machine `B' to see how it handles the
challenge.
Achilles: Are you sure that a Turing Machine knows how to
devise such a test in the first place?
Tortoise: That's a good question. I suppose it depends on
how the definition of a Turing Machine is stated.
Clearly, a good definition would be one which
states or implies a practical way to decide if an
arbitrary hunk of matter possesses the property of
being a Turing Machine. In this case, it's safe
to assume that the problem was well-posed, meaning
that the definition was sufficiently complete.
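A minimal sketch, in Python, of the kind of discriminating test the
Tortoise describes: membership in the language {a^n b^n c^n : n >= 1}
is beyond any finite or pushdown automaton, so a "lesser beast" will
choke on it. The harness and machine_b below are assumptions invented
for illustration, and passing only rules out weaker machines; no finite
battery of queries can positively prove full Turing equivalence.

    import random

    def is_anbncn(s):
        """Reference decider: True iff s = a^n b^n c^n for some n >= 1."""
        n = len(s) // 3
        return n >= 1 and s == "a" * n + "b" * n + "c" * n

    def administer_test(machine_b, trials=200):
        """Quiz machine_b, a callable str -> bool standing in for the
        system under test, on members and near-misses of the language."""
        for _ in range(trials):
            n = random.randint(1, 30)
            probe = "a" * n + "b" * n + "c" * random.choice([n, n + 1])
            if machine_b(probe) != is_anbncn(probe):
                return False  # choked on a membership query
        return True  # consistent, so far, with a Turing Machine

Machine `A' would simply call administer_test with machine `B's
decision procedure plugged in as machine_b.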
Achilles: So what happened next?
Tortoise: You mean what does my model of Turing's brain
suggest as the next logical step?
Achilles: Of course, Mr. T. I quite forgot what level we
were operating on.
Tortoise: Next, Machine `A' would be asked if Machine `A'
itself fit the definition of a Turing Machine!
Achilles: Wow! You mean you can ask a machine to examine
its own makeup?
Tortoise: Why not? In fact, many modern computers have
built-in self-diagnostic systems. Why can't a
computer devise a diagnostic program to see what
kind of computer it is? As long as it's given the
definition of a Turing Machine, it can administer
the test to itself and see if it passes.
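Continuing the sketch above, administering the test to oneself is just
a matter of plugging one's own decision procedure into the same harness
(reusing is_anbncn and administer_test from the earlier sketch):

    # Machine `A' points the harness at its own decider.
    if administer_test(is_anbncn):
        print("I appear to pass my own Turing-Machine test.")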
Achilles: Holy Holism! Computers can become self-aware of
what they are?!
Tortoise: That would seem to be the case.
Achilles: What happens next?
Tortoise: You tell me.
Achilles: The Turing Machine tries the Turing Test on a
human.
Tortoise: Very good. And what is the outcome?
Achilles: The human passes?
Tortoise: Right!
Achilles: So Alan Turing concludes that he's nothing more
than a Turing Machine, which makes him so
depressed he eventually commits suicide.
Tortoise: Maybe.
Achilles: What else could there be?
Tortoise: Let's go back to your last conclusion. You said,
"Turing concludes that he's nothing more than a
Turing Machine."
Achilles: I don't follow your point.
Tortoise: Suppose Turing wanted to prove conclusively that he
was something more than "just a Turing Machine."
Achilles: I see. He had a Turing Machine in him, but he
wanted to know what else he was that was more than
just a machine.
Tortoise: Right. So he searched for some way to discover
how he differed from a machine in an important
way.
Achilles: And he couldn't discover any way?
Tortoise: Not necessarily. He may have known of several
ways. For example, he could have tried to fall in
love.
Achilles: Why, falling in love is the easiest thing in the
world.
Tortoise: Not if you try to do it. Then it's impossible!
Achilles: I see your point.
Tortoise: In any event, there is no evidence that Turing
ever fell in love, even though he must have known
it was possible. Maybe he didn't know that one
shouldn't try so hard.
Achilles: So he committed suicide in despair?
Tortoise: Maybe.
Achilles: What else could there be?
Tortoise: The last possibility that comes to mind is that
Turing suspected there was something he was
overlooking.
Achilles: And what is that?
Tortoise: Could a Turing Machine discover the properties of
a Turing Machine without being told?
Achilles: Gee, I don't know. But it could discover the
properties of another machine that it could do
experiments on.
Tortoise: Would it ever think to do such experiments on
itself?
Achilles: I don't know. Does it even know what the word
"itself" points to?
Tortoise: Who would have given it the idea of "self"?
Achilles: I don't know. It reminds me of Narcissus
discovering his reflection in a pool of water and
falling in love with himself.
Tortoise: Well, I haven't finished my research yet, but I
suspect that a Turing Machine, without outside
assistance, could not discover the complete
definition of itself, nor would it think to ask
itself the question, "Am I a Turing Machine?" if
it were simply given the definition of one as a
mathematical abstraction.
Achilles: In other words, if Alan Turing did ask himself the
question, "Am I (Alan Turing) a Turing Machine?"
the very act of posing the question proves he
isn't one!
Tortoise: That's my conjecture.
Achilles: So he committed suicide to prove he wasn't one,
because he didn't realize that he already had all
the evidence he needed to prove that he was
intellectually more complex than a mere Turing
Machine.
Tortoise: Perhaps.
Achilles: Well, I would be most interested to discover the
final answer when you complete your research on
this most interesting question.
Tortoise: My friend, if we live long enough, we're bound to
find the answer.
Achilles: Good day Mr. T!
Tortoise: Good day Achilles.
------------------------------
Date: 13 Oct 88 18:43:22 GMT
From: dewey.soe.berkeley.edu!mkent@ucbvax.berkeley.edu (Marty Kent)
Subject: Re: Intelligence / Consciousness Test for Machines
(Neural-Nets)???
In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> When can a machine be considered a conscious entity?
I think an important point here is exactly the idea that a machine (or
other entity) is generally -considered- to be conscious or not. In other
words, this judgement shouldn't be expected to reflect some deep -truth-
about the entity involved (as if it REALLY is or isn't conscious). It's
more a matter of the -usefulness- of the judgement: what does it buy you
to consider an entity conscious...
So a machine can be considered (by -you-) conscious any time
1) you yourself find it helpful to think this way, and
2) you're not aware of anything that violates this judgement.
If you really want to consider entities conscious, you can come up with a
workable definition of consciousness that'll include most anything (or, at
least, exclude almost nothing). If you're really resistant to the idea,
you can keep pushing up the requirements until nothing and no one passes
your test.
Chief Dan George said "The human beings [his own tribe, of course :-)]
think -everything- is alive: earth, grass, trees, stones. The white man
thinks everything is dead. If things keep trying to act alive, the white
man will rub them out."
Marty Kent Sixth Sense Research and Development
415/642 0288 415/548 9129
MKent@dewey.soe.berkeley.edu
{uwvax, decvax, ihnp4}!ucbvax!mkent%dewey.soe.berkeley.edu
Kent's heuristic: Look for it first where you'd most like to find it.
------------------------------
Date: 14 Oct 88 13:48:51 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Definition of thought
From the HyperCard stack, "Semantic Network"...
* * *
Thinking is a rational form of information processing which
reduces the entropy or uncertainty of a knowledge base,
generates solutions to outstanding problems, and conceives
goal-oriented courses of action.
Antonym: See "Worrying"
* * *
Worrying is an emotional form of information processing which
fails to reduce the entropy or uncertainty of a knowledge base,
fails to generate solutions to outstanding problems, or fails
to conceive goal-achieving courses of action.
Antonym: See "Thinking"
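A minimal numeric sketch of what "reduces the entropy of a knowledge
base" could mean here, assuming the usual Shannon measure; the
hypothesis distributions below are invented for the example.

    import math

    def entropy(p):
        """Shannon entropy, in bits, of a discrete distribution."""
        return -sum(q * math.log2(q) for q in p if q > 0)

    # Four equally plausible hypotheses: maximum uncertainty.
    before = [0.25, 0.25, 0.25, 0.25]
    # After some productive thinking, the evidence favors one of them.
    after = [0.85, 0.05, 0.05, 0.05]

    print(f"H before = {entropy(before):.3f} bits")  # 2.000
    print(f"H after  = {entropy(after):.3f} bits")   # ~0.848

    # On the definitions above, thinking moves the knowledge base from
    # the first distribution toward the second; worrying leaves H as is.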
--Barry Kort
------------------------------
Date: 18 Oct 88 06:38:37 GMT
From: leah!gbn474@bingvaxu.cc.binghamton.edu (Gregory Newby)
Subject: Re: Intelligence / Consciousness Test for Machines
(Neural-Nets)???
(* sorry about typos/unreadability: my /terminfo/regent100 file
is rapidly approaching maximum entropy)
In article <3430002@hpindda.HP.COM>, kmont@hpindda.HP.COM (Kevin Montgomery)
writes:
> In article <1141@usfvax2.EDU>, mician@usfvax2.BITNET writes:
> > When can a machine be considered a conscious entity?
>
> May I suggest that this discussion be moved to talk.philosophy?
>
> While it has many implications to AI (as do most of the more
> philosophical arguments which take place in comp.ai), it has a
> broader scope and should have a broader reader base.
I would like to see this discussion carried through on comp.ai.
It seems to me that these issues are often not considered by scientists
working in AI, but should be. And it may be more useful to
take an "operational" approach in comp.ai, rather than a philosophical
or metaphysical approach in talk.philosophy.
This topic has centered on the definition of consciousness, or
the testing of consciousness.
Turing (_Mind_, 1950) said: "The only way to know that a machine is
thinking is to be that machine and feel oneself thinking." (paraphrase)
A better way of thinking about consciousness may be to consider
_self_ consciousness. That is, is the entity in question capable
of considering its own self?
Traditional approaches to defining "intelligent behaviour" are
PERFORMANCE based. The Turing test asks a machine to *simulate* a human.
(As an aside: how could a machine, which has none of the experience
of a human, be expected to act as one? Perhaps someone could somehow
'hard-code' all of a human's experience into some computer system,
but who would call that intelligence?)
Hofstadter (_Goedel, Escher, Bach_, p. 24) gives a list of functions
as criteria for intelligent behaviour, many of which today's smart
expert systems can perform, but they certainly aren't intelligent!
If a machine is to be considered "intelligent" or "conscious,"
no test will suffice. It will be forced to make an argument on its
own behalf.
This argument must begin, "I am intelligent"
(or, "I am conscious" --means the same thing, here)
The self concept has not, to my knowledge, been treated in the AI
literature. (My thesis, "A self-concept based approach to artificial
intelligence, with a case study of the Galileo(tm) computer system,"
SUNY Albany, dealt with it, but I'm a social scientist.)
As Mead (see, for instance, _Social Psychology_) suggests, the
difference between lower animals and man is twofold:
1) the self concept: man may consider the self as an object, separate
from other objects and in relation to the environment.
2) the generalized other: man is able to consider the self as
seen by other selves.
The first one's relatively easy. The second must be learned through
social interaction.
So, (if anyone's still reading)
What kind of definition of intelligence are we talking about here?
I would bet that for any performance criteria you can give me, if I
gave you a machine that could do it, the machine would not be considered
intelligent without also exhibiting a self-concept.
'Nuff said.
--newbs
(gbnewby@rodan.acs.syr.edu
 gbn474@leah.albany.edu)
------------------------------
End of AIList Digest
********************