AIList Digest Friday, 7 Nov 1986 Volume 4 : Issue 254
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 28 Oct 86 19:54:22 GMT
From: fluke!ssc-vax!bcsaic!michaelm@beaver.cs.washington.edu
(michael maxwell)
Subject: Re: Searle, Turing, Symbols, Categories
In article <10@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>michaelm@bcsaic.UUCP (me) wrote:
>
>> As an interesting thought experiment, suppose a Turing test were done
>> with a robot made to look like a human, and a human being who didn't
>> speak English-- both over a CCTV, say, so you couldn't touch them to
>> see which one was soft, etc. What would the robot have to do in order
>> to pass itself off as human?
>
>...We certainly have no problem in principle with
>foreign speakers (the remarkable linguist, polyglot and bible-translator
>Kenneth Pike has a "magic show" in which, after less than an hour of "turing"
>interactions with a speaker of any of the [shrinking] number of languages he
>doesn't yet know, they are babbling mutually intelligibly before your very
>eyes), although most of us may have some problems in practice with such a
>feat, at least, without practice.
Yes, you can do (I have done) such "magic shows" in which you begin to learn a
language using just gestures + what you pick up of the language as you go
along. It helps to have some training in linguistics, particularly field
methods. The Summer Institute of Linguistics (of which Pike is President
Emeritus) gives such classes. After one semester you too can give a magic
show!
I guess what I had in mind for the revised Turing test was not using language
at all--maybe I should have eliminated the sound link (and writing). What
in the way people behave (facial expressions, body language, etc.) would cue
us to the idea that one is a human and the other a robot? What if you showed
pictures to the examinees--perhaps beautiful scenes, and revolting ones? This
is more a test for emotions than for mind (Mr. Spock would probably fail).
But I think that a lot of what we think of as human is tied up in this
nonverbal/emotional level.
BTW, I doubt whether the number of languages Pike doesn't know is shrinking because
of these monolingual demonstrations (aka "magic shows") he's doing. After the
tenth language, you tend to forget what the second or third language was--
much less what you learned!
--
Mike Maxwell
Boeing Advanced Technology Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 30 Oct 86 00:11:29 GMT
From: mnetor!utzoo!utcsri!utegc!utai!me@seismo.css.gov
Subject: Re: Searle, Turing, Symbols, Categories
In article <1@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>In reply to a prior iteration D. Simon writes:
>
>> I fail to see what [your "Total Turing Test"] has to do with
>> the Turing test as originally conceived, which involved measuring
>> up AI systems against observers' impressions, rather than against
>> objective standards... Moreover, you haven't said anything concrete
>> about what this test might look like.
>
>How about this for a first approximation: We already know, roughly
>speaking, what human beings are able to "do" -- their total cognitive
>performance capacity: They can recognize, manipulate, sort, identify and
>describe the objects in their environment and they can respond and reply
>appropriately to descriptions. Get a robot to do that. When you think
>he can do everything you know people can do formally, see whether
>people can tell him apart from people informally.
>
"respond and reply appropriately to descriptions". Very nice. Should be a
piece of cake to formalize--especially once you've formalized recognition,
manipulation, identification, and description (and, let's face it, any dumb
old computer can sort). This is precisely what I was wondering when I asked
you what this total Turing test looks like. Apparently, you haven't the
foggiest idea, except that it would test roughly the same things that the
old-fashioned, informal, does-it-look-smart-or-doesn't-it Turing test checks.
In fact, none of the criteria you have described above seems definable in any
sense other than by reference to standard Turing test results ("gee, it sure
classified THAT element the way I would've!"). And if you WERE to define the
entire spectrum of human behaviour in an objective fashion ("rule 1:
answering, 'splunge!' to any question is hereby defined as an 'appropriate
reply'"), how would you determine whether the objective definition is useful?
Why, build a robot embodying it, and see if people consider it intelligent, of
course! The illusion of a "total" Turing test, distinct from the
old-fashioned, subjective variety, thus vanishes in a puff of empiricism.
And forget the well-that's-the-way-Science-does-it argument. It won't wash
--see below.
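To put the point in code (a toy sketch in present-day Python, purely
illustrative -- the function names and the "rule 1" implementation are my
inventions, not anything Harnad has specified):

    def sort_elements(xs):
        # Sorting has an objective spec; any dumb old computer can
        # check its own answer against it.
        return sorted(xs)

    def appropriate_reply(question):
        # "Rule 1": answering 'splunge!' to any question is hereby
        # defined as an 'appropriate reply'.  Trivially implementable
        # -- and trivially vacuous.
        return "splunge!"

The only way to discover that the first function is useful and the second
is not is to put their outputs in front of a human judge -- that is, to run
the old informal Turing test all over again.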
>> I believe that people in general dodge the "other minds" problem
>> simply by accepting as a convention that human beings are by
>> definition intelligent.
>
>That's an artful dodge indeed. And do you think animals also accept such
>conventions about one another? Philosophers, at least, seem to
>have noticed that there's a bit of a problem there. Looking human
>certainly gives us the prima facie benefit of the doubt in many cases,
>but so far nature has spared us having to contend with any really
>artful imposters. Wait till the robots begin giving our lax informal
>turing-testing a run for its money.
>
I haven't a clue whether animals think, or whether you think, for that matter.
This is precisely my point. I don't believe we humans have EVER solved the
"other minds" problem, or have EVER used the Turing test, even to try to
resolve the question of whether there exist "other minds". The fact that you
would like us to have done so, thus giving you a justification for the use of
(the informal part of) the Turing test (and the subsequent implicit basing of
the formal part on the informal part--see above), doesn't make it so.
This is where your scientific-empirical model for developing the "total"
Turing test out of the original falls down. Let's examine the development of
a typical scientific concept: You have some rough, intuitive observations of
phenomena (gravity, stars, skin). You take some objects whose properties
you believe you understand (rocks, telescopes, microscopes), let them interact
with your vaguely observed phenomenon, and draw more rigorous conclusions based
on the recorded results of these experimental interactions.
Now, let's examine the Turing test in that light: we take possibly-intelligent
robot R, whose properties are fairly well understood, and sit it in front of
person P, whose properties are something of a cipher to us. We then have them
interact, and get a reading off person P (such as, "yup, shore is smart", or,
"nope, dumb as a tree"). Now, what properties are being scientifically
investigated here? They can't have anything to do with robot R--we assume that
R's designer, Dr. Rstein, already has a fairly good idea what R is about.
Rather, it appears as though you are discerning those attributes of people
which relate to their judgment of intelligence in other objects. Of course, it
might well turn out that something productive comes out of this, but it's also
quite possible (and I conjecture that it's actually quite likely) that what you
get out of this is some scientific law such as, "anything which is physically
indistinguishable from a human being and can mutter something that sounds like
person P's language is intelligent; anything else is generally dumb, but
possibly intelligent, depending on the decoration of the room and the drug
content of P's bloodstream at the time of the test". In short, my worries
about the context-dependence and subjective quality of the results have not
disappeared in a puff of empiricism; they loom as large as ever.
>
>> WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?... Please
>> forgive my impertinent questions, but I haven't read your
>> articles, and I'm not exactly clear about what this "total"
>> Turing test entails.
>
>Try reading the articles.
>
Well, not only did I consider this pretty snide, but when I sent you mail
privately, asking politely where I can find the articles in question, I didn't
even get an answer, snide or otherwise. So starting with this posting, I
refuse to apologize for being impertinent. Nyah, nyah, nyah.
>
>
>Stevan Harnad
>princeton!mind!harnad
Daniel R. Simon
"sorry, no more quotations"
-D. Simon
------------------------------
Date: Thu, 30 Oct 86 16:09:20 EST
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: extended Turing test
In article <8610271728.AA12616@ucbvax.Berkeley.EDU>, harnad@mind.UUCP writes:
> > [I]t's misleading to propose that a veridical model of _our_ behavior
> > ought to have our "performance capacities"...I do not (yet) quarrel
> > with the principle that the model ought to have our abilities. But to
> > speak of "performance capacities" is to subtly distort the fundamental
> > problem. We are not performers!
>
> "Behavioral ability"/"performance capacity" -- such fuss over
> black-box synonyms, instead of facing the substantive problem of
> modeling the functional substrate that will generate them.
You seem to be looking at the problem as a scientist. Let me give an
example of what I mean:
Suppose you have a robot slave. (That's the practical goal of A.I.,
isn't it?) It cooks for you, makes the beds, changes the oil in your
car, puts the dog out, performs sexual favors, ... you name it. BUT--
it will not open the front door for you!
Maddened with frustration, you order an electric-eye door opener,
1950s design. It works flawlessly. Now you have everything you want.
Does the combination of robot + door-opener pass the Total Turing Test?
Is the combination a valid subject for the Test?
------------------------------
Date: Thu, 30 Oct 86 15:56:27 EST
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: how we decide whether it has a mind
In article <8610271726.AA12550@ucbvax.Berkeley.EDU>, harnad@mind.UUCP writes:
> > One (rationally) believes other people are conscious BOTH because
> > of their performance and because their internal stuff is a lot like
> > one's own.
>
> ... I am not denying that
> there exist some objective data that correlate with having a mind
> (consciousness) over and above performance data. In particular,
> there's (1) the way we look and (2) the fact that we have brains. What
> I am denying is that this is relevant to our intuitions about who has a
> mind and why. I claim that our intuitive sense of who has a mind is
> COMPLETELY based on performance, and our reason can do no better. ...
There's a complication here: Our intuitions about things in our environment
change with the environment. The first time you use a telephone, you hear
an electronic reproduction of somebody's voice; you KNOW that you're talking
to a machine, not to the other person. Soon this knowledge evaporates, and
you come to think, "I talked with Smith today on the phone." You may even
have seen his face before you!
It's the same with thinking. When only living things could perceive and
adapt accordingly, people did not think of artifacts as having minds.
This wasn't stubborn of them, just honest intuition. When ELIZA came
along, it became useful for her users to think of her as having a mind.
Just like thinking you talked with Smith ...
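For the record, the mechanism her users were so ready to credit with a mind
is almost embarrassingly simple.  A minimal ELIZA-style sketch (in
present-day Python; these patterns are illustrative stand-ins, not
Weizenbaum's actual DOCTOR script):

    import re

    # A few keyword rules in the spirit of ELIZA: match a pattern,
    # echo fragments of the user's own words back in a canned template.
    RULES = [
        (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.*)", re.I), "How long have you felt {0}?"),
        (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
    ]

    def reply(utterance):
        # Return the first matching canned response, else a stock prompt.
        for pattern, template in RULES:
            m = pattern.search(utterance)
            if m:
                return template.format(*m.groups())
        return "Please go on."

    # reply("I am unhappy")  ->  "Why do you say you are unhappy?"

No perception, no adaptation, no model of the speaker -- yet the intuition
"she has a mind" came along anyway.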
I'd like to see less treatment of "X has a mind" as a formal proposition,
and more discussion of how we use our intuition about it. After all,
is having a mind the most important thing about somebody to you? Is
it even important at all?
------------------------------
End of AIList Digest
********************