AIList Digest Friday, 7 Nov 1986 Volume 4 : Issue 255
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 29 Oct 86 18:31:18 GMT
From: ubc-vision!ubc-cs!andrews@BEAVER.CS.WASHINGTON.EDU
Subject: Turing Test ad infinitum
This endless discussion about the Turing Test makes the
"eliminative materialist" viewpoint very appealing: by the
time we have achieved something that most people today would
call intelligent, we will have done it by disposing of
concepts such as "intelligence", "consciousness", etc.
Perhaps the reason we're having so much trouble defining
a workable Turing Test is that we're essentially trying to
fit a square peg into a round hole, belabouring some point
which has less relevance than we realize. I wonder what old
Alan himself would say about the whole mess.
--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"At the sound of the falling tree... it's 9:30"
------------------------------
Date: 1 Nov 86 20:34:02 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In his second net.ai comment on the abstracts of the two articles under
discussion, me@utai.UUCP (Daniel Simon) wrote:
>> WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?... Please
>> forgive my impertinent questions, but I haven't read your
>> articles, and I'm not exactly clear about what this "total"
>> Turing test entails.
I replied (after longish attempts to explain in two separate iterations):
>"Try reading the articles."
Daniel Simon rejoined:
> Well, not only did I consider this pretty snide, but when I sent you
> mail privately, asking politely where I can find the articles in
> question, I didn't even get an answer, snide or otherwise. So starting
> with this posting, I refuse to apologize for being impertinent.
> Nyah, nyah, nyah.
The same day, the following email came from Daniel Simon:
> Subject: Hoo, boy, did I put my foot in it:
> Ooops....Thank you very much for sending me the articles, and I'm sorry
> I called you snide in my last posting. If you see a bright scarlet glow
> in the distance, looking west from Princeton, it's my face. Serves me
> right for being impertinent in the first place... As soon as I finish
> reading the papers, I'll respond in full--assuming you still care what
> I have to say... Thanks again. Yours shamefacedly, Daniel R. Simon.
This is a very new form of communication for all of us. We're just going to
have to work out a new code of Netiquette. With time, it'll come. I
continue to care what anyone says with courtesy and restraint, and
intend to respond to everything of which I succeed in making sense.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 1 Nov 86 18:21:12 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
Jay Freeman (freeman@spar.UUCP) had, I thought, joined the
ongoing discussion about the robotic version of the Total Turing Test
to address the questions that were raised in the papers under
discussion, namely: (1) Do we have any basis for contending with the
"other minds problem" -- whether in other people, animals or machines
-- other than turing-indistinguishable performance capacity? (2) Is
the teletype version of the turing test -- which allows only
linguistic (i.e., symbolic) interactions -- a strong enough test? (3)
Could even the linguistic version alone be successfully passed by
any device whose symbolic functions were not "grounded" in
nonsymbolic (i.e., robotic) function? (4) Are transduction, analog
representations, A/D conversion, and effectors really trivial in this
context, or is there a nontrivial hybrid function, grounding symbolic
representation in nonsymbolic representation, that no one has yet
worked out?
When Freeman made his original suggestion that the symbolic processor
could have access to the robotic transducer's bit-map, I thought he
was making the sophisticated (but familiar) point that once the
transducer representation is digitized, it's symbolic all the way.
(This is a variant of the "transduction-is-trivial" argument.) My
prior reply to Freeman (about simulated models of the world, modularity,
etc.) was addressed to this construal of his point. But now I see that
he was not making this point at all, for he replies:
> ... let's equip the robot with an active RF emitter so
> it can jam the camera's electronics and impose whatever bit map it
> wishes... design a robot in the shape of a back projector, and let it
> create internally whatever representation of a human being it wishes
> the camera to see, and project it on its screen for the camera to
> pick up. Such a robot might do a tolerable job of interacting with
> other parts of the "objective" world, using robot arms and whatnot
> of more conventional design, so long as it kept them out of the
> way of the camera... let's create a vaguely anthropomorphic robot and
> equip its external surfaces with a complete covering of smaller video
> displays, so that it can achieve the minor details of human appearance
> by projection rather than by mechanical motion. Well, maybe our model
> shop is good enough to do most of the details of the robot convincingly,
> so we'll only have to project subtle details of facial expression.
> Maybe just the eyes.
> ... if you are going to admit the presence of electronic or mechanical
> devices between the subject under test and the human to be fooled,
> you must accept the possibility that the test subject will be smart
> enough to detect their presence and exploit their weaknesses...
> consider a robot that looks no more anthropomorphic than your vacuum
> cleaner, but that is possessed of moderate manipulative abilities and
> a good visual perceptive apparatus.
> Before the test commences, the robot sneakily rolls up to the
> camera and removes the cover. It locates the connections for the
> external video output, and splices in a substitute connection to
> an external video source which it generates. Then it replaces the
> camera cover, so that everything looks normal. And at test time,
> the robot provides whatever image it wants the testers to see.
> A dumb robot might have no choice but to look like a human being
> in order to pass the test. Why should a smart one be so constrained?
From this reply I infer that Freeman is largely concerned with the
question of appearance: Can a robot that doesn't really look like a
person SIMULATE looking like a person by essentially symbolic means,
plus add-on modular peripherals? In the papers under discussion (and in some
other iterations of this discussion on the net) I explicitly rejected appearance
as a criterion. (The reasons are given elsewhere.) What is important in
the robotic version is that it should be a human DO-alike, not a human
LOOK-alike. I am claiming that the (Total) object-manipulative (etc.)
performance of humans cannot be generated by a basically symbolic
module that is merely connected with peripheral modules. I am
hypothesizing (a) that symbolic representations must be NONMODULARLY
(i.e., not independently) grounded in nonsymbolic representations, (b)
that the Total Turing Test requires the candidate to display all of
our robotic capacities as well as our linguistic ones, and (c) that
even the linguistic ones could not be accomplished unless grounded in
the robotic ones. None of this depends on what the robot
(or its grey matter!) LOOKS like.
Two last observations. First, what the "proximal stimulus" -- i.e.,
the physical energy pattern on the transducer surface -- PRESERVES,
and what the next (A/D) step -- the digital representation -- LOSES, is
everything about the full PHYSICAL configuration of the energy pattern
that cannot be recovered by inversion (D/A). (That's what the ongoing
concurrent discussion about the A/D distinction is in part concerned
with.) Second, I think there is a tendency to overcomplicate the
issues involved in the turing test by adding various arbitrary
elaborations to it. The basic questions are fairly simply stated
(though not so simple to answer). Focusing instead on ornamented
variants often seems to lead to begging the question or changing the
subject.
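
A minimal numerical sketch of the A/D point above, assuming a simple
uniform quantizer (the function names and parameters are illustrative,
not taken from the articles under discussion): digitizing a finely
varying signal keeps only a finite set of codes, and the best possible
D/A inversion returns those quantized values, not the original physical
configuration.

# Sketch (assumed uniform quantizer; purely illustrative): an A/D step
# maps a continuous-valued signal onto a finite set of codes, and the
# D/A inverse recovers only the quantized approximation.

import math

def a_to_d(signal, bits=3, lo=-1.0, hi=1.0):
    # Quantize each sample to one of 2**bits integer codes.
    step = (hi - lo) / (2 ** bits - 1)
    return [round((s - lo) / step) for s in signal]

def d_to_a(codes, bits=3, lo=-1.0, hi=1.0):
    # "Invert" the quantization: map codes back to representative values.
    step = (hi - lo) / (2 ** bits - 1)
    return [lo + c * step for c in codes]

# A stand-in "proximal stimulus": a finely varying analog waveform.
analog = [math.sin(2 * math.pi * t / 100.0) for t in range(100)]
reconstructed = d_to_a(a_to_d(analog))

# The reconstruction error is nonzero: whatever falls between the
# quantization levels is lost for good, however the codes are later processed.
max_error = max(abs(a - r) for a, r in zip(analog, reconstructed))
print("max reconstruction error:", round(max_error, 4))
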
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 1 Nov 86 20:02:08 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
michaelm@bcsaic.UUCP (michael maxwell) writes:
> I guess what I had in mind for the revised Turing test was not using
> language at all--maybe I should have eliminated the sound link (and
> writing). What in the way people behave (facial expressions, body
> language etc.) would cue us to the idea the one is a human and the other
> a robot? What if you showed pictures to the examinees--perhaps
> beautiful scenes, and revolting ones? This is more a test for emotions
> than for mind (Mr. Spock would probably fail). But I think that a lot of
> what we think of as human is tied up in this nonverbal/ emotional level.
The modularity issue looms large again. I don't believe there's an
independent module for affective expression in human beings. It's all
-- to use a trendy though inadequate expression -- "cognitively
penetrable." There's also the issue of the TOTALITY of the Total
Turing Test, which was intended to remedy the underdetermination of
toy models/modules: It's not enough just to get a model to mimic our
facial expressions. That could all be LITERALLY done with mirrors
(and, say, some delayed feedback and some scrambling and
recombining), and I'm sure it could fool people, at least for a while.
I simply conjecture that this could not be done for the TOTALITY of
our performance capacity using only more of the same kinds of tricks
(analog OR symbolic).
The capacity to manipulate objects in the world in all the ways
we can and do (which happens to include naming and describing
them, i.e., linguistic acts) is a much taller order than mimicking only
our nonverbal expressive behavior. There may be (in an unfortunate mixed
metaphor) many more ways to skin (toy) parts of the theoretical cat than
the whole of it.
Three final points: (1) Your proposal seems to equivocate between the (more
important) formal functional component of the Total Turing Test (i.e., how do
we get a model to exhibit all of our performance capacities, be they
verbal or nonverbal?) and the informal, intuitive component (i.e., will it
be indistinguishable in all relevant respects from a person, TO a
person?). The motto would be: If you use something short of the Total
Turing Test, you may be able to fool some people some of the time, but not
all of the time. (2) There's nothing wrong in principle with a
nonverbal, even a nonhuman turing test; I think (higher) animals pass this
easily all the time, with virtually the same validity as humans, as
far as I'm concerned. But this version can't rely exclusively on
affective expression modules either. (3) Finally, as I've argued earlier,
all attempts to "capture" qualitative experience -- not just emotion,
but any conscious experience, such as what it's LIKE to see red or
to believe X -- amount to an unprofitable red herring in this
enterprise. The whole point of the Total Turing Test is that
performance-indistinguishability IS your only basis for inferring that
anyone but you has a mind (i.e., has emotions, etc.). In the paper I
dubbed this "methodological epiphenomenalism as a research strategy in
cognitive science."
By the way, you prejudged the question by the way you put it. A perfectly
noncommittal but monistic way of putting it would be: "What in the way
ROBOTS behave would cue us to the idea that one robot had a mind and
another did not?" This leaves it appropriately open to continuing
research exactly which causal physical devices (= "robots"), whether
natural or artificial, do or do not have minds.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************