AIList Digest           Thursday, 30 Oct 1986     Volume 4 : Issue 242 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 27 Oct 86 03:58:54 GMT
From: spar!freeman@decwrl.dec.com
Subject: Re: Searle, Turing, Symbols, Categories

In article <12@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>freeman@spar.UUCP (Jay Freeman) replies:
>
>> Possibly a more interesting test [than the robotic version of
>> the Total Turing Test] would be to give the computer
>> direct control of the video bit map and let it synthesize an
>> image of a human being.
>
> Manipulating digital "images" is still only symbol-manipulation. [...]

Very well, let's equip the robot with an active RF emitter so
it can jam the camera's electronics and impose whatever bit map it
wishes, whether the camera likes it or not. Too silly? Very well,
let's design a robot in the shape of a back projector, and let it
create internally whatever representation of a human being it wishes
the camera to see, and project it on its screen for the camera to
pick up. Such a robot might do a tolerable job of interacting with
other parts of the "objective" world, using robot arms and whatnot
of more conventional design, so long as it kept them out of the
way of the camera. Still too silly? Very well, let's create a
vaguely anthropomorphic robot and equip its external surfaces with
a complete covering of smaller video displays, so that it can
achieve the minor details of human appearance by projection rather
than by mechanical motion. (We can use a crude electronic jammer to
limit the amount of detail that the camera can see, if necessary.)
Well, maybe our model shop is good enough to do most of the details
of the robot convincingly, so we'll only have to project subtle
details of facial expression. Maybe just the eyes.

Slightly more seriously, if you are going to admit the presence of
electronic or mechanical devices between the subject under test and
the human to be fooled, you must accept the possibility that the test
subject will be smart enough to detect their presence and exploit their
weaknesses. Returning to a more facetious tone, consider a robot that
looks no more anthropomorphic than your vacuum cleaner, but that is
possessed of moderate manipulative abilities and a good visual perceptual
apparatus, and furthermore, has a Swiss Army knife.

Before the test commences, the robot sneakily rolls up to the
camera and removes the cover. It locates the connections for the
external video output, and splices in a substitute connection to
an external video source which it generates. Then it replaces the
camera cover, so that everything looks normal. And at test time,
the robot provides whatever image it wants the testers to see.

A dumb robot might have no choice but to look like a human being
in order to pass the test. Why should a smart one be so constrained?


-- Jay Freeman

------------------------------

Date: Mon 27 Oct 86 20:02:39-EST
From: Albert Boulanger <ABOULANGER@G.BBN.COM>
Subject: Turing Test

I think it is amusing and instructive to look at real attempts at the
Turing test.

One interesting attempt is written up in the Post Scriptum of the
chapter:

"A Coffeehouse Conversation on the Turing Test"
Metamagical Themas
Douglas Hofstadter
Basic Books 1985


Albert Boulanger
BBN Labs

------------------------------

Date: 27 Oct 86 17:23:31 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov
Subject: Pseudomath about the Turing Test: Reply to Padin

[Until the problem of follow-up articles to mod.ai through Usenet is
straightened out, I'm temporarily responding to mod.ai on net.ai.]

In mod.ai, in Message-ID: <8610270723.AA05463@ucbvax.Berkeley.EDU>,
under the subject heading THE PSEUDOMATH OF THE TURING TEST,
PADIN@FNALB.BITNET writes:

> DEFINE THE SET Q={question1,question2,...}. LETS NOTE THAT
> FOR EACH q IN Q, THERE IS AN INFINITE NUMBER OF RESPONSES (THE
> RESPONSES NEED NOT BE RELEVANT TO THE QUESTION, THEY JUST NEED TO BE
> RESPONSES). IN FACT, WE CAN DEFINE A SET R={EVERY POSSIBLE RESPONSE TO
> ANY QUESTION}, i.e., R={r1,r2,r3,...}.

Do pseudomath and you're likely to generate pseudoproblems. Nevertheless,
this way of formulating it does inadvertently illustrate quite clearly why
the symbolic version of the turing test is inadequate and the robotic version
is to be preferred. The symbolic version is equivalent to the proverbial
monkey's chances of typing Shakespeare by combinatorics. The robotic version
(pending the last word on basic continuity/discontinuity in microphysics) is
then no more or less of a combinatorial problem than Newtonian Mechanics.
[Concerning continuity/discreteness, join the ongoing discussion on the
A/D distinction that's just started up in net/mod.ai.]
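To put a number on the monkey-at-the-typewriter point, here is a
back-of-the-envelope sketch (the alphabet size and text length are
assumed figures, not anything established above):

    # Odds that an unconstrained symbol-cruncher types a fixed text by
    # chance.  Figures below are illustrative assumptions.
    import math

    ALPHABET = 27          # 26 letters plus space -- a generous simplification
    TARGET_CHARS = 130000  # rough character count of Hamlet (assumed)

    # P(a uniformly random string of that length matches the target)
    # is ALPHABET ** -TARGET_CHARS; report its order of magnitude.
    log10_p = -TARGET_CHARS * math.log10(ALPHABET)
    print("P(match) ~ 10^%.0f" % log10_p)   # ~ 10^-186000

A system whose only constraint is symbol-by-symbol combinatorics faces
odds of that order; a robot causally coupled to the world does not.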

> THE EXISTENCE OF ...A FUNCTION T THAT MAPS A QUESTION q TO A SET
> OF RESPONSES RR... FOR ALL QUESTIONS q IS EVIDENCE FOR THE PRESENCE
> OF MIND SINCE T CHOOSES, OUT OF AN INFINITE NUMBER OF RESPONSES,
> THOSE RESPONSES THAT ARE APPROPRIATE TO AN ENTITY WITH A MIND.

Pare off the pseudomath about "choosing among infinities" and you just
get a restatement of the basic intuition behind the turing test: That an
entity has a mind if it acts indistinguishably from an entity with a
mind.
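Stripped to clean notation, the construction amounts to no more than
this (my reconstruction, not Padin's):

    % Q = the set of questions, R = the set of all possible responses.
    % T assigns to each question the subset of "mindlike" responses:
    \[
        T : Q \to 2^{R}, \qquad T(q) \subseteq R \quad \text{for every } q \in Q .
    \]
    % T is defined only by the judgment "these are the responses an
    % entity with a mind would give" -- so its mere existence cannot
    % serve as independent evidence for mind.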

> NOW A PROBLEM [arises]: WHO IS TO DECIDE WHICH SUBSET OF RESPONSES
> INDICATES THE EXISTENCE OF MIND? WHO WILL DECIDE WHICH SET IS
> APPROPRIATE TO INDICATE AN ENTITY OTHER THAN OURSELVES IS OUT THERE
> RESPONDING?

The same one who decides in ongoing, everyday "solutions" to the
other-minds problem. And on exactly the same basis:
indistinguishability of performance.

> [If] WE GET A RESPONSE WHICH APPEARS TO BE RANDOM, IT WOULD SEEM THAT
> THIS WOULD BE SUFFICIENT TO LABEL [the] RESPONDENT A MINDLESS ENTITY.
> HOWEVER, IT IS THE EXACT RESPONSE ONE WOULD EXPECT OF A SCHIZOPHRENIC.

When will this tired prima facie objection (about schizophrenia,
retardation, aphasia, coma, etc.) at last be laid to rest? Damaged
humans inherit the benefit of the doubt from what we know about their
biological origins AND about the success of their normal counterparts in
passing the turing test. Moreover, there is no problem in principle
with subhuman or nonhuman performance -- in practice we turing-test
animals too -- and this too is probably parasitic on our intuitions
about normal human beings (although the evolutionary order was
probably vice versa).

Also, schizophrenics don't just behave randomly; if a candidate just
behaved randomly it would not only justifiably flunk the turing test,
but it would not survive either. (I don't even know what behaving
purely randomly might mean; it seems to me the molecules would never
make it through embryogeny...) On the other hand, which of us doesn't
occasionally behave randomly, and some more often than others? We can
hardly expect the turing test to provide us with the criteria for extreme
conditions such as brain death if even biologists have problems with that.

All these exotic variants are pseudoproblems and red herrings,
especially when nothing we have yet developed can give the normal
version of the turing test a run for its money.

> NOW IF WE ARE TO USE OUR JUDGEMENT IN DETERMINING THE PRESENCE OF
> ANOTHER MIND, THEN WE MUST ACCEPT THE POSSIBILITY OF ERROR INHERENT
> IN THE HUMAN DECISION MAKING PROCESS. AT BEST, THEN, THE TURING TEST
> WILL BE ABLE TO GIVE US ONLY A HINT AT THE PRESENCE OF ANOTHER MIND;
> A LEVEL OF PROBABILITY.

What else is new? Even the theories of theoretical physics are only
true with high probability. There is no mathematical proof that our
inferences are entailed with necessity by the data. This is called
"underdetermination" and "inductive risk," and it is endemic to all
empirical inquiry.

But besides that, the turing test has even a second layer of
underdetermination that verges on indeterminacy. I have argued that it
has two components: One is the formal theorist's task of developing a
device that can generate all of our performance capacities, i.e., one
that can pass the Total Turing Test. So far, with only "performance
capacity" having been mentioned, the level of underdetermination is
that of ordinary science (it may have missed some future performance
capacity, or it may fail tomorrow, or it may just happen to accomplish
the same performance in a radically different way, just as the
universe may happen to differ from our best physical theory).

The second component of the turing test, however, is informal, intuitive
and open-ended, and it's the one we usually have in mind when we speak of
the turing test: Will a normal human being be able to tell the candidate
apart from someone with a mind? The argument is that
turing-indistinguishability of (total) performance is the only basis
for making that judgment in any case.

Fallible? Of course that kind of judgment is fallible. Certainly no less
fallible than ordinary scientific inference; and (I argue) no more fallible
than our judgments about other minds. What more can one ask? Apart from the
necessary truths of mathematics, the only other candidate for a nonprobabilistic
certainty is our direct ("incorrigible") awareness of our OWN minds (although
even there the details seem a bit murky...).


Stevan Harnad
{allegra, bellcore, seismo, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: 28 Oct 86 08:40:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: son of yet more wrangling on Searle, Turing, Quine, Hume, ...


Warning: the following message is long and exceeds the FDA maximum
daily recommended dosage of philosophizing. You have been warned.

This is the exchange that kicked off this whole diatribe:

>>> Harnad: there is no rational reason for being more sceptical about robots'
>>> minds (if we can't tell their performance apart from that of people)
>>> than about (other) peoples' minds.

>> Cugini: One (rationally) believes other people are conscious BOTH because
>> of their performance and because their internal stuff is a lot like
>> one's own.

> This is a very important point and a subtle one, so I want to make
> sure that my position is explicit and clear: I am not denying that
> there exist some objective data that correlate with having a mind
> (consciousness) over and above performance data. In particular,
> there's (1) the way we look and (2) the fact that we have brains. What
> I am denying is that this is relevant to our intuitions about who has a
> mind and why. I claim that our intuitive sense of who has a mind is
> COMPLETELY based on performance, and our reason can do no better. These
> other correlates are only inessential afterthoughts, and it's irrational
> to take them as criteria.

This riposte seems implausible on the face of it. You seem to want
to pretend that we know absolutely nothing about the basis of thought
in humans, and to "suppress" all evidence based on such knowledge.
But that's just wrong. Brains *are* evidence for mind, in light of
our present knowledge.

> My supporting argument is very simple: We have absolutely no intuitive
> FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
> since spun an implementable brain theory from our introspective
> armchairs.) Consequently, our belief that brains are evidence of minds and
> that the absence of a brain is evidence of the absence of a mind is based
> on a superficial black-box correlation. It is no more rational than
> being biased by any other aspect of appearance, such as the color of
> the skin, the shape of the eyes or even the presence or absence of a tail.

Hoo hah, you mean to say that belief based on "black-box correlation"
is irrational in the absence of a fully-supporting theoretical
framework? Balderdash. People in, say, 1500 AD were perfectly rational
in predicting tides based on the position of the moon (and vice-versa)
even though they hadn't a clue as to the mechanism of interaction.
If you keep asking "why" long enough, *all* science is grounded on
such brute-fact correlation (why do like charges repel, etc.) - as
Hume pointed out a while back.
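A toy version of the 1500-AD situation, for concreteness (the data and
the hidden law below are invented for illustration):

    # Predicting tides from the moon's position by brute correlation,
    # with no theory of gravitation anywhere in the predictor.
    import math, random

    def tide_height(phase):
        # "Nature": hidden from the observer, who sees only outcomes.
        return 2.0 * math.cos(2 * phase) + random.gauss(0, 0.1)

    random.seed(0)
    BINS = 24
    table = {}
    for _ in range(5000):                    # years of patient observation
        phase = random.uniform(0, 2 * math.pi)
        table.setdefault(int(phase / (2 * math.pi) * BINS) % BINS,
                         []).append(tide_height(phase))
    lookup = {b: sum(v) / len(v) for b, v in table.items()}

    # Prediction is pure table lookup: correlation with zero mechanism.
    query = 1.0
    print("tide at phase 1.0 rad: %+.2f"
          % lookup[int(query / (2 * math.pi) * BINS)])

The lookup table predicts perfectly well; no "fully-supporting
theoretical framework" ever enters into it.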

> To put it in the starkest terms possible: We wouldn't know what device
> was and was not relevantly brain-like if it was staring us in the face
> -- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass
> the Total Turing Test). That's the only thing our intuitions have to
> go on, and our reason has nothing more to offer either.

Except in the case of actual other brains (which are, by definition,
relevantly brain-like). The only skepticism open to one is that
one's own brain is unique in its causal powers - possible, but hardly
the best rational hypothesis.

> People were sure (as sure as they'll ever be) that other people had
> minds long before they ever discovered they had brains. I myself believed
> the brain was just a figure of speech for the first dozen or so years of
> my life. Perhaps there are people who don't learn or believe the news
> throughout their entire lifetimes. Do you think these people KNOW any
> less than we do about what does or doesn't have a mind? ...

Let me re-cast Harnad's argument (perhaps in a form unacceptable to him):
We can never know any mind directly, other than our own, if we take
the concept of mind to be something like "conscious intelligence" -
ie the intuitive (and correct, I believe) concept, rather than
some operational definition, which has been deliberately formulated
to circumvent the epistemological problems. (Harnad, to his credit,
does not stoop to such positivist ploys.) But the only external
evidence we are ever likely to get for "conscious intelligence"
is some kind of performance. Moreover, the physical basis for
such performance is known only contingently, ie we do not know
a priori that it is brains, rather than automatic dishwashers,
which generate mind; we know it only as an a posteriori correlation.
Therefore, in the search for mind, we should rely on the primary
criterion (performance), rather than on such derivative criteria
as brains.

I pretty much agree with the above account except for the last sentence
which prohibits us from making use of derivative criteria. Why should
we limit ourselves so? Since when is that part of rationality?
No, the fact is we do have more reason to suppose mind of other
humans than of robots, in virtue of an admittedly derivative (but
massively confirmed) criterion. And we are, in this regard, in an
epistemological position *superior* to those who don't/didn't know
about such things as the role of the brain, ie we have *more* reason
to believe in the mindedness of others than they do. That's why
primitive tribes (I guess) make the *mistake* of attributing
mind to trees, weather, etc. Since raw performance is all they
have to go on, seemingly meaningful activity on the part of any
old thing can be taken as evidence of consciousness. But we
sophisticates have indeed learned a thing or two, in particular, that
brains support consciousness, and therefore we (rationally) give the
benefit of the doubt to any brained entity, and the anti-benefit to
un-brained entities. Again, not to say that we might not learn about
other bases for mind - but that hardly disparages brainedness as a
rational criterion for mindedness.

Another point, which I'll just state rather than argue for, is that
even performance is only *contingently* a criterion for mind - ie,
it so happens, in this universe, that mind often expresses itself
by playing chess, etc., just as it so happens that brains cause
minds. And so there's really not much difference between relying on
one contingent correlate (performance) rather than another (brains)
as evidence for the presence of mind.

> > Why is consciousness a red herring just because it adds a level
> > of uncertainty?
>
> Perhaps I should have said indeterminacy. If my arguments for
> performance-indiscernibility (the turing test) as our only objective
> basis for inferring mind are correct, then there is a level of
> underdetermination here that is in no way comparable to that of, say,
> the unobservable theoretical entities of physics (say, quarks, or, to
> be more trendy, perhaps strings). Ordinary underdetermination goes
> like this: How do I know that your theory's right about the existence
> and presence of strings? Because WITH them the theory succeeds in
> accounting for all the objective data (let's pretend), and without
> them it does not. Strings are not "forced" by the data, and other
> rival theories may be possible that work without them. But until these
> rivals are put forward, normal science says strings are "real" (modulo
> ordinary underdetermination).

> Now try to run that through for consciousness: How do I know that your
> theory's right about the existence and presence of consciousness (i.e.,
> that your model has a mind)? "Because its performance is
> turing-indistinguishable from that of creatures that have minds." Is
> your theory dualistic? Does it give consciousness an independent,
> nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit
> the objective data just as well (indeed, turing-indistinguishably)
> without consciousness? "Well..."

> That's indeterminacy, or radical underdetermination, or what have you.
> And that's why consciousness is a methodological red herring.

I admit, I have trouble following the line of argument above. Is this
Quine's "it's real if it's a term in our best-confirmed theories"
approach? But I think Quine is quite wrong, if that is his
assertion. I know consciousness (my own, at least) exists, not as
some derived theoretical construct which explains low-level data
(like magnetism explains pointer readings), but as the absolutely
lowest rock-bottom datum there is. Consciousness is the data,
not the theory - it is the explicandum, not the explicans (hope
I got that right). It's true that I can't directly observe the
consciousness of others, but so what? That's an epistemological
inconvenience, but it doesn't make consciousness a red herring.

> I don't know what you mean, by the way, about always being able to
> "engineer anything with anything at some level of abstraction." Can
> anyone engineer something to pass the robotic version of the Total
> Turing Test right now? And what's that "level of abstraction" stuff?
> Robots have to do their thing in the real world. And if my
> groundedness arguments are valid, that ain't all done with symbols
> (plus add-on peripheral modules).

The engineering remark was to reinforce the idea that, perhaps,
being-composed-of-protein might not be as practically incidental
as many assume. Frinstance, at some level of difficulty, one can
get energy from sunlight "as plants do." But the issues are:
do we get energy from sunlight in the same way? How similar do
we demand that the processes be? It might be easy to be as
efficient as plants in getting energy from sunlight through
non-biological technology. But if we're interested in simulation at
a lower level of abstraction, eg, photosynthesis, then, maybe, a
non-biological approach will be impractical. The point is we know we
can simulate human chess-playing abilities with non-biological
technology. Should we just therefore declare the battle for mind won,
and go home? Or ask the harder question: what would it take to get a
machine to play a game of chess like a person does, ie, consciously?
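The behavioral level, for contrast, is cheap to exhibit. A sketch of a
generic minimax searcher ("game" is a hypothetical interface with
moves/result/is_over/score methods, not any real chess program):

    # Competent play from brute search -- nothing resembling human
    # deliberation anywhere inside.
    def minimax(game, maximizing=True):
        if game.is_over():
            return game.score(), None        # score: positive favors maximizer
        best_val = float("-inf") if maximizing else float("inf")
        best_move = None
        for move in game.moves():
            val, _ = minimax(game.result(move), not maximizing)
            if (val > best_val) if maximizing else (val < best_val):
                best_val, best_move = val, move
        return best_val, best_move

Whether anything like that could count as playing "like a person does"
is exactly the question at issue.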

BTW, I quite agree with your more general thesis on the likely
inadequacy of symbols (alone) to capture mind.

John Cugini <Cugini@NBS-VMS>

------------------------------

End of AIList Digest
********************
