AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 237
Today's Topics:
Philosophy - Harnad's Replies to Krulwich and Paul &
Turing Test & Symbolic Reasoning
----------------------------------------------------------------------
Date: Sun, 26 Oct 86 11:45:17 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: For posting on mod.ai 3rd of 4 (reply to Krulwich)
In mod.ai, Message-ID: <8610190504.AA08083@ucbvax.Berkeley.EDU>,
17 Oct 86 17:29:00 GMT, KRULWICH@C.CS.CMU.EDU (Bruce Krulwich) writes:
> i disagree...that symbols, and in general any entity that a computer
> will process, can only be dealt with in terms of syntax. for example,
> when i add two integers, the bits that the integers are encoded in are
> interpreted semantically to combine to form an integer. the same
> could be said about a symbol that i pass to a routine in an
> object-oriented system such as CLU, where what is done with
> the symbol depends on it's type (which i claim is it's semantics)
Syntax is ordinarily defined as formal rules for manipulating physical
symbol tokens in virtue of their (arbitrary) SHAPES. The syntactic goings-on
are semantically interpretable, that is, the symbols are also
manipulable in virtue of their MEANINGS, not just their shapes.
Meaning is a complex and ill-understood phenomenon, but it includes
(1) the relation of the symbols to the real objects they "stand for" and
(2) a subjective sense of understanding that relation (i.e., what
Searle has for English and lacks for Chinese, despite correctly
manipulating its symbols). So far the only ones who seem to
do (1) and (2) are ourselves. Redefining semantics as manipulating symbols
in virtue of their "type" doesn't seem to solve the problem...
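[To make the shape/meaning distinction concrete, here is a minimal
sketch in present-day Python; the function name and the bit-string
encoding are invented for illustration and appear nowhere in the
postings. It adds two integers by rules that consult only the shapes
of the tokens '0' and '1'. Nothing in the procedure refers to what the
strings mean; the reading of the result as "6 + 7 = 13" is supplied
entirely by us, which is just Harnad's point about syntax.]

    def add_bits(a: str, b: str) -> str:
        """Ripple-carry addition on bit strings, by shape-matching alone."""
        a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
        out, carry = [], '0'
        for x, y in reversed(list(zip(a, b))):
            ones = [x, y, carry].count('1')       # a purely formal tally of token shapes
            out.append('1' if ones % 2 else '0')
            carry = '1' if ones > 1 else '0'
        if carry == '1':
            out.append('1')
        return ''.join(reversed(out))

    print(add_bits('0110', '0111'))   # '1101' -- i.e., 6 + 7 = 13, under OUR interpretation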
> i think that the reason that computers are so far behind the
> human brain in semantic interpretation and in general "thinking"
> is that the brain contains a hell of a lot more information
> than most computer systems, and also the brain makes associations
> much faster, so an object (ie, a thought) is associated with
> its semantics almost instantly.
I'd say you're pinning a lot of hopes on "more" and "faster." The
problem just might be somewhat deeper than that...
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: Sun, 26 Oct 86 11:59:28 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: For posting on mod.ai, 4th of 4 (reply to Danny Paul)
Topic: Machines: Natural and Man-Made
On mod.ai, in Message-ID: <8610240550.AA15402@ucbvax.Berkeley.EDU>,
22 Oct 86 14:49:00 GMT, NGSTL1::DANNY%ti-eg.CSNET@RELAY.CS.NET (Daniel Paul)
cites Daniel Simon's earlier reply in AI digest (V4 #226):
>One question you haven't addressed is the relationship between intelligence and
>"human performance". Are the two synonymous? If so, why bother to make
>artificial humans when making natural ones is so much easier (not to mention
>more fun)?
Daniel Paul then adds:
> This is a question that has been bothering me for a while. When it
> is so much cheaper (and possible now, while true machine intelligence
> may be just a dream) why are we wasting time training machines when we
> could be training humans instead? The only reasons that I can see are
> that intelligent systems can be made small enough and light enough to
> sit on bombs. Are there any other reasons?
Apart from the two obvious ones -- (1) so machines can free people to do
things machines cannot yet do, if people prefer, and (2) so machines can do
things that people can only do less quickly and efficiently, if people
prefer -- there is the less obvious reply already made to Daniel
Simon: (3) because trying to get machines to display all our performance
capacity (the Total Turing Test) is our only way of arriving at a functional
understanding of what kinds of machines we are, and how we work.
[Before the cards and letters pour in to inform me that I've used
"machine" incoherently: A "machine," (writ large, Deus Ex Machina) is
just a physical, causal system. Present-generation artificial machines
are simply very primitive examples.]
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 23 Oct 86 15:39:08 GMT
From: husc6!rutgers!princeton!mind!harnad@eddie.mit.edu (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
michaelm@bcsaic.UUCP (michael maxwell) writes:
> I believe the Turing test was also applied to orangutans, although
> I don't recall the details (except that the orangutans flunked)...
> As an interesting thought experiment, suppose a Turing test were done
> with a robot made to look like a human, and a human being who didn't
> speak English-- both over a CCTV, say, so you couldn't touch them to
> see which one was soft, etc. What would the robot have to do in order
> to pass itself off as human?
All three should, in principle, have a chance of passing. For the orang,
we would need to administer the ecologically valid version of the
test. (I think we have reasonably reliable cross-species intuitions
about mental states, although they're obviously not as sensitive as
our intraspecific ones, and they tend to be anthropocentric and
anthropomorphic -- perhaps necessarily so; experienced naturalists are
better at this, just as cross-cultural ethnographic judgments depend on
exposure and experience.) We certainly have no problem in principle with
foreign speakers (the remarkable linguist, polyglot and bible-translator
Kenneth Pike has a "magic show" in which, after less than an hour of "turing"
interactions with a speaker of any of the [shrinking] number of languages he
doesn't yet know, they are babbling mutually intelligibly before your very
eyes), although most of us may have some problems in practice with such a
feat, at least without practice.
Severe aphasics and mental retardates may be tougher cases, but there
perhaps the orang version would stand us in good stead (and I don't
mean that disrespectfully; I have an extremely high regard for the mental
states of our fellow creatures, whether human or nonhuman).
As to the robot: well, that's the issue here, isn't it? Can it or can it not
pass the appropriate total test that its appropriate non-robot counterpart
(be it human or ape) can pass? If so, it has a mind, by this criterion (the
Total Turing Test). I certainly wouldn't dream of flunking either a human or
a robot just because he/it didn't feel soft, if his/its total performance
was otherwise turing indistinguishable.
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
Date: 23 Oct 86 14:52:56 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa
Subject: Re: extended Turing test
colonel@sunybcs.UUCP (Col. G. L. Sicherman) writes:
> [I]t's misleading to propose that a veridical model of _our_ behavior
> ought to have our "performance capacities"...I do not (yet) quarrel
> with the principle that the model ought to have our abilities. But to
> speak of "performance capacities" is to subtly distort the fundamental
> problem. We are not performers!
"Behavioral ability"/"performance capacity" -- such fuss over
black-box synonyms, instead of facing the substantive problem of
modeling the functional substrate that will generate them.
------------------------------
Date: 24 Oct 86 19:02:42 GMT
From: spar!freeman@decwrl.dec.com
Subject: Re: Searle, Turing, Symbols, Categories
Possibly a more interesting test would be to give the computer
direct control of the video bit map and let it synthesize an
image of a human being.
------------------------------
Date: Fri, 24 Oct 86 22:54:58 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: turing test
PHayes@SRI-KL.ARPA (Pat Hayes) writes:
> Daniel R. Simon has worries about the Turing test. A good place to find
> intelligent discussion of these issues is Turing's original article in MIND,
> October 1950, v.59, pages 433 to 460.
That article was in part a response to G. Jefferson's Lister Oration,
which appeared as "The mind of mechanical man" in the British Medical
Journal for 1949 (pp. 1105-1121). It's well worth reading in its own
right. Jefferson presents the humane issues at least as well as Turing
presents the scientific issues, and I think that Turing failed to
rebut, or perhaps to comprehend, all Jefferson's objections.
------------------------------
Date: Fri, 24 Oct 86 18:09 CDT
From: PADIN%FNALB.BITNET@WISCVM.WISC.EDU
Subject: THE PSEUDOMATH OF THE TURING TEST
Let's put the Turing test into pseudomathematical terms.

Define the set Q = {question1, question2, ...}. Note that for each q in
Q there is an infinite number of responses (the responses need not be
relevant to the question; they just need to be responses). In fact, we
can define a set R = {every possible response to any question}, i.e.,
R = {r1, r2, r3, ...}.

We can define the Turing test as a function T that maps each question
q in Q to a set RR of responses (i.e., RR is a subset of R). We can
then write

	T(q) --> RR

which states that there exists a function T that maps a question q to a
set of responses RR. The existence of T for all questions q is evidence
for the presence of mind, since T chooses, out of an infinite number of
responses, those that are appropriate to an entity with a mind.

Note: T is the set

	{(question1, {resp1-1, resp2-1, ..., respn-1}),
	 (question2, {resp1-2, resp2-2, ..., respk-2}),
	 ...
	 (questionj, {resp1-j, resp2-j, ..., respm-j}),
	 ...}

We use a set RR of responses because there is, for most questions, more
than one response. There are times, of course, when there is just one
element in RR, such as the response to the question "Is it raining
outside?".

Now a problem arises: who is to decide which subset of responses
indicates the existence of mind? Who will decide which set is
appropriate to indicate that an entity other than ourselves is out
there responding?

For example, if we define the set RR as

	RR = {r(i) | r(i) is randomly chosen from R}

then to each question q in the set of questions used to determine the
existence of mind, we get a response that appears to be random; that
is, we can make no sense of the response with respect to the question
asked. It would seem that this would be sufficient to label the
respondent a mindless entity. However, it is exactly the response one
would expect of a schizophrenic. Now what do we do? Do we choose to
define schizophrenics as mindless people? That is not morally
palatable. Do we choose to allow the "random set" to be used as a
criterion for assessing mindedness? That choice is not acceptable
either, because it simply results in what may be called Turing noise,
yielding no useful information.

If we are unwilling to accept another's decision as to the set of
acceptable responses, then we are compelled to make the determination
ourselves. And if we are to use our own judgment in determining the
presence of another mind, then we must accept the possibility of error
inherent in the human decision-making process. At best, then, the
Turing test can give us only a hint of the presence of another mind; a
level of probability.
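[A toy illustration in Python of the "Turing noise" worry above; the
finite stand-in for R and all the names are invented for this sketch.
A responder that samples R at random is, formally, a perfectly good
function from questions to responses; only the judge's choice of the
"appropriate" subset RR distinguishes it from a relevant responder.]

    import random

    # A finite stand-in for the infinite response set R of the posting.
    R = ["Yes.", "It is raining.", "Seven.", "The sky is full of fish.", "I don't know."]

    def random_responder(question: str) -> str:
        """Selects any response from R, ignoring the question entirely."""
        return random.choice(R)

    def keyword_responder(question: str) -> str:
        """Selects from a question-relevant subset RR of R (crude keyword match)."""
        if "raining" in question.lower():
            return "It is raining."
        return "I don't know."

    for q in ["Is it raining outside?", "What is 3 + 4?"]:
        print(q, "->", random_responder(q), "|", keyword_responder(q))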
------------------------------
Date: 26 Oct 86 20:56:29 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
freeman@spar.UUCP (Jay Freeman) replies:
> Possibly a more interesting test [than the robotic version of
> the Total Turing Test] would be to give the computer
> direct control of the video bit map and let it synthesize an
> image of a human being.
Manipulating digital "images" is still only symbol-manipulation. It is
(1) the causal connection of the transducers with the objects of the
outside world, including (2) any physical "resemblance" the energy
pattern on the transducers may have to the objects from which they
originate, that distinguishes robotic functionalism from symbolic
functionalism and that suggests a solution to the problem of grounding
the otherwise ungrounded symbols (i.e., the problem of "intrinsic vs.
derived intentionality"), as argued in the papers under discussion.
A third reason why internally manipulated bit-maps are not a new way
out of the problems with the symbolic version of the turing test is
that (3) a model that tries to explain the functional basis of our
total performance capacity already has its hands full with anticipating
and generating all of our response capacities in the face of any
potential input contingency (i.e., passing the Total Turing Test)
without having to anticipate and generate all the input contingencies
themselves. In other words, it's enough of a problem to model the mind
and how it interacts successfully with the world without having to
model the world too.
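[A toy sketch in Python of point (1); the 8x8 array and the labels
"eyes" and "mouth" are invented for illustration. A "video bit map" is
just an array of symbol tokens, and every operation below is defined
over those tokens alone; nothing in the program is causally connected
to any human being, which is why synthesizing such an image remains
pure symbol-manipulation.]

    # Build a tiny "bit map" of a face: pure array manipulation throughout.
    face = [[0] * 8 for _ in range(8)]
    for col in (2, 5):
        face[2][col] = 1              # the tokens we choose to call "eyes"
    for col in range(2, 6):
        face[5][col] = 1              # the tokens we choose to call a "mouth"

    for row in face:
        print(''.join('#' if bit else '.' for bit in row))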
Stevan Harnad
{seismo, packard, allegra}!princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************