AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 274
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories &
Turing Tests and Chinese Rooms
----------------------------------------------------------------------
Date: 26 Nov 86 12:41:50 GMT
From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis)
Subject: Re: Searle, Turing, Symbols, Categories
> Steve Harnad >> Keith Dancey
>> [The turing test] should be timed as well as checked for accuracy...
>> Turing would want a degree of humor...
>> check for `personal values,' `compassion,'...
>> should have a degree of dynamic problem solving...
>> a whole body of psychometric literature which Turing did not consult.
>
>I think that these details are premature and arbitrary. We all know
>(well enough) what people can DO: They can discriminate, categorize,
>manipulate, identify and describe objects and events in the world, and
>they can respond appropriately to such descriptions.
Just who is being arbitrary here? Qualities like humor, compassion,
artistic creativity and the like are precisely the ones many of us
consider most characteristic of mind! As to the
"prematurity" of all this, you seem to have suddenly and most
conveniently forgotten that you were speaking of a "total turing
test" -- I presume an ultimate test that would encompass all that we
mean when we speak of something as having a "mind", a test that is
actually a generations-long research program.
As to whether or not "we all know what people do", I'm sure our
cognitive science people are just *aching* to have you come and tell
them that us humans "discriminate, categorize, manipulate, identify, and
describe". Just attach those pretty labels and the enormous preverbal
substratum of our consciousness just vanishes! Right? Oh yeah, I suppose
you provide rigorous definitions for these terms -- in your as
yet unpublished paper...
>Now let's get devices to (1) do it all (formal component) and then
>let's see whether (2) there's anything that we can detect informally
>that distinguishes these devices from other people we judge to have
>minds BY EXACTLY THE SAME CRITERIA (namely, total performance
>capacity). If not, they are turing-indistinguishable and we have no
>non-arbitrary basis for singling them out as not having minds.
You have an awfully peculiar notion of what "total" and "arbitrary"
mean, Steve: it's not "arbitrary" to exclude those traits that most
of us regard highly in other beings whom we presume to have minds.
Nor is it "arbitrary" to exclude the future findings of brain
research concerning the nature of our so-called "minds". Yet you
presume to be describing a "total turing test".
May I suggest that what you are describing is not a "test for mind", but
rather a "test for simulated intelligence", and the reason you will
not or cannot distinguish between the two is that you would elevate
today's primitive state of technology to a fixed methodological
standard for future generations. If we cannot cope with the problem,
why, we'll just define it away! Right? Is this not, to paraphrase
Paul Feyerabend, incompetence upheld as a standard of excellence?
-michael
Blessed be you, mighty matter, irresistible march of evolution,
reality ever new born; you who by constantly shattering our mental
categories force us to go further and further in our pursuit of the
truth.
-Pierre Teilhard de Chardin "Hymn of the Universe"
------------------------------
Date: 27 Nov 86 12:02:50 GMT
From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis)
Subject: Re: Turing Tests and Chinese Rooms
> Ray Trent
> 1) I've always been somewhat suspicious about the Turing Test. (1/2 :-)
>
> a) does anyone out there have any good references regarding
> its shortcomings. :-|
John Searle's notorious "Chinese Room" argument has probably
drawn out more discussion on this topic in recent times than
anything else I can think of. As far as I can tell, there seems
to be no consensus on this issue, only a broad spectrum
of philosophical stances, some of them apparently quite angry
(Hofstadter, for example). The most complete presentation I have yet
encountered is in the journal Behavioral and Brain Sciences
(1980), with a complete statement of Searle's original argument,
responses by folks like Fodor, Rorty, McCarthy, Dennett, Hofstadter,
Eccles, etc, and Searle's counterresponse.
People frequently have misconceptions of just what Searle is arguing,
the most common of these being:
Machines cannot have minds.
What Searle really argues is that:
The relation (mind:brain :: software:hardware) is fallacious.
Computers cannot have minds solely by virtue of their running the
correct program.
His position seems to derive from his thoughts in the philosophy of
language, and in particular his notion of Intentionality.
Familiarity with the work of Frege, Russell, Wittgenstein, Quine,
Austin, Putnam, and Kripke would really be helpful if you are
interested in the motivation behind this concept, but Searle
maintains that his Chinese room argument makes sense without any of
that background.
-michael
------------------------------
Date: 29 Nov 86 06:52:21 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
Peter O. Mikes <mordor!pom> at S-1 Project, LLNL wrote:
> An example of ["unexperienced experience"] is subliminal perception.
> Similar case is perception of outside world during
> dream, which can be recalled under hypnosis. Perception
> is not same as experience, and sensation is an ambiguous word.
Subliminal perception can hardly serve as a clarifying example since
its own existence and nature is anything but clearly established.
(See D. Holender (1986) "Semantic activation without conscious
identification," Behavioral and Brain Sciences 9: 1-66.) If subliminal
perception exists, the question is whether it is just a case of dim or
weak awareness, quickly forgotten, or the unconscious registration of
information. If it is the former, then it is merely a case of a weak
and subsequently forgotten conscious experience. If it is the latter,
then it is a case of unconscious processing -- one of many, for most
processes are unconscious (and studying them is the theoretical burden of
cognitive science).
Dreaming is a similar case. It is generally agreed (from studies in
which subjects are awakened during dreams) that subjects are conscious
during their dreams, although they remain asleep. This state is called
"paradoxical sleep," because the EEG shows signs of active, waking
activity even though the subject's eyes are closed and he continues to
sleep. Easily awakened in that stage of sleep, the subject can report
the contents of his dream, and indicates that he has been consciously
undergoing the experience, like a vivid day-dream or a hallucination.
If the subject is not awakened, however, the dream is usually
forgotten, and difficult if not impossible to recall. (As usual,
recognition memory is stronger than recall, so sometimes cues will be
recognized as having occurred in a forgotten dream.) None of this
bears on the issue of consciousness, since the consciousness during
dreams is relatively unproblematic, and the only other phenomenon
involved is simply the forgetting of an experience.
A third hypothetical possibility is slightly more interesting, but,
unfortunately, virtually untestable: Can there be unconscious
registration of information at time T, and then, at a later time, T1,
conscious recall of that information AS IF it had been experienced
consciously at T? This is a theoretical possibility. It would still
not make the event at T a conscious experience, but it would mean that
input information can be put on "hold" in such a way as to be
retrospectively experienced at a later time. The later experience
would still be a kind of illusion, in that the original event was NOT
actually experienced at T, as it appears to have been upon
reflection. The nervous system is probably playing many temporal (and
causal) tricks like that within very short time intervals; the question
only becomes dramatic when longer intervals (minutes, hours, days) are
interposed between T and T1.
None of these issues are merely definitional ones. It is true that
"perception" and "sensation" are ambiguous, but, fortunately,
"experience" seems to be less so. So one may want to separate
sensations and perceptions into the conscious and unconscious ones.
The conscious ones are the ones that we were consciously aware of
-- i.e., that we experienced -- when they occurred in real time. The
unconscious ones simply registered information in our brains at their
moment of real-time occurrence (without being experienced), and
the awareness, if any, came only later.
> suggest that we follow the example of acoustics, which solved the
> 'riddle' of falling tree by defining 'sound' as physical effect
> (density wave) and noise as 'unwanted sound' - so that The tree
> which falls in deserted place makes sound but does not make noise.
> Accordingly, perception can be unconcious but experience can't.
Based on the account you give, acoustics solved no problem. It merely
missed the point.
Again, the issue is not a definitional one. When a tree falls, all you
have is acoustic events. If an organism is nearby, you have acoustic
events and auditory events (i.e., physiological events in its nervous
system). If the organism is conscious, it hears a sound. But, unless
you are that organism, you can't know for sure about that. This is
called the mind/body problem. "Noise" and "unwanted sound" have
absolutely nothing to do with it.
> mind and consciousness (or something like that) should be a universal
> quantity, which could be applied to machine, computers...
> Since we know that there is no sharp division between living and
> nonliving, we should be able to apply the measure to everything
We should indeed be able to apply the concept conscious/nonconscious
to everything, just as we can apply the concept living/nonliving. The
question, however, remains: What is and what isn't conscious? And how are
we to know it? Here are some commonsense things to keep in mind. I
know of only one case of a conscious entity directly and with
certainty: My own. I infer that other organisms that behave more or
less the way I would are also conscious, although of course I can't be
sure. I also infer that a stone is not conscious, although of course I
can't be sure about that either. The problem is finding a basis for
making the inference in intermediate cases. Certainty will not be
possible in any case but my own. I have argued that the Total Turing
Test is a reasonable empirical criterion for cognitive science and a
reasonable intuitive criterion for the rest of us. Moreover, it has
the virtue of corresponding to the subjectively compelling criterion
we're already using daily in the case of all other minds but our own.
--
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
End of AIList Digest
********************