
AIList Digest Volume 4 Issue 230


AIList Digest           Thursday, 23 Oct 1986     Volume 4 : Issue 230 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories &
Reflexes as a Test of Self

----------------------------------------------------------------------

Date: 19 Oct 86 02:30:24 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories


greid@adobe.UUCP (Glenn Reid) writes:

> [C]oncocting a universal Turing test is sort of useless... There
> have been countless monsters on TV...[with] varying degrees of
> human-ness...Some...very difficult to detect as being non-human.
> However, given enough time, we will eventually notice that they
> don't sleep, or that they drink motor oil...

The objective of the turing test is to judge whether the candidate
has a mind, not whether it is human or drinks motor oil. We must
accordingly consult our intuitions as to what differences are and are
not relevant to such a judgment. [Higher animals, for example, have no
trouble at all passing (the animal version of) the turing test as far
as I'm concerned. Why should aliens, monsters or robots, if they have what
it takes in the relevant respects? As I have argued before, turing-testing
for relevant likeness is really our only way of contending with the
"other-minds" problem.]

> [T]here are lots of human beings who would not pass the Turing
> test [because of brain damage, etc.].

And some of them may not have minds. But we give them the benefit of
the doubt for humanitarian reasons anyway.

Stevan Harnad
(princeton!mind!harnad)

------------------------------

Date: 19 Oct 86 14:59:49 GMT
From: clyde!watmath!watnot!watdragon!rggoebel@caip.rutgers.edu
(Randy Goebel LPAIG)
Subject: Re: Searle, Turing, Symbols, Categories

Stevan Harnad writes:
> ...The objective of the turing test is to judge whether the candidate
> has a mind, not whether it is human or drinks motor oil.

This stuff is getting silly. I doubt that it is possible to test whether
something has a mind, unless you provide a definition of what you believe
a mind is. Turing's test wasn't a test for whether or not some artificial
or natural entity had a mind. It was his prescription for an evaluation of
intelligence.

------------------------------

Date: 20 Oct 86 14:59:30 GMT
From: rutgers!princeton!mind!harnad@Zarathustra.Think.COM (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories


rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies:

> I doubt that it is possible to test whether something has a mind,
> unless you provide a definition of what you believe a mind is.
> Turing's test wasn't a test for whether or not some artificial
> or natural entity had a mind. It was his prescription for an
> evaluation of intelligence.

And what do you think "having intelligence" is? Turing's criterion
effectively made it: having performance capacity that is indistinguishable
from human performance capacity. And that's all "having a mind"
amounts to (by this objective criterion). There's no "definition" in
any of this, by the way. We'll have definitions AFTER we have the
functional answers about what sorts of devices can and cannot do what
sorts of things, and how and why. For the time being all you have is a
positive phenomenon -- having a mind, having intelligence -- and
an objective and intuitive criterion for inferring its presence in any
other case than one's own. (In your own case you presumably know what
it's like to have-a-mind/have-intelligence on subjective grounds.)

Stevan Harnad
princeton!mind!harnad

------------------------------

Date: 21 Oct 86 20:53:49 GMT
From: uwslh!lishka@rsch.wisc.edu (a)
Subject: Re: Searle, Turing, Symbols, Categories

In article <5@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies:
>
>> I doubt that it is possible to test whether something has a mind,
>> unless you provide a definition of what you believe a mind is.
>> Turing's test wasn't a test for whether or not some artificial
>> or natural entity had a mind. It was his prescription for an
>> evaluation of intelligence.
>
>And what do you think "having intelligence" is? Turing's criterion
>effectively made it: having performance capacity that is indistinguishable
>from human performance capacity. And that's all "having a mind"
>amounts to (by this objective criterion). There's no "definition" in
>any of this, by the way. We'll have definitions AFTER we have the
>functional answers about what sorts of devices can and cannot do what
>sorts of things, and how and why. For the time being all you have is a
>positive phenomenon -- having a mind, having intelligence -- and
>an objective and intuitive criterion for inferring its presence in any
>other case than one's own. (In your own case you presumably know what
>it's like to have-a-mind/have-intelligence on subjective grounds.)
>
>Stevan Harnad

How does one go about testing for something when one does not know
what that something is? My basic problem with all this is the two
keywords 'mind' and 'intelligence'. I don't think that what S. Harnad
means when referring to 'mind' and 'intelligence' is what I mean by
'mind' and 'intelligence', and I presume others are having this
problem (see first article above).
I think a fair example is trying to 'test' for UFO's. How does one
do this if (a) we don't know what they are and (b) we don't really know if
they exist (is it the same with magnetic monopoles?). What are we really
testing for in the case of UFO's? I think the answer here is a little
clearer than for 'mind', because people generally seem to have an idea of
what a UFO is (an Unidentified Flying Object). Therefore, the minute we
come across something really strange that falls from the sky and can in
no way be identified, we label it a UFO (and then try to explain it somehow).
However, until this happens (and whether it has already happened depends
on what you believe), we can't test specifically for UFO's [at least from
how I look at it].
How then does one test for 'mind' or 'intelligence'? These
definitions are even less clear. Ask one scientist what he thinks
'mind' and 'intelligence' are, and then ask another. Chances are that their
definitions will differ. Now ask a Christian and a Buddhist. Their
answers will differ even more. However, I don't think any one answer will
be more valid than the others. Now, if one defines 'mind' before
testing for it, then everyone will have a pretty good idea of what is
being tested for. But if one refuses to define it, there are going to be a
h*ll of a lot of arguments (as it seems there already have been in this
discussion). The same goes for 'intelligence'.
I honestly don't see how one can apply the Total Turing Test,
because the minute one finds a fault, the test has failed. In fact, even
if the person who created the 'robot' somehow realizes that his creation
is different, for me the test fails. But this has all been discussed
before. However, trying to use 'intelligence' or having a 'mind' as one
of the criteria for this test, when one expects to arrive at a useful
definition "along the way", seems to be sort of silly (from my point of
view).
I speak only for myself. I do think, though, that the above reasons
have contributed to what has become more a fight of basic beliefs than
anything else. I will also add my vote that this discussion move away from
'the Total Turing Test' and continue on to something a little less "talked
into the dirt".
Chris Lishka
Wisconsin State Lab of Hygiene

[qualifier: nothing above reflects the views of my employers,
although my pets may be in agreement with these views]

------------------------------

Date: 22 Oct 86 04:29:21 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories


lishka@uwslh.UUCP (Chris Lishka) asks:


> How does one go about testing for something when one does not know
> what that something is? My basic problem with all this
> [discussion about the Total Turing Test] is the two
> keywords 'mind' and 'intelligence'. I don't think that what S. Harnad
> means when referring to 'mind' and 'intelligence' is what I mean by
> 'mind' and 'intelligence', and I presume others are
> having this problem...

You bet others are having this problem. It's called the "other minds"
problem: How can you know whether anyone/anything else but you has a mind?

> Now, if one is to define 'mind' before testing for it, then
> everyone will have a pretty good idea of what he was testing for.

What makes people think that the other-minds problem will be solved or
simplified by definitions? Do you need a definition to know whether
YOU have a mind or intelligence? Well then take the (undefined)
phenomenon that you know is true of you to be what you're trying to
ascertain about robots (and other people). What's at issue here is not the
"definition" of what that phenomenon is, but whether the Total Turing
Test is the appropriate criterion for inferring its presence in entities
other than yourself.

[I don't believe, by the way, that empirical science or even
mathematics proceeds "definition-first." First you test for the
presence and boundary conditions of a phenomenon (or, in mathematics,
you test whether a conjecture is true), then you construct and test
a causal explanation (or, in mathematics, you do a formal proof), THEN
you provide a definition, which usually depends heavily on the nature
of the explanatory theory (or proof) you've come up with.]

Stevan Harnad
princeton!mind!harnad

------------------------------

Date: 20 Oct 86 18:00:11 GMT
From: ubc-vision!ubc-cs!andrews@BEAVER.CS.WASHINGTON.EDU
Subject: Re: A pure conjecture on the nature of the self

In article <11786@glacier.ARPA> jbn@glacier.ARPA (John B. Nagle) writes:
>... The reflexes behind tickling
>seem to be connected to something that has a good way of deciding
>what is self and what isn't.

I would suspect it has more to do with "predictability" -- you
can predict, in some sense, where you will feel tickling, therefore you
don't feel it in the same way. It's similar to the blinking "reflex"
to a looming object: if the looming object is someone else's hand
you blink; if it's your own hand, you don't.

The predictability may come from a sense of self, but I think
it's more likely to come from the fact that you're fully aware of
what is going to happen next when it's your own movements giving
the stimulus.

--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"Now it's dark"

------------------------------

End of AIList Digest
********************
