AIList Digest            Sunday, 19 Oct 1986      Volume 4 : Issue 227 

Today's Topics:
Philosophy - Searle, Turing

----------------------------------------------------------------------

Date: 16 Oct 86 09:10:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: yet more wrangling on Searle, Turing, ...


> Date: 10 Oct 86 13:47:46 GMT
> From: rutgers!princeton!mind!harnad@think.com (Stevan Harnad)
> Subject: Re: Searle, Turing, Symbols, Categories
>
> It is not always clear which of the two components a sceptic is
> worrying about. It's usually (ii), because who can quarrel with the
> principle that a veridical model should have all of our performance
> capacities? Now the only reply I have for the sceptic about (ii) is
> that he should remember that he has nothing MORE than that to go on in
> the case of any other mind than his own. In other words, there is no
> rational reason for being more sceptical about robots' minds (if we
> can't tell their performance apart from that of people) than about
> (other) peoples' minds.

This just ain't so... if we know, as we surely do, that the internals
of the robot (electronics, metal) are quite different from those
of other passersby (who presumably have regular ole brains), we might
well be more skeptical that robots' "consciousness" is the same as
ours. Briefly, I know:

1. that I have a brain
2. that I am conscious, and what my consciousness feels like
3. that I am capable of certain impressive types of performance,
like holding up my end of an English conversation.

It seems very reasonable to suppose that 3 depends on 2 depends
on 1. But 1 and 3 are objectively ascertainable for others as
well. So if a person has 1 and 3, and a robot has 3 but NOT 1,
I certainly have more reason to believe that the person has 2, than
that the robot does. One (rationally) believes other people are
conscious BOTH because of their performance and because their
internal stuff is a lot like one's own.

I am assuming here that "mind" implies consciousness, ie that you are
not simply defining "mind" as a set of external capabilities. If you
are, then of course, by (poor) definition, only external performance
is relevant. I would assert (and I think you would agree) that to
state "X has a mind" is to imply that X is conscious.

> ....So, since we have absolutely no intuitive idea about the functional
> (symbolic, nonsymbolic, physical, causal) basis of the mind, our only
> nonarbitrary basis for discriminating robots from people remains their
> performance.

Again, we DO have some idea about the functional basis for mind, namely
that it depends on the brain (at least more than on the pancreas, say).
This is not to contend that there might not be other bases, but for
now ALL the minds we know of are brain-based, and it's just not
dazzlingly clear whether this is an incidental fact or somewhat
more deeply entrenched.

> I don't think there's anything more rigorous than the total turing
> test ... Residual doubts about it come from
> four sources, ... (d) misplaced hold-outs for consciousness.
>
> Finally, my reply to (d) [mind bias] is that holding out for
> consciousness is a red herring. Either our functional attempts to
> model performance will indeed "capture" consciousness at some point, or
> they won't. If we do capture it, the only ones that will ever know for
> sure that we've succeeded are our robots. If we don't capture it,
> then we're stuck with a second level of underdetermination -- call it
> "subjective" underdetermination -- to add to our familiar objective
> underdetermination (b)...[i.e.,]
> there may be a further unresolvable uncertainty about whether or not
> they capture the unobservable basis of everything (or anything) that is
> subjectively observable.
>
> AI, robotics and cognitive modeling would do better to learn to live
> with this uncertainty and put it in context, rather than holding out
> for the un-do-able, while there's plenty of the do-able to be done.
>
> Stevan Harnad
> princeton!mind!harnad

I don't quite understand your reply. Why is consciousness a red herring
just because it adds a level of uncertainty?

1. If we suppose, as you do, that consciousness is so slippery that we
will never know more about its basis in humans than we do now, one
might still want to register the fact that our basis for believing in
the consciousness of competent robots is shakier than our basis for
believing in that of humans. This reservation does not preclude the
writing of further
Lisp programs.

2. But it's not obvious to me that we will never know more than we do
now about the relation of brain to consciousness. Even though any
correlations will ultimately be grounded on one side by introspection
reports, it does not follow that we will never know, with reasonable
assurance, which aspects of the brain are necessary for consciousness
and which are incidental. A priori, no one knows whether, eg,
being-composed-of-protein is incidental or not. I believe this is
Searle's point when he says that the brain may be as necessary for
consciousness as mammary glands are for lactation. Now at some level
of difficulty and abstraction, you can always engineer anything with
anything, ie make a computer out of play-doh. But the
"multi-realizability" argument has force only if it's obvious (which it
ain't) that the structure of the brain at a fairly high level (eg
neuron networks, rather than molecules), high enough to be duplicated
by electronics, is what's important for consciousness.


John Cugini <Cugini@NBS-VMS>

------------------------------

Date: 16 Oct 86 07:14:26 PDT (Thursday)
From: "charles_kalish.EdServices"@Xerox.COM
Subject: Turing Test(s?)

Maybe we should start a new mail group where we try to convince each
other that we understand the Turing test; if everybody fails, we go
back to the drawing board and design a new test.

And as the first entry:

In response to Daniel Simon's questioning of the appropriateness of this
test, I think the answer is that the Turing test is acceptable because
that's how we recognize each other as intelligent beings. Usually we
don't do it in a rigorous way because everybody always passes it. But
if I ask you to "please pass the Cheez-whiz" and you respond "Anita
Eckbart is marinating her poodle," then I would get a little suspicious
and ask more questions designed to figure out whether you're joking,
sick, hard of hearing, etc. Depending on your answers I may decide to
downgrade your status to less than full personhood.

About Stevan Harnad's two kinds of Turing tests: I can't really see
what difference the I/O methods of your system makes. It seems that the
relevant issue is what kind of representation of the world it has.
While I agree that to really understand, the system would need some
non-purely-conventional representation (not "semantic," if "semantic"
means "not operable on in a formal way"; since I believe [given that
the brain is a physical system] all mental processes are formal,
"semantic" just means "governed by a process we don't understand
yet"), giving and getting through certain kinds of I/O doesn't make
much difference. Two for
instances: SHRDLU operated on a simulated blocks world. The
modifications to make it operate on real blocks would have been
peripheral and would not have affected the understanding of the system. Also,
all systems take analog input and give analog output. Most receive
finger pressure on keys and return directed streams of ink or electrons.
It may be that a robot would need more "immediate" (as opposed to
conventional) representations, but it's neither necessary nor sufficient
to be a robot to have those representations.

P.s. don't ask me to be the moderator for this new group. The turing
test always assumes the moderator has some claim to expertise in the
matter.

------------------------------

Date: 16 Oct 86 17:11:04 GMT
From: eugene@AMES-AURORA.ARPA (Eugene miya)
Subject: Re: Turing, Symbols, Categories


In article <2552@utai.UUCP>, me@utai.UUCP (Daniel Simon) writes:
> In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> >
> >The turing test has two components, (i) a formal, empirical one,
> >and (ii) an informal, intuitive one. The formal empirical component (i)
> >is the requirement that the system being tested be able to generate human
> >performance (be it robotic or linguistic). That's the nontrivial
> >burden that will occupy theorists for at least decades to come, as we
> >converge on (what I've called) the "total" turing test -- a model that
> >exhibits all of our robotic and linguistic capacities.
>
> Moreover, you haven't said anything concrete about what this test might look
> like. On what foundation could such a set of defining characteristics for
> "human performance" be based? Would it define those attributes common to all
> human beings? Most human beings? At least one human being? How would we
> decide by what criteria to include observable attributes in our set of "human"
> ones? How could such attributes be described? Is such a set of descriptions
> even feasible? If not, doesn't it call into question the validity of seeking
> to model what cannot be objectively characterized? And if such a set of
> describable attributes is feasible, isn't it an indispensable prerequisite for
> the building of a working Turing-test-passing model?
>
> Again, you have all but admitted that the "total" Turing test you have
> described has nothing to do with the Turing test at all--it is a set of
> "objective observables" which can be verified through scientific examination.
> The thoughtful examiner and "comparison human" have been replaced with
> controlled scientific experiments and quantifiable results. What kinds of
> experiments? What kinds of results? WHAT DOES THE "TOTAL TURING TEST"
> LOOK LIKE?
>
> I know, I know. I ask a lot of questions. Call me nosy.
>
> Daniel R. Simon

Keep asking questions.

1) I deleted your final comment about database: note EXPERT SYSTEMS
(so called KNOWLEDGE-BASED SYSTEMS) ARE NOT AI.

2) I've been giving thought to what a `true' Turing test would be
like. I found Turing's original paper in Mind. This is what I have
concluded with light thinking for about 8 months:

a) No single question can answer the question of intelligence; how many,
then? I hope a finite, preferably small, or at least a countable, number.

b) The Turing test is what psychologists call a test of `discrimination.'
These tests should be carefully thought out for pre-test and post-test
experimental conditions (e.g., the answer to a current question may or
may not depend on the answer to an earlier [not necessarily the
immediately preceding] question).

c) Some of the questions will be confusing, sort of like the more
sophisticated eye tests like I just had. Note we introduce the
possibility of calling some humans "machines."

d) Early questions in the test, in particular those of quantitative
reasoning, should be timed as well as checked for accuracy. Turing
would want this; it was in his original paper.

e) The test must be prepared for ignorance on the part of humans and machines.
It should not simply take "I don't know," or "Not my taste" for
answers. It should be able to circle in on one's ignorance to
define the boundaries or character of the respondent's ignorance.

f) Turing would want a degree of humor. The humor would be a more
sophisticated type like punning or double entendres. Turing would
certainly consider gaming problems.

Turing mentions all these in his paper. Note that some of the original
qualities make AI uneconomical in the short term. Who wants a computer
which makes errors in addition? Especially if it's dealing with my paycheck.
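Point (d) above, timing quantitative answers as well as checking them, can be sketched as a tiny scoring harness. Everything here (the questions, the 0.5-second threshold, the scoring rule) is invented for illustration; the point is only that an instant, error-free arithmetic answer is itself grounds for suspicion, which is exactly why a machine that never makes adding errors gives itself away.

```python
import time

# Hypothetical quantitative items: (question, correct answer).
QUESTIONS = [
    ("What is 17 * 24?", 408),
    ("Add 34957 to 70764.", 105721),
]

def ask(question, respond):
    """Pose one question; return (answer, seconds taken to answer)."""
    start = time.monotonic()
    answer = respond(question)
    return answer, time.monotonic() - start

def suspicion(results, fast=0.5):
    """Count answers that are both correct and suspiciously fast."""
    count = 0
    for (question, correct), (answer, secs) in results:
        if answer == correct and secs < fast:
            count += 1
    return count

# A simulated respondent that answers instantly and perfectly --
# precisely the behaviour such a test should flag as machine-like.
machine = lambda q: dict(QUESTIONS)[q]
results = [((q, a), ask(q, machine)) for q, a in QUESTIONS]
print(suspicion(results))
```

A real discrimination test would score latency over many items rather than a single threshold; the sketch only shows that response time is data, not just correctness.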

I add that

a) We should check for `personal values' and `compassion,' which might be
traits or artifacts of the person or team responsible for the programming.
The test should exploit those areas as possible lines of weakness or
strength.

b) The test should have a degree of dynamic problem solving.

c) The test might have characteristics like that test in the film Blade
Runner. Note: memories != intelligence, but the question might be
posed to the respondent in such a way: "Your wife and your daughter
have fallen into the water. You can only save one. Whom do you save,
and why?"


d) Consider looking at the WAIS, WISC, the Stanford-Binet, the MMPI
(currently being updated), the Peabody, and numerous other tests of
intelligence and personality, etc. Note there are tests which
distinguish split brain people. They are simple tests. Consider
the color-blindness tests: simple if you are not color blind,
confusing if you are. There is a whole body of psychometric
literature which Turing did not consult.

As you can guess, such a test cannot be easily placed as a sequence on
paper, but as a program in a dumb machine, it is certainly possible.
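The remark that the test lives more naturally as a program than as a fixed sequence on paper can be illustrated with a minimal branching script: later questions depend on earlier answers, and an "I don't know" triggers follow-ups that circle in on the boundary of the respondent's ignorance (points (b) and (e) above). The questions and the respondent here are invented for the example.

```python
def probe(respond):
    """Run an adaptive question sequence; return the transcript."""
    transcript = []

    def ask(question, followups=None):
        answer = respond(question)
        transcript.append((question, answer))
        # Do not take "I don't know" at face value: narrow in on the
        # character of the respondent's ignorance with follow-ups.
        if followups and answer.strip().lower() in ("i don't know",
                                                    "not my taste"):
            for q in followups:
                ask(q)
        return answer

    ask("Can you summarise the plot of a novel you dislike?",
        followups=["Have you read any novels at all?",
                   "What was the last book you finished?"])
    return transcript

# A respondent that always pleads ignorance draws out every follow-up.
evasive = lambda q: "I don't know"
for question, answer in probe(evasive):
    print(question, "->", answer)
```

Because the next question is computed from the previous answer, no fixed printed list of questions could reproduce this behaviour, which is the author's point about a "dumb machine" administering the test.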

As a last thought: the paper in Mind was published in 1950. Turing
made a comment about "computers with the capacity of a billion [what
he did not say]" and the "turn of the Century." I suggested to Doug
Hofstadter (visiting here one day) that we hold a 50th anniversary
celebration in the year 2000 of the publication of Turing's paper, and
he agreed.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"You trust the `reply' command with all those different mailers out there?"
{hplabs,hao,nike,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene
I need a turing machine to route my mail.

------------------------------

End of AIList Digest
********************
