AIList Digest            Sunday, 19 Oct 1986      Volume 4 : Issue 228 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 16 Oct 86 17:25:42 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories


In reply to the following by me in <167@mind.UUCP>:

> there is no evidence at all that
> either capacities or contexts are modular.

michaelm@bcsaic.UUCP (michael maxwell) writes:

>> Maybe I'm reading this out of context (not having read your books or papers),
>> but could you explain this statement? I know of lots of evidence for the
>> modularity of various aspects of linguistic behavior. In fact, we have a
>> parser + grammar of English here that captures a large portion of English
>> syntax, but has absolutely no semantics (yet).

I'm afraid this extract is indeed a bit out of context. The original
context concerned what I've dubbed the "Total Turing Test," one
in which ALL of our performance capacities -- robotic and linguistic --
are "captured." In the papers under discussion I described several
arguments in favor of the Total Turing Test over any partial
turing test, such as "toy" models that only simulate a small
chunk of our cognitive performance capacity, or even the (subtotal)
linguistic ("teleteype") version of the Total Turing Test. These
arguments included:

(3) The "Convergence Argument" that `toy' problems are arbitrary,
that they have too many degrees of freedom, that the d.f. shrink as the
capacities of the toy grow to life-size, and that the only version that
reduces the underdetermination to the normal proportions of a
scientific theory is the `Total' one.

(5) The "Nonmodularity Argument" that no subtotal model constitutes a
natural module (insofar as the turing test is concerned); the only
natural autonomous modules are other organisms, with their complete
robotic capacities (more of this below).

(7) The "Robotic Functionalist Argument" that the entire symbolic
functional level is no macromodule either, and needs to be grounded
in robotic function.

I happen to have views on the "autonomy of syntax" (which is of
course the grand-daddy of the current modulo-mania), but they're not
really pertinent to the total vs modular turing-test issue. Perhaps
the only point about an autonomous parser that is relevant here is
that it is in the nature of the informal, intuitive component of the
turing test that lifeless fragments of mimicry (such as Searle's isolated
`thirst' module) are not viable; they simply fail to convince us of
anything. And rightly so, I should think; otherwise the turing test
would be a pretty flimsy one.

Let me add, though, that even "convincing" autonomous parsing performance
(in the non-turing sense of convincing) seems to me to be rather weak
evidence for the psychological reality of a syntactic module -- let
alone that it has a mind. (On my theory, semantic performance has to be
grounded in robotic performance and syntactic performance must in turn
be grounded in semantic performance.)

Stevan Harnad
(princeton!mind!harnad)

------------------------------

Date: Thu 16 Oct 86 17:55:00-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: symbols

Stevan Harnad has answered Drew Lawson nicely, but I can't help adding this
thought: if he saw a symbol of a car coming and DIDN'T get out of the way,
would the resulting change of his state be a purely symbolic one?
Pat Hayes

------------------------------

Date: 17 Oct 1986 1329-EDT
From: Bruce Krulwich <KRULWICH@C.CS.CMU.EDU>
Subject: symbols: syntax vs semantics


I think the main thing I disagree with in Searle's work
and in recent points in this discussion is the claim that symbols,
and in general any entity that a computer processes, can only
be dealt with in terms of syntax. I disagree. For example, when
I add two integers, the bits in which the integers are encoded are
interpreted semantically to combine into an integer. The same
could be said about a symbol that I pass to a routine in an
object-oriented system such as CLU, where what is done with
the symbol depends on its type (which I claim is its semantics).
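
For concreteness, here is a minimal sketch of that kind of type-directed
dispatch, in Python rather than CLU; the function and the example values
are illustrative, not drawn from any system mentioned above.

    def describe(obj):
        # Dispatch on the runtime type of the argument: what is done with
        # the value depends on its type, not on its bit pattern alone.
        if isinstance(obj, int):
            return f"an integer; adding 1 yields {obj + 1}"
        if isinstance(obj, str):
            return f"a string of length {len(obj)}"
        return f"a value of unhandled type {type(obj).__name__}"

    print(describe(41))      # an integer; adding 1 yields 42
    print(describe("mind"))  # a string of length 4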

I think that the reason computers are so far behind the
human brain in semantic interpretation and in "thinking" generally
is that the brain contains a hell of a lot more information
than most computer systems, and also makes associations
much faster, so an object (i.e., a thought) is associated with
its semantics almost instantly.


bruce krulwich

arpa: krulwich@c.cs.cmu.edu
bitnet: bk0a%tc.cc.cmu.edu@cmuccvma
uucp: (??) ... uw-beaver!krulwich@c.cs.cmu.edu
or ... ucbvax!krulwich@c.cs.cmu.edu

"Life's too short to ponder garbage"

------------------------------

Date: Fri 17 Oct 86 10:04:51-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: turing test

Daniel R. Simon has worries about the Turing test. A good place to find
intelligent discussion of these issues is Turing's original article,
"Computing Machinery and Intelligence," in MIND, October 1950, v. 59,
pages 433 to 460.

Pat Hayes
PHAYES@SRI-KL

------------------------------

Date: 14 Oct 86 21:20:53 GMT
From: adelie!axiom!linus!philabs!pwa-b!mmintl!franka@ll-xn.arpa (Frank Adams)
Subject: Re: Searle, Turing, Symbols, Categories

In article <166@mind.UUCP> harnad@mind.UUCP writes:
>What I mean by a symbol is an
>arbitrary formal token, physically instantiated in some way (e.g., as
>a mark on a piece of paper or the state of a 0/1 circuit in a
>machine) and manipulated according to certain formal rules. The
>critical thing is that the rules are syntactic, that is, the symbol is
>manipulated on the basis of its shape only -- which is arbitrary,
>apart from the role it plays in the formal conventions of the syntax
>in question. The symbol is not manipulated in virtue of its "meaning."
>Its meaning is simply an interpretation we attach to the formal
>goings-on. Nor is it manipulated in virtue of a relation of
>resemblance to whatever "objects" it may stand for in the outside
>world, or in virtue of any causal connection with them. Those
>relations are likewise mediated only by our interpretations.
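
To make the quoted picture concrete, here is a minimal sketch of
shape-only symbol manipulation; the rewrite rules below are invented
purely for illustration.

    # Tokens are rewritten according to formal rules that consult only
    # their shapes, never any interpretation attached to them.
    rules = {
        ("A", "B"): "C",  # replace an adjacent pair A,B with the token C
        ("C", "C"): "A",
    }

    def rewrite_once(tokens):
        # Apply the first matching rule, scanning left to right.
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in rules:
                return tokens[:i] + [rules[pair]] + tokens[i + 2:]
        return tokens  # no rule applies

    print(rewrite_once(list("AABB")))  # ['A', 'C', 'B']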

I see two problems with respect to this viewpoint. One is that relating
purely symbolic functions to external events is essentially a solved
problem. Digital audio recording, for example, works quite well. Robotic
operations generally fail, when they do, not because of any problems with
the digital control of an analog process, but because the purely symbolic
portion of the process is inadequate. In other words, there is every reason
to expect that a computer program able to pass the Turing test could be
extended to one able to pass the robotic version of the Turing test,
requiring additional development effort which is tiny by comparison (though
likely still measured in man-years).
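
To make the A/D step concrete, here is a minimal sketch of sampling and
quantization, the sense in which digital audio ties symbols to an analog
signal; the parameters are illustrative.

    import math

    SAMPLE_RATE = 8000   # samples per second
    LEVELS = 256         # 8-bit quantization

    def sample(signal, seconds):
        # Evaluate the analog signal at discrete instants.
        n = int(SAMPLE_RATE * seconds)
        return [signal(t / SAMPLE_RATE) for t in range(n)]

    def quantize(x):
        # Map an amplitude in [-1, 1] to an integer code in [0, LEVELS - 1].
        x = max(-1.0, min(1.0, x))
        return int((x + 1.0) / 2.0 * (LEVELS - 1))

    tone = lambda t: math.sin(2.0 * math.pi * 440.0 * t)  # a 440 Hz sine
    print([quantize(x) for x in sample(tone, 0.001)])     # eight integer codes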

Secondly, even in a purely formal environment, there turn out to be a lot of
real things to talk about. Primitive concepts of time (before and after)
are understandable. One can talk about nouns and verbs, sentences and
conversations, self and other. I don't see any fundamental difference
between the ability to deal with symbols as real objects, and the ability to
deal with other kinds of real objects.

Frank Adams ihnp4!philabs!pwa-b!mmintl!franka
Multimate International 52 Oakland Ave North E. Hartford, CT 06108

------------------------------

Date: 17 Oct 86 19:35:51 GMT
From: adobe!greid@glacier.stanford.edu
Subject: Re: Searle, Turing, Symbols, Categories

It seems to me that the idea of concocting a universal Turing test is sort
of useless.

Consider, for a moment, monsters. There have been countless monsters on TV
and film that have had varying degrees of human-ness, and as we watch the
plot progress, we are sort of administering the Turing test. Some of the
better ones, like the replicants in "Blade Runner", are very difficult to detect as
non-human. However, given enough time, we will eventually notice that they
don't sleep, or that they drink motor oil, or that they don't bleed when
they are cut (think of "Terminator" and surgery for a minute), and we start
to think of alternative explanations for the aberrations we have noticed. If
we are watching TV, we figure it is a monster. If we are walking down the
street and we see somebody get their arm cut off and they don't bleed, we
think *we* are crazy (or we suspect "special effects" and start looking for
the movie camera), because there is no other plausible explanation.

There are even human beings whom we question when one of our subconscious
"tests" fails -- because of language barriers, brain damage, etc. If you think
about it, there are lots of human beings who would not pass the Turing test.

Let's forget about it.

Glenn Reid
Adobe Systems

Adobe claims no knowledge of anything in this message.

------------------------------

Date: 18 Oct 86 15:16:14 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories


In response to some of the arguments in favor of the robotic over the
symbolic version of the turing test in (the summaries of) my articles
"Minds, Machines and Searle" and "Category Induction and Representation"
franka@mmintl.UUCP (Frank Adams) replies:

> [R]elating purely symbolic functions to external events is
> essentially a solved problem. Digital audio recording, for
> example, works quite well. Robotic operations generally fail,
> when they do, not because of any problems with the digital
> control of an analog process, but because the purely symbolic
> portion of the process is inadequate. In other words, there is
> every reason to expect that a computer program able to pass the
> [linguistic version of the] Turing test could be extended to one
> able to pass the robotic version...requiring additional development
> effort which is tiny by comparison (though likely still measured
> in man-years).

This argument has become quite familiar to me from delivering the oral
version of the papers under discussion. It is the "Triviality of
Transduction [A/D conversion, D/A conversion, Effectors] Argument" (TT
for short).

Among my replies to TT, the central one is the principled
Antimodularity Argument: There are reasons to believe that the neat
partitioning of function into autonomous symbolic and nonsymbolic modules
may break down in the special case of mind modeling. These reasons
include my "Groundedness" Argument: that unless cognitive symbols are
grounded (psychophysically, bottom-up) in nonsymbolic processes they remain
meaningless. (This amounts to saying that we must be intrinsically
"dedicated" devices and that our A/D and our "decryption/encryptions"
are nontrivial; in passing, this is also a reply to Searle's worries
about "intrinsic" versus "derived" intentionality. It may also be the
real reason why "the purely symbolic portion of the process is inadequate"!)
This problem of grounding symbolic processes in nonsymbolic ones in the
special case of cognition is also the motivation for the material on category
representation.

Apart from nonmodularity and groundedness, other reasons include:

(1) Searle's argument itself, and the fact that only the transduction
argument can block it; that's some prima facie ground for believing
that the TT may be false in the special case of mind-modeling.

(2) The triviality of ordinary (nonbiological) transduction and its
capabilities, compared to what organisms with senses (and minds) can
do. (Compare the I/O capacities of "audio" devices with those of
"auditory" ones; the nonmodular road to the capacity to pass the total
turing test suggests that we are talking here about qualitative
differences, not quantitative ones.)

(3) Induction (both ontogenetic and phylogenetic) and inductive capacity
play an intrinsic and nontrivial role in bio-transduction that they do
not play in ordinary engineering peripherals, or the kinds of I/O
problems these have been designed for.

(4) Related to the Simulation/Implementation Argument: There are always
more real-world contingencies than can be anticipated in a symbolic
description or simulation. That's why category representations are
approximate and the turing test is open-ended.

For all these reasons, I believe that Object/Symbol conversion in
cognition is a considerably more profound problem than ordinary A/D;
orders of magnitude more profound, in fact, and hence that TT is
false.

> [E]ven in a purely formal environment, there turn out to be a
> lot of real things to talk about. Primitive concepts of time
> (before and after) are understandable. One can talk about nouns
> and verbs, sentences and conversations, self and other. I don't
> see any fundamental difference between the ability to deal with
> symbols as real objects, and the ability to deal with other kinds
> of real objects.

I don't completely understand the assumptions being made here. (What
is a "purely formal environment"? Does anyone you know live in one?)
Filling in with some educated guesses here, I would say that again the
Object/Symbol conversion problem in the special case of organisms'
mental capacities is being vastly underestimated. Object-manipulation
(including discrimination, categorization, identification and
description) is not a mere special case of symbol-manipulation or
vice versa. One must be grounded in the other in a principled way, and
the principles are not yet known.

On another interpretation, perhaps you are talking about "deixis" --
the necessity, even in the linguistic (symbolic) version of the turing
test, to be able to refer to real objects in the here-and-now. I agree that
this is a deep problem, and conjecture that its solution in the
symbolic version will have to draw on anterior nonsymbolic (i.e.,
robotic) capacities.

Stevan Harnad
princeton!mind!harnad

------------------------------

End of AIList Digest
********************
