AIList Digest Monday, 6 Oct 1986 Volume 4 : Issue 206
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 27 Sep 86 14:20:21 GMT
From: princeton!mind!harnad@caip.rutgers.edu (Stevan Harnad)
Subject: Searle, Turing, Symbols, Categories
The following are the Summary and Abstract, respectively, of two papers
I've been giving for the past year on the colloquium circuit. The first
is a joint critique of Searle's argument AND of the symbolic approach
to mind-modelling, and the second is an alternative proposal and a
synthesis of the symbolic and nonsymbolic approaches to the induction
and representation of categories.
I'm about to publish both papers, but on the off chance that
there is still a conceivable objection that I have not yet rebutted,
I am inviting critical responses. The full preprints are available
from me on request (and I'm still giving the talks, in case anyone's
interested).
***********************************************************
Paper #1:
(Preprint available from author)
MINDS, MACHINES AND SEARLE
Stevan Harnad
Behavioral & Brain Sciences
20 Nassau Street
Princeton, NJ 08542
Summary and Conclusions:
Searle's provocative "Chinese Room Argument" attempted to
show that the goals of "Strong AI" are unrealizable.
Proponents of Strong AI are supposed to believe that (i) the
mind is a computer program, (ii) the brain is irrelevant,
and (iii) the Turing Test is decisive. Searle's point is
that since the programmed symbol-manipulating instructions
of a computer capable of passing the Turing Test for
understanding Chinese could always be performed instead by a
person who could not understand Chinese, the computer can
hardly be said to understand Chinese. Such "simulated"
understanding, Searle argues, is not the same as real
understanding, which can only be accomplished by something
that "duplicates" the "causal powers" of the brain. In the
present paper the following points have been made:
1. Simulation versus Implementation:
Searle fails to distinguish between the simulation of a
mechanism, which is only the formal testing of a theory, and
the implementation of a mechanism, which does duplicate
causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be
expected to understand than a simulated airplane can be
expected to fly. Nevertheless, a successful simulation must
capture formally all the relevant functional properties of a
successful implementation.
2. Theory-Testing versus Turing-Testing:
Searle's argument conflates theory-testing and Turing-
Testing. Computer simulations formally encode and test
models for human perceptuomotor and cognitive performance
capacities; they are the medium in which the empirical and
theoretical work is done. The Turing Test is an informal and
open-ended test of whether or not people can discriminate
the performance of the implemented simulation from that of a
real human being. In a sense, we are Turing-Testing one
another all the time, in our everyday solutions to the
"other minds" problem.
3. The Convergence Argument:
Searle fails to take underdetermination into account. All
scientific theories are underdetermined by their data; i.e.,
the data are compatible with more than one theory. But as
the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This
"convergence" constraint applies to AI's "toy" linguistic
and robotic models as well, as they approach the capacity to
pass the Total (asymptotic) Turing Test. Toy models are not
modules.
4. Brain Modeling versus Mind Modeling:
Searle also fails to note that the brain itself can be
understood only through theoretical modeling, and that the
boundary between brain performance and body performance
becomes arbitrary as one converges on an asymptotic model of
total human performance capacity.
5. The Modularity Assumption:
Searle implicitly adopts a strong, untested "modularity"
assumption to the effect that certain functional parts of
human cognitive performance capacity (such as language) can
be successfully modeled independently of the rest (such
as perceptuomotor or "robotic" capacity). This assumption
may be false for models approaching the power and generality
needed to pass the Total Turing Test.
6. The Teletype versus the Robot Turing Test:
Foundational issues in cognitive science depend critically
on the truth or falsity of such modularity assumptions. For
example, the "teletype" (linguistic) version of the Turing
Test could in principle (though not necessarily in practice)
be implemented by formal symbol-manipulation alone (symbols
in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside
world (seeing, doing AND linguistic understanding).
7. The Transducer/Effector Argument:
Prior "robot" replies to Searle have not been principled
ones. They have added on robotic requirements as an
arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily
nonsymbolic, drawing on analog and analog-to-digital
functions that can only be simulated, but not implemented,
symbolically.
8. Robotics and Causality:
Searle's argument hence fails logically for the robot
version of the Turing Test, for in simulating it he would
either have to USE its transducers and effectors (in which
case he would not be simulating all of its functions) or he
would have to BE its transducers and effectors, in which
case he would indeed be duplicating their causal powers (of
seeing and doing).
9. Symbolic Functionalism versus Robotic Functionalism:
If symbol-manipulation ("symbolic functionalism") cannot in
principle accomplish the functions of the transducer and
effector surfaces, then there is no reason why every
function in between has to be symbolic either. Nonsymbolic
function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental
states ("robotic functionalism"): In order to work as
hypothesized, the functionalist's "brain-in-a-vat" may have
to be more than just an isolated symbolic "understanding"
module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.
10. "Strong" versus "Weak" AI:
Finally, it is not at all clear that Searle's "Strong
AI"/"Weak AI" distinction captures all the possibilities, or
is even representative of the views of most cognitive
scientists.
Hence, most of Searle's argument turns out to rest on
unanswered questions about the modularity of language and
the scope of the symbolic approach to modeling cognition. If
the modularity assumption turns out to be false, then a
top-down symbol-manipulative approach to explaining the mind
may be completely misguided because its symbols (and their
interpretations) remain ungrounded -- not for Searle's
reasons (since Searle's argument shares the cognitive
modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the
kind of hybrid, bottom-up processing that may then turn out
to be optimal, or even essential, in between transducers and
effectors). What is undeniable is that a successful theory
of cognition will have to be computable (simulable), if not
exclusively computational (symbol-manipulative). Perhaps
this is what Searle means (or ought to mean) by "Weak AI."
*************************************************************
Paper #2:
(To appear in: "Categorical Perception"
S. Harnad, ed., Cambridge University Press 1987
Preprint available from author)
CATEGORY INDUCTION AND REPRESENTATION
Stevan Harnad
Behavioral & Brain Sciences
20 Nassau Street
Princeton NJ 08542
Categorization is a very basic cognitive activity. It is
involved in any task that calls for differential responding,
from operant discrimination to pattern recognition to naming
and describing objects and states-of-affairs. Explanations
of categorization range from nativist theories denying that
any nontrivial categories are acquired by learning to
inductivist theories claiming that most categories are learned.
"Categorical perception" (CP) is the name given to a
suggestive perceptual phenomenon that may serve as a useful
model for categorization in general: For certain perceptual
categories, within-category differences look much smaller
than between-category differences even when they are of the
same size physically. For example, in color perception,
differences between reds and differences between yellows
look much smaller than equal-sized differences that cross
the red/yellow boundary; the same is true of the phoneme
categories /ba/ and /da/. Indeed, the effect of the category
boundary is not merely quantitative, but qualitative.
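To illustrate with a toy numerical sketch that is not from the paper
(the sigmoid warping and its boundary and gain values are
hypothetical), two pairs of stimuli with the SAME physical separation
can come out very different perceptually when one pair straddles a
category boundary:

import math

BOUNDARY = 0.5   # hypothetical category boundary on a 0..1 continuum
GAIN = 20.0      # how sharply perception is warped near the boundary

def perceived(x: float) -> float:
    """Map a physical value onto a warped perceptual coordinate."""
    return 1.0 / (1.0 + math.exp(-GAIN * (x - BOUNDARY)))

def perceived_difference(a: float, b: float) -> float:
    """Perceived distance between two physically specified stimuli."""
    return abs(perceived(a) - perceived(b))

# Two pairs with the same physical separation (0.1):
within  = perceived_difference(0.30, 0.40)  # both on one side of the boundary
between = perceived_difference(0.45, 0.55)  # straddles the boundary

print(f"within-category  difference: {within:.3f}")   # about 0.10
print(f"between-category difference: {between:.3f}")  # about 0.46

The between-category difference comes out several times larger, which
is the boundary effect described above.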
There have been two theories to explain CP effects. The
"Whorf Hypothesis" explains color boundary effects by
proposing that language somehow determines our view of
reality. The "motor theory of speech perception" explains
phoneme boundary effects by attributing them to the patterns
of articulation required for pronunciation. Both theories
seem to raise more questions than they answer, for example:
(i) How general and pervasive are CP effects? Do they occur
in other modalities besides speech-sounds and color? (ii)
Are CP effects inborn or can they be generated by learning
(and if so, how)? (iii) How are categories internally
represented? How does this representation generate
successful categorization and the CP boundary effect?
Some of the answers to these questions will have to come
from ongoing research, but the existing data do suggest a
provisional model for category formation and category
representation. According to this model, CP provides our
basic or elementary categories. In acquiring a category we
learn to label or identify positive and negative instances
from a sample of confusable alternatives. Two kinds of
internal representation are built up in this learning by
"acquaintance": (1) an iconic representation that subserves
our similarity judgments and (2) an analog/digital feature-
filter that picks out the invariant information allowing us
to categorize the instances correctly. This second,
categorical representation is associated with the category
name. Category names then serve as the atomic symbols for a
third representational system, the (3) symbolic
representations that underlie language and that make it
possible for us to learn by "description."
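The following is a minimal programmatic sketch of how the three kinds
of representation might fit together; the class names, the toy "horse"
detector, and the zebra description are hypothetical stand-ins for
illustration only, not part of the model itself.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class IconicRepresentation:
    """An analog trace of the proximal input; subserves similarity judgments."""
    trace: List[float]

    def similarity(self, other: "IconicRepresentation") -> float:
        # Crude analog similarity: negative summed difference.
        return -sum(abs(a - b) for a, b in zip(self.trace, other.trace))

@dataclass
class CategoricalRepresentation:
    """A feature filter picking out the invariants for one category name."""
    name: str
    detector: Callable[[IconicRepresentation], bool]

class SymbolicRepresentation:
    """Descriptions composed from category names (learning by 'description')."""
    def __init__(self) -> None:
        self.propositions: List[str] = []

    def describe(self, subject: str, predicate: str) -> None:
        self.propositions.append(f"{subject} IS {predicate}")

# Hypothetical usage: a "horse" category grounded in an iconic trace,
# then a symbolic description built out of grounded category names.
horse = CategoricalRepresentation(
    name="horse",
    detector=lambda icon: sum(icon.trace) > 2.0,  # stand-in invariant
)
icon = IconicRepresentation(trace=[0.9, 0.8, 0.7])
if horse.detector(icon):
    symbols = SymbolicRepresentation()
    symbols.describe("zebra", "horse WITH stripes")  # learning by description
    print(symbols.propositions)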
This model provides no particular or general solution to the
problem of inductive learning, only a conceptual framework;
but it does have some substantive implications, for example,
(a) the "cognitive identity of (current) indiscriminables":
Categories and their representations can only be provisional
and approximate, relative to the alternatives encountered to
date, rather than "exact." There is also (b) no such thing
as an absolute "feature," only those features that are
invariant within a particular context of confusable
alternatives. Contrary to prevailing "prototype" views,
however, (c) such provisionally invariant features MUST
underlie successful categorization, and must be "sufficient"
(at least in the "satisficing" sense) to subserve reliable
performance with all-or-none, bounded categories, as in CP.
Finally, the model brings out some basic limitations of the
"symbol-manipulative" approach to modeling cognition,
showing how (d) symbol meanings must be functionally
anchored in nonsymbolic, "shape-preserving" representations
-- iconic and categorical ones. Otherwise, all symbol
interpretations are ungrounded and indeterminate. This
amounts to a principled call for a psychophysical (rather
than a neural) "bottom-up" approach to cognition.
------------------------------
Date: Mon 29 Sep 86 09:55:11-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: Searle's logic
I try not to get involved in these arguments, but Bruce Krulwich's assertion
that Searle 'bases all his logic on' the binary nature of computers is seriously
wrong. We could have hardware which worked with direct, physical embodiments
of all of Shakespeare, and Searle's arguments would apply to it just as well.
What bothers him (and many other philosophers) is the idea that the machine
works by manipulating SYMBOLIC descriptions of its environment (or whatever it
happens to be thinking about). It's the internal-representation idea, which
we AIers take in with our mother's milk, that he finds so silly and directs his
arguments against.
Look, I also don't think there's any real difference between a human's knowledge
of a horse and a machine's manipulation of the symbol it is using to represent it.
But Searle has some very penetrating arguments against this idea, and one doesn't
make progress by just repeating one's intuitions; one has to understand his
arguments and explain what is wrong with them. Start with the Chinese Room, and
read all his replies to the simple counterarguments as well, THEN come back and
help us.
Pat Hayes
------------------------------
Date: 1 Oct 86 18:25:16 GMT
From: cbatt!cwruecmp!cwrudg!rush@ucbvax.Berkeley.EDU (rush)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)
In article <158@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>6. The Teletype versus the Robot Turing Test:
>
>For example, the "teletype" (linguistic) version of the Turing...
> whereas the robot version necessarily
>calls for full causal powers of interaction with the outside
>world (seeing, doing AND linguistic understanding).
>
Uh...I never heard of the "robot version" of the Turing Test,
could someone please fill me in?? I think that understanding
the reasons for such a test would help me (I make
no claims for anyone else) make some sense out of the rest
of this article. In light of my lack of knowledge, please forgive
my presumption in the following comment.
>7. The Transducer/Effector Argument:
>
>A principled
>"transducer/effector" counterargument, however, can be based
>on the logical fact that transduction is necessarily
>nonsymbolic, drawing on analog and analog-to-digital
>functions that can only be simulated, but not implemented,
>symbolically.
>
[ I know I claimed no commentary, but it seems that this argument
depends heavily on the meaning of the term "symbol". This could
be a problem that only arises when one attempts to implement some
of the stranger possibilities for symbolic entities. ]
Richard Rush - Just another Jesus freak in computer science
decvax!cwruecmp!cwrudg!rush
------------------------------
Date: 2 Oct 86 16:05:28 GMT
From: princeton!mind!harnad@caip.rutgers.edu (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)
In his commentary-not-reply to my <158@mind.UUCP>, Richard Rush
<150@cwrudge.UUCP> asks:
(1)
> I never heard of the "robot version" of the Turing Test,
> could someone please fill me in?
He also asks (in connection with my "transducer/effector" argument)
about the analog/symbolic distinction:
(2)
> I know I claimed no commentary, but it seems that this argument
> depends heavily on the meaning of the term "symbol". This could
> be a problem that only arises when one attempts to implement some
> of the stranger possibilities for symbolic entities.
In reply to (1): The linguistic version of the Turing Test (Turing's
original version) is restricted to linguistic interactions:
Language-in/Language-out. The robotic version requires the candidate
system to operate on objects in the world. In both cases the (turing)
criterion is whether the system can PERFORM indistinguishably from a human
being. (The original version was proposed largely so that your
judgment would not be prejudiced by the system's nonhuman appearance.)
On my argument the distinction between the two versions is critical,
because the linguistic version can (in principle) be accomplished by
nothing but symbols-in/symbols-out (and symbols in between) whereas
the robotic version necessarily calls for non-symbolic processes
(transducer, effector, analog and A/D). This may represent a
substantive functional limitation on the symbol-manipulative approach
to the modeling of mind (what Searle calls "Strong AI").
In reply to (2): I don't know what "some of the stranger possibilities
for symbolic entities" are. I take symbol-manipulation to be
syntactic: Symbols are arbitrary tokens manipulated in accordance with
certain formal rules on the basis of their form rather than their meaning.
That's symbolic computation, whether it's done by computer or by
paper-and-pencil. The interpretations of the symbols (and indeed of
the manipulations and their outcomes) are ours, and are not part of
the computation. Informal and figurative meanings of "symbol" have
little to do with this technical concept.
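A minimal sketch of what "syntactic" means here (the tokens and rules
below are arbitrary placeholders, which is exactly the point): rules
rewrite token sequences purely on the basis of their shape, and any
meaning the tokens or rewrites have is supplied by us.

# Tokens rewritten purely on the basis of their form, by rules that
# never consult any meaning.
REWRITE_RULES = {
    ("P", "IMPLIES", "Q"): ("NOT", "P", "OR", "Q"),  # applied by shape alone
    ("NOT", "NOT", "P"):   ("P",),
}

def rewrite(tokens: tuple) -> tuple:
    """Apply the first rule whose left-hand side matches a subsequence."""
    for lhs, rhs in REWRITE_RULES.items():
        n = len(lhs)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == lhs:
                return tokens[:i] + rhs + tokens[i + n:]
    return tokens

print(rewrite(("P", "IMPLIES", "Q")))  # ('NOT', 'P', 'OR', 'Q')
# Nothing in the program knows or cares what P and Q "mean"; the
# interpretation of the tokens and their rewrites is entirely ours.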
Symbols as arbitrary syntactic tokens in a formal system can be
contrasted with other kinds of objects. The ones I singled out in my
papers were "icons" or analogs of physical objects, as they occur in
the proximal physical input/output in transduction, as they occur in
the A-side of A/D and D/A transformations, and as they may function in
any part of a hybrid system to the extent that their functional role
is not merely formal and syntactic (i.e., to the extent that their
form is not arbitrary and dependent on convention and interpretation
to link it to the objects they "stand for," but rather, the link is
one of physical resemblance and causality).
The category-representation paper proposes an architecture for such a
hybrid system.
Stevan Harnad
princeton!mind!harnad
------------------------------
End of AIList Digest
********************