AIList Digest Friday, 7 Nov 1986 Volume 4 : Issue 256
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 2 Nov 86 23:22:23 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
The following is a response on net.ai to a comment on mod.ai.
Because of problems with posting to mod.ai, I am temporarily replying to net.ai.
On mod.ai cugini@NBS-VMS.ARPA ("CUGINI, JOHN") writes:
> You seem to want to pretend that we know absolutely nothing about the
> basis of thought in humans, and to "suppress" all evidence based on
> such knowledge. But that's just wrong. Brains *are* evidence for mind,
> in light of our present knowledge.
What I said was that we knew absolutely nothing about the FUNCTIONAL
basis of thought in humans, i.e., about how brains or relevantly
similar devices WORK. Hence we wouldn't have the vaguest idea if a
given lump of grey matter was in fact the right stuff, or just a
gelatinous look-alike -- except by examining its performance (i.e., turing)
capacity. [The same is true, by the way, mutatis mutandis, for a
better structural look-alike -- with cells, synapses, etc. We have no
functional idea of what differentiates a mind-supporting look-alike
from a comatose one, or one from a nonviable fetus. Without the
performance criterion the brain cue could lead us astray as often as
not regarding whether there was indeed a mind there. And that's not to
mention that we knew perfectly well (perhaps better, even) how to judge
whether somebody had a mind ere we ope'd a skull or knew what we
had chanced upon there.]
If you want a trivial concession, though, I'll make one: Suppose you saw an
inert body totally incapable of behavior, then or in the future, and
you entertained some prior subjective probability, say p, that it had a
mind. If you then opened its skull and found something anatomically and
physiologically brain-like in there, the probability that it
had, or had had, a mind would correspondingly rise. Ditto for an inert
alien species. And I agree that that would be rational. However, I don't
think that any of that has much to do with the problem of modeling the mind, or
with the relative strengths or weaknesses of the Total Turing Test.
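To make that concession concrete, here is a minimal sketch of the
probability update being described, stated as Bayes' rule, with purely
illustrative numbers that are not from the original posting:

\[
P(\text{mind} \mid \text{brain-like anatomy}) =
\frac{P(\text{anatomy} \mid \text{mind})\, p}
     {P(\text{anatomy} \mid \text{mind})\, p + P(\text{anatomy} \mid \text{no mind})\,(1 - p)}
\]

If, say, p = 0.1 and brain-like anatomy is assumed to be nine times
likelier in a minded body than in a mindless one, the posterior rises
from 0.1 to 0.5. The concession is only that such a rise is rational,
not that it bears on modeling the mind.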
> People in, say, 1500 AD were perfectly rational in predicting
> tides based on the position of the moon (and vice-versa)
> even though they hadn't a clue as to the mechanism of interaction.
> If you keep asking "why" long enough, *all* science is grounded on
> such brute-fact correlation (why do like charges repel, etc.) - as
> Hume pointed out a while back.
Yes, but people then and even earlier were just as good at "predicting" the
presence of mind WITHOUT any reference to the brain. And in ambiguous
cases, behavior was and is the only rational arbiter. Consider, for
example, which way you'd go if (1) an alien body persisted in behaving like a
clock-like automaton in every respect -- no affect, no social interaction,
just rote repetition -- but it DID have something that was indistinguishable
(on the minute and superficial information we have) from a biological-like
nervous system, versus (2) if a life-long close friend of yours had
to undergo his first operation, and when they opened him up, he turned
out to be all transistors on the inside. I don't set much store by
this hypothetical sci-fi stuff, especially because it's not clear
whether the "possibilities" we are contemplating are indeed possible. But
the exercise does remind us that, after all, performance capacity is
our primary criterion, both logically and intuitively, and its
black-box correlates have whatever predictive power they may have
only as a secondary, derivative matter. They depend for their
validation on the behavioral criterion, and in cases of conflict,
behavior continues to be the final arbiter.
I agree that scientific inference is grounded in observed correlations. But
the primary correlation in this special case is, I am arguing, between
mental states and performance. That's what both our inferences and our
intuitions are grounded in. The brain correlate is an additional cue, but only
inasmuch as it agrees with performance. As to CAUSATION -- well, I'm
sceptical that anyone will ever provide a completely satisfying account
of the objective causes of subjective effects. Remember that, except for
the special case of the mind, all other scientific inferences have
only had to account for objective/objective correlations (and [or,
more aptly, via] their subjective/subjective experiential counterparts).
The case under discussion is the first (and I think only) case of
objective/subjective correlation and causation. Hence all prior bets,
generalizations or analogies are off or moot.
> other brains... are, by definition, relevantly brain-like
I'd be interested in knowing what current definition will distinguish
a mind-supporting brain from a non-mind-supporting brain, or even a
pseudobrain. (That IS the issue, after all, in claiming that the brain
is an INDEPENDENT predictor of mindedness.)
> Let me re-cast Harnad's argument (perhaps in a form unacceptable to
> him): We can never know any mind directly, other than our own, if we
> take the concept of mind to be something like "conscious intelligence" -
> ie the intuitive (and correct, I believe) concept, rather than
> some operational definition, which has been deliberately formulated
> to circumvent the epistemological problems. (Harnad, to his credit,
> does not stoop to such positivist ploys.) But the only external
> evidence we are ever likely to get for "conscious intelligence"
> is some kind of performance. Moreover, the physical basis for
> such performance will be known only contingently, ie we do not
> know, a priori, that it is brains, rather than automatic dishwashers,
> which generate mind, but rather only as an a posteriori correlation.
> Therefore, in the search for mind, we should rely on the primary
> criterion (performance), rather than on such derivative criteria
> as brains. I pretty much agree with the above account except for the
> last sentence which prohibits us from making use of derivative
> criteria. Why should we limit ourselves so? Since when is that part
> of rationality?
I accept the form in which you've recast my argument. The reasons that
brainedness is not a good criterion are the following (I suppose I
should stop saying it is not a "rational" criterion having made the
minor concession I did above): Let's call being able to pass the Total
Turing Test the "T" correlate of having a mind, and let's call having a brain
the "B" correlate. (1) The validity of B depends completely on T. We
have intuitions about the way we and others behave, and what it feels
like; we have none about having brains. (2) In case of conflict
between T and B, our intuitions (rationally, I suggest) go with T rather
than B. (3) The subjective/objective issue (i.e., the mind/body
problem) mentioned above puts these "correlations" in a rather
different category from other empirical correlations, which are
uniformly objective/objective. (4) Looked at sufficiently minutely and
functionally, we don't know what the functionally relevant as opposed to the
superficial properties of a brain are, insofar as mind-supportingness
is concerned; in other words, we don't even know what's a B and what's
just naively indistinguishable from a B (this is like a caricature of
the turing test). Only T will allow us to pick them out.
I think those are good enough reasons for saying that B is not a good
independent criterion. That having been said, let me concede that for a
radical sceptic, neither is T, for pretty much the same
reasons! This is why I am a methodological epiphenomenalist.
> No, the fact is we do have more reason to suppose mind of other
> humans than of robots, in virtue of an admittedly derivative (but
> massively confirmed) criterion. And we are, in this regard, in an
> epistemological position *superior* to those who don't/didn't know
> about such things as the role of the brain, ie we have *more* reason
> to believe in the mindedness of others than they do. That's why
> primitive tribes (I guess) make the *mistake* of attributing
> mind to trees, weather, etc. Since raw performance is all they
> have to go on, seemingly meaningful activity on the part of any
> old thing can be taken as evidence of consciousness. But we
> sophisticates have indeed learned a thing or two, in particular, that
> brains support consciousness, and therefore we (rationally) give the
> benefit of the doubt to any brained entity, and the anti-benefit to
> un-brained entities. Again, not to say that we might not learn about
> other bases for mind - but that hardly disparages brainedness as a
> rational criterion for mindedness.
A trivially superior position, as I've suggested. Besides, the
primitive's mistake (like the toy AI-modelers') is in settling for
anything less than the Total Turing Test; the mistake is decidedly NOT
the failure to hold out for the possession of a brain. I agree that it's
rational to take brainedness as an additional corroborative cue, if you
ever need one, but since it's completely useless when it fails to corroborate
or conflicts with the Total Turing criterion, of what independent use is it?
Perhaps I should repeat that I take the context for this discussion to
be science rather than science fiction, exobiology or futurology. The problem
we are presumably concerned with is that of providing an explanatory
model of the mind along the lines of, say, physics's explanatory model
of the universe. Where we will need "cues" and "correlates" is in
determining whether the devices we build have succeeded in capturing
the relevant functional properties of minds. Here the (ill-understood)
properties of brains will, I suggest, be useless "correlates." (In
fact, I conjecture that theoretical neuroscience will be led by, rather
than itself leading, theoretical "mind-science" [= cognitive
science?].) In sci-fi contexts, where we are guessing about aliens'
minds or those of comatose creatures, having a blob of grey matter in
the right place may indeed be predictive, but in the cog-sci lab it is
not.
> there's really not much difference between relying on one contingent
> correlate (performance) rather than another (brains) as evidence for
> the presence of mind.
To a radical sceptic, as I've agreed above. But there is to a working
cognitive scientist (whose best methodological stance, I suggest, is
epiphenomenalism).
> I know consciousness (my own, at least) exists, not as
> some derived theoretical construct which explains low-level data
> (like magnetism explains pointer readings), but as the absolutely
> lowest rock-bottom datum there is. Consciousness is the data,
> not the theory - it is the explicandum, not the explicans (hope
> I got that right). It's true that I can't directly observe the
> consciousness of others, but so what? That's an epistemological
> inconvenience, but it doesn't make consciousness a red herring.
I agree with most of this, and it's why I'm not, for example, an
"eliminative materialist." But agreeing that consciousness is data
rather than theory does not entail that it's the USUAL kind of data of
empirical science. I KNOW I have a mind. Every other instance is
radically different from this unique one: I can only guess, infer. Do
you know of any similar case in normal scientific inference? This is
not just an "epistemological inconvenience," it's a whole 'nother ball
game. If we stick to the standard rules of objective science (which I
recommend), then turing-indistinguishable performance modeling is indeed
the best we can aspire to. And that does make consciousness a red
herring.
> ...being-composed-of-protein might not be as practically incidental
> as many assume. Frinstance, at some level of difficulty, one can
> get energy from sunlight "as plants do." But the issues are:
> do we get energy from sunlight in the same way? How similar do
> we demand that the processes are?...if we're interested in simulation at
> a lower level of abstraction, eg, photosynthesis, then, maybe, a
> non-biological approach will be impractical. The point is we know we
> can simulate human chess-playing abilities with non-biological
> technology. Should we just therefore declare the battle for mind won,
> and go home? Or ask the harder question: what would it take to get a
> machine to play a game of chess like a person does, ie, consciously.
This sort of objection to a toy problem like chess (an objection I take to
be valid) cannot be successfully redirected at the Total Turing Test, and
that was one of the central points of the paper under discussion. Nor
are the biological minutiae of modeling plant photosynthesis analogous to the
biological minutiae of modeling the mind: The OBJECTIVE data in the
mind case are what you can observe the organism to DO. Photosynthesis
is something a plant does. In both cases one might reasonably demand
that a veridical model should mimic the data as closely as possible.
Hence the TOTAL Turing Test.
But now what happens when you start bringing in physiological data, in the
mind case, to be included with the performance data? There's no
duality in the case of photosynthesis, nor is there any dichotomy of
levels. Aspiring to model TOTAL photosynthesis is aspiring to get
every chemical and temporal detail right. But what about the mind
case? On the one hand, we both agree with the radical sceptic that
NEITHER mimicking the behavior NOR mimicking the brain can furnish
"direct" evidence that you've captured mind. So whereas getting every
(observable) photosynthetic detail right "guarantees" that you've
captured photosynthesis, there's no such guarantee with consciousness.
So there's half of the disanalogy. Now consider again the hypothetical
possibilities we were considering earlier: What if brain data and
behavioral data compete? Which way should a nonsceptic vote? I'd go
with behavior. Besides, it's an empirical question, as I said in the
papers under discussion, whether or not brain constraints turn out to
be relevant on the way to Total Turing Utopia. Way down the road,
after all, the difference between mind-performance and
brain-performance may well become blurred. Or it may not. I think the
Total Turing Test is the right provisional methodology for getting you
there, or at least getting you close enough. The rest may very well
amount to only the "fine tuning."
> BTW, I quite agree with your more general thesis on the likely
> inadequacy of symbols (alone) to capture mind.
I'm glad of that. But I have to point out that a lot of what you
appear to disagree about went into the reasons supporting that very
thesis, and vice versa.
-----
May I append here a reply to andrews@ubc-cs.UUCP (Jamie Andrews) who
wrote:
> This endless discussion about the Turing Test makes the
> "eliminative materialist" viewpoint very appealing: by the
> time we have achieved something that most people today would
> call intelligent, we will have done it through disposing of
> concepts such as "intelligence", "consciousness", etc.
> Perhaps the reason we're having so much trouble defining
> a workable Turing Test is that we're essentially trying to
> fit a square peg into a round hole, belabouring some point
> which has less relevance than we realize. I wonder what old
> Alan himself would say about the whole mess.
On the contrary, rather than disposing of them, we will finally have
some empirical and theoretical idea of what their functional basis
might be, rather than simply knowing what it's like to have them. And
if we don't first sort out our methodological constraints, we're not
headed anywhere but in hermeneutic circles.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 3 Nov 86 17:41:54 GMT
From: mcvax!ukc!warwick!rlvd!kgd@seismo.css.gov (Keith Dancey)
Subject: Re: Searle, Turing, Symbols, Categories
In article <5@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>
>What do you think "having intelligence" is? Turing's criterion
>effectively made it: having performance capacity that is indistinguishable
>from human performance capacity. And that's all "having a mind"
>amounts to (by this objective criterion). ...
At the risk of sidetracking this discussion, I don't think it wise to try
and equate 'mind' and 'intelligence'. A 'mind' is an absolute thing, but
'intelligence' is relative.
For instance, most people would, I believe, accept that a monkey has a
'mind'. However, they would not necessarily so easily accept that a
monkey has 'performance capacity that is indistinguishable from human
performance capacity'.
On the other hand, many people would accept that certain robotic
processes had 'intelligence', but would be very reluctant to attribute
them with 'minds'.
I think there is something organic about 'minds', but 'intelligence' can
be codified, within limits, of course.
I apologise if this appears as a red-herring in the argument.
--
Keith Dancey, UUCP: ..!mcvax!ukc!rlvd!kgd
Rutherford Appleton Laboratory,
Chilton, Didcot, Oxon OX11 0QX
JANET: K.DANCEY@uk.ac.rl
Tel: (0235) 21900 ext 5716
------------------------------
End of AIList Digest
********************