AIList Digest           Wednesday, 26 Nov 1986    Volume 4 : Issue 269 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 21 Nov 86 19:08:02 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories: Reply to Cugini (2)

[Part II]


If you want some reasons why the mind/body case is so radically
different from ordinary causal inference in science, here are two:

(1) Generalizations about correlates of having a mind
are, because of the peculiar nature of subjective, 1st-person
experience, always doomed to be based on an N = 1. We can have
intersubjective agreement about a meter-reading, but not about a
subject's experience. This already puts mind-science in a class by
itself. (One can even argue that the intersubjective agreement on
"objective" meter readings is itself parasitic on, or grounded in,
some turing-equivalence assumptions about other people's reports of
their experiences -- of meter readings!)

But, still more important and revealing: (2) Consider ordinary scientific
inferences about "unobservables," say, about quarks (if they should continue
to play an inferred causal role in the future, utopian, "complete"
explanatory/predictive theory in physics): Were you to subtract this
inferred entity from the (complete) theory, the theory would lose its
capacity to account for all the (objective) data. That's the only
reason we infer unobservables in the first place, in ordinary
science: to help predict and causally explain all the observables.
A complete, utopian scientific theory of the "mind," in radical
contrast with this, will always be just as capable of accounting
for all the (objective) data (i.e., all the observable data on what
organisms and brains do) WITH or WITHOUT positing the existence of mind(s)!

In other words, the complete explanatory/predictive theory of organisms
(and devices) WITH minds will be turing-indistinguishable from the
complete explanatory/predictive theory of organisms (and devices)
WITHOUT minds, that simply behave in every observable way AS IF they
had minds.

That kind of inferential indeterminacy is a lot more serious than the
underdetermination of ordinary scientific inferences about
unobservables like quarks, gravitons or strings. And I believe that this
amounts to a demonstration that all ordinary inferential bets (about
brain-correlates, etc.) are off when it comes to the mind.
The mind (subjectivity, consciousness, the capacity to have
qualitative experience) is NEITHER an ordinary, intersubjectively
verifiable objectively observable datum, as in normal science, NOR is
it an ordinary unobservable inferred entity, forced upon us so that we
can give a successful explanatory/predictive account of the objective
data.

Yet the mind is undoubtedly real. We know that, noninferentially, for
one case: our own. It is to THAT direct knowledge that the informal component
of the TTT appeals, and ONLY to that knowledge. Any further indirect
inferences, based on, say, correlations, depend ultimately for their
validation only on that direct knowledge, and are always secondary to
it, in that split inferences are always settled by an appeal to the
TTT criterion, not vice versa (or some third thing), as I shall try to
show below.

(The formal component of the TTT, on the other hand [i.e., the formal
computer-testing of a theory that purports to generate all of our
performance capacities], IS just a case of ordinary scientific
inference; here it is an empirical question whether brain correlates
will be helpful in guiding theory-construction. I happen to
doubt they will be helpful even there; not, at least until we
get much closer to TTT utopia, when we've all but captured
total performance capacity, and the fine-tuning [errors, reaction
times, response style, etc.] may begin to matter. There, as I've
suggested, the boundary between organism-performance and
brain-performance may break down somewhat, and microfunctional and
structural considerations may become relevant to the success and
verisimilitude of the performance modeling itself.)

> Now then, armed with the reasonably reliable knowledge that in my own
> case, my brain is a cause of my mind, and my mind is a cause of my
> performance, I can try to draw appropriate conclusions about others.

As I've tried to argue, these two types of knowledge are so different
as to be virtually incommensurable. In particular, your knowledge that
your brain causes your performance is direct and incorrigible, whereas
your knowledge that your brain causes your mind is indirect,
inferential, and parasitic on the former. Inferences about other minds
are NOT ordinary cases of scientific inference. The mind/body case is
special.

> X3 has brains, but little/no performance - eg a case of severe
> retardation. Well, there doesn't seem much reason to believe that
> X3 has intelligence, and so is disqualified from having mind, given
> our definition. However, it is still reasonable to believe that
> X3 might have consciousness, eg can feel pain, see colors, etc.

For the time being, intelligence is as mind does. X3 may not be VERY
intelligent, but if he has any mind-like performance capacity (to pass
some variant of the TTT for some organism or other -- a tricky issue),
that amounts to having some intelligence. As discussed in another
module, intelligence may be a matter of degree, but having a mind
seems to be an all-or-none matter. Also, having a mind seems to be a
sufficient condition for having intelligence; if it's not also a
necessary condition, we have the radical indeterminacy I mentioned
earlier, and we're in trouble.

So the case of severe retardation seems to represent no problem.
Retarded people pass (some variant of) the TTT, and we have no trouble
assigning them minds. This is fine as long as they have some (shall we
call it "intelligible") performance capacity, and hence some
intelligence. Comatose people are another matter. But they may well
not have minds. (I might add that our inclination to assign a mind to
a person who is so retarded that his performance capacity is reduced
to vegetative functions such as blinking, breathing and swallowing,
could conceivably be an overgeneralization, motivated by considerations
of biological origins and humanitarian concerns.) I repeat, though,
that these special cases belong more to the domain of near-utopia
fine-tuning than the basic issue of whether it is performance or brain
correlates that should guide us in inferring minds in others. Certainly
neither TTT-enthusiasts nor brain-enthusiasts have any grounds for
feeling confident about their judgments in such ambiguous cases.

> X4 has normal human cognitive performance, but no brains, eg the
> ultimate AI system. Well, no doubt X4 has intelligence, but the issue
> is whether X4 has consciousness. This seems far from obvious to me,
> since I know in my own case that brain causes consciousness causes
> performance. But I already know, in the case of X4, that the causal
> chain starts out at a different place (non-brain), even if it ends up
> in the same place (intelligent performance). So I can certainly
> question (rationally) whether it gets to performance "via
> consciousness"
or not.
> If this seems too contentious, ask yourself: given a choice between
> destroying X3 or X4, is it really obvious that the more moral choice
> is to destroy X3?

I don't think the moral choice is obvious in either case. However, I
don't think you're imagining this case sufficiently vividly. Let's make
it the one I proposed: A lifelong friend turns out to be a robot, versus
a human born (irremediably) with only vegetative function. These issues
are for the right-to-lifers; the alternatives imposed on us are too
hypothetical and artificial (akin to having to choose between saving
one's mother or father). But I think it's fairly clear which way I'd
go here. And what we know (or don't know) about brains has very little
to do with it.

> Finally, a gedanken experiment (if ever there was one) - suppose
> (a la sci-fi stories) they opened you up and showed you that you
> really didn't have a brain after all, that you really did have
> electronic circuits - and suppose it transpired that while most
> humans had brains, a few, like yourself, had electronics. Now,
> never doubting your own consciousness, if you *really* found that
> out, would you not then (rationally) be a lot more inclined to
> attribute consciousness to electronic entities (after all you know
> what it feels like to be one of them) than to brained entities (who
> knows what, if anything, it feels like to be one of them?)?
> Even given *no* difference in performance between the two sub-types?
> Showing that "similarity to one's own internal make-up" is always
> going to be a valid criterion for consciousness, independent of
> performance.

Frankly, although it might disturb me for other reasons, I think that
discovering I had complex, ill-understood electronic circuits inside my
head instead of complex, ill-understood biochemical ones would not
sway me one way or the other on the basic proposition that it is
performance alone that is responsible for my inferring minds in other
people, not my (or anyone else's) dim knowledge about their inner
structure or function. I agreed in an earlier module, though, that
such a demonstration would be a bit of a blow to the sceptics about robots
(which I am not) if they discovered THEMSELVES to be robots. On the
other hand, it wouldn't move an outside sceptic one bit. For example,
*you* would presumably be uninfluenced in your convictions about the
relevance of brain-correlates over and above performance if *I* turned
out to be X4. And that's just the point! Like it or not, the
1st-person stance retains center stage in the mind/body problem.

> I make this latter point to show that I am a brain-chauvinist *only
> insofar* as I know/believe that I *myself* am a brained entity (and
> that my brain is what causes my consciousness). This really
> doesn't depend on my own observation of my own performance at all -
> I'd still know I had a mind even if I never did any (external) thing
> clever.

Yes. But the problem for *you* is whether *I* (or some other candidate)
have a mind, not whether *you* do. Moreover, no one suggested that the
turing test was the basis for knowing one has a mind in the 1st person
case. That problem is probably closer to the Cartesian Cogito, solved
directly and incorrigibly. The other-minds problem is the one we're
concerned with here.

Perhaps I should emphasize that in the two "correlations" we are
talking about -- performance/mind and brain/mind -- the basis for the
causal inference is radically different. The causal connection between
my mind and my performance is something I know directly from being the
performer. There is no corresponding intuition about causation from
being the possessor of my brain. That's just a correlation, depending
for its causal interpretation (if any), on what theory or metatheory I
happen to subscribe to. That's why nothing compelling follows from
being told what my insides are made of.

> To summarize: brainedness is a criterion, not only via the indirect
> path of: others who have intelligent performance also have brains,
> ergo brains are a secondary correlate for mind; but also via the
> much more direct path (which *also* justifies performance as a
> criterion): I have a mind and in my very own case, my mind is
> closely causally connected with brains (and with performance).

I would summarize it differently: In the 1st-person case, I know directly
that my performance is caused by my mind. I infer (from the correlation)
that my brain causes my mind. In the other-minds case I know nothing
directly; however, I am intuitively persuaded by performance similarity.
I have no intuitions about brains, but of course every confirmatory
cue helps; so if you also have a brain, my confidence is increased.
But split the ticket, and I'll go with performance every time. That
makes it seem as if performance is still the decisive criterion, and
brainedness is only a secondary correlate.

Putting it yet another way: We have direct knowledge of the causal
connection between our minds and our performance and only indirect
inferences about the causal connection between our brains and our
minds (and performance). This parasitism is hence present in our
inferences about other minds too.

> I agree that there are some additional epistemological problems,
> [with subjective/objective causation, as opposed to
> objective/objective causation, i.e., with the mind/body problem]
> compared to the usual cases of causation. But these don't seem
> all that daunting, absent radical skepticism.

But "radical" scepticism makes an unavoidable, substantive appearance
in the contemporary scientific incarnation of the other-minds problem:
The problem of robot minds.

> We already know which parts of the brain
> correlate with visual experience, auditory experience, speech
> competence, etc. I hardly wish to understate the difficulty of
> getting a full understanding, but I can't see any problem in
> principle with finding out as much as we want. What may be
> mysterious is that at some level, some constellation of nerve
> firings may "just" cause visual experience, (even as electric
> currents "just" generate magnetic fields.) But we are
> always faced with brute-force correlation at the end of any scientific
> explanation, so this cannot count against brain-explanatory theory of
> mind.

There is not quite as much disagreement here as there may seem. We
agree on (1) the basic mystery in objective/subjective causation -- though I
disagree that it is no more mysterious than objective/objective
causation. Never mind. It's mysterious. I also agree that (2) I would
feel (negligibly) more confident in inferring that a candidate who
passed the TTT had a mind if it had a real brain than if it did not.
(I'd feel even more confident if it was my identical twin.) We agree
that (3) the brain causes the mind, that (4) the brain can be studied,
that (5) there are anatomical and physiological correlations
(objective/subjective), and that (6) these are very probably causal.

Where we may disagree is on the methodology for arriving at a causal theory
of mind. I don't think peeking-and-poking at the brain in search of
correlations is likely to generate a successful causal theory; I think
trial-and-error modeling of performance will, and that it will in fact
guide brain research, suggesting what functions to look for
implementations of, and how they cause performance. What I believe
will fall by the wayside in this brute-force correlative account --
I'm for correlations too, of course, except that I'm for
objective/objective correlations -- is subjectivity itself. For, on
all the observable evidence that will ever be available, the
complete theory of the mind -- whether implemented as a brain or as some
other artificial causal device -- will always be just as true of a
device actually having a mind as of a mindless device merely acting as
if it had a mind. And there will be no way of settling this, short of
actually BEING the device in question (which is no help to the rest of
us). If that's radical scepticism, it's come home to roost, and should
be accepted as a fact of life in mind-science. (I've dubbed this
"methodological epiphenomenalism" in the paper under discussion.)

You may feel more confident in attributing a mind to the
brain-implementation than to a synthetic one (though I can't imagine you'll
have good reasons, since they'll be functionally equivalent in every
observable and ostensibly relevant respect), but that too is a
question we will never be able to settle objectively.

(Let me add, in case it's not apparent, that performances such as
reporting "It hurts now" are perfectly respectable, objective data,
both for the brain-correlation investigator and the mind-modeler. So
whereas we can never investigate subjectivity directly except in our
own case, we can approximate its behavioral manifestations as closely
as the expressive power of introspective reports will allow. What's
not clear is how useful this aspect of performance modeling will be.)

> Well, I plead guilty to diverting the discussion into philosophy, and as
> a practical matter, one's attitude in this dispute will hardly affect
> one's day-to-day work in the AI lab. One of my purposes is a kind of
> pre-emptive strike against a too-grandiose interpretation of the
> results of AI work, particularly with regard to claims about
> consciousness. Given a behavioral definition of intelligence, there
> seems no reason why a machine can't be intelligent. But if "mind"
> implies consciousness, it's a different ball-game, when claiming
> that the machine "has a mind".

I plead no less guilty than you. Neither of us is responsible for the
fact that scepticism looms large in making inferences about other
minds and how they work, which is what cognitive science is about. I
do disagree, though, that these considerations are irrelevant to one's
research strategy. It does matter whether you choose to study the
brain directly, or to model it, or to model performance-equivalent
alternatives. Other issues in this discussion matter too: modeling
toy modules versus the Total Turing Test, symbolic modeling versus
robotic modeling, and the degree of attention focused on modeling
phenomenological reports.

I also agree, of course, about the grandiose over-interpretation of
which AI (and, lately, connectionism too) has been guilty. But in the
papers under discussion I try to propose principled constraints (e.g.,
robotic capacity, groundedness, nonmodularity and the Total Turing
Test) that might restrain such excesses, rather than merely scepticism
about artificial performance. I also try to sort out the empirical
issues from the methodological and metaphysical ones. And, as I've
argued in several iterations, "intelligence" is not just a matter of
definition.

> My as-yet-unarticulated intuition is that, at least for people, the
> grounding-of-symbols problem, to which you are acutely and laudably
> sensitive, inherently involves consciousness, ie at least for us,
> meaning requires consciousness. And so the problem of shoehorning
> "meaning" into a dumb machine at least raises the issue about how
> this can be done without making them conscious (or, alternatively,
> how to go ahead and make them conscious). Hence my interest in your
> program of research.

Thank you for the kind words. One of course hopes that consciousness
will be captured somewhere along the road to Utopia. But my
methodological epiphenomenalism suggests that this may be an undecidable
metaphysical problem, and that, empirically and objectively, total
performance capacity is the most we can know ("scientifically") that
we have captured.

--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************
