AIList Digest Monday, 2 Feb 1987 Volume 5 : Issue 30
Today's Topics:
Philosophy - Consciousness
----------------------------------------------------------------------
Date: 30 Jan 87 01:51:19 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: Minsky on Mind(s)
mmt@dciem.UUCP (Martin Taylor) of D.C.I.E.M., Toronto, Canada,
writes:
> Of course [rooms and corporations] do not feel pain as we do,
> but they might feel pain, as we do.
The solution is not in the punctuation, I'm afraid. Pain is just an
example standing in for whether the candidate experiences anything AT
ALL. It doesn't matter WHAT a candidate feels, but THAT it feels, for
it to be conscious.
> On what grounds do you require proof that something has consciousness,
> rather than proof that it has not? Can there be grounds other than
> prejudice (i.e. prior judgment that consciousness in non-humans is
> overwhelmingly unlikely)?
First, none of this has anything to do with proof. We're trying to
make empirical inferences here, not mathematical deductions. Second,
even as empirical evidence, the Total Turing Test (TTT) is not evidential
in the usual way, because of the mind/body problem (private vs. public
events; objective vs. subjective inferences). Third, the natural null
hypothesis seems to be that an object is NOT conscious, pending
evidence to the contrary, just as the natural null hypothesis is that
an object is, say, not alive, radioactive or massless until shown
otherwise. -- Yes, the grounds for the null hypothesis are that the
absence of consciousness is more likely than its presence; the
alternative is animism. But no, the complement to the set of
probably-conscious entities is not "non-human," because animals are
(at least to me) just about as likely to be conscious as other humans
are (although one's intuitions get weaker down the phylogenetic scale);
the complement is "inanimate." All of these are quite natural and
readily defensible default assumptions rather than prejudices.
> [i] Occam's razor demands that we describe the world using the simplest
> possible hypotheses.
> [ii] It seems to me simpler to ascribe consciousness to an entity that
> resembles me in many ways than not to ascribe consciousness to that
> entity.
> [iii] I don't think one CAN use the TTT to assess whether another
> entity is conscious.
> [iv] Silicon-based entities have few overt points of resemblance,
> so their behaviour has to be convincingly like mine before I will
> grant them a consciousness like mine.
{i} Why do you think animism is simpler than its alternative?
{ii} Everything resembles everything else in an infinite number of
ways; the problem is sorting out which of the similarities is relevant.
{iii} The Total Turing Test (a variant of my own devising, not to be
confused with the classical Turing Test -- see prior chapters in these
discussions) is the only relevant criterion that has so far been
proposed and defended. Similarities of appearance are obvious
nonstarters, including the "appearance" of the nervous system to
untutored inspection. Similarities of "function," on the other hand,
are moot, pending the empirical outcome of the investigation of what
functions will successfully generate what performances (the TTT).
{iv} [iv] seems to contradict [iii].
> The problem splits in two ways: (1) Define consciousness so that it does
> not involve a reference to me, or (2) Find a way of describing behaviour
> that is simpler than ascribing consciousness to me alone. Only if you
> can fulfil one of these conditions can there be a sensible argument
> about the consciousness of some entity other than ME.
It never ceases to amaze me how many people think this problem is one
that is to be solved by "definition." To redefine consciousness as
something non-subjective is not to solve the problem but to beg the
question.
[The TTT, by the way, I proposed as logically the strongest (objective) evidence
for inferring consciousness in entities other than oneself; it also seems to be
the only methodologically defensible evidence; it's what all other
(objective) evidence must ultimately be validated against; moreover, it's
already what we use in contending with the other-minds problem intuitively
every day. Yet the TTT remains more fallible than conventional inferential
hypotheses (let alone proof) because it is really only a pragmatic conjecture
rather than a "solution." It's only good up to turing-indistinguishability,
which is good enough for the rest of objective empirical science, but not
good enough to handle the problem of subjectivity -- otherwise known as the
mind/body problem.]
--
Stevan Harnad (609) 921-7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
Date: 30 Jan 87 23:35:23 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu (Martin Taylor)
Subject: Re: More on Minsky on Mind(s)
> More ironic still, in arguing for the TTT and methodological
>epiphenomenalism, I am actually saying: "Why do you care? Worrying about
>consciousness will get you nowhere, and there's objective empirical
>work to do!"
>
That's a highly prejudiced, anti-empirical point of view: "Ignore Theory A.
It'll never help you. Theory B will explain the data better, whatever
they may prove to be!"
Sure, there's all sorts of objective empirical work to do. There's lots
of experimental work to do as well. But there is also theoretical work
to be done, to find out how best to describe our world. If the descriptions
are simpler using a theory that embodies consciousness than using one that
does not, then we SHOULD assume consciousness. Whether this is the case
is itself an empirical question, which cannot be begged by asserting
(correctly) that all behaviour can be explained without resort to
consciousness.
--
Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt
------------------------------
Date: Wed, 28 Jan 87 12:29:51 est
From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
Subject: Laws on Consciousness
Ken Laws <Laws@SRI-STRIPE.ARPA> wrote:
> I'm inclined to grant a limited amount of consciousness to corporations
> and even to ant colonies. To do so, though, requires rethinking the
> nature of pain and pleasure (to something related to homeostasis).
Unfortunately, the problem can't be resolved by mere magnanimity. Nor
by simply reinterpreting experience as something else -- at least not
without a VERY persuasive argument -- one that no one in the history of the M/B
problem has managed to come up with so far. This history is just one of
hand-waving. Do you think "rethinking" pain as homeostasis does the trick?
> computer operating systems and adaptive communications networks are
> close [to conscious]. The issue is partly one of complexity, partly
> of structure, partly of function.
I'll get back to the question of whether experiencing is an
all-or-none phenomenon or a matter of degree below. For now, I just
wonder what kind and degree of structural/functional "complexity" you
believe adds up to EXPERIENCING pain as opposed to merely behaving as
if experiencing pain.
> I am assuming that neurons and other "simple" systems are C-1 but
> not C-2 -- and C-2 is the kind of consciousness that people are
> really interested in.
Yes, but do you really think that hard questions like these can be
settled by assumption? The question is: What justifies the inference
that an organism or device is experiencing ANYTHING AT ALL (C-1), and
what justifies interpreting internal functions as conscious ones?
Assumption does not seem like a very strong justification for an
inference or interpretation. What is the basis for your assumption?
I have proposed the TTT as the only justifiable basis, and I've given
arguments in support of that proposal. The default assumptions in the
AI/Cog-Sci community seem to be that sufficiently "complex" function
and performance capacity, preferably with "memory" and "learning," can be
dubbed "conscious," especially with the help of the subsidiary
assumption that consciousness admits of degrees. The thrust of my
critique is that this position is rather weak and arbitrary, and open
to telling counter-examples (like Searle's). But, more important, it
is not an issue on which the Cog-sci community even needs to take a
stand! For Cog-sci's objective goal -- of giving a causal explanation
of organisms' and devices' functional properties -- can be achieved
without embellishing any of its functional constructs with a conscious
interpretation. This is what I've called "methodological
epiphenomenalism." Moreover, the TTT (as an asymptotic goal) even
captures the intuitions about "sufficient functional complexity and
performance capacity," in a nonarbitrary way.
It is the resolution of these issues by unsupportable assumption, circularity,
arbitrary fiat and obiter dicta that I think is not doing the field
any good. And this is not at all because (1) it simply makes cog-sci look
silly to philosophers, but because, as I've repeatedly suggested, (2) the
unjustified embellishment of (otherwise trivial, toy-like) function
or performance as "conscious" can actually side-track cog-sci from its
objective, empirical goals, masking performance weaknesses by
anthropomorphically over-interpreting them. Finally (3), the
unrealizable goal of objectively capturing conscious phenomenology,
being illogical, threatens to derail cog-sci altogether, heading it in
the direction of hermeneutics (i.e., subjective interpretation of
mental states, i.e., C-2) rather than objective empirical explanation of
behavioral capacity. [If C-2 is "what people are really interested
in," then maybe they should turn to lit-crit instead of cog-sci.]
> The mystery for me is why only >>one<< subsystem in my brain
> seems to have that introspective property -- but
> multiple personalities or split-brain subjects may be examples that
> this is not a necessary condition.
Again, we'd probably be better off tackling the mystery of what the
brain can DO in the world, rather than what subjective states it can
generate. But, for the record, there is hardly agreement in clinical
psychology and neuropsychology about whether split-brain subjects or
multiple-personality patients really have more than one "mind," rather
than merely somewhat dissociated functions -- some conscious, some not --
that are not fully integrated, either temporally or experientially.
Inferring that someone has TWO minds seems to be an even trickier
problem than the usual problem ("solved" by the TTT) of inferring that
someone has ONE (a variant of the mind/body problem called the "other-minds"
problem). At least in the case of the latter we have our own, normal unitary
experience to generalize from...
> [Regarding the question of whether consciousness admits of degrees:]
> An airplane either can fly or it can't. Yet there are
> simpler forms of flight used by other entities-- kites, frisbees,
> paper airplanes, butterflies, dandelion seeds... My own opinion
> is that insects and fish feel pain, but often do so in a generalized,
> nonlocalized way that is similar to a feeling of illness in humans.
Flight is an objective, objectively definable function. Experience is
not. We can, for example, say that a massive body that stays aloft in
space for any non-zero period of time is "flying" to a degree. There
is no logical problem with this. But what does it mean to say that
something is conscious to a degree? Does the entity in question
EXPERIENCE anything AT ALL? If so, it is conscious. If not, not. What
has degree to do with it (apart from how much, or how intensely it
experiences, which is not the issue)?
I too believe that lower animals feel pain. I don't want to conjecture
what it feels like to them; but having conceded that it feels like
anything at all, you seem to have conceded that they are conscious.
Now where does the question of degree come into it?
The mind/body problem is the problem of subjectivity. When you ask
whether something is conscious, you're asking whether it has
subjective states at all, not which ones, how many, or how strong.
That is an all-or-none matter, and it concerns C-1. You can't speak of
C-2 at all until you have a principled handle on C-1.
> I assume that lower forms experience lower forms of consciousness
> along with lower levels of intelligence. Such continua seem natural
> to me. If you wish to say that only humans and TTT-equivalents are
> conscious, you should bear the burden of establishing the existence
> and nature of the discontinuity.
I happen to share all those assumptions about consciousness in lower
forms, except that I don't see any continuum of consciousness there at
all. They're either conscious or not. I too believe they are conscious,
but that's an all-or-none matter. What's on a continuum is what they're
conscious OF, how much, to what degree, perhaps even what it's "like" for
them (although the latter is more a qualitative than a quantitative
matter). But THAT it's like SOMETHING is what it is that I am
assenting to when I agree that they are conscious at all. That's C-1.
And it's the biggest discontinuity we're ever likely to know of.
(Note that I didn't say "ever likely to experience," because of course
we DON'T experience the discontinuity: We know what it is like to
experience something, and to experience more or fewer things, more or less
intensely. But we don't know what it's like NOT to experience
something. [Be careful of the scope of the "not" here: I know what
it's like to see not-red, but not what it's like to not-see red, or be
unconscious, etc.] To know what it's like NOT to experience
anything at all is to experience not-experiencing, which is
a contradiction in terms. This is what I've called, in another paper,
the problem of "uncomplemented" categories. It is normally solved by
analogy. But where the categories are uncomplementable in principle,
analogy fails in principle. I think that this is what is behind our
incoherent intuition that consciousness admits of degrees: Because to
experience the conscious/unconscious discontinuity is logically
impossible, hence, a fortiori, experientially impossible.)
> [About why neurons are conscious and atoms are not:]
> When someone demonstrates that atoms can learn, I'll reconsider.
You're showing your assumptions here. What can be more evident about
the gratuitousness of mentalistic interpretation (in place of which I'm
recommending abstention or agnosticism on methodological grounds)
than that you're prepared to equate it with "learning"?
> You are questioning my choice of discontinuity, but mine is easy
> to defend (or give up) because I assume that the scale of
> consciousness tapers off into meaninglessness. Asking whether
> atoms are conscious is like asking whether aircraft bolts can fly.
So far, it's the continuum itself that seems meaningless (and the defense
a bit too easy-going). Asking questions about subjective phenomena
is not as easy as asking about objective ones, hopeful analogies
notwithstanding. The difficulty is called the mind/body problem.
> I hope you're not insisting that no entity can be conscious without
> passing the TTT. Even a rock could be conscious without our having
> any justifiable means of deciding so.
Perhaps this is a good place to point out the frequent mistake of
mixing up "ontic" questions (about what's actually TRUE of the world)
and "epistemic" ones (about what we can KNOW about what's actually true of
the world, and how). I am not claiming that no entity can be conscious
without passing the TTT. I am not even claiming that every entity that
passes the TTT must be conscious. I am simply saying that IF there is
any defensible basis for inferring that an entity is conscious, it is
the TTT. The TTT is what we use with one another, when we daily
"solve" the informal "other-minds" problem. It is also cog-sci's
natural asymptotic goal in mind-modeling, and again the only one that
seems methodologically and logically defensible.
I believe that animals are conscious; I've even spoken of
species-specific variants of the TTT; but with these variants both our
intuitions and our ecological knowledge become weaker, and with them
the usefulness of the TTT in such cases. Our inability to devise or
administer an animal TTT doesn't make animals any less conscious. It just
makes it harder to know whether they are, and to justify our inferences.
(I'll leave the case of the stone as an exercise in applying the
ontic/epistemic distinction.)
>>SH: "(To reply that synthetic substances with the same functional properties
>> must be conscious under these conditions is to beg the question.)"
>KL: I presume that a synthetic replica of myself, or any number of such
> replicas, would continue my consciousness.
I agree completely. The problem was justifying attributing consciousness
to neurons and denying it to, say, atoms. It's circular to say
neurons are conscious because they have certain functional properties
that atoms lack MERELY on the grounds that neurons are functional
parts of (obviously) conscious organisms. If synthetic components
would work just as well (as I agree they would), you need a better
justification for imputing consciousness to neurons than that they are
parts of conscious organisms. You also need a better argument for
imputing consciousness to their synthetic substitutes. The TTT is my
(epistemic) criterion for consciousness at the whole-organism level.
Its usefulness and applicability trail off drastically with lower and lower
organisms. I've criticized cog-sci's default criteria earlier in this
response. What criteria do you propose, and what is the supporting
justification, for imputing consciousness to, say, neurons?
> Perhaps professional philosophers are able to strive for a totally
> consistent world view.
The only thing at issue is logical consistency, not world view. And even
professional scientists have to strive for that.
> Why is there Being instead of Nothingness? Who cares?
These standard examples (along with the unheard sound of the tree
falling alone in the forest) are easily used to lampoon philosophical
inquiry. They tend to be based on naive misunderstandings of what
philosophers are actually doing -- which is usually as significant and
rigorous as any other area of logically constrained intellectual
inquiry (although I wouldn't vouch for all of it, in any area of
inquiry).
But in this case consider the actual ironic state of affairs:
It is cog-sci that is hopefully opening up and taking an ambitious
position on the problems that normally only concern philosophers,
such as the mind/body problem. NONphilosophers are claiming: "this is
conscious and that's not," and "this is why," and "this is what
consciousness is." So who's bringing it up, and who's the one that cares?
Moreover, I happen myself to be a nonphilosopher (although I have a
sizeable respect for that venerable discipline and its inevitable quota
of insightful exponents); yet I repeatedly find myself in the peculiar
role of having to point out the philosophically well-known howlers
that cog-sci keeps tumbling into in its self-initiated inquiry into
"Nothingness." More ironic still, in arguing for the TTT and methodological
epiphenomenalism, I am actually saying: "Why do you care? Worrying about
consciousness will get you nowhere, and there's objective empirical
work to do!"
> If I had to build an aircraft, I would not begin by refuting
> theological arguments about Man being given dominion over the
> Earth rather than the Heavens. I would start from a premise that
> flight was possible and would try to derive enabling conditions.
Building aircraft and devices that (attempt to) pass the TTT are objective,
do-able empirical tasks. Trying to model conscious phenomenology, or to
justify interpreting processes as conscious, gets you as embroiled in
"theology" as trying to justify interpreting the Communal wafer as the
body of Christ. Now who's the pragmatist and who's the theologian?
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609) 921-7771
------------------------------
End of AIList Digest
********************