AIList Digest             Monday, 2 Feb 1987       Volume 5 : Issue 29 

Today's Topics:
Philosophy - Consciousness & Methodological Epiphenomenalism

----------------------------------------------------------------------

Date: Thu, 29 Jan 87 09:27 EST
From: Seth Steinberg <sas@bfly-vax.bbn.com>
Subject: Consciousness?

I always thought that a scientific theory had to undergo a number of
tests to determine how "good" it is. Needless to say, a perfect score
on one test may be balanced by a mediocre score on another test. Some
useful tests are:

- Does the theory account for the data?
- Is the theory simple? Is it free of superfluous assumptions?
- Is the theory useful? Does it provide the basis for a fruitful
program of research?

There are theories of the mind that treat consciousness as central, and
others that argue it is secondary - a side effect of thought. It seems
quite probable that the bulk of artificial intelligence work (machine
reasoning, qualitative physics, theorem proving ... ) can be performed
without considering this thorny issue. While I frequently accuse my
computers of malice, I doubt they are consciously malicious when they
flake out on me.

While the study of consciousness is fascinating and lies at the base of
numerous religions, it doesn't seem to be scientifically useful. Do I
rewrite my code because the machine is conscious or because it is
getting the wrong answer? Is there a program of experimentation
suggested by the search for consciousness? (Don't confuse this with
using conscious introspection to build unconscious intelligence as I
would to guide a toy tank from my office to the men's room). Does
consciousness change the way artificial intelligence must be
programmed? The evidence so far says NO. [How is that for a bald-faced
assertion? Send me your code with comments showing how consciousness
is taken into account, and I'll see if I can rewrite it without
consciousness.]

I don't think scientific theories of consciousness are incorrect, I
think they are barren.

Seth

P.S. For an excellent example of a nifty but otherwise barren theory,
read the essay "Adam's Navel" in Stephen Jay Gould's book The Flamingo's
Smile.

------------------------------

Date: 28 Jan 87 17:36:10 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s)


Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

> I'm inclined to grant a limited amount of consciousness to corporations
> and even to ant colonies. To do so, though, requires rethinking the
> nature of pain and pleasure (to something related to homeostasis).

Unfortunately, the problem can't be resolved by mere magnanimity, nor
by simply reinterpreting experience as something else -- at least not
without a VERY persuasive argument, one that no one in the history of the
mind/body (M/B) problem has managed to come up with so far. That history
is just one of hand-waving. Do you think "rethinking" pain as homeostasis
does the trick?

> computer operating systems and adaptive communications networks are
> close [to conscious]. The issue is partly one of complexity, partly
> of structure, partly of function.

I'll get back to the question of whether experiencing is an
all-or-none phenomenon or a matter of degree below. For now, I just
wonder what kind and degree of structural/functional "complexity" you
believe adds up to EXPERIENCING pain as opposed to merely behaving as
if experiencing pain.

> I am assuming that neurons and other "simple" systems are C-1 but
> not C-2 -- and C-2 is the kind of consciousness that people are
> really interested in.

Yes, but do you really think that hard questions like these can be
settled by assumption? The question is: What justifies the inference
that an organism or device is experiencing ANYTHING AT ALL (C-1), and
what justifies interpreting internal functions as conscious ones?
Assumption does not seem like a very strong justification for an
inference or interpretation. What is the basis for your assumption?

I have proposed the TTT as the only justifiable basis, and I've given
arguments in support of that proposal. The default assumptions in the
AI/Cog-Sci community seem to be that sufficiently "complex" function
and performance capacity, preferably with "memory" and "learning," can be
dubbed "conscious," especially with the help of the subsidiary
assumption that consciousness admits of degrees. The thrust of my
critique is that this position is rather weak and arbitrary, and open
to telling counter-examples (like Searle's). But, more important, it
is not an issue on which the Cog-sci community even needs to take a
stand! For Cog-sci's objective goal -- of giving a causal explanation
of organisms' and devices' functional properties -- can be achieved
without embellishing any of its functional constructs with a conscious
interpretation. This is what I've called "methodological
epiphenomenalism."
Moreover, the TTT (as an asymptotic goal) even captures the intuitions
about "sufficient functional complexity and performance capacity" in a
nonarbitrary way.

It is the resolution of these issues by unsupportable assumption, circularity,
arbitrary fiat and obiter dicta that I think is not doing the field
any good. And this is not merely because (1) it makes cog-sci look
silly to philosophers, but because, as I've repeatedly suggested, (2) the
unjustified embellishment of (otherwise trivial, toy-like) function
or performance as "conscious" can actually side-track cog-sci from its
objective, empirical goals, masking performance weaknesses by
anthropomorphically over-interpreting them. Finally (3), the
unrealizable goal of objectively capturing conscious phenomenology,
being illogical, threatens to derail cog-sci altogether, heading it in
the direction of hermeneutics (i.e., subjective interpretation of
mental states, i.e., C-2) rather than objective empirical explanation of
behavioral capacity. [If C-2 is "what people are really interested in,"
then maybe they should turn to lit-crit instead of cog-sci.]

> The mystery for me is why only >>one<< subsystem in my brain
> seems to have that introspective property -- but
> multiple personalities or split-brain subjects may be examples that
> this is not a necessary condition.

Again, we'd probably be better off tackling the mystery of what the
brain can DO in the world, rather than what subjective states it can
generate. But, for the record, there is hardly agreement in clinical
psychology and neuropsychology about whether split-brain subjects or
multiple-personality patients really have more than one "mind," rather
than merely somewhat dissociated functions -- some conscious, some not --
that are not fully integrated, either temporally or experientially.
Inferring that someone has TWO minds seems to be an even trickier
problem than the usual problem ("solved" by the TTT) of inferring that
someone has ONE (a variant of the mind/body problem called the "other-minds"
problem). At least in the case of the latter we have our own, normal unitary
experience to generalize from...

> [Regarding the question of whether consciousness admits of degrees:]
> An airplane either can fly or it can't. Yet there are
> simpler forms of flight used by other entities -- kites, frisbees,
> paper airplanes, butterflies, dandelion seeds... My own opinion
> is that insects and fish feel pain, but often do so in a generalized,
> nonlocalized way that is similar to a feeling of illness in humans.

Flight is an objective, objectively definable function. Experience is
not. We can, for example, say that a massive body that stays aloft in
space for any non-zero period of time is "flying" to a degree. There
is no logical problem with this. But what does it mean to say that
something is conscious to a degree? Does the entity in question
EXPERIENCE anything AT ALL? If so, it is conscious. If not, not. What
has degree to do with it (apart from how much, or how intensely it
experiences, which is not the issue)?

I too believe that lower animals feel pain. I don't want to conjecture
what it feels like to them; but having conceded that it feels like
anything at all, you seem to have conceded that they are conscious.
Now where does the question of degree come into it?

The mind/body problem is the problem of subjectivity. When you ask
whether something is conscious, you're asking whether it has
subjective states at all, not which ones, how many, or how strong.
That is an all-or-none matter, and it concerns C-1. You can't speak of
C-2 at all until you have a principled handle on C-1.

> I assume that lower forms experience lower forms of consciousness
> along with lower levels of intelligence. Such continua seem natural
> to me. If you wish to say that only humans and TTT-equivalents are
> conscious, you should bear the burden of establishing the existence
> and nature of the discontinuity.

I happen to share all those assumptions about consciousness in lower
forms, except that I don't see any continuum of consciousness there at
all. They're either conscious or not. I too believe they are conscious,
but that's an all-or-none matter. What's on a continuum is what they're
conscious OF, how much, to what degree, perhaps even what it's "like" for
them (although the latter is more a qualitative than a quantitative
matter). But THAT it's like SOMETHING is what it is that I am
assenting to when I agree that they are conscious at all. That's C-1.
And it's the biggest discontinuity we're ever likely to know of.

(Note that I didn't say "ever likely to experience," because of course
we DON'T experience the discontinuity: We know what it is like to
experience something, and to experience more or less things, more or less
intensely. But we don't know what it's like NOT to experience
something. [Be careful of the scope of the "not" here: I know what
it's like to see not-red, but not what it's like to not-see red, or be
unconscious, etc.] To know what it's like NOT to experience
anything at all is to experience not-experiencing, which is
a contradiction in terms. This is what I've called, in another paper,
the problem of "uncomplemented" categories. It is normally solved by
analogy. But where the categories are uncomplementable in principle,
analogy fails in principle. I think this is what lies behind our
incoherent intuition that consciousness admits of degrees: experiencing
the conscious/unconscious discontinuity is logically impossible, hence,
a fortiori, experientially impossible.)

> [About why neurons are conscious and atoms are not:]
> When someone demonstrates that atoms can learn, I'll reconsider.

You're showing your assumptions here. What can be more evident about
the gratuitousness of mentalistic interpretation (in place of which I'm
recommending abstention or agnosticism on methodological grounds)
than that you're prepared to equate it with "learning"?

> You are questioning my choice of discontinuity, but mine is easy
> to defend (or give up) because I assume that the scale of
> consciousness tapers off into meaninglessness. Asking whether
> atoms are conscious is like asking whether aircraft bolts can fly.

So far, it's the continuum itself that seems meaningless (and the defense
a bit too easy-going). Asking questions about subjective phenomena
is not as easy as asking about objective ones, hopeful analogies
notwithstanding. The difficulty is called the mind/body problem.

> I hope you're not insisting that no entity can be conscious without
> passing the TTT. Even a rock could be conscious without our having
> any justifiable means of deciding so.

Perhaps this is a good place to point out the frequent mistake of
mixing up "ontic" questions (about what's actually TRUE of the world)
and "epistemic" ones (about what we can KNOW about what's actually true of
the world, and how). I am not claiming that no entity can be conscious
without passing the TTT. I am not even claiming that every entity that
passes the TTT must be conscious. I am simply saying that IF there is
any defensible basis for inferring that an entity is conscious, it is
the TTT. The TTT is what we use with one another, when we daily
"solve" the informal "other-minds" problem. It is also cog-sci's
natural asymptotic goal in mind-modeling, and again the only one that
seems methodologically and logically defensible.

I believe that animals are conscious; I've even spoken of
species-specific variants of the TTT; but with these variants both our
intuitions and our ecological knowledge become weaker, and with them
the usefulness of the TTT in such cases. Our inability to devise or
administer an animal TTT doesn't make animals any less conscious. It just
makes it harder to know whether they are, and to justify our inferences.

(I'll leave the case of the stone as an exercise in applying the
ontic/epistemic distinction.)

>>SH: "(To reply that synthetic substances with the same functional properties
>> must be conscious under these conditions is to beg the question.)"

>KL: I presume that a synthetic replica of myself, or any number of such
> replicas, would continue my consciousness.

I agree completely. The problem was justifying attributing consciousness
to neurons and denying it to, say, atoms. It's circular to say
neurons are conscious because they have certain functional properties
that atoms lack MERELY on the grounds that neurons are functional
parts of (obviously) conscious organisms. If synthetic components
would work just as well (as I agree they would), you need a better
justification for imputing consciousness to neurons than that they are
parts of conscious organisms. You also need a better argument for
imputing consciousness to their synthetic substitutes. The TTT is my
(epistemic) criterion for consciousness at the whole-organism level.
Its usefulness and applicability trail off drastically with lower and lower
organisms. I've criticized cog-sci's default criteria earlier in this
response. What criteria do you propose, and what is the supporting
justification, for imputing consciousness to, say, neurons?

> Perhaps professional philosophers are able to strive for a totally
> consistent world view.

The only thing at issue is logical consistency, not world view. And even
professional scientists have to strive for that.

> Why is there Being instead of Nothingness? Who cares?

These standard examples (along with the unheard sound of the tree
falling alone in the forest) are easily used to lampoon philosophical
inquiry. They tend to be based on naive misunderstandings of what
philosophers are actually doing -- which is usually as significant and
rigorous as any other area of logically constrained intellectual
inquiry (although I wouldn't vouch for all of it, in any area of
inquiry).

But in this case consider the actual ironic state of affairs:
It is cog-sci that is hopefully opening up and taking an ambitious
position on the problems that normally only concern philosophers,
such as the mind/body problem. NONphilosophers are claiming: "this is
conscious and that's not," and "this is why," and "this is what
consciousness is." So who's bringing it up, and who's the one that cares?

Moreover, I happen myself to be a nonphilosopher (although I have a
sizeable respect for that venerable discipline and its inevitable quota
of insightful exponents); yet I repeatedly find myself in the peculiar
role of having to point out the philosophically well-known howlers
that cog-sci keeps tumbling into in its self-initiated inquiry into
"Nothingness." More ironic still, in arguing for the TTT and methodological
epiphenomenalism, I am actually saying: "Why do you care? Worrying about
consciousness will get you nowhere, and there's objective empirical
work to do!"


> If I had to build an aircraft, I would not begin by refuting
> theological arguments about Man being given dominion over the
> Earth rather than the Heavens. I would start from a premise that
> flight was possible and would try to derive enabling conditions.

Building aircraft and devices that (attempt to) pass the TTT are objective,
do-able empirical tasks. Trying to model conscious phenomenology, or to
justify interpreting processes as conscious, gets you as embroiled in
"theology" as trying to justify interpreting the Communion wafer as the
body of Christ. Now who's the pragmatist and who's the theologian?

--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 30 Jan 87 07:39:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: Why I am not a Methodological Epiphenomenalist


> > Me: Consciousness may be as superfluous (wrt evolution) as earlobes.
> > That hardly goes to show that it ain't there.
>
> Harnad: Agreed. It only goes to show that methodological epiphenomenalism may
> indeed be the right research strategy.
>
> > I don't think [the existence of consciousness] does NEED to be so.
> > It just is so.
>
> Fine. Now what are you going to do about it, methodologically speaking?
>
> ... Methodological epiphenomenalism recommends we face it [the inability
> to objectively measure subjective phenomena] and live
> with it, since not that much is lost. The "incompleteness" of an
> objective account is, after all, just a subjective problem. But
> supposing away the incompleteness -- by wishful thinking, hopeful
> over-interpretation, hidden (subjective) premises or blurring of the
> objective/subjective distinction -- is a logical problem.

A few points:

1. Insofar as methodological epiphenomenalism (ME) is simply the
following kind of counsel: "when trying to get a computer to play chess,
don't worry about the subjective feelings that accompany human
chess-playing, just get the machine to make the right moves," I have no
particular quarrel with it.

2. It is the claim that the TTT is the only relevant criterion (or,
by far, the major criterion) for the presence of consciousness that
strikes me as unnecessarily provocative and, almost as bad, false.
It is not clear to me whether this claim is an integral part of ME,
or an independent thesis. At any rate, such a claim is clearly
a philosophical one, having to do mainly with the epistemology of
consciousness, and as such is fair game for philosophically-based
(rather than AI-research-based) debate. If the claim instead were
that the TTT is the major criterion for the presence of intelligence
(defined in a perhaps somewhat austere way, as the ability to
perform certain kinds of tasks...) then, again, I would have no
serious disagreement.

3. Is the incompleteness of objective accounts of the world "just a
subjective problem"? Is it true that "not that much is lost"?
Well, I guess each of us can decide how much to be bothered by this
incompleteness. I agree it's no argument against AI, psychophysics
or anything else that they "leave consciousness out" any more than it
is that they leave astronomy out. But there are astronomers around to
cover that ground (metaphorical ground, of course). It does bother me
(more than it does you?) that consciousness, of all things,
consciousness, which may be subjective, but, we agree, is real,
consciousness, without which my day would be so boring, is simply not
addressed by any systematic rational inquiry.

John Cugini <Cugini@icst-ecf>

------------------------------

Date: 26 Jan 87 23:43:37 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu (Martin Taylor)
Subject: Re: Minsky on Mind(s)


> To telescope the intuitive sense
> of the rebuttals: Do you believe rooms or corporations feel pain, as
> we do?

That final comma is crucial. Of course they do not feel pain as we do,
but they might feel pain, as we do.

On what grounds do you require proof that something has consciousness,
rather than proof that it has not? Can there be grounds other than
prejudice (i.e., a prior judgment that consciousness in non-humans is
overwhelmingly unlikely)? As I understand the Total Turing Test,
the objective is to find whether something can be distinguished from
a human, but this again prejudges the issue. I don't think one CAN use
the TTT to assess whether another entity is conscious.

As I have tried to say in a posting that may or may not get to mod.ai,
Ockham's razor demands that we describe the world using the simplest
possible hypotheses, INCLUDING the boundary conditions, which involve
our prior conceptions. It seems to me simpler to ascribe consciousness
to an entity that resembles me in many ways than not to ascribe
consciousness to that entity. Humans have very many points of resemblance;
comatose humans fewer. Silicon-based entities have few overt points
of resemblance, so their behaviour has to be convincingly like mine
before I will grant them a consciousness like mine. I don't really
care whether their behaviour is like yours, if you don't have
consciousness, and as Steve Harnad has so often said, mine is the
only consciousness I can be sure of.

The problem splits in two ways: (1) Define consciousness so that it does
not involve a reference to me, or (2) Find a way of describing behaviour
that is simpler than ascribing consciousness to me alone. Only if you
can fulfil one of these conditions can there be a sensible argument about
the consciousness of some entity other than ME.
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

------------------------------

End of AIList Digest
********************
