AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 35 

Today's Topics:
Philosophy - Consciousness & Nonconsciousness

----------------------------------------------------------------------

Date: Mon, 02 Feb 87 17:35:13 n
From: DAVIS@EMBL.BITNET
Subject: backtracking.....


It seems like it's time on the AIList to cut around some of the
interesting but bottomless waffle that has come to fill this useful organ.
I fear that Stevan Harnad's most important point is being lost in his
endless efforts to deal with a shower of light and insubstantial blows.
At the same time, his own language and approach to the problem are
obscuring some of the issues he himself raises.

I cannot help but notice that the debates on consciousness that
we're seeing resemble the debating of the Data General engineers in
Tracy Kidder's book "The Soul of a New Machine". It's time to wake up,
folks - we're not building a new Eclipse, with some giant semiconductor
supplying the new 60880 'conscious' chip, and the only real task left
being the arranging of the goodies to make use of its wondrous capacities.
No, it's time to wake up to the mystery of the C-1: How can ANYTHING *know*
ANYTHING at all? We are not concerned with how we shuffle the use of
memory, illusion, perceptual inputs etc., so as to maximise efficiency and
speed - we are concerned with the most fundamental problem of all - how
can we know? Too many contributors seem to me to be concerned with the
secondary extension of this question to a specific version of the general
one "how can we know about X?". It may be important for AI programmers
to deal with ways of shuffling the data and the processing order so that
a system gets access to X for further data manipulation, but this has
ABSOLUTELY NOTHING to do with the primary question of how it is possible
to know anything.....

The glimpses of Dennett & Hofstadter's wise approach that we've seen
are encouraging, but still we see Harnad struggling with why's and not how's.
Being a molecular biologist by trade if not religion, I would like to
temporarily assert that consciousness is a *biological* phenomenon, and,
taking Harnad's bull by its horns once again, to assert further that because
this is so, the question of *why* consciousness is used is quite irrelevant
in this context. Haven't any of you read Armstrong and the other arguers
for the selection of consciousness as a means for social interaction? I
agree with Harnad that to put the origin of consciousness in the same murky
quasi-random froth as, say, that of self-splicing introns is a backdoor exit
from the problem. However, consciousness would certainly seem to be here
- leave it to the evolutionary biologists to sort out why, while we get
on with the how.....

Not that we are getting anywhere fast though... and I would hark
back to the point I tried to raise a while back, merely to get sidetracked
into the semantics of intention vs. intension. This is the same point that
Harnad has been making somewhat obliquely in his wise call for performance
oriented development of AI systems. We have to separate the issues of
personal consciousness (subjective experience) and the notorious "problem
of other minds". When dealing with the possibilities of designing conscious
machines, we can only concern ourselves with the latter issue - such devices
will *always* be "other minds" and never available for our subjective
experience. As many contributors have shown, we can only judge "other-mind"
consciousness by performance-oriented measures. So, on with the nuts, on
with the bolts, and forward to sentient silicon......

paul ("the questions were easy - I just didn't know the answers") davis

mail: EMBL, postfach 10.22.09, 6900 Heidelberg, FRG
email: davis@embl.bitnet
(and hence available from UUCP, ARPA, JANET, CSNET etc...)

------------------------------

Date: 3 Feb 87 06:01:23 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Why I Am Not A Methodological Mentalist


cugini@icst-ecf.UUCP ("CUGINI, JOHN") wrote on mod.ai:

> "Why I am not a Methodological Epiphenomenalist"

This is an ironic twist on Russell's sceptical book about religious
beliefs! I'm the one who should be writing "Why I'm Not a Methodological
Mentalist."


> Insofar as methodological epiphenomenalism (ME) is simply the
> following kind of counsel: "when trying to get a computer to
> play chess, don't worry about the subjective feelings which
> accompany human chess-playing, just get the machine to make
> the right moves", I have no particular quarrel with it.

It's a bit more than that, as indicated by the "Total" in the Total
Turing Test. The counsel is NOT to rest with toy models and modules:
the only kind of performance that will meet our sole frail intuitive
criterion for contending with the real-world other-minds problem --
indistinguishability from a person like any other -- is the total
performance capacity of a (generic) person. Settling for less mires us
in an even deeper underdetermination than we're stuck in anyway. The
asymptotic TTT is the only way to reduce that underdetermination to
the level we're already accustomed to. Chess-playing is simply not
enough. In mind-modeling, it's all-or-nothing. And this is again a
methodological matter. [I know that this is going to trigger (not from
Cugini) another series of queries about animals, retardates, aliens,
subtotal modules. Please, first read the prior iterations on those matters...]

> It is the claim that the TTT is the only relevant criterion (or,
> by far, the major criterion) for the presence of consciousness that
> strikes me as unnecessarily provocative and, almost as bad, false.
> It is not clear to me whether this claim is an integral part of ME,
> or an independent thesis... If the claim instead were
> that the TTT is the major criterion for the presence of intelligence
> (defined in a perhaps somewhat austere way, as the ability to
> perform certain kinds of tasks...) then, again, I would have no
> serious disagreement.

The TTT is an integral part of ME, and the shorthand reminder of why it
must be is this: A complete, objective, causal theory of the mind will
always be equally true of conscious organisms like ourselves AND of
insentient automata that behave exactly as if they were conscious --
i.e., are turing-indistinguishable from ourselves. (It is irrelevant
whether there could really be such insentient perform-alikes; the point is
that there is no objective way of telling the difference. Hence the
difference, if any, cannot make a difference to the objective
theory. Ergo, methodological epiphenomenalism.)

The TTT may be false, of course; but unfortunately, it's not
falsifiable, so we cannot know whether or not it is in reality false.
[I'd also like to hold off the hordes -- again not Cugini -- who are
now poised to pounce on this "nonfalsifiability." The TTT is a
methodological criterion and not an empirical hypothesis. Its only
justification is that it's the only criterion available and it's the
one we use in real life already. It's also the best that one can hope for
from objective inquiry. And what is science, if not that?]

Nor will it do to try to duck the issue by focusing on "intelligence."
We don't know what intelligence is, except that it's something that
minds have, as demonstrated by what minds do. The issue, as I must
relentlessly keep recalling, is not one of definition. It cannot be
settled by fiat. Intelligence is as intelligence does. We know minds
are intelligent, if anything is. Hence only the capacity to pass the
TTT is so far entitled to be dubbed intelligent. Lesser performances
-- toy models and modules -- are no more than clever tricks, until we
know how (and whether) they figure functionally in a candidate that
can pass the TTT.

> It does bother me (more than it does you?) that consciousness,
> of all things, consciousness, which may be subjective, but, we
> agree, is real, consciousness, without which my day would be so
> boring, is simply not addressed by any systematic rational inquiry.

It does bother me. It used to bother me more; until I realized that
fretting about it only had two outcomes: To lure me into flawed
arguments about how consciousness can be "captured" objectively after
all, and to divert attention from ambitious performance modeling to
doing hermeneutics on trivial performances and promises of
performances. It also helps to settle my mind about it that if one
adopts an epiphenomenalist stance not only is consciousness
bracketed, but so is its vexatious cohort, "free will." I'm less
bothered in principle by the fact that (nondualistic) science has no
room for free will -- that it's just an illusion -- but that certainly
doesn't make the bothersome illusion go away in practice. (By the way,
without consciousness, your day wouldn't even be boring.)
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 2 Feb 87 22:52:29 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu (Martin Taylor)
Subject: Re: Minsky on Mind(s)

>> = Martin Taylor (me) > = Stevan Harnad
>
>> Of course [rooms and corporations] do not feel pain as we do,
>> but they might feel pain, as we do.
>
>The solution is not in the punctuation, I'm afraid. Pain is just an
>example standing in for whether the candidate experiences anything AT
>ALL. It doesn't matter WHAT a candidate feels, but THAT it feels, for
>it to be conscious.
Understood. Nevertheless, the punctuation IS important, for although it
is most unlikely they feel as we do, it is less unlikely that they feel.

>
>> [i] Occam's razor demands that we describe the world using the simplest
>> possible hypotheses.
>> [ii] It seems to me simpler to ascribe consciousness to an entity that
>> resembles me in many ways than not to ascribe consciousness to that
>> entity.
>> [iii] I don't think one CAN use the TTT to assess whether another
>> entity is conscious.
>> [iv] Silicon-based entities have few overt points of resemblance,
>> so their behaviour has to be convincingly like mine before I will
>> grant them a consciousness like mine.
>
>{i} Why do you think animism is simpler than its alternative?
Because of [ii].
>{ii} Everything resembles everything else in an infinite number of
>ways; the problem is sorting out which of the similarities is relevant.
Absolutely. Watanabe's Theorem of the Ugly Duckling applies. The
distinctions (and similarities) we deem important are no more or less
real than the infinity of ones that we ignore. Nevertheless, we DO see
some things as more alike than other things, because we see some similarities
(and some differences) as more important than others.
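
[A minimal, purely illustrative sketch of what Watanabe's theorem asserts:
if every Boolean predicate over a finite set of objects - i.e. every subset
of the objects - is counted with equal weight, then any two distinct objects
share exactly the same number of predicates, so any judgement of similarity
has to come from weighting some predicates above others. The object names
below are invented for the example.]

    from itertools import combinations

    objects = ["swan_A", "swan_B", "ugly_duckling"]

    # Every Boolean predicate over the objects corresponds to a subset of
    # them: the set of objects for which the predicate is true.
    predicates = [
        frozenset(subset)
        for r in range(len(objects) + 1)
        for subset in combinations(objects, r)
    ]

    # Count, for each pair of objects, how many predicates hold of both.
    for x, y in combinations(objects, 2):
        shared = sum(1 for p in predicates if x in p and y in p)
        print(x, "and", y, "share", shared, "predicates")

    # Every pair shares 2 ** (len(objects) - 2) = 2 predicates: with
    # unweighted predicates, no pair is more similar than any other.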

In the matter of consciousness, I KNOW (no counterargument possible) that
I am conscious, Ken Laws knows he is conscious, Steve Harnad knows he is
conscious. I don't know this of Ken or Steve, but their output on a
computer terminal is enough like mine for me to presume by that similarity
that they are human. By Occam's razor, in the absence of evidence to the
contrary, I am forced to believe that most humans work the way I do. Therefore
it is simpler to presume that Ken and Steve experience consciousness than
that they work according to one set of natural laws, and I, alone of all
the world, conform to another.

>{iii} The Total Turing Test (a variant of my own devising, not to be
>confused with the classical turing test -- see prior chapters in these
>discussions) is the only relevant criterion that has so far been
>proposed and defended. Similarities of appearance are obvious
>nonstarters, including the "appearance" of the nervous system to
>untutored inspection. Similarities of "function," on the other hand,
>are moot, pending the empirical outcome of the investigation of what
>functions will successfully generate what performances (the TTT).
All the TTT does, unless I have it very wrong, is provide a large set of
similarities which, taken together, force the conclusion that the tested
entity is LIKE ME, in the sense of [i] and [ii].

>{iv} [iv] seems to be in contradiction with [iii].
Not at all. What I meant was that the biological mechanisms of natural
life follow (by Occam's razor) the same rules in me as in dogs or fish,
and that I therefore need less information about their function than I
would for a silicon entity before I would treat one as conscious.

One of the paradoxes of AI has been that as soon as a mechanism is
described, the behaviour suddenly becomes "not intelligent." The same
is true, with more force, for consciousness. In my theory about another
entity that looks and behaves like me, Occam's razor says I should
presume consciousness as a component of their functioning. If I have
been told the principles by which an entity functions, and those principles
are adequate to describe the behaviour I observe, Occam's razor (in its
original form "Entities should not needlessly be multiplied") says that
I should NOT introduce the additional concept of consciousness. For the
time being, all silicon entities function by principles that are well
enough understood that the extra concept of consciousness is not required.
Maybe this will change.

>
>> The problem splits in two ways: (1) Define consciousness so that it does
>> not involve a reference to me, or (2) Find a way of describing behaviour
>> that is simpler than ascribing consciousness to me alone. Only if you
>> can fulfil one of these conditions can there be a sensible argument
>> about the consciousness of some entity other than ME.
>
>It never ceases to amaze me how many people think this problem is one
>that is to be solved by "definition." To redefine consciousness as
>something non-subjective is not to solve the problem but to beg the
>question.
>
I don't see how you can determine whether something is conscious without
defining what consciousness is. Usually it is done by self-reference.
"I experience, therefore I am conscious." Does he/she/it experience?
But never is it prescribed what experience means. Hence I do maintain
that the first problem is that of definition. But I never suggested that
the problem is solved by definition. Definition merely makes the subject
less slippery, so that someone who claims an answer can't be refuted by
another who says "that wasn't what I meant at all."

The second part of my split attempts to avoid the conclusion from
similarity that beings like me function like me. If a simpler description
of the world can be found, then I no longer should ascribe consciousness
to others, whether human or not. Now, I believe that better descriptions
CAN be found for beings as different from me as fish or bacteria or
computers. I do not therefore deny or affirm that they have experiences.
(In fact, despite Harnad, I rather like Ken Laws's (?) proposition that
there is a graded quality of experience, rather than an all-or-none
choice). What I do argue is that I have better grounds for not treating
these entities as conscious than I do for more human-like entities.

Harnad says that we are not looking for a mathematical proof, which is
true. But most of his postings demand that we show the NEED for assuming
consciousness in an entity, which is empirically the same thing as
proving it to be conscious.
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

------------------------------

Date: Sun 8 Feb 87 22:52:30-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Disclaimer of Consciousness

From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu (Martin Taylor)
> In the matter of consciousness, I KNOW (no counterargument possible)
> that I am conscious, Ken Laws knows he is conscious, Steve Harnad
> knows he is conscious.

I'm not so sure that I'm conscious. Oh, in the linguistic sense I
have the same property of consciousness that [we presume] everyone
has. But I question the "I experience a toothache" touchstone for
consciousness that Steve has been using. On the one hand, I'm not
sure I do experience the pain because I'm not sure what "I" is doing
the experiencing; on the other hand, I'm not sure that silicon systems
can't experience pain in essentially the same way. Instead of claiming
that robots can be conscious, I am just as willing to claim that
consciousness is an illusion and that I am just as unconscious as
any robot.

It is difficult to put this argument into words because the
presumption of consciousness is built into the language itself, so
let's try to examine the linguistic assumptions.

First, the word "I". The aspect of my existence that we are interested
in here is my mind, which is somehow dependent on my brain as a substrate.
My brain is a system of neural circuits. The "I" is a property of the
system, and cannot be claimed by any neural subsystem (or homunculus),
although some subsystems may be more "central" to my identity than others.

Consciousness would also seem to be a property of the whole system.
But not so fast -- there is strong evidence that consciousness (in the
sense of experiencing and responding to stimuli) is primarily located
in the brain stem. Large portions of the cortex can be cut away with
little effect on consciousness, but even slight damage to the upper
brain stem causes loss of consciousness. [I am not recanting my position
that consciousness is quantitative across species. Within something
as complex as a human (or a B-52), emergent system properties can be
very fragile and thus seem to be all or nothing.] We must be careful
not to equate sensory consciousness with personality (or personal
behavioral characteristics, as in the TTT), self, or soul.

Well, I hear someone saying, that kind of consciousness hardly counts;
all birds and mammals (at least) can be comatose instead of awake --
that doesn't prove they >>experience<< pain when they are awake. Ah,
but that leads to further difficulties. The experience is real --
after all, behavior changes because of it. We need to know if the
process of experience is just the setting of bits in memory, or if
there is some awareness that goes along with the changes in the neural
substrate.

All right, then, how about self-awareness? As the bits are changed,
some other part of the brain (or the brain as a whole) is "watching"
and interprets the neural changes as a painful experience. But either
that pushes us back to a conscious homunculus (and ultimately to a
nonphysical soul) or we must accept that computers can be self-aware
in that same sense. No, self-awareness is Steve's C-2 consciousness.
What we have to get a grip on is C-1 consciousness, an awareness of
the pain itself.

One way out is to assume that neurons themselves are aware of pain,
and that our overall awareness is some sum over the individual
discomforts. But the summation requires that the neurons communicate
their pain, and we are back to the problem of how the rest of the
brain can sense and interpret that signal. A similar dead end is
to suppose that toothache signals interfere with brain functioning and
that the brain interprets its own performance degradations as pain.
What is the "I" that has the awareness of pain?

How do we know that we experience pain? (Or, following Descartes,
that we experience our own thought?) We can formulate sentences
about the experience, but it seems doubtful that our speech centers
are the subsystems that actually experience the pain. (That theory,
that all awareness is linguistic awareness, has been suggested. I am
reminded of the saying that there is no idea so outrageous that it
has not been championed by some philosopher.) Similarly we can
rule out the motor center, the logical centers, and just about any
other centers of the brain. Either the pain is experienced by some
tiny neural subsystem, in which case "I" am not the conscious agent,
or it is experienced by the system as a whole, in which case analogous
states or processes in analogous systems should also be considered
conscious.

I propose that we bite the bullet and accept that our "experience"
or "awareness" of pain is an illusion, replicable in all relevant
respect by inorganic systems. Terms such as pain, experience,
awareness, consciousness, and self are crude linguistic analogies,
based on false models, to the true patterns of neural events.
Pain is real, as are the other concepts, but our model of how
they arise and interrelate is hopelessly animistic.

-- Ken Laws

------------------------------

End of AIList Digest
********************
