AIList Digest            Monday, 26 Jan 1987       Volume 5 : Issue 15 

Today's Topics:
Philosophy - Consciousness

----------------------------------------------------------------------

Date: Fri, 23 Jan 87 03:59:40 cst
From: goldfain@uxe.cso.uiuc.edu (Mark Goldfain )
Subject: For the AIList "Consciousness" Discussion


*************************************************************************
*                                                                       *
*   Consciousness is like a large ribosome, working its way along      *
*   the messenger RNA of our perceptual inputs.  Or again, it is       *
*   like a multi-headed Turing machine, with the heads marching in     *
*   lock step down the great input tape of life.                       *
*                                                                       *
*************************************************************************

Lest anyone think I am saying more than I actually am, please understand
that these are both meant as metaphors. I am not making ANY claim that mRNA
is the chemical of brain activities, nor that we are finite-state machines, et
cetera ad nauseam. I am only trying to get us off of square zero in our
characterization of how "being conscious" can be understood.

It must be something which has a "window" of a finite time period, for we
can sense the "motion" of experiences "through" our consciousness. It must be
more involved than a ribosome or a basic Turing device, since in addition to
being able to access the "present", it continually spins off things that we
call "memories", and ties these things down into a place that allows them to
be pulled back into the consciousness. (Actually, the recall of long term
memory is more like the process of going into a dark room with a tuning fork,
giving it a whack, then listening for something that resonates, going over to
the sound, and picking it up ... so perhaps the memories are not "tied down"
with pointers at all.)
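
To make the metaphor a bit more concrete, here is a minimal sketch in
Python (purely illustrative -- the class and method names are invented,
not anyone's proposed model): a fixed-size window slides over the input
stream, older items are spun off into a memory store, and recall works
by "resonance" (similarity matching) rather than by following pointers.

    from collections import deque
    from difflib import SequenceMatcher

    class WindowedMind:
        """Toy rendering of the 'finite window' metaphor."""

        def __init__(self, window_size=5):
            self.window = deque(maxlen=window_size)  # the conscious "present"
            self.memory = []                         # spun-off "memories"

        def perceive(self, event):
            # When the window is full, the oldest experience leaves it
            # and becomes a long-term memory.
            if len(self.window) == self.window.maxlen:
                self.memory.append(self.window[0])
            self.window.append(event)

        def recall(self, probe):
            # Tuning-fork recall: return whatever stored item "resonates"
            # with (is most similar to) the probe; no pointer is followed.
            if not self.memory:
                return None
            return max(self.memory,
                       key=lambda m: SequenceMatcher(None, probe, m).ratio())

For example, after perceiving the words of a long sentence one at a
time, recall("quik") would return "quick" once that word had slipped
out of the window -- retrieval by resonance, not by address.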

------------------------------

Date: 23 Jan 87 21:15:22 GMT
From: ihnp4!cuae2!ltuxa!cuuxb!mwm@ucbvax.Berkeley.EDU (Marc W.
Mengel)
Subject: Re: More on Minsky on Mind(s)


In article <460@mind.UUCP> Stevan Harnad (harnad@mind.UUCP) writes:
> [ discussion of C-1 and C-2]

It seems to me that human consciousness is actually more
of a C-n; C-1 being "capable of experiencing sensation",
C-2 being "capable of reasoning about being C-1", and C-n
being "capable of reasoning about C-1..C-(n-1)" for some
arbitrarily large n... Or was that really the intent of
the Minsky C-2?
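
Read that way, the definition is just a recursive tower. A throwaway
sketch in Python (invented names, and no claim that this is what
Minsky meant):

    def c_level(n):
        # C-1 is raw sensation; each higher level merely reasons
        # about all of the levels below it.
        if n == 1:
            return "C-1: experiences sensation"
        below = ", ".join("C-%d" % k for k in range(1, n))
        return "C-%d: reasons about %s" % (n, below)

    for n in range(1, 5):
        print(c_level(n))

which prints C-1 through C-4, each level's description growing to
cover everything beneath it -- the "arbitrarily large n" above.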

--
Marc Mengel
...!ihnp4!cuuxb!mwm

------------------------------

Date: 23 Jan 87 16:10:53 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: Minsky on Mind(s)


Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

> Given that the dog >>is<< conscious,
> the evolutionary or teleological role of the pain stimulus seems
> straightforward. It is a way for bodily tissues to get the attention
> of the reasoning centers.

Unfortunately, this is no reply at all. It is completely steeped in
the anthropomorphic interpretation to begin with, whereas the burden
is to JUSTIFY that interpretation: Why do tissues need to get the
"attention" of reasoning centers? Why can't this happen by brute
causality, like everything else, simple or complicated?

Nor is the problem of explaining the evolutionary function of consciousness
any easier to solve than justifying a conscious interpretation of machine
processes. For every natural-selectional scenario -- every
nondualistic one, i.e., one that doesn't give consciousness
an independent, nonphysical causal force -- is faced with the problem
that the scenario is turing-indistinguishable from the exact same ecological
conditions, with the organisms only behaving AS IF they were
conscious, while in reality being insentient automata. The very same
survival/advantage story would apply to them (just as the very same
internal mechanistic story would apply to a conscious device and a
turing-indistinguishable as-if surrogate).

No, evolution won't help. (And "teleology" of course begs the
question.) Consciousness is just as much of an epiphenomenal
fellow-traveller in the Darwinian picture as in the cognitive one.
(And saying "it" was a chance mutation is again to beg the what/why
question.)

> Why (or, more importantly, how) the dog is conscious in the first place,
> and hence >>experiences<< the pain, is the problem you are pointing out.

That's right. And the two questions are intimately related. For when
one is attempting to justify a conscious interpretation of HOW a
device is working, one has to answer WHY the conscious interpretation
is justified, and why the device can't do exactly the same thing (objectively
speaking, i.e., behaviorally, functionally, physically) without the
conscious interpretation.

> an analogy between the brain and a corporation,
> ...the natural tendency of everyone to view the CEO as the
> center of corporate consciousness was evidence for emergent consciousness
> in any sufficiently complex hierarchical system.

I'm afraid that this is mere analogy. Everyone knows that there's no
AT&T to stick a pin into, and to correspondingly feel pain. You can do
that to the CEO, but we already know (modulo the TTT) that he's
conscious. You can speak figuratively, and even functionally, of a
corporation as if it were conscious, but that still doesn't make it so.

> my previous argument that Searle's Chinese Room
> understands Chinese even though neither the occupant nor his printed
> instructions do.

Your argument is of course the familiar "Systems Reply." Unfortunately,
it is open to (likewise familiar) rebuttals -- rebuttals I consider
decisive, but that's another story. To telescope the intuitive sense
of the rebuttals: Do you believe rooms or corporations feel pain, as
we do?

> I believe that consciousness is a quantitative
> phenomenon, so the difference between my consciousness and that of
> one of my neurons is simply one of degree. I am not willing to ascribe
> consciousness to the atoms in the neuron, though, so there is a bottom
> end to the scale.

There are serious problems with the quantitative view of
consciousness. No doubt my alertness, my sensory capacity and my
knowledge admit of degrees. I may feel more pain or less pain, more or
less often, under more or fewer conditions. But THAT I feel pain, or
experience anything at all, seems an all-or-none matter, and that's
what's at issue in the mind/body problem.

It also seems arbitrary to be "willing" to ascribe consciousness to
neurons and not to atoms. Sure, neurons are alive. And they may even
be conscious. (So might atoms, for that matter.) But the issue here
is: what justifies interpreting something/someone as conscious? The
Total Turing Test has been proposed as our only criterion. What
criterion are you using with neurons? And even if single cells are
conscious -- do feel pain, etc. -- what evidence is there that this is
RELEVANT to their collective function in a superordinate organism?

Organs can be replaced by synthetic substances with the relevant
functional properties without disturbing the consciousness of the
superordinate organism. It's a matter of time before this can be done
with the nervous system. It can already be done with minor parts of
the nervous system. Why doesn't replacing conscious nerve cells with
synthetic molecules matter? (To reply that synthetic substances with the
same functional properties must be conscious under these conditions is
to beg the question.)

[If I sound like I'm calling an awful lot of gambits "question-begging,"
it's because the mind/body problem is devilishly subtle, and the
temptation to capitulate by slipping consciousness back into one's
premises is always there. I'm just trying to make these potential
pitfalls conscious... There have been postings in this discussion
to which I have given up on replying because they've fallen so deeply
into these pits.]

> What fraction of a neuron (or of its functionality)
> is required for consciousness is below the resolving power of my
> instruments, but I suggest that memory (influenced by external
> conditions) or learning is required. I will even grant a bit of
> consciousness to a flip-flop :-).
> The consciousness only exists in situ, however: a
> bit of memory is only part of an entity's consciousness if it is used
> to interpret the entity's environment.

What instruments are you using? I know only the TTT. You (like Minsky
and others) are placing a lot of faith in "memory" and "learning." But
we already have systems that remember and learn, and the whole
point of this discussion concerns whether and why this is sufficient to
justify interpreting them as conscious. To reply that it's again a matter
of degree is again to obfuscate. [The only "natural" threshold is the
TTT, and that's not just a cognitive increment in learning/memory, but
complete functional robotics. And of course even that is merely a
functional goal for the theorist and an intuitive sop for the amateur
(who is doing informal turing testing). The philosopher knows that
it's no solution to the other-minds problem.]

What you say about flip-flops of course again prejudges or begs the
question.
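
For what it's worth, the "in situ" condition quoted above can at least
be stated mechanically. A toy sketch (hypothetical names throughout,
and no claim that this is what Laws has in mind): one bit of memory
that is both written by the environment and read back to interpret it.

    class Thermostat:
        """One flip-flop's worth of state, used 'in situ': the stored
        bit is set by the environment and used to interpret it."""

        def __init__(self, setpoint=20.0, hysteresis=1.0):
            self.heating = False  # the single bit of memory
            self.setpoint = setpoint
            self.hysteresis = hysteresis

        def step(self, temperature):
            # Hysteresis makes the response depend on the stored bit,
            # not just on the current input.
            if temperature < self.setpoint - self.hysteresis:
                self.heating = True
            elif temperature > self.setpoint + self.hysteresis:
                self.heating = False
            return self.heating

Whether such a device thereby has "a bit of consciousness" is, of
course, exactly what is in dispute.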

> Fortunately, I don't have my heart set on creating conscious systems.
> I will settle for creating intelligent ones, or even systems that are
> just a little less unintelligent than the current crop.

If I'm right, this is the ONLY way to converge on a system that passes
the TTT (and therefore might be conscious). The modeling must be ambitious,
taking on increasingly life-size chunks of organisms' performance
capacity (a more concrete and specific concept than "intelligence").
But attempting to model conscious phenomenology, or interpreting toy
performance and its underlying function as if it were doing so, can
only retard and mask progress. Methodological Epiphenomenalism.
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 23 Jan 87 08:15:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: consciousness as a superfluous concept, but so what


> Stevan Harnad:
>
> ...When the dog's tooth is injured,
> and it does the various things it does to remedy this -- inflammation
> reaction, release of white blood cells, avoidance of chewing on that
> side, seeking soft foods, giving signs of distress to his owner, etc. etc.
> -- why do the processes that give rise to all these sequelae ALSO need to
> give rise to any pain (or any conscious experience at all) rather
> than doing the very same tissue-healing and protective-behavioral job
> completely unconsciously? Why is the dog not a turing-indistinguishable
> automaton that behaves EXACTLY AS IF it felt pain, etc, but in reality
> does not? That's another variant of the mind/body problem, and it's what
> you're up against when you're trying to justify interpreting physical
> processes as conscious ones. Anything short of a convincing answer to
> this amounts to mere hand-waving on behalf of the conscious interpretation
> of your proposed processes.

This seems an odd way to put it - why does X "need" to produce Y?
Why do spinning magnets "need" to generate electric currents? I
don't think that's quite the right question to ask about causes and
events - sounds vaguely anthropomorphic to me. It's enough to say
that, in fact, certain types of events (spinning magnets, active
brains) do in fact cause, give rise to, certain other types of events
(electric currents, experiences). Now, now, don't panic, I know that
the epistemological justification for believing in the existence and
causes of experiences (one's own and that of others) is quite
different from that for electric currents. I tried to outline the
epistemology in the longish note I sent a month or so ago (the one
with events A1, B1, C1, which talked about brains as a more important
criterion for consciousness than performance, etc.).

Do I sense here the implicit premise that there must be an evolutionary
explanation for the existence of consciousness? And that consciousness
is a rationally justified concept iff such an evolutionary role for it
can be found? But sez who? Consciousness may be as superfluous (wrt
evolution) as earlobes. That hardly goes to show that it ain't there.

The point is, given a satisfactory justification for believing that
a) experiences exist and b) are (in the cases we know of) caused by
the brain, I don't see why a "pro-consciousness" person should feel
obligated to answer why this NEEDS to be so. I don't think it does
NEED to be so. It just is so.


John Cugini <Cugini@icst-ecf>

------------------------------

End of AIList Digest
********************
