AIList Digest            Friday, 23 Jan 1987       Volume 5 : Issue 13 

Today's Topics:
Philosophy - Introspection & Consciousness

----------------------------------------------------------------------

Date: Tue, 20 Jan 87 21:17:02 CST
From: Girish Kumthekar <kumthek%lsu.csnet@RELAY.CS.NET>
Subject: Another Lengthy Philosophical ....

I can see a storm brewing on the horizon about minds, consciousness, etc.
AILIST readers beware!!!! Remember Turing Machines, etc.?

Possible future contributors, PLEASE limit your contributions to a
manageable volume!!

Girish Kumthekar

kumthek%lsu@CSNET-RELAY.CSNET

------------------------------

Date: Thu, 22 Jan 87 15:51:42 PST
From: kube%cogsci.Berkeley.EDU@berkeley.edu (Paul Kube)
Subject: The sidetracking of introspection

>From: hester@ai.cel.fmc.com (Tom Hester):

>Finally, R.J. Faichney is absolutely correct. It was not Freud that
>side tracked psychology from introspection. Rather it was the "dust
>bowl empiricists" that rode behaviorism to fame and fortune that did it.

On the chance that it's worth arguing about the intellectual
history of psychology on AIList:

The behaviorists didn't just sidetrack introspection; they sidetracked
mentalism---engine, car, and caboose, so to speak. Introspection was
already demoted from the position it had had as infallible source of
psychological truth by James (he called his Principles of Psychology
"little more than a collection of illustrations of the difficulty of
discovering by direct introspection exactly what our feelings and
their relations are"). But James believed there are not any unconscious
mental states; Freud should get some credit for further demoting
introspection by arguing so influentially that there are.

Mentalism is back on track now in the post-behaviorist era, but a
principled skepticism about introspection remains.
A fascinating contemporary survey on the topic is Nisbett & Wilson,
"Telling more than we can know: verbal reports on mental processes",
Psych. Rev. May 1977. From the abstract: "Evidence is reviewed
which suggests that there may be little or no direct introspective
access to higher order cognitive processes."


--Paul Kube
kube@cogsci.berkeley.edu, ...!ucbvax!kube

------------------------------

Date: Thu, 22 Jan 87 10:38:21 est
From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
Subject: Objective vs. Subjective Inquiry

"CUGINI, JOHN" <cugini@icst-ecf> wrote on mod.ai:

> ...so the toothache is "real" but "subjective"...
> But...if both [subjective and objective phenomena] are real,
> then we know why we need consciousness as a concept --
> because without it we cannot explain/talk about the former class of
> events - even if the latter class is entirely explicable in its own
> terms. Ie, why should we demand of consciousness that it have
> explanatory power for objective events? It's like demanding that
> magnetism be acoustically detectible before we accept it as a valid
> concept.

Fortunately, there is a simple answer to this: Explanation itself is
(or should be) purely an objective matter. Magnetism, and all other
tractable physical phenomena are (in principle) objectively
explainable, so the above analogy simply does not work. Nagel has shown
that all of the other reductions in physics have always been
objective-to-objective. The mind/body problem is an exception
precisely because it resists subjective-to-objective reduction. Now if
there's something (subjectively) real and irreducible left over that
is left out of an objective account, we have to learn to live with
that explanatory incompleteness, rather than wishing it away by
hopeless mixing of categories and hopeful pumping of analogies, images
and interpretations. (In fact, I think that if all the objective
manifestations of consciousness -- performance capacity and neural
substrate -- are indeed "entirely explicable [in their own objective]
terms,"
as I believe and Cugini seems to concede, then why not get on
to explaining them thus, rather than indulging in subjective
overinterpretation and wishful thinking, which can only obscure or
even retard objective progress?)

[Please do not pounce on the parenthetic "subjectively" that preceded
"real," above. The problem of the objective status of consciousness
IS the mind/body problem, and to declare that subjectively-real =
objectively-real is just to state an empty obiter dictum. It happens
to be a correlative fact that all detectable physical phenomena --
i.e., all objective observables -- have subjective manifestations.
That's what we MEAN by observability, intersubjectivity, etc. But the
fact that the objective always piggy-backs on the subjective still
doesn't settle the objective status of the subjective itself. I'll go
even further. I'm not a solipsist. I'm as confident as I am of any
objective inference I have made that other people really exist and have
experiences like my own. But even THAT sense of the "reality" of the
subjective does not help when it comes to trying to give an objective
account of it. As to subjective accounts -- well, I don't go in much
for hermeneutics...]

> I can well understand how those who deny the reality of experiences
> (eg, toothaches) would then insist on the superfluousness of the
> concept of consciousness - but Harnad clearly is not one such.
> So...we need consciousness, not to explain public, objective events,
> such as neural activity, but to explain, or at least discuss, private
> subjective events. If it be objected that the latter are outside the
> proper realm of science, so be it, call it schmience or philosophy or
> whatever you like. - but surely anything that is REAL, even if
> subjective, can be the proper object for some sort of rational
> study, no?

Some sort, no doubt. But not an objective sort, and that's the point.
Empirical psychology, neuroscience and artificial intelligence are
all, I presume, branches of objective inquiry. I know that this is
also the heyday of hermeneutics, but although I share with a vengeance
the belief that philosophy can make a substantive contribution to the
cognitive sciences today, I don't believe that that contribution will
be hermeneutic. Rather, I think it will be logical, methodological
and foundational, pointing out hidden complexities, incoherencies and
false-starts. Let's leave the subjective discussion of private events
to lit-crit, where it belongs.

Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: 23 Jan 87 02:29:51 GMT
From: mcvax!cwi.nl!lambert@seismo.CSS.GOV (Lambert Meertens)
Subject: Submission for mod-ai

In article <12272599850.11.LAWS@SRI-STRIPE.ARPA> Laws@SRI-STRIPE.ARPA
(Ken Laws) writes:

>> From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>:
>>
>> Worse than that, C-2 already presupposes C-1. You can't
>> have awareness-of-awareness without having awareness [...].
>
> A quibble: It would be possible [...] that my entire conscious
> perception of a current toothache is an "illusory pain" [...].

I agree.

> These views do not solve the problem, of course; the C-2 consciousness
> must be explained even if the C-1 experience was an illusion. My conscious
> memory of the event is more than just an uninterpreted memory of a memory
> of a memory ...

Here I am not so sure. To start with, the only evidence we have of
organisms having C-1 is if they are on the C-2 level, that is, if they
*claim* they experience something. Even granting that they are not
lying in the sense of a conscious :-) misrepresentation, why should we (in
our capacity as scientific inquirers) take them at their word? After
all, more than a few people truly believe they have the most incredible
psychic powers.

Now how can we know that the "awareness-of-awareness" is a conscious thing?
There seems to be a hidden assumption that if someone utters a statement
(like "It is getting late"), then the same organisms is consciously aware
of the fact expressed in the statement. Normally, I would grant you that,
because that is the normal everyday meaning of "conscious" and "aware", but
not in the current context, in which these words are isolated from their
original function of simply providing an expedient way to express certain things.

[You will find that people in general have no problem in saying that a fly
is aware of something, or experiences pain, even though for all we know
there is no higher (coordinating) neural centre in this organism that would
provide a physiological substrate. Many people even have no problem in
ascribing consciousness to trees. I claim that if people (but not young
children) do have qualms in saying that an automaton experiences something,
it is because they have been *taught* that consciousness is limited to
animate, organic objects.]

So the mere speech act "It is getting late" does not by itself imply a
conscious awareness of it getting late. Otherwise, we are forced to
ascribe consciousness of the occurrence of a syntax error to a compiler
mumbling "*** syntax error". Likewise, not only does someone saying "I
have a toothache"
not imply that the speaker is experiencing a toothache,
it also does not imply that the speaker is consciously aware of the
(possibly illusionary) fact of experiencing one. The only evidence of that
would be a C-3 act, someone saying: "I am aware of the fact that I am aware
of experiencing a toothache."
But again, why should we believe them? (And
so on, ad nauseam.)

This is getting so complicated mainly because of the inadequacy of words.
Allow me to try again. You, reader, are having a toothache. You are
really having one. I can tell, because you are visibly in pain, and,
moreover, I am your dentist, and you are in my chair with your mouth open
into which I am prodding and probing, and boy, you should have a toothache
if anyone ever had one. At this point, I cannot know for sure if you are
consciously experiencing that pain. Maybe neural pathways connect your
exposed pulp with the centre innervating your grimacing and squirming
muscles while bypassing the centre of consciousness. I retract my
instruments from your mouth, giving you a chance to say "That really hurt,
doctor. I'll pay all my bills on time from now on if only you won't do
that again." Firmly brushing aside the empathy that threatens to
compromise my scientific attitude, I realize that this still does not mean
that you consciously experienced that pain just a minute ago. All I know
is that you remember it (for if you did not, you wouldn't have said that).
So some symbolic representation, "@#$%@*!" say, may have been stored in
your memory--also bypassing your centre of consciousness--which is now
retrieved and interpreted (perhaps illusorily) as "conscious experience of
pain--just now". This act of interpretation need not mean that you experience
the pain now, after the fact. So it is entirely possible that you did not
consciously experience the pain at any time. Now were you conscious then,
while making that silly promise, of at least the memory of the--by itself
possibly unconscious--suffering of pain? If you are still with me, then
you will probably agree that that is not necessarily the case. Just as
P = <neural event of pain>, even though leaving a trace in memory, need not
imply consciousness of P, so R(P) = <neural event of remembering P> need
not imply consciousness of R(P) itself. However, R(P) can again leave a
trace in memory--what with your Silly Promise and dentists' bills being as
they are, you are bound to trigger R(SP) and therefore, by association,
R(R(P)), many times in the future.
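
To make this concrete, here is a minimal Python sketch of the
storage-and-retrieval story (an illustration only, under the assumptions
above; the names Memory, store and retrieve are invented for this
example, not anything proposed in the original message). Nothing in the
mechanism answers to consciousness, yet it suffices to produce the report:

    class Memory:
        def __init__(self):
            self.traces = []

        def store(self, event):
            # A trace is laid down; no "experience" is involved here.
            self.traces.append(event)

        def retrieve(self, label):
            # Retrieval is just a lookup over stored traces.
            return [e for e in self.traces if label in e]

    memory = Memory()
    memory.store("@#$%@*! pain, molar, t=0")   # P: trace of the pain event
    report = memory.retrieve("pain")           # R(P): remembering P
    print("That really hurt, doctor.")         # speech act driven by R(P) alone
    memory.store("retrieved: " + report[0])    # trace of R(P), enabling R(R(P))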

If we had two unconnected memory stores, with a switch connecting now to
one store, now to the other, we would become two personalities in one body
with two "consciousnesses". If we could somehow censor either the storing
or the retrieval of pain events, we would truly, honestly believe that we
are incapable of consciously experiencing pain--notwithstanding the fact
that we would probably have the same *immediate* reaction to pain as other
people--and we wouldn't make such promises to our dentists anymore.

Wrapping it all up, I still maintain that "conscious experience" is a term
that is ascribed *in retrospect* to any neural event NE that has been
stored in memory, at the time R(NE) occurs. Stronger still: R(NE) is the
*only*--and, as I hope I have shown, insufficient--evidence of "consciousness"
about NE in a more metaphysical or whatever sense. For all we know and can
know, all consciousness in the sense of being conscious of something *while
it happens* is an "illusion", whether C-1, C-2 or C-17.

--

Lambert Meertens, CWI, Amsterdam; lambert@mcvax.UUCP

------------------------------

Date: Thu, 22 Jan 87 12:30:35 est
From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
Subject: Minsky on Mind(s)

MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU wrote in mod.ai (AIList Digest V5 #11):

> unless a person IS in the grip of some "theoretical position" - that
> is, some system of ideas, however inconsistent, they can't "know"
> what anything "means"

I agree, of course. I thought it was obvious that I was referring to a
theoretical position on the mind/body problem, not on the conventions
of language and folk physics that are needed in order to discourse
intelligibly at all. There is of course no atheoretical talk. As to
atheoretical "knowledge," that's another matter. I don't think a dog
shares any of my theories, but we both know when we feel a toothache
(though he can't identify or describe it, nor does he have a theory of
nerve impulses, etc.). But we both share the same (atheoretical)
experience, and that's C-1. Now it's a THEORY that comes in and says:
"You can't have that C-1 without C-2." I happen to have a rival theory
on that. But never mind, let's just talk about the atheoretical
experience I, my rival and the dog share...

> My point was that
> you can't think about, talk about, or remember anything that leaves no
> temporary trace in some part of your mind. In other words, I agree
> that you can't have C-2 without C-1 - but you can't have, think, say,
> or remember that you have C-1 without C-2! So, assuming that I know
> EXACTLY what he means, I understand PERFECTLY that that meaning is
> vacuous.

Fine. But until you've accounted for the C-1, your interpretation of
your processes as C-2 (rather than P-2, where P is just an unconscious
physical process that does the very same thing, physically and objectively)
has not been supported. It's hanging by a skyhook, and the label "C"
of ANY order is unwarranted.

I'll try another pass at it: I'll attempt to show how ducking or denying
the primacy of the C-1 problem gets one into infinite regress or
question-begging: There's something it's like to have the
experience of feeling a toothache. The experience may be an illusion.
You may have no tooth-injury, you may even have no tooth. You may be
feeling referred pain from your elbow. You may be hysterical,
delirious, hallucinating. You may be having a flashback to a year ago,
a minute ago, 30 milliseconds ago, when the physical and neural causes
actually occurred. But if at T-1 in real time you are feeling that
pain (let's make T-1 a smeared interval of Delta-T-1, which satisfies
both our introspective phenomenology AND the theory that there can be no
punctate, absolutely instantaneous experience), where does C-2 come into it?

Recall that C-2 is an experience that takes C-1 as its object, in the
same way C-1 takes its own phenomenal contents as object. To be
feeling-a-tooth-ache (C-1) is to have a certain direct experience; we all
know what that's like. To introspect on, reflect on, remember, think about
or describe feeling-a-toothache (all instances of C-2) is to have
ANOTHER direct experience -- say, remembering-feeling-a-toothache, or
contemplating-feeling-a-toothache. The subtle point is that this
2nd-order experience always has TWO aspects: (1) It takes a 1st order
experience (real or imagined) as object, and is for that reason
2nd-order, and (2) it is ITSELF an experience, which is of course
1st-order (call that C-1'). The intuition is that there is something
it is like to be aware of feeling pain (C-1), and there's ALSO something
it's like to be aware of being-aware-of-feeling-pain. Because a C-1 is
the object of the latter experience, the experience is 2nd order (C-2); but
because it's still an EXPERIENCE -- i.e., there's something it's LIKE to
feel that way -- every C-2 is always also a C-1' (which can in turn become
the object of a C-3, which is then also a C-1'', etc.).
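
The recursive structure just described can be stated compactly. The
following Python sketch is an illustration only (the class name
Experience is invented here, not Harnad's): every experience is itself
1st-order, and its rank as a C-n comes solely from what it takes as object:

    class Experience:
        """Any experience is an experience in itself (a C-1'),
        whatever its object may be."""
        def __init__(self, content):
            self.content = content    # phenomenal content or another Experience

        def order(self):
            # An experience whose object is another experience is one order higher.
            if isinstance(self.content, Experience):
                return 1 + self.content.order()
            return 1

    c1 = Experience("feeling-a-toothache")   # C-1
    c2 = Experience(c1)                      # C-2, yet also a C-1' in itself
    c3 = Experience(c2)                      # C-3, yet also a C-1''
    print(c2.order(), c3.order())            # prints: 2 3

The data structure captures the regress, but, as argued below, nothing in
it explains why any node should be experienced at all.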

I'm no phenomenologist, nor an advocate of doing phenomenology as we
just did above. I'm also painfully aware that the foregoing can hardly be
described as "atheoretical." It would seem that only direct experience
at the C-1-level can be called atheoretical; certainly formulating a
distinction between 1st and higher-order experience is a theoretical
enterprise, although I believe that the raw phenomenology bears me
out, if anyone has the patience to introspect it through. But the
point I'm making is simple:

It's EASY to tell a story in which certain physical processes play the
role of the contents of our experience -- toothaches, memories of
toothaches, responses to toothaches, etc. All this is fine, but
hopelessly 2nd-order. What it leaves out is why there should be any
EXPERIENCE for them to be contents OF! Why can't all these processes
just be unconscious processes -- doing the same objective job as our
conscious ones, but with no qualitative experience involved? This is
the question that Marvin keeps ignoring, restating instead his
conviction that it's taken care of (by some magical property of "memory
traces,"
as far as I can make out), and that my phenomenology is naive
in suggesting that there's still a problem, and that he hasn't even
addressed it in his proposal. But if you pull out the C-1
underpinnings, then all those processes that Marvin interprets as C-2
are hanging by a sky-hook. You no longer have conscious toothaches and
conscious memories of toothaches, you merely have tooth-damage, and
causal sequelae of tooth-damage, including symbolic code, storage,
retrieval, response, etc.. But where's the EXPERIENCE? Why should I
believe any of that is CONSCIOUS? There's the C-2 interpretation, of
course, but that's all it is: an interpretation. I can interpret a
thermostat (and, with some effort, even a rock) that way. What
justifies the interpretation?

Without a viable C-1 story, there can be no justification. And my
conjecture is that there can be no viable C-1 story. So back to
methodological epiphenomenalism, and forget about C of any order.

[Admonition to the ambitious: If you want to try to tell a C-1 story,
don't get too fancy. All the relevant constraints are there if you can
just answer the following question: When the dog's tooth is injured,
and it does the various things it does to remedy this -- inflammation
reaction, release of white blood cells, avoidance of chewing on that
side, seeking soft foods, giving signs of distress to its owner, etc. etc.
-- why do the processes that give rise to all these sequelae ALSO need to
give rise to any pain (or any conscious experience at all) rather
than doing the very same tissue-healing and protective-behavioral job
completely unconsciously? Why is the dog not a Turing-indistinguishable
automaton that behaves EXACTLY AS IF it felt pain, etc, but in reality
does not? That's another variant of the mind/body problem, and it's what
you're up against when you're trying to justify interpreting physical
processes as conscious ones. Anything short of a convincing answer to
this amounts to mere hand-waving on behalf of the conscious interpretation
of your proposed processes.]


Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: Thu 22 Jan 87 21:30:13-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: WHY of Pain


>From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>:

> -- why do the processes that give rise to all these sequelae ALSO need
> to give rise to any pain (or any conscious experience at all) rather
> than doing the very same tissue-healing and protective-behavioral job
> completely unconsciously?


I know what you mean, but ... Given that the dog >>is<< conscious,
the evolutionary or teleological role of the pain stimulus seems
straightforward. It is a way for bodily tissues to get the attention
of the reasoning centers. Instead of just setting some "damaged tooth"
bit, the injured nerve grabs the brain by the lapels and says
"I'm going to make life miserable for you until you solve my problem."
Animals might have evolved to react in the same way without the
conscious pain (denying the "need to" in your "why" question), but
the current system does work adequately.
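
The contrast Laws draws maps onto polling versus interrupts. The Python
sketch below is an illustration only (all names are invented here): a
status bit can sit unread forever, while an exception forcibly suspends
normal processing until it is handled:

    # A passive status bit: ignored unless something bothers to poll it.
    damage_flags = {"tooth": False}

    def set_damage_bit():
        damage_flags["tooth"] = True     # nothing happens until someone checks

    # Pain as an interrupt: control cannot proceed until it is handled.
    class Pain(Exception):
        """Grabs the brain by the lapels."""

    def injured_nerve():
        raise Pain("miserable until you solve my problem")

    set_damage_bit()                     # silently noted, possibly never read
    try:
        injured_nerve()
    except Pain as p:
        print("attending to:", p)        # attention is compelled, not optional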

Why (or, more importantly, how) the dog is conscious in the first place,
and hence >>experiences<< the pain, is the problem you are pointing out.


Some time ago I posted an analogy between the brain and a corporation,
claiming that the natural tendency of everyone to view the CEO as the
center of corporate consciousness was evidence for emergent consciousness
in any sufficiently complex hierarchical system. I would like to
refute that argument now by pointing out that it only works if the
CEO and perhaps the other processing elements in the hierarchy are
themselves conscious. I still claim that such systems (which I can't
define ...) will appear to have centers of consciousness (and may well
pass Harnad's Total Turing Test), and that the >>system<< may even
>>be conscious<< in some way that I can't fathom, but if the CEO is
not itself conscious no amount of external consensus can make it so.

If it is true that a [minimal] system can be conscious without having
a conscious subsystem (i.e., without having a localized soul), we
must equate consciousness with some threshold level of functionality.
(This is similar to my previous argument that Searle's Chinese Room
understands Chinese even though neither the occupant nor his printed
instructions do.) I believe that consciousness is a quantitative
phenomenon, so the difference between my consciousness and that of
one of my neurons is simply one of degree. I am not willing to ascribe
consciousness to the atoms in the neuron, though, so there is a bottom
end to the scale. What fraction of a neuron (or of its functionality)
is required for consciousness is below the resolving power of my
instruments, but I suggest that memory (influenced by external conditions)
or learning is required. I will even grant a bit of consciousness
to a flip-flop :-). The consciousness only exists in situ, however: a
bit of memory is only part of an entity's consciousness if it is used
to interpret the entity's environment.

Fortunately, I don't have my heart set on creating conscious systems.
I will settle for creating intelligent ones, or even systems that are
just a little less unintelligent than the current crop.

-- Ken Laws

------------------------------

End of AIList Digest
********************
