AIList Digest           Saturday, 14 Feb 1987      Volume 5 : Issue 43 

Today's Topics:
Philosophy - Consciousness & Methodology & Zen

----------------------------------------------------------------------

Date: 10 Feb 87 19:41:21 GMT
From: Diamond!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Harnad's epiphenomenalism

In defending his thesis of "methodological epiphenomenalism", one of Harnad's
favorite strategies is apparently a variant of G.E. Moore's "naturalistic
fallacy" argument: For any proposed definition of consciousness, he will ask:
"You say consciousness is X, but why couldn't you just as well have X WITHOUT
consciousness?" If we concede the meaningfulness of this question in all
cases, obviously this objection will be decisive.

But, I think this argument is as question-begging now as it was when Moore
used it in ethical philosophy. The definer is proposing that X is just what
consciousness IS. Accordingly, he does *not* grant that you could have X
without consciousness since, on his view, X and consciousness are one and the
same.

Put another way, the materialist is not trying to ADD anything to the
objective, causal story of X by calling it consciousness. Rather, he is
attempting to illuminate the problematic common-sense notion of consciousness
by showing how it is interpretable in naturalistic terms. Obviously the
adequacy of any proposed definition of consciousness will need to be
established; the issues to be considered will pertain to whether or not the
definition does reasonable justice to the pre-analytic application of the
term, etc. But these issues are just the usual ones for inter-theoretical
identification, and don't present any special problem in the case of mind and
brain.

Another point that Harnad has often stated is that behavior is in practice
our only criterion for the ascription of consciousness. While this is
currently true, it does not at all preclude the revision of our theory in the
direction of a more refined criterion. Compare, say, the definition of
"gold." At one time, this substance was identifiable solely on the basis of
its superficial properties such as color, hardness, and specific gravity.
With the growth of scientific knowledge, a new definition of gold in terms of
atomic structure has come to be accepted, and this criterion now supersedes
the earlier ones. If you like, you might say that atomic theory came to
reveal the "essence" of gold. I see no reason to suppose an analogous shift
couldn't arise out of the study of the mind and brain.

Harnad's "methodological epiphenomenalism" is a apparently an unavoidable
consequence of his philosophy of mind, which seems to be epiphenomenalism
simpliciter. I am surprised to find many of Harnad's interlocutors
essentially granting him this controversial premise. Whatever happened to
materialism? As I understood it, the whole field of cognitive science -- the
rehabilitation of mentalistic theorizing in psychology -- was inspired by the
philosophical insight that the functional states of computers seemed to have
just the right sorts of features we would want for psycho-physical
identification. Harnad must believe that this philosophy has failed, dooming
us to return to an uneasy and unappealing view: ontological dualism coupled
with methodological behaviorism -- the worst of both worlds.

Well, I don't think we ought to give this up so easily. I would urge that
cognitivists *not* buy into the premise of so many of Harnad's replies: the
existence of some weird parallel universe of subjective experience.
(Actually, *multiple* such universes, one per conscious subject, though of
course the existence of more than my own is always open to doubt.) We should
recognize no such private worlds. The most promising prospect we have is that
conscious experiences are either to be identified with functional states of
the brain or eliminated from our ultimate picture of the world. How this
reduction is to be carried out in detail is naturally a matter for
empirical study to reveal, but this should remain one (distant) goal of
mind/brain inquiry.

Anders Weinstein aweinste@DIAMOND.BBN.COM
BBN Labs, Cambridge MA

------------------------------

Date: 10 Feb 87 20:09:44 GMT
From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: More on Minsky on Mind(s)

In article <490@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> wcalvin@well.UUCP (William Calvin), Whole Earth 'Lectronic Link, Sausalito, CA
> writes:
> > Rehearsing movements may be the key to appreciating the brain
> > mechanisms [of consciousness and free will]
>
> But WHY do the functional mechanisms of planning have to be conscious?
> What does experience, awareness, etc., have to do with the causal
> processes involved in the fanciest plan you may care to describe?...

I have the gall to answer an answer to an answer without having read
Minsky. But then, my interest in AI is untutored and practical.
Here goes:

My notion is that a being that thinks is not necessarily conscious,
but a being that thinks about thinking, and knows when it is just
thinking and when it is actually doing, must be called conscious.

In UNIX(tm) there is a program called "make" that reads a script of
instructions, compares the ages of various files named in the
instructions, and follows the instructions by updating only the files
that need to be updated. It can be said to be acting with some sort of
rudimentary intelligence.

If you invoke the "make" command with the "-n" flag, it doesn't do any
updating, it just tells you what it would do. It is rehearsing a
potential future action. In a sense, it's thinking about what it would
do. But it doesn't have to know that it's only thinking and not
doing. It could simply have its actuators cut off from its rudimentary
intelligence, so that it thinks it's acting but really isn't.

Now suppose the "make" command could, under its own internal program,
run through its instructions with a simulated "-n" flag, varying some
conditions until the result of the "thinking without doing" satisfied
some objective, and then could remove the "-n" flag and actually do
what it had just thought about.

This "make" would appear to know when it is thinking and when it is
acting, because it decided when to think and when to act. In fact, in
its diagnostic output it could say first "I am thinking about the
following alternative,"
and then finally say, "The last run looked
good, so this time I'm really going to do it."
Not only would it
appear to be conscious, but it would be accomplishing a practical
purpose in a manner that requires it to distinguish internally between
introspection and action.

I think that version of "make" would be within the current state of the
art of programming, and I would call it conscious. So we're not far
from artificial consciousness.
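
A minimal sketch of such a self-rehearsing "make", in Python; this is
only an illustration of the idea, not the source of any real make, and
all the names here (Rule, out_of_date, run, rehearse_then_act,
objective) are invented for the example:

import os

class Rule:
    """One build rule: a target file, its source files, and an action."""
    def __init__(self, target, sources, action):
        self.target = target
        self.sources = sources
        self.action = action

def out_of_date(rule):
    # A target needs rebuilding if it is missing or older than any source.
    if not os.path.exists(rule.target):
        return True
    t = os.path.getmtime(rule.target)
    return any(os.path.getmtime(s) > t
               for s in rule.sources if os.path.exists(s))

def run(rules, dry_run):
    # One pass over the rules. With dry_run=True this behaves like
    # "make -n": it reports what it WOULD do, with no side effects.
    plan = []
    for rule in rules:
        if out_of_date(rule):
            plan.append(rule.target)
            if dry_run:
                print("thinking: would rebuild", rule.target)
            else:
                print("doing: rebuilding", rule.target)
                rule.action()
    return plan

def rehearse_then_act(candidate_rule_sets, objective):
    # Rehearse each candidate plan with the internal dry-run flag set
    # ("just thinking"); when a rehearsal satisfies the objective,
    # clear the flag and actually act ("really doing").
    for rules in candidate_rule_sets:
        if objective(run(rules, dry_run=True)):
            print("The last run looked good,",
                  "so this time I'm really going to do it.")
            return run(rules, dry_run=False)
    return None

The design point is that one and the same planning loop serves both
modes; the program itself sets and clears dry_run, and that internally
controlled flag is the thinking/doing distinction at issue.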

Marty
M. B. Brilliant (201)-949-1858
AT&T-BL HO 3D-520 houem!marty1

------------------------------

Date: 11 Feb 87 19:44:14 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: Harnad's epiphenomenalism


aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Labs, Cambridge, MA,
writes:

> For any proposed definition of consciousness, [Harnad] will
> ask: "You say consciousness is X, but why couldn't you just as
> well have X WITHOUT consciousness?"

> I think this argument is as question-begging now as it was when Moore
> used it in ethical philosophy. The definer is proposing that X is
> just what consciousness IS. Accordingly, he does *not* grant that
> you could have X without consciousness since, on his view, X and
> consciousness are one and the same.

It unfortunately has to be relentlessly reiterated that these matters
are not settled by definitions or obiter dicta. It simply won't do to
say "On my view, consciousness and X [say, memory, learning,
self-referential capacity, linguistic capacity, or what have you] are
one and the same." It is perfectly legitimate -- indeed, mandatory, if
SOMEONE is going to exercise some self-critical constraints on mentalistic
interpretation -- to ask WHY a candidate process should be interpreted as
conscious. If all the functional answers to that question -- "it's so it
can accomplish X," or "it's so it can accomplish Y this way rather than
that way" -- would be the SAME for an unconscious process, then there are
indeed strong grounds for supposing that the mentalistic interpretation is
methodologically (I might even say, to bait the functionalists more
pointedly, "functionally") superfluous. (It's not the skepticism
that's question-begging, but the mentalistic interpretation that's
supererogatory.)

I have no idea how or why Moore used a similar argument in ethics.
My own argument is purely methodological (and functional -- I am a
kind of functionalist too): I am concerned with how to get devices we
build (and hence understand) to DO what minds can do. These devices may
also turn out to BE what minds are (namely conscious), but I do not
believe that there is any objective, scientific way to ascertain that.
Nor do I think it is methodologically possible or relevant (or, a
fortiori, necessary) to do so. My pointed "why" questions are intended to
pare off the unjustified and distracting mentalistic hype and leave a clearer
image of just how far we really have or haven't gotten in answering the
"how" questions, which are the only scientifically tractable ones in the
area of theoretical bioengineering that mind-modeling occupies.

> the materialist is attempting to illuminate the problematic
> common-sense notion of consciousness by showing how it is
> interpretable in naturalistic terms. Obviously the adequacy of
> any proposed definition of consciousness will need to be established;
> the issues to be considered will pertain to whether or not the
> definition does reasonable justice to the pre-analytic application
> of the term, etc. But these issues are just the usual ones for
> inter-theoretical identification, and don't present any special
> problem in the case of mind and brain.

But it is just the question of whether these issues are indeed the
"usual" ones in the mind/brain case that is at issue. I've given lots of
logical and methodological reasons why they're not. Wishful thinking, hopeful
overinterpretation and scientistic dogma seem to be the only rejoinders
I'm hearing. (I'm a materialist too; methodological constraints on
theoretical inference and its deliverances are what's at issue here.)

> Compare, say, the definition of "gold."...
> growth of scientific knowledge...new definition of gold
> I see no reason to suppose an analogous shift
> couldn't arise out of the study of the mind and brain.

I like the way Nagel handled this old reductionist chestnut: In
a chestnut-shell, he pointed out that all of the standard
reduction/revision scenarios of science have always consisted of one
objective account of an objective phenomenon being superseded or
subsumed by another objective account of an objective phenomenon (heat
--> mean molecular motion, etc.). There's nothing in this standard
revision-scenario that applies to -- much less can handle --
redefining subjective phenomena objectively. That prominent disanalogy
is yet another of the faces of the mind/body problem (that
functionalist euphoria sometimes overlooks). As it stands, the faith
in an eventual successful "redefinition" is just that: a faith. One
wonders why it does not founder in the sea of counter-examples and
disanalogies rightly generated by Moore's (if it's really his) method
of pointed "why" challenges. But there's no accounting for faith.

> Harnad's "methodological epiphenomenalism" is apparently an
> unavoidable consequence of his philosophy of mind, which seems to
> be epiphenomenalism simpliciter.

No, I'm not an ontological epiphenomenalist (which I suppose is a kind
of dualism), just a methodological one. I don't think consciousness
can enter into scientific theory-building and theory-testing, for the
reasons I've stated. In fact, I think it retards theory-building to
try to account for consciousness or to dress theory up with conscious
interpretations. (Among other things, it masks the performance work
that still remains to be done, and lionizes possible nonstarters.)

However, I have no doubt that consciousness exists, and no serious
doubts that organisms are conscious. Moreover, I'm quite prepared to believe
the same of devices that pass the TTT, and on exactly the same grounds. These
devices may well have "captured" consciousness functionally. Yet not only
is there no way of knowing whether or not they really have; it even makes no
methodological difference to their functioning or to our theoretical
understanding of it whether or not they have really captured
consciousness. This is not an ontological issue. The mind/body problem
simply represents a methodological constraint on what can be known objectively,
i.e., scientifically. (Note that this constraint is not just the ordinary
underdetermination of scientific inferences about unobservables; it's
much worse. For, as I've pointed out several times before, although
hypothesized entities such as quarks or superstrings are no more
observable or "verifiable" than consciousness, it is a methodological
fact that the respective theories from which they come cannot account for the
objective phenomena without positing their existence, whereas any
theory of the objective phenomena of mind -- i.e., I/O performance
capacity, perhaps supplemented by structure and function -- will work
just as well with or without a mentalistic interpretation.)

> the whole field of cognitive science -- the rehabilitation of
> mentalistic theorizing in psychology -- was inspired by the
> philosophical insight that the functional states of computers
> seemed to have just the right sorts of features we would want for
> psycho-physical identification. Harnad must believe that this
> philosophy has failed, dooming us to return to an uneasy and
> unappealling view: ontological dualism coupled with methodological
> behaviorism -- the worst of both worlds.

I certainly believe that the view has failed methodologically. But I
don't think the consequence is ontological dualism (for the reasons
I've stated) and it's not clear what "methodological behaviorism" is
(or was, I'll return to this important point). Nor do I consider
cognitive science to be synonymous with mentalistic theorizing; nor
do I consider the field to be inspired by the psycho-physical
identificatory hopes aroused by the computer. If you want to know what
I think, it's this:

Behaviorism, in a reaction against the sterility of introspectionism,
rejected reflecting and theorizing on what went on in the mind,
suggesting instead that psychology's task was to study observable
behavior. But in its animus against mentalistic theory, behaviorism
managed to do in or trivialize theory altogether. Put another way, not
only was behaviorism opposed to (observing or) theorizing about what went
on in the MIND, it also opposed theorizing about what went on in the HEAD.
As a consequence, behavioristic psychology effectively became a
"science" without a theoretical or inferential branch to speak of.

Now what I think happened with the advent of cognitive science was
that, again, just as unobservable mental processes and unobservable
(shall we call them) "internal" processes had been jointly banned from
the citadel, they were, with the rise of computer modeling (and
neural modeling), jointly readmitted. The mistake, as I see it,
was to embrace indiscriminately BOTH the legitimate right (and need) to
make theoretical inferences about the unobservable functional substrates of
behavior AND the temptation to make mentalistic interpretations of
them. In my view, the first advances empirical progress (in fact is
essential for it), the second beclouds and retards it. Cognitive
science is (or should be) behaviorism-with-a-theory (or theories) at
last. If that's "methodological behaviorism," then it took the computer
era to make it so.

> Well, I don't think we ought to give this up so easily.
> I would urge that cognitivists *not* buy into the premise of
> so many of Harnad's replies: the existence of some weird parallel
> universe of subjective experience... conscious experiences are
> either to be identified with functional states of the brain or
> eliminated from our ultimate picture of the world. How this
> reduction is to be carried out in detail is naturally a matter for
> empirical study to reveal, but this should remain one (distant)
> goal of mind/brain inquiry.

Identify it with the functional states if you like. But then FORGET
about it until you've GOT the functional states that deliver the
performance (TTT) goods. When you've got those -- i.e., when all the
objective questions there are to be answered are answered -- then no
harm whatever will be done by an orgy of mentalistic interpretation of
the objective story.

No "weird parallel universe." Just the familiar subjective one we all
know at first hand. Plus the methodological constraint that the
complete scientific picture is doomed to fail to account to our satisfaction
for the existence, nature, and utility of subjectivity.

--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 11 Feb 87 17:39:14 GMT
From: mcvax!ukc!cheviot!rosa@seismo.css.gov (Rosa Michaelson - U of Dundee)
Subject: Re: More on Minsky on Mind(s) (Reply to Davis)


This is really a follow-up to Cuigini, but I do not have the moderator's
address.

Please refer to McCarthy's seminal work "The Consciousness of Thermostats".
All good AI believers empathize with thermostats rather than with other
humans. Thank goodness I do computer science... (:-)

Has Zen and the Art of Programming not gone far enough??? Please, no more
philosophy. I admit it: I do not care about consciousness/Minsky/the mind-
brain identity problem....

Is it the cursor that moves, the computer that thinks or the human
that controls?
None of these, grasshopper, only a small data error on the tape of life.

------------------------------

End of AIList Digest
********************
