AIList Digest             Sunday, 1 Mar 1987       Volume 5 : Issue 58 

Today's Topics:
Philosophy & AI Methodology - Consciousness

----------------------------------------------------------------------

Date: 23 Feb 87 04:14:52 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Evolution of consciousness


DAVIS%EMBL.BITNET@wiscvm.wisc.edu wrote on mod.ai:

> Sure - there is no advantage in a conscious system doing what can
> be done unconsciously. BUT, and it's a big but, if the system that
> gets to do trick X first *just happens* to be conscious, then all
> future systems evolving from that one will also be conscious.

I couldn't ask for a stronger concession to methodological epiphenomenalism.

> In fact, it may not even be an accident - when you
> consider the sort of complexity involved in building a `turing-
> indistinguishable' automaton, versus the slow, steady progress possible
> with an evolving, conscious system, it may very well be that the ONLY
> reason for the existence of conscious systems is that they are
> *easier* to build within an evolutionary, biochemical context.

Now it sounds like you're taking it back.

> Hence, we have no real reason to suppose that there is a 'why' to be
> answered.

You'll have to make up your mind. But as long as anyone proposes a
conscious interpretation of a functional "how" story, I must challenge
the interpretation by asking a functional "why?", and Occam's razor
will be cutting with me, not with my opponent. It is not the existence
of consciousness that's at issue (of course it exists) but its
functional explanation and the criteria for inferring that it is
present in cases other than one's own.
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 24 Feb 87 08:41:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: epistemology vs. functional theory of mind


> > Me: The Big Question: Is your brain more similar to mine than either
> > is to any plausible silicon-based device?
>
> SH: that's not the big question, at least not mine. Mine is "How does the
> mind work?"
To answer that, you need a functional theory of how the
> mind works, you need a way of testing whether the theory works, and
> you need a way of deciding whether a device implemented according to
> the theory has a mind.

> Cugini keeps focusing on the usefulness of "presence of `brain'"
> as evidence for the possession of a mind. But in the absence of a
> functional theory of the brain, its superficial appearance hardly
> helps in constructing and testing a functional theory of the mind.
>
> Another way of putting it is that I'm concerned with a specific
> scientific (bioengineering) problem, not an exobiological one ("Does this
> alien have a mind?"
), nor a sci-fi one ("Does this fictitious robot
> have a mind?"
), nor a clinical one ("Does this comatose patient or
> anencephalic have a mind?"
), nor even the informal, daily folk-psychological
> one ("Does this thing I'm interacting with have a mind?"). I'm only
> concerned with functional theories about how the mind works.

How about the epistemological one (philosophical words sound so, so...
*dignified*): Are we justified in believing that others have
minds/consciousness, and if so, on what rational basis?

I thought that was the issue we (you and I) were mostly talking
about. (I have the feeling you're switching the issue.) Whether
detailed brain knowledge will be terribly relevant to building a
functional theory of the mind, I don't know. As you say, it's a
question of the level of simulation. My hunch is that the chemistry
and low-level structure of the brain are tied very closely to
consciousness, simpliciter. I suspect that the ability to see red,
etc (good ole C-1) will require neurons. (I take this to be the
point of Searle's remark somewhere or other that consciousness
without a brain is as likely as lactation without mammary glands). On
the other hand, integer addition clearly is implementable without
wetware.

But even if a brain isn't necessary for consciousness, it's still good
strong evidence for it, as long as one accepts the notion that brains
form a "natural kind" (like stars, gold, electrons, light switches).
As I'm sure you know, there's a big philosophical problem with
natural kinds, struggled with by philosophers from Plato to
Goodman. My point was that it's no objection to brain-as-evidence
to drag in the natural-kinds problem, because that is not unique
to the issue of other minds. And it seems to me that's what you
are (were?) guilty of when you challenge the premise that our brains
are relevantly similar, the point being that if they are similar,
then the my-brain-causes-consciousness-therefore-so-does-yours
reasoning goes through.


John Cugini <Cugini@icst-ecf>

------------------------------

Date: 24 Feb 87 16:28:46 GMT
From: clyde!burl!codas!mtune!mtuxo!houxm!houem!marty1@rutgers.rutgers.edu (M.BRILLIANT)
Subject: Re: Evolution of consciousness

I'm sorry if it's necessary to know the technical terminology of
philosophy to participate in discussions of engineering and artifice.
I admit my ignorance and proceed to make my point anyway.

In article <552@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes
(I condense and paraphrase):

> DAVIS%EMBL.BITNET@wiscvm.wisc.edu wrote on mod.ai:
> > ... if the system that [does] X first [is] conscious, then all
> > future systems evolving from that one will also be conscious.
> I couldn't ask for a stronger concession to methodological epiphenomenalism.

In 25 words or less, what's methodological epiphenomenalism?

> > In fact ... [maybe] conscious systems ... are
> > *easier* to build within an evolutionary, biochemical context.
> Now it sounds like you're taking it back.

I think DAVIS is just suggesting an alternative hypothesis.

> > Hence, we have no real reason to suppose that there is a 'why' to be
> > answered.

Then why did DAVIS propose that "easier" is "why"?

Let me propose another "why." Not long ago I suggested that a simple
unix(tm) command like "make" could be made to know when it was acting,
and when it was merely contemplating action. It would then not only
appear to be conscious, but would thereby work more effectively.
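
[Editorial illustration, not part of the original post.] A minimal sketch of what such a self-reporting "make"-like step might look like, written here in present-day Python; the class and method names are hypothetical, and the dry-run behaviour is modelled loosely on make's -n flag:

    # Sketch only: a build step that knows whether it is acting
    # or merely contemplating action, and can say which.
    import subprocess

    class BuildStep:
        def __init__(self, command, dry_run=False):
            self.command = command
            self.dry_run = dry_run      # contemplating rather than acting

        def run(self):
            if self.dry_run:
                print("contemplating:", self.command)   # report, don't act
            else:
                print("acting:", self.command)
                subprocess.run(self.command, shell=True, check=True)

        def what_am_i_doing(self):
            return "contemplating action" if self.dry_run else "acting"

    step = BuildStep("cc -c hello.c", dry_run=True)
    step.run()                      # prints the command without executing it
    print(step.what_am_i_doing())   # -> contemplating action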

Let us go further. IBM's infamous PL/I Checkout Compiler has many
states, in each of which it can accept only a limited set of commands
and will do only a limited set of things. As user, you can ask it what
state it's in, and it can even tell you what it can do in that state,
though it doesn't know what it could do in other states. But you can
ask it what it's doing now, and it will tell you. It answers questions
as though it were very stupid, but dimly conscious.
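
[Editorial illustration, not from the original post.] A toy version of such a many-state system, again in Python with hypothetical names: it can report what state it is in, what it is doing, and what commands it will accept in that state, while knowing nothing about the other states, much as the Checkout Compiler is described above:

    # Sketch only: a self-describing state machine.
    class SelfDescribingMachine:
        # commands accepted in each state, and what the machine says it is doing
        STATES = {
            "editing":   {"accepts": ["load", "change", "compile"],
                          "doing": "editing the source text"},
            "compiling": {"accepts": ["status", "halt"],
                          "doing": "translating the program"},
            "running":   {"accepts": ["status", "halt", "step"],
                          "doing": "executing the program"},
        }

        def __init__(self):
            self.state = "editing"

        def what_state(self):
            return self.state

        def what_are_you_doing(self):
            return self.STATES[self.state]["doing"]

        def what_can_you_do(self):
            # answers only for its current state, not the others
            return self.STATES[self.state]["accepts"]

    m = SelfDescribingMachine()
    print(m.what_state())           # -> editing
    print(m.what_are_you_doing())   # -> editing the source text
    print(m.what_can_you_do())      # -> ['load', 'change', 'compile']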

Of course, the "actuality" of consciousness is private, in that the
question of whether X "is conscious" can be answered only by X. An
observer of X can only tell whether X "acts as though it were
conscious."
If the observer empathizes with X, that is, observes
him/her/it-self as the "same type of being" as X, the "appearance" of
consciousness becomes evidence of "actuality." I propose that we pay
less attention to whether we are the "same type of being" as X and more
attention to the (inter)action.

If expert systems can be written to tell you an answer, and also tell
you how they got the answer, it should not be hard to write a system
like the Checkout Compiler, but with a little more knowledge of its own
capabilities. That would make it a lot easier for an inexpert user to
interact with it.

Consider also the infamous "Eliza" as a system that is not conscious.
At first it appears to interact much as a psychotherapist would, but
you can test it by pulling its leg, and it won't know you're pulling
its leg; a therapist would notice and shift to another state. You can
also make a therapist speak to you non-professionally by a verbal
time-out signal, and then go back to professional mode. But Eliza has
only one functional state, and hence neither need nor capacity for
consciousness.

Thus, the evolutionary advantage of consciousness in primates (the
actuality as well as the appearance) is that it facilitates such social
interactions as communication and cooperation. The advantage of
building consciousness into computer programs (now I refer to the
appearance, since I can't empathize with a computer program) is the
same: to facilitate communication and cooperation.

I propose that we ignore the philosophy and get on with the
engineering. We already know how to build systems that interact as
though they were conscious. Even if a criterion could be devised to
tell whether X is "actually" conscious, not just "seemingly" conscious,
we don't need it to build functionally conscious systems.

Marty
M. B. Brilliant (201)-949-1858
AT&T-BL HO 3D-520 houem!marty1

------------------------------

Date: 25 Feb 87 14:32:04 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: Evolution of consciousness


M. B. Brilliant (marty1@houem.UUCP) of AT&T-BL HO 3D-520 asks:

> In 25 words or less, what's methodological epiphenomenalism?

Your own reply (less a few words) defines it well enough:

> I propose that we ignore [the philosophy] and get on with the
> engineering. [We already know how] to build systems that interact as
> though they were conscious. Even if a criterion could be devised to
> tell whether X is "actually" conscious, not just "seemingly" conscious,
> we don't need it to build [functionally] conscious systems.

Except that we DON'T already know how. This ought to read: "We should get
down to trying"
to build systems that can pass the Total Turing Test (TTT)
-- i.e., are completely performance-indistinguishable from conscious
creatures like ourselves. Also, there is (and can be) no other functional
criterion than the TTT, so "seemingly" conscious is as close as we will
ever get. Hence there's nothing gained (and a lot masked and even lost)
from focusing on interpreting trivial performance as conscious instead
of on strengthening it. What we should ignore is conscious interpretation:
That's a good philosophy. And I've dubbed it "methodological epiphenomenalism."

> Thus, the evolutionary advantage of consciousness in primates (the
> actuality as well as the appearance) is that it facilitates such social
> interactions as communication and cooperation. The advantage of
> building consciousness into computer programs (now I refer to the
> appearance, since I can't empathize with a computer program) is the
> same: to facilitate communication and cooperation.

This simply does not follow from the foregoing (in fact, it's at odds
with it). Not even a hint is given about the FUNCTIONAL advantage (or even
the functional role) of either actually being conscious or even of appearing
conscious. "Communication-and-cooperation" -- be it ever as "seemingly
conscious"
as you wish -- does not answer the question about what functional
role consciousness plays, it simply presupposes it. Why aren't communication
and cooperation accomplished unconsciously? What is the FUNCTIONAL
advantage of conscious communication and cooperation? How we feel about one
another and about the devices we build is beside the point (except for
the informal TTT). It concerns the phenomenological and ontological
fact of consciousness, not its functional role, which (if there were
any) would be all that was relevant to mind engineering. That's
methodological epiphenomenalism.
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************
