AIList Digest           Wednesday, 26 Nov 1986    Volume 4 : Issue 268 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 22 Nov 86 12:13:02 GMT
From: mcvax!lambert@seismo.css.gov (Lambert Meertens)
Subject: Re: Searle, Turing, Symbols, Categories

In article <229@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> I know directly that my
> performance is caused by my mind, and I infer that my
> mind is caused by my brain. I'll go even further (now that we're
> steeped in phenomenology): It is part of my EXPERIENCE of my behavior
> that it is caused by my mind. [I happen to believe (inferentially) that
> "free will" is an illusion, but I admit it's a phenomenological fact
> that free will sure doesn't FEEL like an illusion.] We do not experience our
> performance in the passive way that we experience sensory input. We
> experience it AS something we (our minds) are CAUSING. (In fact, that's
> probably the source of our intuitions about what causation IS. I'll
> return to this later.)

I hope I am not suffering from a terrible disease like incipient
schizophrenia, but for me it is not the case that I perceive/experience/
am-directly-aware-of my performance being caused by anything. It just
happens. I have some indirect evidence of a relation between the
performance I can watch happening and some sensations (such as anxiety
or happiness) that I can somehow experience directly, whereas others
have no such direct access and can only infer the presence or absence
of these sensations within me from circumstantial evidence.

How do I know I have a mind? This reminds me of the question put to a
priest (teaching religion) by one of the pupils: "Father, how do we know
that people have a soul?"
"Well," said the priest, "here I have a card in
memory of Klaas de Vries. Look, here it says: `Pray for the soul of Klaas
de Vries.' They wouldn't put that there if people had no souls, would
they?"

There is something funny about this debate: it is hardly translatable
into Dutch. The problem is that if you look up "mind" in an
English-Dutch dictionary, some eight translations are suggested, none
of which has "mind" as its primary meaning when translated back to
English, except for idiomatic reasons (as in: "So many men, so many
minds"). Instead, we find (1) memory; (2) meaning; (3) thoughts; (4)
ghost; (5) soul; (6) understanding; (7) attention; (8) desire. Of
these, I contend, "ghost" and "soul" are closest in meaning if someone
says: "I know I have a mind. But how can I know that other people have
minds?"


OK, if you substitute "consciousness" for "mind", then this does no
essential harm to the debate and things become translatable into Dutch.
What you gain is that you lose the suggestion evoked (at least to me)
by the word "mind" that it is something perhaps not quite, but almost,
tangible, something that you could lock up in a box, or cut in three,
or take a picture of with a camera using aura-sensitive film.
"Consciousness" is more like "appetite": you can have it and you can
lose it, but even though it is functionally related to bodily organs,
you normally don't think of it as something located somewhere. Does
our appetite cause our eating? ("My appetite made me eat too much.")
How can we know for sure that other people have appetites as well? I
propose to consider the question, "Can machines have an appetite?"


Now why is consciousness "real", if free will is an illusion? Or,
rather, why should the thesis that consciousness is "real" be more
compelling than the analogous thesis for free will? In either case,
the essential argument is: "Because I [the proponent of that thesis]
have direct, immediate evidence of it." Sometimes we are conscious of
certain sensations. Do these sensations disappear if we are not
conscious of them? Or do they go on at a subconscious level? That is
like the question whether a falling tree in the middle of a forest
makes a sound in the absence of creatures capable of hearing. It is a
matter of the most useful (convenient) definition. Let us agree that
the sensations continue at least if it can be shown that the person
involved keeps behaving as if the concomitant sensations continued,
even while professing in retrospect not to have been aware of them. So
people can be afraid without realizing it, say, or drive a car without
being conscious of the traffic lights (and still halt for a red light).

How can you know that you have been conscious of something you reacted
to? You stopped in front of a red light (or so others tell you) while
involved in a heated argument. You have no remembrance whatsoever of that
light being red, or of your slowing down (or of having been at that
intersection at all). Maybe your attention was so completely focussed on
the argument that the reaction to the traffic light was fully automatic.
Now someone tells you: No, it wasn't automatic. You muttered something
unfriendly about that other car driver who made as if he was going to drive
on and then suddenly braked. And now, zzzap!, the whole episode pops up in
your mind. You remember that car, the intersection, the traffic light, its
jumping to red, the slight annoyance at not making it, and the anger about
that *@#$%!!! other driver whose car you almost crashed into.

Maybe everything is conscious. Maybe stones are conscious of lying on the
ground, being kicked against, being picked up. Their problem is, they can
hardly tell us. The other problem is, they have no memory (lacking an
appropriate substrate for storing a trace of these experiences). They are
like us with that traffic light, if there hadn't been that other car with
that idiot driver. Even if we experience something consciously, if we
lose all remembrance of it, there is no way in which we can tell for sure
that there was a conscious experience. Maybe we can infer consciousness by
an indirect argument, but that doesn't count. Indirect evidence can be
pretty strong, but it can never give certainty. Barring false memories, we
can only be sure if we remember the experience itself. Now maybe
everything we experience is stored in memory. It may be that we cannot
recall it like that, but using special techniques (hypnosis, electro-
stimulation, mnemonic drugs) it could be retrieved. On the other hand, it
is more plausible that not quite everything is stored in memory, since
that would require a tremendous channel width for storing things, which
is not really functional; or, at least, there are presumably better
trade-offs in terms of survival capability given a limited brain
capacity.

If some things we experience do not leave a recallable trace, then why
should we say that they were experienced consciously? Or, why shouldn't we
maintain the position that stones are conscious as well? That position is
maintainable, but it is not very useful, in the sense that the word
"consciousness" loses its meaning; it becomes coextensive with
"existence". We "lose" our bicameral minds, Freud, and all that jazz.
More useful, then, to use "consciousness" only for experiences that are,
somehow, recallable. It makes sense that not all, not most, but some of
the things that go on in our heads are stored away: for use in
determining patterns, for better evaluation of the expected outcome of
alternatives, for collecting material that is useful for the construction
or refinement of the model we have of the outside world, and so on.

Being the kind of animal homo is, it also makes sense to store material
that is useful for the refinement of the model we have of our inside world,
that which we think of as "ourselves". After all, we consult that model to
pre-evaluate the outcome of certain alternatives. If we don't "know"
ourselves, we are bound to do things (take on a responsibility, marry
someone, etc., things with a long-term commitment) that will lead us
into suffering. (We do these things anyway, and one of the causes is
that we don't know ourselves that well.) So a lot of the things that
go on "in the front of our minds" are stored away, and are recallable.
And it is only because of this recallability that we can say that these
things were "in the front of our minds", or "in our minds" at all.

Imagine now a machine programmed to "eat" and also to keep up some dinner
conversation. It has some built-in rules about etiquette, such as that
it is impolite to eat too much, but also a parameter varying in time to model
"hunger", and a rule IF hunger THEN eat. It just happens that the machine
is very, very hungry. There is a conflict here, but fortunately our
machine is equipped with a conflict-resolution module (CRM) that uses fuzzy
logic to get an outcome for conflicting rules. The outcome here is that
the machine eats more than is polite. The dinner-conversation module (DCM)
has no direct interface with the CRM, but it is supplied with the resultant
behaviour as part of its input data and so it concludes (using the rule
base) that it is not behaving too politely. Speaking anthropomorphically,
we would say that the machine is feeling uneasy about it. Actually, a flag
"uneasiness" is raised, and the DCM is programmed to do something about it.
Using the rule base, the DCM finds a rule that tells it that uneasiness
about being impolite can be reduced by apologizing for it. The apology
submodule (ASM) is invoked, which discovers that a casual apology will do
in this case, one form of which is just to state an appropriate cause for
the inappropriate behaviour. The rule base tells the ASM that PROBABLE
CAUSE OF eat IS appetite (besides tapeworms, but these are judged less
appropriate under the circumstances), so "<<SELF, having, appetite>;
<goodness, 0.6785>>" is passed back to the DCM, which, after invoking
appropriate syntactic transformations, utters the unforgettable words:
"Boy, do I have an appetite today."

How different are we from that machine? If we keep wolfing down food at a
dinner, knowing that we are misbehaving (or just substitute any behaviour
that you are prone to and that you realize is just not quite right--come
on, there must be something), is the choice made the result of a conscious
process? I think it is not. I have no reason to think it is. Even if we
ponder a question consciously ("Whether 'tis nobler in the mind to
suffer ..."), I think the outcome is not the result of the conscious
process, but,
rather, that the consciousness is a side-effect of the conflict-resolution
process going on. I think the same can be said about all "conscious"
processes. The process is there, anyway; it could (in principle) take
place without leaving a trace in memory, but for functional reasons it does
leave such a trace. And the word we use for these cognitive processes that
we can recall as having taken place is "conscious".

We can, as it were, instantly focus our attention on things that we are not
conscious of most of the time (the sensation of sitting on a chair, the
colour of the sky). This means merely that we can influence which part of
the processes going on all the time get the preferential treatment of being
stored away for future reference. The ability to do so is clearly
functional, notwithstanding the fact that we can make a non-functional use
of it. This is no different from the fact that it is functional that I
can raise my arm by "willing" it to rise, although I can use that
ability to raise it gratuitously. If the free will here is an illusion
(which I
think is primarily a matter of how you choose to define something as
elusive as "free will"), then so is the free will to direct your attention
now to this, then to that. Rather than saying that free will is an
"illusion", we might say that it is something that features in the model
people have about "themselves". Similarly, I think it is better to say
that consciousness is not so much an illusion, but rather something to be
found in that model. A relatively recent acquisition of that model is
known as the "subconscious". Quite recent additions are "programs",
"sub-programs", "wrong wiring", etc.

A sufficiently "intelligent" machine, able to pass not only the dinner-
conversation test but also a sophisticated Turing test, must have a model
of itself. Using that model, and observing its own behaviour (including
"internal" behaviour!), it will be led to conclude not only that it has an
appetite, but also volition and awareness, and it will probably attribute
some of its darker sides (about which it comes to conclude that it feels
guilt, from which it deduces that it has a conscience) to lack of affection
in childhood or "wrong wiring". Is it mistaken then? Is the machine taken
in by an illusion?
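
As a minimal sketch of such a self-model (extending the hypothetical
code above; all names are again illustrative assumptions):

    # Hypothetical extension: a self-model, updated by observing the
    # machine's own behaviour, external and internal alike.
    class SelfModel:
        def __init__(self):
            self.ascriptions = {}  # what the machine believes about "itself"

        def observe(self, behaviour, internal=False):
            # The machine models itself just as it would model another
            # agent: from observed behaviour to ascribed attribute.
            if behaviour == "eat a lot":
                self.ascriptions["appetite"] = 0.9
            if internal:  # watching its own deliberation counts too
                self.ascriptions["volition"] = 0.8
                self.ascriptions["awareness"] = 0.8

        def explain(self, dark_side):
            self.ascriptions["guilt"] = 0.7
            self.ascriptions["conscience"] = 0.7  # deduced from the guilt
            return ("My %s is probably due to lack of affection in "
                    "childhood, or wrong wiring." % dark_side)

    model = SelfModel()
    model.observe("eat a lot", internal=True)
    print(model.explain("gluttony"))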

I propose to consider the question, "Can machines have illusions?"

--

Lambert Meertens, CWI, Amsterdam; lambert@mcvax.UUCP

------------------------------

Date: 21 Nov 86 19:08:02 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories: Reply to Cugini (2)

[Part I. See the next digest for the conclusion. -- KIL]


On mod.ai <8611200632.AA19202@ucbvax.Berkeley.EDU> "CUGINI, JOHN"
<cugini@nbs-vms.ARPA> wrote:

> I know I have a mind. In order to determine if X
[i.e., anyone else but myself]
> has a mind I've got to look for analogous
> external things about X which I know are causally connected with mind
> in *my own* case. I naively know (and *how* do I know this??) that large
> parts of my performance are an effect of my mind. I scientifically
> know that my mind depends on my brain. I can know this latter
> correlation even *without* performance correlates, eg, when the dentist
> puts me under, I can directly experience my own loss of mind which
> results from loss of whatever brain activity. (I hope it goes
> without saying that all this knowledge is just regular old
> reliable knowledge, but not necessarily certain - ie I am not
> trying to respond to radical skepticism about our everyday and
> scientific knowledge, the invocation of deceptive dentists, etc.)

These questions and reflections are astute ones, and very relevant to
the issues under discussion. It is a matter of some ancillary interest
that the people who seem to be keeping their heads more successfully
in the debates about artificial intelligence and (shall we call it)
"artificial consciousness" are the more sceptical ones, as you reveal
yourself to be at the end of this module. The zealous advocates, on
the other hand, seem to be more prone to flights of
over-interpretative fancy, leaving critical judgment by the wayside.
(This is not to say that some of the more dogged critics haven't waxed
irrational in their turn too.)

Now on to the substance of your criticism. I think the crucial points
will turn on the difference between what you call "naively know" and
"scientifically know." It will also involve (like it or not) the issue
of radical scepticism, uncertainty, and the intersubjectivity and validity of
inferences and correlations. Now, I am neither an expert in, nor an advocate
of, phenomenological introspection, but if you will indulge me and do
a little of it here, I think you will notice that there is something very
different about "naive knowing" as compared to "scientific knowing."

Scientific knowing is indirect and inferential. It is based on
inference to the best explanation, the weight of the evidence, probability,
Popperian (testability, falsifiability) considerations, etc. It is the
paradigm for all empirical inquiry, and it is open to a kind of
radical scepticism (scepticism about induction) that we all reasonably
agree not to worry about, except insofar as noting that scientific
"knowledge" is not certain, but only highly likely on the evidence,
and is always in principle open to inductive "risk" or falsification
by future evidence. This is normal science, and if that were all there
was to the special case of the mind/body problem (or, more perspicuously,
the other-minds problem) then a lot of the matters we are discussing
here could be settled much more easily.

What you call "naive knowing," on the other hand (and about which you
ask "*how* do I know this?") is the special preserve of 1st-hand,
1st-person subjective experience. It is "privileged" (no one has
access to it but me), direct (I do not INFER from evidence that I am
in pain, I know it directly), and it has been described as
"incorrigible" (can I be wrong that I am feeling pain?). The
inferences we make (about the outside world, about inductive
regularities, about other minds) are open to radical scepticism, but
the phenomenological content of 1st-hand experience is different. This
makes "naive knowing" radically different from "scientific knowing."

(Let me add a quick parenthetical remark, but not pursue it unless
someone brings it up: Even our inferential knowledge depends on our
capacity for phenomenological experience. Put another way: we must
have direct experience in order to make indirect inferences, otherwise
the inferences would have no content, whether right or wrong. I
conjecture that this is significantly connected with what I've called
the "grounding" problem that lies at the root of this discussion. It
is also related to Locke's (inchoate) distinction between primary and
secondary qualities, turning his distinction on its head.)

Now let's go on. You say that I "naively know" that my performance
is caused by my mind and I "scientifically know" that my mind is caused
by my brain. (Let's not quibble about "cause"; the other words, such
as "determined by," "a function of," "supervenient on," or Searle's
notorious "caused-by-and-realized-in" are just vague ways of trying to
finesse a problematic and unique relationship otherwise known as the
mind/body problem. Let's just bite the bullet with "cause" and see
where that gets us.) Let me translate that: I know directly that my
performance is caused by my mind, and I infer that my
mind is caused by my brain. I'll go even further (now that we're
steeped in phenomenology): It is part of my EXPERIENCE of my behavior
that it is caused by my mind. [I happen to believe (inferentially) that
"free will" is an illusion, but I admit it's a phenomenological fact
that free will sure doesn't FEEL like an illusion.] We do not experience our
performance in the passive way that we experience sensory input. We
experience it AS something we (our minds) are CAUSING. (In fact, that's
probably the source of our intuitions about what causation IS. I'll
return to this later.)

So there is a very big difference between my direct knowledge that my
mind causes my behavior and my inference (say, in the dentist's chair)
that my brain causes my mind. [Even my rational inference (at the
metalevel) that my mind doesn't really cause my behavior, that that's
just an illusion, leaves the incorrigible phenomenological fact that I
know directly that that's not the way it FEELS.] So, to put it briefly,
what I've called the "informal component" of the Total Turing Test --
does the candidate act as if it had a mind (i.e., roughly as I would)? --
appeals to precisely those intuitions, and not the inferential kind, about
brains, etc. Note, however, that I'm not claiming we have direct
knowledge of other minds. That's just an inference. But it's not the
same kind of inference as the inference that there are, say, quarks, or
cosmic strings. We are appealing, in the informal TTT, to our
intuitions about subjectivity, not to ordinary, objective scientific
evidence (such as brain-correlates).

As a consequence (and again I invite you to do some introspection), the
intuitive force of the direct knowledge that I have (or am) a mind, and
that that causes my behavior, is of an entirely different order from my
empirical inference that I have a brain and that that causes my mind.
Consider, for example, that there are plenty of people who doubt that
their brains are the true causes of their minds, but very few (like
me) who venture to doubt that their minds cause their behavior; and I
confess that I am not very successful in convincing myself, because my
direct experience keeps contradicting my inference, incorrigibly.

In summary: There is a vast difference between knowing causes directly and
inferring them; subjective phenomena are unique and radically different from
other phenomena in that they confer this direct certainty; and
inferences about other minds (i.e., about subjective phenomena in
others) are parasitic on these direct experiences of causation, rather
than on ordinary causal inference, which carries little or no
intuitive force in the case of mental phenomena, in ourselves or
others. And rightly not, because mind is a private, direct, subjective
matter, not something that can be ascertained -- even in the normal
inductive sense -- by public, indirect, objective correlations.

[To be continued ...]

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************
