AIList Digest           Wednesday, 26 Nov 1986    Volume 4 : Issue 267 

Today's Topics:
Queries - Lisp or Smalltalk for Amiga & XLISP 1.8,
Philosophy - Searle, Turing, Nagel

----------------------------------------------------------------------

Date: 24 Nov 86 11:21:33 PST (Monday)
From: Tom.EdServices@Xerox.COM
Subject: Lisp, Smalltalk for Amiga


Does anyone know of Smalltalk or any Lisps (besides Xlisp and Cambridge
Lisp) for the Commodore-Amiga? What I really want is a Common Lisp.

Thanks for any help.

------------------------------

Date: 24 Nov 86 17:42:57 GMT
From: mcvax!ukc!einode!tcdcs!omahony@seismo.css.gov (O'Mahony Donal)
Subject: Looking for source of XLISP 1.8

I am looking for the source of Dave Betz's XLISP version 1.8. This is
a version of LISP with object-oriented extensions. I understand that
it is available on the BIX bulletin board, but it is difficult to
gain access from here. I would be grateful if somebody would post a copy.

Donal O'Mahony,
Trinity College,
Dublin,
Ireland

------------------------------

Date: 22 Nov 86 21:46:13 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Nagel


On mod.ai, rjf@ukc.UUCP <8611071431.AA18436@mcvax.uucp>
Rob Faichney (U of Kent at Canterbury, Canterbury, UK) made
nonspecific reference to prior discussions of intelligence,
consciousness and Nagel. I'm not altogether certain that his
contribution was intended as a followup to the discussion that has
been going on lately under the heading "Searle, Turing, Categories,
Symbols," but since it concerns the issues of that discussion, I am
responding on the assumption that it was. R. Faichney writes:

> [T. Nagel's] paper [See Mortal Questions, Cambridge University Press
> 1979, and The View From Nowhere, Oxford University Press 1986]
> is not ... strictly relevant to a discussion of machine
> intelligence, because what Nagel is concerned with is not intelligence,
> but consciousness. That these are not the same, may be realised on a
> little contemplation. One may be most intensely conscious while doing
> little or no cogitation. To be intelligent - or, rather, to use
> intelligence - it seems necessary to be conscious, but the converse
> does not hold - that to be conscious it is necessary to be intelligent.
> I would suggest that the former relationship is not a necessary one
> either - it just so happens that we are both conscious and (usually)
> intelligent.

It would seem that if you believe that "to use intelligence...it seems
necessary to be conscious" then that amounts to agreeing that Nagel's
paper on consciousness is "relevant to a discussion of machine
intelligence." It is indisputable that intelligence admits of degrees,
both as a stable trait and as a fluctuating state. What is at issue in
discussions of the turing test is not the proposition that consciousness
is the same as intelligence. Rather, it is whether a candidate has
intelligence at all. It seems that consciousness in man is a sufficient
condition for being intelligent (i.e., for exhibiting performance that is
validly described as "intelligent" in the same way we would apply that
term to our own performance). Whether consciousness is a necessary
condition for intelligence is probably undecidable, and goes to the
heart of the mind/body problem and its attendant uncertainties.

The converse proposition -- that intelligence is a necessary condition for
consciousness -- is synonymous with the proposition that consciousness is
a sufficient condition for intelligence, and this is indeed being
claimed (e.g., by me). The argument runs like this: The issue in
turing-testing is sorting out intelligent performance from its unintelligent
look-alikes. As a completely representative example, consider my asking
you how much 2 + 2 is, and your replying "4" -- as compared to my writing
a computer program whose only function is to put out the symbol "4" whenever
it encounters the string of symbols "How much is 2 + 2?" (this is basically
Searle's point too). There you have it all in microcosm. If the word
"intelligence" has any meaning at all, over and above displaying ANY
arbitrary performance at all (including a rock sliding down a hill, or,
for that matter, a rock NOT sliding down a hill), then we need a principled
way of distinguishing these two cases. That's what the Total Turing
Test I've proposed is meant to do; it amounts to equating
intelligence with total performance capacities indistinguishable from
our own. This also coincides with our only basis for inferring that
anyone else but ourselves has a mind (i.e., is conscious).
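
To make the contrast concrete, here is a minimal sketch of such a
look-alike in Common Lisp (purely illustrative, not from the original
posting; the name CANNED-ANSWER is hypothetical). Its entire
"competence" is one hard-wired string match:

    ;; Nothing is computed and no other input is recognized;
    ;; the whole program is a single canned comparison.
    (defun canned-answer (question)
      (if (string= question "How much is 2 + 2?")
          "4"
          nil))

(canned-answer "How much is 2 + 2?") returns "4"; any other question
returns NIL, and no amount of rephrasing elicits a sensible reply.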

There is no contradiction between agreeing that intelligence admits
of degrees and that mind is all-or-none. The Total Turing Test does
not demand the performance capacity of Newton or Bach, only that of an
(undistinguished) person indistinguishable from any other person one might
know for a lifetime. Moreover, the Total Turing Test admits of
variants for other species, although this involves problems of ecological
knowledge and intuitions that humans may lack for any other species but
their own. It even admits of pathological variants of our own species
(retardation, schizophrenia, aphasia, paralysis, coma, etc. as discussed
in other iterations of this discussion, e.g., with J. Cugini) although
here too intuitions and validity probably break down.

> Animals probably are conscious without being intelligent. Machines may
> perhaps be intelligent without being conscious. If these are defined
> separately, the problem of the intelligent machine becomes relatively
> trivial (though that may seem too good to be true): an intelligent
> machine is capable of doing that which would require intelligence in
> a person, eg high level chess.

Not too good to be true: Too easy. And it would fail to capture
almost all of our relevant pretheoretic generalizations or intuitions.
Animals ARE intelligent (in addition to being conscious), although, as usual,
their intelligence admits of degrees, and can only be validly assessed
relative to their ecological or adaptive contexts (although even
relative to our own ecology, many other species display some degree of
intelligence). The machine intelligence problem -- which is the heart
of the matter -- cannot be settled so quickly and easily. Moreover,
the empirical question of what intelligence is cannot be settled by a
definition (remember "2 + 2 = 4" and the rolling stone, above). Many
intelligent people (with minds) can't play high-level chess, but no
machine can currently do EVERYTHING that the least intelligent of
these people can do. That's the burden of the Total Turing Test.

> Nagel views subjectivity as irreducible to objectivity, indeed the
> latter derives from the former, being a corrected and generalised
> version of it. A maximally objective view of the world must admit
> the reality of subjectivity.

Nagel is one of the few thinkers today who doesn't lapse into
arbitrary hand-waving on the issue of consciousness and its
"reducibility" to something else. Nagel's point is that there is
something it's "like" to have experience, i.e., to be conscious, and
that it's only open to the 1st person point of view. It's hence radically
unlike all other "objective" or "intersubjective" phenomena in science
(e.g., meter-readings), which anyone else can verify as being independent of
one's "point of view" (although Nagel correctly reminds us that even
objectivity is parasitic on subjectivity). The upshot of his analysis
is that utopian scientific mind-science (cognitive science?)
-- that future complete theory that will predict and explain it all --
will be essentially "incomplete" in a way that utopian physics will not be:
Both will successfully predict and explain all their respective observable
(objective) data, but mind-science will be left with something
irreducible, hence unexplained.

For me, this is not a great problem, since I regard the mission of
devising a candidate that can pass the Total Turing Test to be an abundantly
profound and challenging one, and I regard its potential results -- a
functional explanation of the objective features of the mind -- as
sufficiently desirable and useful, so that the part it will FAIL to
explain does not bother me. That may well forever remain philosophy's
province. But I do keep reminding the overzealous that that utopian
mind science will be turing-indistinguishable from a mindless one. I
keep doing this for two reasons: First, because I believe that this
Nagelian point is correct, and worth keeping in mind. And second, because
I believe that attempts to capture or incorporate consciousness in cognitive
science more "directly" are utterly misguided, and lead in the direction of
highly subjective over-interpretations, hermeneutics and self-delusion,
instead of down the only objective scientific road to be traveled: modeling
lifesize performance capacity (i.e., the Total Turing Test). It is for
this reason that I recommend "methodological epiphenomenalism" as a
research strategy in cognitive science.

> So what, really, is consciousness? According to Nagel, a thing is
> conscious if and only if it is like something to be that thing.
> In other words, when it may be the subject (not the object!) of
> intersubjectivity. This accords with Minsky (via Col. Sicherman):
> 'consciousness is an illusion to itself but a genuine and observable
> phenomenon to an outside observer...' Consciousness is not
> self-consciousness, not consciousness of being conscious, as some
> have thought, but is that with which others can identify. This opens
> the way to self-awareness through a hall of mirrors effect - I
> identify with you identifying with me... And in the negative mode
> - I am self-conscious when I feel that someone is watching me.

The Nagel part is right, but unfortunately all the rest
(Minsky/Sicherman/hall-of-mirrors) has it all wrong, and is precisely
the type of lapse into hermeneutics and euphoria I warned against earlier.
The quote above (via the Colonel) is PRECISELY THE OPPOSITE of Nagel's
point. The only aspect of conscious experience that involves direct
observability is the subjective, 1st-person aspect (and the fact THAT I
am having a conscious experience is certainly no illusion since
Descartes at least, although what it tells me about the outside world may be,
at least since Hume). Let's call this private terrain Nagel-land.
The part others "can identify" is Turing-land: Objective, observable
performance (and its structural and functional substrates). Nagel's point
is that Nagel-land is not reducible to Turing-land.

Consciousness is the capacity to have subjective experience (or perhaps
the state of having subjective experience). The rest of the "mirrors"
business is merely metaphor and word-play; such subject matter may make for
entertaining and thought-provoking reading, as in Doug Hofstadter's books,
but it hardly amounts to an objective contribution to cognitive science.

> It may perhaps be supposed that the concept of consciousness evolved
> as part of a social adaptation - that those individuals who were more
> socially integrated, were so at least in part because they identified
> more readily, more intelligently and more imaginatively with others,
> and that this was a successful strategy for survival. To identify with
> others would thus be an innate behavioural trait.

Except that Nagel would no doubt suggest (and I would agree) that
there's no reason to believe that the asocial or minimally social
animals are not conscious too. But apart from that, there's a much
deeper reason why it is probably futile to try to make evolutionary
conjectures about the adaptive function of conscious experience:
According to standard evolutionary theory, the only traits that are
amenable to the kind of trial-and-error selection on the basis of
their consequences for the survival of the organism and propagation of its
genes are (what Nagel would call) OBJECTIVE traits: structure,
function and behavior. Standard evolutionary conjectures about the
putative adaptive function of consciousness are open to precisely the
same objection as the utopian mind-science spoken of earlier:
Evolution is blind to the difference between organisms that are
actually conscious and organisms that merely behave as if they were
conscious. Turing-indistinguishability again. On the other hand, recent
variants of standard evolutionary theory would be compatible with a
NON-selectional origin of consciousness, as an epiphenomenon.

(In pointing out the futility of adaptive scenarios for the origin of
consciousness, I am drawing on my own theoretical failures. I tried
that route in an earlier paper and only later realized that such
"Just-SO" stories suffer from even worse liabilities in speculations
about the evolutionary origins of consciousness than they do in
speculations about the evolutionary origins of behaviors; radically
worse liabilities, for the reason given above. Caveat Emptor.)

> ...When I suppose myself to be conscious, I am imagining myself
> outside myself - taking the point of view of an (hypothetical) other
> person. An individual - man or machine - which has never communicated
> through intersubjectivity might, in a sense, be conscious, but neither
> the individual nor anyone else could ever know it.

I'm afraid you've either gravely misunderstood Nagel or left him far
behind here. When I feel a pain -- when I am in the qualitative state of
knowing what it's like to be feeling a pain -- I am not "supposing"
anything at all. I'm simply feeling pain. If I were not conscious, I
wouldn't be feeling pain, I'd just be acting as if I felt pain. The
same is true of you and of animals. There's nothing social about this.
Nor is "imagination" particularly involved (except perhaps in whatever
external attributions are made to the pain, such as, "there must be something
wrong with my tooth"
). Even what is called clinically "imaginary" or
psychosomatic pain -- such as phantom-limb pain or hysterical pain --
is subjectively real, and that's the point: When I'm really feeling
pain, I'm not imagining I'm in pain; I AM in pain.

This is referred to by philosophers as the "incorrigibility" of 1st-person
experience. Although it's not without controversy, it's useful to keep in
mind, because it's what's really at issue in the problem of artificial
minds. We are asking whether candidates have THAT sort of qualitative,
conscious experience. (Again, the "mirror" images about
self-consciousness, etc., are mere icing or fine-tuning, compared to
the more basic issue of whether or not, to put it bluntly, a machine
can actually FEEL pain, or merely ACTS as if it did.)

> Subjectively, we all know that consciousness is real. Objectively,
> we have no reason to believe in it. Because of the relationship
> between subjectivity and objectivity, that position can never be
> improved on. Pragmatism demands a compromise between the two
> extremes, and that is what we already do, every day, the proportion
> of each component varying from one context to another. But the
> high-flown theoretical issue of whether a machine can ever be
> conscious allows no mere pragmatism. All we can say is that we do
> not know, and, if we follow Nagel, that we cannot know - because the
> question is meaningless.

Some crucial corrections that may set the whole matter in a rather different
light: Subjectively (and I would say objectively too), we all know that
OUR OWN consciousness is real. Objectively, we have no way of knowing
that anyone else's consciousness is real. Because of the relationship
between subjectivity and objectivity, direct knowledge of the kind we
have in our own case is impossible in any other. The pragmatic
compromise we practice every day with one another is called the Total
Turing Test: Ascertaining that others behave indistinguishably from our
paradigmatic model for a creature with consciousness: ourselves. We
were bound to come face-to-face with the "high-flown theoretical
issue" of artificial consciousness as soon as we went beyond everyday naive
pragmatic considerations and took on the burden of constructing a
predictive and explanatory causal theory of mind.

We cannot know directly whether any other organism OR device has a mind,
and, if we follow Nagel, our inferences are not meaningless, but in some
respects incomplete and undecidable.


--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************
