AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 36 

Today's Topics:
Philosophy - Consciousness & Objective vs. Subjective Inquiry

----------------------------------------------------------------------

Date: 3 Feb 87 07:15:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: pique experience


> Harnad: {iii} The Total Turing Test (a variant of my own devising, not
> to be confused with the classical turing test -- see prior chapters
> in these discussions) is the only relevant criterion that has so far
> been proposed and defended. Similarities of appearance are obvious
> nonstarters, including the "appearance" of the nervous system to
> untutored inspection.

Just a quick pout here - last December I posted a somewhat detailed
defense of the "brain-as-criterion" position, since it seemed to be a
major point of contention. (Again, the one with the labeled events
A1, B1, etc.). No one has responded directly to this posting. I'm
prepared to argue the brain-vs-TTT case on its merits, but it would be
helpful if those who assert the TTT position would acknowledge the
existence, if not the validity, of counter-arguments.

John Cugini <Cugini@icst-ecf>

------------------------------

Date: 3 Feb 87 19:51:58 GMT
From: norman@husc4.harvard.edu (John Norman)
Subject: Re: Objective vs. Subjective Inquiry

In article <462@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:

>Let's leave the subjective discussion of private events
>to lit-crit, where it belongs.

Could you elaborate on this smug comment, in detail?


John Norman

UUCP: {seismo,ihnp4,allegra,ut-sally}!harvard!h-sc4!norman
Internet: norman%h-sc4@harvard.HARVARD.EDU
BITNET: NORMAN@HARVLAW1

------------------------------

Date: Wed, 4 Feb 87 10:44:32 pst
From: Ray Allis <ray@BOEING.COM>
Subject: Conscious Intelligence


I've been reading the discussion of consciousness with interest,
because I DO consider such philosophical inquiry to be relevant
to AI. Philosophical issues must be addressed if we are serious
about building "intelligent" systems. Lately, though, several
people, either explicitly or by expressing impatience with the
subject, have implied that they consider consciousness irrelevant
to AI. Does this reflect a belief that consciousness is irrelevant
to "natural" intelligence as well? What is the explanation for the
observation that "intelligent behavior" and consciousness seem to
occur together? Can an entity "behave intelligently" without being
conscious? Can an entity be conscious without being "intelligent"?
Is consciousness required in order to have "intelligent behavior" or
is it a side-effect? What are some examples? Counter-examples?
Even prior to some definitive answer to "The Mind-Body Problem", I
believe we should try to understand the nature of the relationship
between consciousness and "intelligent behavior", justify the
conclusion that there is no relationship, or lower our expectations
(and proclamations) considerably.

I'd like to see some forum for these discussions kept available,
whether or not it's the AILIST.

------------------------------

Date: 5 Feb 87 07:10:19 GMT
From: ptsfa!hoptoad!tim@LLL-LCC.ARPA (Tim Maroney)
Subject: Re: More on Minsky on Mind(s)

How well respected is Minsky among cognitive psychologists? I was rather
surprised to see him putting the stamp of approval on Drexler's "Engines of
Creation", since the psychology is so amazingly shallow; e.g., reducing
identity to a matter of memory, ignoring effects of the glands and digestion
on personality. Drexler had apparently read no actual psychology, only AI
literature and neuro-linguistics, and in my opinion his approach is very
anti-humanistic. (Much like that of hard sf authors.)

Is this true in general in the AI world? Is it largely incestuous, without
reference to scientific observations of psychic function? In short, does it
remain almost entirely speculative with respect to higher-order cognition?
--
Tim Maroney, Electronic Village Idiot
{ihnp4,sun,well,ptsfa,lll-crg,frog}!hoptoad!tim (uucp)
hoptoad!tim@lll-crg (arpa)

Second Coming Still Vaporware After 2,000 Years

------------------------------

Date: 3 Feb 87 16:10:05 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: Minsky on Mind(s)


mmt@dciem.UUCP (Martin Taylor) writes:

> we DO see some things as more alike than other things, because
> we see some similarities (and some differences) as more important
> than others.

The scientific version of the other-minds problem -- the one we deal
with in the lab and at the theoretical bench, as opposed to the informal
version of the other-minds problem we practice with one another every
day -- requires us to investigate what causal devices have minds, and,
in particular, what functional properties of those causal devices are
responsible for their having minds. In other words (unless you know
the answer to the theoretical problems of cognitive science and
neuroscience a priori) it is an EMPIRICAL question what the relevant
underlying functional and structural similarities are. The only
defensible prior criterion of similarity we have cannot be functional
or structural, since we don't know anything about that yet; it can
only be the frail, fallible, underdetermined one we use already in
everyday life, namely, behavioral similarity.

Every other similarity is, in this state of ignorance, arbitrary,
a mere similarity of superficial appearance. (And that INCLUDES the
similarity of the nervous system, because we do not yet have the vaguest
idea what the relevant properties there are either.) Will this state of
affairs ever change? (Will we ever find similarities other than behavioral
ones on the basis of which we can infer consciousness?) I argue that it will
not change. For any other correlate of consciousness must be VALIDATED
against the behavioral criterion. Hence the relevant functional
similarities we eventually discover will always have to be grounded in
the behavioral ones. Their predictive power will always be derivative.
And finally, since the behavioral-indistinguishability criterion is itself
abundantly fallible -- incommensurably more so than ordinary scientific
inferences and their inductive risks -- our whole objective structure
will be hanging on a skyhook, so to speak, always turing
indistinguishable from a state of affairs in which everything behaves
exactly the same way, but the similarities are all deceiving, and
consciousness is not present at all. The devices merely behave exactly
as if it were.

Throughout the response, by the way, Taylor freely interchanges the
formal scientific problem of modeling mind -- inferring its substrates,
and hence trying to judge what functional conditions are validly
inferred to be conscious (what the relevant similarities are) -- with
the informal problem of judging who else in our everyday world is
conscious. Similarities of superficial appearance may be good enough
when you're just trying to get by in the world, and you don't have the
burden of inferring causal substrate, but it won't do any good with
the hard cases you have to judge in the lab. And in the end, even
real-world judgments are grounded in behavioral similarity
(indistinguishability) rather than something else.

> it is simpler to presume that Ken and Steve experience
> consciousness than that they work according to one set of
> natural laws, and I, alone of all the world, conform to another.

Here's an example of conflating the informal and the empirical
problems. Informally, we just want to make sure we're interacting with
thinking/feeling people, not insentient robots. In the lab, we have to
find out what the "natural laws" are that generate the former and not
the latter. (Your criterion for ascribing consciousness to Ken and me,
by the way, was a turing criterion...)

> All the TTT does, unless I have it very wrong, is provide a large set of
> similarities which, taken together, force the conclusion that the tested
> entity is LIKE ME

The Total Turing Test simply requires that the performance capacity of
a candidate that I infer to have a mind be indistinguishable from the
performance capacity of a real person. That's behavioral similarity
only. When a device passes that test, we are entitled to infer that
its functional substrate is also relevantly similar to our own. But
that inference is secondary and derivative, depending for its
validation entirely on the behavioral similarities.

> If a simpler description of the world can be found, then I no
> longer should ascribe consciousness to others, whether human or not.

I can't imagine a description sufficiently simple to make solipsism
convincing. Hence even the informal other-minds problem is not settled
by "Occam's Razor." Parsimony is a constraint on empirical inference,
not on our everyday, intuitive and practical judgements, which are
often not only uneconomical, but irrational, and irresistible.

> What I do argue is that I have better grounds for not treating
> these [animals and machines] as conscious than I do for more
> human-like entities.

That may be good enough for everyday practical and perhaps ethical
judgments. (I happen to think that it's extremely wrong to treat
animals inhumanely.) I agree that our intuitions about the minds of
animals are marginally weaker than about the minds of other people,
and that these intuitions get rapidly weaker still as we go down the
phylogenetic scale. I also haven't much more doubt that present-day
artificial devices lack minds than that stones lack minds. But none
of this helps in the lab, or in the principled attempt to say what
functions DO give rise to minds, and how.

> Harnad says that we are not looking for a mathematical proof, which is
> true. But most of his postings demand that we show the NEED for assuming
> consciousness in an entity, which is empirically the same thing as
> proving them to be conscious.

No. I argue for methodological epiphenomenalism for three reasons
only: (1) Wrestling with an insoluble problem is futile. (2) Gussying
up trivial performance models with conscious interpretations gives the
appearance of having accomplished more than one has; it is
self-deceptive and distracts from the real goal, which is a
performance goal. (3) Focusing on trying to capture subjective phenomenology
rather than objective performance leads to subjectively gratifying
analogy, metaphor and hermeneutics instead of to objectively stronger
performance models. Hence when I challenge a triumphant mentalist
interpretation of a process, function or performance and ask why it
wouldn't function exactly the same way without the consciousness, I am
simply trying to show up theoretical vacuity for what it is. I promise
to stop asking that question when someone designs a device that passes
the TTT, because then there's nothing objective left to do, and an
orgy of interpretation can no longer do any harm.



--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 4 Feb 87 02:46:27 GMT
From: clyde!watmath!utzoo!dciem!mmt@rutgers.rutgers.edu (Martin Taylor)
Subject: Re: Consciousness?


(Moved from mod.ai)
> I always thought that a scientific theory had to undergo a number of
> tests to determine how "good" it is. Needless to say, a perfect score
> on one test may be balanced by a mediocre score on another test. Some
> useful tests are:
>
> - Does the theory account for the data?
> - Is the theory simple? Are there unnecessary superfluousities?
> - Is the theory useful? Does it provide the basis for a fruitful
> program of research?
All true, and probably all necessary.
> ....
> While the study of consciousness is fascinating and lies at the base of
> numerous religions, it doesn't seem to be scientifically useful. Do I
> rewrite my code because the machine is conscious or because it is
> getting the wrong answer?
If you CAN write your code without demanding your machine be conscious,
then you don't need consciousness to write your code. But if you want
to construct a system that can, for example, darn socks or write a fine
sonata, you should probably (for now) write your code with the assumption
of consciousness in the executing machine.

In other words, you are confusing the unnecessary introduction of
consciousness into a system wherein you know all the working principles
with the question of whether consciousness is required for certain
functions.
> Is there a program of experimentation
> suggested by the search for consciousness?
Consciousness need not be sought. You experience it (I presume). The
question is whether behaviour can better (by the tests you present above)
be described by including consciousness or by not including it. If, by
"the search for consciousness" you mean the search for a useful definition
of consciousness, I'll let others answer that question.
> Does consciousness change the way artificial intelligence must be
> programmed? The evidence so far says NO. [How is that for a baldfaced
> assertion?]
Pretty good. But for reasons stated above, it's irrelevant if you start
with the idea that AI must be programmed in a silicon (i.e. constructed)
machine. Any such development precludes the necessity of using consciousness
in the design, although it does not preclude the possibility that the
end product might BE conscious.
>
>
> I don't think scientific theories of consciousness are incorrect, I
> think they are barren.
Now THAT's a bald assertion. Barren for what purpose? Certainly for
construction purposes, but perhaps not for understanding what evolved
organisms do. (I take no stand on whether consciousness is in fact a
useful construct. I only want to point out that it has potential for
being useful, even though not in devising artificial constructs).
>
> Seth
--

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt
mmt@zorac.arpa

------------------------------

End of AIList Digest
********************
