AIList Digest           Thursday, 29 Jan 1987      Volume 5 : Issue 22 

Today's Topics:
Policy - AI Magazine Code,
Philosophy - Methodological Epiphenomenalism & Consciousness,
Psychology - Objective Measurement of Subjective Variables

----------------------------------------------------------------------

Date: Wed 28 Jan 87 10:26:52-PST
From: PAT <HAYES@SPAR-20.ARPA>
Reply-to: HAYES@[128.58.1.2]
Subject: Re: AIList Digest V5 #18

We've had some bitches about too much philosophy, but I never expected
to be sent CODE to read.
Pat Hayes
PS Especially with price lists in the comments. Anyone who is willing to pay
$50.00 for a backward chaining program shouldn't be reading AIList Digest.

------------------------------

Date: 28 Jan 87 14:57:08 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: AIList Digest V5 #20

AIList Digest Wednesday, 28 Jan 1987 Volume 5 : Issue 20
Today's Topics:
AI Expert Magazine Sources (Part 3 of 22)


I can't believe you're really sending 22 of these moby messages to the
entire AIList. Surely you could have collected requests from
interested individuals and then sent it only to them.

------------------------------

Date: Wed 28 Jan 87 22:12:59-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Code Policy


The bulk of this code mailing does bother me, but there seems to be
at least as much interest in it as in the seminar notices, bibliographies,
and philosophy discussions. AIList reaches thousands of students, and
a fair proportion are no doubt interested in examining the code. The
initial offer of the code drew only positive feedback, so far as I
know. Even the mailing of the entire stream (in nine 50-K files) on
the comp.ai distribution drew no public protest. I'm still open to
discussion, but I'll continue the series unless there is substantial
protest. Keeping up with current issues of AI Magazine will be much
less disruptive once this backlog is cleared up.

The mailing process is much more efficient for batched addresses
than for individual mailings (which send multiple copies through
many intermediate and destination hosts), so individual replies
seem out of the question -- and I can't afford to condense hundreds
of requests into a batch distribution list. (Can't somebody invent
an AI program to do that?)
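
As a rough illustration only (a hypothetical sketch in Python; the
addresses and host names below are invented, and nothing like this was
actually run for the list), the clerical part of that job is a grouping
pass that condenses individual requests into per-host batch lists, so
each destination host receives a single copy:

from collections import defaultdict

def batch_by_host(addresses):
    """Group mailbox addresses by destination host."""
    batches = defaultdict(list)
    for addr in addresses:
        user, _, host = addr.rpartition("@")
        batches[host].append(user)
    return dict(batches)

requests = ["smith@mit-ai", "jones@mit-ai", "laws@sri-stripe", "doe@cmu-cs-a"]
print(batch_by_host(requests))
# {'mit-ai': ['smith', 'jones'], 'sri-stripe': ['laws'], 'cmu-cs-a': ['doe']}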

It would be nice if the code could be distributed by FTP, but that
only works for the Arpanet readership. Most of the people signing
on in the last year or two are on BITNET. I still haven't gotten
around to finding BITNET relay sites, so there is no convenient way
to split the mailing. Anyway, that would still force hundreds of
Arpanet readers to go through the FTP process, and it is probably
more cost effective to just mail out the code and let uninterested
readers ignore it.

Suggestions are welcome.

-- Ken Laws

------------------------------

Date: 26 Jan 87 18:55:54 GMT
From: clyde!watmath!sunybcs!colonel@rutgers.rutgers.edu (Col. G. L. Sicherman)
Subject: Re: Minsky on Mind(s)

> ... It is a way for bodily tissues to get the attention
> of the reasoning centers. Instead of just setting some "damaged
> tooth" bit, the injured nerve grabs the brain by the lapels and says
> "I'm going to make life miserable for you until you solve my problem."

This metaphor seems to suggest that consciousness wars with itself. I
would prefer to say that the body grabs the brain by the handles, like
a hedge clipper or a geiger counter. In other words, just treat the
mind as a tool, without any personality of its own. After all, it's the
body that is real; the mind is only an abstraction.

By the way, it's well known that if the brain has a twist in it, it
needs only one handle. Ask any topologist!
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@ubvms

------------------------------

Date: Mon, 26 Jan 87 23:57:40 est
From: Stevan Harnad <princeton!mind!harnad@seismo.CSS.GOV>
Subject: Methodological Epiphenomenalism

"CUGINI, JOHN" <cugini@icst-ecf> wrote on mod.ai"

> Subject: Consciousness as a Superfluous Concept, But So What?

So: methodological epiphenomenalism.

> Consciousness may be as superfluous (wrt evolution) as earlobes.
> That hardly goes to show that it ain't there.

Agreed. It only goes to show that methodological epiphenomenalism may
indeed be the right research strategy. (The "why" is a methodological
and logical question, not an evolutionary one. I'm arguing that no
evolutionary scenario will help. And it was never suggested that
consciousness "
ain't there." If it weren't, there would be no
mind/body problem.)

> I don't think it does NEED to be so. It just is so.

Fine. Now what are you going to do about it, methodologically speaking?

Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: 27 Jan 87 19:44:16 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: Objective measurement of subjective variables


adam@mtund.UUCP (Adam V. Reed), of AT&T ISL Middletown NJ USA, wrote:

> Stevan Harnad makes an unstated assumption... that subjective
> variables are not amenable to objective measurement. But if by
> "
objective" Steve means, as I think he does, "observer-invariant", then
> this assumption is demonstrably false.

I do make the assumption (let me state it boldly) that subjective
variables are not objectively measurable (nor are they objectively
explainable) and that that's the mind/body problem. I don't know what
"
observer-invariant" means, but if it means the same thing as in
physics -- which is that the very same physical phenomenon can
occur independently of any particular observation, and can in
principle be measured by any observer, then individuals' private events
certainly are not such, since the only eligible observer is the
subject of the experience himself (and without an observer there is no
experience -- I'll return to this below). I can't observe yours and you
can't observe mine. That's one of the definitive features of the
subjective/objective distinction itself, and it's intimately related to
the nature of experience, i.e., of subjectivity, of consciousness.

> Whether or not a stimulus is experienced as belonging to some target
> category is clearly a private event...[This is followed by an
> interesting thought-experiment in which the signal detection parameter
> d' could be calculated for himself by a subject after an appropriate
> series of trials with feedback and no overt response.]... the observer
> would be able to mentally compute d' without engaging in any externally
> observable behavior whatever.

Unfortunately, this in no way refutes the claim that subjective experience
cannot be objectively measured or explained. Not only is there (1) no way
of objectively testing whether the subject's covert calculations on
that series of trials were correct, not only is there (2) no way of
getting any data AT ALL without his overt mega-response at the end
(unless, of course, the subject is the experimenter, which makes the
whole exercise solipsistic), but, worst of all, (3) the very same
performance data could be generated by presenting inputs to a
computer's transducer, and no matter how accurately it reported its
d', we presumably wouldn't want to conclude that it had experienced anything
at all. So what's OBJECTIVELY different about the human case?
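
As a concrete illustration of what the covert computation in the quoted
thought experiment amounts to, here is a minimal sketch (not part of the
original exchange; the function name and trial counts are invented) of
the standard equal-variance Gaussian signal-detection formula,
d' = z(hit rate) - z(false-alarm rate):

from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' from trial counts, nudging the rates away from
    exactly 0 or 1 so the inverse normal transform stays defined."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 40 signal trials and 40 noise trials with feedback.
print(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30))

The point of (3) above is that a machine fed the same trial-by-trial
data would report exactly the same number.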

At best, what's being objectively measured happens to correlate
reliably with subjective experience (as we can each confirm in our own
cases only -- privately and subjectively). What we are actually measuring
objectively is merely behavior (and, if we know what to look for, also
its neural substrate). By the usual objective techniques of scientific
inference on these data we can then go on to formulate (again objective)
hypotheses about underlying functional (causal) mechanisms. These should
be testable and may even be valid (all likewise objectively). But the
testability and validity of these hypotheses will always be objectively
independent of any experiential correlations (i.e., the presence or
absence of consciousness).

To put it in my standard stark way: The psychophysics of a conscious
organism (or device) will always be objectively identical to that
of a turing-indistinguishable unconscious organism (or device) that
merely BEHAVES EXACTLY AS IF it were conscious. (It is irrelevant whether
there are or could be such organisms or devices; what's at issue here is
objectivity. Moreover, the "reliability" of the correlations is of
course objectively untestable.) This leaves subjective experience a
mere "
nomological dangler" (as the old identity theorists used to call
it) in a lawful psychophysical account. We each (presumably) know it's
there from our respective subjective observations. But, objectively speaking,
psychophysics is only the study of, say, the detecting and discriminating
capacity (i.e., behavior) of our transducer systems, NOT the qualities of our
conscious experience, no matter how tight the subjective correlation.
That's no limit on psychophysics. We can do it as if it were the study
of our conscious experience, and the correlations may all be real,
even causal. But the mind/body problem and the problem of objective
measurement and explanation remain completely untouched by our findings,
both in practice and in principle.

So even in psychophysics, the appropriate research strategy seems to
be methodological epiphenomenalism. If you disagree, answer this: What
MORE is added to our empirical mission in doing psychophysics if we
insist that we are not "merely" trying to account for the underlying
regularities and causal mechanisms of detection, discrimination,
categorization (etc.) PERFORMANCE, but of the qualitative experience
accompanying and "mediating" it? How would someone who wanted to
undertake the latter rather than merely the former go about things any
differently, and how would his methods and findings differ (apart from
being embellished with a subjective interpretation)? Would there be any
OBJECTIVE difference?

I have no lack of respect for psychophysics, and what it can tell us
about the functional basis of categorization. (I've just edited and
contributed to a book on it.) But I have no illusions about its being
in any better a position to make objective inroads on the mind/body
problem than neuroscience, cognitive psychology, artificial
intelligence or evolutionary biology -- and they're in no position at all.

> In principle, two investigators could perform the [above] experiment
> ...and obtain objective (in the sense of observer-independent)
> results as to the form of the resulting lawful relationships between,
> for example, d' and memory retention time, *without engaging in any
> externally observable behavior until it came time to compare results*.

I'd be interested in knowing how, if I were one of the experimenters
and Adam Reed were the other, he could get "objective
(observer-independent) results" on my experience and I on his. Of
course, if we make some (question-begging) assumptions about the fact
that the experience of our respective alter egos (a) exists, (b) is
similar to our own, and (c) is veridically reflected by the "form" of the
overt outcome of our respective covert calculations, then we'd have some
agreement, but I'd hardly dare to say we had objectivity.

(What, by the way, is the difference in principle between overt behavior
on every trial and overt behavior after a complex-series-of-trials?
Whether I'm detecting individual signals or calculating cumulating d's
or even more complex psychophysical functions, I'm just an
organism/device that's behaving in a certain way under certain
conditions. And you're just a theorist making inferences about the
regularities underlying my performance. Where does "experience" come
into it, objectively speaking? -- And you're surely not suggesting that
psychophysics be practiced as a solipsistic science, each experimenter
serving as his own sole subject: for from solipsistic methods you can
only arrive at solipsistic conclusions, trivially observer-invariant,
but hardly objective.)

> The following analogy (proposed, if I remember correctly, by Robert
> Efron) may illuminate what is happening here. Two physicists, A and B,
> live in countries with closed borders, so that they may never visit each
> other's laboratories and personally observe each other's experiments.
> Relative to each other's personal perception, their experiments are
> as private as the conscious experiences of different observers. But, by
> replicating each other's experiments in their respective laboratories,
> they are capable of arriving at objective knowledge. This is also true,
> I submit, of the psychological study of private, "subjective" experience.

As far as I can see, Efron's analogy casts no light at all.
It merely reminds us that even normal objectivity in science (intersubjective
repeatability) happens to be piggy-backing on the existence of
subjective experience. We are not, after all, unconscious automata. When we
perform an "
observation," it is not ONLY objective, in the sense that
anyone in principle can perform the same observation and arrive at the
same result. There is also something it is "like" to observe
something -- observations are also conscious experiences.

But apart from some voodoo in certain quantum mechanical meta-theories,
the subjective aspect of objective observations in physics seems to be
nothing but an innocent fellow-traveller: The outcome of the
Michelson-Morley Experiment would presumably be the same if it were
performed by an unconscious automaton, or even if WE were unconscious automata.
This is decidedly NOT true of the (untouched) subjective aspect of a
psychophysical experiment. Observer-independent "experience" is a
contradiction in terms.

(Most scientists, by the way, do not construe repeatability to require
travelling directly to one another's labs; rather, it's a matter of
recreating the same objective conditions. Unfortunately, this does not
generalize to the replication of anyone else's private events, or even
to the EXISTENCE of any private events other than one's own.)

Note that I am not denying that objective knowledge can be derived
from psychophysics; I'm only denying that this can amount to objective
knowledge about anything MORE than psychophysical performance and its
underlying causal substrate. The accompanying subjective phenomenology is
simply not part of the objective story science can tell, no matter how, and
how tightly, it happens to be coupled to it in reality. That's the
mind/body problem, and a fundamental limit on objective inquiry.
Methodological epiphenomenalism recommends we face it and live with
it, since not that much is lost. The "incompleteness" of an objective
account is, after all, just a subjective problem. But supposing away
the incompleteness -- by wishful thinking, hopeful over-interpretation,
hidden (subjective) premises or blurring of the objective/subjective
distinction -- is a logical problem.
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: Mon, 26 Jan 87 23:25:17 est
From: mnetor!dciem!mmt@seismo.CSS.GOV
Subject: Necessity of consciousness

Newsgroups: mod.ai
Subject: Re: Minsky on Mind(s)
References: <8701221730.AA04257@seismo.CSS.GOV>
Reply-To: mmt@dciem.UUCP (Martin Taylor)
Organization: D.C.I.E.M., Toronto, Canada

I tried to send this direct to Steve Harnad, but his signature is
incorrect: seismo thinks princeton is an "unknown host". Also mail
to him through allegra bounced.
===============
>just answer the following question: When the dog's tooth is injured,
>and it does the various things it does to remedy this -- inflammation
>reaction, release of white blood cells, avoidance of chewing on that
>side, seeking soft foods, giving signs of distress to his owner, etc. etc.
>-- why do the processes that give rise to all these sequelae ALSO need to
>give rise to any pain (or any conscious experience at all) rather
>than doing the very same tissue-healing and protective-behavioral job
>completely unconsciously? Why is the dog not a turing-indistinguishable
>automaton that behaves EXACTLY AS IF it felt pain, etc, but in reality
>does not? That's another variant of the mind/body problem, and it's what
>you're up against when you're trying to justify interpreting physical
>processes as conscious ones. Anything short of a convincing answer to
>this amounts to mere hand-waving on behalf of the conscious interpretation
>of your proposed processes.]

I'm not taking up your challenge, but I think you have overstated
the requirements for a challenge. Ockham's razor demands only that
the simplest explanation be accepted, and I take this to mean inclusive
of boundary conditions AND preconceptions. The acceptability of a
hypothesis must be relative to the observer (say, scientist), since
we have no access to absolute truth. Hence, the challenge should be
to show that the concept of consciousness in the {dog|other person|automaton}
provides a simpler description of the world than the elimination of
the concept of consciousness does.

The whole-world description includes your preconceptions, and a hypothesis
that demands that you change those preconceptions is FROM YOUR VIEWPOINT
more complex than one that does not. Since you start from the
preconception that consciousness need not (or perhaps should not)
be invoked, you need stronger proof than would, say, an animist.

Your challenge should ask for a demonstration that the facts of observable
behaviour can be more succinctly described using consciousness than not
using it. Obviously, there can be no demonstration of the necessity of
consciousness, since ALL observable behaviour could be the result of
remotely controlled puppetry (except your own, of course). But this
hypothesis is markedly more complex than a hypothesis derived from
psychological principles, since every item of behaviour must be separately
described as part of the boundary conditions.
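
A toy sketch of this description-length reading of the razor (this is
not the mathematization mentioned below, which is not reproduced here;
the bit counts are invented purely for illustration): score each
hypothesis by the bits needed to state it plus the bits needed to list
whatever behaviour it leaves as separate boundary conditions, and
prefer the smaller total.

def description_length(hypothesis_bits, unexplained_items, bits_per_item=16):
    """Cost of a hypothesis: its own statement plus every item of
    behaviour that must be separately listed as a boundary condition."""
    return hypothesis_bits + unexplained_items * bits_per_item

n_behaviours = 1000  # observed items of behaviour to be accounted for

# "Other beings are conscious": one general rule, few leftover items.
other_minds = description_length(hypothesis_bits=200, unexplained_items=10)

# "Remote-controlled puppetry": cheap to state, but every item of
# behaviour goes into the boundary conditions.
puppetry = description_length(hypothesis_bits=50,
                              unexplained_items=n_behaviours)

print(other_minds, puppetry)  # the shorter description wins: 360 vs 16050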

I have a mathematization of this argument, if you are interested. It is
about 15 years old, but it still seems to hold up pretty well. Ockham's
razor isn't just a good idea, it is informationally the correct means
of selecting hypotheses. However, like any other razor, it must
be used correctly, and that means that one cannot ignore the boundary
conditions that must be stated when using the hypothesis to make specific
predictions or descriptions. Personally, I think that hypotheses that
allow other people (and perhaps some animals) to have consciousness are
simpler than hypotheses that require me to describe myself as a special
case. Hence, Ockham's razor forces me to prefer the hypothesis that other
beings have consciousness. The same does not hold true for silicon-based
behaving entities, because I already have hypotheses that explain their
behaviour without invoking consciousness, and these hypotheses already
include the statement that silicon-based beings are different from me.
Any question of silicon-based consciousness must be argued on a different
basis, and I think such arguments are likely to turn on personal preference
rather than on the facts of behaviour.

------------------------------

End of AIList Digest
********************
