AIList Digest            Tuesday, 27 Jan 1987      Volume 5 : Issue 17 

Today's Topics:
Psychology - Objective Measurement of Subjective Variables,
Philosophy - Quantitative Consciousness

----------------------------------------------------------------------

Date: 25 Jan 87 15:15:06 GMT
From: clyde!burl!codas!mtune!mtund!adam@rutgers.rutgers.edu (Adam V. Reed)
Subject: Objective measurement of subjective variables

John Cugini:
>> .... to explain, or at least discuss, private
>> subjective events. If it be objected that the latter are outside the
>> proper realm of science, so be it, call it schmience or philosophy or
>> whatever you like. - but surely anything that is REAL, even if
>> subjective, can be the proper object for some sort of rational
>> study, no?
Stevan Harnad:
> Some sort, no doubt. But not an objective sort, and that's the point.
> Empirical psychology, neuroscience and artificial intelligence are
> all, I presume, branches of objective inquiry.
> .... Let's leave the subjective discussion of private events
> to lit-crit, where it belongs.

Stevan Harnad makes an unstated assumption here, namely, that subjective
variables are not amenable to objective measurement. But if by
"objective" Steve means, as I think he does, "observer-invariant", than
this assumption is demonstrably false. I shall proceed to demonstrate
this in two parts: (1) private events are amenable to parametric
measurement; and (2) relevant results of such measurement can be
observer-invariant.

(1) Whether or not a stimulus is experienced as belonging to some target
category is clearly a private event. Now data for the measurement of d',
the detection-theoretic measure of discriminability, are usually
gathered using overt behavior, such as pressing "target" and
"non-target" buttons. But in principle, d' can be measured without any
resort to externally observable behavior. Suppose I program a computer
to present a sequence of stimuli and, following enough time after
each stimulus to allow the observer to mentally classify the experience
as target or non-target, display the actual category of the preceding
stimulus. The observer would use this information to maintain a mental
count of hits and false alarms. The category feedback for the last
stimulus could be followed by a display of a table for the conversion of
hit and false alarm rates into d'. Thus, the observer would be able to
mentally compute d' without engaging in any externally observable
behavior whatever.
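
For concreteness, here is the computation such a conversion table encodes,
under the standard equal-variance Gaussian model of detection theory:
d' = z(H) - z(F), where H is the hit rate, F the false-alarm rate, and z
the inverse of the standard normal CDF. The Python sketch below is
illustrative only; the trial counts are hypothetical, and no program of
this kind is part of the procedure described above.

    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # d' from the trial tallies the observer keeps, mentally or otherwise
        z = NormalDist().inv_cdf          # inverse standard normal CDF
        hit_rate = hits / (hits + misses)
        fa_rate = false_alarms / (false_alarms + correct_rejections)
        return z(hit_rate) - z(fa_rate)

    # Hypothetical tallies over 100 target and 100 non-target trials:
    print(d_prime(hits=80, misses=20, false_alarms=15, correct_rejections=85))
    # prints roughly 1.88, the value the observer would read off the table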

(2) In some well-defined contexts, the variation of d' with an
independent variable is as lawful as anything in the "known to be
objective" sciences such as physics (see Reed, Memory and Cognition
1976, 4(4), 453-458, equation 5 and bottom panel of figure 1, for an
example of this). The parameters of such lawful relationships will
differ from observer to observer, but their form is observer-invariant.
In principle, two investigators could perform the experiment as in (1)
above, and obtain objective (in the sense of observer-independent)
results as to the form of the resulting lawful relationships between,
for example, d' and memory retention time, *without engaging in any
externally observable behavior until it came time to compare results*.
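
To illustrate "same form, different parameters", suppose, purely for the
sake of the sketch (this is an assumption, not Reed's equation 5), that
d' decays exponentially with retention time t: d' = a * exp(-t / tau),
with a and tau specific to each observer. On a log scale both observers
then see a straight line; the slopes differ, but the linear form, which
is what the two investigators would compare, is observer-invariant.

    import math

    def d_prime_curve(a, tau, times):
        # assumed lawful form: d' = a * exp(-t / tau), parameters per observer
        return [a * math.exp(-t / tau) for t in times]

    times = [1, 2, 4, 8, 16]              # retention intervals, hypothetical
    observers = {"A": d_prime_curve(3.0, 6.0, times),
                 "B": d_prime_curve(2.2, 9.0, times)}

    # The assumed exponential shows up as a common linearity in log d';
    # the slopes (equal to -1/tau) are observer-specific.
    for name, curve in observers.items():
        slope = (math.log(curve[-1]) - math.log(curve[0])) / (times[-1] - times[0])
        print(f"Observer {name}: log-d' slope = {slope:.3f}")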

The following analogy (proposed, if I remember correctly, by Robert
Efron) may illuminate what is happening here. Two physicists, A and B,
live in countries with closed borders, so that they may never visit each
other's laboratories and personally observe each other's experiments.
Relative to each other's personal perception, their experiments are
as private as the conscious experiences of different observers. But, by
replicating each other's experiments in their respective laboratories,
they are capable of arriving at objective knowledge. This is also true,
I submit, of the psychological study of private, "subjective"
experience.

Adam Reed
mtund!adam,attmail!adamreed

------------------------------

Date: 24 Jan 87 15:34:27 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s)


mwm@cuuxb.UUCP (Marc W. Mengel) of AT&T-IS, Software Support, Lisle IL
writes:

> It seems to me that the human consciousness is actually more
> of a C-n; C-1 being "capable of experiencing sensation",
> C-2 being "capable of reasoning about being C-1", and C-n
> being "capable of reasoning about C-1..C-(n-1)" for some
> arbitrarily large n... Or was that really the intent of
> the Minsky C-2?

It's precisely this sort of overhasty overinterpretation that my critique
of the excerpts from Minsky's forthcoming book was meant to counteract. You
can't help yourself to higher-order C's until you've handled 1st-order C
-- unless you're satisfied with hanging them on a hermeneutic sky-hook.
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: Sun 25 Jan 87 22:08:34-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Quantitative Consciousness

Stevan Harnad:
> Everyone knows that there's no
> AT&T to stick a pin into, and to correspondingly feel pain. You can do
> that to the CEO, but we already know (modulo the TTT) that he's
> conscious. You can speak figuratively, and even functionally, of a
> corporation as if it were conscious, but that still doesn't make it so.
> [...] Do you believe [...] corporations feel pain, as we do?

They sure act like it when someone puts arsenic in their capsules.
I'm inclined to grant a limited amount of consciousness to corporations
and even to ant colonies. To do so, though, requires rethinking the
nature of pain and pleasure (to something related to homeostasis).
I don't know of any purely mechanical systems that approach consciousness,
but computer operating systems and adaptive communications networks are
close. The issue is partly one of complexity, partly of structure,
partly of function. I am assuming that neurons and other "simple"
systems are C-1 but not C-2 -- and C-2 is the kind of consciousness
that people are really interested in. C-2 consciousness seems to
require that at least one subsystem be "wired" to reason about its
own existence, although I gather that this may be denied in the
theory of situated automata. The mystery for me is why only >>one<<
subsystem in my brain seems to have that introspective property -- but
multiple personalities or split-brain subjects may show that
this is not a necessary condition.


Stevan Harnad:
> There are serious problems with the quantitative view of
> consciousness. No doubt my alertness, my sensory capacity and my
> knowledge admit of degrees. I may feel more pain or less pain, more or
> less often, under more or fewer conditions. But THAT I feel pain, or
> experience anything at all, seems an all-or-none matter, and that's
> what's at issue in the mind/body problem.

An airplane either can fly or it can't. (And there's no way half a
B-52 can fly, no matter how you choose your half.) Yet there are
simpler forms of flight used by other entities -- kites, frisbees,
paper airplanes, butterflies, dandelion seeds, ... My own opinion
is that insects and fish feel pain, but often do so in a generalized,
nonlocalized way that is similar to a feeling of illness in humans.
Octopi seem to be conscious, but with a psychology like that of spiders
(i.e., if hungry, conserve energy and wait for food to come along).
I assume that lower forms experience lower forms of consciousness
along with lower levels of intelligence. Such continua seem natural
to me. If you wish to say that only humans and TTT-equivalents are
conscious, you should bear the burden of establishing the existence
and nature of the discontinuity.


Stevan Harnad:
> It also seems arbitrary to be "willing" to ascribe consciousness to
> neurons and not to atoms.

When someone demonstrates that atoms can learn, I'll reconsider.
(Incidentally, this raises the metaphysical question of whether God
can be conscious if He already knows everything.) You are questioning
my choice of discontinuity, but mine is easy to defend (or give up)
because I assume that the scale of consciousness tapers off into
meaninglessness. Asking whether atoms are conscious is like asking
whether aircraft bolts can fly.


Stevan Harnad:
> The issue here is: what justifies interpreting something/someone as
> conscious? The Total Turing Test has been proposed as our only criterion.
> What criterion are you using with neurons?

Your TTT has been put forward as the only justifiable means of deciding
that an entity is conscious. I can't force myself to believe that,
although you have already punched holes in arguments far more cogent
than I could have raised. Still, I hope you're not insisting that
no entity can be conscious without passing the TTT. Even a rock could
be conscious without our having any justifiable means of deciding so.


Stevan Harnad:
> And even if single cells are
> conscious -- do feel pain, etc. -- what evidence is there that this is
> RELEVANT to their collective function in a superordinate organism?

What evidence is there that it isn't? Evolved and engineered systems
generally support the "form follows function" dictum. Aircraft parts
have to be airworthy whether or not they can fly on their own.


Stevan Harnad:
> Why doesn't replacing conscious nerve cells with
> synthetic molecules matter? (To reply that synthetic substances with the
> same functional properties must be conscious under these conditions is
> to beg the question.)

I beg your pardon? Or rather, I beg to beg your question. I presume
that a synthetic replica of myself, or any number of such replicas,
would continue my consciousness.


Stevan Harnad:
> If I sound like I'm calling an awful lot of gambits "question-begging,"
> it's because the mind/body problem is devilishly subtle, and the
> temptation to capitulate by slipping consciousness back into one's
> premises is always there.

Perhaps professional philosophers are able to strive for a totally
consistent world view. We armchair amateurs have to settle for
tackling one problem at a time. A standard approach is to open
back doors and try to push the problem through; if no one pushes back,
the problem is [temporarily] solved. (Another approach is to duck
out the back way ourselves, leaving the problem unsolved: Why is
there Being instead of Nothingness? Who cares?) I'm glad you've
been guarding the back doors and I appreciate your valiant efforts
to clarify the issues. I have to live with my gut feelings, though,
and they remain unconvinced that the TTT is of any use. If I had to
build an aircraft, I would not begin by refuting theological arguments
about Man being given dominion over the Earth rather than the Heavens.
I would start from a premise that flight was possible and would
try to derive enabling conditions. Perhaps the attempt would be
futile. Perhaps I would invent only the automobile and the rocket,
and fail to combine them into an aircraft. But I would still try.

-- Ken Laws

------------------------------

End of AIList Digest
********************
