AIList Digest            Monday, 27 Oct 1986      Volume 4 : Issue 236 

Today's Topics:
Administrivia - Mod.ai Followup Problem,
Philosophy - Replies from Stevan Harnad to Mozes, Cugini, and Kalish

----------------------------------------------------------------------

Date: Sun 26 Oct 86 17:11:45-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Mod.ai Followup Problem

The following five messages are replies by Stevan Harnad to some of the
items that have appeared in AIList. These five had not made it to the
AIList@SRI-STRIPE mailbox and so were never forwarded to the digest or
to mod.ai. Our current hypothesis is that the Usenet readnews command
does not correctly deliver followup ("f") messages when used to reply
to mod.ai items. Readers with this problem can send replies to net.ai
or to sri-stripe!ailist.

-- Kenneth Laws

------------------------------

Date: Mon, 27 Oct 86 00:15:36 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: for posting on mod.ai (reply to E. Mozes, reconstructed)

On mod.ai, in Message-ID: <8610160605.AA09268@ucbvax.Berkeley.EDU>
on 16 Oct 86 06:05:38 GMT, eyal@wisdom.BITNET (Eyal mozes) writes:

> I don't see your point at all about "categorical perception". You say
> that "differences between reds and differences between yellows look
> much smaller than equal-sized differences that cross the red/yellow
> boundary". But if they look much smaller, this means they're NOT
> "equal-sized"; the differences in wave-length may be the same, but the
> differences in COLOR are much smaller.

There seems to be a problem here, and I'm afraid it might be the
mind/body problem. I'm not completely sure what you mean. If all
you mean is that sometimes equal-sized differences in inputs can be
made unequal by internal differences in how they are encoded, embodied
or represented -- i.e., that internal physical differences of some
sort may mediate the perceived inequalities -- then I of course agree.
There are indeed innate color-detecting structures. Moreover, it is
the hypothesis of the paper under discussion that such internal
categorical representations can also arise as a consequence of
learning.

If what you mean, however, is that there exist qualitative differences among
equal-sized input differences with no internal physical counterpart, and
that these are in fact mediated by the intrinsic nature of phenomenological
COLOR -- that discontinuous qualitative inequalities can occur when
everything physical involved, external and internal, is continuous and
equal -- then I am afraid I cannot follow you.
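
To make the first reading concrete, here is a minimal sketch of how an
internal encoding can render physically equal wavelength differences
perceptually unequal. The boundary value and the sigmoidal warping are
invented for illustration; this is not a model of human color vision.

import math

BOUNDARY_NM = 590.0   # hypothetical red/yellow boundary (an assumption)
GAIN = 0.5            # steepness of the warping around the boundary

def internal_code(wavelength_nm):
    # Warp the physical continuum: compressed within a category,
    # expanded across the category boundary.
    return 1.0 / (1.0 + math.exp(-GAIN * (wavelength_nm - BOUNDARY_NM)))

def perceived_difference(w1, w2):
    return abs(internal_code(w1) - internal_code(w2))

# Three physically equal 10-nm steps:
print(perceived_difference(570, 580))   # within one category: small
print(perceived_difference(585, 595))   # straddles the boundary: large
print(perceived_difference(600, 610))   # within the other category: small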

My own position on color quality -- i.e., "what it's like" to
experience red, etc. -- is that it is best ignored, methodologically.
Psychophysical modeling is better off restricting itself to what we CAN
hope to handle, namely, relative and absolute judgments: What differences
can we tell apart in pairwise comparison (relative discrimination) and
what stimuli or objects can we label or identify (absolute
discrimination)? We have our hands full modeling this. Further
concerns about trying to capture the qualitative nature of perception,
over and above its performance consequences [the Total Turing Test],
are, I believe, futile.
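
In the same spirit, the two kinds of judgment can be caricatured as
follows; the threshold, prototype codes and labels are invented for
illustration only.

JND = 0.05                                  # assumed just-noticeable difference
PROTOTYPES = {"yellow": 0.25, "red": 0.75}  # assumed stored category codes

def relative_discrimination(code_a, code_b):
    # Pairwise comparison: can the two internal codes be told apart?
    return abs(code_a - code_b) > JND

def absolute_identification(code):
    # Identification/labeling: assign the nearest stored category label.
    return min(PROTOTYPES, key=lambda name: abs(code - PROTOTYPES[name]))

print(relative_discrimination(0.24, 0.26))  # False: indiscriminable pair
print(relative_discrimination(0.30, 0.70))  # True: discriminable pair
print(absolute_identification(0.31))        # 'yellow'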

This position can be dubbed "methodological epiphenomenalism." It amounts
to saying that the best empirical theory of mind that we can hope to come
up with will always be JUST AS TRUE of devices that actually have qualitative
experiences (i.e., are conscious) as of devices that behave EXACTLY AS IF
they had qualitative experiences (i.e., turing-indistinguishably), but do
not (if such insentient look-alikes are possible). The position is argued
in detail in the papers under discussion.


> Your whole theory is based on the assumption that perceptual qualities
> are something physical in the outside world (e.g., that colors ARE
> wave-lengths). But this is wrong. Perceptual qualities represent the
> form in which we perceive external objects, and they're determined both
> by external physical conditions and by the physical structure of our
> sensory apparatus; thus, colors are determined both by wave-lengths and
> by the physical structure of our visual system. So there's no a priori
> reason to expect that equal-sized differences in wave-length will lead
> to equal-sized differences in color, or to assume that deviations from
> this rule must be caused by internal representations of categories. And
> this seems to completely cut the grounds from under your theory.

Again, there is nothing for me to disagree with if you're saying that
perceived discontinuities are mediated by either external or internal
physical discontinuities. In modeling the induction and representation
of categories, I am modeling the physical sources of such
discontinuities. But there's still an ambiguity in what you seem to be
saying, and I don't think I'm mistaken if I think I detect a note of
dualism in it. It all hinges on what you mean by "outside world." If
you only mean what's physically outside the device in question, then of
course perceptual qualities cannot be equated with that. It's internal
physical differences that matter.

But that doesn't seem to be all you mean by "outside world." You seem
to mean that the whole of the physical world is somehow "outside" conscious
perception. What else can you mean by the statement that "perceptual
qualities represent the form [?] in which we perceive external objects"
or that "there's no...reason to expect that...[perceptual] deviations
from [physical equality]...must be caused by internal representations
of categories"?


Perhaps I have misunderstood, but either this is just a reminder that
there are internal physical differences one must take into account too
in modeling the induction and representation of categories (but then
they are indeed taken into account in the papers under discussion, and
I can't imagine why you would think they would "completely cut the
ground from under" my theory) or else you are saying something metaphysical
with which I cannot agree.

One last possibility may have to do with what you mean by
"representation." I use the word eclectically, especially because the
papers are arguing for a hybrid representation, with the symbolic
component grounded in the nonsymbolic. So I can even agree with you
that I doubt that mere symbolic differences are likely to be the sole
cause of psychophysical discontinuities, although, being physically
embodied, they are in principle sufficient. I hypothesize, though,
that nonsymbolic differences are also involved in psychophysical
discontinuities.


> My second criticism is that, even if "categorical perception" really
> provided a base for a theory of categorization, it would be very
> limited; it would apply only to categories of perceptual qualities. I
> can't see how you'd apply your approach to a category such as "table",
> let alone "justice".

How abstract categories can be grounded "bottom-up" in concrete psychophysical
categories is the central theme of the papers under discussion. Your remarks
were based only on the summaries and abstracts of those papers. By now I
hope the preprints have reached you, as you requested, and that your
question has been satisfactorily answered. To summarize "grounding"
briefly: According to the model, (learned) concrete psychophysical categories
are formed from sampling positive and negative instances of a category
and then encoding the invariant information that will reliably identify
further instances. This might be how one learned the concrete
categories "horse" and "striped" for example. The (concrete) category
"zebra" could then be learned without need for direct perceptual
ACQUAINTANCE with the positive and negative instances by simply being
told that a zebra is a striped horse. That is, the category can
be learned by symbolic DESCRIPTION by merely recombining the labels of
the already-grounded perceptual categories.
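
A toy sketch of that acquaintance-then-description sequence follows.
The feature dictionaries, instances and helper functions are invented
for illustration; they are not the representation proposed in the
papers.

def learn_category(positive, negative):
    # Induce the "invariant information": feature/value pairs shared by
    # every positive instance and absent from every negative instance.
    invariants = set(positive[0].items())
    for inst in positive[1:]:
        invariants &= set(inst.items())
    for inst in negative:
        invariants -= set(inst.items())
    return invariants

def member(invariants, instance):
    return invariants <= set(instance.items())

# Concrete categories grounded by direct acquaintance with instances:
horse = learn_category(
    positive=[{"legs": 4, "mane": True, "hooves": True},
              {"legs": 4, "mane": True, "hooves": True, "striped": False}],
    negative=[{"legs": 2, "mane": False, "hooves": False}])

striped = learn_category(
    positive=[{"striped": True, "legs": 4}, {"striped": True, "legs": 6}],
    negative=[{"striped": False, "legs": 4}])

# A new category acquired by symbolic DESCRIPTION alone, by recombining
# the labels of categories that are already grounded:
def zebra(instance):
    return member(horse, instance) and member(striped, instance)

print(zebra({"legs": 4, "mane": True, "hooves": True, "striped": True}))  # True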

All categorization involves some abstraction and generalization (even
"horse," and certainly "striped" did), so abstract categories such as
"goodness," "truth" and "justice" could be learned and represented by
recursion on already grounded categories, their labels and their
underlying representations. (I have no idea why you think I'd have a
problem with "table.")


> Actually, there already exists a theory of categorization that is along
> similar lines to your approach, but integrated with a detailed theory
> of perception and not subject to the two criticisms above; that is the
> Objectivist theory of concepts. It was presented by Ayn Rand... and by
> David Kelley...

Thanks for the reference, but I'd be amazed to see an implementable,
testable model of categorization performance issue from that source...


Stevan Harnad
{allegra, bellcore, seismo, packard}!princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: Sun, 26 Oct 86 11:05:47 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: Please post on mod.ai -- first of 4 (cugini)


In Message-ID: <8610190504.AA08059@ucbvax.Berkeley.EDU> on mod.ai
CUGINI, JOHN <cugini@nbs-vms.ARPA> replies to my claim that

>> there is no rational reason for being more sceptical about robots'
>> minds (if we can't tell their performance apart from that of people)
>> than about (other) peoples' minds.

with the following:

> One (rationally) believes other people are conscious BOTH because
> of their performance and because their internal stuff is a lot like
> one's own.

This is a very important point and a subtle one, so I want to make
sure that my position is explicit and clear: I am not denying that
there exist some objective data that correlate with having a mind
(consciousness) over and above performance data. In particular,
there's (1) the way we look and (2) the fact that we have brains. What
I am denying is that this is relevant to our intuitions about who has a
mind and why. I claim that our intuitive sense of who has a mind is
COMPLETELY based on performance, and our reason can do no better. These
other correlates are only inessential afterthoughts, and it's irrational
to take them as criteria.

My supporting argument is very simple: We have absolutely no intuitive
FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
since spun an implementable brain theory from our introspective
armchairs.) Consequently, our belief that brains are evidence of minds and
that the absence of a brain is evidence of the absence of a mind is based
on a superficial black-box correlation. It is no more rational than
being biased by any other aspect of appearance, such as the color of
the skin, the shape of the eyes or even the presence or absence of a tail.

To put it in the starkest terms possible: We wouldn't know what device
was and was not relevantly brain-like if it was staring us in the face
-- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass
the Total Turing Test). That's the only thing our intuitions have to
go on, and our reason has nothing more to offer either.

To take one last pass at setting the relevant intuitions: We know what
it's like to DO (and be able to do) certain things. Similar
performance capacity is our basis for inferring that what it's like
for me is what it's like for you (or it). We do not know anything
about HOW we do any of those things, or about what would count as the
right way and the wrong way (functionally speaking). Inferring that
another entity has a mind is an intuitive judgment based on performance.
It's called the (total) turing test. Inferring HOW other entities
accomplish their performance is ordinary scientific inference. We're in
no rational position to prejudge this profound and substantive issue on
the basis of the appearance of a lump of grey jelly to our untutored but
superstitious minds.

> [W]e DO have some idea about the functional basis for mind, namely
> that it depends on the brain (at least more than on the pancreas, say).
> This is not to contend that there might not be other bases, but for
> now ALL the minds we know of are brain-based, and it's just not
> dazzlingly clear whether this is an incidental fact or somewhat
> more deeply entrenched.

The question isn't whether the fact is incidental, but what its
relevant functional basis is. In other words, what is it about the
brain that's relevant and what is incidental? We need the causal basis
for the correlation, and that calls for a hefty piece of creative
scientific inference (probably in theoretical bio-engineering). The
pancreas is no problem, because it can't generate the brain's
performance capacities. But it is simply begging the question to say
that brain-likeness is an EXTRA relevant source of information in
turing-testing robots, when we have no idea what's relevantly brain-like.

People were sure (as sure as they'll ever be) that other people had
minds long before they ever discovered they had brains. I myself believed
the brain was just a figure of speech for the first dozen or so years of
my life. Perhaps there are people who don't learn or believe the news
throughout their entire lifetimes. Do you think these people KNOW any
less than we do about what does or doesn't have a mind? Besides, how
many people do you think could really pick out a brain from a pancreas
anyway? And even those who can have absolutely no idea what it is
about the brain that makes it conscious; and whether a cow's brain or
a horse-shoe crab's has it; or whether any other device, artificial or
natural, has it or lacks it, or why. In the end everyone must revert to
the fact that a brain is as a brain does.

> Why is consciousness a red herring just because it adds a level
> of uncertainty?

Perhaps I should have said indeterminacy. If my arguments for
performance-indiscernibility (the turing test) as our only objective
basis for inferring mind are correct, then there is a level of
underdetermination here that is in no way comparable to that of, say,
the unobservable theoretical entities of physics (say, quarks, or, to
be more trendy, perhaps strings). Ordinary underdetermination goes
like this: How do I know that your theory's right about the existence
and presence of strings? Because WITH them the theory succeeds in
accounting for all the objective data (let's pretend), and without
them it does not. Strings are not "forced" by the data, and other
rival theories may be possible that work without them. But until these
rivals are put forward, normal science says strings are "real" (modulo
ordinary underdetermination).

Now try to run that through for consciousness: How do I know that your
theory's right about the existence and presence of consciousness (i.e.,
that your model has a mind)? "Because its performance is
turing-indistinguishable from that of creatures that have minds." Is
your theory dualistic? Does it give consciousness an independent,
nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit
the objective data just as well (indeed, turing-indistinguishably)
without consciousness? "Well..."

That's indeterminacy, or radical underdetermination, or what have you.
And that's why consciousness is a methodological red herring.

> Even though any correlations will ultimately be grounded on one side
> by introspection reports, it does not follow that we will never know,
> with reasonable assurance, which aspects of the brain are necessary for
> consciousness and which are incidental...Now at some level of difficulty
> and abstraction, you can always engineer anything with anything... But
> the "multi-realizability" argument has force only if its obvious
> (which it ain't) that the structure of the brain at a fairly high
> level (eg neuron networks, rather than molecules), high enough to be
> duplicated by electronics, is what's important for consciousness.

We'll certainly learn more about the correlation between brain
function and consciousness, and even about the causal (functional)
basis of the correlation. But the correlation will really be between
function and performance capacity, and the rest will remain the intuitive
inference or leap of faith it always was. And since ascertaining what
is relevant about brain function and what is incidental cannot depend
simply on its BEING brain function, but must instead depend, as usual, on
the performance criterion, we're back where we started. (What do you
think is the basis for our confidence in introspective reports? And
what are you going to say about robots' introspective reports...?)

I don't know what you mean, by the way, about always being able to
"engineer anything with anything at some level of abstraction." Can
anyone engineer something to pass the robotic version of the Total
Turing Test right now? And what's that "level of abstraction" stuff?
Robots have to do their thing in the real world. And if my
groundedness arguments are valid, that ain't all done with symbols
(plus add-on peripheral modules).

Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

Date: Sun, 26 Oct 86 11:11:08 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: For posting on mod.ai - 2nd of 4 (reply to Kalish)


In mod.ai, Message-ID: <861016-071607-4573@Xerox>,
"charles_kalish.EdServices"@XEROX.COM writes:

> About Stevan Harnad's two kinds of Turing tests [linguistic
> vs. robotic]: I can't really see what difference the I/O methods
> of your system makes. It seems that the relevant issue is what
> kind of representation of the world it has.

I agree that what's at issue is what kind of representation of the
world the system has. But you are prejudging "representation" to mean
only symbolic representation, whereas the burden of the papers in
question is to show that symbolic representations are "ungrounded" and
must be grounded in nonsymbolic processes (nonmodularly -- i.e., NOT
by merely tacking on autonomous peripherals).

> While I agree that, to really understand, the system would need some
> non-purely conventional representation (not semantic if "semantic"
> means "not operable on in a formal way" as I believe [given the brain
> is a physical system] all mental processes are formal then "semantic"
> just means governed by a process we don't understand yet), giving and
> getting through certain kinds of I/O doesn't make much difference.

"Non-purely conventional representation"? Sounds mysterious. I've
tried to make a concrete proposal as to just what that hybrid
representation should be like.

"All mental processes are formal"? Sounds like prejudging the issue again.
It may help to be explicit about what one means by formal/symbolic:
Symbolic processing is the manipulation of (arbitrary) physical tokens
in virtue of their shape on the basis of formal rules. This is also
called syntactic processing. The formal goings-on are also
"semantically interpretable" -- they have meanings; they are connected
to objects in the outside world that they are about. The Searle
problem is that so far the only devices that do semantic
interpretations intrinsically are ourselves. My proposal is that
grounding the representations nonmodularly in the I/O connection may provide
the requisite intrinsic semantics. This may be the "process we don't
understand yet."
But it means giving up the idea that "all mental
processes are formal"
(which in any case does not follow, at least on
the present definition of "formal," from the fact that "the brain is a
physical system"
).
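
For concreteness, here is a minimal illustration of purely syntactic
token manipulation; the tokens and rewrite rules are invented. Nothing
in the computation itself connects the tokens to the things they can be
read as being about, which is just the grounding problem at issue.

# Rewrite rules that apply to tokens solely in virtue of their "shape"
# (the strings themselves), never in virtue of what they mean.
RULES = {
    ("STRIPED", "HORSE"): "ZEBRA",
    ("BIG", "ZEBRA"): "BIG-ZEBRA",
}

def rewrite(tokens):
    # Repeatedly replace the first adjacent pair that matches a rule.
    tokens = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                tokens[i:i + 2] = [RULES[pair]]
                changed = True
                break
    return tokens

print(rewrite(["BIG", "STRIPED", "HORSE"]))   # ['BIG-ZEBRA']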

> Two for instances: SHRDLU operated on a simulated blocks world. The
> modifications to make it operate on real blocks would have been
> peripheral and not have affected the understanding of the system.

This is a variant of the "Triviality of Transduction (& A/D, & D/A,
and Effectors)" Argument (TT) that I've responded to in another
iteration. In brief, it's toy problems like SHRDLU that are trivial.
The complete translatability of internal symbolic descriptions into
the objects they stand for (and the consequent partitioning of
the substantive symbolic module and the trivial nonsymbolic
peripherals) may simply break down, as I predict, for life-size
problems approaching the power to pass the Total Turing Test.

To put it another way: There is a conjecture implicit in the solutions
to current toy/microworld problems, namely, that something along
essentially the same lines will suitably generalize to the
grown-up/macroworld problem. What I'm saying amounts to a denial of
that conjecture, with reasons. It is not a reply to me to simply
restate the conjecture.

> Also, all systems take analog input and give analog output. Most receive
> finger pressure on keys and return directed streams of ink or electrons.
> It may be that a robot would need more "immediate" (as opposed to
> conventional) representations, but it's neither necessary nor sufficient
> to be a robot to have those representations.

The problem isn't marrying symbolic systems to any old I/O. I claim
that minds are "dedicated" systems of a particular kind: The kind
capable of passing the Total Turing Test. That's the only necessity and
sufficiency in question.

And again, the mysterious word "immediate" doesn't help. I've tried to
make a specific proposal, and I've accepted the consequences, namely, that it's
just not going to be a "conventional" marriage at all, between a (substantive)
symbolic module and a (trivial) nonsymbolic module, but rather a case of
miscegenation (or a sex-change operation, or some other suitably mixed
metaphor). The resulting representational system will be grounded "bottom-up"
in nonsymbolic function (and will, I hope, display the characteristic
"hybrid vigor" that our current pure-bred symbolic and nonsymbolic processes
lack), as I've proposed (nonmetaphorically) in the papers under discussion.

Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771

------------------------------

End of AIList Digest
********************
