AIList Digest           Thursday, 20 Nov 1986     Volume 4 : Issue 263 

Today's Topics:
Philosophy - D/A Distinction and Symbols &
Machine Intelligence/Consciousness &
Philosophy of Mind Stuff

----------------------------------------------------------------------

Date: 12 Nov 86 18:01:31 GMT
From: trwrb!aero!marken@ucbvax.Berkeley.EDU (Richard Marken)
Subject: D/A Distinction and Symbols

In article <3490001@hpfcph.HP.COM> Bob Myers makes an eloquent debut in the
D/A distinction debate with the following remarks:

>The difference between "analog" and "digital" is nothing more than the
>difference between a table of numbers and the corresponding graph; in a
>digital representation, we assign a finite-precision number to indicate the
>value of something (usually a signal) at various points in time (or frequency,
>or space, or whatever). An "analog" representation is just that - we choose
>to view some value (voltage, current, water pressure, anything) as hopefully
>being a faithful copy of something else. An excellent example is a
>microphone, which converts a varying pressure into an "analogous" signal -
>a varying voltage. This distinction has nothing to do with the accuracy of
>the representation obtained, the technology used to obtain, or any of a host
>of other items that come to mind when we think of the terms "analog" and
>"digital".

We haven't heard for some time from the usually prolific Dr. Harnad,
who started the debate with a request for definitions of the A/D
distinction. It seems to me that the topic was broached in the first
place because Harnad had some notion that "analog" or "non-symbolic"
robots are, in some way, a better subject for a test of machine
intelligence (a la Turing) than the "symbol manipulator" envisioned by
Turing himself.

Whether this was where Harnad was going or not, I would like to make
one point. It seems to me, based on the cogent A/D distinction made by
Myers, that both analog and digital representations are "symbolic". In
both cases, some variable (number, signal level) is used to represent
another. The relationship between the variables is _arbitrary_ in,
potentially, two ways: 1) the nature of the analog signal or number
used to represent the other variable is arbitrary -- other types of
signals or other number values could have also been used. Using
electricity to represent sound pressure level is arbitrary (though,
possibly, a good engineering decision) -- sound pressure level could
have been represented by the height of a needle (hey, it is) or by
water pressure or whatever.

2) the values of the analog (or digital) variable used to represent
the values of another variable are, in principle, also arbitrary.
Randomly different voltages could be used to represent different sound
pressure levels. This would be difficult (and possibly ridiculous) to
try to implement, but it could be done (for example, where changes over
time in the variable being represented are very slow).

Maybe the best way to put this is as follows: in digital or analog
representation we have some variable, y, that represents some other
variable, x, so that y = f(x). Regardless of the analog or digital
characteristics of x and y, y "symbolizes" x because 1) another
variable, y', could be used to represent x (so y is arbitrary) and
2) y could be defined by a different function, f', so f is arbitrary.

I think 1) and 2) capture what is meant when it is said that symbols
are arbitrary representations of events. Symbols are not completely
arbitrary. Once y and f are selected you've got to stick with them (in
the context of your application) or the symbol system is useless.
Thus, the sounds that we use to represent events (f), and the fact that
we use sounds (y), are arbitrary properties of our language symbol
system. But now that we've settled on them we've got to stick with
them for the system to be useful (for communication). We could (like
Humpty Dumpty) keep changing the relationship between words and events,
but this kind of arbitrariness would make communication impossible.
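
To make 1), 2), and the consistency requirement concrete, here is a
minimal sketch (again my own illustration, in modern Python, rather
than anything from Marken's posting): the same quantity x can be
carried by a different variable under a different mapping, and decoding
works only as long as the chosen mapping is held fixed.

def f(x):
    # One arbitrary but fixed mapping: represent pressure x as a voltage.
    return 0.5 * x                 # volts

def f_prime(x):
    # A different, equally arbitrary mapping: x as a needle height.
    return 3.0 * x + 10.0          # millimetres

x = 2.0                            # the represented quantity
y = f(x)                           # one "symbol" for x
y_prime = f_prime(x)               # another "symbol" for the same x

# Either representation can be decoded, but only by sticking with the
# mapping that was agreed on; applying the wrong inverse gives nonsense.
assert abs(y / 0.5 - x) < 1e-9
assert abs((y_prime - 10.0) / 3.0 - x) < 1e-9
assert abs(y_prime / 0.5 - x) > 1.0    # Humpty-Dumpty decoding fails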

Conclusion: I don't believe that the A/D distinction is a distinction
between non-symbol and symbol systems. If there is a difference between
robots (which deal with "real world" variables) and Turing machines
(which deal with artificial symbol systems), I don't believe it can turn
on the fact that one deals with symbols and the other doesn't. They
both deal with symbols. So what is the difference? I think there is a
difference between robots (of certain types) and Turing machines -- and
a profound one at that. But that's another posting.

--
Disclaimer -- The opinions expressed are my own. My employer, mother,
wife, and teachers should not be held responsible -- though some tried
valiantly to help.

Richard Marken Aerospace Corp.
(213) 336-6214 Systems Simulation and Analysis
P.O. Box 92957
M1/076
Los Angeles, CA 90009-2957

marken@aero.ARPA

------------------------------

Date: Fri, 7 Nov 86 15:31:31 +0100
From: mcvax!ukc!rjf@seismo.CSS.GOV
Subject: Re: machine intelligence/consciousness


There has been some interesting discussion (a little while back now)
on the possibility of 'truly' intelligent machines; in particular
the name of Nagel has been mentioned, and his paper 'What is it like to be
a bat?'.

This paper is not, however, strictly relevant to a discussion of machine
intelligence, because what Nagel is concerned with is not intelligence, but
consciousness. That these are not the same may be realised upon a little
contemplation. One may be most intensely conscious while doing little or no
cogitation. To be intelligent - or, rather, to use intelligence - it seems
necessary to be conscious, but the converse does not hold - that to be
conscious it is necessary to be intelligent. I would suggest that the former
relationship is not a necessary one either - it just so happens that we are
both conscious and (usually) intelligent.

Animals probably are conscious without being intelligent. Machines may
perhaps be intelligent without being conscious. If these are defined
separately, the problem of the intelligent machine becomes relatively trivial
(though that may seem too good to be true): an intelligent machine is capable
of doing that which would require intelligence in a person, eg high level
chess. On the other hand, it becomes obvious that what really exercises the
philosophers and would-be philosophers (I include myself) is machine
consciousness. As for that:

Another article in the same collection by Nagel (Mortal Questions, 1978)
takes his ideas on consciousness somewhat further. A summary of the
arguments developed in 'Subjective and Objective' could not possibly do them
justice (anyone interested is heartily recommended to obtain a copy), so only
the conclusions will be mentioned here. Briefly, Nagel views subjectivity as
irreducible to objectivity, indeed the latter derives from the former, being
a corrected and generalised version of it. A maximally objective view of the
world must admit the reality of subjectivity, in the minimal sense that
individuals do hold differing views, and there is no better - or worse -
judge of which view is more truly objective than another individual.

This view does not to any extent denigrate the practicality of objective
methods (the hypothesis of objective reality is proven by the success of the
scientific method), but nor is it possible to deny the necessity of
subjectivity in some situations, notably those directly involving other
people. It is surely safe to say that no new objective method will ever
substitute for human relationships. And the reason that subjectivity works
in this context is because of what Nagel terms 'intersubjectivity' -
individuals identifying with each other - using their imaginations creatively
and for the most part accurately to put themselves in another person's shoes.

So what, really, is consciousness? According to Nagel, a thing is conscious
if and only if there is something it is like to be that thing. In other
words, when it may be the subject (not the object!) of intersubjectivity.
This accords with Minsky (via Col. Sicherman): 'consciousness is an illusion
to itself but a genuine and observable phenomenon to an outside observer...'
Consciousness is not self-consciousness, not consciousness of being conscious,
as some have thought, but is that with which others can identify. This opens the way to
self-awareness through a hall of mirrors effect - I identify with you
identifying with me... And in the negative mode - I am self-conscious when I
feel that someone is watching me.

It may perhaps be supposed that the concept of consciousness evolved as part
of a social adaptation - that those individuals who were more socially
integrated were so at least in part because they identified more readily,
more intelligently and more imaginatively with others, and that this was a
successful strategy for survival. To identify with others would thus be an
innate behavioural trait.

So consciousness is at a high level (the top?) in software, and is, moreover,
not supported by a single unit of hardware, but by a social network. In its
development, at least. I, or anyone else, might suppose that I am still
conscious when alone, but not without (the supposer, whether myself or
another) having become conscious in a social context. When I suppose myself
to be conscious, I am imagining myself outside myself - taking the point of
view of an (hypothetical) other person. An individual - man or machine -
which has never communicated through intersubjectivity might, in a sense, be
conscious, but neither the individual nor anyone else could ever know it. A
community of machines sufficiently sophisticated that they identify with each
other in the same way as we do may some day develop, but how could we decide
whether they were really conscious or not? They might know it, but we never
could - and that is neither pessimism nor prejudice, but a matter of
principle.

Subjectively, we all know that consciousness is real. Objectively, we have
no reason to believe in it. Because of the relationship between subjectivity
and objectivity, that position can never be improved on. Pragmatism demands
a compromise between the two extremes, and that is what we already do, every
day, the proportion of each component varying from one context to another.
But the high-flown theoretical issue of whether a machine can ever be
conscious allows no mere pragmatism. All we can say is that we do not know,
and, if we follow Nagel, that we cannot know - because the question is
meaningless.

(Technically, the concept of two different but equally valid ways of seeing,
in this case subjectively and objectively, is a double aspect theory; the
dichotomy lies not in the nature of reality, but in our perception. Previous
double aspect theories, interestingly consistent with this one, have been
propounded by Spinoza - regarding our perception of our place within nature -
and Strawson - on the concept of a person. I do not have the full references
to hand.)

Any useful concepts among the foregoing probably derive from Nagel, any
misleading ones from myself; none from my employers.

Rob Faichney

------------------------------

Date: 18 Nov 86 08:30:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: philosophy of mind stuff (get it?)


Can't resist a few more go-rounds with S. Harnad. Lest the size of these
messages increase exponentially, I'll try to avoid re-hashing old
issues and responding to side-issues...

> Harnad:
> I agree that scientific inference is grounded in observed correlations.
> But the primary correlation in this special case is, I am arguing, between
> mental states and performance. That's what both our inferences and our
> intuitions are grounded in. The brain correlate is an additional cue, but only
> inasmuch as it agrees with performance.

> ...in ambiguous
> cases, behavior was and is the only rational arbiter. Consider, for
> example, which way you'd go if (1) an alien body persisted in behaving like a
> clock-like automaton in every respect -- no affect, no social interaction,
> just rote repetition -- but it DID have something that was indistinguishable
> (on the minute and superficial information we have) from a biological-like
> nervous system), versus (2) if a life-long close friend of yours had
> to undergo his first operation, and when they opened him up, he turned
> out to be all transistors on the inside. I don't set much store by
> this hypothetical sci-fi stuff, especially because it's not clear
> whether the "possibilities" we are contemplating are indeed possible. But
> the exercise does remind us that, after all, performance capacity is
> our primary criterion, both logically and intuitively, and its
> black-box correlates have whatever predictive power they may have
> only as a secondary, derivative matter. They depend for their
> validation on the behavioral criterion, and in cases of conflict,
> behavior continues to be the final arbiter.

I think I may have been tacitly conceding the point above, which I
now wish to un-concede. Roughly speaking, I think my (everyone's)
epistemological position is as follows: I know I have a mind. In
order to determine if X has a mind, I've got to look for analogous
external things about X which I know are causally connected with mind
in *my own* case. I naively know (and *how* do I know this??) that large
parts of my performance are an effect of my mind. I scientifically
know that my mind depends on my brain. I can know this latter
correlation even *without* performance correlates, eg, when the dentist
puts me under, I can directly experience my own loss of mind which
results from the loss of whatever brain activity is involved. (I hope
it goes without saying that all this knowledge is just regular old
reliable knowledge, but not necessarily certain - ie I am not
trying to respond to radical skepticism about our everyday and
scientific knowledge, the invocation of deceptive dentists, etc.)

I'll assume that "mind" means, roughly, "conscious intelligence".
Also, assume throughout of course that "brain" is short-hand for
"brain activity known (through usual neuro-science techniques) to be
necessary for consciousness".

Now then, armed with the reasonably reliable knowledge that in my own
case, my brain is a cause of my mind, and my mind is a cause of my
performance, I can try to draw appropriate conclusions about others.
Let's take 4 cases:

1. X1 has brains and performance - ie another normal human. Certainly
I have good reason to assume X1 has a mind (else why should similar
causes and effects be mediated by something so different from that
which mediates in my own case?).

2. X2 has neither brains nor performance - and no mind.

3. X3 has brains, but little/no performance - eg a case of severe
retardation. Well, there doesn't seem much reason to believe that
X3 has intelligence, and so X3 is disqualified from having a mind, given
our definition. However, it is still reasonable to believe that
X3 might have consciousness, eg can feel pain, see colors, etc.

4. X4 has normal human cognitive performance, but no brains, eg the
ultimate AI system. Well, no doubt X4 has intelligence, but the issue
is whether X4 has consciousness. This seems far from obvious to me,
since I know in my own case that brain causes consciousness causes
performance. But I already know, in the case of X4, that the causal
chain starts out at a different place (non-brain), even if it ends up
in the same place (intelligent performance). So I can certainly
question (rationally) whether it gets to performance "via
consciousness" or not.

If this seems too contentious, ask yourself: given a choice between
destroying X3 or X4, is it really obvious that the more moral choice
is to destroy X3?

Finally, a gedanken experiment (if ever there was one) - suppose
(a la sci-fi stories) they opened you up and showed you that you
really didn't have a brain after all, that you really did have
electronic circuits - and suppose it transpired that while most
humans had brains, a few, like yourself, had electronics. Now,
never doubting your own consciousness, if you *really* found that
out, would you not then (rationally) be a lot more inclined to
attribute consciousness to electronic entities (after all you know
what it feels like to be one of them) than to brained entities (who
knows what, if anything, it feels like to be one of them?)?
Even given *no* difference in performance between the two sub-types?
This shows that "similarity to one's own internal make-up" is always
going to be a valid criterion for consciousness, independent of
performance.

I make this latter point to show that I am a brain-chauvinist *only
insofar* as I know/believe that I *myself* am a brained entity (and
that my brain is what causes my consciousness). This really
doesn't depend on my own observation of my own performance at all -
I'd still know I had a mind even if I never did any (external) thing
clever.

To summarize: brainedness is a criterion not only via the indirect
path (others who have intelligent performance also have brains, ergo
brains are a secondary correlate for mind) but also via the much more
direct path (which *also* justifies performance as a criterion): I have
a mind, and in my very own case my mind is closely causally connected
with brains (and with performance).

> As to CAUSATION -- well, I'm
> sceptical that anyone will ever provide a completely satisfying account
> of the objective causes of subjective effects. Remember that, except for
> the special case of the mind, all other scientific inferences have
> only had to account for objective/objective correlations (and [or,
> more aptly, via] their subjective/subjective experiential counterparts).
> The case under discussion is the first (and I think only) case of
> objective/subjective correlation and causation. Hence all prior bets,
> generalizations or analogies are off or moot.

I agree that there are some additional epistemological problems, compared
to the usual cases of causation. But these don't seem all that daunting,
absent radical skepticism. We already know which parts of the brain
correlate with visual experience, auditory experience, speech competence,
etc. I hardly wish to understate the difficulty of getting a full
understanding, but I can't see any problem in principle with finding
out as much as we want. What may be mysterious is that at some level,
some constellation of nerve firings may "just" cause visual experience,
(even as electric currents "just" generate magnetic fields.) But we are
always faced with brute-force correlation at the end of any scientific
explanation, so this cannot count against brain-explanatory theory of mind.

> Perhaps I should repeat that I take the context for this discussion to
> be science rather than science fiction, exobiology or futurology. The problem
> we are presumably concerned with is that of providing an explanatory
> model of the mind along the lines of, say, physics's explanatory model
> of the universe. Where we will need "cues" and "correlates" is in
> determining whether the devices we build have succeeded in capturing
> the relevant functional properties of minds. Here the (ill-understood)
> properties of brains will, I suggest, be useless "correlates." (In
> fact, I conjecture that theoretical neuroscience will be led by, rather
> than itself leading, theoretical "mind-science" [= cognitive
> science?].) In sci-fi contexts, where we are guessing about aliens'
> minds or those of comatose creatures, having a blob of grey matter in
> the right place may indeed be predictive, but in the cog-sci lab it is
> not.

Well, I plead guilty to diverting the discussion into philosophy, and as
a practical matter, one's attitude in this dispute will hardly affect
one's day-to-day work in the AI lab. One of my purposes is a kind of
pre-emptive strike against a too-grandiose interpretation of the
results of AI work, particularly with regard to claims about
consciousness. Given a behavioral definition of intelligence, there
seems no reason why a machine can't be intelligent. But if "mind"
implies consciousness, it's a different ball game when it comes to claiming
that the machine "has a mind".

My as-yet-unarticulated intuition is that, at least for people, the
grounding-of-symbols problem, to which you are acutely and laudably
sensitive, inherently involves consciousness, ie at least for us,
meaning requires consciousness. And so the problem of shoehorning
"meaning" into a dumb machine at least raises the issue of how
this can be done without making it conscious (or, alternatively,
how to go ahead and make it conscious). Hence my interest in your
program of research.

John Cugini <Cugini@NBS-VMS>

------------------------------

End of AIList Digest
********************
