AIList Digest             Monday, 9 Feb 1987       Volume 5 : Issue 37 

Today's Topics:
Philosophy - Consciousness

----------------------------------------------------------------------

Date: 9 Feb 87 06:14:48 GMT
From: well!wcalvin@lll-lcc.arpa (William Calvin)
Subject: Re: More on Minsky on Mind(s)


In following the replies to Minsky's excerpts from SOCIETY OF MIND, I
am struck by all the attempts to use slippery word-logic. If that's all
one has to use, then one suffers with word-logic until something better
comes along. But there are some mechanistic concepts from both
neurobiology and evolutionary biology which I find quite helpful in
thinking about consciousness -- or at least one major aspect of it, namely
what the writer Peter Brooks described in READING FOR THE PLOT (1985) as
follows:

"Our lives are ceaselessly intertwined with narrative, with the
stories that we tell and hear told, those we dream or imagine or would
like to tell, all of which are reworked in that story of our own lives
that we narrate to ourselves in an episodic, sometimes semiconscious,
but virtually uninterrupted monologue. We live immersed in narrative,
recounting and reassessing the meaning of our past actions,
anticipating the outcome of our future projects, situating ourselves
at the intersection of several stories not yet completed."

Note the emphasis on both past and future, rather than on the
perceiving-the-present and recalling-the-recent-past of, e.g., Minsky:

> although people usually assume that consciousness is knowing
> what is happening in our minds, right at the
> present time, consciousness never is really concerned with the
> present, but with how we think about the records of our recent
> thoughts... how thinking about our short term memories changes them!

But simulation is more the issue, e.g., E.O. Wilson in ON HUMAN NATURE
(1978):
"Since the mind recreates reality from abstractions of sense
impressions, it can equally well simulate reality by recall and
fantasy. The brain invents stories and runs imagined and remembered
events back and forth through time."

Rehearsing movements may be the key to appreciating the brain mechanisms,
if I may quote myself (THE RIVER THAT FLOWS UPHILL: A JOURNEY FROM THE BIG
BANG TO THE BIG BRAIN, 1986):

"We have an ability to run through a motion with our muscles detached
from the circuit, then run through it again for real, the muscles
actually carrying out the commands. We can let our simulation run
through the past and future, trying different scenarios and judging
which is most advantageous -- it allows us to respond in advance to
probable future environments, to imagine an accidental rockfall
loosened by a climber above us and to therefore stay out of his fall
line."

How we acquired this foresight, though, is a bit of a mystery. Never
mind for a moment all those "surely it's useful" arguments which, using
compound interest reasoning, can justify anything (given enough
evolutionary time for compounding). As Jacob Bronowski noted in THE
ORIGINS OF KNOWLEDGE AND IMAGINATION (1967), foresight hasn't been
widespread:

"[Man's] unique ability to imagine, to make plans... are generally
included in the catchall phrase "free will." What we really mean by
free will, of course, is the visualizing of alternatives and making a
choice between them. In my view, which not everyone shares, the
central problem of human consciousness depends on this ability to
imagine..... Foresight is so obviously of great evolutionary
advantage that one would say, `Why haven't all animals used it and
come up with it?' But the fact is that obviously it is a very strange
accident. And I guess as human beings we must all pray that it will
not strike any other species."

So if other animals have not evolved very much of our fussing-about-the-
future consciousness via its usefulness, what other avenues are there for
evolution? A major one, noted by Darwin himself but forgotten by almost
everyone else, is conversion ("functional change in anatomical
continuity"), new functions from old structures. Thus one looks at brain
circuitry for some aspects of the problem -- such as planning movements --
and sees if a secondary use can be made of it to yield other aspects of
consciousness -- such as spinning scenarios about past and future.

And how do we generate a detailed PLAN A and PLAN B, and then compare
them? First we recognize that detailed plans are rarely needed: many
elaborate movements can get along fine on just a general goal and feedback
corrections, as when I pick up my cup of coffee and move it to my lips.
But feedback has a loop time (nerve conduction time plus decision-making
often adds up to several hundred milliseconds of reaction time). This
means the feedback arrives too late to do any good in the case of certain
rapid movements (saccadic eye flicks, hammering, throwing, swinging a golf
club). Animals who utilize such "ballistic movements" (as we call them in
motor systems neurophysiology) simply have to evolve a serial command
buffer: plan at leisure (as when we "get set" to throw) but then pump out
that whole detailed sequence of muscle commands without feedback. And get
it right the first time. Since it goes out on a series of channels (all
those muscles of arm and hand), it is something like planning a whole
fireworks display finale (carefully coordinated ignitions from a series of
launch platforms with different inherent delays, etc.).
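
To make the contrast concrete, here is a toy sketch in Python (every
constant and name is invented for illustration, and the dynamics are a
cartoon of the idea, not motor physiology) of feedback-corrected versus
ballistic, buffer-driven movement:

    def feedback_control(target, steps=20, lag=3):
        # Closed-loop movement: correct toward the target each tick, but
        # act on an error signal that is 'lag' ticks stale (standing in
        # for the several-hundred-millisecond reaction time).
        position, history = 0.0, []
        for t in range(steps):
            sensed = history[t - lag] if t >= lag else 0.0
            position += 0.3 * (target - sensed)   # correction arrives late
            history.append(position)
        return position

    def ballistic_control(target, steps=20):
        # Open-loop movement: fill a serial command buffer at leisure,
        # then pump out the whole sequence with no corrections -- it has
        # to be right the first time.
        buffer = [target / steps] * steps          # the planned sequence
        position = 0.0
        for command in buffer:                     # emitted without feedback
            position += command
        return position

    # The stale error signal makes the closed loop overshoot and hunt
    # around the target; the ballistic plan lands on it, but only if the
    # plan was right to begin with.
    print(feedback_control(1.0), ballistic_control(1.0))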

But once a species has such a serial command buffer, it may be useful
for all sorts of things besides the actions which were originally under
natural selection during evolution (throwing for hunting is my favorite
shaper-upper -- see J. Theor. Biol. 104:121-135, 1983 -- but word-order-coded
language is conceivably another way of selecting for a serial command
buffer). Besides rehearsing slow movements better with the new-fangled
ballistic movement sequencer, perhaps one could also string together other
concepts-images-schemata with the same neural machinery: spin a scenario?

The other contribution from evolutionary biology is the notion that
one can randomly generate a whole family of such strings and then select
amongst them (imagine a railroad marshalling yard, a whole series of
possible trains being randomly assembled). Each train is graded against
memory for reasonableness -- Does it have an engine at one end and a
caboose at the other? -- before one is let loose on the main line. "Best"
is surely a value judgment determined by memories of the fate of similar
sequences in the past, and one presumes a series of selection steps that
shape up candidates into increasingly more realistic sequences, just as
many generations of evolution have shaped up increasingly more
sophisticated species. To quote an abstract of mine called "Designing
Darwin Machines":

This selection of stochastic sequences is more
analogous to the ways of Darwinian evolutionary biology
than to von Neumann machines. One might call it a
Darwin machine instead, but operating on a time scale
of milliseconds rather than millennia, using innocuous
virtual environments rather than noxious real-time
ones.
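
A toy sketch of such a stochastic sequencer in the same spirit (Python;
ALPHABET, REMEMBERED, and the single-sequence "memory" are invented
caricatures of the idea, not a model of any real circuitry):

    import random

    ALPHABET = "ETAC"    # stand-ins for the commands/schemata being strung
    REMEMBERED = "CAT"   # cartoon of memories of similar sequences' fates

    def grade(candidate):
        # Grade a candidate train against memory -- engine at one end,
        # caboose at the other? Here, simply positional matches.
        return sum(a == b for a, b in zip(candidate, REMEMBERED))

    def mutate(candidate):
        # Randomly swap out one car of the train.
        cars = list(candidate)
        i = random.randrange(len(cars))
        cars[i] = random.choice(ALPHABET)
        return "".join(cars)

    def darwin_machine(rounds=50, population=20):
        # Randomly assemble trains, grade them, keep the best, and let
        # variants of the winner compete in the next round -- shaping up
        # candidates on a time scale of milliseconds, not millennia.
        best = "".join(random.choice(ALPHABET) for _ in range(len(REMEMBERED)))
        for _ in range(rounds):
            candidates = [mutate(best) for _ in range(population)] + [best]
            best = max(candidates, key=grade)
        return best

    print(darwin_machine())   # shaped up toward the remembered sequence

The marshalling yard is the candidates list; letting one train loose on
the main line is the max() selection, repeated round after round.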

Is this what Darwin's "bulldog," Thomas Henry Huxley, would have
agreed was the "mechanical equivalent of consciousness" he thought
possible almost a century ago? It would certainly be fitting.

We do not yet know how much of our mental life such stochastic
sequencers might explain. But I tend to think that this approach using
mechanical analogies from motor systems neurophysiology and evolutionary
biology might have something to recommend it, in contrast to word-logic
attempts to describe consciousness. At least it provides a different place
to start, hopefully less slippery than variants on the little person inside
the head with all their infinite regress.

William H. Calvin
Biology Program NJ-15
University of Washington
Seattle WA 98195 USA
206/328-1192
USENET: wcalvin@well.uucp

------------------------------

Date: 9 Feb 87 08:41:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: another stab at "what are we arguing about"


> > Me: "Why I am not a Methodological Epiphenomenalist"
>
> Harnad: This is an ironic twist on Russell's sceptical book about religious
> beliefs! I'm the one who should be writing "Why I'm Not a Methodological
> Mentalist."

Yeah, but I said it first...

OK, seriously folks, I think I see this discussion starting to converge on
a central point of disagreement (don't look so skeptical). Harnad,
Reed, Taylor, and I have all mentioned this "on the side" but I think it
may be the major sticking point between Harnad and the latter three.

> Reed: ...However, I don't buy the assumption that two must *observe the same
> instance of a phenomenon* in order to perform an *observer-independent
> measurement of the same (generic) phenomenon*. The two physicists can
> agree that they are studying the same generic phenomenon because they
> know they are doing similar things to similar equipment, and getting
> similar results. But there is nothing to prevent two psychologists from
> doing similar (mental) things to similar (mental) equipment and getting
> similar results, even if neither engages in any overt behavior apart
> from reporting the results of his measurements to the other....
>
> What is objectively different about the human case is that not only is
> the other human doing similar (mental) things, he or she is doing those
> things to similar (human mind implemented on a human brain) equipment.
> If we obtain similar results, Occam's razor suggests that we explain
> them similarly: if my results come from measurement of subjectively
> experienced events, it is reasonable for me to suppose that another
> human's similar results come from the same source. But a computer's
> "mental" equipment is (at this point in time) sufficiently dissimilar
> from a human's that the above reasoning would break down at the point
> of "doing similar things to similar equipment with similar results",
> even if the procedures and results somehow did turn out to be identical.



> > Harnad: Everything resembles everything else in an infinite number of
> > ways; the problem is sorting out which of the similarities is relevant.
>
> Taylor: Absolutely. Watanabe's Theorem of the Ugly Duckling applies. The
> distinctions (and similarities) we deem important are no more or less
> real than the infinity of ones that we ignore. Nevertheless, we DO see
> some things as more alike than other things, because we see some similarities
> (and some differences) as more important than others.
>
> In the matter of consciousness, I KNOW (no counterargument possible) that
> I am conscious, Ken Laws knows he is conscious, Steve Harnad knows he is
> conscious. I don't know this of Ken or Steve, but their output on a
> computer terminal is enough like mine for me to presume by that similarity
> that they are human. By Occam's razor, in the absence of evidence to the
> contrary, I am forced to believe that most humans work the way I do.
> Therefore
> it is simpler to presume that Ken and Steve experience consciousness than
> that they work according to one set of natural laws, and I, alone of all
> the world, conform to another.


The Big Question: Is your brain more similar to mine than either is to any
plausible silicon-based device?

I (and Reed and Taylor?) have been pushing the "brain-as-criterion"
view, based on a very simple line of reasoning:

1. my brain causes my consciousness.
2. your brain is a lot like mine.
3. therefore, by "same cause, same effect" your brain probably
causes consciousness in you.

(BTW, the above does NOT deny the relevance of similar performance in
confirming 3.)

Now, when I say simple things like this, Harnad says complicated things like:
Re 1: How do you KNOW your brain causes your consciousness? How can you have
causal knowledge without a good theory of mind-brain interaction?
Re 2: How do you KNOW your brain is similar to others'? Similar wrt
what features? How do you know these are the relevant features?

For now (and with some luck, for ever) I am going to avoid a
straightforward philosophical reply. I think there may be some
reasonably satisfactory (but very long and philosophical) answers to
these questions, but I maintain the questions are really not relevant.

We are dealing with the mind-body problem. That's enough of a philosophical
problem to keep us busy. I have noticed (although I can't explain why)
that when you start discussing the mind-body problem, people (even me, once
in a while) start to use it as a hook on which to hang every other
known philosophical problem:

1. Well, how do we know anything at all, much less our neighbors' mental
states? (skepticism and epistemology).

2. What does it mean to say that A causes B, and what is the nature of
causal knowledge? (metaphysics and epistemology).

3. Is it more moral to kill living thing X than a robot? (ethics).

All of these are perfectly legitimate philosophical questions, but
they are general problems, NOT peculiar to the mind-body problem.
When addressing the mind-body problem, we should deal with its
peculiar features (of which there are enough), and not get mired in
more general problems * unless they are truly in doubt and thus their
solution truly necessary for M-B purposes. *

I do not believe that this is so of the issues Harnad raises. I
believe people can a) have causal knowledge, both of instances and
types of events, without any articulated "deep" theory of the
mechanics going on behind the scenes (indeed the deep knowledge
comes later as an attempt to explain the already observed causal
interaction), and b) spot relevant similarities without being
able to articulate them.

A member of an Amazon tribe could find out, truly know, that light
switches cause lights to come on, with a few minutes of
experimentation. It is no objection to his knowledge to say that he
has no causal theory within which to embed this knowledge, or to
question his knowledge of the relevance of the similarities among
various light switches, even if he is hard-pressed to say anything
beyond "they look alike." It is a commonplace example that many
people can distinguish between canines and felines without being
able to say why. I do not assert, I am quick to add, that
these rough-and-ready processes are infallible -- yes, yes, are whales
more like cows than fish, how should I know?

But to raise the specter of certainty is again a side-issue.
Do we all not agree that the Indian's knowledge of lights and light
switches is truly knowledge, however unsophisticated?

Now, S. Harnad, upon your solemn oath, do you have any serious practical
doubt that, in fact,

1. you have a brain?
2. that it is the primary cause of your consciousness?
3. that other people have brains?
4. that these brains are similar to your own (and if not, why do you
and everyone else use the same word to refer to them?), at least
more so than any other object with which you are familiar?

Now if you do know these utterly ordinary assertions to be true,
* even if you can't produce a high-quality philosophical defense for
them, (which inability, I argue, does not cast serious doubt on them,
or on the status of your belief in them as knowledge) * then, what
is wrong with the simple inference that others' possession of a brain
is a good reason (not necessarily the only reason) to believe that
they are conscious?

John Cugini <Cugini@icst-ecf>

------------------------------

Date: 9 Feb 87 08:59:00 EST
From: "CUGINI, JOHN" <cugini@icst-ecf>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf>
Subject: and another thing..


Yeah, while I'm at it, how do you, Harnad, know that two performances
by the two entities in question (a human and a robot) are relevantly
similar? What is it precisely about the performances you intend to
measure? How do you know that these are the important aspects?

Refresh my memory if I'm wrong, but as I recall, the TTT (Total
Turing Test) was a kind of gestalt
you'll-know-intelligent-behavior-when-you-see-it test.
How is this different from looking at two brains and saying, yeah
they look like the same kind of thing to me?

John Cugini <Cugini@icst-ecf>

------------------------------

End of AIList Digest
********************
