AIList Digest            Sunday, 22 Feb 1987       Volume 5 : Issue 50 

Today's Topics:
Philosophy - Consciousness & Other Minds

----------------------------------------------------------------------

Date: 17 Feb 87 20:15:29 GMT
From: "Col. G. L. Sicherman"
<colonel%sunybcs%math.waterloo.edu@RELAY.CS.NET>
Subject: Re: artificial minds

In article <8702132202.AA01947@BOEING.COM>, ray@BOEING.COM (Ray Allis) writes:

> ... Homo Sap.'s
> distinguished success among inhabitants of this planet is primarily due
> to our ability to think. ...

Success is relative! Cockroaches are successful too, for quite
different reasons. And our own success is questionable, considering
how many of us starve to death. Try explaining _that_ to a cockroach.

Biologically, our chief advantages over other species are erect
posture and prehensile hands. Abstract thought is only ancillary;
other species lack it mainly because they cannot use it.

> and it is difficult
> for me to imagine a goal more relevant than improving the chances for
> survival by increasing our ability to act intelligently.

Well said. But this is an argument for using computers as tools, and
it is seldom true that tools ought to be designed to resemble the human
components that they extend. Would you use a hammer that looks like
a fist? Or wear a shoe with toes?

Why try to endow a lump of inorganic matter with the soul of a human
being? You don't yet know what your own mind is capable of. Besides,
if you do produce an intelligent computer, it may not like you!
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@ubvms

------------------------------

Date: 19 Feb 87 17:13:11 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: More on the functional irrelevance of the brain to mind-modeling


"CUGINI, JOHN" <cugini@icst-ecf> wrote on mod.ai:

> The Big Question: Is your brain more similar to mine than either
> is to any plausible silicon-based device?

That's not the big question, at least not mine. Mine is "How does the
mind work?" To answer that, you need a functional theory of how the
mind works, you need a way of testing whether the theory works, and
you need a way of deciding whether a device implemented according to
the theory has a mind. That's what I proposed the formal and informal
TTT for: testing and implementing a functional theory of mind.

Cugini keeps focusing on the usefulness of "presence of `brain'"
as evidence for the possession of a mind. But in the absence of a
functional theory of the brain, its superficial appearance hardly
helps in constructing and testing a functional theory of the mind.

Another way of putting it is that I'm concerned with a specific
scientific (bioengineering) problem, not an exobiological one ("Does
this alien have a mind?"), nor a sci-fi one ("Does this fictitious
robot have a mind?"), nor a clinical one ("Does this comatose patient
or anencephalic have a mind?"), nor even the informal, daily
folk-psychological one ("Does this thing I'm interacting with have a
mind?"). I'm only
concerned with functional theories about how the mind works.

> A member of an Amazon tribe could find out, truly know, that light
> switches cause lights to come on, with a few minutes of
> experimentation. It is no objection to his knowledge to say that he
> has no causal theory within which to embed this knowledge, or to
> question his knowledge of the relevance of the similarities among
> various light switches, even if he is hard-pressed to say anything
> beyond "they look alike."

Again, I'm not concerned with informal, practical, folk heuristics but
with functional, scientific theory.

> Now, S. Harnad, upon your solemn oath, do you have any serious
> practical doubt, that, in fact,
> 1. you have a brain?
> 2. that it is the primary cause of your consciousness?
> 3. that other people have brains?
> 4. that these brains are similar to your own

My question is not a "practical" one, but a functional, scientific
one, and none of these correlations among superficial appearances help.

> how do you know that two performances
> by two entities in question (a human and a robot) are relevantly
> similar? What is it precisely about the performances you intend to
> measure? How do you know that these are the important aspects?
> ...as I recall, the TTT was a kind
> of gestalt you'll-know-intelligent-behavior-when-you-see-it test.
> How is this different from looking at two brains and saying, yeah
> they look like the same kind of thing to me?

Making a brain look-alike is a trivial task (they do it in Hollywood
all the time). Making a (TTT-strength) behavioral look-alike is not. My
claim is that a successful construction of the latter is as close as we
can hope to get to a functional understanding of the mind.

There's no "measurement" problem. The data are in. Build a robot that
can detect, discriminate, identify, manipulate and describe objects
and events and can interact linguistically indistinguishably from the
way we do (as ultimately tested informally by laymen) and you'll have
the problem licked.

As to "relevant" similarities: Perhaps the TTT is too exacting. TOTAL
human performance capacity may be more than what's necessary to capture mind
(for example, nonhuman species and retarded humans also have minds).
Let's say the TTT is there to play it safe, to make sure we haven't
left anything relevant out; in any case, there will no doubt be many
subtotal way-stations on the long road to the asymptotic TTT.

The brain's another matter, though. Its structural appearance is
certainly not good enough to go on. And its function is an ambiguous
matter. On the one hand, its behavioral capacities are among its functional
capacities, so behavioral function is a subset of brain function. But,
over and above that we do not know what implementational details are
relevant. The TTT could in principle be beefed up to demand not only
behavioral indistinguishability, but anatomical, physiological and
pharmacological indistinguishability. I'd go for the behavioral
asymptote first though, as the most likely criterion of relevance,
before adding on implementational constraints too -- especially because
those implementational details will play no role in our intuitive
judgments about whether the device in question has a mind like us, any
more than they do now. Nor will they significantly increase the
objective validity of the (frail) TTT criterion itself, since brain
correlates are ultimately validated against behavioral correlates.

My own guess, though, is that our total performance capacity will be
as strong a hardware constraint as is needed to capture all the relevant
functional similarities.

> Just a quick pout here - last December I posted a somewhat detailed
> defense of the "brain-as-criterion" position...
> No one has responded directly to this posting.

I didn't reply because, as I indicated above, you're not addressing the same
question I am (and because our exchanges have become somewhat repetitive).
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: Fri, 20 Feb 87 15:23:32 pst
From: Ray Allis <ray@BOEING.COM>
Subject: Other Minds

Hello? Where'd everyone go? Was it something I said? I have
a couple of things to say, yet. But fear not, this is part 2 of 2,
so you won't have me cluttering up your mail again in the near future.
This is a continuation of my 2/13/87 posting, in which I am proposing
a radical paradigm shift in AI.


[The silence on the Arpanet AIList is due to my saving the
philosophical messages for a weekly batch mailing. This gives
other topics a chance and reduces the annoyance of those who
don't care for these discussions. -- KIL]


Our common sense thought is based on and determined by those things
which are "sensible" (i.e. that we can sense). "The fog comes in on
little cat feet"
[Sandberg]. Ladies and gentlemen of the AI
community, you are not even close! Let me relax the criteria a little,
take this phrase, "a political litmus test". How do you expect a
machine to understand that without experience? Nor can you ever
*specify* enough "knowledge" to allow understanding in any useful
sense. The current computer science approach to intelligence is as
futile as the machine translation projects of the 60's, and for the
same reason; both require understanding on the part of the machine,
and of that there isn't a trace.

Obviously symbolic thinking is significant; look at the success of our
species. There are two world-changing advantages to symbolic thought.
One is the ability to think about the relationships among things and
events without the confusing details of real things and events:
"content-free" or "context-independent" reasoning, leading to
mathematics and logic and giving us a measure of control over our
environment and our destiny. Symbol systems are tools which assist
and enhance human minds, not replacements for those minds. Production
rules are an externalization of knowledge; they are how we explain our
behavior to other people.
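
To make "externalized knowledge" concrete, here is a minimal sketch of
a production rule in Python. The rule content and all names are
invented for illustration; no particular production system is implied.

# A production rule is an explicit IF-THEN pair: knowledge stated in a
# form that can be read aloud and explained to another person.
def flashlight_rule(facts):
    """IF it is dark AND the power is out THEN fetch the flashlight."""
    if facts.get("dark") and facts.get("power_out"):
        facts["action"] = "fetch flashlight"
    return facts

print(flashlight_rule({"dark": True, "power_out": True}))
# {'dark': True, 'power_out': True, 'action': 'fetch flashlight'}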

The other advantage lies in the fundamental difference between
"symbolize" and "represent". Consider how natural language works.
Through training, you come to associate "words" with experiences.
The immediate motive for this accomplishment is communication; when
you can say "wawa" or "no!", the use of language becomes your best
tool for satisfying your desires and needs. But a more subtle and
significant thing happens. The association between any symbol and
that which it symbolizes is arbitrary, and imprecise. Also, in any
human experience, there is *so much* context that it is practically
the case that every experience is associated with every other, even
if somewhat indirectly.

So please imagine a brain, in some instantaneous state of excitation
due to external stimuli. Part of the "context" (or experience) will
be (representations of) symbols previously associated. Now imagine
the internal loop which presents internal events to the brain as if
they were external events, presenting those symbols as if you "saw"
or "heard" them. But, since the association is imprecise, the
experience evoked by those symbols will very likely not be identical
to that which evoked the symbols. A changed pattern of activity in
the nervous system will result, possibly with different associated
symbols, in which case the cycle repeats.
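
A toy sketch of that loop, purely for concreteness: assume (as an
invented illustration, not a claim about the CNS) that experiences are
small vectors and that symbol/experience association is a noisy
nearest-neighbour lookup. The three symbols and their vectors are made
up.

import random

# Invented associative memory: symbol -> prototype "experience" vector.
memory = {
    "fog": [0.9, 0.1, 0.2],
    "cat": [0.8, 0.2, 0.3],
    "sea": [0.1, 0.9, 0.4],
}

def evoke(symbol):
    """Re-present a symbol to the system as if it were perceived;
    the evoked experience is a noisy, imprecise copy of the stored one."""
    return [x + random.gauss(0, 0.2) for x in memory[symbol]]

def associate(experience):
    """Return the symbol whose stored prototype lies closest --
    not necessarily the symbol that evoked this experience."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(memory[s], experience))
    return min(memory, key=dist)

symbol = "fog"
for step in range(5):               # the internal loop
    experience = evoke(symbol)      # symbol presented as if external input
    symbol = associate(experience)  # imprecise association back to a symbol
    print(step, symbol)             # the cycle may drift to other symbols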

The function of all this activity is to "converge" on the "appropriate"
behavior for the organism, which is to say to continue the organism's
existence. There is extreme "parallelism"; immense numbers of events
are occurring simultaneously, and all associations are stimulated
"at once". Also, none of this is "computation" in the traditional
sense; it is the operation of an analog "device", which is the central
nervous system, in its function of producing "appropriate" behavior.

Imagine an experience represented in hundreds of millions of CNS
connections. Another experience, whatever its source (external
sensors, memory, or wholly created), will be represented in the same
(identical) neurons, in point-for-point registration, all
half-billion points at once. Any variation in correspondence will be
immediately conspicuous. The field (composite) is available for the
same contrast enhancement and figure/ground "processing" as visual
(or any other) input.
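
For concreteness, a tiny sketch of point-for-point registration with
invented eight-point patterns; at CNS scale the comparison would run
over hundreds of millions of points in parallel, not eight in a loop.

# Two experiences laid over the same units; nonzero entries in the
# element-wise difference are the immediately conspicuous variations.
experience_a = [1, 0, 1, 1, 0, 1, 0, 0]
experience_b = [1, 0, 1, 0, 0, 1, 1, 0]

variation = [abs(a - b) for a, b in zip(experience_a, experience_b)]
print(variation)  # [0, 0, 0, 1, 0, 0, 1, 0]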

Multiple experiences will reinforce at points of correspondence, and
cancel elsewhere. Tiny children are shown instances of things (dogs,
kittens, cows, fruits) and are expected to generalize and to
demonstrate their generalization, so adults can correct them if
necessary. Generalization is the shift in figure/ground percentage
which comes from "thresholding" out the weaker sensations. The
resultant is the "intersection" of qualities of two or more
experiences. This whole operation, comparing millions of sensation
details with corresponding sensation details in another experience,
can happen in parallel in a very few cycles or steps.
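
The same toy representation shows the thresholding idea: superimposed
experiences reinforce where they agree and wash out elsewhere, and
thresholding the composite leaves the "intersection" of shared
qualities. The patterns are, again, invented.

# Three "dog" experiences; summing them reinforces points of
# correspondence, and a threshold keeps only the strong (shared) ones.
dog1 = [1, 1, 0, 1, 0, 1]
dog2 = [1, 1, 1, 1, 0, 0]
dog3 = [1, 1, 0, 1, 1, 0]

composite = [sum(t) for t in zip(dog1, dog2, dog3)]
general_dog = [1 if c == 3 else 0 for c in composite]  # unanimity threshold
print(general_dog)  # [1, 1, 0, 1, 0, 0] -- the shared qualities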

Informed by Maturana's ideas of autopoietic systems, mind can be
considered as an emergent phenomenon of the complexity which has
evolved in the central nervous systems of Terrestrial organisms
(that's us). This view has fundamental philosophical implications
concerning whether minds are likely to exist elsewhere in the Universe
due to "natural causes", and whether we can aspire to create minds.

Much "thinking" is of the sort described by the Nobel Prize winner
in "The Search for Solutions" who thinks of DNA as a rope which, when
stretched will break at certain "weak" points. That "tool", the
visualization, is guided by physical experience, his personal
experience of ropes and their behavior. Einstein said he often
thought in images; certainly his thought was guided, and perhaps
the results judged, by his personal experience with the things
represented. We also need "... the ability to generalize, the
ability to strip to the essential attributes of some actor in the
process..."
"We are not ready to write equations, for the most part,
and we still rely on mechanical and chemical or other physical models."

Josua Lederberg - Nobel Prize geneticist - President of Rockefeller U.
"The Search for Solutions".

The internal loop can use motor action (intents) to re-stimulate
associated sensory input (results), so entire sequences of sensory
input to motor output to sensory input can occur without interacting
with the external environment. Here is the basis for imagination and
planning. Experiences need not be original; they may be created
entirely from abstractions. This is called *imagination*.

The ability to construct internal imaginary events and situations is
fundamental to symbolic communication, where symbols both evoke and
are derived from internal state. Planning is the process of reviewing a
set of experiences, which may be recalled, or may be constructed
imaginary experiences. Planning requires imagination (see above) of
actions and consequences. The success and effectiveness of the
resulting plan depends on the quality and quantity of experiences
available to the planner. He benefits from a rich repertoire of
experience from which to choreograph his dance of events. The novelty
in the present theory is that most of the planning process is
essentially and necessarily analog in nature, and symbol processing
is only part of it. Symbols are critical to make the process
explicit, but the planning process itself is not only, or even
primarily, symbol processing.
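
A minimal sketch of planning in this sense, assuming a hypothetical
internal model predict() standing in for the sensory consequences
re-stimulated by an imagined motor intent; the one-number "world" and
all names are invented.

def predict(state, action):
    """Imagined consequence of a motor intent: here just arithmetic on
    a one-number 'state', standing in for evoked sensory experience."""
    return state + {"step_forward": 1, "step_back": -1, "wait": 0}[action]

def plan(state, goal, actions, depth=3):
    """Review imagined action/consequence sequences and return the
    first one whose imagined outcome reaches the goal."""
    if state == goal:
        return []
    if depth == 0:
        return None
    for a in actions:
        rest = plan(predict(state, a), goal, actions, depth - 1)
        if rest is not None:
            return [a] + rest
    return None

print(plan(0, 2, ["step_forward", "step_back", "wait"]))
# ['step_forward', 'step_forward']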

If we agree that our minds are an effect of our CNS, then we must
accept that the structure of our mind is determined by the structure
of our CNS. Sure there's a "deep structure" in linguistic ability;
it's our physical implementation (embodiment). The "meaning" of
language is that state which it evokes in us.

"A new meaning is born whenever the mind uses a word or other symbol
in a new way. If you think of a key as something to open a lock and
then speak of hard work as the key to success, you are using the word
key in a new way. It no longer means simply a metal implement for
opening a lock; it has acquired a much richer sense in your mind:
"
necessary prerequisite for attaining a desired goal." If the word
key were not free to shift its sense, the new concept probably could
not emerge. All thinkers, whether artists, philosophers, scientists,
businessmen, or laborers, can create new thoughts if they use words
in new ways."
["The Mind Builder", Richard W. Samson, 1965.]

Samson identified seven mental "faculties" which make an interesting
list of target capabilities for "intelligent machines". These are:
1. Words: We let words (together with numbers and other symbols)
mean things.
2. Thing Making: We make mental pictures of things when we
interpret sensations.
3. Qualification: We notice the qualities of things: how things
are alike and how they differ.
4. Classification: We mentally sort things into classes, types or
families.
5. Structure Analysis: We observe how things are made: break
structural wholes into component parts.
6. Operation Analysis: We notice how things happen: in what
successive stages.
7. Analogy: We see how seemingly unconnected situations are
alike, forming parallel relations in different "worlds of thought".

When you are ready, try your system on the SAT test:

Which word (a, b, c, or d) best completes the sentence,
in your opinion? There is no "right" answer; pick the
word which seems best to you.
Poverty and hatred are ---------- of war.
(a) roots (b) leaves (c) seeds (d) fruits

We might be well advised to imitate a real example of intelligence
(ours). Later we can improve on the implementation, and possibly
the performance.

Certainly we will use mathematics to analyze and predict the system's
behavior, or rather that of subsets, abstractions, and models of the
system. But we may not be able to construct any model less complex
than the system itself which still produces the desired behavior; that
behavior must be understood through simulation.

"Computational irreducibility is a phenemenon that seems to arise in
many physical and mathematical systems. The behavior of any system
can be found by explicit simulation of the steps in its evolution.
When the system is simple enough, however, it is always possible to
find a short cut to the procedure: once the initial state of the system
is given, its state at any subsequent step can be found directly from
a mathematical formula."
"For a system such as (illus.), however, the
behavior is so complicated that in general no short-cut description of
the evolution can be given. Such a system is computationally
irreducible, and its evolution can effectively be determined only by
the explicit simulation of each step. It seems likely that many
physical and mathematical systems for which no simple description is
now known are in fact computationally irreducible. Experiment, either
physical or computational, is effectively the only way to study such
systems."

[Stephen Wolfram, Computer Software in Science and Mathematics,
Scientific American, Sept., 1984]
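
To see what "explicit simulation of each step" means in practice, here
is a small Python illustration using the elementary cellular automaton
known as rule 30, one of Wolfram's standard examples; no short-cut
formula for its evolution is known, so the state at step n is obtained
only by simulating all n steps.

def rule30_step(cells):
    """One synchronous update of rule 30: new cell = left XOR
    (center OR right). Borders are held at 0."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

cells = [0] * 15 + [1] + [0] * 15   # a single live cell in the middle
for step in range(12):              # explicit step-by-step simulation
    print("".join(".#"[c] for c in cells))
    cells = rule30_step(cells)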

A mind is an effect which probably cannot be sustained at a lesser
level of complexity than in our own case; any abstraction which
simplifies will also destroy the very capabilities we wish to
understand. There are trillions of components and connections in the
human brain. No reasonable person can expect to model a mind in any
significant way using a few tens or hundreds of components. Since
there is a threshold of complexity below which the behavior of
interest will not occur, and the complexity of models is generally
deliberately reduced below this level, models will not produce the
phenomena of interest.

"Yet recall John von Neumann's warning that a complete description of
how we perceive may be far more complicated than this complicated
process itself - that the only way to explain pattern recognition
may be to build a device capable of recognizing pattern, and then,
mutely, point to it.

How we think is still harder, and almost certainly we are not yet
breaking this problem down in solvable form."


[Horace Freeland Judson, "The Search for Solutions", 1980]

In spite of the tone of that last quote, I believe we can and should
build, now, things which will prove or disprove these ideas, so we
can either quit wasting energy or get going on building other minds.

I'm not going to be at this mail address after March 1, but probably
someone will forward my mail. The Boeing Advanced Technology Center
just closed down all its robotics projects, including mobility and
stereo vision, my work in induction, and all other work not "directly
supporting Boeing programs". So twenty-plus of us are scrambling to
find other places to work. I don't know what access to any networks
I might have next month.

Ray

------------------------------

End of AIList Digest
********************
