AIList Digest            Tuesday, 2 Dec 1986      Volume 4 : Issue 276 

Today's Topics:
Administrivia - Proposed Split of This Group,
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 1 Dec 86 09:24:05 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Proposed: a split of this group

I empathize with the spirit of the motion. But is it really
necessary to split the list? I think Ken does a really good
thing by putting warning labels on the philosophical
discussions: they're easy to skip over if you're not interested.
As long as he's willing to put the time into doing that, there's
no need for a split.

------------------------------

Date: Mon 1 Dec 86 10:10:19-PST
From: Stephen Barnard <BARNARD@SRI-IU.ARPA>
Subject: One vote against splitting the list

I for one would not like to see the AI-list divided into two --- one
for "philosophising about" AI and one for "doing" AI. Even those of
us who do AI sometimes like to read and think about philosophical
issues. The problem, if there is one, is that certain people have
been abusing the free access to the list that Ken rightfully
encourages. Let's please keep our postings to a reasonable volume
(per contributor). The list is not supposed to be anyone's personal
soapbox.

------------------------------

Date: 1 Dec 86 18:48:31 GMT
From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: Proposed: a split of this group

Just suggested by jbn@glacier.UUCP (John Nagle):

> I would like to suggest that this group be split into two groups;
>one about "doing AI" and one on "philosophising about AI", the latter
>to contain the various discussions about Turing tests, sentient computers,
>and suchlike.

Good idea. I was beginning to think the discussions of "when is an
artifice intelligent" might belong in "talk.ai." I was looking for
articles about how to do AI, and not finding any. The trouble is,
"comp.ai.how-to" might have no traffic at all.

We seem to be trying to "create artificial intelligence," with the
intent that we can finally achieve success at some point (if only we
knew how to define success). Why don't we just try always to create
something more intelligent than we created before? That way we can not
only claim nearly instant success, but also continue to have further
successes without end.

Would the above question belong in "talk.ai" or "comp.ai.how-to"?

Marty
M. B. Brilliant (201)-949-1858
AT&T-BL HO 3D-520 houem!marty1

------------------------------

Date: Sun, 30 Nov 1986 22:27 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Searle, Turing, Symbols, Categories


Lambert Meertens asks:

If some things we experience do not leave a recallable trace,
then why should we say that they were experienced consciously?

I absolutely agree. In my book, "The Society of Mind", which will be
published in January, I argue, with Meertens, that the phenomena we
call consciousness are involved with our short term memories. This
explains why, as Meertens suggests, it makes little sense to
attribute consciousness to rocks. It also means that there are limits
to what consciousness can tell us about itself. In order to do
perfect self-experiments upon ourselves, we would need perfect records
of what happens inside our memory machinery. But any such machinery
must get confused by self-experiments that try to find out how it
works - since such experiments must change the very records that
they're trying to inspect! This doesn't mean that consciousness
cannot be understood, in principle. It only means that, to study it,
we'll have to use the methods of science, because we can't rely on
introspection.

Below are a few more extracts from the book that bear
on this issue. If you want to get the book itself, it is
being published by Simon and Schuster; it will be printed
around New Year but won't get to bookstores until mid-February.
If you want it sooner, send me your address and I should be
able to send copies early in January. (Price will be 18.95 or
less.) Or send the name of your bookstore so I can get S&S to
lobby the bookstore. They don't seem very experienced at books
in the AI-Psychology-Philosophy area.

In Section 15.2 I argue that although people usually assume that
consciousness is knowing what is happening in their minds, right at the
present time, consciousness never is really concerned with the
present, but with how we think about the records of our recent
thoughts. This explains why our descriptions of consciousness are so
queer: whatever people mean to say, they just can't seem to make it
clear. We feel we know what's going on, but can't describe it
properly. How could anything seem so close, yet always keep beyond
our reach? I answer, simply because of how thinking about our short
term memories changes them!

Still, there is a sense in which thinking about a thought is like
thinking about an ordinary thing. Our brains have various agencies
that learn to recognize - and even name - various
patterns of external sensations. Similarly, there must be other
agencies that learn to recognize events *inside* the brain - for
example, the activities of the agencies that manage memories. And
those, I claim, are the bases of the awarenesses we recognize as
consciousness. There is nothing peculiar about the idea of sensing
events inside the brain; it is as easy for an agent (that is, a small
portion of the brain) to be wired to detect a *brain-caused
brain-event*, as to detect a world-caused brain-event. Indeed only a
small minority of our agents are connected directly to sensors in the
outer world, like those that sense the signals coming from the eye or
skin; most of the agents in the brain detect events inside of the
brain! In particular, I claim that to understand what we call
consciousness, we must understand the activities of the agents that
are engaged in using and changing our most recent memories.

Why, for example, do we become less conscious of some things when we
become more conscious of others? Surely this is because some resource
is approaching some limitation - and I'll argue that it is our limited
capacity to keep good records of our recent thoughts. Why, for
example, do thoughts so often seem to flow in serial streams? It is
because whenever we lack room for both the new records and the old, the
records of our recent thoughts must displace the older ones. And why
are we so unaware
of how we get our new ideas? Because whenever we solve hard problems,
our short term memories become so involved with doing *that* that they
have neither time nor space for keeping detailed records of what they,
themselves, have done.

To think about our most recent thoughts, we must examine our recent
memories. But these are exactly what we use for "thinking," in the
first place - and any self-inspecting probe is prone to change just
what it's looking at. Then the system is likely to break down. It is
hard enough to describe something with a stable shape; it is even
harder to describe something that changes its shape before your eyes;
and it is virtually impossible to speak of the shapes of things that
change into something else each time you try to think of them. And
that's what happens when you try to think about your present thoughts
- since each such thought must change your mental state! Would any
process that alters what it's looking at not become confused?
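
The same point can be put in mechanical terms. Below is a small, purely
illustrative sketch (mine, not the book's): a memory whose inspect
operation also stores a record of the inspection, so each act of
introspection overwrites part of what it set out to examine.

    # Toy model: self-inspection that perturbs the memory it inspects.
    class IntrospectiveMemory:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.records = []

        def store(self, record):
            self.records.append(record)
            if len(self.records) > self.capacity:
                self.records.pop(0)  # displace the oldest record

        def inspect(self):
            snapshot = list(self.records)
            # The probe itself leaves a trace, changing the state it examined.
            self.store("inspected my own memory")
            return snapshot

    m = IntrospectiveMemory()
    for thought in ["saw a bird", "named it", "wondered how I knew"]:
        m.store(thought)
    print(m.inspect())  # reflects the original thoughts...
    print(m.inspect())  # ...but each inspection has already overwritten part of them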

What do we mean by words like "sentience," "consciousness," or
"self-awareness? They all seem to refer to the sense of feeling one's
mind at work. When you say something like "I am conscious of what I'm
saying," your speaking agencies must use some records about the recent
activity of other agencies. But, what about all the other agents and
activities involved in causing everything you say and do? If you were
truly self-aware, why wouldn't you know those other things as well?
There is a common myth that what we view as consciousness is
measurelessly deep and powerful - yet, actually, we scarcely know a
thing about what happens in the great computers of our brains.

Why is it so hard to describe your present state of mind? One reason
is that the time-delays between the different parts of a mind mean
that the concept of a "present state" is not a psychologically sound
idea. Another reason is that each attempt to reflect upon your mental
state will change that state, and this means that trying to know your
state is like photographing something that is moving too fast: such
pictures will always be blurred. And in any case, our brains did not
evolve primarily to help us describe our mental states; we're more
engaged with practical things, like making plans and carrying them
out.

When people ask, "Could a machine ever be conscious?" I'm often
tempted to ask back, "Could a person ever be conscious?" I mean this
as a serious reply, because we seem so ill equipped to understand
ourselves. Long before we became concerned with understanding how we
work, our evolution had already constrained the architecture of our
brains. However, we can design our new machines as we wish, and
provide them with better ways to keep and examine records of their own
activities - and this means that machines are potentially capable of
far more consciousness than we are. To be sure, simply providing
machines with such information would not automatically enable them to
use it to promote their own development; and until we can design more
sensible machines, such knowledge might only help them find more ways
to fail: the easier to change themselves, the easier to wreck
themselves - until they learn to train themselves. Fortunately, we
can leave this problem to the designers of the future, who surely
would not build such things unless they found good reasons to.

(Section 25.4) Why do we have the sense that things proceed in
smooth, continuous ways? Is it because, as some mystics think, our
minds are part of some flowing stream? I think it's just the opposite:
our sense of constant steady change emerges from the parts of mind
that manage to insulate themselves against the continuous flow of
time! In other words, our sense of smooth progression from one mental
state to another emerges, not from the nature of that progression
itself, but from the descriptions we use to represent it. Nothing can
*seem* jerky, except what is *represented* as jerky. Paradoxically,
our sense of continuity comes not from any genuine perceptiveness, but
from our marvelous insensitivity to most kinds of changes. Existence
seems continuous to us, not because we continually experience what is
happening in the present, but because we hold to our memories of how
things were in the recent past. Without those short-term memories,
all would seem entirely new at every instant, and we would have no
sense at all of continuity, or of existence.

One might suppose that it would be wonderful to possess a faculty of
"continual awareness." But such an affliction would be worse than
useless because, the more frequently your higher-level agencies change
their representations of reality, the harder it is for them to find
significance in what they sense. The power of consciousness comes not
from ceaseless change of state, but from having enough stability to
discern significant changes in your surroundings. To "notice" change
requires the ability to resist it, in order to sense what persists
through time, but one can do this only by being able to examine and
compare descriptions from the recent past. We notice change in spite
of change, and not because of it. Our sense of constant contact with
the world is not a genuine experience; instead, it is a form of what I
call the "Immanence illusion". We have the sense of actuality when
every question asked of our visual systems is answered so swiftly that
it seems as though those answers were already there. And that's what
frame-arrays provide us with: once any frame fills its terminals, this
also fills the terminals of the other frames in its array. When every
change of view engages frames whose terminals are already filled,
albeit only by default, then sight seems instantaneous.
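
As a rough illustration of that last point - the class below is my own
toy sketch, borrowing the frame and terminal vocabulary loosely - a
frame whose terminals carry default values can answer any question
immediately, whether or not anything has actually been observed, which
is just that sense of the answers being "already there."

    # Toy model: a frame whose terminals fall back to default values.
    class Frame:
        def __init__(self, name, defaults):
            self.name = name
            self.defaults = dict(defaults)  # assumptions shared across the frame-array
            self.filled = {}                # terminals actually filled by looking

        def ask(self, terminal):
            # Answer from real evidence if we have it; otherwise answer at once
            # from the default, so no question ever has to wait.
            return self.filled.get(terminal, self.defaults.get(terminal, "unknown"))

        def fill(self, terminal, value):
            self.filled[terminal] = value

    room = Frame("living room",
                 {"floor": "carpet", "walls": "painted", "ceiling": "white"})
    print(room.ask("ceiling"))    # answered by default before we ever look up
    room.fill("floor", "wooden")  # an actual glance overrides the default
    print(room.ask("floor"))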
