AIList Digest            Thursday, 8 Jan 1987       Volume 5 : Issue 4 

Today's Topics:
Philosophy - Intentions & Mind Modeling & Response to Minsky on Mind(s)

----------------------------------------------------------------------

Date: Sun, 4 Jan 87 19:51:22 EST
From: "Keith F. Lynch" <KFL%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Intentions

> From: DAVIS%EMBL.BITNET@WISCVM.WISC.EDU
> Subject: unlikely submission to the ai-list...
>
> Of course, all this chat from computer chess players is meaningless -
> nobody *really* believes in the will of the machine.

True.

> This ascription of intentionality [to people] is not, I believe, a
> mistake, simply on the grounds that intentionality simply does not exist.
> It is an explanatory construct which creates an arbitrary class
> (`intentional objects'), but has no real existence in the world ...

One minor flaw. I know that *I* have intentions. So there is at
least one thing in the world with intentions.
Given that I intend things, I find it plausible that other humans do
so as well. And given that human beings have intentions, I don't find
it impossible that machines might someday have intentions.

...Keith

------------------------------

Date: 7 Jan 87 03:28:09 GMT
From: sdcc6!calmasd!dbm@sdcsvax.ucsd.edu (Brian Millar)
Subject: mind modeling

I believe Stevan Harnad when he says he has a mind. The
alternative theory is that a mindless automaton is telling me so.
How well does that fit with the preponderance of data? Very
poorly, considering that no AI program can yet generate the kind
of complex & original testimony he exhibits (despite his being
restricted to text displays). Therefore the rational, scientific
model for me to hold is that he has a mind with subjective
awareness just as I do.

Point: Testimony about subjective experience is a valid type of
data upon which reasonable scientific models of mind can be based.
The highly regular data which has accumulated in the area of
perception is almost entirely of this type.

------------------------------

Date: Wed, 7 Jan 87 11:53:03 EST
From: princeton!mind!harnad@seismo.CSS.GOV
Subject: Response to Minsky on Mind(s)


On mod.ai, MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU (Marvin Minsky) wrote:

> the phenomena we call consciousness are involved with our
> short term memories. This explains why... it makes little sense to
> attribute consciousness to rocks.

I agree that rocks probably don't have short-term memories. But I
don't see how having a short-term memory explains why we're conscious
and rocks aren't. In particular, why are any of our short-term
memories conscious, rather than all being unconscious?

The extracts from Marvin Minsky's new book look as if they will be
very insightful correlative accounts of (i) the phenomenology of
subjective experience and (ii) the objective processes going on in
machines that can be interpreted as analogous to (i) in various ways.
What none of the
extracts he has presented even hints at is (a) why interpreting any of
these processes (and the performance they subserve) as conscious is
warranted, and (b) why even our own processes and performance should
be conscious, rather than completely unconscious. That (as I've
regrettably had to keep pointing out whenever it seems to be overlooked
or side-stepped) is called the mind/body problem.

Two constraints are useful in this sort of enterprise (just to keep
the real problem in focus and to prevent one from going off into
metaphor and hermeneutics):

(1) Before tackling the 2nd-order (and in many respects much easier)
problem of self-consciousness, it would be well to test whether one's
proposal has made any inroads on the problem of consciousness simpliciter.
To put it another way: Before claiming that one's account captures the
phenomenology of being aware that one is experiencing (or has
just felt), say, pain, one should show that the account has captured
experiencing pain in the first place. (That's the little detail that
keeps slipping in the back door for free, as it does in attempts to
build perpetual motion machines or trisect an angle...)

(2) Before claiming with conviction that one has shown "why" a certain
performance is accomplished by a process that is validly interpreted
as a conscious one, one should indicate why the very same performance could
not be accomplished by the very same process, perfectly UNconsciously
(thereby rendering the conscious interpretation supererogatory).

> although people usually assume that consciousness is knowing
> what is happening in our minds, right at the
> present time, consciousness never is really concerned with the
> present, but with how we think about the records of our recent
> thoughts... how thinking about our short term memories changes them!

What does this have to do with, say, having a toothache now? Is there
anything in the short-term memory scenario that says (1) how my
immediate experience of the pain is a memory-function? (Note, I'm not
saying that subjective experience doesn't always involve some pasting
together of instants that, among other things, probably requires memory.
My question concerns how the memory hypothesis -- or any other --
accounts for the fact that what is going on there in real time is
conscious rather than unconscious; how does it account for my
EXPERIENCE of pain?) And once that's answered, the second question is
(2) why couldn't all that have been accomplished completely
unconsciously? (E.g., if the "function" of a toothache is to warn me of
tissue damage, to help me avoid it in future by learning from the past,
etc., why can't all that be accomplished without bothering to have
anything EXPERIENCE anything in the process?)

[In my view, by the way, this old conundrum about
thinking-perturbing-thinking is just another of the red herrings one
inherits when one focuses first on the 2nd-order awareness problem,
instead of the primary and much more profound 1st-order awareness problem.
This may make for the entertaining reflections about self-reference and
recursion in Doug Hofstadter's books or about the paradoxes of free
will in Donald MacKay's perorations, but it just circles around the mind/body
problem instead of confronting it head-on.]

[Let me also add that there are good reasons why it is called the
"mind/body" problem and not the "mindS/body" problem, as Marvin Minsky's
tacit pluralizations would seem to imply. The phenomenological fact is that,
at any instant, I (singular) have a toothache experience (singular).
Having this (singular) conscious experience is what one calls having a
(singular) mind. Now it may well be that one can INFER multiple processes
underlying the capacity to have such singular experiences. But the processes
are unconscious ones, not directly EXPERIENCED ones, hence they are not plural
minds, properly speaking. The fact that these processes may be INTERPRETABLE as
having local consciousnesses and intentions of their own is in fact yet
another argument against thus overinterpreting them, rather than an
argument for claiming we have more than one mind. Claims about minds
must rest exclusively on the phenomenological facts, which are,
without exception, singular. (This includes the putative problem cases
of multiple personality and altered states. The contents of our
experiences can be varied, plural and bizarre in many ways, but it seems
inescapable that at any instant a person can only be the conscious subject
of one experience, not the subject of many.)]

> Our brains have various agencies that learn to
> recognize - and even name - various patterns of external sensations.
> Similarly, there must be other agencies that learn to recognize
> events *inside* the brain - for example, the activities of the
> agencies that manage memories. And those, I claim, are the bases
> of the awarenesses we recognize as consciousness... I claim that to
> understand what we call consciousness, we must understand the
> activities of the agents that are engaged in using and changing our
> most recent memories.

You need an argument for (1) why any process you propose is correctly
interpreted as the basis of 1st-order awareness of anything --
external or internal -- rather than just a mindless process, and (2)
why the functions you describe it as accomplishing in the way it does
need to be accomplished consciously at all, rather than mindlessly.

> What do we mean by words like "sentience," "consciousness," or
> "self-awareness? They all seem to refer to the sense of feeling one's
> mind at work. When you say something like "I am conscious of what I'm
> saying," your speaking agencies must use some records about the recent
> activity of other agencies. But, what about all the other agents and
> activities involved in causing everything you say and do? If you were
> truly self-aware, why wouldn't you know those other things as well?

What about just a toothache now? I'm not feeling my mind at work, I'm
feeling a pain. (I agree, of course, that there are many processes
going on in my brain that are not conscious; the real burden is to show
why ANY of them are conscious.)

> When people ask, "Could a machine ever be conscious?" I'm often
> tempted to ask back, "Could a person ever be conscious?"
> ...we can design our new machines as we wish, and
> provide them with better ways to keep and examine records of their
> own activities - and this means that machines are potentially capable
> of far more consciousness than we are.

"More" conscious than we are? What does that mean? I understand what
conscious "of" more means (more inputs, more sense modalities, more
memory, more "internal-process" monitoring) -- but "more conscious"?
[In its variant form, called the "other-minds" problem, by the way,
the question about (other) machines' consciousnesses and (other) persons'
consciousnesses are seen to be the same question. But that's no answer.]

> To "notice" change requires the ability to resist it, in order
> to sense what persists through time, but one can do this only
> by being able to examine and compare descriptions from the recent past.

Why should a process that allows a device to notice (respond to,
encode, store) change, resist it, examine, compare, describe, remember,
etc. be interpreted as (1) a conscious process? And (2) why couldn't it
accomplish the exact same things unconsciously?

I am not, by the way, a spokesman for the point of view advocated by
Dreyfus or by Searle. In asking these pointed questions I am trying to
show that the mind/body problem is a red herring for cognitive
science. I recommend methodological epiphenomenalism and performance
modeling as (what I believe is) the correct research strategy. Instead
of spending our time trying to build metaphorical perpetual motion
machines, I believe we should try to build real machines that capture our
total performance capacity (the Total Turing Test).

------------------------------

End of AIList Digest
********************
