AIList Digest           Saturday, 28 May 1988       Volume 7 : Issue 9 

Today's Topics:

Free Will - The Saga Continues ...

----------------------------------------------------------------------

Date: 23 May 88 09:02:41 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: Free Will-Randomness and Question-Structure

In article <194@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>(N.B. The mathematician's "true" is not the same thing as the
> epistemologist's "true".
Which epistemologist? The reality and truth of mathematical objects
has been a major concern in many branches of philosophy. Many would
see mathematics, when it succeeds in formalising proof, as one form of
truth. Perhaps consistency is a better word, and we should reserve
truth for the real thing :-)

This of course would make AI programs true models by correspondence,
rather than internal elegance. Verification of these models is an
important issue that I'm sure our mathematical idealists are pursuing
with great vigour to the neglect of all else :-)
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of humanity is humans, not machines

------------------------------

Date: 23 May 88 09:14:15 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: Acting irrationally (was Re: Free Will & Self Awareness)

In article <32403@linus.UUCP> bwk@mbunix (Kort) writes:
>We are destined to remain unaware of vast portions of our civilization's
>collective information base.
And so therefore are computers! Still want to get mind on silicon?
Only if it's information-free? Anyone remember GPS? Can you get
anything on without requiring some knowledge (remember expert systems)?

A computer will be beyond programming wherever its input bandwidth is
substantially narrower than ours. It will just take far too long to
get the knowledge base in, except for small well-defined areas of
technical knowledge with a high pay off (e.g. Prospector, genetic
engineering advisors). Thus the potential of AI is obviously limited
even ignoring formalisation problems related to the large area of
non-technical knowledge which underlies most social interaction outside
of high-tech. work.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow

------------------------------

Date: 23 May 88 13:28:44 GMT
From: geb@cadre.dsl.pittsburgh.edu (Gordon E. Banks)
Subject: Re: Free Will & Self Awareness

In article <1187@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>In article <1176@cadre.dsl.PITTSBURGH.EDU> Gordon E. Banks writes:
>>Punishment can serve to "redesign" the human machine. If you have children,
>>you will probably know this. Unfortunately, it doesn't work with everyone.
>How can an artificial intelligence ever rise above recidivism? Are
>there any serious examples of re-programming systems, i.e. a system
>that redesigns itself in response to punishment?

Certainly! The back-propagation connectionist systems are "punished"
for giving an incorrect response to their input by having the weight
strengths leading to the wrong answer decreased. In most connectionist
learning systems such "punishment" is used exclusively, not rewards.
You may consider this a "crude" learning system, but it probably isn't
much cruder than the actual neural apparatus underlying most organic
brains.
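
[A minimal sketch of the error-driven update Banks describes, reduced
to a single linear unit trained by the delta rule, the one-layer core
of back-propagation. The code and its names are illustrative only,
not taken from any particular connectionist system of the time.]

    import random

    def train_step(w, x, target, lr=0.1):
        # the unit's response to the input
        y = sum(wi * xi for wi, xi in zip(w, x))
        error = y - target          # how wrong the answer was
        # "punish" by weakening each weight in proportion to its
        # share of the blame; no separate reward signal is needed
        return [wi - lr * error * xi for wi, xi in zip(w, x)]

    w = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(100):
        w = train_step(w, [1.0, 0.5, -0.5], target=1.0)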

------------------------------

Date: 23 May 88 20:16:24 GMT
From: !crocus!rmpinchback@uunet.uu.net (Reid M. Pinchback)
Subject: Re: Free Will, Quantum computers, determinism, randomness,
modelling

In article <1032@klipper.cs.vu.nl> J. A. "Biep" Durieux writes:

>In several newsgroups there are (independent) discussions on the nature
>of free will. It seems natural to merge these. Since these discussions
>mostly turn around physical, logical or mathematical notions, the natural
>place for this discussion seems to be sci.philosophy.tech (which newsgroup
>is devoted to technical philosophy, i.e. logic, philosophical mathematics
>(like intuitionism, formalism, etc.), philosophical physics (the time
>problem, interpretations of quantum mechanics, etc) and the like).
>Here follows a bit of the discussion in sci.math, plus some of my
>comments.
>
Merging a subject is a nice idea... but perhaps, instead of effectively
continuing the subject of NINE newsgroups, maybe you could just post
a little note to folks about where to continue the discussion? In other
words, if you want to merge it, actually merge it! All this free will
discussion has been cluttering up so many newsgroups, it's becoming a
real pain instead of an interesting topic.

I agree... sci.philosophy.tech sounds like a good group for this
discussion. Maybe the USENET folks out there would like to start
focusing the talk in that group. It's getting to the point where people
are posting messages to justify why they are continually posting into
places like comp.ai.

Who knows, if there is enuf interest, we could start a new
newsgroup. How about sci.philosophy.free ? :-)


Ok, flame off. Ciao!


Reid

------------------------------

Date: 24 May 88 02:20:40 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Free will does not require nondeterminism.

In article <185@proxftl.UUCP> T. William Wells writes:
]The absolute minimum required for free will is that there exists
]at least one action a thing can perform for which there is no
]external phenomena which are a sufficient cause.

I have a suspicion that you may be getting too much from this
"external". If I decide to eat some chocolate ice cream when
I go home, there may well not be anything currently outside myself
that causes me to do so (although the presence of ice cream allows
me to do so). Nonetheless, it might be that entirely deterministic
events inside myself caused me to eat the ice cream and that my
impression that I made the decision freely was just an illusion.

It must also be considered that everything internal to me might
ultimately be caused by things external.

]For if this is the case, then it is invalid to say that
]everything that the thing does is caused by external phenomena.
]Therefore, the thing itself must be considered as the cause of
]what it does (unless you admit the existence of uncaused
]action).

I am also doubtful about "the thing itself". What is that
when referring to a person? If you include the body and not
just the consciousness, say, you may well be right that "the
thing itself" was the cause, but it would not be a case of
free will in the way people normally understand it. If I
have some inbuilt liking for chocolate ice cream it is no
more a case of free will than that I'm nearsighted.

]This does not require that the thing could have done other than
]what it did, though it does not prohibit this, either. Thus free
]will (a specific kind of action supposed to be not determined by
]external phenomena) could exist even in a deterministic
]universe.

If the action is not determined by external causes, and it
is not uncaused, what is the nature of the internal cause that
you suppose might remain and how does it count as free will?

-- Jeff

------------------------------

Date: 24 May 88 03:19:04 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Free Will and Self-Awareness

In article <8805182232.AA29965@ucbvax.Berkeley.EDU> Eyal Mozes writes:
>In article <445@aiva.ed.ac.uk> jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
>>I would be interested in knowing what you think *isn't* under a
>>person's volitional control. One would normally think that having
>>a sore throat is not under conscious control even though one can
>>chose to do something about it or even to try to prevent it.

>Which is why I used the word "indirectly".

You introduced the word "indirectly" in response to an earlier
message that suggested that obsessive thoughts might be a
counterexample to your claim that people could always control
their thoughts, which seemed to be a much stronger claim than
what you are saying now.

If the sense in which people can control their thoughts is no
stronger than that in which they can control whether or not
they have a sore throat, then your claim is a rather weak one.

>[...] the choice to focus your consciousness
>or not is the only fundamental free-will choice. However, this choice
>leads to other, derivative choices, which include the choices of what
>to think about, what views and values to hold, and what actions to
>take.

I would like to hear what you would say about some of the other
things I said in my previous message, specifically points like the
following: Why do you chose to focus on one thing rather than
another? Did you decide to focus on that thing, and if so how
did you decide to decide to focus on that thing, and so on.
Merely deciding to focus (and why that decision rather than
another? Did you decide to decide to focus, etc?) does not
determine what to think about, and so does not lead to it in
a very strong sense. But see my other messages for a better attempt
at explaining this.

At some point, something other than a conscious decision must
be made. And it is hard to see how that can count as free will
since the decision is essentially given to us by some non-conscious
part of our mind. And it isn't actually "at some point": every
thought just appears; we don't decide to have it first.

What I've just said is wrong in that I've made it sound like we're
something separate observing our thoughts as they "appear", that we
might wish to control them but can't. It would be better to say that
we just are these thoughts. We can't go to some independent level
where we control them.

Nonetheless, it is possible to have thoughts that are essentially
instructions (to ourselves) to think certain things, for example
when we try to remember something. And so we do have a certain
amount of control in that some of the links are conscious. This
ability is limited, however, and one of the limits is due to the
problems of infinite regress I've mentioned before.

I would recommend (again) Dennett's Elbow Room: The Varieties of
Free Will Worth Wanting and also Julian Jaynes' The Origin of
Consciousness in the Breakdown of the Bicameral Mind. Even if
Jaynes's central thesis is wrong (as it may well be), he makes
many useful and interesting observations along the way.

>Whenever a person faces a decision about what view to hold on some
>issue, or about what action to take, that person may focus his thoughts
>on all relevant information and thus try to make an informed, rational
>decision; or he may make no such effort, deciding on the basis of what
>he happens to feel like at that moment; or he may deliberately avoid
>considering some of the relevant information, and make a decision based
>on evasion.

Often the most rational choice is to deliberately avoid considering
all relevant information in order to make a decision within a certain
time. It is not necessarily evasion. I also see nothing irrational
in, for example, deciding to eat some chocolate ice cream just because
that's the flavor I happen to feel like having.

-- Jeff

------------------------------

Date: 24 May 88 08:18:21 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: Free Will & Self-Awareness

In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>interest) eventually be doing so on talk.philosophy.misc,
>Please direct any followups to that group.
Can I please just ask one thing here, as it is relevant?
>
>The Objectivist version of free will asserts that there are (for
>a normally functioning human being) no sufficient causes for what
>he thinks. There are, however, necessary causes for it.
Has this any bearing on the ability of a machine to simulate human
decision making? It appears so, but I'd be interested in how you think it
can be extended to yes/no/don't know about the "pure" AI endeavour.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow

------------------------------

Date: 24 May 88 10:46:35 GMT
From: mcvax!ukc!its63b!aipna!rjc@uunet.uu.net (Richard Caley)
Subject: Re: Social Construction of Reality


In-reply-to: nelson_p@apollo.uucp's message of 20 May 88 17:43:00 GMT
>Gilbert Cockton says:
>>The idea that there is ONE world, ONE physical reality is flawed . .
>>Thus, to ask if the world would change, this depends on whether you
>>see it as a single immutable physical entity, or an ideology, a set of
>>ideas held by a social group (e.g. Physicists whose ideas are often
>>different to engineers).
>
> To the extent that we try to map the world onto the limited
> resources of our central nervous systems there is bound to
> be a major loss of precision. This still doesn't provide any
> basis for assuming that the world is not a physical entity, . . .

I would like to suggest that at the centre of this disagreement is
an ambiguity in the concept of 'reality' or 'the world'. The 'reality'
which may or may not be out there, of which physicists try to build a
model, which may be made up of quarks or wave functions or whatever is
singular and immutable more or less by definition. The 'reality' which
people confront in their day-to-day existence is something quite different;
it contains chairs, elephants and some things (for instance the British
constitution) which have no _physical_ existence at all and so exist only
as social constructs.

> What does this discussion have to do with computers and artificial
> intelligence? I think that this topic would go better in one
> of the 'talk' groups where fuzzy thinking (not to be confused with
> fuzzy set theory) is more appropriate.
> --Peter Nelson

Any AI system must perform its function in the world of people's everyday
experience. A system which modeled the world as a system of wave functions
might make an interesting Expert System for a physicist, but it would not
be able to cope with going to the supermarket to buy lunch.

------------------------------

Date: 24 May 88 19:28:09 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

I appreciated Gordon Banks' commentary on the psychopathic personality.

His article reminded me of the provocative (but depressing) book by
M. Scott Peck, _People of the Lie: The Hope for Healing Human Evil_.

I'm actually more in favor of banishment than execution. But I do
believe that healing is a possibility for the future. But given the
difficulty of driving bugs out of software, I can appreciate the
challenge of expunging demons from the human psyche.

I agree that prison is not an effective environment for learning morality.
It's hard to learn morality and ethics in an environment devoid of role
models.

--Barry Kort

------------------------------

Date: 24 May 88 21:07:35 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Free Will & Self-Awareness

In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
]The Objectivist version of free will asserts that there are (for
]a normally functioning human being) no sufficient causes for what
]he thinks. There are, however, necessary causes for it.

That is, as you indicated, just an assertion. It does not seem
a particularly bad account of what having free will might mean.
The question is whether the assertion is correct. How do you know
there are no sufficient causes?

]In terms of the actual process, what happens is this: various
]entities provide the material which you base your thinking on
](and are thus necessary causes for what you think), but an
]action, not necessitated by other entities, is necessary to
]direct your thinking. This action, which you cause, is
]volition.

Well, how do I cause it? Am I caused to cause it, or does it
just happen out of nothing? Note that it does not amount to
having free will just because some of the causes are inside
my body. (Again, I am not sure what you mean by "other entities".)

]] But where does the "subject of your own choice" come from? I wasn't
]] thinking of letting one's thoughts wander, although what I said might
]] be interpreted that way. When you decide what to think about, did
]] you decide to decide to think about *that thing*, and if so how did
]] you decide to decide to decide, and so on?
]
]Shades of Zeno! One does not "decide to decide" except when one
]does so in an explicit sense.

My point was precisely that one could not decide to decide, and so
on, so that the initial step (and it might just be the decision,
without any decision to decide) was not something arrived at by
conscious reasoning.

]("I was waffling all day; later
]that evening I put my mental foot down and decided to decide once
]and for all.") Rather, you perform an act on your thoughts to
]direct them in some way; the name for that act is "decision".

Yes, but what determines the way in which I direct them, or even
whether I bother to direct them (right then) at all? I have no
problem (or at least not one I can think of right now) with calling
that act a decision. But why do I make that decision rather than
do something else?

By the way, we do not get talk.philosophy.misc, so if you answer
me there I will never see it.

-- Jeff

------------------------------

Date: 24 May 88 21:46:46 GMT
From: xanth!wet@ames.arpa (Warren E. Taylor)
Subject: Re: Free Will & Self Awareness

In article <1176@cadre.dsl.PITTSBURGH.EDU>, Gordon E. Banks writes:
In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned. But we
>do not punish the machine or incarcerate it.
>
>Why then, when a human engages in undesirable behavior, do we resort
>to such unenlightened corrective measures as yelling, hitting, or
>deprivation of life-affirming resources?
>

"Spanking" IS, I repeat, IS a form of redesigning the behavior of a child.
Many children listen to you only when they are feeling pain or are anticipating
the feeling of pain if they do not listen. Many "modern" types do not agree
with this. I am certainly not for spanking a child every time he/she turns
around, but there are also times when it is the only appropriate action. This
occurs when a child repeatedly misbehaves even after being repeatedly informed
of the consequences of his behavior. I have an extremely hard-headed nephew
who "deserves" a spanking quite often because he is doing something that is
dangerous or cruel or simply socially unacceptable. He is also usually
maddeningly defiant.

You only need to observe a baby for a short while to see a very nearly
unadulterated human behavior. Young children have not yet been "socialized".
They are very self-centered, and most of the time care only for themselves.
This is why they scream "mine" and pitch fits at the slightest resistance to
their will. Many people do not realize that children have learned to manipulate
their parents pretty well by the time they have learned to toddle. They know
just how far they can push a parent. Haven't you ever seen a child keep doing
something you are telling him "no" for until you give the slightest indication
that you are going to get up to deliver that spanking you've promised? They
then promptly quit. Lots of parents then sit back down. This has allowed the
child to learn your tolerance point. There is a 100% chance he will do it
again soon. In such a case, you need to get up and spank the child, even after
he has stopped because you got up. I remember similar occurrences with my
nephew when I babysat him. His "no-no" was trying to play with an electrical
outlet. I solved the problem by giving him 2 warnings, and if that did not
do it, an unavoidable paddling followed. He learned quickly that it was not
worth the pain to defy me.

Adults understand what a child needs. A child, on his own, would quickly kill
himself. Also, pain is often the only teacher a child will listen to. He
learns to associate a certain action with undesirable consequences. I am not
the least bit religious, but the old Biblical saying of "spare the rod..." is
a very valid and important piece of ancient wisdom.


Flame away.
Warren.

------------------------------

Date: 25 May 88 0732 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: free will

The following propositions are elaborated in

{\bf McCarthy, John and P.J. Hayes (1969)}: ``Some Philosophical Problems from
the Standpoint of Artificial Intelligence'', in D. Michie (ed), {\it Machine
Intelligence 4}, American Elsevier, New York, NY.

I would be grateful for discussion of them - especially technical discussion.

1. For AI, the key question concerning free will is "What view should
we program a robot to have of its own free will?" I believe my
proposal for this also sheds light on what view we humans should take
of our own free will.

2. We have a problem, because if we put the wrong assertions in our
database of common sense knowledge, a logic-based robot without a
random element might conclude that since it is a deterministic robot,
it doesn't make sense for it to consider alternatives. It might reason:
"Since I'm a robot, what I will do is absolutely determined, so any
consideration of whether one course of action or another would
violate (for example) Asimov's suggestion that robots shouldn't
harm human beings is pointless".

3. Actually (McCarthy and Hayes 1969) considered an even more
deterministic system than a robot in the world - namely a system
of interconnected finite automata and asked the question: "When
should we say that in a given initial situation, automaton 1
can put automaton 7 in state 3 by time 10?"


4. The proposed answer makes this a definite question about
another automaton system, namely a system in which automaton
1 is removed from the original system, and its output lines
are replaced by external inputs to the revised system. We
then say that automaton 1 can put automaton 7 in state 3
by time 10 provided there is a sequence of signals on the
external inputs to the revised system that will do it.
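
[A toy formalization of points 3 and 4, under the simplifying
assumption that the revised system can be reduced to one transition
function; everything here, including the name can_achieve, is
illustrative rather than taken from the 1969 paper.]

    from itertools import product

    def can_achieve(step, signals, init, goal, deadline):
        # The revised system: automaton 1 removed, its output lines
        # now free external inputs.  "Automaton 1 can put automaton 7
        # in state 3 by time 10" iff SOME signal sequence on those
        # lines drives the revised system into the goal by the deadline.
        for seq in product(signals, repeat=deadline):
            state = init
            for s in seq:
                state = step(state, s)
                if goal(state):
                    return True    # a witness sequence exists
        return False

    # Toy revised system: automaton 7 counts up toward state 3 when
    # the freed line carries a 1.
    print(can_achieve(step=lambda st, s: min(st + s, 3),
                      signals=(0, 1), init=0,
                      goal=lambda st: st == 3, deadline=10))   # True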

5. I claim this is how we want the robot to reason. We should program it
to decide what it can do, i.e. the variety of results it can achieve, by
reasoning that doesn't involve its internal structure but only its place
in the world. Its program should then decide what to do based on
what will best achieve the goals we have also put in its database.

6. I claim that my own reasoning about what I can do proceeds similarly.
I model the world as a system of interacting parts of which
I am one. However, when deciding what to do, I use a model in
which my outputs are external inputs to the system.

7. This model says that I am free to do those things that suitable
outputs will do in the revised system. I recommend
that any "impressionable students" in the audience take the same
view of their own free will. In fact, I'll claim they already do;
unless mistaken philosophical considerations have given them
theories inferior to the most naive common sense.

8. The above treats "physical ability". An elaboration involving
knowledge, i.e. that distinguishes my physical ability to dial
your phone number from my epistemological ability that requires
knowing the number, is discussed in the paper.

These views are compatible with Dennett's and maybe Minsky's.
In my view, McDermott's discussion would be simplified if he
incorporated discussion of the revised automaton system.

------------------------------

Date: Wed, 25 May 88 11:26:54 EDT
From: Drew McDermott <mcdermott-drew@YALE.ARPA>
Subject: free will

I would like to suggest a more constrained direction for the discussion
about free will. In response to my proposal, Harry Plantinga wrote:

As an argument that people don't have free will in the common sense,
this would only be convincing to ... someone who already thinks people
don't have free will.

I believe most of the confusion about this concept comes from there not
being any agreed-upon "common sense" of the term "free will." To the
extent that there is a common consensus, it is probably in favor of
dualism, the belief that the absolute sway of physical law
stops at the cranium. Unfortunately, ever since the seventeenth century,
the suspicion has been growing among the well informed that this kind of
dualism is impossible. And that's where the free-will problem comes
from; we seem to make decisions, but how is that possible in a world
completely describable by physics?

If we want to debate about AI versus dualism (or, to be generous to
Mr. Cockton et al., AI versus something-else-ism), we can. I don't view
the question as at all settled. However, for the purposes of this
discussion we ought to pretend it is settled, and avoid getting
bogged down in a general debate about whether AI is possible at
all. Let's assume it is, and ask what place free will would have
in the resulting world view. This attitude will inevitably require
that we propose technical definitions of free will, or propose dispensing
with the concept altogether. Such definitions must do violence to
the common meaning of the term, if only because they will lack the
vagueness of the common meaning. But science has always operated this
way.

I count four proposals on the table so far:

1. (Proposed by various people) Free will has something to do with randomness.

2. (McCarthy and Hayes) When one says "Agent X can do action A," or
"X could have done A," one is implicitly picturing a situation in which X
is replaced by an agent X' that can perform the same behaviors as X, but
reacts to its inputs differently. Then "X can do A" means "There is an X'
that would do A."
It is not clear what free will comes to in this theory.

3. (McDermott) To say a system has free will is to say that it is
"reflexively extracausal," that is, that it is sophisticated enough
to think about its physical realization, and hence (to avoid inefficacy)
that it must realize that this physical realization is exempt from
causal modeling.

4. (Minsky et al.) There is no such thing as free will. We can dispense
with the concept, but for various emotional reasons we would rather not.

I will defend my theory at greater length some other time. Let me confine
myself here to attacking the alternatives. The randomness theory has
the problem that it presents a necessary, but presumably not sufficient,
condition for a system to have free will. It is all very well to say
that a coin "chose to come up heads," but I would prefer a theory that
would actually distinguish between systems that make decisions and those
that don't. This is not (prima facie) a mystical distinction; a stock-index
arbitrage program decides to buy or sell, at least at first blush, whereas
there is no temptation to say a coin decides anything. The people in the
randomness camp owe us an account of this distinction.

I don't disagree with McCarthy and Hayes's idea, except that I am not
sure exactly whether they want to retain the notion of free will.

Position (4) is to dispense with the idea of free will altogether. I
am half in favor of this. I certainly think we can dispense with the
notion of "will"; having "free will" is not having a will that is free,
as opposed to brutes who have a will that is not free. But it seems
that it is incoherent to argue that we *should* dispense with the idea
of free will completely, because that would mean that we shouldn't use
words like "should." Our whole problem is to preserve the legitimacy
of our usual decision-making vocabulary, which (I will bet any amount)
everyone will go on using no matter what we decide.

Furthermore, Minsky's idea of a defense mechanism to avoid facing the
consequences of physics seems quite odd. Most people have no need for
this defense mechanism, because they don't understand physics in the
first place. Dualism is the obvious theory for most people. Among
the handful who appreciate the horror of the position physics has put
us in, there are plenty of people who seem to do fine without the
defense mechanism (including Minsky himself), and they go right on
talking as if they made decisions. Are we to believe that sufficient
psychotherapy would cure them of this?

To summarize, I would like to see discussion confined to technical
proposals regarding these concepts, and what the consequences of adopting
one of them would be for morality. Of course, what I'll actually see
is more meta-discussion about whether this suggestion is reasonable.

By the way, I would like to second the endorsement of Dennett's book
about free will, "Elbow Room," which others have recommended. I thank
Mr. Rapoport for the reading list. I'll return the favor with a reference
I got from Dennett's book:

D.M. MacKay (1960): ``On the logical indeterminacy of a free choice'',
{\it Mind \bf 69}, pp. 31--40.

MacKay points out that someone could predict my behavior, but that
(a) It would be misleading to say I was "ignorant of the truth" about
the prediction, because I couldn't be told the truth without
changing it.
(b) Any prediction would be conditional on the predictor's decision
not to tell me about it.

------------------------------

Date: Wed, 25 May 88 12:49:52 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: the mind of society

A confession first about _The Society of Mind_: I have not yet read all
of this remarkable linearized hypertext document. I do not think this
matters for what I am saying here, but I certainly could be mistaken and
would welcome being corrected.

SOM gives a useful and even illuminating account of what might be going
on in mental process. I think this account misses the mark in the
following way:

While SOM demonstrates in an intellectually convincing way that the
experience of being a unitary self or an ego is illusory, it still
assumes that individual persons are separate. It even appears to
identify minds with brains in a one-one correspondence. But the logic
of interacting intelligent agents applies in obvious ways to interacting
persons. What then, for example, of the mind of society?

(I bring to this training and experience in family therapy. Much of the
subconscious mental processes and communications of family members serve
to constitute and maintain the family as a homeostatic system--analogous
to the traffic of SOM agents maintaining the ego, see below.
Individuals--especially those out of communication with family members--
recreate relationships from their families in their relationships
outside the family; and vice versa, as any parent knows. I concur with
Gilbert Cockton's remarks about social context. If in alienation we
pretend social context doesn't matter, we just extend the scope of that
which we ignore, i.e. that which we relegate to subconsciousness.)

Consider the cybernetic/constructivist view that mind is not
transcendent but rather is immanent (an emergent property) in the
cybernetic loop structure of what is going on. I believe SOM is
consistent with this view as far as it goes, but that the SOM account
could and should go farther--cf e.g. writings of Gregory Bateson, Paul
Watzlawick, Maturana, Varela, and others; and in the AI world, some
reaching in this direction in Winograd & Flores _Understanding
Computers_.

Minsky cites Freud ("or possibly Poincaré") as introducing serious
consideration of subconscious thought. However, the Buddhists, for
example, are pretty astute students of the mind, and have been
investigating these matters quite systematically for a long time.

What often happens when walking, jogging, meditating, laughing, just
taking a deep sighing breath, etc is that there are temporarily no
decisions to be made, no distinctions to be discriminated, and the
reactive mind, the jostling crowd of interacting agents, quiets down a
bit. It then becomes possible to observe the mental process more
"objectively". The activity doesn't stop ("impermanence" or ceaseless
change is the byword here). It may become apparent that this activity
is and always was out of control. What brings an activity of an agent
(using SOM terms) above the threshold between subconscious and conscious
mental activity? What lets that continuing activity slip out of
awareness? Not only are the activities out of control--most of them
being most of the time below the surface of the ocean, so to speak, out
of awareness--but even the constant process of ongoing mental and
emotional states, images, and processes coming to the surface and
disappearing again below the surface turns out also to be out of
control.

This temporary abeyance in the need to make decisions has an obvious
relation to Minsky's speculation about "how we stop deciding" (in
AIList V6 #98):

MM> I claim that we feel free when we decide to not try further to
MM> understand how we make the decisions: the sense of freedom comes from a
MM> particular act - in which one part of the mind STOPs deciding, and
MM> accepts what another part has done. I think the "mystery" of free will
MM> is clarified only when we realize that it is not a form of decision
MM> making at all - but another kind of action or attitude entirely, namely,
MM> of how we stop deciding.

The report here is that if you stop deciding voluntarily--hold the
discrimination process in abeyance for the duration of sitting in
meditation, for example, not an easy task--there is more to be
discovered than the subjective feeling of freedom. Indeed, it can seem
the antithesis of freedom and free will, at times!

So the agents of SOM are continually churning away, and if they're
predetermined it's not in any way that amounts to prediction and control
as far as personal awareness is concerned. And material from this
ongoing chatter continually rises into awareness and passes away out of
awareness, utterly out of personal control. (If you don't believe me,
look for yourself. It is a humbling experience for one wedded to
intellectual rigor and all the rest, I can tell you.)

Evidently, this fact of impermanence is due to there being no ego there
to do any controlling. Thus, one comes experientially to the same
conclusion reached by the intellectual argumentation in SOM: that there
is no self or ego to control the mind. Unless perhaps it be an emergent
property of the loop structures among the various agents of SOM.
Cybernetics has to do with control, after all. (Are emergent properties
illusory? Ilya Prigogine probably says no. But the Buddhists say the
whole ball of wax is illusory, all mental process from top to bottom.)

Here, the relation between "free will" and creativity becomes more
accessible. Try substituting "creativity" for "free will" in all the
discussion thus far on this topic and see what it sounds like. It may
not be so easy to sustain the claim that "there is no creativity because
everything is either determined or random."
And although there is a
profound relation between creativity and "reaching into the random" (cf
Bateson's discussions of evolution and learning wrt double-bind theory),
that relation may say more about randomness than it does about
creativity.

If the elementary unit of information and of mind is a difference that
makes a difference (Bateson), then we characterize as random that in
which we can find no differences that make a difference. Randomness is
dependent on perspective. Changes in perspective and access to new
perspectives can instantly convert the random to the non-random or
structured. As we have seen in recent years, "chaos" is not random,
since we can discern in its nonlinearity differences that make a
difference. (Indeed, a cardinal feature of nonlinearity as I understand
it is that small differences of input can make very large differences of
output, the so-called "butterfly effect".) From the point of view of
personal creativity, "reaching into the random" often means reaching
into an irrelevant realm for analogy or metaphor, which has its own
structure unrelated in any obvious or known way to the problem domain.
(Cf. De Bono.) "Man's reach shall e'er exceed his grasp,/Else what's a
meta for?" (Bateson, paraphrasing Browning.)
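
[To make the nonlinearity claim concrete, a minimal sketch using the
logistic map, a standard textbook example not mentioned by Nevin: two
trajectories differing by one part in a billion diverge to order one.]

    # Sensitive dependence in the logistic map x' = r*x*(1 - x)
    r = 4.0
    a, b = 0.3, 0.3 + 1e-9
    for _ in range(40):
        a, b = r * a * (1 - a), r * b * (1 - b)
    print(abs(a - b))   # comparable to the attractor itself, not 1e-9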

It is interesting that the Buddhist experience of the Void--no ego, no
self, no distinctions to be found (only those made by mind)--is logically
equivalent to the view associated with Vedanta, Qabala, neoplatonism,
and other traditions, that there is but one Self, and that the personal
ego is an illusory reflection of That, i.e. of God. ("No distinctions
because there is no self vs other" is logically equivalent to "no
distinctions because there is but one Self and no other", the
singularity of zero.) SOM votes with the Buddhists, if it matters, once
you drop the presumed one-one correspondence of minds with brains.

On the neoplatonist view, there is one Will, and it is absolutely free.
It has many centers of expression. You are a center of expression for
that Will, as am I. Fully expressing your particular share of (or
perspective on) that Will is the most rewarding and fulfilling thing you
can do for yourself; it is in fact your heart's desire, that which you
want to be more than anything else. It is also the most rewarding and
beneficial thing you can possibly do for others; this follows directly
from the premise that there is but one Will. (This is thus a
perspective that is high in synergy, using Ruth Benedict's 1948 sense of
that much buzzed term.) You as a person are of course free not to
discover and not to do that which is your heart's desire, so the
artifactual, illusory ego has free will too. Its "desire" is to
continue to exist, that is, to convince you and everyone else that it is
real and not an illusion.

Whether you buy this or not, you can still appreciate and use the
important distinction between cleverness (self acting to achieve desired
arrangement of objectified other) and wisdom (acting out of the
recognition of self and other as one whole). I would add my voice to
others asking that we develop not just artificial cleverness, but
artificial wisdom. Winograd & Flores again point in this direction.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Wed 25 May 88 13:30:21-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Re: free will and introspection


For those interested in non-AI theories about symbols, the following is
a very quick (and only slightly incorrect) summary of Freud and Marx.

In Freud, the prohibition on incest forces children to express their
sexuality through symbolic means. Sexual desire is repressed in the
unconscious, leaving the symbols to be the center of people's attention.
People begin to be concerned with things for which there seems no
justification.

Marx observed that the act of exchanging objects in an economy forces us
to abstract from our individual labors to a symbol of labor in general
(money). The abstraction becomes embodied in capital, which people
constantly try to accumulate, forgetting about the value of the products
themselves.

Both Marx and Freud use the term `fetish' to refer to the process in which
symbols (of sex and labor) begin to form systems that operate
autonomously. In Freud's fetishism, someone may be obsessed with
looking at the opposite sex instead of actual love; in Marx, people are
interested in money instead of actual work. In both cases, we lose
control of something of our own creation (the symbol) and it dominates
us.

In both Freud and Marx, these processes operate unconsciously. The
notion of an unconscious (both Freudian and Marxian) bears directly on
our free will and trust in our perceptions, but not as absolutes. It
presents a different challenge than ``do we have it or do we not''. We
are partially free, but can become more free through an understanding of
unconscious processes. It's the same as becoming more ``free'' by being
able to manipulate our physical environment, except that the unconscious
refers to our psychical environment.

------------------------------

Date: 25 May 88 22:51:49 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: DVM's request for definitions

In article <894@maize.engin.umich.edu>, Brian Holtz writes:
> 3. volition: the ability to identify significant sets of options and to
> predict one's future choices among them, in the absence of any evidence
> that any other agent is able to predict those choices.
>
> There are a lot of implications to replacing free will with my notion of
> volition, but I will just mention three.
>
> - If my operationalization is a truly transparent one, then it is easy
> to see that volition (and now-dethroned free will) is incompatible with
> an omniscient god. Also, anyone who could not predict his behavior as
> well as someone else could predict it would no longer be considered to
> have volition.

Why does the other predictor have to be an AGENT?

"easy to see that ...?" Nope. You have to show that volition is
incompatible with perfect PAST knowledge first...

"no longer considered to have volition ..." I've just been reading a
book called "Predictable Pairing" (sorry, I've forgotten the author's
name), and if he's right it seems to me that a great many people do
not have volition in this sense. If we met Hoyle's "Black Cloud", and
it with its enormous computational capacity were to predict our actions
better than we did, would that mean that we didn't have volition any
longer, or that we had never had it?

What has free will to do with prediction? Presumably a dog is not
self conscious or engaged in predicting its activities, but does that
mean that a dog cannot have free will?

One thing I thought AI people were taught was "beware of the homunculus".
As soon as we start identifying parts of our mental activity as external
to "ourselves" we're getting into homunculus territory.

For me, "to have free will" means something like "to act in accord with
my own nature". If I'm a garrulous twit, people will be able to predict
pretty confidently that I'll act like a garrulous twit (even though I
may not realise this), but since I will then be behaving as I wish I
will correctly claim free will.

------------------------------

Date: 26 May 88 04:55:13 GMT
From: tektronix!sequent!mntgfx!msellers@bloom-beacon.mit.edu (Mike
Sellers)
Subject: Laws of Robotics (AI?) (was Re: Free Will & Self-Awareness)

I've always liked Asimov's Laws of Robotics; I suspect that they will remain
firmly entrenched in the design and engineering of ambulatory AI systems for
some time (will they ever be obsolete?). I have some comments on the
variations Barry proposed, however...

In article <31738@linus.UUCP>, bwk@mbunix.UUCP writes:
>
> I propose the following variation on Asimov:
>
> I. A robot may not harm a human or other sentient being,
> or by inaction permit one to come to harm.
>
> II. A robot may respond to requests from human beings,
^^^
> or other sentient beings, unless this conflicts with
> the First Law.

Shouldn't "may" be "must" here, to be imperative? Otherwise it would seem
to be up to the robot's discretion whether to respond to the human's requests.

>
> III. A robot may act to protect its own existence, unless this
> conflicts with the First Law.

Or the Second Law. Otherwise people could tell robots to destroy themselves
and the robot would obey. Of course, if the destruction was necessary to keep
a human from harm, it would obey to be in keeping with the First Law.

>
> IV. A robot may act to expand its powers of observation and
> cognition, and may enlarge its knowledge base without limit.

Unless such expansion conflicts with the First, Second, or Third (?) Laws.
This is a worthy addition, but unless constrained by the other rules it
contains within it the seeds of Prometheus (from the movie "Demon Seed" --
ick, what a title :-) or Colossus (from "The Forbin Project"). The last
thing we want is a robot that learns and cogitates at the expense of humans.

> Can anyone propose a further refinement to the above?
>
> --Barry Kort

In addition to what I've said above, I think that all references to generic
"sentient beings" should be removed. Either this is too narrow in meaning,
providing only for humans (which are already explicitly stated in the Law),
or it is too general, easily encompassing *artificial* sentient beings, i.e.
robots. This is precisely what the Laws were designed to prevent. I like
the intent, and hopefully some way of engendering general pacifism and
deference to humans, animals, and to some degree other robots can be found.
Perhaps a Fifth Law:
"A robot may not harm another robot, or by inaction permit
one to come to harm, unless such action or inaction would
conflict with the First Law."


Note that by only limiting the conflict resolution to the First Law, a robot
could not respond to a human's request to harm another robot unless by not
responding a human would come to harm (V takes precedence over II), and a
robot might well sacrifice its existence for that of another (V takes
precedence over III). Of course, this wouldn't necessarily prevent a military
commander from convincing a warbot that destroying a bunch of other warbots
was necessary to keep some humans from harm... I guess this is what is done
with human armies nowadays anyway.
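
[A hypothetical encoding of the precedence ordering argued above, with
each law reduced to a veto predicate checked in priority order; the
override subtlety Mike mentions, a First-Law need trumping a Fifth-Law
veto, is deliberately left out of this toy sketch.]

    # Laws listed highest priority first: I, then V, then II, then III
    LAWS = [
        ("I",   lambda act: act.get("harms_human", False)),
        ("V",   lambda act: act.get("harms_robot", False)),
        ("II",  lambda act: act.get("disobeys_human", False)),
        ("III", lambda act: act.get("self_destructive", False)),
    ]

    def permitted(action):
        for name, vetoes in LAWS:
            if vetoes(action):
                return False, name   # first (highest) law to object
        return True, None

    # An order to harm another robot is vetoed by Law V before Law II
    # is even consulted -- "V takes precedence over II".
    print(permitted({"harms_robot": True}))   # -> (False, 'V')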

Comments?

--
Mike Sellers ...!tektronix!sequent!mntgfx!msellers
Mentor Graphics Corp., EPAD msellers@mntgfx.MENTOR.COM
"Hi. So, can any of you make animal noises?"
-- the first thing Francis Coppola ever said to me

------------------------------

Date: 26 May 88 09:34:21 GMT
From: Gilbert Cockton <mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Reply-to: Gilbert Cockton
<mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Subject: Re: AIList Digest V7 #6 Reply to McCarthy from a minor menace


>There are three ways of improving the world.
>(1) to kill somebody
>(2) to forbid something
>(3) to invent something new.
This is a strange combination indeed. We could at least add
(4) to understand ourselves.
Not something I would ever put in the same list as murder, but I'm
sure there's a cast iron irrefutable logical reason behind it.

But perhaps this is what AI is doing, trying to improve our understanding
of ourselves. But it may not do this, because of
(2) it forbids something,
that is, any approach, any insight, which does not have a computable expression.
This, for me, is anathema to academic liberal traditions and thus
>as long as Mrs. Thatcher is around; I wouldn't be surprised if
>Cockton could persuade Tony Benn.
is completely inaccurate. Knowledge of contemporary politics needs to
be added to the AI ignorance list. Mrs. Thatcher has just cut a lot of IT
research in the UK, and AI is one area which is going to suffer. Tony Benn on
the other hand, was a member of the government which backed the Transputer
initiative. The Edinburgh Labour council has used the AI department's sign in
some promotional literature for industries considering locating in Edinburgh.
They see the department's expertise as a strength, which it is.

Conservatives such as Thatcher look for immediate value for money in research.
Socialists look for jobs. Academic liberals look for quality. I may only have
myself to blame if this has not been realised, but I have never advocated an
end to all research which goes under the heading of AI. I use some of it in my
own research, and would miss it. I have only sought to attack the arrogance of
the computational paradigm, the "pure" AI tradition where tinkerers play at the
study of humanity. Logicians, unlike statisticians, seem to lack the humility
required to serve other disciplines, rather than try to replace them. There is
a very valuable role for discrete mathematical modelling in human activities,
but like statistics, this modelling is a tool for domain specialists and not
an end in itself. Logic and pure maths, like statistics, is a good servant
but an appalling master.

>respond to precise criteria of what should be suppressed
Mindless application of the computational paradigm to
a) problems which have not yielded to stronger methods
b) problems which no other paradigm has yet provided any understanding of.
For b), recall my comment on statistics. If no domain specialism has
any empirical corpus of knowledge, AI has nothing to test itself
against. It is unfalsifiable, and thus likely to invent nothing.
On a), no one in AI should be ignorant of the difficulties in relating
formal logic to ordinary language, never mind non-verbal behaviour and
kinaesthetic reasoning. AI has to make a case for itself based on a
proper knowledge of existing alternative approaches and their problems.
It usually assumes it will succeed spectacularly where other very bright
and dedicated people have failed (see the intro. to Winograd and Flores).

> how they are regarded as applying to AI
"Pure" AI is the application of the computational paradigm to the study of
human behaviour. It is not the same as computational modelling in
psychology, as here empirical research cannot be ignored. AI, by isolating
itself from forms of criticism and insight, cannot share in the
development of an understanding of humanity, because its raison d'etre,
the adherence to a single paradigm, without question, without
self-criticism, without a humble relationship to non-computational
paradigms, prevents it ever disappearing in the face of its impotence.

>and what forms of suppression he considers legitimate.
It may be partly my fault if anyone has thought otherwise, but you should
realise that I respect your freedom of association, speech and publication.
If anyone has associated my arguments with ideologies which would sanction
repression of these freedoms, they are (perhaps understandably) mistaken.
There are three legitimate forms of "suppression":
a) freely willed diversion of funding to more appropriate disciplines
b) run down of AI departments with distribution of groups across
established human disciplines, with service research in maths. This
is how a true discipline works. It leads to proper humility,
scholarship and eclecticism.
c) proper attention to methodological issues (cf the Sussex
tradition), which will put an end to the sillier ideas.
AI needs to be more self-critical, like a real discipline.
Social activities such as (a) and (b) will only occur if the arguments
with which I agree (they are hardly original) get the better of "pure"
AI's defence that it has something to offer (in which case answer that guy's
request for three big breaks in AI research, you're not doing very well on
this one). It is not so much suppression, as withdrawal of encouragement.

>similar to Cockton's inhabit a very bad and ignorant book called "The Question
> of Artificial Intelligence" edited by Stephen Bloomfield, which I
>will review for "Annals of the History of Computing".
Could we have publishing information on both the book and the review please?
And why is it that AI attracts so many bad and ignorant books against it? If
you dive into AI topics, don't expect an easy time. Pure AI is attempting a
science of humanity and it deserves everything it gets. Sociobiology and
behaviourism attracted far more attention. Perhaps it's AI's turn. Every
generation has its narrow-minded theories which need broadening out.

AI is forming an image of humanity. It is a political act. Expect opposition.
Skinner got it, so will AI.

>The referee should prune the list of issues and references to a size that
> the discussants are willing to deal with.
And, of course, encoded in KRL! Let's not have anything which takes effort to
read, otherwise we might as well just go and study instead of program.

>The proposed topic is "AI and free will".
Then AI and knowledge-representation, then AI and Hermeneutics (anyone
read Winograd and Flores properly yet?), then AI and epistemology, then ..
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow

------------------------------

Date: Thu 26 May 88 14:16:09-EDT
From: Carl DeFranco <DEFRANCO@RADC-TOPS20.ARPA>
Subject: Free Will


The long standing discussion of Free Will has left me a little
confused. I had always considered Free Will a subject for the
philosophers and theologians, as in their knowledge domain,
the concept allows attribution of responsibility to
individuals for their actions. Thus, theologians may discuss
the concept of Sin as being a freely chosen action contrary to
some particular moral code. Without Free Will, there is no
sin, and no responsibility for action, since the alternatives
are total determinism, total randomness, or some mix of the
two to allow for entropy.

For my two cents, I have accepted Free Will as the ability to
make a choice between two (or possibly more) available
courses of action. This precludes such silly notions as the
will to defy gravity. Free will applies to choice, not to
plausibility. These choices will be guided by the individual's
experience and environment, a knowledge base if you would,
that provide some basis for evaluating the choices and the
consequences of choosing. To the extent that an individual
has been trained toward a particular behavior pattern, his/her
choices may be predicted with some probability. In other
circumstances, where there is no experience base or training,
choices will appear to be more random.

In general, people do what they please OR what pleases them.
It is this background guidance that changes from time to time,
and inserts the mathematical randomness into whatever model is
used to predict behavior. Today it may please me to follow
the standard way of thinking in exploring some concept.
Tomorrow I may be more pleased to head off in some oddball
direction. It is this "Free Will" choice, in my view, that
creates the intelligence in human beings. We may take in
information and examine it from several points of view,
selecting a course of action from those points, and adding the
results to our experience, i.e. we learn.

As I follow AI from the sideline in my job, I won't presume to
prescribe The Answer, but it would appear that true Artificial
Intelligence can be given to a computer when:
1. It can learn from its experience.
2. It can test a "What If" from its knowledge.
3. There is some limited range of allowable random
selection.
Perhaps I take a simplistic view, but there appear to be a
number of one-sided viewpoints, either philosophic or
technical.
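
[DeFranco's three conditions map neatly onto a standard epsilon-greedy
learner; the sketch below is that standard construction, with the
mapping to his numbering added as comments. It is my reading, not
anything he proposed.]

    import random

    class Learner:
        # (1) learns from its experience, (2) tests "what ifs" against
        # its knowledge, (3) keeps a limited range of random selection
        def __init__(self, actions, epsilon=0.1):
            self.value = {a: 0.0 for a in actions}  # experience base
            self.epsilon = epsilon                  # limited randomness

        def choose(self):
            if random.random() < self.epsilon:          # (3)
                return random.choice(list(self.value))
            # (2) compare expected outcomes recorded so far
            return max(self.value, key=self.value.get)

        def learn(self, action, outcome, rate=0.2):
            # (1) fold the result back into the experience base
            self.value[action] += rate * (outcome - self.value[action])

    agent = Learner(["standard way", "oddball direction"])
    for _ in range(100):
        a = agent.choose()
        agent.learn(a, outcome=1.0 if a == "oddball direction" else 0.5)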

Carl DeFranco
defranco@radc-tops20.arpa

------------------------------

Date: 26 May 88 18:27:47 GMT
From: uflorida!novavax!proxftl!bill@gatech.edu (T. William Wells)
Subject: Re: The Social Construction of Reality

In article <1157@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> In article <1666@pt.cs.cmu.edu> Anurag Acharya writes:

> >Come on, does anyone really believe that if he and his pals reach a
> >consensus on some aspect of the world - the world would change to suit
> >them ? That is the conclusion I keep getting out of all these nebulous
> >and hazy stuff about 'reality' being a function of 'social processes'.

No, they do not really believe it, but they would like you to
believe it.

> The idea that there is ONE world, ONE physical reality is flawed, and
> pace Kant, this 'noumenal' world is unknowable anyway, if it does exist.
> Thus, to ask if the world would change, this depends on whether you
> see it as a single immutable physical entity, or an ideology, a set of
> ideas held by a social group (e.g. Physicists whose ideas are often
> different to engineers).

Oh, yeah. First, Kant is a dead horse. Second, you are only
able to get away with this multiple reality nonsense by twisting
words all out of relationship to any real meaning.

> When a new consensus about the world is reached (scientific discovery
> to give it ONE ritual title), the world does change, as more often
> than not it changes the way that groups party to the knowledge
> interact with the world. We are in this world, not apart from it.

OK, answer me this: how in the world do they reach a consensus
without some underlying reality which they communicate through?

And this: logic depends on the idea of noncontradiction. If you
abandon that, you abandon logic. You assert that consensus
determines reality (whatever that means). Now, among those with
whom I hold a consensus, we agree that there is only one
reality. Yours agree that there are multiple realities. Either
logic is valid in which case the contradiction is not allowed or
logic is not valid in which case your proposition has no evidence
to validate itself with. (And do not bother with "non-standard"
logics. I can construct the same paradox in any one which
presumes to assert anything about anything.)

> Yes it is all nebulous and hazy, but it's an intellectual handicap to
> only be able to handle things which are clear cut and 'scientific'.

It is the desire to abdicate the requirement to "handle things
which are clear cut"
which motivates sickos who propose consensus
reality schemes.

> There's something desperately insecure about scientists who need
> everything tied down before they think interaction with the socio-physical
> world is possible.

There is something desperately insecure about pseudo-philosophers
who need everything nebulous and hazy.

> Virtually everything we do, outside of technical production, is nebulous
> and hazy.

And that ("outside of technical production") contradicts
everything else you have said.

***

DO NOT BOTHER TO REPLY TO ME IF YOU WANT TO DEFEND CONSENSUS
REALITY. The idea is so sick that I am not even willing to reply
to those who believe in it.

As you have noticed, this is not intended as a counter argument
to consensus reality. Instead, it is intended as a red-hot
flame. My intended audience is not the consensus reality
perverts but those of you who have been misled by this kind of
s**t into wondering whether there is something to consensus
reality.

If you have any questions about the game the consensus reality
freaks are playing, you can send me e-mail. I have no intention
of further polluting the net by speaking about them. (At least
until next year when I will probably have to repost something
like this message again. Sigh.)

------------------------------

End of AIList Digest
********************
