AIList Digest            Thursday, 2 Jun 1988      Volume 7 : Issue 14 

Today's Topics:

Still More Free Will

----------------------------------------------------------------------

Date: 30 May 88 14:41:18 GMT
From: mcvax!ruuinf!piet@uunet.uu.net (Piet van Oostrum)
Subject: Re: More Free Will

In article <532@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:

If free will has created civilization as we know it, then it must be
accepted with mixed emotions. This means that Hitler, Stalin, some of
the Catholic Popes during the middle ages and others have created a
great deal of havoc that was not good. One of the prime reasons for AI
is to perhaps develop systems that prevent things like this from
happening. If we with our free will (you said it, not me) can't seem to
create a decent world to live in, perhaps a machine without free will
operating within prescribed boundaries may do a better job. We sure
haven't done too well.

I agree we haven't done too well, but if these same persons (i.e. WE) are
going to design a machine, what makes you think this machine will do a
better job???

If the machine doesn't have free will, the designers must decide what
kind of decisions it will make, and those decisions will be based upon
their insights, ideas, morals, etc.

Or would you believe AI researchers (or scientists in general) are
inherently better than rulers, popes, Nazis, communists, or Catholics, to
name a few?

Hitler and Stalin had scientists work for them, and there are now AI
researchers working on war-robots and similar nasty things. That doesn't give
ME much hope from that area.

--
Piet van Oostrum, Dept of Computer Science, University of Utrecht
Padualaan 14, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands
Telephone: +31-30-531806 UUCP: ...!mcvax!ruuinf!piet

------------------------------

Date: 30 May 88 16:40:28 GMT
From: dvm@yale-zoo.arpa (Drew Mcdermott)
Subject: Free will

More on the self-modeling theory of free will:

Since no one seems to have understood my position on this topic,
I will run the risk that no one cares about my position, and try
to clarify.

Sometimes parties to this discussion talk as if "free will" were
a new kind of force in nature. (As when Biep Durieux proposed that
free will might explain probability rather than vice versa.) I am
sure I misrepresent the position; the word "force" is surely wrong
here (as is the word "new"). The misrepresentation is unavoidable;
this kind of dualism is simply not a live option for me. Nor can
I see why it needs to be a perennially live option on an AI discussion
bulletin board.

So, as I suggested earlier, let's focus on the question of free will
within the framework of Artificial Intelligence. And here it
seems to me the question is, How would we tell an agent with free
will from an agent without it? Two major strands of the discussion
seem completely irrelevant from this standpoint:

(1) Determinism vs. randomness. The world is almost
certainly not deterministic, according to quantum mechanics. Quantum
mechanics may be false, but Newtonian mechanics is certainly false,
so the evidence that the world is deterministic is negligible.
(Unless the Everett-Wheeler interpretation of quantum mechanics is true,
in which case the world is a really bizarre place.) So, if determinism
is all that's bothering you, you can relax. Actually, I think what's
really bothering people is the possibility of knowledge (traditionally,
divine knowledge) of the outcomes of their future decisions, which has
nothing to do with determinism.

(2) My introspections about my ability to control my thoughts or
whatnot. There is no point in basing the discussion on such evidence,
until we have a theory of what conscious thoughts are. Such a theory
must itself start from the outside, looking at a computational agent
in the world and explaining what it means for it to have conscious
thoughts. That's a fascinating topic, but I think we can solve the
free will problem with less trouble.

So, what makes a system free? To the primitive mind, free decisions
are ubiquitous. A tornado decides to blow my house down; it is worth
trying to influence its decision with various rewards or threats.
But nowadays we know that the concept of decision is just out of place
in reasoning about tornados. The proper concepts are causal; if we
can identify enough relevant antecedent factors, we can predict (and
perhaps someday control) the tornado's actions. Quantum mechanics
and chaos set limits to how finely we can predict, but that is
irrelevant.

Now we turn to people. Here it seems as if there is no need to do
away with the idea of decision, since people are surely the paradigmatic
deciders. But perhaps that attitude is "unscientific." Perhaps the
behaviorists are right, and the way we think about thunderstorms is
the right way to think about people. If that's the actual truth, then
we should be tough-minded and acknowledge it.

It is *not* the truth. Freedom gets its toehold from the fact that
it is impossible for an agent to think of itself in terms of causality.
Contrast my original bomb scenario with this one:

R sees C wander into the blast area, and go up to the bomb. R knows
that C knows all about bombs, and R knows that C has plenty of time to
save itself, so R decides to do nothing. (Assume that preventing the
destruction of other robots gets big points in R's utility function.)

In this case, R is reasoning about an agent other than itself. Its problem
is to deduce what C will actually do, and what C will actually suffer. The
conclusion is that C will prosper, so R need do nothing. It would
be completely inappropriate for R to reason this way about itself. Suppose
R comes to realize that it is standing next to a bomb, and it reasons as
follows:

R knows all about bombs, and has plenty of time to save itself, so I need
do nothing.

Its reasoning is fallacious, because it is of the wrong kind. R is not
being called on to deduce what R will do, but to be a part of the causal
fabric that determines what R will do, in other words: to make a
decision. It is certainly possible for a robot to engage in a reasoning
pattern of this faulty kind, but only by pretending to make a decision,
inferring that the decision will be made like that, and then not
carrying it out (and thus making the conclusion of the inference false).
Of course, such a process is not that unusual; it is called "weakness of
the will" by philosophers. But it is not the sort of thing one would be
tempted to call an actual decision. An actual decision is a process of
comparative evaluation of alternatives, in a context where the outcome
of the comparison will actually govern behavior. (A robot cannot decide
to stop falling off a cliff, and an alcoholic or compulsive may not
actually make decisions about whether to cease his self-destructive
behavior.)
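
As a toy illustration (a Python sketch, not anything from the original
posting; the alternatives and utility numbers are invented), the
distinction can be put in a few lines:

    # An actual decision: comparative evaluation whose outcome governs behavior.
    def decide_and_act(alternatives, utility, execute):
        best = max(alternatives, key=utility)
        execute(best)                      # the comparison's winner is carried out
        return best

    # "Weakness of the will": the winner is inferred but not carried out,
    # which falsifies the inference.
    def weakness_of_will(alternatives, utility, execute, compulsion):
        predicted = max(alternatives, key=utility)
        execute(compulsion)                # acts against the evaluation
        return predicted

    utility = {"walk away from bomb": 1.0, "stand next to bomb": -10.0}.get
    decide_and_act(["walk away from bomb", "stand next to bomb"],
                   utility, execute=print)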

This scenario is one way for a robot to get causality wrong when
reasoning about itself, but there is a more fundamental way, and that is
to just not notice that R is a decision maker at all. With this
misperception, R could tally its sources of knowledge about all
influences on R's behavior, but it would miss the most important one,
namely, the ongoing alternative-evaluation process. Of course, there are
circumstances in which this process is in fact not important. If R is
bound and gagged and floating down a river, then it might as well
meditate on hydrodynamics, and not work on a decision. But most of the
time the decision-making process of the robot is actually one of the
causal antecedents of its future. And hence, to repeat the central
idea, *there is no point in trying to think causally about oneself while
making a decision that is actually part of the causal chain. Any system
that realizes this has free will.*

This theory accounts for why an agent must think of itself as outside
the causal order of things when making a decision. However, it need not
think of other agents this way. An agent can perfectly well think of
other agents' behavior as caused or uncaused to the same degree the
behavior of a thunderstorm is caused or uncaused. There is a
difference: One of the best ways to cause a decision-making agent to do
something is to give him a good reason to do it, whereas this strategy
won't work with thunderstorms. Hence, an agent will do well to sort
other systems into two categories, those that make free decisions and
those that don't, and deal with them differently.

By the way, once a decision is made there is no problem with its maker
thinking of it purely causally, in exactly the same way it thinks about
other decision makers. An agent can in principle see *all* of the
causal factors going into its own past decisions, although in practice
the events of the past will be too random or obscure for an exhaustive
analysis. It is surely not dehumanizing to be able to bemoan that if
only such-and-such had been brought to my attention, I would have
decided otherwise than I did, but, since it wasn't, I was led
inexorably to a wrong decision.

Now let me deal with various objections:

(1) Some people said I had neglected the ability of computers to do
reflexive meta-reasoning. As usual, the mention of meta-reasoning makes
my head swim, but I shall try to respond. Meta-reasoning can mean
almost anything, but it usually means escaping from some confining
deductive system in order to reason about what that system ought to
conclude. If this is valuable, there is no reason not to use it. But
my picture is of a robot faced with the possibility of reasoning about
itself as a physical system, which is in general a bad thing to do.
The purpose of causal-exemption flagging is to shut pointless reasoning
down, meta or otherwise.

So, when O'Keefe says:

So the mere possibility of an agent having to appear to simulate itself
simulating itself ... doesn't show that unbounded resources would be
required: we need to know more about the nature of the model and the
simulation process to show that.

I am at a loss. Any system can simulate itself with no trouble. It
could go over past or future decisions with a fine-tooth comb, if it
wanted to. What's pointless is trying to simulate the present period
of time. Is an argument needed here? Draw a mental picture: The robot
starts to simulate, and finds itself simulating ... the start of a
simulation. What on earth could it mean for a system to figure out
what it's doing by simulating itself?
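
A sketch of the regress (invented code, offered only to make the mental
picture concrete): to figure out what it is doing now, the system
simulates itself, and the first thing the simulated self does is start
the same simulation.

    import sys

    def figure_out_what_im_doing_now():
        # What am I doing right now? Starting a simulation of myself.
        # So the simulation's first step is ... starting a simulation.
        return simulate(figure_out_what_im_doing_now)

    def simulate(activity):
        return activity()

    sys.setrecursionlimit(100)             # keep the demo short
    try:
        figure_out_what_im_doing_now()
    except RecursionError:
        print("simulating the present moment never bottoms out")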

(2) Free will seems on this theory to have little to do with
consciousness or values. Indeed it does not. I think a system could
be free and not be conscious at all; and it could certainly be free and
not be moral.

What is the minimal level of free will? Consider a system for
scheduling the movement of goods into and out of a warehouse. It has to
synchronize its shipments with those of other agents, and let us
suppose that it is given those other shipments in the form of various
schedules that it must just work around. From its point of view, the
shipments of other agents are caused, and its own shipments are to be
selected. Such a system has what we might call *rudimentary* free
will. To get full-blown free will, we have to suppose that the system
is able to notice the discrepancy between boxes that are scheduled to
be moved by someone else, and boxes whose movements depend on its
decisions. I can imagine all sorts of levels of sophistication in
understanding (or misunderstanding) the discrepancy, but just noticing
it is sufficient for a system to have full-blown free will. At that
point, it will have to realize that it and its tools (the things it
moves in the warehouse) are exempt from causal modeling.
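
A minimal sketch of the rudimentary case (a hypothetical representation,
not from the posting: one dock, integer hours): the other agents'
shipments enter as fixed data to be worked around, while the system's own
shipments are variables it selects.

    # Caused: other agents' dock times, given as fixed facts.
    others_schedule = {9, 11, 14}

    def schedule_own(jobs, occupied, horizon=24):
        """Select (rather than predict) times for the system's own shipments."""
        chosen = {}
        for job in jobs:
            slot = next(t for t in range(horizon) if t not in occupied)
            chosen[job] = slot             # selected, not deduced
            occupied = occupied | {slot}
        return chosen

    print(schedule_own(["inbound widgets", "outbound gadgets"], others_schedule))

Full-blown free will, on this account, would require the system itself to
notice that one set of movements appears as immutable input data while the
other appears only as its own output.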

(3) Andrew Gelsey has pointed out that a system might decide what to
do by means other than simulating various alternative courses of action.
For instance, a robot might decide how hard to hit a billiard ball by
solving an equation for the force required. In this case, the
asymmetry appears in what is counted as an independent variable (i.e.,
the force administered). And if the robot notices and appreciates the
asymmetry, it is free.
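
For concreteness (a made-up impulse model, not Gelsey's actual example):
if the cue imparts impulse F*dt = m*v, the robot solves for F. The target
velocity is a goal; the force is the variable the robot treats as its own
to set.

    # Toy impulse model: F * dt = m * v, solved for the independent variable F.
    def force_required(mass_kg, target_velocity_ms, contact_time_s):
        return mass_kg * target_velocity_ms / contact_time_s

    print(force_required(0.17, 2.0, 0.002))   # 170.0 newtons for a 0.17 kg ball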

(4) David Sher has objected

If I understand [McDermott's theory] correctly it runs like this:
To plan one has a world model including future events.
Since you are an element of the world then you must be in the model.
Since the model is a model of future events then your future actions
are in the model.
This renders planning unnecessary.
Thus your own actions must be excised from the model for planning to
avoid this "singularity."

Taken naively, this analysis would prohibit multilevel analyses such
as are common in game theory. A chess player could not say things like
if he moves a6 then I will move Nc4 or Bd5 which will lead ....

The response to this misreading should be obvious. There are two ways
to think about my future actions. One way is to treat them as
conditional actions, begun now, and not really future actions at all.
(Cf. the notion of strategy in game theory.)

The more interesting insight is that an agent can reason about its
future actions as if they were those of another agent. There is no
problem with doing this; the future is much like the past in this
respect, except we have less information about it. A robot could
reason at its leisure about what decision it would probably make if
confronted with some future situation, and it could use an arbitrarily
detailed simulation of itself to do this reasoning, provided it has
time to run it before the decision is to be made. But all of this
self-prediction is independent of actually making the decision. When
the time comes to actually make it, the robot will find itself free
again. It will not be bound by the results of its simulation. This
may seem like a non sequitur; how could a robot not faithfully execute
its program the same way each time it is run? There is no need to
invoke randomness; the difference between the two runs is that the
second one is in a context where the results of the simulation are
available. Of course, there are lots of situations where the decision
would be made the same way both times, but all we require is that the
second be correctly classified as a real -- free -- decision.

I find Sher's "fix" to my theory more dismaying:

However we can still make the argument that Drew was making; it's just
more subtle than the naive analysis indicates. The way the argument
runs is this:
Our world model is by its very nature a simplification of the real
world (the real world doesn't fit in our heads). Thus our world model
makes imperfect predictions about the future and about consequence.
Our self model inside our world model shares in this imperfection.
Thus our self model makes inaccurate predictions about our reactions
to events. We perceive ourselves as having free will when our self
model makes a wrong prediction.

This is not at all what I meant, and seems pretty shaky on its own
merits. This theory makes an arbitrary distinction between an agent's
mistaken predictions about itself and its mistaken predictions about
other systems. I think it's actually a theory of why we tend to
attribute free will to so many systems, including thunderstorms. We
know our freedom makes us hard to predict, and so we attribute freedom
to any system we make a wrong prediction about. This kind of paranoia
is probably healthy until proven false. But the theory doesn't explain
what we think free will is in the first place, or what its explanatory
force is in explaining wrong predictions.

Free will is not due to ignorance. Imagine that the decision maker is a
robot with a very routine environment, so that it often has complete
knowledge both of its own listing and of the external sensory data it
will be receiving prior to a decision. So it can simulate itself to any
level of detail, and it might actually do that, thinking about decisions
in advance as a way of saving time later when the actual decision had
to be made. None of this would allow it to avoid making free
decisions.
-- Drew McDermott

------------------------------

Date: Mon, 30 May 88 15:05:47 EDT
From: Bharat.Dave@CAD.CS.CMU.EDU
Subject: reconciling free will and determinism


Perhaps the following quote from "Chaos" by J. Gleick may help reconcile
the apparently dichotomous concepts of determinism and free will. This is
more of a *metaphor* than a rigorous argument. Discussing the
work of a group of people at UCSC on chaotic systems, the author quotes
Doyne Farmer [p. 251].


Farmer said, "On a philosophical level, it struck me as an
operational way to define free will, in a way that allowed you
to reconcile free will with determinism. The system is
deterministic, but you can't say what it's going to do next...

Here was one coin with two sides. Here was order, with
randomness emerging, and then one step further away was
randomness with its own underlying order."


Two other themes that pop up quite often in this book are the apparent
differences between behavior at micro and macro scales, and the sensitive
dependence of the behavior of dynamical systems on initial conditions.
The choice of vantage point significantly affects what you see. At the
molecular level, we are all a stinking mess with highly boring and
uniform behavior; on a different level, I can't predict with certainty
how most people will respond to this message.
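
The standard toy example of this sensitivity (the logistic map, not
something from Gleick's quoted passage) fits in a few lines of Python:
the rule is fully deterministic, yet two starting points differing by one
part in a billion soon disagree completely.

    # Logistic map x -> r*x*(1-x) at r = 4.0: deterministic chaos.
    r = 4.0
    a, b = 0.4, 0.4 + 1e-9                 # nearly identical initial conditions
    for _ in range(60):
        a, b = r * a * (1 - a), r * b * (1 - b)
    print(abs(a - b))                      # of order 1: prediction has broken down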

Seems like you can't be free unless you acknowledge determinism :-)

------------------------------

Date: 31 May 88 03:24:17 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Free will


Since this discussion has lost all relevance to anything anybody
is likely to actually implement in the AI field in the next twenty years
or so, could this be moved to talk.philosophy?

John Nagle

------------------------------

Date: 31 May 88 14:10:08 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Self Simulation

Drew McDermott's lengthy posting included a curious nugget. Drew
paints a scenario in which a robot engages in a simulation which
includes the robot itself as a causal agent in the simulation. Drew
asks, "What on earth could it mean for a system to figure out what
it's doing by simulating itself?"

I was captured by the notion of self-simulation, and started day-dreaming,
imagining myself as an actor inside a simulation. I found that, as the
director of the day-dream, I had to delegate free will to my simulated
self. The movie free-runs, sans script. It was just like being asleep.

So, perhaps a robot who engages in self-simulation is merely dreaming
about itself. That's not so hard. I do it all the time.

--Barry Kort

------------------------------

Date: 31 May 88 14:35:33 GMT
From: uhccux!lee@humu.nosc.mil (Greg Lee)
Subject: Re: Free will

Edward Lasker, in his autobiographical book about his experiences as
a chess master, describes a theory and philosophical tract
by his famous namesake, Emanuel Lasker, who was world chess
champion for many years. It concerned a hypothetical being,
the Machäide, which is so advanced and profound in its thought
that its choices have become completely constrained. It can
discern and reject all courses of action that are not optimal,
and therefore it must. It is so evolved that it has lost
free will.
Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: 31 May 88 14:39:41 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu (Carl F. Huber)
Subject: Re: Free Will & Self Awareness

In article <5323@xanth.cs.odu.edu> Warren E. Taylor writes:
>In article <1176@cadre.dsl.PITTSBURGH.EDU>, Gordon E. Banks writes:
>
>"Spanking" IS, I repeat, IS a form of redesigning the behavior of a child.
>Many children listen to you only when they are feeling pain or are anticipating
>the feeling of pain if they do not listen.

> Also, pain is often the only teacher a child will listen to. He

From what basis do you make this extraordinary claim? Experience? Or do you
have some reputable publications to mention? I would like to see the studies.
I also assume that "some" refers to the 'average' child - not pathological
exceptions.

> I have an extremely hard-headed nephew
>who "deserves" a spanking quite often because he is doing something that is
>dangerous or cruel or simply socially unacceptable. He is also usually
>maddeningly defiant.
^^^^^^^^^^^^^^^^^^^
Most two to six year olds are. How old is this child? 17? What other methods
have been tried? Spanking most generally results from frustrated parents
who believe they have "tried everything", while they actually haven't
begun to scratch the surface of what works.

>learns to associate a certain action with undesirable consequences. I am not
>the least bit religious, but the old Biblical saying of "spare the rod..." is

right, so let's not start something we'll regret, a la .psychology.
>
>Flame away.
> Warren.

voila. cheers.

-carl

------------------------------

Date: 31 May 88 15:27:11 GMT
From: mailrus!caen.engin.umich.edu!brian@ohio-state.arpa (Brian Holtz)
Subject: Re: DVM's request for definitions


In article <1020@cresswell.quintus.UUCP>, ok@quintus.UUCP (Richard A.
O'Keefe) writes:
> In article <894@maize.engin.umich.edu>, brian@caen.engin.umich.edu (Brian
> Holtz) writes:
> > 3. volition: the ability to identify significant sets of options and to
> > predict one's future choices among them, in the absence of any evidence
> > that any other agent is able to predict those choices.
> >
> > There are a lot of implications to replacing free will with my notion of
> > volition, but I will just mention three.
> >
> > - If my operationalization is a truly transparent one, then it is easy
> > to see that volition (and now-dethroned free will) is incompatible with
> > an omniscient god. Also, anyone who could not predict his behavior as
> > well as someone else could predict it would no longer be considered to
> > have volition.

[following excerpts not in any particular order]

> For me, "to have free will" means something like "to act in accord with
> my own nature". If I'm a garrulous twit, people will be able to predict
> pretty confidently that I'll act like a garrulous twit (even though I
> may not realise this), but since I will then be behaving as I wish I
> will correctly claim free will.

Recall that my definition of free will ("the ability to make at least some
choices that are neither uncaused nor completely determined by physical
forces") left little room for it to exist. Your definition (though I doubt
you will appreciate being held to it this strictly) leaves too much room:
doesn't a falling rock, or the average computer program, "act in accord with
[its] own nature"?

> One thing I thought AI people were taught was "beware of the homunculus".
> As soon as we start identifying parts of our mental activity as external
> to "ourselves" we're getting into homunculus territory.

I agree that homunculi are to be avoided; that is why I relegated "the
ability to make at least some choices that are neither uncaused nor
completely determined by *external* physical forces" to being a definition
not of free will, but of "self-determination". The free will that you are
angling for sounds a lot like what I call self-determination, and I would
welcome any efforts to sharpen the definition so as to avoid the
externality/internality trap. So until someone comes up with a definition
of free will that is better than yours and mine, I think the best course is
to define free will out of existence and take my "volition" as the
operationalized designated hitter for free will in our ethics.

> What has free will to do with prediction? Presumably a dog is not
> self conscious or engaged in predicting its activities, but does that
> mean that a dog cannot have free will?

Free will has nothing to do with prediction; volition does. The question of
whether a dog has free will is a simple one with either your definition *or*
mine. By my definition, nothing has free will; by yours, it seems to me
that everything does. (Again, feel free to refine your definition if I've
misconstrued it.) A dog would seem to have self-determination as I've
defined it, but you and I agree that my definition's reliance on
ex/in-ternality makes it a suspect categorization. A dog would clearly not
have volition, since it can't make predictions about itself. And since
volition is what I propose as the predicate we should use in ethics, we are
happily exempt from extending ethical personhood to dogs.

> "no longer considered to have volition ..." I've just been reading a
> book called "Predictable Pairing" (sorry, I've forgotten the author's
> name), and if he's right it seems to me that a great many people do
> not have volition in this sense. If we met Hoyle's "Black Cloud", and
> it with its enormous computational capacity were to predict our actions
> better than we did, would that mean that we didn't have volition any
> longer, or that we had never had it?

A very good question. It would mean that we no longer had volition, but
that we had had it before. My notion of volition is contingent, because it
depends on "the absence of any evidence that any other agent is able to
predict" our choices. What is attractive to me about volition is that it
would be very useful in answering ethical questions about the "free will"
(in the generic ethical sense) of arbitrary candidates for personhood: if
your AI system could demonstrate volition as defined, then your system would
have met one of the necessary conditions for personhood. What is unnerving
to me about my notion of volition is how contingent it is: if Hoyle's "Black
Cloud" or some prescient god could foresee my behavior better than I could,
I would reluctantly conclude that I do not even have an operational
semblance of free will. My conclusion would be familiar to anyone who
asserts (as I do) that the religious doctrine of predestination is
inconsistent with believing in free will. I won't lose any sleep over this,
though; Hoyle's "Black Cloud" would most likely need to use analytical
techniques so invasive as to leave little of me to rue my loss of
volition.

------------------------------

Date: 31 May 88 15:51:48 GMT
From: mind!thought!ghh@princeton.edu (Gilbert Harman)
Subject: Re: Free will

In article <17470@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
> Since this discussion has lost all relevance to anything anybody
>is likely to actually implement in the AI field in the next twenty years
>or so, could this be moved to talk.philosophy?
>
> John Nagle


Drew McDermott's suggestion seems highly relevant to
implementations while offering a nice approach to at least
one problem of free will. (It seems clear that people have
been worried about a number of different things under the
name of "free will".) How about keeping a discussion of
McDermott's approach here and moving the rest of the
discussion to talk.philosophy?

Gilbert Harman
Princeton University Cognitive Science Laboratory
221 Nassau Street, Princeton, NJ 08542

ghh@princeton.edu
HARMAN@PUCC.BITNET

------------------------------

Date: Tue, 31 May 88 17:08 EDT
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: RE: Free Will and Self-Awareness


>From: bwk@mitre-bedford.arpa (Barry W. Kort)
[...]
>It is not clear to me whether aggression is instinctive (wired-in)
>behavior or learned behavior.

You might want to take a look at the book >On Aggression< by Konrad
Lorenz. I don't have the complete reference with me, but can supply
upon request. I read it back in 1979, but if I recall correctly, one of the
primary theses set forth in the book is that aggression is indeed
instinctive in all animal life forms, including humans, and serves
both to defend and perpetuate the species. Voluminous citations of
purposeful and useful aggressive behavior in many species are provided.
I think he also philosophizes on how we as thinking and peace-loving
people (with free will (!)) can make use of our conscious recognition of
our innate aggression to keep it at appropriate levels of manifestation.
I became very excited about his ideas at the time.

-Kurt Godden
GM Research
godden@gmr.com

------------------------------

Date: Tue, 31 May 88 18:48:56 -0700
From: peck@Sun.COM
Subject: free will

First of all, i want to say that i'm probably in this camp:
> 4. (Minsky et al.) There is no such thing as free will. We can dispense
> with the concept, but for various emotional reasons we would rather not.

I haven't followed all this discussion, but it seems the free-will protagonists'
basic claim is that "choices" are made randomly or by a homunculus,
and the physicists claim that the homunculus has no free will either.

Level 1 (psychological): an intelligent agent first assumes that it has
choices, and reasons about the results of each and determines predictively
the "best" choice. ["best" determined by some optimization criterion function]
This certainly looks like free-will, especially to those experiencing it
or introspecting on it.

Level 0 (physics): the process or computation which produces the reasoning
and prediction at level 1 is deterministic. (plus or minus quantum effects)
From level 0 looking up to level 1, it's hard to see where free will comes in.
Do the free-will'ists contend that the mind retroactively controls the
laws of physics? A more perspicuous view would be that the mind distorts its
own perception of itself to believe that it is immune to the laws of physics.
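
One way to see the two levels side by side (an invented sketch, with a
made-up criterion function): the very same procedure reads at level 1 as
surveying options and picking the "best", and at level 0 as a
deterministic computation returning the same output for the same inputs.

    # Level 1 reading: identify options, predict results, choose the "best".
    # Level 0 reading: a deterministic function; no randomness, no homunculus.
    def choose(options, predict, criterion):
        return max(options, key=lambda option: criterion(predict(option)))

    options = ["duck", "run", "freeze"]
    predict = {"duck": 0.7, "run": 0.9, "freeze": 0.1}.get
    print(choose(options, predict, criterion=lambda outcome: outcome))  # "run"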


My real question is: Does it matter whether man or machine has free will?
In what way does the existence of free will make a more intelligent actor,
a better information processor, or a better controller of processes?

If an agent makes good decisions or choices, or produces control outputs,
stores or interprets information, or otherwise produces behaviors,
What does it matter (external to that agent) whether it has free will?
What does it matter *internal* to that agent?
Does it matter if the agent *believes* it has free will?

[For those systems sophisticated enough to introspect, *believing* to have
free will is useful, at least initially (was this Minsky's argument?).]

Are there any objective criteria for determining if an agent has free-will?

[If, as i suspect, this whole argument is based on the assumption that:
Free-will iff (not AI), then it seems more feasible to work on the
AI vs (not AI) clause, and *then* debate the free-will clause.]


My brief response to the moral implication of free will:
For those agents that can accept the non-existence of free will and still
function reliably and intelligently (i.e. the "enlightened few"), the concept is
not necessary. For others (the "majority"), the concept of free will,
like that of God, and sin, and laws, and purpose, and the rest of what
Vedanta/Buddhists would call the "Illusion", is either necessary or useful.
In this case, it is important *not* to persuade someone that they do
not have free will. If they cannot figure out the reasons and ramifications
of that little gem of knowledge, it is probably better that they not be
burdened. yes, a little knowledge *is* a dangerous thing.

---
My other bias: the purpose of intelligence is to produce more flexible, more
capable controllers of processes (the first obvious one is the biochemical
plant that is your body). These controllers are of course mostly
characterized by their control of information: how well they understand or
model the environment or the controlled process determines how well they can
predict or control that environment or process. So, intelligence is
inexorably tied up with information processing (interpreting, storing,
encoding/decoding), and decision making.

From this point of view, the more interesting questions are: What is the
criterion function (utility function, feedback function, or whatever you call
it); how is it formed; how is it modified; how is it evaluated?

The question of evaluation is an important difference between artificial
intelligence and organic intelligence. In people, the evaluation is done by
the "hardware"; it is modified as the body is modified, biochemically.
It is the same biochemistry that the evaluation function is trying to control!

Thought for the day:
If it turns out that the criterion function is designed to perpetuate
*itself* (the function, not merely its agent), by arranging to have itself
be modified as a result of the actions based on its predictive evaluations
[i.e., it is self-serving and self-perpetuating, just as genes/memes/viruses are],
would that help explain why choices seem indeterminate and "free"?

------------------------------

Date: 1 Jun 88 09:38:00 EDT
From: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Subject: Free will - the metaphysics continues...


Drew McDermott writes:

> I believe most of the confusion about this concept comes from there
> not being any agreed-upon "common sense" of the term "free will." To
> the extent that there is a common consensus, it is probably in favor
> of dualism, the belief that the absolute sway of physical law stops at
> the cranium. Unfortunately, ever since the seventeenth century, the
> suspicion has been growing among the well informed that this kind of
> dualism is impossible. ...
>
> If we want to debate about AI versus dualism, ...we can. [Let's]
> propose technical definitions of free will, or propose dispensing with
> the concept altogether. ...
>
> I count four proposals on the table so far:
>
> 1. (Proposed by various people) Free will has something to do with randomness.
>
> 2. (McCarthy and Hayes) When one says "Agent X can do action A," or
> "X could have done A,"
>
> 3. (McDermott) To say a system has free will is to say that it is
> "reflexively extracausal,"
>
> 4. (Minsky et al.) There is no such thing as free will.

I wish to respond somewhat indirectly by trying to describe the
"classical" position - ie to present a paradigm case of free will.
Thus, we will have an account of *sufficient* conditions for free
will. Readers may then consider whether these conditions are
necessary - whether we can back off to a less demanding case, and
still have free will. I suspect the correct answer is "no", but
I'll not argue that point too thoroughly.

*** *** *** *** *** ***

Brief version: Free will is the ability of a conscious entity to make
free decisions. The decision is free, in that, although the entity
causes the decision, nothing causes the entity to make the decision.

*** *** *** *** *** ***

(Rough analogy: an alpha particle is emitted (caused) by the decay of
a nucleus, but nothing caused the nucleus to decay and emit the
particle - the emitting [deciding] is uncaused).

There's an unfortunate ambiguity in the word "decision" - it can mean
the outcome (the decision was to keep on going), or the process (his
decision was swift and sure). Keeping these straight, it is the
decision-process, the making of the decision-outcome, that is
uncaused. The decision-outcome is, of course, caused (only) by the
decision-process.

Discussion: I'm going to opt for breadth at the expense of depth -
please read in a non-nitpicking spirit.

1. Randomness - free will is related to randomness only in that both
are examples of acausality. True, my uncaused decision is "random"
wrt the physical world, and/or the history of my consciousness - ie
not absolutely predictable therefrom. That doesn't mean it's random,
in the stronger sense of being a meaningless event that popped out of
nowhere - see next item.

2. Conscious entity - free will is a feature which only a conscious
entity (like you and me, and maybe your dog) can have - can anyone
credit an unconscious entity with free will, even if it "makes a
random decision" in some derivative sense (eg a chess-playing program
which uses QM processes to vary its moves) ? "Consciousness is a
problematic concept" you say? - well, yes, but not as much so as is
free will - and I think it's only problematic to those who insist that
it be reduced to more simple concepts. There ain't any.
Free decisions "pop out" of conscious entities, not nuclei. If
you don't know what a conscious entity is, you're not reading this.
No getting around it, free will brings us smack up to the problem of
the self - if there are no conscious selves, there is no free will.
While it may be difficult to describe what it takes to be a conscious
self, at least we don't doubt the existence of selves, as we may the
existence of free will. So the strategy here is to take selves as a
given (for now at least) and then to say under what conditions these
selves have free will.

3. Dualism - I believe we can avoid this debate; I maintain that free
will requires consciousness. Whether consciousness is physical, we
can leave aside. I can't resist noting that Saul Kripke is probably
as "well-informed" as anyone, and last I heard, he was a dualist. It's
quite fashionable nowadays to take easy verbal swipes at dualism as an
emblem of one's sophistication. I suspect that some swipers might be
surprised at how difficult it is to concoct a good argument against
dualism.

4. Physics - Does the requirement for acausality require the
violation of known physical laws? Very unclear. First, note that all
kinds of macro-events are ultimately uncaused in that they stem from
ontologically random quantum events (eg radiation causing birth
defects, cancer...). Whether brain events magnify QM uncertainty in
this way no one really knows, but it's not to be ruled out. Further,
very little is understood of the causal relations between brain and
consciousness (hence the dualism debate). At any rate, the position
is that, for the conscious decision-process to be free, it must be
uncaused. If this turns out to violate physical laws as presently
understood, and if the present understanding turns out to be correct,
then this just shows that there is no free will.

5. No denial of statistical probabilities or influence - None of
the above denies that allegedly free deciders are, in fact, quite
predictable (probabilistically). It is highly unlikely that I will
decide to put up my house for sale tomorrow, but I could. My
conscious reasons for not doing so do not absolutely determine that I
won't. I could choose to in spite of these reasons.

6. Free will as "could have decided otherwise" - This formulation is
OK as long as the strength of the "could" includes at least physical
possibility, not just logical possibility. If one could show that
my physical brain-state today physically determines that I will
(consciously) decide to sell my house tomorrow, it's not a free
decision.

7. A feature, not an event - I guess most will agree that free will
is a capability (like strong arms) which is manifested in particular
events - in the case of free will, the events are free decisions. An
entity might make some caused decisions and some free - free will only
says he/she can make free ones, not that all his/her decisions are
free.

8. Rationality / Intelligence - It may well be true that rationality
makes free will worth having but there's no reason not to consider the
will free even in the absence of a whole lot of intelligence.
Rationality makes strong arms more valuable as well, but one can still
have strong arms without it. As long as one can acausally decide, one
has free will.

9. Finding out the truth - Need I mention that the above is intended
to define what free will is, not necessarily to tell one how to go
about determining whether it exists or not. To construct a test
reflecting the above considerations is no small task. Moreover, one
must decide (!) where the burden of proof lies: a) I feel free,
therefore it's up to you to prove my feelings are illusory and that
all my decision-processes are caused, or b) the "normal" scientific
assumption is that all macro-events have proximate macro-causes, and
therefore it's up to me to show that my conscious processes are a
"special case" of some kind.

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 1 Jun 88 14:39:00 GMT
From: apollo!nelson_p%apollo.uucp@eddie.mit.edu
Subject: Free will and self-awareness


Gilbert Cockton posts:
>The test is easy, look at the references. Do the same for AAAI and
>IJCAI papers. The subject area seems pretty introspective to me.
>If you looked at an Education conference proceedings, attended by people who
>deal with human intelligence day in day out (rather than hack LISP), you
>would find a wide range of references, not just specialist Education
>references.
>You will find a broad understanding of humanity, whereas in AI one can
>often find none, just logical and mathematical references. I still
>fail to see how this sort of intellectual background can ever be
>regarded as adequate for the study of human reasoning. On what
>grounds does AI ignore so many intellectual traditions?

Because AI would like to make some progress (for a change!). I
originally majored in psychology. With the exception of some areas
in physiological pyschology, the field is not a science. Its
models and definitions are simply not rigorous enough to be useful.
This is understandable since the phenomena it attempts to address
are far too complex for the currently available intellectual and
technical tools. The result is that psychologists and sociologists
waste much time and money over essentially unresolvable philosophical
debates, sort of like this newsgroup! When you talk about an
'understanding of humanity' you clearly have a different use of
the term 'understanding' in mind than I do.

Let's move this topic to talk.philosophy!!

--Peter Nelson

------------------------------

Date: Wed, 01 Jun 88 12:48:12
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU
Subject: Free will et al.

>Thanasis Kehagias <ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU>
>Subject: AI is about Death
> . . .
>SUGGESTED ANSWER: if AI is possible, then it is possible to create
>intelligence. all it takes is the know-how and the hardware. also, the
>following inference is not farfetched: intelligence -> life. so if AI
>is possible, it is possible to give life to a piece of hardware. no ghost
>in the machine. no soul. call this the FRANKENSTEIN HYPOTHESIS, or, for
>short, the FH (it's just a name, folks!).

The fact that we have "created intelligence" (i.e. new human beings)
for thousands of years has not stopped the current controversy
or the discussion about the existence of the soul.
If, sometime in the future, we make artificially intelligent beings,
the discussions will go on the same as today. What is to prevent
a machine from having a soul?

The question cannot be decided in a discussion, because the two sides
argue from totally different axioms, or starting points.
The (non-)existence of the soul (or of free will) is not a conclusion,
but an axiom. It is much more difficult to convince people to
change their axioms than to accept a fact.

Regards,

Manuel Alfonseca, ALFONSEC at EMDCCI11

------------------------------

End of AIList Digest
********************
