
AIList Digest Volume 7 Issue 006

AIList Digest           Wednesday, 25 May 1988      Volume 7 : Issue 6 

Today's Topics:

Philosophy - Even More Free Will
(Last of three digests on this topic)

----------------------------------------------------------------------

Date: 13 May 88 21:57:50 GMT
From: dalcs!iisat!paulg@uunet.uu.net (Paul Gauthier)
Subject: Re: More Free Will


I'm sorry, but there is no free will. Every one of us is bound by the
laws of physics. No one can lift a 2000 tonne block of concrete with his
bare hands. No one can do the impossible, and in this sense none of us have
free will.

I am partially wrong there, as long as you don't WANT to do the impossible
you can have a sort of free will. But as soon as you feel that you want to
do something that cannot be done then your free will is gone.

Let me define my idea of free will: Free will is being able to take any
course of action which you want to take. So if you never want to take a
course of action which is forbidden to you, your free will is retained.

Free will is completely subjective. There is no 'absolute free will.' At
least that is how I look at free will. Since it is subjective to the person
whose free will is in question it follows that as long as this person THINKS
he is experiencing free will then he is. If he doesn't know that his decisions
are being made for him, and he THINKS they are his own free choices then he
is NOT being forced into a course of action he doesn't desire so he has free
will.

Anyway, I suppose there'll be a pile of rebuttals against this (gosh, I
hope so -- I love debates!).


--
==============================================================================
=== Paul Gauthier at International Information Services ===
=== {uunet, utai, watmath}!dalcs!iisat!paulg ===
========================================================================

------------------------------

Date: 13 May 88 21:58:15 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Raising Consciousness

I was heartened by Drew McDermot's well-written summary of the Free
Will discussion.

I have not yet been dissuaded from the notion that Free Will is
an emergent property of a decision system with three agents.

The first agent generates a candidate list of possible courses
of action open for consideration. The second agent evaluates
the likely outcome of pursuing each possible course of action,
and estimates its utility according to its value system. The
third agent provides a coin-toss to resolve ties.

Feedback from the real world enables the system to improve its
powers of prediction and to edit its value system.

If the above model is at all on target, the decision system would
seem to have free will. And it would not be unreasonable to hold
it accountable for its actions.
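
Kort's three-agent model is concrete enough to sketch in code. A minimal,
hypothetical rendering (the agent functions below are illustrative
stand-ins, not anything proposed in the discussion):

```python
import random

def decide(situation, generate, evaluate):
    """Pick a course of action: generate candidates (agent 1),
    score each by estimated utility (agent 2), and break ties
    with a coin toss (agent 3)."""
    candidates = generate(situation)
    scored = [(evaluate(situation, c), c) for c in candidates]
    best = max(score for score, _ in scored)
    ties = [c for score, c in scored if score == best]
    return random.choice(ties)  # the coin-toss agent resolves ties

# Hypothetical toy example: choosing how to commute.
options = lambda s: ["walk", "bike", "drive"]
utility = lambda s, c: {"walk": 1, "bike": 3, "drive": 3}[c]
print(decide("commute", options, utility))  # "bike" or "drive"
```

The coin toss only matters when the evaluator reports a tie; otherwise the
choice is fully determined by the value system, which is the point of the
model.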

On another note, I think it was Professor Minsky who wondered
how we stop deciding an issue. My own feeling is that we terminate
the decision-making process when a more urgent or interesting
issue pops up. The main thing is that our decision making machinery
chews on whatever dilemma captures its attention.

--Barry Kort

------------------------------

Date: 14 May 88 17:16:38 GMT
From: vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn)
Subject: Re: More Free Will

In article <2@iisat.UUCP> paulg@iisat.UUCP (Paul Gauthier) writes:
> I am partially wrong there, as long as you don't WANT to do the impossible
>you can have a sort of free will. But as soon as you feel that you want to
>do something that cannot be done then your free will is gone.
>
> [ other good comments deleted ]

I'm totally perplexed why the concept of *RELATIVE FREEDOM* is so
difficult for people to accept.

Can someone *please* rebut the following:

1) Absolute freedom is theoretically impossible. Absolute freedom is
perhaps best characterized as a uniform distribution on the real line.
This distribution is not well formed. The concept of *absolute
randomness* is not well defined. For example, it is *determined* that
the six sided die cannot roll a seven.
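
The claim in (1) can be made precise: there is no uniform probability
distribution on the whole real line, because a constant density cannot
integrate to one.

```latex
\int_{-\infty}^{\infty} c \, dx =
\begin{cases}
0, & c = 0 \\
\infty, & c > 0
\end{cases}
\qquad \Rightarrow \qquad
\text{no } c \ge 0 \text{ satisfies } \int_{-\infty}^{\infty} c \, dx = 1 .
```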

2) Absolute determinism, while theoretically possible, is both
physically impossible and theoretically unobservable. The computer is one
of the most determined systems we have, but a variety of low-level
errors, up to and including quantum effects, pollute its pure
determinism. Further, any sufficiently large determined system will
yield to chaotic processes, so that its determinism is itself
undeterminable.

3) Therefore all real systems are *RELATIVELY FREE*, and *RELATIVELY
DETERMINED*, some more, some less, depending on their nature, and on how
they are observed and modeled. Certainly all organisms, including
people, fall into this range.

4) Since when we qualify an adjective, the adjective still holds
(something a little hot is still hot), it *IS TRUE* that
*PEOPLE ARE FREE* and it *IS TRUE* that *PEOPLE ARE DETERMINED*. No
problem.

5) As biological systems evolve, their freedom increases, so that, e.g.,
people are more free than cats or snails. When people project this
relatively greater freedom into absolute freedom, they are
committing arrogance and folly. When people project ideological
(naive?) ideas about causality, and conclude that we are completely
determined, they are also committing folly.

Any takers?

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 14 May 88 22:46:42 GMT
From: sunybcs!stewart@boulder.colorado.edu (Norman R. Stewart)
Subject: Re: Free Will


paulg@iisat.UUCP (Paul Gauthier) writes:
> I'm sorry, but there is no free will. Every one of us is bound by the
>laws of physics. No one can lift a 2000 tonne block of concrete with his
>bare hands. No one can do the impossible, and in this sense none of us have
>free will.

I don't believe we're concerned with what we are capable of doing,
but rather with our capacity to desire to do it. Free will is a mental, not
a physical, phenomenon. What we're concerned with is whether the brain (nervous
system, organism, aggregation of organisms and objects) is just so many
atoms (sub-atomic particles? sub-sub-atomic particles?) bouncing around
according to the laws of physics (note: in a closed system), and behavior
simply the unalterable manifestation of the movement of these particles.





Norman R. Stewart Jr. *
C.S. Grad - SUNYAB * If you want peace, demand justice.
internet: stewart@cs.buffalo.edu * (of unknown origin)
bitnet: stewart@sunybcs.bitnet *

------------------------------

Date: 15 May 88 19:03:53 GMT
From: COYOTE.STANFORD.EDU!eyal@ucbvax.berkeley.edu (Eyal Mozes)
Subject: Re: Free Will & Self-Awareness

In article <434@aiva.ed.ac.uk> jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
>> Eyal Mozes writes:
>>all the evidence I'm familiar with points to the fact that it's
>>always possible for a human being to control his thoughts by a
>>conscious effort.
>
>It is not always possible. Think, if no simpler example will do, of
>obsessives. They have thoughts that persist in turning up despite
>efforts to eliminate them.

First of all, even an obsessive can, at any given moment, turn his
thoughts away from the obsession by a conscious effort. The problem of
obsession is that this conscious effort has to be much greater than
normal, and that, whenever the obsessive is not consciously
trying to avoid those thoughts, they do persist in turning up.

Second, an obsession is caused by anxiety and self-doubt, which are the
result of thinking the obsessive has done, or failed to do, in the
past. And, by deliberately training himself, over a period of time, in
more rational thinking, sometimes with appropriate professional help,
the obsessive can eliminate the excessive anxiety and self-doubt and
thus cure the obsession. So, indirectly, even the obsession itself is
under the person's volitional control.

>Or, consider when you start thinking about something. An idea just
>occurs and you are thinking it: you might decide to think about
>something, but you could not have decided to decide, decided to
>decide to decide, etc. so at some point there was no conscious
>decision.

Of course, the point at which you became conscious (e.g. woke up from
sleep) was not a conscious decision. But as long as you are conscious,
it is your choice whether to let your thoughts wander by chance
association or to deliberately, purposefully control what you're
thinking about. And whenever you stop your thoughts from wandering and
start thinking on a subject of your choice, that action is by conscious
decision.

This is why I consider Ayn Rand's theory of free will to be such an
important achievement - because it is the only free-will theory
directly confirmed by what anyone can observe in his own thoughts.

Eyal Mozes

BITNET: eyal%coyote@stanford
ARPA: eyal@coyote.stanford.edu

------------------------------

Date: 16 May 88 08:11:37 GMT
From: TAURUS.BITNET!shani@ucbvax.berkeley.edu
Subject: Re: More Free Will

In article <2@iisat.UUCP>, paulg@iisat.BITNET writes:
> Let me define my idea of free will: Free will is being able to take any
> course of action which you want to take. So if you never want to take a
> course of action which is forbidden to you, your free will is retained.
>

Hmm... quite correct. But did it ever occur to you that we WANT to be limited
by the laws of physics, because this is the only way to form a realm?

Why do you think people are so willing to pay lots of $ to TSR, just to play
with other limitations?...

O.S.

------------------------------

Date: 16 May 88 15:00:25 GMT
From: sunybcs!sher@boulder.colorado.edu (David Sher)
Subject: Re: Raising Consciousness

There is perhaps a minor bug in Drew McDermott's (who teaches a great
grad level ai class) analysis of free will. If I understand it
correctly it runs like this:
To plan one has a world model including future events.
Since you are an element of the world then you must be in the model.
Since the model is a model of future events then your future actions
are in the model.
This renders planning unnecessary.
Thus your own actions must be excised from the model for planning to
avoid this "singularity."

Taken naively, this analysis would prohibit multilevel analyses such
as is common in game theory. A chess player could not say things like
if he moves a6 then I will move Nc4 or Bd5 which will lead ....
Thus it is clear that to make complex plans we actually need to model
ourselves (actually it is not clear but I think it can be made clear
with sufficient thought).
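
The kind of multilevel analysis described above is exactly what game-tree
search does: the planner's model explicitly contains its own future moves
as well as the opponent's. A minimal sketch, using a made-up toy game
rather than chess:

```python
def minimax(state, depth, moves, result, score, maximizing=True):
    """Game-tree search: the planner simulates its own future replies
    and the opponent's, alternating sides, down to a fixed depth."""
    ms = moves(state)
    if depth == 0 or not ms:
        return score(state)
    values = [minimax(result(state, m), depth - 1, moves, result, score,
                      not maximizing) for m in ms]
    return max(values) if maximizing else min(values)

# Toy game: the state is an integer; each side in turn adds 1 or 2;
# the maximizer wants the final value high, the minimizer wants it low.
moves = lambda s: [1, 2]
result = lambda s, m: s + m
score = lambda s: s
print(minimax(0, 2, moves, result, score))  # maximizer adds 2, minimizer adds 1 -> 3
```

The self-model here is approximate in precisely the way the article goes
on to describe: the search cuts off at a fixed depth and scores the rest
heuristically, rather than simulating the player in full.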

However, we can still make the argument that Drew was making; it's just
more subtle than the naive analysis indicates. The way the argument
runs is this:
Our world model is by its very nature a simplification of the real
world (the real world doesn't fit in our heads). Thus our world model
makes imperfect predictions about the future and about consequences.
Our self model inside our world model shares in this imperfection.
Thus our self model makes inaccurate predictions about our reactions
to events. We perceive ourselves as having free will when our self
model makes a wrong prediction.

A good example of this is the way I react during a chess game. I
generally develop a plan of 2-5 moves in advance. However sometimes
when I make a move and my opponent responds as expected I notice a
pattern that previously eluded me. This pattern allows me to make a
move that was not in my plans at all but would lead to greater gains
than I had planned. For example noticing a knight fork. When this
happens I have an intense feeling of free will.

As another example I had planned on writing a short 5 line note
describing this position. In fact this article is running several
pages. ...

-David Sher
ARPA: sher@cs.buffalo.edu BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

------------------------------

Date: 16 May 88 15:55:12 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Free Will & Self-Awareness

In article <8805092354.AA05852@ucbvax.Berkeley.EDU> Eyal Mozes writes:
1 all the evidence I'm familiar with points to the fact that it's
1 always possible for a human being to control his thoughts by a
1 conscious effort.

In article <434@aiva.ed.ac.uk> jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
2 It is not always possible. Think, if no simpler example will do, of
2 obsessives. They have thoughts that persist in turning up despite
2 efforts to eliminate them.

In article <8805151907.AA01702@ucbvax.Berkeley.EDU> Eyal Mozes writes:
>First of all, even an obsessive can, at any given moment, turn his
>thoughts away from the obsession by a conscious effort. The problem of
>obsession is in that this conscious effort has to be much greater than
>normal, and also in that, whenever the obsessive is not consciously
>trying to avoid those thoughts, they do persist in turning up.

That an obsessive has some control over his thoughts does not mean
he can always control his thoughts. If all you mean is that one can
always at least temporarily change what one is thinking about and
can eventually eliminate obsessive thoughts or the tune that's running
through one's head, no one would be likely to disagree with you,
except where you seem to feel that obsessions are just the result
of insufficiently rational thinking in the past.

>So, indirectly, even the obsession itself is under the person's
>volitional control.

I would be interested in knowing what you think *isn't* under a
person's volitional control. One would normally think that having
a sore throat is not under conscious control even though one can
choose to do something about it or even to try to prevent it.

2 Or, consider when you start thinking about something. An idea just
2 occurs and you are thinking it: you might decide to think about
2 something, but you could not have decided to decide, decided to
2 decide to decide, etc. so at some point there was no conscious
2 decision.

>Of course, the point at which you became conscious (e.g. woke up from
>sleep) was not a conscious decision. But as long as you are conscious,
>it is your choice whether to let your thoughts wander by chance
>association or to deliberately, purposefully control what you're
>thinking about. And whenever you stop your thoughts from wandering and
>start thinking on a subject of your choice, that action is by conscious
>decision.

But where does the "subject of your own choice" come from? I wasn't
thinking of letting one's thoughts wander, although what I said might
be interpreted that way. When you decide what to think about, did
you decide to decide to think about *that thing*, and if so how did
you decide to decide to decide, and so on?

Or suppose we start with a decision, however it occurred. I decide to
read your message. As I do so, it occurs to me, at various points,
that I disagree and want to say something in reply. Note that these
"occurrences" are fairly automatic. Conscious thought is involved,
but the exact form of my reply is a combination of conscious revision
and sentences, phrases, etc. that are generated by some other part of
my mind. I think "he thinks I'm just talking about letting the mind
wander and thinking about whatever comes up." That thought "just
occurs". I don't decide to think exactly that thing. But my
consciousness has that thought and can work with it. It helps provide
a focus. I next try to find a reply and begin by reading the passage
again. I notice the phrase "subject of your own choice" and think
then write "But where does the...".

Of course, I might do other things. I might think more explicitly
about what I'm doing. I might even decide to think explicitly rather
than just do so. But I cannot consciously decide every detail of
every thought. There are always some things that are provided by
other parts of my mind.

Indeed, I am fortunate that my thoughts continue along the lines I
have chosen rather than branch off on seemingly random tangents. But
the thoughts of some people, schizophrenics say, do branch off. It is
clear in many cases that insufficient rationality did not cause their
problem: it is one of the consequences, not one of the causes.

As an example of "other parts of the mind", consider memory. Suppose
I decide to remember the details of a particular event. I might not
be able to, but if I can I do not decide what these memories will be:
they are given to me by some non-conscious part of my mind.

>This is why I consider Ayn Rand's theory of free will to be such an
>important achievement - because it is the only free-will theory
>directly confirmed by what anyone can observe in his own thoughts.

As far as you have explained so far, Rand's theory is little more
than simply saying that free will = the ability to focus consciousness,
which we can all observe. Since we can all observe this without the
aid of Rand's theory, all Rand seems to be saying is "that's all there
is to it".

-- Jeff

------------------------------

Date: 16 May 88 12:07:18
From: ALFONSEC%EMDCCI11.BITNET@CUNYVM.CUNY.EDU

Date: 16 May 1988, 12:02:14 HOE

From: ALFONSEC at EMDCCI11 (EARN, Manuel Alfonseca)
To: AILIST@AI.AI.MIT.EDU at EDU
Ref: Free will et al.

Previous appends have stated that all values are learned.
I believe that some are innate. For instance, the
crudest form of the justice value
"Why should I receive less than (s)he?"
seems to exist in babies as soon as they can perceive
and before anybody has tried to teach them.

Any comments? How does this affect free will in AI?

Regards,

Manuel Alfonseca, ALFONSEC at EMDCCI11
Usual disclaimer: My opinions are my own.

------------------------------

Date: 16 May 88 16:15:32 GMT
From: wlieberm@teknowledge-vaxc.arpa (William Lieberman)
Subject: Free Will-Randomness and Question-Structure



Re: Free Will and Determinism.


This most interesting kind of discussion reminds me of the old question,

" What happens when the irresistable cannonball hits the irremovable post? "

The answer lies in the question, not in other parts of the outside world.

If you remember your Immanuel Kant and his distinction between analytic and
synthetic statements, the cannonball question would be an analytic statement,
of the form, " The red barn is red." - A totally useless statement, because
nothing new about the outside world is implied in the statement. Similarly,
I would say the cannonball question, since it is internally contradictory,
wastes the questioner's time if he tries to hook it to the outside world.

A concept like 'random' similarly may be thought of in terms simply of
worldly unpredictability TO THE QUESTIONER. If he comes from a society where
they get differing results every time they add two oranges to two oranges,
TO THEM addition of real numbers is random. (Also, wouldn't any
irrational number, such as pi, be an example of a decimal expansion
that is non-recurring but certainly not random?)

The concept of inherent randomness implies there is no conceivable system
that will ever or can ever be found that could describe what will happen in
a given system with a predefined envelope of precision. Is it possible to
prove such a conjecture? It's almost like Fermat's Last Theorem.

To me, the concept of randomness has to do with the subject's ability to
describe forthcoming events, not with the forthcoming events themselves.
That is, randomness only exists as long as there are beings around who
perceive their imprecise or limited predictions as incomplete. The events
don't care, and occur regardless. It's important to not forget that the
subjects themselves (us, e.g.) are part of the world, too.

My main point here is that questions which seem impossible to
resolve often need to have the structure of the question looked at,
rather than the rest of the outside world searched for empirical data
to support or refute the question.

Bill Lieberman

------------------------------

Date: 19 May 88 09:05:38 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Re: Acting irrationally (was Re: Free Will & Self Awareness)

In article <180@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>Here's a simple rule: explicitly articulate everything, at least once.
Sorry, I didn't quite get that :-)
>
>A truly reasoning being doesn't hesitate to ask, either, if something
>hasn't been explicitly articulated, and it is necessary for continuing
>discussion.
Read Erving Goffman's "The Presentation of Self in Everyday Life" and
you'll find that we do not correspond to your "truly reasoning being".
We let all sorts of ambiguities and incompleteness drop, indeed it's
rude not to, as well as displaying a lack of empathy, insight, intuition
and considerateness. Sometimes you should ask, but certainly not
always, unless you're a natural language front-end, in which case I insist :-)

This idealisation is riddled with assumptions about meaning which I
leave your AI programs to divine :-) Needless to say, this approach
to meaning results in infinite regresses and closures imposed by
contingency rather than a mathematical closure
    information(n+1) = information(n)

where n is the number of clarifying exchanges between the tedious pedant
(TP) and the unwilling lexicographer (UL), i.e. there exists an n such that
UL abuses TP, TP senses annoyance in UL, TP gives up, UL gives up, TP
agrees to leave it until tomorrow, or ...

TP and UL have a wee spot of social negotiation and agree on the meaning
(i.e. UL hits TP really hard)
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of Mankind is Man, not machines

------------------------------

Date: 19 May 88 09:14:59 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Re: what is this thing called `free will'

In article <38@aipna.ed.ac.uk> sean@uk.ac.ed.aipna.UUCP (Sean Matthews) writes:
>2. perfect introspection is a logical impossibility[2]
That doesn't make it impossible, just beyond comprehension through logic.
Now, if you dive into Philosophy of Logic, you'll find that many other
far more mundane phenomena aren't capturable within FOPC, hence all
this work on non-standard logics. Slow progress here though.

Does anyone seriously hold with certainty that logical impossibility
is equivalent to commonsense notions of falsehood and impossibility?
Don't waste time with similarities, such as Kantian analytic statements
like "all bachelors are unmarried", as these rest completely on language
and can thus often be translated into FOPC to show that bachelor(X) AND
married(X) is logically impossible, untrue, really impossible, ...

Any physicists around here use logic?
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of Mankind is Man, not machines

------------------------------

Date: 20 May 88 09:01:43 GMT
From: News <mcvax!reading.ac.uk!News.System@uunet.UU.NET>
Subject: Submission for comp-ai-digest

From: jadwa@henry.cs.reading.ac.uk (James Anderson)
Subject: Free-Will and Purpose

Free-will does not exclude purposive behaviour.

Suppose that the world is entirely deterministic, but that
intelligent, purposive creatures evolve. Determinism is no
hindrance to evolution: variety can be introduced systematically
and systematic, but perhaps very complex, selection will do the
rest. The question of free-will does not arise here.

In human societies, intelligence and purposive behaviour are good
survival traits and have allowed us to secure our position in the
natural world. (-: If you don't feel secure, try harder! :-)

* * *

I realise that this is a very condensed argument, but I think you
will understand my point. For those of you who like reading long
arguments you could try:

"Purposive Explanation in Psychology", Margaret A. Boden, The
Harvester Press, 1978, Hassocks, Sussex, England.

ISBN 0-85527-711-4

It presents a quite different rationale for accepting the idea of
purposive behaviour in a deterministic world.

James

(JANET) James.Anderson@reading.ac.uk

------------------------------

Date: 20 May 88 16:48:00 GMT
From: killer!tness7!ninja!sys1!hal6000!trsvax!bryan@ames.arpa
Subject: Re: Acting irrationally (was Re: Fr



In article <5499@venera.isi.edu>, smoliar@vaxa.isi.edu (Stephen Smoliar) writes:
>> I think you are overlooking how great an extent we rely on implicit
>> assumptions in any intercourse. If we had to articulate everything
>> explicitly, we would probably never get around to discussing what we
>> really wanted to discuss.

/* Written 9:44 am May 17, 1988 by proxftl.UUCP!tomh (Tom Holroyd) */

> Here's a simple rule: explicitly articulate everything, at least once.
^^^^^^^^^^!!

That's the rub, "everything" includes every association you've ever had with
hearing or using every word, including all the events you've forgotten about
but which influence the "meaning" any particular word has for you, especially
the early ones while you were acquiring a vocabulary.

You seem to have some rather naive ideals about the meanings of words.

> A truly reasoning being doesn't hesitate to ask, either, if something
> hasn't been explicitly articulated, and it is necessary for continuing
> discussion.

A truly reasoning being often thinks that things WERE explicitly articulated
to a sufficient degree, given that both parties are using the same language.

------------------

Convictions are more dangerous enemies of truth than lies.
- Nietzsche

...ihnp4!convex!ctvax!trsvax!bryan (Bryan Helm)

------------------------------

Date: 21 May 88 01:03:13 GMT
From: mcvax!ukc!its63b!aipna!sean@uunet.uu.net (Sean Matthews)
Subject: Re: what is this thing called `free will'

in article <1193@crete.cs.glasgow.ac.uk>
gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes
>In article <38@aipna.ed.ac.uk> {I} write
>>2. perfect introspection is a logical impossibility[2]
>That doesn't make it impossible, just beyond comprehension through logic.
>Now, if you dive into Philosophy of Logic, you'll find that many other
>far more mundane phenomena aren't capturable within FOPC, hence all
>this work on non-standard logics. Slow progress here though.

Mr Cockton is confusing one fairly restricted logic with the whole
plethora I was referring to. There are logics specifically designed for
dealing with problems of self reference (cf Craig Smory\'nski in
Handbook of philosophical logic Vol2 `modal logic and self-reference')
and they place very clear restrictions on what is possible in terms of
self-referential systems and what is not; there has not been `Slow
progress here'.

>Does anyone seriously hold with certainty that logical impossibility
>is equivalent to commonsense notions of falsehood and impossibility?

I freely admit that I don't understand what he means here, unless he
is making some sort of appeal to metaphysical concepts of truth apart
from demonstrability and divorced from the concept of even analytic
falsehood in any way. There are Western philosophers (even good ones)
who invoke metaphysics to prove such things as `God exists' (I feel
that God exists, therefore God exists---Rousseau), or even `God does
not exist' (I feel that God does not exist, therefore God does not
exist---Nietzsche).

Certainly facts may be `true' irrespective of whether we can `prove'
them (the classical example is `this statement is not provable')
though this again depends on what your idea of `truth' is. And there
are different types of `truth' as he points out; any synthetic `truth'
is always tentative, a black sheep can be discovered at any time,
disposing of the previous ``truth'' (two sets of quotation marks) that
all sheep were a sort of muddy light grey, whereas analytic `truth' is
`true' for all time (cf Euclid's `Elements'). But introspective `truth's
are analytic, being purely mental; we have a finite base of knowledge
(what we know about ourselves), and a set of rules that we apply to
get new knowledge about the system; if the rules or the knowledge
change then the deductions change, but the change is like changing
Euclid's fifth postulate; the conclusions differ but the conclusions
from the original system, though they may contradict the new
conclusions, are still true, since they are prefixed with different
axioms, and any system that posits perfect introspection is going to
contain contradictions (cf Donald Perlis: `Meta in logic' in
`Meta-level reasoning and reflection', North Holland for a quick
survey).

What happens in formal logic is that we take a subset of possible
concepts (modus ponens, substitution, a few tautologies, some modal
operators perhaps) and see what happens; if we can generate a
contradiction in this (tiny) subset of accepted `truth's, then we can
generate a contradiction in the set of all accepted `truth's using
rational arguments; this should lead us to reevaluate what we hold as
axioms. These arguments could be carried out in natural language, the
symbols, which perhaps seem to divorce the whole enterprise from
reality, are not necessary; they only make things easier; after all,
Aristotle studied logic fairly successfully without them.

Se\'an Matthews
Dept. of Artificial Intelligence JANET:sean%sin@uk.ac.ed.aiva
University of Edinburgh ARPA: sean%uk.ac.ed.aiva@nss.cs.ucl.ac.uk
80 South Bridge UUCP: ...!mcvax!ukc!aiva!sean
Edinburgh, EH1 1HN, Scotland

PS I apologise beforehand for any little liberties I may have taken with
the finer points of particular philosophies mentioned above.

------------------------------

Date: 21 May 88 06:33:19 GMT
From: quintus!ok@sun.com (Richard A. O'Keefe)
Subject: McDermott's analysis of "free will"

I have been waiting for someone else to say this better than I can,
and unfortunately I've waited so long that McDermott's article has
expired here, so I can't quote him.

Summary: I think McDermott's analysis is seriously flawed.
Caveat: I have probably misunderstood him.

I understand his argument to be (roughly)
an intelligent planner (which attempts to predict the actions of
other agents by simulating them using a "mental model") cannot
treat itself that way, otherwise it would run into a loop, so it
must flag itself as an exception to its normal models of
causality, and thus perceives itself as having "free will".
[I'm *sure* I'm confusing this with something Minsky said years ago.
Please forgive me.]

1. From the fact that a method would get an intelligent planner into
serious trouble, we cannot conclude that people don't work that way.
To start with, people have been known to commit suicide, which is
disastrous for their future planning abilities. More seriously,
people live in a physical world, and hunger, a swift kick in the
pants, strange noises in the undergrowth &c, act not unlike the
Interrupt key. People could well act in ways that would have them
falling into infinite loops as long as the environment provided
enough higher-priority events to catch their attention.

2. It is possible for a finite computer program (with a sufficiently
large, but at all times finite, store) to act *as if* it were a
one-way infinite tower of interpreters. Brian Cantwell Smith
showed this with his design for 3-Lisp. Jim des Rivières, for one,
has implemented 3-Lisp. So the mere possibility of an agent
having to appear to simulate itself simulating itself ... doesn't
show that unbounded resources would be required: we need to know
more about the nature of the model and the simulation process to
show that.
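3-Lisp itself is far richer, but the trick of a finite store behaving like
an infinite tower can be caricatured by lazy materialization (a toy sketch
of the idea only, not Smith's design): each interpreter level is created
only when the level below it reflects, so the tower is conceptually
unbounded while only finitely many levels ever exist.

```python
class Level:
    """One level of a conceptually infinite interpreter tower."""

    def __init__(self, height):
        self.height = height
        self._above = None              # next level up, created on demand

    def above(self):
        if self._above is None:         # materialize lazily
            self._above = Level(self.height + 1)
        return self._above

    def run(self, program):
        # "reflect" hands the rest of the program to the interpreter
        # one level up; anything else is just handled at this level.
        if program and program[0] == "reflect":
            return self.above().run(program[1:])
        return (self.height, program)

tower = Level(0)
print(tower.run(["reflect", "reflect", "add", 1, 2]))
# handled at level 2, though only three levels ever occupy storage
```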

3. In any case, we only get the infinite regress if the planner
simulates itself *exactly*. There is a Computer Science topic
called "abstract interpretation", where you model the behaviour
of a computer program by running it in an approximate model.
Any abstract interpreter worth its salt can interpret itself
interpreting itself. The answers won't be precise, but they are
often useful.
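The classic toy example of abstract interpretation is sign analysis: run
the arithmetic over the abstract values "+", "-", "0", and "?" (unknown)
instead of over concrete numbers. This is a minimal sketch for
concreteness; real abstract interpreters are far more elaborate.

```python
def abs_val(n):
    """Abstract a concrete number to its sign."""
    return "+" if n > 0 else "-" if n < 0 else "0"

def abs_mul(a, b):
    if a == "0" or b == "0":
        return "0"                  # zero annihilates regardless of b
    if "?" in (a, b):
        return "?"
    return "+" if a == b else "-"   # like signs give "+", unlike give "-"

def abs_add(a, b):
    if a == b or b == "0":
        return a
    if a == "0":
        return b
    return "?"                      # "+" plus "-": sign unknown

# Imprecise but sound: (-3) * (-3) is positive, and the analysis
# agrees without ever computing 9.
print(abs_mul(abs_val(-3), abs_val(-3)))   # "+"
print(abs_add("+", "-"))                   # "?" -- precision lost, still useful
```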

4. At least one human being does not possess sufficient knowledge of
the workings of his mind to be able to simulate himself anything BUT
vaguely. I refer, of course, to myself. [Well, I _think_ I'm
human.] If I try to predict my own actions in great detail, I run
into the problem that I don't know enough about myself to do it,
and this doesn't feel any different from not knowing enough about
President Reagan to predict his actions, or not knowing enough
about the workings of a car. I do not experience myself as a
causal singularity, and the actions I want to claim as free are the
actions which are in accord with my character, so in some sense are
at least statistically predictable. Some other explanation must be
found for _my_ belief that I have "free will".

Some other issues:

It should be noted that dualism has very little to do with the
question of free will. If body and mind are distinct substances,
that doesn't solve the problem, it only moves the question of
determinism/randomness/whatever else from the physical domain to
the mental domain. Minds could be nonphysical and still be
determined.

What strikes me most about this discussion is not the variety of
explanations, but the disagreement about what is to be explained.
Some people seem to think that their freest acts are the ones
which even they cannot explain, others do not have this feeling.
Are we really arguing about the same (real or illusory) phenomenon?

------------------------------

Date: 22 May 88 02:22:57 GMT
From: uflorida!novavax!proxftl!bill@gatech.edu (T. William Wells)
Subject: Re: Free Will & Self-Awareness

In article <445@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <8805092354.AA05852@ucbvax.Berkeley.EDU> Eyal Mozes writes:
>...
> As far as you have explained so far, Rand's theory is little more
> than simply saying that free will = the ability to focus consciousness,
> which we can all observe. Since we can all observe this without the
> aid of Rand's theory, all Rand seems to be saying is "that's all there
> is to it".
>
> -- Jeff

Actually, from what I have read so far, it seems that the two of
you are arguing different things; moreover, eyal@COYOTE.STANFORD.EDU
(Eyal Mozes) has committed, at the very least, a sin of omission:
he has not explained Rand's theory of free will adequately.

Following is the Objectivist position as I understand it. Please
be aware that I have not included everything needed to justify
this position, nor have I been as technically correct as I might
have been; my purpose here is to trash a debate which seems to be
based on misunderstandings.

To those of you who want a justification: I will (given enough
interest) eventually provide one on talk.philosophy.misc, where I
hope to be continuing my discussion of Objectivism. Please
direct any followups to that group.

Entities are the only causes: they cause their actions. Their
actions may be done to other entities, and this may require the
acted on entity to cause itself to act in some way. In that
case, one can use `cause' in a derivative sense, saying: the
acting entities (the agents) caused the acted-upon entities (the
patients) to act in a certain way. One can also use `cause' to
refer to a chain of such. This derivative sense is the normal
use for the word `cause', and there is always an implied action.

If, in order that an entity can act in some way, other entities
must act on it, then those agents are a necessary cause for the
patient's action. If, given a certain set of actions performed
by some entities, a patient will act in a certain way, then those
agents are a sufficient cause for the patient's actions.

The Objectivist version of free will asserts that there are (for
a normally functioning human being) no sufficient causes for what
he thinks. There are, however, necessary causes for it.

This means that while talking about thinking, no statement of the
form "X(s) caused me to think..." is a valid statement about
what is going on.

In terms of the actual process, what happens is this: various
entities provide the material which you base your thinking on
(and are thus necessary causes for what you think), but an
action, not necessitated by other entities, is necessary to
direct your thinking. This action, which you cause, is
volition.

> But where does the "subject of your own choice" come from? I wasn't
> thinking of letting one's thoughts wander, although what I said might
> be interpreted that way. When you decide what to think about, did
> you decide to decide to think about *that thing*, and if so how did
> you decide to decide to decide, and so on?

Shades of Zeno! One does not "decide to decide" except when one
does so in an explicit sense. ("I was waffling all day; later
that evening I put my mental foot down and decided to decide once
and for all.") Rather, you perform an act on your thoughts to
direct them in some way; the name for that act is "decision".

Anyway, in summary, Rand's theory is not just that "free will =
the ability to focus consciousness" (actually, to direct certain
parts of one's consciousness), but that this act is not
necessitated by other entities.

------------------------------

Date: 22 May 88 0644 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: the free will discussion

Here are the meta remarks promised in my previous message
giving my substantive views. I hope the moderator will put
them in an issue subsequent to the one including the substantive
views.

There are three ways of improving the world.
(1) to kill somebody
(2) to forbid something
(3) to invent something new.

During World War II, (1) was appropriate, and it has occasionally
been appropriate since, but, in the main it's not appropriate now,
and few people's ideas for improvement take this form. However,
there may be more people in category (2) than in category (3).
Gilbert Cockton seems to be firmly in category (2), and I can't
help regarding him as a minor menace with his proposals that
institutions suppress AI research. At least the menace is minor
as long as Mrs. Thatcher is around; I wouldn't be surprised if
Cockton could persuade Tony Benn.

I would like to deal substantively with his menacing proposals, but
I find them vague and would prefer to respond to precise criteria
of what should be suppressed, how they are regarded as applying
to AI, and what forms of suppression he considers legitimate.

I find much of the discussion ignorant of considerations and references
that I regard as important, but different people have different ideas of
what information should be taken into account. I have read enough of
the sociological discussion of AI to have formed the opinion that it
is irrelevant to progress and wrong. For example, views that seem
similar to Cockton's inhabit a very bad and ignorant book called "The
Question of Artificial Intelligence" edited by Brian Bloomfield, which I
will review for "Annals of the History of Computing". The ignorance is
exemplified by the fact that the more than 150 references include exactly one
technical paper dated 1950, and the author gets that one wrong.

The discussion of free will has become enormous, and I imagine
that most people, like me, have only skimmed most of the material.
I am not sure that the discussion should progress further, but if
it does, I have a suggestion. Some neutral referee, e.g. the moderator,
should nominate principal discussants. Each principal discussant should
nominate issues and references. The referee should prune the list
of issues and references to a size that the discussants are willing
to deal with. They can accuse each other of ignorance if they
don't take into account the references, however perfunctorily.
Each discussant writes a general statement and a point-by-point
discussion of the issues at a length limited by the referee in
advance. Maybe the total length should be 20,000 words,
although 60,000 would make a book. After that's done we have another
free-for-all. I suggest four as the number of principal discussants
and volunteer to be one, but I believe that up to eight could
be accommodated without making the whole thing too unwieldy.
The principal discussants might like help from their allies.

The proposed topic is "AI and free will".

------------------------------

Date: 23 May 88 00:12:04 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU
(Stephen Smoliar)
Subject: Re: Acting irrationally (was Re: Free Will & Self Awareness)

In article <180@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>In article <5499@venera.isi.edu>, smoliar@vaxa.isi.edu (Stephen Smoliar)
>writes:
>
>> I think you are overlooking how great an extent we rely on implicit
>> assumptions in any intercourse. If we had to articulate everything
>> explicitly, we would probably never get around to discussing what we
>> really wanted to discuss.
>
>>The problem comes in deciding WHAT needs to be explicitly articulated
>>and what can be left in the "implicit background." That is a problem
>>which we, as humans, seem to deal with rather poorly, which is why
>there is so much yelling and hitting in the world.
>
>Here's a simple rule: explicitly articulate everything, at least once.
>
>The problem, as I see it, is that there are a lot of people who, for
>one reason or another, keep some information secret (perhaps the
>information isn't known).
>
No, the problem is that there is always TOO MUCH information to be explicitly
articulated over any real-time channel of human communication. If you don't
believe me, try explicitly articulating the entire content of your last
message.

------------------------------

Date: 23 May 88 14:47:46 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

Ewan Grantham has insightfully noted that our draft "laws of robotics"
raise the question, "How does one recognize a fellow sentient being?"

At a minimum, a sentient being is one who is able to sense its environment,
construct internal maps or models of that environment, use those maps
to navigate, and embark on a journey of exploration. By that definition,
a dog is sentient. So the robot has no business killing a barking dog.
Anyway, the barking dog is no threat to the robot. A washing machine
isn't scared of a barking dog. So why should a robot fear one?

--Barry Kort

------------------------------

End of AIList Digest
********************
