AIList Digest Volume 6 Issue 095
AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 95 

Today's Topics:
Philosophy - Free Will & Responsibility

----------------------------------------------------------------------

Date: 5 May 88 21:56:31 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

I was intrigued by David Sher's comments about "machine responsibility".

It is not uncommon for a child to "spank" a machine which misbehaves.
But as adults, we know that when a machine fails to carry out its
function, it needs to be repaired or possibly redesigned. But we
do not punish the machine or incarcerate it.

Why then, when a human engages in undesirable behavior, do we resort
to such unenlightened corrective measures as yelling, hitting, or
deprivation of life-affirming resources?

--Barry Kort

------------------------------

Date: 5 May 88 20:09:02 GMT
From: sunybcs!bettingr@boulder.colorado.edu (Keith E. Bettinger)
Subject: Re: AIList V6 #86 - Philosophy

In article <3200016@uiucdcsm> channic@uiucdcsm.cs.uiuc.edu writes:
>In his article Brian Yamauchi (yamauchi@speech2.cs.cmu.edu) writes:
>> /* ---------- "Re: AIList V6 #86 - Philosophy" ---------- */
>> In article <368693.880430.MINSKY@AI.AI.MIT.EDU>, MINSKY@AI.AI.MIT.EDU
>> (Marvin Minsky) writes:
>> > Yamauchi, Cockton, and others on AILIST have been discussing freedom
>> > of will as though no AI researchers have discussed it seriously. May
>> > I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind. I
>> > claim to have a good explanation of the free-will phenomenon.
>>
>> Actually, I have read The Society of Mind, where Minsky writes:
>>
>> | Everything that happens in our universe is either completely determined
>> | by what's already happened in the past or else depends, in part, on
>> | random chance. Everything, including that which happens in our brains,
>> | depends on these and only on these:
>> |
>> | A set of fixed, deterministic laws. A purely random set of accidents.
>> |
>> | There is no room on either side for any third alternative.
>>
>I see plenty of room -- my own subjective experience. I make mental
>decisions which are not random and are not completely determined (although
>certainly influenced) by past determinism.

How do you know that? Do you think that your mind is powerful enough to
comprehend the immense combination of effects of determinism and chance?
No one's is.

> [...] But this BEGS THE
>QUESTION of intelligent machines in the worst way. Show me the deterministic
>laws that create mind, Dr. Minsky, then I will believe there is no free will.
>Otherwise, you are trying to refute an undeniable human experience.

No one denies that we humans experience free will. But that experience says
nothing about its nature; at least, nothing ruling out determinism and chance.

>Do you believe your career was merely the result of some bizarre genetic
>combination or pure chance?
             ^^
The answer can be "yes" here, if the conjunction is changed to "and".

>
>The attack is over. The following is a plea to all AI researchers. Please
>do not try to persuade anyone, especially impressionable students, that s\he
>does not have free will. Everyone has the ability to choose to bring peace
>to his or her own life and to the rest of society, and has the ability to
>MAKE A DIFFERENCE in the world. Free will should not be compromised for the
>mere prospect of creating an intelligent machine.

Believe it or not, Minsky makes a similar plea in his discussion of free will
in _The Society of Mind_. He says that we may not be able to figure out where
free will comes from, but it is so deeply ingrained in us that we cannot deny
it or ignore it.

>
>Tom Channic
>University of Illinois
>channic@uiucdcs.cs.uiuc.edu
>{decvax|ihnp4}!pur-ee!uiucdcs!channic

-------------------------------------------------------------------------
Keith E. Bettinger                   "Perhaps this final act was meant
SUNY at Buffalo Computer Science      To clinch a lifetime's argument
                                      That nothing comes from violence
CSNET:    bettingr@Buffalo.CSNET      And nothing ever could..."
BITNET:   bettingr@sunybcs.BITNET         - Sting, "Fragile"
INTERNET: bettingr@cs.buffalo.edu
UUCP: ..{bbncca,decvax,dual,rocksvax,watmath,sbcs}!sunybcs!bettingr

------------------------------

Date: 5 May 88 15:49:12 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Sorry, no philosophy allowed here.

In an earlier article, I wrote:
>> Suppose I were able to inculcate a Value System into silicon. And in the
>> event of a tie among competing choices, I use a random mechanism to force
>> a decision. Would the behavior of my system be very much different from a
>> sentient being with free will? (--Barry Kort)

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) responds:
>Oh brave Science! One minute it's Mind on silicon, the next it's a little
>randomness to explain the inexplicable. Random what? Which domain? Does it
>close? Is it enumerable? Will it type out Shakespeare? More seriously
>'forcing decisions' is a feature of Western capitalist society
>(a historical point please, not a political one).
>There are consensus-based (small) cultures where
>decisions are never forced and the 'must do something now' phenomenon is
>mercifully rare. Your system should prevaricate, stall, duck the
>issue, deny there's a problem, pray, write to an agony aunt, ask its
>mum, wait a while, get its friends to ring it up and ask it out ...

Random selection from a set of equal-value alternatives known to
the system. The domain is just the domain of knowledge possessed by
the decision maker. The domain is finite, incomplete, possibly
inconsistent, and evolving over time. It might type out Shakespeare,
especially if 1) it were asked to, 2) it had no other pressing
priorities, and 3) it knew some Shakespeare or knew where to find some.
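The tie-breaking mechanism described above is simple enough to sketch in a few lines (the function names here are my own, chosen for illustration — not from the original posting):

```python
import random

def choose_action(actions, utility):
    """Rank alternatives by utility; break exact ties at random."""
    best = max(utility(a) for a in actions)
    tied = [a for a in actions if utility(a) == best]
    return random.choice(tied)

# Two equal-valued alternatives: either may be (randomly) chosen.
choice = choose_action(["tea", "coffee"], utility=lambda a: 1.0)
```

When the utilities differ, the random element never fires; it is invoked only over the set of equal-value alternatives, exactly as described.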

One of the alternative decisions that the system could take is to emit
the following message:

"I am at a dilemma such that I am not aware of a good action
to take, other than to emit this message."


The above response is not particularly Western. (Well, I suppose it
could be translated into Western-style behavior: "[Panic mode]: I
don't know what to do!!!")

>Before you put your value system on Silicon, put it on paper. That's hard
>enough, so why should a dumb bit of constrained plastic and metal promise any
>advance over the frail technology of paper? If you can't write it down, you
>cannot possibly program it.

I actually tried to do this a few years ago. I ended up with a book-length
document which very few people were interested in reading.

>So come on you AI types, let's see your *DECLARATIVE TESTIMONIALS* on
>this news group by the end of the month. Lay out your value systems
>in good technical English. If you can't manage it, or even a little of it,
>should you really keep believing that it will fit onto silicon?

Gee Gilbert, I'm still trying to discover whether a good value system
will fit onto protoplasmic neural networks. But seriously, I agree
that we haven't a prayer of communicating our value systems to silicon
if we can't communicate them to ourselves.

--Barry Kort

------------------------------

Date: 6 May 88 04:01:53 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Punishment of machines

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>I was intrigued by David Sher's comments about "machine responsibility".
>
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned. But we
>do not punish the machine or incarcerate it.
>
The concept of a machine which could be productively punished is
not totally unreasonable. It is, in fact, a useful property for some robots
to have. Robots that operate in the real world need mechanisms that implement
fear and pain to survive. Such machines will respond positively to punishment.

I am working toward this end, am constructing suitable hardware and
software, and expect to demonstrate such robots in about a year.
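Nagle does not describe his hardware or software here, but one minimal reading of a robot that "responds positively to punishment" is a reinforcement-style update: a pain signal lowers the robot's propensity to repeat the action that triggered it. The sketch below is my own illustrative assumption, not his design:

```python
def choose(propensity):
    """Pick the action the robot currently favours most."""
    return max(propensity, key=propensity.get)

def punish(propensity, action, penalty=0.5):
    """A 'pain' signal: make the punished action less likely next time."""
    propensity[action] = max(0.0, propensity[action] - penalty)

# A robot that initially favours touching a hot surface:
p = {"touch_hot_surface": 1.0, "keep_clear": 0.8}
punish(p, choose(p))   # pain follows the touch
choose(p)              # the robot now prefers "keep_clear"
```

On this reading, punishment is "productive" precisely because it feeds back into future choices, which is what repair or redesign of an ordinary machine does not do.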


John Nagle

------------------------------

Date: 4 May 88 16:55:42 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net (Simon Brooke)
Subject: Re: Social science gibber [Was Re: Various Future of AI

People, this is not a polite message; if polemic offends you, hit <n> now.
It is, however, a serious message, which reflects on attitudes which have
become rather too common on this mailing list. Some time ago, Thomas
Maddox, of Fort Lauderdale, Florida responded to a mailing by Gilbert
Cockton (Scottish HCI Centre, Glasgow), in a way which showed he was both
ignorant and contemptuous of Sociology. Now it isn't a crime to be
ignorant, or contemptuous - I am frequently both, about Unix for example.
But when I am, I'm not surprised to be corrected.

In any case, Tom's posting annoyed me, and I replied sharply. And he, in
turn, has replied to me. Whilst obviously there's no point in spinning
this out ad infinitum, there are a few points to be responded to. In his
first message, Tom wrote:

"Rigorous sociology/contemporary anthropology"? Ha ha....

I responded:

Are we to assume that the author doubts the rigour of Sociology,
or the contemporary nature of anthropology?

and Tom has clarified:

Yes, I think you could assume both, pal.

That anthropology is contemporary is a matter of fact, not debate.
Anthropologists are contemporarily studying contemporary cultures. If you
doubt that, you obviously are not reading any contemporary anthropology.

Tom's claim that he doubts the rigour of sociology, whilst more believable,
displays equal lack of knowledge of the field. What is more disturbing is
his apparent view that 'dogma' which 'plagues the social sciences', is
less prevalent in the sciences. Has he read Thomas Kuhn's work on
scientific revolutions?

Tom also takes issue with my assertion that:

AI (is) an experimental branch of Philosophy

AI has two major concerns: the nature of knowledge, and the nature of
mind. These have been the central subject matter of philosophy since
Aristotle, at any rate. The methods used by AI workers to address these
problems include logic - again drawn from Philosophy. So to summarise:
AI addresses philosophical problems using (among other things)
philosophers' tools. Or to put it differently, Philosophy plus hardware -
plus a little computer science - equals what we now know as AI. The fact
that some workers in the field don't know this is a shameful indictment of
the standards of teaching in AI departments.

Too many AI workers - or, to be more accurate, too many of those now
posting to this newsgroup - think they can get away with arrogant
ignorance of the work on which their field depends.

Finally, with regard to manners, Tom writes:

My original diatribe came as a response to a particularly
self-satisfied posting by (apparently) a sociologist attacking
AI research as uninformed, puerile, &c.

If Tom doesn't know who Gilbert Cockton is, then perhaps he'd better not
waste time reading up sociology, anthropology, and so on. He's got enough
to do keeping up with the computing journals.

** Simon Brooke *********************************************************
* e-mail : simon@uk.ac.lancs.comp *
* surface: Dept of Computing, University of Lancaster, LA 1 4 YW, UK. *
*************************************************************************

------------------------------

Date: 6 May 88 12:55:00 EDT
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: replies on free will


larry@VLSI.JPL.NASA.GOV writes:

> I'm surprised that no one has brought up the distinction between will and
> free will. The latter (in the philosophy courses I took) implies complete
> freedom to make choices, which for humans seems debatable. For instance,
> I don't see how anyone can choose an alternative that they do not know
> exists.

Not in the philosophy courses I took. I think we all mean by "free will"
*some* freedom - it's not clear what "complete" freedom even means.
If I can freely (non-deterministically) choose between buying
vanilla or chocolate, then I have (some) freedom of the will.


yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi) writes:

> As to whether this is "free" or not, it depends on your definition of
> freedom. If freedom requires some force independent of genetics,
> experience, and chance, then I suppose this is not free. If freedom
> consists of allowing an individual to make his own decisions without
> coercion from others, then this definition is just as compatible with
> freedom as any other.

This confuses freedom of action with freedom of the will. No one doubts
that in the ordinary situation, there are no *external* constraints
forcing me to choose vanilla or chocolate. If "free will" is defined
as the mere absence of such constraints, then, trivially, I have free
will; but that is not the significant question. We all agree that
*if* I choose to buy chocolate, I will succeed; but this is better
called "effective will" not "free will". The issue is whether
indeed there are subtle internal constraints that make my choice
itself causally inevitable.


Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU> writes:

> Free will seems to me to mean that regardless of state 0 the agent
> can choose which one of the possible states it will be in at time T. A
> necessary precondition for free will is that the world be indeterministic.
> This does not, however, seem to be a sufficient condition since radioactive
> decay is indeterministic but the particles do not have free will.
> Free will should certainly be more than just our inability to
> predict an outcome, since that is consistent with limited knowledge in
> a deterministic world. And it must be more than indeterminism.
> My questions:
>
> Given these definitions, (1) What is free will for a machine?
> (2) Please provide a test that will determine
> if a machine has free will. The test should
> be quantitative, repeatable, and unambiguous.

So far, so good ... Note that an assertion of free will is (at least)
a denial of causality, whose presence itself can be confirmed only
inferentially (we never directly SEE the causing, as D. Hume reminds
us). Well, in general, suppose we wanted to show that some event E
had no cause - what would constitute a test? We're in a funny
epistemological position because the usual scientific assumption
is that events do have causes and it's up to us to find them.
If we haven't found one yet, it's ascribed to our lack of cleverness,
not lack of causality. So one can *disprove* free will by exhibiting
the causal scheme in which our choice is embedded, just as one
disproves "freedom of the tides". But it doesn't seem that one has
any guarantee of being able to prove the absence of causality in a
positive way. The indeterminism of electrons, etc, is accepted
because the indeterminism is itself a consequence of an elaborate and
well-confirmed theory; but we should not expect that such a
theory-based rationale will always be available.

Note in general that testing for absences is a tricky business -
"there are no white crows". Assuming an exhaustive search is out of
the question, all you can do is keep looking - after a while, if you
don't find any, you just have to (freely?) decide what you want to
believe (cf. "there are no causes of my decision to choose
chocolate").

It's also worth pointing out that macro-indeterminism is not
sufficient (though necessary) for free will. If we rigged up a
robot to turn left or right depending on some microscopically
indeterministic event (btw, this "magnification" of micro- to
macro-indeterminism goes on all the time - nothing unusual),
most of us would hardly credit such a robot as having free will.


John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 6 May 88 01:12:12 GMT
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Reply-to: bwk@mbunix (Barry Kort)
Subject: Re: AIList V6 #87 - Queries, Causal Modeling, Texts

Spencer Star asks:

(1) What is free will for a machine?
(2) Please provide a test that will determine
if a machine has free will. The test should
be quantitative, repeatable, and unambiguous.

I suggest the following implementation of Free Will, which I believe
would engender behavior indistinguishable from a sentient being
with Free Will.

1) Imbue the machine with a Value System. This will enable the machine
to rank by preference or utility the desirability of the anticipated
outcomes of pursuing alternative courses of action.

2) Provide a random choice mechanism for selecting among equal-valued
alternatives.

3) Allow the Value System to learn from experience.

4) Seed the Value System with a) the desire to survive and b) the desire
to construct accurate maps of the state-of-affairs of the world
and accurate models for predicting future states-of-affairs from a
given state as a function of possible actions open to the machine.
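The four steps above can be sketched as a small program. Everything named here (the class, its methods, the goal labels) is my own illustrative choice under the assumptions of the proposal, not an implementation the author provides:

```python
import random

class FreeWillAgent:
    """A sketch of the four-step proposal above."""

    def __init__(self):
        # 4) Seed the Value System: survival, plus accurate world-modeling.
        self.values = {"survive": 1.0, "accurate_models": 1.0}

    def utility(self, outcome):
        # 1) Rank an anticipated outcome ({goal: weight}) against the Value System.
        return sum(self.values.get(goal, 0.0) * w for goal, w in outcome.items())

    def choose(self, actions, predict):
        # predict(action) -> the anticipated outcome of taking that action.
        scores = {a: self.utility(predict(a)) for a in actions}
        best = max(scores.values())
        # 2) Random choice mechanism among equal-valued alternatives.
        return random.choice([a for a, s in scores.items() if s == best])

    def learn(self, goal, delta):
        # 3) Let experience adjust the Value System itself.
        self.values[goal] = self.values.get(goal, 0.0) + delta
```

For example, if fleeing is predicted to serve survival better than freezing, the agent deterministically flees; the random element enters only when the Value System genuinely cannot separate the alternatives.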

--Barry Kort

------------------------------

End of AIList Digest
********************
