AIList Digest Tuesday, 14 Jun 1988 Volume 7 : Issue 28
Today's Topics:
Free Will
Positive and Negative Reinforcement
----------------------------------------------------------------------
Date: 12 Jun 88 16:36:40 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu (T. William Wells)
Subject: Re: Free will does not require nondeterminism.
In article <461@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <185@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> ]The absolute minimum required for free will is that there exists
> ]at least one action a thing can perform for which there are no
> ]external phenomena which are a sufficient cause.
>
> I have a suspicion that you may be getting too much from this
> "external".
You (and several others) seem to have missed the point. I did
not post that message in order to defend a particular view of why
free will does not require determinism. Rather, I posted it so
that those of various philosophical persuasions could adapt it to
their own system.
For example, I am an Objectivist. This means that I have a
particular notion of what the difference between external and
internal is. I also can assign some coherent meaning to the rest
of the posting, and voila!, I have an assertion that makes sense
to an Objectivist.
You can do the same, but that is up to you.
> It must also be considered that everything internal to me might
> ultimately be caused by things external.
It is precisely the possibility that this need not be true, even
granting that things can do only one thing, that makes free will
something worth considering, even in a determinist philosophy.
------------------------------
Date: 12 Jun 88 16:44:06 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu (T. William Wells)
Subject: Re: Free Will-Randomness and Question-Structure
In article <1214@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> In article <194@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >(N.B. The mathematician's "true" is not the same thing as the
> > epistemologist's "true".
> Which epistemologist? The reality and truth of mathematical objects
> has been a major concern in many branches of philosophy. Many would
> see mathematics, when it succeeds in formalising proof, as one form of
> truth. Perhaps consistency is a better word, and we should reserve
> truth for the real thing :-)
Actually, that was just the point: when I say that something is
true in a mathematical sense, I mean exactly one thing, namely
that it follows from the chosen axioms; when I say that something
is epistemologically true (sorry about the neologism), I mean one
thing, someone else means something else, and a third declares
the idea meaningless.
Thus the two kinds of truth need to be considered separately.
------------------------------
Date: 12 Jun 88 16:54:44 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu (T. William Wells)
Subject: Re: Free Will & Self-Awareness
In article <1226@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> In article <205@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
> >The Objectivist version of free will asserts that there are (for
> >a normally functioning human being) no sufficient causes for what
> >he thinks. There are, however, necessary causes for it.
> Has this any bearing on the ability of a machine to simulate human
> decision making? It appears so, but I'd be interested in how you think it
> can be extended to yes/no/don't know about the "pure" AI endeavour.
If you mean by "pure AI endeavour" the creation of artificial
consciousness, then definitely the question of free will &
determinism is relevant.
The canonical argument against artificial consciousness goes
something like: humans have free will, and free will is essential
to human consciousness. Machines, being deterministic, do not
have free will; therefore, they can't have a human-like
consciousness.
Now, should free will be possible in a deterministic entity, this
argument goes poof.
------------------------------
Date: 12 Jun 88 18:43:17 GMT
From: uflorida!novavax!proxftl!bill@umd5.umd.edu (T. William Wells)
Subject: Re: Free Will & Self-Awareness
I really do not want to further define Objectivist positions on
comp.ai. I have also seen several suggestions that we move the
free will discussion elsewhere. Anyone object to moving it to
sci.philosophy.tech?
In article <463@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> ]In terms of the actual process, what happens is this: various
> ]entities provide the material which you base your thinking on
> ](and are thus necessary causes for what you think), but an
> ]action, not necessitated by other entities, is necessary to
> ]direct your thinking. This action, which you cause, is
> ]volition.
>
> Well, how do I cause it? Am I caused to cause it, or does it
> just happen out of nothing? Note that it does not amount to
> having free will just because some of the causes are inside
> my body. (Again, I am not sure what you mean by "other entities".)
OK, let's try to eliminate some confusion. When talking about an
action that an entity takes, there are two levels of action to
consider: the level associated with the action of the entity
itself, and the level associated with the processes that are
necessary causes for that entity-level action.
[Note: the following discussion applies only to the case where
the action under discussion can be said to be caused by the
entity.]
Let's consider a relatively uncontroversial example. Say I have
a hot stove and a pan over it. At the entity level, the stove
heats the pan. At the process level, the molecules in the stove
transfer energy to the molecules in the pan.
The next question to be asked in this situation is: is heat the
same thing as the energy transferred?
If the answer is yes, then the entity level and the process level
are essentially the same thing: the entity level is "reducible"
to the process level. If the answer is no, then we have what is
called an "emergent" phenomenon.
Another characterization of "emergence" is that, while the
process level is a necessary cause for the entity-level actions,
those actions are "emergent" if the process-level action is not a
sufficient cause.
Now I can actually try to answer your question. At the entity
level, the question "how do I cause it?" does not really have an
answer; like the hot stove, the entity just does it. However, at the
process level, one can look at the mechanisms of consciousness;
these constitute the answer to "how".
But note that answering this "how" does not answer the question
of "emergence". If consciousness is emergent, then the only
answer is that "volition" is simply the name for a certain class
of actions that a consciousness performs. And, being emergent,
volition could not be reduced to its necessary causes.
I should also mention that there is another use of "emergent"
floating around: it simply means that properties at the entity
level are not present at the process level. The emergent
properties of neural networks are of this type.
------------------------------
Date: 13 Jun 88 01:38:14 GMT
From: wind!ackley@bellcore.bellcore.com (David Ackley)
Subject: Free will does exist, but it is finite
The more you use it, the less you have.
-David Ackley
ackley@bellcore.com
------------------------------
Date: Sat, 11 Jun 88 21:34:48 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: knowledge, power and AI
In AIList V7 #6, Professor John McCarthy <JMC@SAIL.Stanford.EDU>
writes:
>There are three ways of improving the world.
>(1) to kill somebody
>(2) to forbid something
>(3) to invent something new.
To them, I would add
(4) to teach someone.
It is not enough to *know*.
You also must be able to *do*.
I recommend that Professor McCarthy read Carlos Castaneda's book
Tales of Power. But be warned - it requires more than intelligence and
knowledge.
Andy Ylikoski
------------------------------
Date: 11 Jun 88 04:00:17 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!terry@tis.llnl.gov (Every system needs one)
Subject: Re: Free Will & Self Awareness
In article <566@wsccs.UUCP>, dharvey@wsccs.UUCP (David Harvey) writes:
> In article <5323@xanth.cs.odu.edu>, Warren E. Taylor writes:
>> Adults understand what a child needs. A child, on his own, would quickly kill
>> himself. Also, pain is often the only teacher a child will listen to. He
>> learns to associate a certain action with undesirable consequences.
>
> Spanking certainly is a form of behavior alteration, although it might
> not be the best one in all circumstances. It has been demonstrated in
> experiment after experiment that positive reinforcement of desired
> behaviors works much better than negative reinforcement of undesirable
> behavior patterns.
David:
But what about the "pseudo-observer effect" (my pseudo-terminology)?
If you beat your child, and then the child proceeds to behave in the manner
you desired him (or her, if she's your daughter and not your son :-) to behave,
then the beating worked (produced the desired effect). In this fashion, the
parent (or total stranger who beats children) receives positive reinforcement
for the beating, since it effected the desired behavior.
Given that the child has responded to being beaten once, it is logical
to assume that he would do so again... this, coupled with the prior positive
reinforcement to the parent (or stranger), makes it more likely that they
will beat the child in the future, given a similar situation.
Consistent reinforcement is more effective than inconsistent
reinforcement, be it positive or negative.
Besides, you always have your hands; how often do you happen to have
ice-cream immediately available?
| Terry Lambert UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry |
| SLC, Utah |
| These opinions are not my companies, but if you find them |
| useful, send a $20.00 donation to Brisbane Australia... |
| 'Signatures; it's not how long you make them, it's how you make them long!' |
------------------------------
Date: Sat, 11 Jun 88 15:07 EST
From: <INS_ATGE%JHUVMS.BITNET@MITVMA.MIT.EDU>
Subject: Free Will vs. Society
I believe that as machines and human-created creatures begin to pop up
more and more in advanced jobs on our planet, we are going to have to
re-evaluate our systems of blame and punishment.
Some have said that an important part of free will is making a choice
between good and evil. Unfortunately, these concepts are rather
ill-defined. I subscribe to the notion that there are not universal
'good' and 'evil' properties... I know that others definitely disagree on
this point. My defense rests in the possibility of other extremely
different life systems, where perhaps things like murder and incest, and
some of the other common things we humans refer to as 'evil' are necessary
for that life form's survival.
Even today, there are many who do not think murder is evil under
certain circumstances (capital punishment, war, perhaps abortion).
I feel that we need to develop heuristics to deal with the changing
needs of our species, and with its needs regarding interaction with
non-human and/or non-carbon-based life forms.
Does determinism eradicate blame? Not necessarily. Let's say system
X caused unwanted harm to system Y. Even if system X had no other choice
than to cause harm to Y due to its current input and current state,
system X must still be "blamed" for the incident and hopefully
system X can be "fixed" within acceptable guidelines.
Do our current criminal punishments actually "fix" the erring
systems? What are socially acceptable "fixes"? (To many, capital
punishment is not acceptable.)
I am sure some may not like the idea of being "scientific" about
punishment of erring systems. But I think the key word should be
"fixes." Punishment by jailing may work on humans as a "fix", but
not on an IBM PC. The IBM PC will undoubtedly be better fixed by
replacing bad chips on its board. And this can be determined by
comparison and research.
------------------------------
Date: Fri, 10 Jun 88 15:38:43 +0100
From: mcvax!swivax!vierhout@uunet.UU.NET (Paul Vierhout)
Subject: Re: [DanPrice@HIS-PHOENIX-MULTICS.ARPA: Sociology vs Science Debate]
An important component of an emotion-reason model of human decision-making
is uncertainty handling, together with a model of 'rational
expectations': how to select information, choosing what to find out
and what to ignore. Ignorance is perhaps unavoidable, and it is certainly
an important source of apparently irrational behavior (although the
decision to ignore may itself be rational). This provides a link
between reason and apparently irrational behavior.
Of course, one source of information is _experience_ (one can decide to
ignore experience, which accounts, among other things, for a rational-emotional
system capable of experiencing, and at the same time ignoring, punishment),
and another may be _imagination_ (one can decide not to ignore
some possibilities and deduce, by imagining, some possible consequences that
would otherwise not have been considered). Both link to 'emotion'.
Of course, you could ignore this notice.
Visit Amsterdam this summer; the weather is fine and the canals are more
beautiful than ever.
------------------------------
End of AIList Digest
********************