AIList Digest            Friday, 17 Jun 1988      Volume 7 : Issue 37

Today's Topics:
  And Yet More Free Will
----------------------------------------------------------------------
Date: 13 Jun 88 17:23:21 GMT
From: well!sierch@lll-lcc.llnl.gov (Michael Sierchio)
Subject: Re: Free Will & Self-Awareness
The debate about free will is funny to one who has been travelling
with mystics and sages -- who would respond by saying that freedom
and volition have nothing whatsoever to do with one another: that
volition is conditioned by internal and external necessity and is in
no way free.
The ability to make plans, to set goals, and to have the range of
volition to do what one wants and to accomplish one's own aims still
leaves open the question of the source of what one wants.
--
Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
2733 Fulton St / Berkeley / CA / 94705 (415) 845-1755
sierch@well.UUCP {..ucbvax, etc...}!lll-crg!well!sierch
------------------------------
Date: 13 Jun 88 22:10:37 GMT
From: colin@CS.UCLA.EDU (Colin F. Allen )
Reply-to: lanai!colin@seismo.CSS.GOV (Colin F. Allen )
Subject: Re: Free Will vs. Society
In article <19880613194742.7.NICK@INTERLAKEN.LCS.MIT.EDU>
INS_ATGE@JHUVMS.BITNET writes:
>ill-defined. I subscribe to the notion that there are not universal
>'good' and 'evil' properties...I know that others definately disagree on
>this point. My defense rests in the possibility of other extremely
>different life systems, where perhaps things like murder and incest, and
>some of the other common things we humans refer to as 'evil' are necessary
>for that life form's survival.
But look, this is all rather naive.....you yourself are giving a criterion
(survival value) for the acceptability of certain practices. So even if
murder etc. are not universal evils, you do nonetheless believe that harming
others without good cause is bad. So, after all, you do accept some
general characterization of good and bad.
------------------------------
Date: Tue, 14 Jun 88 13:37 O
From: Antti Ylikoski tel +358 0 457 2704
<YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: RE: Free will
That would seem to make sense.
I'm a spirit/body dualist: humans have a spirit or a soul; we have
not so far made a machine with one, but we can make new souls (new
humans).
Then the idea arises that one could model the human soul.
Antti Ylikoski
------------------------------
Date: Tue, 14 Jun 88 08:56 EST
From: <FLAHERTY%CTSTATEU.BITNET@MITVMA.MIT.EDU>
Subject: (1) Free will & (2) Reinforcement
Is it possible that the free will topic has run its course (again)?
After all, the subject has been pondered for millennia by some pretty
powerful minds with no conclusions, or even widely accepted working
definitions, in sight. Perhaps the behavioral approach of ignoring
(or at least "pushing") problems that seem to be intractable is not
crazy in this instance. Anyway, it's getting *boring* folks.
Now, re: reinforcement. It comes in (at least) two varieties --
positive and negative -- both of which are used to *increase* the
probability of a response. Positive reinforcement is just old
fashioned reward. Give your dog a treat for sitting up and it is more
likely to do it again.
Negative reinforcement consists in the removal of an aversive stimulus
(state of affairs) which leads to increased response probability. If
you take aspirin when you have a headache and the pain goes away, you
are more likely to take aspirin next time you have a headache. Thus,
negative reinforcement is the flip-side of positive reinforcement (and
often difficult to distinguish from it).
The effect of punishment is to *decrease* response probability. The
term is usually used to describe a situation where an aversive
stimulus is presented following the occurrence of an "undesirable"
behavior. So, Susie gets a spanking because she stuck her finger in
her little brother's eye (even though he probably did something to
deserve *his* punishment -- there is no real justice). The hope is
that Susie will "learn her lesson" and not do it again.
Point is, punishment and negative reinforcement are *not* equivalent.
See any introductory Psychology text for more (and probably better?)
examples.
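
A minimal sketch in Python may make the distinction concrete (this is
an illustration only; the event names and the fixed 0.1 step size are
arbitrary choices, not a model anyone here has proposed):

def update_response_probability(p, event, step=0.1):
    """Adjust the probability that a behavior is repeated.

    event is one of:
      'positive_reinforcement' -- a reward is presented
      'negative_reinforcement' -- an aversive stimulus is removed
      'punishment'             -- an aversive stimulus is presented
    """
    if event in ('positive_reinforcement', 'negative_reinforcement'):
        return min(1.0, p + step)   # reinforcement raises probability
    if event == 'punishment':
        return max(0.0, p - step)   # punishment lowers probability
    return p

# The dog that gets a treat for sitting up:
print(update_response_probability(0.5, 'positive_reinforcement'))  # 0.6
# Aspirin removes the headache, so aspirin-taking becomes more likely:
print(update_response_probability(0.5, 'negative_reinforcement'))  # 0.6
# Susie's spanking makes eye-poking less likely:
print(update_response_probability(0.5, 'punishment'))              # 0.4

The point the code makes is the same as the prose: negative
reinforcement sits on the same side of the ledger as reward, and only
punishment pushes the response probability down.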
--Tom <FLAHERTY@CTSTATEU.BITNET>
------------------------------
Date: 14 Jun 88 14:31:43 GMT
From: bc@media-lab.media.mit.edu (bill coderre)
Subject: Re: Free Will & Self-Awareness
In article <6268@well.UUCP> sierch@well.UUCP (Michael Sierchio) writes:
>The debate about free will is funny to one who has been travelling
>with mystics and sages -- who would respond by saying that freedom
>and volition have nothing whatsoever to do with one another....
(this is gonna sound like my previous article in comp.ai, so you
can read that too if you like)
Although what free will is and how something gets it are interesting
philosophical debates, they are not AI.
Might I submit that comp.ai is for the discussion of AI: its
programming tricks and techniques, and maybe a smattering of social
repercussions and philosophical issues.
I have no desire to argue semantics and definitions, especially about
slippery topics such as the above.
And although the occasional note is interesting (and indeed my
colleague Mr Sierchio's is sweet), endless discussions of whether some
lump of organic matter (either silicon- or carbon-based) CAN POSSIBLY
have "free will" (which only begs the question of where to buy some and
what to carry it in) are best confined to a group where the readership
is interested in such things.
Naturally, I shall not belabour you with endless discussions of neural
nets merely because of their interesting modelling of Real(tm)
neurons. But if you are interested in AI techniques and their rather
interesting approaches to the fundamental problems of intelligence and
learning (many of which draw on philosophy and epistemology), please
feel free to inquire.
I thank you for your kind attention.....................mr bc
------------------------------
Date: 14 Jun 88 14:17:27 GMT
From: bc@media-lab.media.mit.edu (bill coderre)
Subject: Re: Who else isn't a science?
In article <34227@linus.UUCP> marsh@mbunix (Ralph Marshall) writes:
>In article <10785@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>>Ain't it wonderful? AI succeeded by changing the meaning of the word.
......(lots of important stuff deleted)
>I'm not at all sure that this is really the focus of current AI work,
>but I am reasonably convinced that it is a long-term goal that is worth
>pursuing.
Oh boy. Just wonderful. We have people who have never done AI arguing
about whether or not it is a science, whether or not it CAN ever
succeed, what the definition of Free Will is, and whether a computer
can have some.
It just goes on and on!
Ladies and Gentlemen, might I remind you that this group is supposed
to be about AI, and although there should be some discussion of its
social impact, and maybe even an enlightened comment about its
philosophical value, the most important thing is to discuss AI itself:
programming tricks, neat ideas, and approaches to intelligence and
learning -- not have semantic arguments or ones about whose dictionary
is bigger.
I submit that the definition of Free Will (whateverTHATis) is NOT AI.
I submit that those who wish to argue in this group DO SOME AI or at
least read some of the gazillions of books about it BEFORE they go
spouting off about what some lump of organic matter (be it silicon or
carbon based) can or cannot do.
May I also inform the above participants that a MAJORITY of AI
research is centered around some of the following:
Description matching and goal reduction
Exploiting constraints
Path analysis and finding alternatives
Control metaphors
Problem Solving paradigms
Logic and Theorem Proving
Language Understanding
Image Understanding
Learning from descriptions and samples
Learning from experience
Knowledge Acquisition
Knowledge Representation
(Well, my list isn't very good, since I just copied it out of the table
of contents of one of the AI books.)
Might I also suggest that if you don't understand the fundamental and
crucial topics above, you refrain from telling me what I am doing
with my research. As it happens, I am doing simulations of animal
behavior using Society of Mind theories. So I do lots of learning and
knowledge acquisition.
And if you decide to find out about these topics, which are extremely
interesting and fun, might I suggest a book called "The Dictionary of
Artificial Intelligence."
And of course, I have to plug Society of Mind both since it is the
source of many valuable new questions for AI to pursue, and since
Marvin Minsky is my advisor. It is also simple enough for high school
students to read.
If you have any serious AI questions, feel free to write to me (if
they are simple) or post them (if you need a lot of answers). I will
answer what I can.
I realize much of the banter is due to crossposting from
talk.philosophy, so folks over there, could you please avoid future
crossposts? Thanks...
Oh and Have a Nice Day................................mr bc
------------------------------
Date: Wed, 15 Jun 88 23:37 EDT
From: SUTHERS%cs.umass.edu@RELAY.CS.NET
Subject: Why I hope Aaron HAS disposed of the Free Will issue
Just a few years ago, I would have been delighted to be able to
participate in a network discussion on free will. Now I skip over
these discussions in the AIList. Why does it seem fruitless?
Many of the arguments seem endless, perhaps because they are arguments
about conclusions rather than assumptions. We disagree about
conclusions and argue, while never stating our assumptions. If
we did the latter, we'd find we simply disagree, and there would
be nothing to argue about.
But Aaron Sloman has put his finger on the pragmatic side of why
these discussions (though engaging for some) seem to be without
progress. Arguments about generic, undefined categories don't impact
on *what we do* in AI: the supposed concept does not have an image
in the design decisions we must make in building computational models
of interesting behaviors.
So in the future, if these discussions must continue, I hope that
the participants will have the discipline to try to work out how
the supposed issues at stake and their positions on them impact
on what we actually do, and use examples of the same in their
communications as evidence of the relevancy of the discussion.
It is otherwise too easy to generate pages of heated discussion
which really tell us nothing more than what our own prejudices are
(and even that is only implicit). -- Dan Suthers
------------------------------
Date: 15 Jun 88 17:20:50 GMT
From: uflorida!novavax!proxftl!bill@gatech.edu (T. William Wells)
Subject: Re: Free Will & Self Awareness
In article <558@wsccs.UUCP>, rargyle@wsccs.UUCP (Bob Argyle) writes:
> We genetically are programmed to protect that child (it may be a
> relative...);
Please avoid expressing your opinions as fact. There is
insufficient evidence that we are genetically programmed for ANY
adult behavior to allow that proposition to be used as if it were
an incontestable fact. (Keep in mind that this does NOT mean
that we are not structured to have certain capabilities, nor does
it deny phenomena like first-time learning.)
> IF we get some data on what 'free will' actually is out of AI, then let
> us discuss what it means. It seems we either have free will or we
> don't; finding out seems indicated after is it 3000 years of talk.vague.
I hate to call you philosophically naive, but this remark seems
to indicate that this is so. The real question debated by
philosophers is not "do we have free will", but "what does `free
will' mean". The question is not one of data, but rather of
interpretation. Generally speaking, once the latter question is
answered to a philosopher's satisfaction, the answer to the
former question is obvious.
Given this, one can see that it is not possible to test the
hypothesis with AI.
------------------------------
End of AIList Digest
********************