AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 97 

Today's Topics:
Philosophy - Free Will

----------------------------------------------------------------------

Date: 6 May 88 22:48:09 GMT
From: paul.rutgers.edu!cars.rutgers.edu!byerly@rutgers.edu (Boyce Byerly )
Subject: Re: this is philosophy ??!!?

|In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
|(Gilbert Cockton) writes:
|>logical errors in an argument is not enough to sensibly dismiss it, otherwise
|>we would have to become resigned to widespread ignorance.
|
|To which acha@centro.soar.cs.cmu.edu (Anurag Acharya) replies: Just
|when is an assumption warranted? By your yardstick (it seems),
|'logically inconsistent' assumptions are more likely to be warranted
|than the logically consistent ones. Am I parsing you wrong or do you
|really claim that ?!

My feelings on this are that "hard logic", as perfected in first-order
predicate calculus, is a wonderful and very powerful form of
reasoning. However, it seems to have a number of drawbacks as a
rigorous standard for AI systems, from both the cognitive modeling and
engineering standpoints.

1) It is not a natural or easy way to represent probabilistic or
intuitive knowledge.

2) In representing human knowledge and discourse, it fails because it
does not recognize or deal with contradiction. In a rigorously
logical system, if

P ==> Q
~Q
P
then modus ponens gives Q, and we are left with the contradiction ~Q and Q.

If you don't believe human beings can have the above derivably
contradictory structures in their logical environments, I suggest you
spend a few hours listening to some of our great political leaders :-)
Mr. Reagan's statements on dealing with terrorists shortly before
Iranscam/Contragate leap to mind, but I am sure you can find equally
good examples in any political party. People normally keep a lot of
contradictory information in their minds, and not from dishonesty -
you simply can't tear out a premise because it causes a contradiction
after exhaustive derivation.
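The point can be made concrete with a toy sketch (purely illustrative; nothing here is from the original posting): a naive forward chainer handed P, the rule P ==> Q, and the assertion ~Q ends up believing both Q and ~Q at once.

```python
# Toy, hypothetical sketch: naive forward chaining over propositional
# symbols. Given the premises P and ~Q plus the rule P ==> Q, the
# belief set ends up containing both Q and ~Q.

facts = {"P", "~Q"}        # asserted premises
rules = [("P", "Q")]       # P ==> Q

changed = True
while changed:             # chain forward until no rule adds anything new
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# Both Q and ~Q are now believed: a contradiction, from which classical
# logic lets us derive anything at all (ex falso quodlibet).
print(sorted(facts))       # ['P', 'Q', '~Q']
```

A classical theorem prover would have to reject one of the premises outright; the human reasoners described above simply carry both around, because the contradiction only surfaces after derivation.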

3) Logic also falls down in manipulating "belief-structures" about the
world. The gap between belief and reality (whatever THAT is) is
often large. I am aware of this problem from reading texts on natural
language, but I think the problem occurs elsewhere, too.

Perhaps the logical deduction of western philosophy needs to take a
back seat for a bit and let less sensitive, more probabilistic
rationalities drive for a while.

Boyce
Rutgers University DCS

------------------------------

Date: 5 May 88 10:54:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Arguments against AI are arguments against human formalisms

In article <1579@pt.cs.cmu.edu> yamauchi@speech2.cs.cmu.edu (Brian Yamauchi)
writes:
>Cockton seems to be saying that humans do have free will, but is totally
>impossible for AIs to ever have free will. I am curious as to what he bases
>this belief upon other than "conflict with traditional Western values".
Isn't that enough? What's so special about academia that it should be
allowed to support any intellectual activity without criticism from
the society which supports it? Surely it is the duty of all academics
to look to the social implications of their work? Having free will,
they are not obliged to pursue lines of enquiry which are so controversial.

I have other arguments, which have popped up now and again in postings
over the last few years:

1) Rule-based systems require fully formalised knowledge-bases.

Rule-based systems are impossible in areas where no written
formalisation exists. Note how scholars like John Anderson
restrict themselves to proper psychological data. I regard Anderson
as a psychologist, not as an AI worker. He is investigating
computational accounts of known phenomena. As such, his research
is a respectable confrontation with the boundaries of the
computational paradigm. His writing is candid and I have yet to
see him proceed confidently from assumptions, though he often has
to live with some.

Conclusion: AI, as a collection of mathematicians and computer
scientists playing with machines, cannot formalise psychology where
no convincing written account exists. Advances here will come from
non-computational psychology first, as computational psychology has
to follow in the wake of the real thing.

The real thing unfortunately cuts a slow and shallow bow-wave.

[yes, I know about connectionism, but then you have to formalise the
inputs. Furthermore, you don't know what a PDP network does know]

2) Formal accounts of nearly every area of human activity are rare.

I have a degree in Education. For it I studied Philosophy, Psychology
and Sociology. My undergraduate dissertation was on Curriculum design
- an interdisciplinary topic which has to draw on inputs from a
number of disciplines. What I learnt here was which horse was best
suited for which course, and thus when not to use mathematics, which
was most of the time. I did philosophy with an (ex-)mathematician, BTW.

I know of few areas in psychology where there is a WRITTEN account of
human decision making which is convincing. If no written account
exists, no computational account, a more restrictive representation,
is possible. Computability adds nothing to 'writability', and many
things in this world have not been well represented using written
language. Academics are often seduced by the word, and forget that
the real decisions in life are rarely written down, and when they are
(laws, treaties) they seem worlds apart from what originally was said.

AI depends on being able to use written language (physical symbol
hypothesis) to represent the whole human and physical universe. AI
and any degree of literate-ignorance are incompatible. Humans, by
contrast, may be ignorant in a literate sense, but knowledgeable in
their activities. AI fails as this unformalised knowledge is
violated in formalisation, just as the Mona Lisa is indescribable.

Philosophically, this is a brand of scepticism. I'm not arguing that
nothing is knowable, just that public, formal knowledge accounts for
a small part of our effective everyday knowledge (see Heider).

So, AI person, you say you can compute it. Let's forget the Turing
Test and replace it with the Touring Test. Write down what you did
on your holidays, in English, then come up with a computational model
to account for everything you did. There is a warm-up problem which
involves the first 10 minutes as you step out of bed in the morning.
After 10 minutes, write down EVERYTHING you did (from video?). Then
elaborate what happened. This writing will be hard enough.

Get my point? The world's just too big for your head. The arrogance
of AI lies in its not grasping this. AI needs everything formalised
(world-knowledge problem). BTW, Robots aren't AI. Robots are robots.

3) The real world is social, not printed.

Because so little of our effective knowledge is formalised, we learn
in social contexts, not from books. I presume AI is full of relative
loners who have learnt more of what they publicly interact with from
books rather than from people. Well I didn't, and I prefer
interaction to reading.

Learning in a social context is the root of our humanity. It is
observations of this social context that reveal our free will in
action. Note that we become convinced of our free will, we do not
formalise accounts of it. This is the humanity which is beyond AI.
Feigenbaum & McCorduck (5th Gen) mention this 'socialisation'
objection to AI in passing, but produce no argument for rejecting it.

It is the strongest argument against AI. Look at language
acquisition in its social context. AI people cannot program a system
at the same rate as humans acquire language. OK, perhaps 'n'
generations of AI workers could slowly program a NLP system up to
competence. But as more gets added, there is more to learn, and there
would come a point that the programmers wouldn't understand the
system until they were a few years from retirement.

We spend enough of our time growing in this world to ever have time
to formalise it. The moment we grasp ourselves, we are already out
of date, for this grasping is now part of the self that was grasped.

Anyway, you did ask. Hope this makes sense.

------------------------------

Date: 6 May 88 18:00:42 GMT
From: bwk@MITRE-BEDFORD.ARPA (Barry W. Kort)
Subject: Re: Free Will & Self Awareness

James Anderson writes:

>If the world is deterministic I am denied free will because I can
>not determine the outcome of a decision. On the other hand, if
>the world is random, I am denied free will because I can not
>determine the outcome of a decision. Either element, determinacy
>or randomness, denies me free will, so no mixture of a
>deterministic world or a non-deterministic world will allow me
>free will.

It is not clear to me that a mixture of determinism and randomness
could not jointly create free will.

A Thermostat with no Furnace cannot control the room temperature.
A Furnace with no Thermostat cannot control the room temperature.
But join the two in a feedback loop, and together they give rise
to an emergent property: the ability to control the room temperature
to a desired value, notwithstanding unpredicted changes in the outside
weather.

Similarly, could it not be the case that Free Will emerges from
a balanced mixture of determinism (which permits us to predict the
likely outcome of our choices) and freedom (which allows us to
make arbitrary choices)? Just as the Furnace+Thermostat can drive
the room temperature to a desired value, Cause+Chance gives us
the power to drive the future state-of-affairs toward a desired
goal.
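A toy simulation (entirely illustrative; every constant below is invented) makes the Furnace+Thermostat point concrete: a deterministic on/off rule, buffeted by random "weather", still drives the room toward the setpoint.

```python
# Hypothetical sketch of the Furnace+Thermostat feedback loop: a
# deterministic controller plus random disturbances jointly produce an
# emergent ability to hold temperature near a desired value.

import random

def simulate(setpoint=20.0, steps=200, seed=0):
    rng = random.Random(seed)
    temp = 5.0                            # start well below the setpoint
    for _ in range(steps):
        furnace_on = temp < setpoint      # thermostat: deterministic rule
        if furnace_on:
            temp += 0.5                   # furnace heats the room
        temp -= 0.2                       # steady heat loss to the outside
        temp += rng.uniform(-0.3, 0.3)    # unpredicted weather ("chance")
    return temp

final = simulate()
print(round(final, 1))   # hovers near the 20.0 setpoint
```

Neither component alone regulates anything; the regulation emerges from the loop, which is the analogy being drawn for Cause+Chance.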

If you buy this line of reasoning, then perhaps we can get on to
the next level, which is: How do we select goal states which we
imagine to be desirable?

--Barry Kort

------------------------------

Date: 7 May 88 02:31:52 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Free Will & Self Awareness

In article <10942@sunybcs.UUCP>, sher@sunybcs (David Sher) writes:
> It seems that people are discussing free will and determinism by
> trying to distinguish true free will from random behavior. There is a
> fundamental problem with this topic. Randomness itself is not well
> understood. If you could get a good definition of random behavior you
> may have a better handle on free will.

In particular, consider the difference between _random_ behaviour
and _chaotic_ behaviour. A physical system may be completely described
by simple deterministic laws and yet be unpredictable in principle
(unpredictable by bounded computational mechanisms, that is).
[Pseudo-random numbers are really pseudo-chaotic.]
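A minimal illustration of the distinction (my own sketch, not from the posting): the logistic map is governed by a simple, exact, deterministic law, yet two trajectories starting one part in a billion apart diverge completely within a few dozen steps, so a bounded observer loses all predictive power.

```python
# Deterministic-but-unpredictable ("chaotic") behaviour in a few lines:
# the logistic map x -> 4x(1-x). The law is simple and exact, yet
# nearly identical initial conditions diverge until the gap is of
# order 1, which is why long-range prediction fails in practice.

def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-9     # almost identical starting points
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)             # the 1e-9 gap has grown to order 1
```

This is also the sense in which pseudo-random number generators are "pseudo-chaotic": a deterministic iteration whose output merely looks random to any observer who cannot track the state exactly.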

> Consider this definition of random behavior:
> X is random iff its value is unknown.

I do not know David Sher's telephone number, but I do not find it useful
to regard it as random (nor as chaotic). Conversely, when listening to a
Geiger counter, I am quite sure whether or not I have heard a click, but
I believe that the clicks are random events.

------------------------------

Date: 7 May 88 02:57:46 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Free Will & Self Awareness

In article <1179@bingvaxu.cc.binghamton.edu>,
vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
> In article <4543@super.upenn.edu> lloyd@eniac.seas.upenn.edu.UUCP
(Lloyd Greenwald) writes:
> >This is a good point. It seems that some people are associating free will
> >closely with randomness.
>
> Yes, I do so. I think this is a necessary definition.
>
> Consider the concept of Freedom in the most general sense. It is
> opposed by the concept of Determinism.

For what it is worth, my "feeling" of "free will" is strongest when I
act in accord with my own character / value system &c. That is, when I
act in a relatively predictable way. There is a strong philosophical
and theological tradition of regarding free will and some sort of
determinism as compatible. If I find myself acting in "random" or
unpredictable ways, I look for causes outside myself ("oh, I brought the
wrong book because someone has shuffled the books on my shelf").
Randomness is *NOT* freedom, it is the antithesis of freedom.
If something else controls my behaviour, the randomness of the
something else cannot make _me_ free.

I suppose the philosophical position I can accept most readily is the
one which identifies "free will" with being SELF-determined. That is,
an agent possesses free will to the extent that its actions are
explicable in terms of the agent's own beliefs and values.
For example, I give money to beggars. This is not at all random; it is
quite predictable. But I don't do it because someone else makes me do it,
but because my own values and beliefs make it appropriate to do so.
A perfectly good person, someone who always does the morally appropriate
thing because that's what he likes best, might well be both predictable
and as free as it is possible for a human being to be.

> There has been a great debate as to whether quantum uncertainty was
> subjective or objective. The subjectivists espoused "hidden variables"
> theories (i.e.: there are determining factors going on, we just don't
> know them yet, the variables are hidden). These theories can be tested.
> Recently they have been shown to be false.

Hidden variables theories are not "subjectivist" in the usual meaning of
the term; they ascribe quantum uncertainty to objective physical processes.
They haven't been shown false. It has been shown that **LOCAL** hidden
variables theories are not consistent with observation, but NON-local
theories (I believe there are at least two current) have not been falsified.

------------------------------

Date: 6 May 88 23:05:54 GMT
From: oliveb!tymix!calvin!baba@sun.com (Duane Hentrich)
Subject: Re: Free Will & Self Awareness

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>It is not uncommon for a child to "spank" a machine which misbehaves.
>But as adults, we know that when a machine fails to carry out its
>function, it needs to be repaired or possibly redesigned. But we
>do not punish the machine or incarcerate it.

Have you not bashed or kicked a vending machine until it gave up the
junk food you paid for? It is my experience that swatting a non-solid
state TV tuner sometimes results in clearing up the picture. Indeed we
do "punish" our machines.

Sometimes with results which clearly outweigh the damage done by
such punishment.

>Why then, when a human engages in undesirable behavior, do we resort
>to such unenlightened corrective measures as yelling, hitting, or
>deprivation of life-affirming resources?

For the same reason that the Enter/Carriage Return key on many keyboards
is hit repeatedly and with great force, i.e. frustration with an
inefficient/ineffective interface which doesn't produce the desired results.

No value judgements re results or punishments here.

If this doesn't belong here, please forgive.

d'baba Duane Hentrich ...!hplabs!oliveb!tymix!baba

Claimer: These are only opinions since everything I know is wrong.
Copyright notice: If you're going to copy it, copy it right.

------------------------------

Date: 7 May 88 06:05:46 GMT
From: quintus!ok@sun.com (Richard A. O'Keefe)
Subject: Re: Free Will & Self-Awareness

In article <2070015@otter.hple.hp.com>, cwp@otter.hple.hp.com (Chris Preist)
writes:
> The brain is a product of the spinal chord, rather than vice-versa.

I'm rather interested in biology; if this is a statement about human
ontogeny I'd be interested in having a reference. If it's a statement
about phylogeny, it isn't strictly true. In neither case do I see the
implications for AI or philosophy. It is not clear that "develops
late" is incompatible with "is fundamental". For example, the
sociologists hold that our social nature is the most important thing
about us. In any case, not all sensation passes through the spinal
cord. The optic nerve comes from the brain, not the spinal cord.
Or isn't vision "sensation"?

> For this reason, I believe that the goals of strong AI can only be
> accomplished by techniques which accept the importance of sensation.
> Connectionism is the only such technique I know of at the moment.

Eh? Now we're really getting to the AI meat. Connectionism is about
computation; how does a connectionist network treat "sensation" any
differently from a Marr-style vision program? Nets are interesting
machines, but there's still no ghost in them.

------------------------------

Date: Sat, 07 May 88 18:50:03 +0100
From: "Gordon Joly, Statistics, UCL"
<gordon%stats.ucl.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Hapgood.

Intelligence comes in both continuous and discrete forms
(cf. Vol 6 #91).
Light is both waves and particles; anybody see a problem?

Gordon Joly,
Department of Statistical Sciences,
University College London,
Gower Street,
LONDON WC1E 6BT,
U.K.

------------------------------

End of AIList Digest
********************
