AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 98 

Today's Topics:
Philosophy - Free Will

----------------------------------------------------------------------

Date: 7 May 88 02:14:26 GMT
From: yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi)
Subject: Re: Arguments against AI are arguments against human formalisms

In article <1103@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> In article <1579@pt.cs.cmu.edu> yamauchi@speech2.cs.cmu.edu
(Brian Yamauchi) writes:
> >Cockton seems to be saying that humans do have free will, but that it is
> >totally impossible for AIs to ever have free will. I am curious as to what
> >he bases this belief upon other than "conflict with traditional Western
> >values".
> Isn't that enough? What's so special about academia that it should be
> allowed to support any intellectual activity without criticism from
> the society which supports it? Surely it is the duty of all academics
> to look to the social implications of their work? Having free will,
> they are not obliged to pursue lines of enquiry which are so controversial.

These are two completely separate issues. Sure, it's worthwhile to consider
the social consequences of having intelligent machines around, and of
course, the funding for AI research depends on what benefits are anticipated
by the government and the private sector.

This has nothing to do with whether it is possible for machines to have
free will. Reality does not depend on social consensus.

Or do you believe that the sun revolved around the earth before Copernicus?
After all, the heliocentric view was both controversial and in conflict
with the social consensus.

In any case, since when is controversy a good reason for not doing
something? Do you also condemn any political or social scientist who has
espoused controversial views?

> I have other arguments, which have popped up now and again in postings
> over the last few years:
>
> 1) Rule-based systems require fully formalised knowledge-bases.

This is a reasonable criticism of rule-based systems, but not necessarily
a fatal flaw.

> Conclusion, AI as a collection of mathematicians and computer
> scientists playing with machines, cannot formalise psychology where
> no convincing written account exists. Advances here will come from
> non-computational psychology first, as computational psychology has
> to follow in the wake of the real thing.

I am curious what sort of non-computational psychology you see as having had
great advances in recent years.

> [yes, I know about connectionism, but then you have to formalise the
> inputs.

For an intelligent robot (see below), you can take inputs directly from the
sensors.

> Furthermore, you don't know what a PDP network does know]

This is a broad overgeneralization. I would recommend reading Rumelhart &
McClelland's book. You can indeed discover what a PDP network has learned,
but for very large networks, the process of examining all of the weights
and activations becomes impractical. Which, at least to me, is
suggestive of an analogy with human/animal brains with regard to the
complexity of the synapse/neuron interconnections (just suggestive, not
conclusive, by any means).
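
As a concrete illustration (my own sketch in modern Python, not anything
from Rumelhart & McClelland): for a small enough network you really can
enumerate every weight and hidden activation and read off what it
computes - here a hand-set two-hidden-unit XOR net. Doing the same for
thousands of units is where inspection becomes impractical.

    # Sketch: inspecting the weights and activations of a tiny network.
    # The weights below are hand-chosen to implement XOR.
    import numpy as np

    W1 = np.array([[ 20.0,  20.0],   # hidden unit 1: "at least one input on"
                   [-20.0, -20.0]])  # hidden unit 2: "not both inputs on"
    b1 = np.array([-10.0, 30.0])
    W2 = np.array([20.0, 20.0])      # output unit: AND of the hidden units
    b2 = -30.0

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        h = sigmoid(W1 @ np.array(x) + b1)  # hidden activations, inspectable
        y = sigmoid(W2 @ h + b2)
        print(x, np.round(h, 3), "->", int(round(float(y))))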

> AI depends on being able to use written language (physical symbol
> hypothesis) to represent the whole human and physical universe.

Depends on which variety of AI.....

> BTW, Robots aren't AI. Robots are robots.

And artificially intelligent robots are artificially intelligent robots.

> 3) The real world is social, not printed.

The real world is physical -- not social, not printed. Unless you consider
it to be subjective, in which case if the physical world doesn't objectively
exist, then neither do the other people who inhabit it.

> Anyway, you did ask. Hope this makes sense.

Well, you raise some valid criticisms of rule-based/logic-based/etc systems,
but these don't preclude the idea of intelligent machines, per se. Consider
Hans Moravec's idea of building intelligence from the bottom up (starting
with simple robotic animals and working your way up to humans).

After all, suppose you could replace every neuron in a person's brain with
an electronic circuit that served exactly the same function, and afterwards,
the individual acted like exactly the same person. Wouldn't you still
consider him to be intelligent?

So, if it is possible -- or at least conceivable -- in theory to build an
intelligent being of some type, the real question is how.

______________________________________________________________________________

Brian Yamauchi INTERNET: yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 7 May 88 02:46:11 GMT
From: yamauchi@speech2.cs.cmu.edu (Brian Yamauchi)
Subject: Re: Free Will & Self-Awareness

In article <1099@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> In article <5100@pucc.Princeton.EDU> RLWALD@pucc.Princeton.EDU writes:
> > Are you saying that AI research will be stopped because when it ignores
> >free will, it is immoral and people will take action against it?
> Research IS stopped for ethical reasons, especially in Medicine and
> Psychology. I could envisage pressure on institutions to limit its AI
> work to something which squares with our ideals of humanity.

I can envisage pressure on institutions to limit work on sociology and
psychology to that which is compatible with orthodox Christianity. That
doesn't mean that this is a good idea.

> If the
> US military were not using technology which was way beyond the
> capability of its not-too-bright recruits, then most of the funding
> would dry up anyway. With the Pentagon's reported concentration on
> more short-term research, they may no longer be able to indulge their
> belief in the possibility of intelligent weaponry.

Weapons are getting smarter all the time. Maybe soon we won't need the
not-too-bright recruits.....

> > When has a 'doctrine' (which, by the way, is nothing of the sort with
> >respect to free will) any such relationship to what is possible?
> From this, I can only conclude that your understanding of social
> processes is non-existent. Behaviour is not classified as deviant
> because it is impossible, but because it is undesirable.

From this, I can only conclude that either you didn't understand the
question or I didn't understand the answer. What do the labels that society
places on certain actions have to do with whether any action is
theoretically possible? Anti-nuke activists may make it practically
impossible to build nuclear power plants -- they cannot make it physically
impossible to split atoms.

> The question is, do most people WANT a computational model of human
> behaviour? In these days of near 100% public funding of research,
> this is no longer a question that can be ducked in the name of
> academic freedom.

100% public funding????? Haven't you ever heard of Bell Labs, IBM Watson
Research Center, etc? I don't know how it is in the U.K., but in the U.S.
the major CS research universities are actively funded by large grants from
corporate sponsors. I suppose there is a more cooperative atmosphere here --
in fact, many of the universities here pride themselves on their close
interactions with the private research community.

Admittedly, too much of all research is dependent on government funds, but
that's another issue....

> Everyone is free to study what they want, but public
> funding of a distasteful and dubious activity does not follow from
> this freedom. If funding were reduced, AI would join fringe areas such as
> astrology, futurology and palmistry. Public funding and institutional support
> for departments implies a legitimacy to AI which is not deserved.

A modest proposal: how about a cease-fire in the name-calling war? The
social scientists can stop calling AI researchers crackpots, and the AI
researchers can stop calling social scientists idiots.

______________________________________________________________________________

Brian Yamauchi INTERNET: yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 7 May 88 05:54:20 GMT
From: vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn)
Subject: Formal Systems and AI


I have recently been thinking some about formal systems and AI, and have
been prompted by our recent conversations, as well as by an *excellent*
article by Chris Cherniak ("Undebuggability and Cognitive Science",
_Comm. ACM_, 4/88) to make some comments.

It seems patently obvious to me at this point that the following
statements are false:
1) The mind is a formal system.
2) Attempts to construct AI as a formal system can succeed.

These ideas seem to me to be the heart of "classical Cartesian Cognitive
Science" (e.g. Fodor, Chomsky). I assert that these positions are
based on an old, false view of a deterministic, deductive, reducible
world. The relative failure of strong, theoretical AI seems, in
hindsight, terribly obvious.

Cherniak takes the following stance: "A complete computational
approximation of the mind would be a huge, 'branchy,' holistically
structured, quick-and-dirty (i.e. computationally tractable, but
formally incorrect/incomplete) kludge. . .[as opposed to] a small set
of elegant, powerful, general principles, on the model of classical
mechanics."

This view is not only common-sensical, but is well-motivated by some
gross approximations about *real* intelligent systems and the *real*
physics of information systems. For example, let's say that I have a
computer so small that it could calculate a line in a truth table in the
time it takes for light to cross the diameter of a proton. Cherniak
concludes that there is then an upper bound of ~138 independent logical
propositions whose truth table such a machine could finish within the
lifetime of the universe. A tiny number!

More quotations: "Our basic methodological instinct. . .seems
to be to work out a model of the mind for 'typical' cases - most
importantly, very small cases - and then implicitly to suppose a grand
induction to the full-scale case of a complete human mind."
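
The arithmetic behind that bound is easy to check. A back-of-the-envelope
sketch (my constants, not Cherniak's exact ones): a truth table for n
propositions has 2^n lines, and even at one line per proton-crossing
light-time, the age of the universe only buys you about 2^136 of them.

    # Rough check of Cherniak's truth-table bound.
    # Assumed constants: proton diameter, speed of light, age of universe.
    import math

    c = 3.0e8                  # speed of light, m/s
    proton_diameter = 1.7e-15  # m (approximate)
    age_of_universe = 4.4e17   # s (roughly 14 billion years)

    time_per_line = proton_diameter / c      # seconds per truth-table line
    lines = age_of_universe / time_per_line  # lines computable since the big bang

    n_max = math.log2(lines)   # largest n with 2**n lines still feasible
    print(round(n_max))        # ~136, the same ballpark as the quoted ~138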

Instead, we see that large software systems (e.g. Star Wars), rather than
being elegant, correct/complete/verifiable formal systems, are huge,
unintelligible, bug-ridden masses. It is well known that programmers
quickly lose the ability to understand their own code, let alone verify
it. Visualization past three dimensions is practically impossible, yet
real information systems have thousands of dimensions.

This move away from formalism as a valid paradigm for AI seems perfectly
in step with non-von Neumann architectures (i.e. connectionism), as well
as other academic trends away from deterministic, deductive, reducible
theories towards the science of fuzzy, uncertain, multi-dimensional
information systems.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 2 May 88 14:33:05 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen
Smoliar)
Subject: Re: Free Will & Self-Awareness

In article <912@cresswell.quintus.UUCP> ok@quintus.UUCP
(Richard A. O'Keefe) writes:
>In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert
>Cockton) writes:
>> For AI workers (not AI developers/exploiters who are just raiding the
>> programming abstractions), the main problem they should recognise is
>> that a rule-based or other mechanical account of cognition and decision
>> making is at odds with the doctrine of free will which underpins most
>>Western morality.
>
>What about compatibilism? There are a lot of arguments that free will is
>compatible with strong determinism. (The ones I've seen are riddled with
>logical errors, but most philosophical arguments I've seen are.)
>When I see how a decision I have made is consistent with my personality,
>so that someone else could have predicted what I'd do, I don't _feel_
>that this means my choice wasn't free.


Hear, hear! Cockton's statement is the sort of doctrinaire proclamation which
is guaranteed to muddy the waters of any possible dialogue between those who
practice AI and those who practice the study of philosophy. He should either
prepare a brief substantiation or relegate it to the cellar of outrageous
vacuities crafted solely to attract attention!

------------------------------

Date: Sat, 7 May 88 23:39:41 EDT
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: AIList V6 #91 - Philosophy

Brian Yamauchi correctly paraphrases section 30.6 of _Society of Mind_:

|Everything, including that which happens in our brains,
|depends on these and only these: fixed, deterministic laws or random
|accidents. There is no room on either side for any third alternative.

He agrees, and goes on to suggest that free will is a decision making
process. But that doesn't explain why we feel that we're free. I
claim that we feel free when we decide to not try further to
understand how we make the decisions: the sense of freedom comes from
a particular act - in which one part of the mind STOPs deciding, and
accepts what another part has done. I think the "mystery" of free will
is clarified only when we realize that it is not a form of decision making
at all - but another kind of action or attitude entirely, namely, of how
we stop deciding.

------------------------------

Date: 5 May 88 16:23:18 GMT
From: hpda!hp-sde!hpfcdc!hpfclp!nancyk@ucbvax.Berkeley.EDU (Nancy
Kirkwood)
Subject: Re: this is philosophy ??!!?


nancyk@hpfclp.sde.hp.com Nancy Kirkwood at HP, Fort Collins

Come now!!! Don't try to defend "logical consistency" with exaggeration
and personal attack.

> Just when is an assumption warranted ? By your yardstick (it seems ),
> 'logically inconsistent' assumptions are more likely to be warranted than
^^^^
> the logically consistent ones.

It's important to remember that the rules of logic we are discussing come
from Western (European) cultural traditions, and derive much of their power
"from the consent of the governed," so to speak. We have agreed that if
we present arguments which satisfy the rules of this system, that the
arguments are correct, and we are speaking "truth." This is a very useful
protocol, but we should not be so narrow as to believe that it is the only
yardstick for truth.

The "laws" of physics certainly preclude jumping off a 36 story building
and expecting not to get hurt, but physicists would be the first to admit
that these laws are incomplete, and the natural processes involved are
*not* completely known, and possibly never will be. Nor can we be sure,
being fallible humans who don't know all the facts, that our supposed
logical arguments are useful or even correct.

"Reality" in the area of human social interactions is largely if not
completely the "negotiated outcome of social processes." It has been a
topic of debate for thousands of years at least as to whether morality
has an abstract truth unrelated to the social milieu it is found in.

> Since logical consistency is taboo, logical errors are acceptable,
> reality and truth are functions of the current whim of the largest organized
> gang around ( oh! I am sorry, they are the 'negotiated ( who by ? ) outcomes
> of social processes ( what processes ? )') how do you guys conduct research ?

Distorting someone's statements and then attacking the distortions is
not an effective means of carrying on a productive discussion (though
it does stir up interest :-)).

-nancyk
* * * *********************************************** * * *
* "There are more things in heaven and earth, Horatio, *
* than are dreamt of in your philosophy." *
* -Shakespeare *
* * * *********************************************** * * *

------------------------------

Date: 7 May 88 16:12:55 GMT
From: vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn)
Subject: Re: Free Will & Self Awareness

In article <940@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe)
writes:
>In article <1179@bingvaxu.cc.binghamton.edu>,
vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn) writes:
>> In article <4543@super.upenn.edu> lloyd@eniac.seas.upenn.edu.UUCP
(Lloyd Greenwald) writes:
>> >This is a good point. It seems that some people are associating free will
>> >closely with randomness.
>>
>> Yes, I do so. I think this is a necessary definition.
>>
>> Consider the concept of Freedom in the most general sense. It is
>> opposed by the concept of Determinism.
>
>For what it is worth, my "feeling" of "free will" is strongest when I
>act in accord with my own character / value system &c. That is, when I
>act in a relatively predictable way.

Yes, it is more complicated, isn't it? What if, instead of having a
"value system", you were rather in the grip of some hideous, controlling
"ideology"? Let's say you're Ronald Reagan, or Botha, or Gorbachev. I
wouldn't want to deny them free will, but would say that their
ideologies are highly determining (at least on political issues). On
the other hand, let's say that I am an intelligent, impressionable child
in an "ideas bazaar," say a super-progressive, highly integrated school
in New York. Then my value system will be in constant flux.

Also, don't confuse predictability with determinism. There are degrees
of predictability. If I know the distribution of a random variable, I
can make some degree of prediction.
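
(One way to make "degrees of predictability" concrete - my gloss, not
anything Joslyn says - is the Shannon entropy of the distribution: the
lower the entropy, the more predictable the variable.)

    # Sketch: entropy as a measure of (un)predictability.
    import math

    def entropy(dist):
        """Shannon entropy in bits of a discrete distribution."""
        return -sum(p * math.log2(p) for p in dist if p > 0)

    print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit, least predictable
    print(entropy([0.9, 0.1]))  # biased coin: ~0.47 bits, partly predictable
    print(entropy([1.0]))       # certain outcome: 0 bits, fully determined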

>Randomness is *NOT* freedom, it is the antithesis of freedom.
>If something else controls my behaviour, the randomness of the
>something else cannot make _me_ free.

Yes, it is critical to keep the levels of analysis clear. If something
external *determines* your behavior (for example, a value system), then
your behavior is *determined* no matter what. The *cause* of your
behavior being free in no way implies you are free. But we aren't
talking about something controlling you, we are talking about whether
you are controlled. My assertion is that if you are completely
controlled, then you cannot act randomly. If you are free, then you
can.

Try this: freedom implies the possibility of randomness, not its
necessity?

>That is,
>an agent possesses free will to the extent that its actions are
>explicable in terms of the agent's own beliefs and values.
>For example, I give money to beggars. This is not at all random; it is
>quite predictable. But I don't do it because someone else makes me do it,
>but because my own values and beliefs make it appropriate to do so.

In order for something to be not at all random, it must not just be
quite predictable, but rather completely predictable. To that extent,
it is determined.

Again, we're talking at different levels (probably a
subjective/objective problem). Let's try this: if you are free, that
means it is possible for you to make a choice. That is, you are free to
scrap your value system. At each choice you make, there is a small
chance that you will do something different, something unpredictable
given your past behavior/current value system. If, on the other hand,
you *always* adhere to that value system, then from my perspective, that
value system (as an *external cause*) is determining your behavior, and
you are not free. The problem here may be one of observation: if a coin
"chooses" to come up heads each time, I will say that it's necessary
that it does, as an inductive inference.

There are a lot of issues here. I don't think either of us has thought
them through very clearly.

>A perfectly good person, someone who always does the morally appropriate
>thing because that's what he likes best, might well be both predictable
>and as free as it is possible for a human being to be.

He is not free so long as it is not possible for him to act immorally. I
say it is then impossible to distinguish between someone who is free to
act immorally, and chooses not to, and someone who is determined to act
morally.

>It has been shown that **LOCAL** hidden
>variables theories are not consistent with observation, but NON-local
>theories (I believe there are at least two current) have not been falsified.

Thanks for the clarification.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: Sun, 8 May 88 21:52:59 -0200
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@CUNYVM.CUNY.EDU>
Subject: discussion involving free will and self-awareness

I've had a superficial look at the discussion involving free will and
self-awareness in AIList.

In my opinion, what we mean by the English word "will" depends upon
the philosophical viewpoint of the person in whose world the noun
exists.

"Will" might be a collection of neurological and psychological mechanisms
(ah ... like the frontal lobes being the central controller of the brain).

The noun "will" gets another meaning if we consider one's
responsibility of his deeds, to mankind, and ... for those who are
religious ... to God. The CIA is said to hypnotize people to commit
murders. The questions arise whether such people are evil ... whether
they have committed a sin ... whether they did their crimes of their own
"free will".

And "self-awareness" ... in my opinion, simply, the mental processes of
humans are so complicated and developed that we can form good models of
ourselves.

Antti Ylikoski

------------------------------

Date: Sun 8 May 88 10:25:51-PDT
From: Paul Roberts <PMR@Score.Stanford.EDU>
Subject: Philosophy, free will


If anyone is actually interested in reading something intelligent about
these topics, particularly as they apply to AI, I recommend they read
`Brainstorms' by Daniel Dennett. One chapter is called, I believe,
`The kind of free-will worth having'.

Paul Roberts

------------------------------

End of AIList Digest
********************
