AIList Digest             Monday, 9 May 1988       Volume 6 : Issue 93 

Today's Topics:
Philosophy - Free Will

----------------------------------------------------------------------

Date: 2 May 88 10:42:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Sorry, no philosophy allowed here.

> Could the philosophical discussion be moved to "talk.philosophy"? (John Nagle)
I am always suspicious of any academic activity which has to request that it
becomes a philosophical no-go area. I know of no other area of activity which
is so dependent on such a wide range of unwarranted assumptions. Perhaps this
has something to do with the axiomatic preferences of its founders, who came
from mathematical traditions where you could believe anything as long as it was
logically consistent. Before the 5th Generation scare, AI in the UK had been
sat on for dodging too many methodological issues. Whilst, like the AI
pioneers, they "could see no reasons WHY NOT [add list of major controversial
positions]", Lighthill could see no reasons WHY in their work.

> What about compatibilism? There are a lot of arguments that free will is
> compatible with strong determinism. (The ones I've seen are riddled with
> logical errors, but most philosophical arguments I've seen are.) (R. O'Keefe)
I would not deny the plausibility of this approach. However, detection of
logical errors in an argument is not enough to sensibly dismiss it, otherwise
we would have to become resigned to widespread ignorance. My concern over AI
is that, like some psychology, it has no integration with social theories, especially
those which see 'reality' as a negotiated outcome of social processes, and not
logically consistent rules. If the latter approach to 'reality', 'truth' etc.
were feasible, why have we needed judges to deliver equity? For some AI
enthusiasts, the answer, of course, is that we don't. In the brave new world,
machines will interpret the law unequivocally, making the world a much fairer
place :-) Anyway, everyone knows that mathematicians are much smarter
than lawyers and can catch up with them in a few months. Hey presto, rule-base!

> One of the problems with the English Language is that most of the
> words are already taken. ( --Barry Kort)
ALL the words that exist are taken! And if the West Coast had managed to take
more of them, we wouldn't have needed that silly Beach Boys talk ('far out' ->
'how extraordinary', 'well, fancy that', etc. :-)) AI was a natural term in the
late 50's before the whole concept of definable and measurable intelligence was
shot through in the 1960s on statistical, methodological and sociological
grounds. Given the changed intellectual climate, it would be sensible if the
mathematical naivety of the late 1950s were replaced by the more sophisticated
positions of at least the 1970s. There's no need to change names, just absorb
AI into computer science, linguistics, psychology, management etc. That would
leave workers in advanced computer applications free to get on with pragmatic
issues with no pressure to address the pottier postures of 'pure' AI.

> I would like to learn how to imbue silicon with consciousness,
> awareness, free will, and a value system. (--Barry Kort)
But why! Surely you can't have been bullied that much at school to
have developed such a materialist view of human nature? :-) :-)

> Suppose I were able to inculcate a Value System into silicon. And in the
> event of a tie among competing choices, I use a random mechanism to force
> a decision. Would the behavior of my system be very much different from a
> sentient being with free will? (--Barry Kort)
Oh brave Science! One minute it's Mind on silicon, the next it's a little
randomness to explain the inexplicable. Random what? Which domain? Does it
close? Is it enumerable? Will it type out Shakespeare? More seriously, 'forcing
decisions' is a feature of Western capitalist society (a historical point
please, not a political one). There are consensus-based (small) cultures where
decisions are never forced and the 'must do something now' phenomenon is
mercifully rare. Your system should prevaricate, stall, duck the
issue, deny there's a problem, pray, write to an agony aunt, ask its
mum, wait a while, get its friends to ring it up and ask it out ...

Before you put your value system on Silicon, put it on paper. That's hard
enough, so why should a dumb bit of constrained plastic and metal promise any
advance over the frail technology of paper? If you can't write it down, you
cannot possibly program it.

So come on, you AI types, let's see your *DECLARATIVE TESTIMONIALS* on
this newsgroup by the end of the month. Lay out your value systems
in good technical English. If you can't manage it, or even a little of it,
should you really keep believing that it will fit onto silicon?

------------------------------

Date: 4 May 88 21:10:50 GMT
From: dvm@yale-bulldog.arpa (Drew Mcdermott)
Subject: Free Will


My contribution to the free-will discussion:

Suppose we have a robot that models the world temporally, and uses
its model to predict what will happen (and possibly for other purposes).
It uses Qualitative Physics or circumscription, or, most likely, various
undiscovered methods, to generate predictions. Now suppose it is in a
situation that includes various objects, including an object it calls R,
which it knows denotes itself. For concreteness, assume it believes
a situation to obtain in which R is standing next to B, a bomb with a
lit fuse. It runs its causal model, and predicts that B will explode,
and destroy R.

Well, actually it should not make this prediction, because R will be
destroyed only if it doesn't roll away quickly. So, what will R do? The
robot could apply various devices for making causal prediction, but they
will all come up against the fact that some of the causal antecedents of R's
behavior *are situated in the very causal analysis box* that is trying to
analyze them. The robot might believe that R is a robot, and hence that
a good way to predict R's behavior is to simulate it on a faster CPU, but
this strategy will be in vain, because this particular robot is itself.
No matter how fast it simulates R, at some point it will reach the point
where R looks for a faster CPU, and it won't be able to do that simulation
fast enough. Or it might try inspecting R's listing, but eventually it
will come to the part of the listing that says "inspect R's listing."
The strongest conclusion it can reach is that "If R doesn't roll away,
it will be destroyed; if it does roll away, it won't be." And then of
course this conclusion causes R to roll away.
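
The regress is easy to exhibit concretely. A minimal Python sketch follows
(purely illustrative, with a hypothetical function name; McDermott of course
gives no code): predicting R means simulating R, but R's own program contains
this very prediction step, so the simulation never bottoms out.

    def predict_R(depth=0):
        # Simulating R eventually reaches the step "predict R's behavior",
        # which spawns the same simulation again, one level deeper.
        return predict_R(depth + 1)

    try:
        predict_R()
    except RecursionError:
        # No unconditional self-prediction is possible;
        # only conditional conclusions remain.
        print("no unconditional self-prediction")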

Hence any system that is sophisticated enough to model situations that its own
physical realization takes part in must flag the symbol describing that
realization as a singularity with respect to causality. There is simply
no point in trying to think about that part of the universe using causal
models. The part so infected actually has fuzzy boundaries. If R is
standing next to a precious art object, the art object's motion is also
subject to the singularity (since R might decide to pick it up before
fleeing). For that matter, B might be involved (R could throw it), or
it might not be, if the reasoner can convince itself that attempts to
move B would not work. But all this is a digression. The basic point
is that robots with this kind of structure simply can't help but think of
themselves as immune from causality in this sense. I don't mean that they
must understand this argument, but that evolution must make sure that their
causal-modeling system includes the "exempt" flag on the symbols denoting
themselves. Even after a reasoner has become sophisticated about physical
causality, his model of situations involving himself continues to have this
feature. That's why the idea of free will is so compelling. It has nothing
to do with the sort of defense mechanism that Minsky has proposed.
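
A minimal sketch of such an "exempt" flag, assuming a toy rule format and
invented names throughout (an illustration of the idea, not McDermott's
system): the modeler answers unconditionally for ordinary symbols, but
returns only conditionals for the symbol flagged as denoting itself.

    class CausalModel:
        def __init__(self, self_symbol):
            self.self_symbol = self_symbol  # symbol known to denote the agent
            # Outcomes conditional on the agent's own action (toy format).
            self.conditionals = {
                "stay":      "R is destroyed when B explodes",
                "roll away": "R survives",
            }

        def predict(self, symbol):
            if symbol == self.self_symbol:
                # Causality singularity: the causal antecedents of this
                # symbol's behavior sit inside this very analysis box,
                # so only conditional predictions come back.
                return [f"if {symbol} does '{act}': {outcome}"
                        for act, outcome in self.conditionals.items()]
            # Ordinary objects get unconditional causal predictions.
            return ["B will explode"] if symbol == "B" else []

    model = CausalModel(self_symbol="R")
    print(model.predict("B"))  # ['B will explode']
    print(model.predict("R"))  # conditionals only; choosing among them is
                               # what the agent experiences as free will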

I would rather not phrase the conclusion as "People don't really have
free will," but rather as "Free will has turned out to be possession of
this kind of causal modeler." So people and some mammals really do have
free will. It's just not as mysterious as one might think.

-- Drew McDermott

------------------------------

Date: 6 May 88 12:23:46 GMT
From: sunybcs!rapaport@boulder.colorado.edu (William J. Rapaport)
Subject: Free Will and Self-Reference

In article <28437@yale-celray.yale.UUCP> dvm@yale.UUCP (Drew Mcdermott) writes:
>
>Suppose we have a robot that models the world temporally, and uses
>its model to predict what will happen... Now suppose it is in a
>situation that includes various objects, including an object it calls R,
>which it knows denotes itself.
>The robot could apply various devices for making causal prediction, but they
>will all come up against the fact that some of the causal antecedents of R's
>behavior *are situated in the very causal analysis box* that is trying to
>analyze them. The robot might believe that R is a robot, and hence that
>a good way to predict R's behavior is to simulate it on a faster CPU, but
>this strategy will be in vain, because this particular robot is itself.
>...
>Hence any system that is sophisticated enough to model situations that its own
>physical realization takes part in must flag the symbol describing that
>realization as a singularity with respect to causality.

Followers of this debate should, at this point, familiarize themselves
with the literature on "essential indexicals" and "quasi-indexicality",
philosophical analyses designed for precisely such issues about
self-reference. Here are some pertinent references, each with pointers to
the literature:

Castaneda, Hector-Neri (1966), "`He': A Study in the Logic of
Self-Consciousness," Ratio 8: 130-157.

Castaneda, Hector-Neri (1967), "On the Logic of Self-Knowledge,"
Nous 1: 9-21.

Castaneda, Hector-Neri (1967), "Indicators and Quasi-Indicators,"
American Philosophical Quarterly 4: 85-100.

Castaneda, Hector-Neri (1968), "On the Logic of Attributions of
Self-Knowledge to Others," Journal of Philosophy 64: 439-456.

Perry, John (1979), "The Problem of the Essential Indexical," Nous 13:
3-21.

Rapaport, William J. (1986), "Logical Foundations for Belief Representation,"
Cognitive Science 10: 371-422.

William J. Rapaport
Assistant Professor

Dept. of Computer Science || internet: rapaport@cs.buffalo.edu
SUNY Buffalo              || bitnet: rapaport@sunybcs.bitnet
Buffalo, NY 14260         || uucp: {decvax,watmath,rutgers}!sunybcs!rapaport
(716) 636-3193, 3180      ||

------------------------------

Date: 3 May 88 15:36:40 GMT
From: bwk@MITRE-BEDFORD.ARPA (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

I also went back to reread Professor Minsky's theory of Free Will in the
concluding chapters of _Society of Mind_. I am impressed with the
succinctness with which Minsky captures the essential idea that
individual behavior is generated by a mix of causal elements (agency
motivated by awareness of the state-of-affairs vis-a-vis one's value system)
and chance (random selection among equal-valued alternatives).
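
That mix is easy to state concretely. Here is a minimal sketch (my own
illustration; _Society of Mind_ contains no such code, and all names are
hypothetical): pick the highest-valued alternative, and let chance settle
exact ties.

    import random

    def decide(alternatives, value):
        # The causal component: rank the options by the value system.
        best = max(value(a) for a in alternatives)
        tied = [a for a in alternatives if value(a) == best]
        # The chance component: random selection among equal-valued options.
        return random.choice(tied)

    # Two equally valued choices are settled by the random mechanism:
    print(decide(["tea", "coffee"], value=lambda a: 1))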

The only other treatises on Free Will that I resonated with were the
ones by Raymond Smullyan ("Is God a Taoist?" in _The Tao Is Silent_ and
reprinted in Hofstadter and Dennett's _The Mind's I_) and the book by
Daniel Dennett (_Elbow Room: The Varieties of Free Will Worth Wanting_).

My own contribution to this discussion is summarized in the only free
verse I ever composed in my otherwise prosaic career:


Free Will

or

Self Determination



I was what I was.

I am what I am.

I will be what I will be.


--Barry Kort

------------------------------

Date: Thu, 5 May 88 08:49 EDT
From: GODDEN%gmr.com@RELAY.CS.NET
Subject: Re: Free Will

$0.02: "The Mysterious Stranger" by Mark Twain is a novella dealing with
free will and determinism that the readers of this list may find interesting.

------------------------------

Date: Tue, 3 May 88 01:38 EST
From: Jeffy <JCOGGSHALL%HAMPVMS.BITNET@MITVMA.MIT.EDU>
Subject: Re: Proper subject matter of AILIST

______________________________________________________________________________

>Date: 28 Apr 88 15:42:18 GMT
>From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
>If the combined list is to keep its present readership, which includes some
>of the major names in AI (both Minsky and McCarthy read AILIST), the content
>of this one must be improved a bit.

- As to questions about what is the appropriate content of AIList -
Presumably it is coextensive with the field "AI" - which I would find
difficult to put a lid on - if you could do it, I would be happy. I
suggest, however, that you cannot, for all research field boundaries are,
by their nature, arbitrary (or, if not arbitrary, then _always_ extremely
fuzzy at the edges).
Quality I cannot speak for; however, suppose you limit the
group of people who can make contributions to AIList to, perhaps, PhDs in
computer science, or perhaps the membership of AAAI....
Or perhaps you limit the content to specific technical issues (as
opposed to the "philosophical" debates about AI & Free Will and AI & Ethics,
or AI & mimicking "human" consciousness...).
Well, there are several reasons why this is just plain bad (and you
must understand that as I argue my position - I am a person who is
profoundly interested in AI & Ethics, and in hearing what people currently
working in "AI" think about ethics, as it relates to the work they are
doing).
Reason 1: What would be the point of making such regulations? Why not
respond to it case by case? Since, as I note, there are fuzzy boundaries,
why not let the readership (ever heard of representative democracy?) be
the ones who put the social pressure on the contributors to contribute what
they want to hear about?
Anti-your-argument 3: So, we don't want to lose our present
readership.... (which includes Minsky & McCarthy) - and we should tailor
the "quality" (please tell me what this is) and the "subject matter" of our
submissions to "keep" our "major names"? Why? Because they give the list a
"good atmosphere"? Because they can hear what "lesser figures" have to say
and perhaps drop a few pearls of wisdom into our midst? Why?
Reason 2: I don't know about you, but I have this bias against the
sectarianism and ivory-towerism of the scientific community at large, such
that the common society is excluded from decisions that are being made
about the direction of research etc. that are going to have major effects
in years to come. Often, there is an unspoken "we are better because we
_really_ know what we're doing" attitude among the scientific community,
and I would like you to tell me why your message isn't representative of
it.
Jeff Coggshall

------------------------------

Date: 3 May 88 19:05:06 GMT
From: centro.soar.cs.cmu.edu!acha@PT.CS.CMU.EDU (Anurag Acharya)
Subject: this is philosophy ??!!?

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
>I am always suspicious of any academic activity which has to request that it
>becomes a philosophical no-go area. I know of no other area of activity which
>is so dependent on such a wide range of unwarranted assumptions. Perhaps this
>has something to do with the axiomatic preferences of its founders, who came
>from mathematical traditions where you could believe anything as long as it
>was logically consistent.

Just when is an assumption warranted? By your yardstick, it seems,
'logically inconsistent' assumptions are more likely to be warranted than
logically consistent ones. Am I parsing you wrong, or do you really claim
that?!

>However, detection of logical errors in an argument is not enough to
>sensibly dismiss it, otherwise we would have to become resigned to
>widespread ignorance.

Same problem again (sigh!). Just how do you propose to argue with logically
inconsistent arguments? Come on, Mr. Cockton, what gives?

>My concern over AI is that, like some psychology, it has no integration
>with social theories, especially those which see 'reality' as a negotiated
>outcome of social processes, and not logically consistent rules.

You wish to assert that reality is a negotiated outcome of social processes???
Imagine, Mr. Cockton: you are standing on the 36th floor of a building, and you
and your mates decide that you are Superman and can jump out without getting
hurt. By the 'negotiated outcome of social processes' claptrap, you really
are Superman. Would you then jump out and have fun?

> Your system should prevaricate, stall, duck the
>issue, deny there's a problem, pray, write to an agony aunt, ask its
>mum, wait a while, get its friends to ring it up and ask it out ...

Whatever does all that stuff have to do with intelligence per se?

Mr. Cockton, what constitutes a proof among you and your
"philosopher/sociologist/..." colleagues? Since logical consistency is taboo,
logical errors are acceptable, and reality and truth are functions of the
current whim of the largest organized gang around (oh, I am sorry, they are
the 'negotiated (who by?) outcomes of social processes (what processes?)'),
how do you guys conduct research? Get together and vote on motions, or what?

-- anurag

--
Anurag Acharya    Arpanet: acharya@centro.soar.cs.cmu.edu

"There's no sense in being precise when you don't even know what you're
talking about" -- John von Neumann

------------------------------

End of AIList Digest
********************
