AIList Digest            Tuesday, 2 Aug 1988       Volume 8 : Issue 34 

Today's Topics:

Free Will:

How to dispose of naive science types (short)
The deterministic robot determines that it needs to become nondeterministic.
Root issue of free will and problems in war zones

----------------------------------------------------------------------

Date: 27 Jul 88 09:09:44 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: How to dispose of naive science types (short)

In article <531@ns.UUCP> logajan@ns.UUCP (John Logajan x3118) writes:
>Please explain to me how an unproveable theory (one that makes no unique
>predictions) can be useful?
>
Because people use them. Have a look at the social cognition
literature.

I understood your argument as saying that non-scientific theories
(a.k.a. assumptions) cannot be useful and, conversely, that the only
useful theories are scientific ones.

If my understanding is correct, then this is very narrow-minded and
smacks of an epistemological bigotry which no one can possibly live up
to in their day-to-day interactions.

Utility must not be confounded with one textbook epistemology.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: 27 Jul 88 15:34:09 GMT
From: bwk@mitre-bedford.ARPA (Barry W. Kort)
Reply-to: bwk@mbunix (Kort)
Subject: The deterministic robot determines that it needs to become
nondeterministic.

In article <19880727030413.0.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
JMC@SAIL.STANFORD.EDU (John McCarthy) writes:
>Almost all the discussion is too vague to be a contribution. Let me
>suggest that AI people concentrate their attention on the question of how
>a deterministic robot should be programmed to reason about its own free
>will, as this free will relates both to its past choices and to its future
>choices. Can we program it to do better in the future than it did in the
>past by reasoning that it could have done something different from what it
>did, and this would have had a better outcome? If yes, how should it be
>programmed? If no, then doesn't this make robots permanently inferior to
>humans in learning from experience?

To my mind, the robot's problem becomes interesting precisely when it
runs out of knowledge to predict the outcome of the choices open to it.

The classical metaphors for this state are "The Lady or the Tiger?",
the Parable of Buridan's Ass, and "Dorothy meets the Scarecrow at the
fork in the road." The children's game of Rock, Scissors, Paper
illustrates the predicament faced by a deterministic robot.

In the above scenarios, the resolution is to pick a path at random
and pursue it first. To operationalize the decision, one needs to
implement the Axiom of Choice. One needs a random number generator.
Fortunately, it is possible to build one using a Quantum Amplifier.
(Casting lots will do, if you live in a low-tech society.)
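
The Rock, Scissors, Paper case can be made concrete with a few lines
of (anachronistic, purely illustrative) Python; the cycling strategy
and the predictor are invented for the example, and a library
pseudo-random generator stands in for the Quantum Amplifier:

    import random

    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    MOVES = list(BEATS)            # ["rock", "paper", "scissors"]

    def deterministic_player(n):
        # Cycles rock -> paper -> scissors: its move is a fixed
        # function of the round number, so it can be predicted.
        return MOVES[n % 3]

    def exploiter(n):
        # Predicts the cycle and plays whatever beats it.
        predicted = MOVES[n % 3]
        return next(m for m in MOVES if BEATS[m] == predicted)

    def random_player(n):
        # Casts lots: no deterministic opponent can average better
        # than a draw against it, whatever pattern it looks for.
        return random.choice(MOVES)

    losses = sum(BEATS[exploiter(n)] == deterministic_player(n)
                 for n in range(30))
    print(losses)   # 30 -- the deterministic player loses every round

Against random_player, by contrast, the exploiter's predictions are
worthless; it wins, loses, and draws a third of the time each.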

Thus, I conclude that a deterministic robot will perceive itself at
a disadvantage relative to a robot who can implement the Axiom of
Choice, and will decide (of its own free will) that it must evolve
to include nondeterministic behavior.

Note, by the way, that decision in the face of uncertainty entails a
risk, so a byproduct of such behavior is anxiety. In other words,
emotion is the expression of vanishing ignorance.

--Barry Kort

------------------------------

Date: Wed 27 Jul 88 13:53:59-PDT
From: Leslie DeGroff <DEGROFF@INTELLICORP.ARPA>
Subject: root issue of free will and problems in war zones


The new AI-in-the-war-zone and the ongoing free-will discussions
both seem to skirt one of the fundamental cruxes of intelligence,
natural and artificial (and even of "non-intelligent" decision-making
processes). There is a pair of quantities that appears in every
decision process: one is the information/knowledge in the
system/agent/individual, and the other is the scale and variation of
the universe to be modeled. For the real world the latter is always
much, much greater than the former: Universe >> some subsystem. Even
if we leave out infinities, there is this many-orders-of-magnitude
scale problem. The inequality holds regardless of how well the
internal representation matches the external "facts". Engineers,
programmers, and line managers get their noses rubbed in this fact
pretty often (but perhaps not often enough to prevent
horrible/scary/dumb mistakes from being made). This ratio more or
less means that systems working in the real world can always be
surprised and/or make mistakes.
The universe does have regularities that allow the causal and
structural mapping of a smaller "mind" or "representation" to cover a
lot of ground, but it also remains filled with places where you need
to know the specifics to know what is happening. Even the simple
Newtonian physics of multiple orbiting bodies becomes a combinatorial
problem very quickly.
As regards the war zone, we have a similar case (the Russians and
KAL) which had no particular computer component... just missed or
missing communications/information and a human decision. There is a
limit to the precision and availability of knowledge, and an even
lower limit on the amount of processing that can be done. The
universe and Murphy will get us every time we start thinking "it's
ALL under control".
Related to this fundamental fact is that in many cases "WILL" turns
out to be a concept used by humans to represent the immediate
uncomputability/unpredictability of pieces of the real universe,
including our own actions and consciousness. I find WILL a much more
productive concept to contemplate than FREE WILL. I can be
scientifically educated and still talk and think of inanimate objects
like a truck or a storm as having willful behavior. Even simple
physical systems with unsensed or unpredictable variability will
often be treated as if decisions are being made: will my door handle
give me a static spark today? Much of the discussion on determinism
vs. nondeterminism simply misses the point that neither our brains
nor our computers will be able to "compute" in real time all that
might be of importance to a given situation, and that no realistic
set of sensors can gather enough information to be "complete".
From an AI perspective these issues are at the heart of the hardness
of the problems: how can we have an open-ended learning system
without catatonic behavior (computation of all derivations from an
ever-increasing fact base)? And what kind of knowledge representation
is efficient for learning from sensors, effective at cutting off
computation so that time-critical decisions can be made (see the
sketch below), and able to recognize when the knowledge it contains
doesn't apply (the classic case of the potential infinity of
negations)?
(Trick question for the brain modelers: does sleep act like a Lisp
garbage collector, i.e. is part of the sleep process an elimination
of material that is not to be stored, and a reorganization of the
rest?)
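
A minimal sketch of such a cutoff, in Python (an illustration of the
general "anytime" idea only; the evaluate function and the option
names are invented for the example, not anything proposed above):

    import time

    def decide(options, evaluate, deadline_s=0.05):
        # Refine estimates until the clock runs out, then act on the
        # best estimate so far -- rather than attempting to compute
        # all derivations from an ever-increasing fact base.
        stop = time.monotonic() + deadline_s
        best, best_score, depth = options[0], float("-inf"), 1
        while time.monotonic() < stop:
            for opt in options:
                score = evaluate(opt, depth)  # deeper = more inference
                if score > best_score:
                    best, best_score = opt, score
            depth += 1
        return best   # the best option found when time expired

    # e.g. decide(["brake", "swerve", "continue"],
    #             lambda opt, depth: lookahead_value(opt, depth))
    # where lookahead_value is some (hypothetical) bounded-depth
    # evaluation of an option's consequences.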
Much of applied statistics and measurement theory is oriented to
METRICS for comparing systems and models and determining "predicts
correctly" and "fails to predict", where the models are parametric
equations. The question is how to evaluate a model for "surprise
potential" or "unincluded critical factors".
Les Degroff DeGroff@intellicorp.com

(I disclaim all blame: I ain't paid to think, but I have this bad
habit, and neither parents, schools, nor employers have been able to
cure it.)

------------------------------

Date: 28 Jul 88 18:20:16 GMT
From: umix!umich!eecs.umich.edu!itivax!dhw@uunet.UU.NET (David H.
West)
Subject: Re: free will


In a previous article, John McCarthy writes:
> Let me
> suggest that AI people concentrate their attention on the question of how
> a deterministic robot should be programmed to reason about its own free
> will, as this free will relates both to its past choices and to its future
> choices. Can we program it to do better in the future than it did in the
> past by reasoning that it could have done something different from what it
> did, and this would have had a better outcome? If yes, how should it be
> programmed? If no, then doesn't this make robots permanently inferior to
> humans in learning from experience?

At time t0, the robot has available a partial (fallible) account of:
the world-state, its own possible actions, the predicted
effects of these actions, and the utility of these
effects. Suppose it wants to choose the action with maximum
estimated utility, and further suppose that it can and does do this.
Then its decision is determined. Free will (whatever that is)
is merely the freedom to do something that doesn't maximize its
utility, which is ex hypothesi not a freedom worth exercising.

At a later time t1, the robot has available all of the above, plus
the outcome of its action. It is therefore not in the same state as
previously. It would make no sense to ignore the additional
information. If the outcome was as expected, then there is no
reason to make a different choice next time unless some other
element of the situation changes. If the outcome was not as
predicted, the robot needs to update its models. This updating is
another choice-of-action-under-incomplete-information problem, so
again the robot can only maximize its own fallibly-estimated
utility, and again its behavior is determined, not (just) by its
physical structure, but by the meta-goal of acting coherently.
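
The whole decide-observe-update cycle fits in a few lines; a sketch
in Python (the numbers, action names, and update rule are invented
for illustration):

    utilities = {"left": 0.4, "right": 0.6}  # fallible estimates, t0

    def choose():
        # Deterministic: the same estimates always yield the same
        # action (ties broken alphabetically, also deterministically).
        return max(sorted(utilities), key=lambda a: utilities[a])

    def update(action, observed, rate=0.5):
        # At t1 the outcome is folded back into the model. The
        # updating rule is itself fixed, so the loop stays determined
        # by structure plus history -- yet the robot still learns.
        utilities[action] += rate * (observed - utilities[action])

    action = choose()        # "right", the maximum estimated utility
    update(action, 0.1)      # outcome much worse than predicted
    print(choose())          # "left" -- a different, determined choice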

If the robot thought about its situation, it would presumably
conclude that it felt no impediment to doing what was obviously the
correct thing to do, and that it therefore had free will.

-David West dhw%iti@umix.cc.umich.edu

------------------------------

Date: 29 Jul 88 18:18:48 GMT
From: well!sierch@lll-lcc.llnl.gov (Michael Sierchio)
Subject: Re: How to dispose of naive science types (short)


Theories are not for proving!

A theory is a model, a description, an attempt to preserve and describe
phenomena -- science is not concerned with "proving" or "disproving"
theories. Proof may have a slightly different meaning for attorneys than
for mathematicians, but scientists are closer to the mathematician's
definition -- when they use the word at all.

A theory may or may not adequately describe the phenomena in question,
and accordingly it is a "good" or "bad" theory. Of two "good" theories,
the one that is "more elegant" or "simpler" may be preferred -- but this
is an aesthetic or performance judgement, and again has nothing to do
with proof.

Demonstration and experimentation show (to one degree or another) the value
of a particular theory in a particular domain -- but PROOF? bah!
--
Michael Sierchio @ Small Systems Solutions

sierch@well.UUCP
{pacbell,hplabs,ucbvax,hoptoad}!well!sierch

------------------------------

End of AIList Digest
********************
