AIList Digest            Friday, 10 Jun 1988       Volume 7 : Issue 26 

Today's Topics:

Queries:
definition of information
Route planners
Induction in Current ES tools
Re: Response to: AI in weather forecasting

Free Will:
How to dispose of the free will issue (long)
brain research on free will
Re: How to dispose of the free will issue

----------------------------------------------------------------------

Date: Thu, 9 Jun 88 08:07:49 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: definition of information

It is often acknowledged that information theory has nothing to say
about information in the usual sense, as having to do with meaning.
It is only concerned with a statistical measure of the likelihood of
a particular signal sequence with respect to an ensemble of signal
sequences, a metric misleadingly dubbed by Hartley, Shannon, and
others "amount of information".

Can anyone point me to a coherent definition of information respecting
information content, as opposed to merely "quantity of information"?

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Thu 9 Jun 88 08:46:27-EDT
From: MCHALE@RADC-TOPS20.ARPA
Subject: Route planners


I am interested in the application of AI/expert-system techniques
to flight/route planning in constrained domains
(threats, high traffic, ...).
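
As a toy sketch of what planning in such constrained domains involves
(the grid, threat cells, and costs below are invented for illustration,
not taken from any existing system): a route search in which entering a
threat zone carries a heavy cost penalty.

    import heapq

    def plan_route(neighbors, cost, start, goal):
        # Uniform-cost search over a waypoint graph; the cost function
        # can fold in penalties for threats or high-traffic corridors.
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            g, node, path = heapq.heappop(frontier)
            if node == goal:
                return g, path
            if node in visited:
                continue
            visited.add(node)
            for nxt in neighbors(node):
                if nxt not in visited:
                    heapq.heappush(frontier,
                                   (g + cost(node, nxt), nxt, path + [nxt]))
        return None

    THREAT = {(1, 1), (1, 2)}  # invented threat cells on a 4x4 grid

    def neighbors(p):
        x, y = p
        return [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < 4 and 0 <= y + dy < 4]

    def cost(a, b):
        # Entering a threat cell is ten times as expensive as open air.
        return 10.0 if b in THREAT else 1.0

    print(plan_route(neighbors, cost, (0, 0), (3, 3)))  # detours around threats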

I would appreciate receiving pointers to existing systems,
references/bibliography and copies of reports/publications
in these areas. Kindly reply to:

James Lawton
RADC/COES
Griffiss AFB
NY 13441-5700
Phone: (315)-330-2973

lawtonj@radc-lonex.arpa

------------------------------

Date: Thu, 09 Jun 88 12:17:27 EDT
From: <sriram@ATHENA.MIT.EDU>
Subject: Induction in Current ES tools


Although ID3 is supposed to do generalization and goody stuff like
that, my experience with some of the current inductive tools is that
they seem to: 1) consider only positive examples; 2) do no generalization;
and 3) be like (very efficient) decision table evaluators.
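
For concreteness, a minimal sketch of the information-gain criterion at
the core of ID3, which does weigh negative examples when choosing a
split (the data and attribute names below are invented):

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon entropy of a list of class labels, in bits.
        n = len(labels)
        return -sum(c / n * math.log2(c / n)
                    for c in Counter(labels).values())

    def information_gain(examples, attr):
        # Expected entropy reduction from splitting on attr -- the ID3
        # criterion.  Negative examples shape the choice just as much
        # as positive ones.
        base = entropy([e["label"] for e in examples])
        remainder = 0.0
        for v in set(e[attr] for e in examples):
            subset = [e["label"] for e in examples if e[attr] == v]
            remainder += len(subset) / len(examples) * entropy(subset)
        return base - remainder

    data = [
        {"outlook": "sunny", "windy": "no",  "label": "+"},
        {"outlook": "sunny", "windy": "yes", "label": "+"},
        {"outlook": "rain",  "windy": "no",  "label": "-"},
        {"outlook": "rain",  "windy": "yes", "label": "-"},
    ]
    print(information_gain(data, "outlook"))  # 1.0 -- a perfect split
    print(information_gain(data, "windy"))    # 0.0 -- a useless split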

Any comments?

Sriram

------------------------------

Date: 9 Jun 88 18:13:44 GMT
From: dan%meridian@ads.com (Dan Shapiro)
Reply-to: dan@ads.com (Dan Shapiro)
Subject: Re: Response to: AI in weather forecasting


Has anyone tried generating an approximate weather forecast with
heuristic methods and then using the result to initialize a numeric
algorithm?

I have seen this technique applied to problems in computational
chemistry (conformational analysis of molecules) with great effect - 3
to 4 orders of magnitude improvement over the efficiency of the
numerical algorithm alone. That kind of improvement makes it possible
to attack problems of wholly different complexity.
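
A toy sketch of the warm-start idea, with a plain Newton iteration
standing in for the numerical algorithm (the function and starting
points are invented; actual gains depend on the problem):

    def newton(f, df, x0, tol=1e-10, max_iter=1000):
        # Plain Newton iteration; returns the root found and the
        # number of iterations used to reach it.
        x = x0
        for i in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x, i
            x -= fx / df(x)
        return x, max_iter

    f = lambda x: x**3 - 2*x - 5   # stand-in objective, root near 2.09
    df = lambda x: 3*x**2 - 2

    _, cold = newton(f, df, x0=100.0)  # naive starting point
    _, warm = newton(f, df, x0=2.0)    # rough "heuristic" ballpark guess
    print(cold, warm)  # the warm start converges in far fewer iterations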

Dan Shapiro

------------------------------

Date: 1 Jun 88 20:04:36 GMT
From: mcvax!ukc!warwick!cvaxa!aarons@uunet.uu.net (Aaron Sloman)
Subject: How to dispose of the free will issue (long)

(I wasn't going to contribute to this discussion, but a colleague
encouraged me. I haven't read all the discussion, so apologise if
there's some repetition of points already made.)

Philosophy done well can contribute to technical problems (as shown by
the influence of philosophy on logic, mathematics, and computing, e.g.
via Aristotle, Leibniz, Frege, Russell).

Technical developments can also help to solve or dissolve old
philosophical problems. I think we are now in a position to dissolve the
problems of free will as normally conceived, and in doing so we can make
a contribution to AI as well as philosophy.

The basic assumption behind much of the discussion of free will is

(A) there is a well-defined distinction between systems whose
choices are free and those which are not.

However, if you start examining possible designs for intelligent systems
IN GREAT DETAIL you find that there is no one such distinction. Instead
there are many "lesser" distinctions corresponding to design decisions
that a robot engineer might or might not take -- and in many cases it is
likely that biological evolution tried both (or several) alternatives.

There are interesting, indeed fascinating, technical problems about the
implications of these design distinctions. Exploring them shows that
the question whether we have free will loses its interest: among the
REAL distinctions between possible designs there is no one distinction
that fits the presuppositions of the philosophical uses of the term
"free will". It does not map directly onto any one of the many
different interesting design distinctions. (A) is false.

"Free will" has plenty of ordinary uses to which most of the
philosophical discussion is irrelevant. E.g.

"Did you go of your own free will or did she make you go?"

That question draws a well-understood distinction between two possible
explanations for someone's action. But the answer "I went of my own free
will" does not express a belief in any metaphysical truth about human
freedom. It is merely a denial that certain sorts of influences
operated. There is no implication that NO causes, or no mechanisms were
involved.

This is a frequently made common sense distinction between the existence
or non-existence of particular sorts of influences on a particular
individual's action. However, there are other, deeper distinctions that
relate to different sorts of designs for behaving systems.

The deep technical question that I think lurks behind much of the
discussion is

"what kinds of designs are possible for agents and what are the
implications of different designs as regards the determinants of
their actions?"

I'll use "agent" as short for "behaving system with something like
motives". What that means is a topic for another day. Instead of one big
division between things (agents) with and things (agents) without free
will we'll then come up with a host of more or less significant
divisions, expressing some aspect of the pre-theoretical free/unfree
distinction. E.g. here are some examples of design distinctions (some
of which would subdivide into smaller sub-distinctions on closer
analysis); a toy sketch of the first two follows the list:

- Compare (a) agents that are able simultaneously to store and compare
different motives with (b) agents that have no mechanisms enabling this:
i.e. they can have only one motive at a time.

- Compare (a) agents all of whose motives are generated by a single top
level goal (e.g. "win this game") with (b) agents with several
independent sources of motivation (motive generators - hardware or
software), e.g. thirst, sex, curiosity, political ambition, aesthetic
preferences, etc.

- Contrast (a) an agent whose development includes modification of its
motive generators and motive comparators in the light of experience with
(b) an agent whose generators and comparators are fixed for life
(presumably the case for many animals).

- Contrast (a) an agent whose motive generators and comparators change
partly under the influence of genetically determined factors (e.g.
puberty) with (b) an agent for whom they can change only in the light of
interactions with the environment and inferences drawn therefrom.

- Contrast (a) an agent whose motive generators and comparators (and
higher order motivators) are themselves accessible to explicit internal
scrutiny, analysis and change, with (b) an agent for which all the
changes in motive generators and comparators are merely uncontrolled
side effects of other processes (as in addictions, habituation, etc.)
[A similar distinction can be made as regards motives themselves.]

- Contrast (a) an agent pre-programmed to have motive generators and
comparators change under the influence of likes and dislikes, or
approval and disapproval, of other agents, and (b) an agent that is only
influenced by how things affect it.

- Compare (a) agents that are able to extend the formalisms they use for
thinking about the environment and their methods of dealing with it
(like human beings) and (b) agents that are not (most other animals?)

- Compare (a) agents that are able to assess the merits of different
inconsistent motives (desires, wishes, ideals, etc.) and then decide
which (if any) to act on with (b) agents that are always controlled by
the most recently generated motive (like very young children? some
animals?).

- Compare (a) agents with a monolithic hierarchical computational
architecture where sub-processes cannot acquire any motives (goals)
except via their "superiors", with only one top level executive process
generating all the goals driving lower level systems with (b) agents
where individual sub-systems can generate independent goals. In case
(b) we can distinguish many sub-cases e.g.
(b1) the system is hierarchical and sub-systems can pursue their
independent goals if they don't conflict with the goals of their
superiors
(b2) there are procedures whereby sub-systems can (sometimes?) override
their superiors.

- Compare (a) a system in which all the decisions among competing goals
and sub-goals are taken on some kind of "democratic" voting basis or a
numerical summation or comparison of some kind (a kind of vector
addition perhaps) with (b) a system in which conflicts are resolved on
the basis of qualitative rules, which are themselves partly there from
birth and partly the product of a complex high level learning system.

- Compare (a) a system designed entirely to take decisions that are
optimal for its own well-being and long term survival with (b) a system
that has built-in mechanisms to ensure that the well-being of others is
also taken into account. (Human beings and many other animals seem to
have some biologically determined mechanisms of the second sort - e.g.
maternal/paternal reactions to offspring, sympathy, etc.).

- There are many distinctions that can be made between systems according
to how much knowledge they have about their own states, and how much
they can or cannot change because they do or do not have appropriate
mechanisms. (As usual, there are many different sub-cases. Having
something in a write-protected area is different from not having any
mechanism for changing stored information at all.)
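
By way of a toy illustration of the first two contrasts in this list
(all names and numbers are invented; this is my sketch, not anything
from the papers cited below): an agent with several independent motive
generators and an explicit store-and-compare mechanism, i.e.
alternative (a) in both cases.

    from dataclasses import dataclass

    @dataclass
    class Motive:
        description: str
        insistence: float  # how strongly the motive demands attention

    class Agent:
        def __init__(self, generators, comparator):
            self.generators = generators  # independent motive sources
            self.comparator = comparator  # picks among stored motives
            self.motives = []

        def step(self, world):
            # Each generator may produce a motive (thirst, curiosity...).
            for generate in self.generators:
                m = generate(world)
                if m is not None:
                    self.motives.append(m)
            # Store and compare, rather than obeying the latest motive.
            return self.comparator(self.motives) if self.motives else None

    thirst = lambda w: Motive("find water", 0.9) if w["dry"] else None
    curiosity = lambda w: Motive("explore", 0.4)
    strongest = lambda ms: max(ms, key=lambda m: m.insistence)

    agent = Agent([thirst, curiosity], strongest)
    print(agent.step({"dry": True}).description)  # "find water" wins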

There are some overlaps between these distinctions, and many of them are
relatively imprecise, but all are capable of refinement and can be
mapped onto real design decisions for a robot-designer (or evolution).

They are just some of the many interesting design distinctions whose
implications can be explored both theoretically and experimentally,
though building models illustrating most of the alternatives will
require significant advances in AI e.g. in perception, memory, learning,
reasoning, motor control, etc.

When we explore the fascinating space of possible designs for agents,
the question which of the various systems has free will loses interest:
the pre-theoretic free/unfree contrast totally fails to produce any one
interesting demarcation among the many possible designs -- it can be
loosely mapped on to several of them.

So the design distinctions define different notions of free:- free(1),
free(2), free(3), .... However, if an object is free(i) but not free(j)
(for i /= j) then the question "But is it really FREE?" has no answer.

It's like asking: What's the difference between things that have life and
things that don't?

The question is (perhaps) OK if you are contrasting trees, mice and
people with stones, rivers and clouds. But when you start looking at a
larger class of cases, including viruses, complex molecules of various
kinds, and other theoretically possible cases, the question loses its
point because it uses a pre-theoretic concept ("life") that doesn't have
a sufficiently rich and precise meaning to distinguish all the cases
that can occur. (Which need not stop biologists introducing a new
precise and technical concept and using the word "life" for it. But that
doesn't answer the unanswerable pre-theoretical question about precisely
where the boundary lies.)

Similarly "what's the difference between things with and things without
free will?" This question makes the false assumpton (A).

So, to ask whether we are free is to ask which side of a boundary we are
on when there is no particular boundary in question. (Which is one
reason why so many people are tempted to say "What I mean by free is..."
and they then produce different incompatible definitions.)

I.e. it's a non-issue. So let's examine the more interesting detailed
technical questions in depth.

(For more on motive generators, motive comparators, etc. see my (joint)
article in IJCAI-81 on robots and emotions, or the sequel "Motives,
Mechanisms and Emotions" in the journal Cognition and Emotion, Vol. 1,
No. 3, 1987).

Apologies for length.

Now, shall I or shan't I post this.........????

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
JANET aarons@cvaxa.sussex.ac.uk
BITNET: aarons%uk.ac.sussex.cvaxa@uk.ac
or aarons%uk.ac.sussex.cvaxa%ukacrl.bitnet@cunyvm.cuny.edu
As a last resort (it costs us more...)
UUCP: ...mcvax!ukc!cvaxa!aarons
or aarons@cvaxa.uucp

------------------------------

Date: Mon, 6 Jun 88 10:38:39 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: brain research on free will

Here are excerpts from two articles concerning brain research relating to
the issue of free will:

. . .
Benjamin Libet of the University of California, San Francisco, . . .
has been studying EEG correlates of conscious experience since the
early 1960s. He bases his model on his experimental finding that a
distinct brainwave pattern, the readiness potential (RP), occurs 350
milliseconds . . . before the subjective experience of wanting to
move. . . . There is another interval of 150 milliseconds before actual
movement. During that period, the movement--quick flexion of wrist or
finger--could be vetoed or blocked by the individual.

At the moment they were aware of a conscious decision to act, Libet's
subjects noted the position of a moving target. (The accuracy of the
notation of time was checked, or corrected, by objective measurements
in another setting.)

In one experiment, they were asked to note when they actually moved.
They reported having moved slightly _before_ any actual physiological
evidence of movement. It was as if the "mind's muscle" -- their image
of movement -- preceded actual muscle activation. The brain's motor
commands may be experienced as the movement itself.

The veto or blockade, Libet commented, is in accord with religious and
humanistic views of ethical behavior and individual responsibility.
The choice not to act is "self control."

On the other hand, he said, if the final intention to act arises
unconsciously, the mere appearance of an intention could not
consciously be prevented, even though action could be blocked. Thus
religious or philosophical systems can create insurmountable
difficulties if they blame individuals for simply having a mental
impulse, even if it is not acted out.

Libet: Physiology Dept., UCSF School of Medicine, San Francisco 94143.

This is of course controversial:

In a recent issue of _The Behavioral and Brain Sciences_ (8:529-566),
26 well-known researchers from seven countries commented on the
implications of Libet's work.
. . .
Most . . . praised his care and ingenuity and his courage in trying to
understand the complex interaction between conscious and unconscious
processes.

Several doubted that subjective reports of time could ever be precise
enough to trust. Others suggested that the experiment is a
combination of materialist and mentalist approaches--hard EEG data for
the readiness potential and subjective reports for conscious decision.

John Eccles of the Max Planck Institute (West Germany) accepted the
accuracy of the findings but reinterpreted them in a way that fits his
view of mind and brain as separate.

Conscious intention, Eccles said, may result from our subconscious
sensing of a particular brainwave configuration, the readiness
potential. Intention occurs after we sense this subconscious
readiness.

Subjects may be reporting the peak of an urge, according to James
Ringo of the University of Rochester (NY) Medical Center. The very
beginning of the "urge waveform" might be the readiness potential
evident in the EEG.

Conscious will might be triggered by an "anticipatory image," as
described in 1890 by William James. Eckart Scheerer of the University
of Oldenburg (West Germany) said that Libet's subjects did not report
such images preceding the conscious urge because they were not
instructed to look for them.

The other commentators noted that the will to veto the chosen movement
is itself a conscious intention. What precedes it?

Charles Wood, a Yale psychologist, noted that an executive function
activates a computer's programs. Perhaps the brain's readiness
potential is evidence of an executive function that triggers its
conscious deciding.

I quote both these articles from Brain/Mind Bulletin 11.9:1-2 (May 5, 1986).

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 8 Jun 88 13:43:13 GMT
From: l.cc.purdue.edu!cik@k.cc.purdue.edu (Herman Rubin)
Subject: Re: How to dispose of the free will issue


The following was posted a long time ago to a different newsgroup.
I did not keep the author's name. This says it all.


I do not understand all the fuss. The answer is very simple:

Whether or not we have free will, we should behave as if we do,
because if we don't, it doesn't matter.


--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

------------------------------

End of AIList Digest
********************
