AIList Digest             Monday, 6 Jun 1988       Volume 7 : Issue 19 

Today's Topics:

  The Future of the Free Will Discussion

  Philosophy:
    a good jazz ensemble
    Who else isn't a science?
    Bad AI: A Clarification

  randomness

----------------------------------------------------------------------

Date: Sat 4 Jun 88 21:44:01-PDT
From: Raymond E. Levitt <LEVITT@Score.Stanford.EDU>
Subject: Free Will

Raymond E. Levitt
Associate Professor
Center for Integrated Facility Engineering
Departments of Civil Engineering and Computer Science
Stanford University
==============================================================

Several colleagues and I would like to request that the free will debate -
which seems endless - be set up on a different list with one of the more
active contributors as a coordinator.

The value of the AILIST as a source of current AI research issues, conferences,
software queries and evaluations, etc., is diminished for us by having to
plough through the philosophical dialectic in issue after issue of the AILIST.

Perhaps you could run this message and take a poll of LIST readers to help
decide this in a democratic way.

Thanks for taking on the task of coordinating the AILIST. It is a great
service to the community.

Ray Levitt
-------


[Editor's Note:

Thank you, Mr. Levitt, and many thanks to all those who have
written expressing interest or comments regarding AIList. I regret that
I have not had time to respond to many of you individually, as I have
lately been more concerned with the simple mechanics of generating
digests and dealing with the average of sixty bounce messages per day
than with the more substantive issues of moderation.

However, a new COMSAT mail-delivery program is now orbiting, and
we may perhaps be able to move away from the days of lost messages,
week-long delays, and 50K digests ... My heartfelt apologies to all.

Being rather new at this job, I have hesitated to express my
opinion with respect to the free-will debate, preferring to retain the
status quo and hoping that the problem would fix itself. But since Mr.
Levitt is only the latest of several people who have complained about
this particular issue, I feel I must take some action.

Clearly this discussion is interesting and valuable to many of
the participants, but equally clearly it is less so for many others. I
have tried as far as possible to group the free-will discussions in
digests apart from other matters, so people uninterested in the topic
could simply 'delete' the offending digests unread. (There are many
readers who only have access to the undigested stream and cannot do
this.)

Several people have suggested moving the discussion to a USENET
list called 'talk.philosophy'. The difficulty here is that AIList
crosses USENET, INTERNET and BITNET, and not all readers would be able
to contribute. In V7#6, John McCarthy <JMC@SAIL.Stanford.EDU> said:

> I am not sure that the discussion should progress further, but if
> it does, I have a suggestion. Some neutral referee, e.g. the moderator,
> should nominate principal discussants. Each principal discussant should
> nominate issues and references. The referee should prune the list
> of issues and references to a size that the discussants are willing
> to deal with. They can accuse each other of ignorance if they
> don't take into account the references, however perfunctorily.
> Each discussant writes a general statement and a point-by-point
> discussion of the issues at a length limited by the referee in
> advance. Maybe the total length should be 20,000 words,
> although 60,000 would make a book. After that's done we have another
> free-for-all. I suggest four as the number of principal discussants
> and volunteer to be one, but I believe that up to eight could
> be accommodated without making the whole thing too unwieldy.
> The principal discussants might like help from their allies.
>
> The proposed topic is "AI and free will".

I would be more than willing to coordinate this effort, but I
have, as yet, received no responses expressing an opinion one way or the
other. I invite the readers of AIList who have found the free-will
discussion interesting (as opposed to those who have not) to send me net
mail at AILIST-REQUEST@AI.AI.MIT.EDU concerning the future of this
discussion. Please send me a separate message, and do not intersperse
your comments with other contributions, whether to the free-will debate
or other matters.

In the meantime, I will continue to send out digests covering
the free-will topic, although separate from other material.

- nick ]

------------------------------

Date: Sat, 4 Jun 88 14:21:06 EDT
From: George McKee <mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: Artificial Free Will -- what's it good for?

Obviously the question of whether or not humans have free will
matters to a lot of people, and thinking about how it could be
implemented in a computer program is an effective way to clarify
exactly what we're talking about. I think McDermott's
contributions show this -- they're
getting pretty close to pseudocode that you could think about
translating into executable programs. (But just to put in my
historical two cents, I first saw this kind of analysis in a
proceedings of the Pontifical Academy of Sciences article by
D. M. MacKay in about 1968.) If free will is programmable,
it's appropriate to then ask "why bother?", and "how will we
recognize success?", i.e. to make explicit the scientific
motivation for such a project, and the methodology used to
evaluate it.

I can see two potential reasons to work on building
free will into a computer system: (1) formalizing free will into
a program will finally show us the structure of an aspect of the
human mind that's been confusing to philosophers and psychologists
for thousands of years. (2) free-will-competent computer systems
will have some valuable abilities missing from systems without
free will.

Reason 1 is unquestionably important to the cognitive
sciences, and insofar as AI programs are an essential tool to
cognitive scientists, *writing* a program that includes free will
as part of its structure might be a worthwhile project. But
*executing* a program embodying free will won't necessarily show
us anything that we didn't know already. Free will, in its sense
as a consequence of the incompleteness of an individual's self-model,
has an essentially personal character that doesn't get out into
behavior except as verbal behavior in arguments about whether it
exists at all. For instance, I haven't noticed in this discussion
any mention of how you recognize free will in anyone other than
yourself. If you can't tell whether I have free will or not, how
will you recognize if my program has it without looking at the code?
And if you always need to look at the code, what's the point in
actually running the program, except for other, irrelevant reasons?
(This same argument applies to consciousness, and explains why I,
and maybe others out there as well, after sketching out some
pseudocode that would have some conscious notion of its own
structure, decided to leave implementation to the people who
work on the formal semantics of reflective languages like
"3lisp" or "brown". (See the proceedings of the Lisp and FP
conferences, but be careful to avoid thinking about multiprocessing
while reading them.))

Which brings us to Reason 2, and free will from the
perspective of pure, pragmatic AI. As far as I can tell, the only
way free will can affect behavior is by making it unpredictable.
But since there are many other, easier ways to get unpredictability
without having to invoke the demoniacal (or is it oracular?)
Free Will, I'm back to "why bother?" again. Unpredictability in
behavior is certainly valuable to an autonomous organism in a
dangerous environment, both as an individual (e.g. a rabbit trying
to outrun a hungry fox) and as a group (e.g. a plant species trying
to find a less-crowded ecological niche), but in spite of my use of
the word "trying" this doesn't need to involve any will, free or
otherwise. In highly sophisticated systems like human societies,
where statements of ability (like diplomas :-) are often effectively
equivalent to demonstrations of ability, claiming "I have Free Will,
you'll fail if you try to predict/control my behavior!" might well be
quite effective in fending off a coercive challenge. But computer
systems aren't in this kind of social situation (at least the ones I
work with aren't). In fact they are designed to be as predictable
as possible, and when they aren't, it indicates a failure either of
understanding or of design. So again, I don't see the need for
Artificial Free Will, fake or real.
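
(To make those "easier ways" concrete: here is a minimal C sketch --
purely illustrative, nothing anyone in this discussion actually
proposed -- of unpredictable behavior obtained with no will at all,
just a pseudo-random generator seeded from the wall clock.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const char *action[] = { "flee", "freeze", "feint" };

        srand((unsigned) time(NULL));    /* seed from the environment */
        printf("the rabbit decides to %s\n", action[rand() % 3]);
        return 0;
    }

An observer who cannot see the seed cannot predict the choice, yet
nobody would credit this program with will, free or otherwise.)
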
My background is largely psychology, so I think that it's
valuable to understand how it is that people feel that their behavior
is fundamentally unconstrained by external forces, especially social
ones. But I also don't think that this illusion has any primary
adaptive value, and I don't think there's anything to be gained by
giving it to a computer. If this is true, then the proper place
for this discussion is some cognitive-science list, which I'd be
happy to read if I knew where to send my subscription request.
- George McKee
NU Computer Science

------------------------------

Date: Fri 03 Jun 1988 15:04 CDT
From: <UUCJEFF%ECNCDC.BITNET@MITVMA.MIT.EDU>
Subject: Re: ...a steeplejack and his mate, a good jazz ensemble, ...)


>No. Fire and ambulance personnel have regulations, basketball has rules
>and teams discuss strategy and tactics during practice, and even jazz
>musicians use sheet music sometimes. I don't mean to say that implicit
>communication doesn't exist, just that it's not as useful. I don't know
>how to build steeples, but I'll bet it can be written down.

This person obviously doesn't know much about music performance.
Of course jazzers use sheet music, but have you ever seen a page out of
the Real Book? If you have, you know you never follow it literally. Even
in the more classically oriented stuff, the bandwidth of the information
on the sheet nowhere near approaches the gestural dimensions required to
interpret the piece correctly, musically or otherwise. For one, there is
tradition, which is passed on through schools, performance ensembles, and
now recording media. If you are a trumpet player in an orchestra and you
see a dot over a note, that dot means different things depending on
composer, period, genre, tempo, etc., etc.

With jazz there are even more intangibles, like whether you play on top
of the beat or lay back in the pocket. Nowhere is there a written method
which can guarantee that you are going to get the right feel -- man, you
just gotta feel it, baby *snap*.

You still may want to call jazz a language -- of course it is, it has
meaning -- but it is not something that can be put down in a
machine-readable format.

Jeff Beer, UUCJEFF@ECNCDC.BITNET
==================================
Language is a virus --- Laurie Anderson

------------------------------

Date: 3 Jun 88 20:22:32 GMT
From: maui!bjpt@locus.ucla.edu (Benjamin Thompson)
Subject: Re: Who else isn't a science?

In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>Gerald Edelman, for example, has compared AI with Aristotelian
>dentistry: lots of theorizing, but no attempt to actually compare
>models with the real world. AI grabs onto the neural net paradigm,
>say, and then never bothers to check if what is done with neural
>nets has anything to do with actual brains.

This is symptomatic of a common fallacy. Why should the way our brains
work be the only way "brains" can work? Why shouldn't *A*I workers look
at weird and wonderful models? We (basically) don't know anything about
how the brain really works anyway, so who can really tell if what they're
doing corresponds to (some part of) the brain?

Ben

------------------------------

Date: Sat, 4 Jun 88 00:14 EST
From: EBARNES%HAMPVMS.BITNET@MITVMA.MIT.EDU
Subject: Re: Sorry, no philosophy allowed here


Editors:

>If you can't write it down, you can't program it.

This comes down to two questions: can we build machines with original
thought capabilities, and what is meant by `program'? I think that it
is possible to build machines which will think originally. The question
then becomes: "Is what we do to set these `free thinking' machines up
considered programming?" It would not be a strict set of instructions,
but we would surely instill the rules of deductive reasoning in the
machine. Whether or not this is "programming" is an uninteresting
question. Call it what you will: one way makes the original statement
true and the other way makes it false.
Eric Barnes

------------------------------

Date: 4 Jun 88 15:41:26 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu
(Stephen Smoliar)
Subject: Re: Bad AI: A Clarification

In article <1299@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
> Research requires skill. Research into humanity requires special
>skills. Computer scientists and mathematicians are not taught these skills.
>
There is no questioning the premise of the first sentence. I am even willing
to grant, further, that artificial intelligence (or at least the aspects
of particular interest to me) may be regarded as "research into
humanity." However, after that, Cockton's argument begins to fall apart.
Just what are those "special skills" which such research "requires?" Does
anyone have them? Does Cockton regard familiarity with the humanistic
literature as such a skill? I suspect there could be some debate as to
whether or not extensive literary background is a skill, particularly when
the main virtue of such knowledge is that it provides one with a history
of how one's predecessors have failed on similar tasks. There is no doubt
that it is valuable to know that certain paths lead to dead ends; but when
there are so many forks in the road, it is not always easy to determine WHICH
fork was the one which ultimately embodied the incorrect decision.

Perhaps I am misrepresenting Cockton by throwing too much weight on "being
well read." In that case, he can set the record straight by doing a better
job of characterizing those skills which he feels computer scientists and
mathematicians lack. Then he can tell us how many humanists have those
skills and have exercised them in the investigation of intelligence with
a discipline which he seems to think the AI community lacks. Let he who
is without guilt cast the first stone, Mr. Cockton! (While we're at it,
is your house made of glass, by any chance?)

One final note on bad AI. I don't think there is anyone reading this
newsgroup who would doubt that there is bad AI. However, in another
article, Cockton seems quite willing to admit (as most of us knew already)
that there is bad sociology, too. One of the more perceptive writers on
social behavior, Theodore Sturgeon (who had the good sense to articulate
his views in the palatable form of science fiction), once observed that
90% of X is crud, for any value of X . . . that can be AI, sociology, or
classical music. Bad AI is easy enough to find and even easier to pick
on. Rather than biting the finger of the bad stuff, why not take the
time to look where the finger of the good stuff is really pointing?

------------------------------

Date: 4 Jun 88 16:09:56 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu
(Stephen Smoliar)
Subject: Re: AI and Sociology

In article <1301@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
> AI can be
>multidisciplinary, but it is, for me, unique in its insistence on a single
>paradigm which MUST distort the researcher's view of humanity, as well as the
>research consumer's view on a bad day. Indefensible.
>
. . . and patently untrue! Perhaps Mr. Cockton has suffered from an attempt to
study AI in such a dogmatic environment. His little anecdote about the
advisor who put him off AI is quite telling. I probably would have been
put off by such an attitude, too. Fortunately, I could afford the luxury
of changing advisors without changing my personal interest in questions I
wanted to pursue.

First of all, it is most unclear that there is any single paradigm for the
pursuit of artificial intelligence. Secondly, it is at least somewhat unclear
that any paradigm which certainly will INFLUENCE one's view of humanity also
necessarily DISTORTS it. To assume that the two thousand years of philosophy
which have preceded us have provided an undistorted view of humanity is
arrogance in its most ignorant form. Finally, having settled that there
is more than one paradigm, we can hardly accuse the AI community of INSISTING
on any paradigm.
>
>Again, I challenge AI's rejection of social criticisms of its paradigm.
>We become what we are through socialisation, not programming (although
>some teaching IS close to programming, especially in mathematics). Thus
>a machine can never become what we are, because it cannot experience
>socialisation in the same way as a human being. Thus a machine can never
>reason like us, as it can never absorb its model of reality in a proper
>social context. Again, there are well documented examples of the effect
>of social neglect on children. Machines will not suffer in the same way,
>as they only benefit from programming, and not all forms of human company.

Actually, if there is any agreement at all in the AI community it is in the
conviction to be sceptical of all authoritative usage of the word "never."
I, personally, do not feel that any social criticisms are being rejected
wholesale. However, AI is a very difficult area to pursue (at least if
you are really interested in a research pursuit, as opposed to marketing
a new shell for building expert systems). One of the most important keys
to getting any sort of viable result at all is understanding how to break
off a piece of the whole, big, intimidating problem whose investigation is
likely to provide some insight. This generally leads to the construction
of a model, usually in the form of a software artifact. The next key is
to investigate that model to see what it has REALLY told us. A good example
of such an investigation is the one by Lenat and Brown on why AM and EURISKO
APPEAR (their words) to work.

There are valid questions about socialization which can probably be formulated
in terms of communities of automata. However, we need to form a better vision
of what we can expect by way of the behavior of individual automata before we
can express those questions in any useful way. There is no doubt that this
will take some time. However, there is at least a glimmer of hope that when
we get around to expressing them, we will have a better idea of what we are
talking about than those who have chosen to reject the abstraction of
automata out of hand.

------------------------------

Date: 4 Jun 88 16:21:47 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu
(Stephen Smoliar)
Subject: Re: Aah, but not in the fire brigade, jazz ensembles, rowing eights,...

In article <239@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>In article <1171@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk (Gilbert
>Cockton) writes:
>> In article <5499@venera.isi.edu> smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
>>writes:
>> >The problem comes in deciding
>> >WHAT needs to be explicitly articulated and what can be left in the
>> >"implicit
>> >background."
>> ...
>> For people who haven't spent all their life in academia or
>> intellectual work, there will be countless examples of carrying out
>> work in near 100% implicit background (watch fire and ambulance
>> personnel who've worked together as a team for ages, watch a basketball
>> team, a steeplejack and his mate, a good jazz ensemble, ...)
>
>No. Fire and ambulance personnel have regulations, basketball has rules
>and teams discuss strategy and tactics during practice, and even jazz
>musicians use sheet music sometimes. I don't mean to say that implicit
>communication doesn't exist, just that it's not as useful. I don't know
>how to build steeples, but I'll bet it can be written down.
>
Take a look at Herb Simon's article in ARTIFICIAL INTELLIGENCE about
"ill-structured problems" and then decide whether or not you want to
make that bet.

------------------------------

Date: 5 Jun 88 17:29:29 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Bad AI: A Clarification


On this subject, one should read Drew McDermott's "Artificial Intelligence
meets Natural Stupidity" (ACM SIGART newsletter, #57, April 1976.) His
comments are all too apt today.

John Nagle

------------------------------

Date: 5 Jun 88 18:07:42 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Ill-structured problems

In article <5644@venera.isi.edu> Stephen Smoliar writes:
>Take a look at Herb Simon's article in ARTIFICIAL INTELLIGENCE about
>"ill-structured problems" and then decide whether or not you want to
>make that bet.

A reference to the above would be helpful.

Little progress has been made on ill-structured problems in AI.
This reflects a decision in the AI community made in the early 1970s to
defer work on those hard problems and go for what appeared to be an
easier path, the path of logic/language/formal representation sometimes
referred to as "mainstream AI". In the early 1970s, both Minsky and
McCarthy were working on robots; McCarthy proposed to build a robot
capable of assembling a Heathkit color TV kit. This was a discrete
component TV, requiring extensive soldering and hand-wiring to build,
not just some board insertion. The TV kit was actually
purchased, but the robot assembly project went nowhere. Eventually,
somebody at the SAIL lab assembled the TV kit, which lived in the SAIL
lounge for many years, providing diversion for a whole generation of
hackers.

Embarrassments like this tended to discourage AI workers from
attempting projects where failure was so painfully obvious. With
more abstract problems, one can usually claim (one might uncharitably
say "fake") success by presenting one's completed system only with
carefully chosen problems that it can deal with. But in dealing
with the physical world, one regularly runs into ill-structured
problems that can't be bypassed. This can be hazardous to your career.
If you fail, your thesis committee will know. Your sponsor will know.
Your peers will know. Worst of all, you will know.

So most AI researchers abandoned the problems of vision, navigation,
decision-making in ill-structured physical environments, and a number
of other problems which must be solved before there is any hope of dealing
effectively with the physical world. Efforts were focused on logic,
language, abstraction, and "understanding". Much progress was made; we
now have a whole industry devoted to the production of systems with
a superficial but useful knowledge of a wide assortment of problems.

Still, in the last few years, the state of the art in that area
seems to have reached a plateau. That set of ideas may have been
mined out. Certainly the public claims made a few years ago have not been
fulfilled. (I will refrain from naming names; that's not my point today.)
The phrase "AI winter" is heard in some quarters.

------------------------------

Date: Sat, 4 Jun 88 17:48:57 EDT
From: aboulang@WILMA.BBN.COM
Reply-to: aboulanger@bbn.com
Subject: randomness


In AIList Digest V7 #4, Barry Kort writes:

>If I wanted to give my von Neumann machine a *true* random number
>generator, I would connect it to an A/D converter driven by thermal
>noise (i.e. a toasty resistor).

and Andy Ylikoski adds:

>I recall that a Zener diode is a good source of noise (but cannot
>remember the spectrum it gives).
>
>It could be a good idea to utilize a Zener / A-D converter random
>number generator in Monte Carlo simulations.


Ahem, all this stuff about analog sources being better random sources
is a bit of a "scientific" urban myth. It is instructive to go back to
the papers of the early 60's and see what it took to utilize analog
random sources. The basic problem in analog sources is correlation. To
wit:

"A Hybrid Analog-Digital Pseudo-Random Noise Generator", R.L.T.
Hampton, AFIPS Conference Proceedings, Vol 25, 1964 Spring Joint
Computer Conference. 287-301.

To quote a little:

"By precision clamping, the RMS level of binary noise can be closely
controlled, but the non-stationarity of the circuits used to obtain
electrical noise, even form stationary mechanism such an a
radio-active source, still create problems and expense. For example,
the 80 Kc random-telegraph wave generator .... required a fairly
sophisticated and not completely satisfactory count-rate control loop.

In the design of University of Arizona's new ASTRAC II iterative
differential analyzer ... it was decided to abandon analog noise
generation completely. Instead, the machine will employ a digital
shift-register sequence generator ..."
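
For readers who have not met shift-register sequence generators, here
is a minimal sketch -- my own illustration, not code from the Hampton
paper or ASTRAC II -- of a 16-bit Fibonacci linear feedback shift
register using the maximal-length feedback polynomial
x^16 + x^14 + x^13 + x^11 + 1, which walks through all 65535 nonzero
states before repeating. Being deterministic, its output is exactly
reproducible from the seed -- one reason the analog generators lost out:

    #include <stdio.h>

    int main(void)
    {
        unsigned short lfsr = 0xACE1u;   /* any nonzero 16-bit seed */
        unsigned bit;
        int i;

        for (i = 0; i < 32; i++) {
            /* taps at bits 0, 2, 3, 5 <=> x^16 + x^14 + x^13 + x^11 + 1 */
            bit  = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
            lfsr = (unsigned short) ((lfsr >> 1) | (bit << 15));
            putchar('0' + (int) bit);    /* one pseudo-random bit per step */
        }
        putchar('\n');
        return 0;
    }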

If you would like to investigate recent high-quality theoretical work
on this matter, see the paper:

"Generating Quasi-random Sequences from Semi-random Sources", Miklos
Santha & Umesh V. Vazirani, Journal of Computer and System Sciences,
Vol 33, No 1, August 1986, 75-87.

They propose a clever method to eliminate the correlations in analog
sources.
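
Their construction is beyond the scope of this note, but the flavor of
such post-processing can be seen in von Neumann's much older debiasing
trick -- sketched below as my own illustration, NOT the Santha-Vazirani
method (their extractor also handles correlated, adversarial sources,
which this trick does not). Here raw_bit() is a hypothetical stand-in
for sampling a physical source, assumed to yield independent but
possibly biased bits:

    /* Von Neumann debiasing: read raw bits in pairs; 01 emits 0,
       10 emits 1, and 00 or 11 is discarded.  Since P(01) = P(10)
       for independent identically-biased bits, the output is
       unbiased whatever the bias of the source. */
    int unbiased_bit(int (*raw_bit)(void))
    {
        for (;;) {
            int a = raw_bit();
            int b = raw_bit();
            if (a != b)
                return a;    /* a=0,b=1 -> 0; a=1,b=0 -> 1 */
            /* equal pair: throw it away and sample again */
        }
    }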

Help stamp out scientific urban myths!


Albert Boulanger
aboulanger@bbn.com
BBN Labs

------------------------------

End of AIList Digest
********************
