AIList Digest           Saturday, 30 Apr 1988      Volume 6 : Issue 86 

Today's Topics:
Philosophy - Sociology & Free Will and Self-Awareness

----------------------------------------------------------------------

Date: 26 Apr 88 18:05:43 GMT
From: uflorida!novavax!maddoxt@gatech.edu (Thomas Maddox)
Subject: Re: The future of AI

In article <978@crete.cs.glasgow.ac.uk> gilbert@crete.UUCP (Gilbert
Cockton) writes:
>
>Sociologists study the present, not the future.
>I presume the "Megatrends" books
>cited are Toffler-style futurology, and this sort of railway-journey light
>reading has no connection with rigorous sociology/contemporary anthropology.
>
>The only convincing statements about the future which competent sociologists
>generally make are related to the likely effects of social policy. Such
>statements are firmly rooted in a defensible analysis of the present.
>
>This ignorance of the proper practices of historians, anthropologists,
>sociologists etc. reinforces my belief that as long as AI research is
>conducted in philistine technical vacuums, the whole research area
>will just chase one dead end after another.

"Rigorous sociology/contemporary anthropology"? Ha ha ha ha
ha ha ha ha, &c. While much work in AI from its inception has
consisted of handwaving and wishful thinking, the field has produced
and continues to produce ideas that are useful. And some of the most
interesting investigations of topics once dominated by the humanities,
such as theory of mind, are taking place in AI labs. By comparison,
sociologists produce a great deal of nonsense, and indeed the social
"sciences" in toto are afflicted by conceptual confusion at every
level. Ideologues, special interest groups, purveyors of outworn
dogma (Marxists, Freudians, et alia) continue to plague the social
sciences in a way that would be almost unimaginable in the sciences,
even in a field as slippery, ill-defined, and protean as AI.
So talk about "philistine technical vacuums" if you wish, but
remember that by and large people know which emperor has no clothes.
Also, if you want to say "one dead end after another," you might
adduce actual dead ends pursued by AI research and contrast them
with non-dead ends so that the innocent who stumbles across your
remark won't be utterly misled by your unsupported assertions.

------------------------------

Date: 21 Apr 88 03:58:53 GMT
From: SPEECH2.CS.CMU.EDU!yamauchi@pt.cs.cmu.edu (Brian Yamauchi)
Subject: Free Will & Self-Awareness

In article <3200014@uiucdcsm>, channic@uiucdcsm.cs.uiuc.edu writes:
>
> I can't justify the proposition that scientific endeavors grouped
> under the name "AI" SHOULD NOT IGNORE issues of free will, mind-brain,
> other minds, etc. If these issues are ignored, however, I would
> strongly oppose the use of "intelligence" as being descriptive
> of the work. Is it fair to claim work in that direction when
> fundamental issues regarding such a goal are unresolved (if not
> unresolvable)? If this is the name of the field, shouldn't the
> field at least be able to define what it is working towards?
> I personally cannot talk about intelligence without concepts such
> as mind, thoughts, free will, consciousness, etc. If we, as AI
> researchers make no progress whatsoever in clarifying these issues,
> then we should at least be honest with ourselves and society, and find a
> new title for our efforts. Actually the slight modification,
> "Not Really Intelligence" would be more than suitable.
>
>
> Tom Channic

I agree that AI researchers should not ignore the questions of free will,
consciousness, etc, but I think it is rather unreasonable to criticise AI
people for not coming up with definitive answers (in a few decades) to
questions that have stymied philosophers for millennia.

How about the following as a working definition of free will: the
interaction of an individual's values (as developed over the long term)
with his/her/its immediate mental state (emotions, senses, etc.) to
produce some sort of decision.

I don't see any reason why this could not be incorporated into an AI
program. My personal preference would be for a connectionist
implementation, because I believe this would be more likely to produce
human-like behavior (it would be easy to make it unpredictable: just
introduce a small amount of random noise into the connections).
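The working definition above is concrete enough to sketch in a few lines.
The following toy Python is my own illustration, not code from any actual
AI program: long-term value weights and an immediate-state term jointly
score each candidate action, and a small noise term makes the choice
unpredictable at the margin.

```python
import random

def choose(actions, values, state, noise=0.05, rng=random.Random(0)):
    """Pick the action with the highest (noisy) utility.

    actions: {name: {feature: weight}} candidate actions
    values:  long-term value weights per feature
    state:   immediate mental state weights per feature
    noise:   std. dev. of the random perturbation
    """
    def utility(features):
        longterm = sum(values.get(f, 0.0) * w for f, w in features.items())
        immediate = sum(state.get(f, 0.0) * w for f, w in features.items())
        return longterm + immediate + rng.gauss(0.0, noise)
    return max(actions, key=lambda a: utility(actions[a]))

# Hypothetical example: values favor altruism, current state favors comfort.
actions = {
    "help": {"altruism": 1.0, "effort": -0.5},
    "rest": {"comfort": 1.0},
}
values = {"altruism": 0.9, "comfort": 0.3}
state = {"effort": 0.2, "comfort": 0.4}
print(choose(actions, values, state))
```

With the noise set to zero the choice is fully determined by values and
state; with noise on, near-ties can go either way, which is the
"unpredictability" the paragraph above appeals to.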

Another related issue is self-awareness. I would be interested in hearing
about any research into having AI programs represent information about
themselves and their "self-interest". Some special cases of this might
include game-playing programs and autonomous robots / vehicles.
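As a trivial illustration of what "representing information about
themselves" might mean for a robot, here is a hypothetical sketch (class,
task names, and numbers are all invented for this example): the agent
keeps an explicit model of its own battery level, and its "self-interest"
can veto an assigned task.

```python
class SelfModel:
    """A robot's explicit model of its own resources."""

    def __init__(self, battery=1.0, drain_per_task=0.3, reserve=0.2):
        self.battery = battery
        self.drain_per_task = drain_per_task
        self.reserve = reserve  # self-interest: never run the battery dry

    def can_afford(self, n_tasks=1):
        return self.battery - n_tasks * self.drain_per_task >= self.reserve

    def perform(self, task):
        if not self.can_afford():
            return "recharge"   # self-interest overrides the assigned task
        self.battery -= self.drain_per_task
        return task

robot = SelfModel()
print(robot.perform("explore"))   # → "explore"
print(robot.perform("explore"))   # → "explore"
print(robot.perform("explore"))   # → "recharge"
```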

By the way, I would highly recommend the book "Vehicles: Experiments
in Synthetic Psychology" by Valentino Braitenberg to anyone who doesn't
believe that machines could ever behave like living organisms.
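For anyone without the book to hand, the flavour of Braitenberg's
vehicles is easy to convey in code. This is a loose sketch of a
"Vehicle 2"-style update step (my own simplification, not code from the
book): two light sensors drive two motors, and crossed excitatory wiring
steers the vehicle toward the light source.

```python
import math

def sensor(pos, heading, side, light, spread=0.5):
    """Light intensity at a sensor offset to the left (+1) or right (-1)."""
    angle = heading + side * spread
    sx = pos[0] + math.cos(angle)
    sy = pos[1] + math.sin(angle)
    d2 = (light[0] - sx) ** 2 + (light[1] - sy) ** 2
    return 1.0 / (1.0 + d2)          # brighter when closer

def step(pos, heading, light, crossed=True, dt=0.5):
    """One differential-drive step of a two-sensor, two-motor vehicle."""
    left = sensor(pos, heading, +1, light)
    right = sensor(pos, heading, -1, light)
    # Crossed wiring: left sensor excites the right motor, and vice versa,
    # so the vehicle turns toward the light ("aggression" in the book).
    lm, rm = (right, left) if crossed else (left, right)
    speed = (lm + rm) / 2
    heading += (rm - lm) * dt        # faster right motor turns it left
    pos = (pos[0] + speed * math.cos(heading) * dt,
           pos[1] + speed * math.sin(heading) * dt)
    return pos, heading
```

A dozen lines of sensor-motor coupling, yet the resulting trajectories
already look purposeful, which is exactly Braitenberg's point.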

______________________________________________________________________________

Brian Yamauchi INTERNET: yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

------------------------------

Date: 26 Apr 88 11:41:52 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: Free Will & Self-Awareness

In article <1484@pt.cs.cmu.edu> yamauchi@SPEECH2.CS.CMU.EDU (Brian Yamauchi)
writes:
>I agree that AI researchers should not ignore the questions of free will,
>consciousness, etc, but I think it is rather unreasonable to criticise AI
>people for not coming up with definitive answers (in a few decades) to
>questions that have stymied philosophers for millennia.

It is not the lack of answers that is criticised - it is the ignorance
of candidate answers and their problems which leads to the charge of
self-perpetuating incompetence. There are philosophers who would
provide arguments in defence of AI, so the 'free-will' issue is not
one where the materialists, logicians and mechanical/biological
determinists will find themselves isolated without an intellectual tradition.
>
>I don't see any reason why this could not be incorporated into an AI program
So what? This is standard silly AI, and implies that what is true has
anything to do with the quality of your imagination. If people make
personal statements like this, unfortunately the rebuttals can only be
personal too, however much the rebutter would like to avoid this position.
>
>By the way, I would highly recommend the book "Vehicles: Experiments
>in Synthetic Psychology" by Valentino Braitenburg to anyone who doesn't
>believe that machines could ever behave like living organisms.

There are few idealists or romantics who believe that NO part of an organism
can be modelled as a mechanical process. Such a position would require that a
heart-lung machine be at one with the patient's geist, soul or psyche! The
logical fallacy beloved in AI is that if SOME aspects of an organism can be
modelled mechanically, then ALL can. This extension is utterly flawed. It may
be the case, but the case must be proven, and there are substantial arguments
as to why this cannot be the case.

For AI workers (not AI developers/exploiters who are just raiding the
programming abstractions), the main problem they should recognise is
that a rule-based or other mechanical account of cognition and decision
making is at odds with the doctrine of free will which underpins most Western
morality. It is in no way virtuous to ignore such a social force in
the name of Science. Scientists who seek moral, ethical, epistemological
or methodological vacuums are only marginalising themselves into
positions where social forces will rightly constrain their work.

------------------------------

Date: 28 Apr 88 15:42:18 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Free Will & Self-Awareness


Could the philosophical discussion be moved to "talk.philosophy"?
Ken Laws is retiring as the editor of the Internet AILIST, and with him
gone and no replacement on the horizon, the Internet AILIST (which shows
on USENET as "comp.ai.digest") is to be merged with this one, unmoderated.
If the combined list is to keep its present readership, which includes some
of the major names in AI (both Minsky and McCarthy read AILIST), the content
of this one must be improved a bit.

John Nagle

------------------------------

Date: 29 Apr 88 13:04:43 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Free Will & Self-Awareness

One of the problems with the English Language is that most of the
words are already taken.

Rather than argue over whether AI should or should not include
investigations into consciousness, awareness, free will, etc.,
why not just make up a new label for this activity?

I would like to learn how to imbue silicon with consciousness,
awareness, free will, and a value system. Maybe this is not
considered a legitimate branch of AI, and maybe it is a bit
futuristic, but it does need a name that people can live with.

So what can we call it? Artificial Consciousness? Artificial
Awareness? Artificial Value Systems? Artificial Agency?

Suppose I were able to inculcate a Value System into silicon,
and that, in the event of a tie among competing choices, I used a
random mechanism to force a decision. Would the behavior of
my system be very much different from a sentient being with
free will?
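The thought experiment can be stated in a few lines of Python (the
choices, features, and values here are my own illustration): score each
option against a value system, and break exact ties with a random draw.

```python
import random

def decide(choices, values, rng=random.Random(42)):
    """Score each choice by its features' values; break ties randomly."""
    scored = {c: sum(values.get(f, 0) for f in feats)
              for c, feats in choices.items()}
    best = max(scored.values())
    tied = [c for c, s in scored.items() if s == best]
    return rng.choice(tied)          # the forced decision among equals

choices = {
    "tell_truth": ["honesty"],
    "stay_silent": ["kindness"],
    "lie": [],
}
values = {"honesty": 1, "kindness": 1}
print(decide(choices, values))       # either "tell_truth" or "stay_silent"
```

From the outside, a system like this looks like it deliberates and then
"just picks one" when torn, which is the behavior the question asks us
to compare against a sentient being's.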

--Barry Kort

------------------------------

Date: 29 Apr 88 01:26:02 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Free Will & Self-Awareness

In article <1029@crete.cs.glasgow.ac.uk>, gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) writes:
> For AI workers (not AI developers/exploiters who are just raiding the
> programming abstractions), the main problem they should recognise is
> that a rule-based or other mechanical account of cognition and decision
> making is at odds with the doctrine of free will which underpins most Western
> morality.

What about compatibilism? There are a lot of arguments that free will is
compatible with strong determinism. (The ones I've seen are riddled with
logical errors, but most philosophical arguments I've seen are.)
When I see how a decision I have made is consistent with my personality,
so that someone else could have predicted what I'd do, I don't _feel_
that this means my choice wasn't free.

------------------------------

End of AIList Digest
********************
