AIList Digest            Tuesday, 30 Aug 1983      Volume 1 : Issue 50 

Today's Topics:
AI Literature - Bibliography Request,
Intelligence - Definition & Turing Test & Prejudice & Flamer
----------------------------------------------------------------------

Date: 29 Aug 1983 11:05:14-PDT
From: Susan L Alderson <mccarty@Nosc>
Reply-to: mccarty@Nosc
Subject: Help!


We are trying to locate any and all bibliographies, in electronic
form, of AI and Robotics. I know that this covers a broad spectrum,
but we would rather have too many things to choose from than none at
all. Any help or leads on this would be greatly appreciated.

We are particularly interested in:

AI Techniques
Vision Analysis
AI Languages
Robotics
AI Applications
Speech Analysis
AI Environments
AI Systems Support
Cybernetics

This is not a complete list of our interests, but a good portion of
the high spots!

susie (mccarty@nosc-cc)


[Several partial bibliographies have been published in AIList; more
would be most welcome. Readers able to provide pointers should reply
to AIList as well as to Susan.

Many dissertation and report abstracts have been published in the
SIGART newsletter; online copies may exist. Individual universities
and corporations also maintain lists of their own publications; CMU,
MIT, Stanford, and SRI are among the major sources in this country.
(Try Navarro@SRI-AI for general AI and CPowers@SRI-AI for robotics
reports.)

One of the fastest ways to compile a bibliography is to copy authors'
references from the IJCAI and AAAI conference proceedings. The AI
Journal and other AI publications are also good. Beware of straying
too far from your main topics, however. Rosenfeld's vision and image
processing bibliographies in CVGIP (Computer Vision, Graphics, and
Image Processing) list over 700 articles each year.

-- KIL]

------------------------------

Date: 25 Aug 1983 1448-PDT
From: Jay <JAY@USC-ECLC>
Subject: intelligence is...

An intelligence must have at least three abilities: to act; to
perceive, and classify (as one of: better, the same, worse), the
results of its actions, or the environment after the action; and
lastly to change its future actions in light of what it has perceived,
in an attempt to maximize "goodness" and avoid "badness". My views are
very obviously flavored by behaviorism.

In answer to objections I hear coming... To act is necessary for
intelligence, since it is pointless to call a rock intelligent when
there seems to be no way to detect its intelligence. To perceive is
necessary for intelligence, since otherwise projectiles, simple
chemicals, and other things that act by following a set of rules would
be classified as intelligent. To change future actions is the most
important: a toaster could perceive that it was overheating, oxidizing
its heating elements, and thus dying, but would be unable to stop
toasting until it suffered a breakdown.

In summary, (NOT (AND actp perceivep evolvep)) -> (NOT intelligent);
that is, Action, Perception, and Evolution based upon perception are
necessary for intelligence. I *believe* that these conditions are
also sufficient for intelligence.
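[Jay's three conditions amount to a simple adaptive-agent loop, and can
be sketched in modern code. The toy environment, action set, and
weight-update rule below are illustrative choices, not part of Jay's
proposal:

```python
import random

# A minimal sketch of the three conditions above: an agent that (1) acts,
# (2) perceives and classifies each outcome as better, the same, or worse
# than the previous one, and (3) changes its future actions accordingly.
# The environment, actions, and update rule are illustrative only.

def run_agent(environment, actions, steps=100, seed=0):
    """Repeatedly act, classify each outcome against the previous one,
    and shift preference toward actions whose outcomes seemed better."""
    rng = random.Random(seed)
    weights = {a: 1.0 for a in actions}   # start indifferent to all actions
    last_score = None
    for _ in range(steps):
        # (1) Act: choose an action, weighted by past success.
        action = rng.choices(actions, weights=[weights[a] for a in actions])[0]
        score = environment(action)
        # (2) Perceive: better, the same, or worse than last time?
        if last_score is not None:
            if score > last_score:
                weights[action] *= 1.5    # (3) Evolve: reinforce this action
            elif score < last_score:
                weights[action] *= 0.5    # (3) Evolve: suppress this action
        last_score = score
    return weights

# Toy environment: action "b" always yields a good result, "a" a bad one.
final = run_agent(lambda a: 1.0 if a == "b" else 0.0, ["a", "b"])
```

The toaster, by contrast, satisfies only the first two conditions: it
can sense that it is overheating, but nothing in its construction lets
that perception alter what it does next. -- Ed.]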

awaiting flames,

j'

PS. Yes, the earth's bio-system IS intelligent.

------------------------------

Date: 25 Aug 83 2:00:58-PDT (Thu)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: pyuxll.403

The characterization of prejudice as an unwillingness/inability
to adapt to new (contradictory) data is an appealing one.
Perhaps this belongs in net.philosophy, but it seems to me that a
requirement for becoming a fully functional intelligence (human
or otherwise) is to abandon the search for compact, comfortable
"truths" and view knowledge as an approximation and learning as
the process of improving those approximations.

There is nothing wrong with compact generalizations: they reduce
"overhead" in routine situations to manageable levels. It is when
they are applied exclusively and/or inflexibly that
generalizations yield bigotry and the more amusing conversations
with Eliza et al.

As for the Turing test, I think it may be appropriate to think of
it as a "razor" rather than as a serious proposal. When Turing
proposed the test there was a philosophical argument raging over
the definition of intelligence, much of which was outright
mysticism. The famous test cuts the fog nicely: a device needn't
have consciousness, a soul, emotions -- pick your own list of
nebulous terms -- in order to function "intelligently." Forget
whether it's "the real thing," it's performance that counts.

I think Turing recognized that, no matter how successful AI work
was, there would always be those (bigots?) who would rip the back
off the machine and say, "You see? Just mechanism, no soul,
no emotions..." To them, the Turing test replies, "Who cares?"

=Ned=

------------------------------

Date: 25 Aug 83 13:47:38-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!uw-june!emma @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: uw-june.549

I don't think I can accept some of the comments being bandied about
regarding prejudice. Prejudice, as I understand the term, refers to
prejudging a person on the basis of class, rather than judging that
person as an individual. Class here is used in a wider sense than
economic. Examples would be "colored folk got rhythm" or "all them
white saxophonists sound the same to me"-- this latter being a quote
from Miles Davis, by the way. It is immediately apparent that
prejudice is a natural result of making generalizations and
extrapolating from experience. This is a natural, and I would suspect
inevitable, result of a knowledge acquisition process which
generalizes.

Bigotry, meanwhile, refers to inflexible prejudice. Miles has used a
lot of white saxophonists, since he recognizes that they don't all
sound the same. Were he bigoted, rather than prejudiced, he would
refuse to acknowledge that. The problem lies in determining at what
point an
apparent counterexample should modify a conception. Do we decide that
gravity doesn't work for airplanes, or that gravity always works but
something else is going on? Do we decide that a particular white sax
man is good, or that he's got a John Coltrane tape in his pocket?

In general, I would say that some people out there are getting awfully
self-righteous regarding a phenomenon that ought to be studied as a
result of our knowledge acquisition process rather than used to
classify people as sub-human.

-Joe P.

------------------------------

Date: 25 Aug 83 11:53:10-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!utcsstat!laura@Ucb-Vax
Subject: AI and Human Intelligence [& Editorial Comment]

Goodness, I stopped reading net.ai a while ago, but had an ai problem
to submit and decided to read this in case the question had already
been asked and answered. News here only lasts for 2 weeks, but things
have changed...

At any rate, you are all discussing here what I have been discussing
by mail with AI folk (none of whom mentioned that this was going on
here, the cretins! ;-) ): bigotry.

I have a problem in furthering my discussion. When I mentioned it I
got the same response from 2 of my 3 AI folk, and am waiting for the
same one from the third. I gather it is a fundamental AI sort of
problem.

I maintain that 'a problem' and 'a description of a problem' are not
the same thing. Thus 'discrimination' is a problem, but the word
'nigger' is not; 'nigger' is a word which describes the problem of
discrimination. One may decide not to use the word, but abolishing the
word only gets rid of one description of the problem, not the problem
itself.

If there were no words to express discrimination, and discrimination
existed, then words would be created (or existing words would be
perverted) to express discrimination. Thus language can be counted
upon to reflect the attitudes of society, but changing the language is
not an effective way to change society.


This position is not going over very well. I gather that there is some
section of the AI community which believes that language (the
description of a problem) *is* the problem. I am thus reduced to
saying, "oh no it isn't, you silly person," but am left holding the
bag when they start quoting from texts. I can bring out anthropology
and linguistics, and they can get out some epistemology and Knowledge
Representation, but the discussion isn't going anywhere...

can anybody out there help?

laura creighton
utzoo!utcsstat!laura


[I have yet to be convinced that morality, ethics, and related aspects
of linguistics are of general interest to AIList readers. While I
have (and desire) no control over the net.ai discussion, I am
responsible for what gets passed on to the Arpanet. Since I would
like to screen out topics unrelated to AI or computer science, I may
choose not to pass on some of the net.ai submissions related to
bigotry. Contact me at AIList-Request@SRI-AI if you wish to discuss
this policy. -- KIL]

------------------------------

Date: 25 Aug 1983 1625-PDT
From: Jay <JAY@USC-ECLC>
Subject: [flamer@ida-no: Re: Turing Test; Parry, Eliza, and Flamer]

Is this a human response??

j'
---------------

Return-path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
Received: from UDEL-RELAY by USC-ECLC; Thu 25 Aug 83 16:20:32-PDT
Date: 25 Aug 83 18:31:38 EDT (Thu)
From: flamer@ida-no
Return-Path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
Subject: Re: Turing Test; Parry, Eliza, and Flamer
To: jay@USC-ECLC
In-Reply-To: Message of Tue, 16-Aug-83 17:37:00 EDT from
JAY%USC-ECLC@sri-unix.UUCP <4325@sri-arpa.UUCP>
Via: UMCP-CS; 25 Aug 83 18:55-EDT

From: JAY%USC-ECLC@sri-unix.UUCP

. . . Flamer would read messages from the net and then
reply to the sender/bboard denying all the person said,
insulting him, and in general making unsupported statements.
. . .

Boy! Now that's the dumbest idea I've heard in a long time. Only an
idiot such as yourself, who must be totally out of touch with reality,
could come up with that. Besides, what would it prove? It's not much
of an accomplishment to have a program which is stupider than a human.
The point of the Turing test is to demonstrate a program that is as
intelligent as a human. If you can't come up with anything better,
stay off the net!

------------------------------

End of AIList Digest
********************
