AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 42 

Today's Topics:
Fifth Generation - National Security,
Artificial Intelligence - Prejudice & Turing Test
----------------------------------------------------------------------

Date: Tue, 16 Aug 83 13:32:17 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: AI & Morality

The human manner has led to all sorts of abuses. Indeed your latest
series of messages (e.g. Spaf) has offended me. Maybe he meant
humane? In any event there is no need to be vulgar to make a point.
Any point.

There are some of us who work for the US government who are very
aware of the threats of exporting high technology and deeply concerned
about the free exchange of data and information and the benefits of
such exchange. It is only in recent years and maybe because of the
Japanese that academia has taken a greater interest in areas which
they were unwilling to look at before (current economics also makes
for strange bedfellows). Industry has always had an interest (if for
nothing more than to show us a better? wheel for bigger! bucks). We
are in a good position to maintain the military-industrial-university
complex (not sorry if this offends anyone) and get some good work
done. Recent government policy may restrict high technology flow so
that you might not even get on that airplane soon.

[...]

Mort

------------------------------

Date: Tue, 16 Aug 83 17:15:24 EDT
From: Joe Buck <buck@NRL-CSS>
Subject: frame theory of prejudice


We've heard on this list that we should consider flamers and bigots
less than human. But doesn't Minsky's frame theory suggest that
prejudice is simply a natural by-product of the way our minds work?
When we enter a new situation, we access a "script" containing default
assumptions about the situation. If the default assumptions are
"sticky" (don't change to agree with newly obtained information), the
result is prejudice.

When I say "doctor", a picture appears in your mind, often quite
detailed, containing default assumptions about sex, age, physical
appearance, etc. In some people, these assumptions are more firmly
held than in others. Might some AI programs designed along these
lines show phenomena resembling human prejudice?
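Buck's "sticky defaults" idea can be sketched directly. The following is a minimal illustration, not Minsky's actual frame formalism: the class name, the `stickiness` parameter, and the update rule are all invented here to make the mechanism concrete.

```python
# Sketch of a frame with "sticky" default slots: defaults that resist
# being overwritten by new evidence. All names and the numeric update
# rule are illustrative assumptions, not Minsky's formalism.

class Frame:
    def __init__(self, defaults, stickiness=0.0):
        # stickiness in [0, 1]: how strong contradicting evidence must
        # be before a default is revised. 0 = open-minded, 1 = rigid.
        self.slots = dict(defaults)
        self.stickiness = stickiness

    def observe(self, slot, value, evidence_strength=1.0):
        # Revise a slot only if the evidence outweighs the stickiness.
        if evidence_strength > self.stickiness:
            self.slots[slot] = value

doctor = Frame({"sex": "male", "age": "middle-aged"}, stickiness=0.8)
doctor.observe("sex", "female", evidence_strength=0.5)  # ignored: too sticky
doctor.observe("sex", "female", evidence_strength=0.9)  # strong evidence wins
print(doctor.slots["sex"])  # -> female
```

On this toy model, "prejudice" is just a high stickiness value: the same access-a-script machinery, differing only in how readily defaults yield to observation.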

Joe Buck
buck@nrl-css

------------------------------

Date: 16 Aug 1983 1437-PDT
From: Jay <JAY@USC-ECLC>
Subject: Turing Test; Parry, Eliza, and Flamer

Parry and Eliza are fairly famous early AI projects. One acts
paranoid, another acts like an interested analyst. How about reviving
the project and challenging the Turing test? Flamer is born.

Flamer would read messages from the net and then reply to the
sender/bboard denying all the person said, insulting him, and in
general making unsupported statements. I suggest some researchers out
there make such a program and put it on the net. The goal would be
for the readers of the net to try to detect the Flamer, and for Flamer to
escape detection. If the Flamer is not discovered, then it could be
considered to have passed the Turing test.

Flamer has the advantage of being able to take a few days in
formulating a reply; it could consult many related online sources, it
could request information concerning the subject from experts (human,
or otherwise), it could perform statistical analysis of other flames
to make appropriate word choices, it could make common errors
(gramical, syntactical, or styleistical), and it could perform other
complex computations.
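At its crudest, Flamer could start from Eliza-style pattern substitution. The sketch below is a toy in that spirit; the reply templates and the claim-extraction rule are invented for illustration, and a serious entrant would need the statistical word-choice machinery Jay describes.

```python
# Toy sketch of the proposed "Flamer": an Eliza-style responder that
# denies whatever it is told. Templates and parsing are illustrative
# assumptions only; no real language understanding is involved.

import random
import re

DENIALS = [
    "Nonsense. '{claim}'? Nobody serious believes that.",
    "Wrong as usual. The opposite of '{claim}' is obviously true.",
    "You clearly haven't read the literature. '{claim}' is absurd.",
]

def flame(message: str) -> str:
    # Take the first sentence as the claim to deny.
    claim = re.split(r"[.!?]", message)[0].strip().lower()
    return random.choice(DENIALS).format(claim=claim)

print(flame("The Turing test is a useful benchmark. Discuss."))
```

The joke writes itself: even this ten-line version might survive a while on a sufficiently heated bboard.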

Perhaps Flamer is already out there, and perhaps this message is
generated by such a program.

j'

------------------------------

Date: 16 Aug 83 20:57:20 EDT (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: artificially intelligent bigots.

I agree that bigotry and intelligence exclude each other. An
Eliza-like bigotry program would be simple in direct proportion to its
bigotry.

------------------------------

Date: 15 Aug 83 20:05:24-PDT (Mon)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: AI Projects on the Net
Article-I.D.: ssc-vax.417


This is a really fun topic. The problem of the Turing Test is
enormously difficult and *very* subtle (either that or we're
overlooking something really obvious). Now the net provides a
gigantic lab for enterprising researchers to try out their latest
attempts. So far I have resisted the temptation, since there are more
basic problems to solve first! The curious thing about an AI project
is that it can be made infinitely complicated (programs are like that;
consider emacs or nroff), certainly enough to simulate any kind of
behavior desired, whether it be bigotry, right-wingism, irascibility,
mysticism, or perhaps even ordinary rational thought. This has been
demonstrated by several programs, among them PARRY (simulates
paranoia), and POLITICS (simulates arguments between ideologues) (mail
me for refs if interested). So it doesn't appear that there is a way
to detect an AI project, based on any *particular* behavior.

A more productive approach might be to look for the capability to vary
behavior according to circumstances (self-modifiability). I can note
that all humans appear capable of modifying their behavior, and that
very few AI programs can do so. However, not all human behavior can
be modified, and much cannot be modified easily. "Try not to think of
a zebra for the next ten minutes" - humans cannot change their own
thought processes to manage this feat, while an AI program would not
have much problem. In fact, Lenat's Eurisko system (assuming we can
believe all the claims) has the capability to speed up its own
operation! (It learned that Lisp 'eq' and 'equal' are the same for
atoms, and changed function references in its own code.) The ability to
change behavior cannot be a criterion.
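The eq/equal distinction Stan mentions translates cleanly into modern terms: Lisp 'eq' is pointer identity, 'equal' is structural comparison, and for interned atoms the two coincide, so the cheaper identity test suffices. A sketch of the same distinction in Python (using `is` versus `==` as stand-ins, which is an analogy rather than Lisp semantics):

```python
# Lisp 'eq' ~ Python 'is' (object identity);
# Lisp 'equal' ~ Python '==' (structural equality).
# For interned atoms/symbols the two agree, which is why swapping
# equal-for-eq on atoms is a safe speedup.

import sys

a = sys.intern("atom")
b = sys.intern("atom")
assert a is b        # 'eq': same object, because both are interned
assert a == b        # 'equal': structurally the same, trivially

x = [1, 2]
y = [1, 2]
assert x == y        # structurally equal...
assert x is not y    # ...but not identical: 'eq' and 'equal' diverge
```

The interesting part of the Eurisko claim is not the equivalence itself but that the program reportedly noticed it and rewrote its own function references, i.e. behavior change applied to its own code.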

So how does one decide? The question is still open....

stan the leprechaun hacker
ssc-vax!sts (soon utah-cs)

ps I thought about Zeno's Paradox recently - the Greeks (especially
Archimedes) were about a hair's breadth away from discovering
calculus, but Zeno had crippled everybody's thinking by making a
"paradox" where none existed. Perhaps the Turing Test is like
that....

------------------------------

End of AIList Digest
********************
