AIList Digest Sunday, 24 Nov 1985 Volume 3 : Issue 177
Today's Topics:
Query - Connectionist BBoard,
Science Fiction - Machines That Talk,
Literature - Technical Translation Group,
Expert Systems - Liability,
Intelligence - Modeling Prejudice
----------------------------------------------------------------------
Date: Sat 23 Nov 85 18:07:06-PST
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: connectionist query
Does anyone know of a connectionist mailing list or bulletin board to read
about the latest stuff in this area?
-Lee Altenberg@sumex-aim.arpa
------------------------------
Date: Thu, 21 Nov 85 14:33:59 est
From: ulysses!blade!gamma!mike@ucbvax.berkeley.edu (Mike Lukacs)
Subject: Machines That Talk
Fictional accounts of machines that talk:
see: "The Moon is a Harsh Mistress" by Robert A. Heinlein
Not the earliest account, but a good detailed discussion of
the (fictional) process by which Adam Selene (a self-aware
computer) produced natural language and a video self-image.
Michael E. Lukacs
Bell Communications Research
NVC 3X-330 (201)758-2876
UUCP: ...!bellcore!sabre!nyquist!maxwell!mike
(via any backbone site, an AT&T Bell Labs machine, or directly)
------------------------------
Date: Fri, 22 Nov 85 10:37:21 pst
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Misunderstanding about the technical translation group
Recently, several members of the group volunteering technical
translation time (including myself) have been asked by people on the
net for copies of papers we have identified as significant.
PLEASE DO NOT ASK US FOR COPIES, especially translated copies.
We are NOT a translation service: translating papers would demand too
much of our time.
We simply identify (by translating the titles, authors, journals,
and subjects for the readership of existing newsgroups) potentially
significant papers. We try to provide source material whenever
possible (institution or publishing house).
Some readers may find this like "Tantalus and the Grapes." We can stop,
and you need never hear about these publications again. Just complain.
This is an international volunteer effort. I've seen some postings
in their authors' native European languages (I should have told people
to use English as a common aether). It is up to the readership of the net
to obtain foreign language reports just as they obtain other TRs.
--eugene miya
NASA Ames Research Center
------------------------------
Date: Fri, 22 Nov 85 18:57 EST
From: Stephen G. Rowley <SGR@SCRC-PEGASUS.ARPA>
Subject: Expert Systems and Liability
    Date: Wed, 20 Nov 85 12:02:13 est
    From: mayerk%UPenn-GradEd%upenn.csnet@CSNET-RELAY.ARPA
    Subject: Issues Concerning Expert Systems -- Who Owns What

    I think that in the next few years, serious questions like these
    will affect the way we think about computer systems. Another hard
    question is: who is liable? Suppose a medical-diagnosis expert
    system was fed data which is later found to be faulty, or downright
    wrong, and a serious injury resulted from a physician's use of this
    information. If the writers of the expert system could be shown to
    be negligent in verifying their data, could they be held liable,
    just as in a case of malpractice?
The questions get even more interesting than that! Suppose there's a
real zippy expert system available to your doctor. He has to decide
whether or not to use it, and, if he uses it, whether or not to take its
advice.
[1] Suppose he decides NOT to use it, and he later messes up in a way
the program would have warned him about.
Can you sue him for NOT using the latest technology?
If not, how is this different from a doctor who refuses
to use CAT scans, X-rays, tissue-typing, or anything else?
[2] Suppose he decides to use it.
[2a] The program tells him to pursue a particular treatment, which
he does. You are injured as a result.
Can you sue the doctor? Or should you sue the program's
sellers? Suppose the program's sellers and implementors are
not the same; can you sue them both? What if the doctor
"should have known better"? (Never apply a tourniquet about
the neck...)
[2b] He overrides the program and does something else. You are
injured, either as a result of his treatment, or lack of the
treatment the program ordered, or both.
<Same questions as [2a]>
One consistent interpretation is that the doctor is ALWAYS responsible.
[That's why people usually trust a doctor; he's paid to take the
responsibility.] On the other hand, you could reasonably claim that
it's not his fault if the technology misled him...
------------------------------
Date: 20 Nov 1985 1733-PST (Wednesday)
From: aurora!eugene@RIACS.ARPA (Eugene Miya)
Subject: Re: Removing prejudice (actually question on AI)
> Date: Thu, 14 Nov 85 22:32:12 PST
> From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
> Subject: Removing Prejudice
>
> I think modeling prejudice accurately is the mandate for AI systems, not
> removing it.
>
> Rich.
I think the term you mean to use is "discriminate" in the behaviorist
sense. I know the keyword you have in mind is "accurately," but I
cannot escape the problem that intelligence is merely "accurate
prejudice or discrimination." Recently, I stuck the "artificial"
adjective in front of various words: artificial prejudice,
artificial discrimination [all this before Rich's posting]. It occurs
to me that a lot of what is lacking in AI would be illustrated by the
difference between these and what might be called "artificial compassion."
I am not arguing for emotion in AI, but let me give you the circumstances
where I thought of this.
Recently, I met with some people who wanted to build an expert system
to help with Federal government procurement. It occurred to me that the
Government might get the idea to build expert systems as social workers
to process welfare cases or perhaps scholarship applications. That type
of work is not merely accessing numbers or conditions (frames?).
Deception is obviously a problem. It occurs to me that creating
"artificial compassion" in this type of example is much harder than
building artificial discriminators. This type of activity is one on
which humans pride themselves, and perhaps makes up a portion of
"intelligence." Again, take the emotional aspect out of this, and I
wonder how one might implement the "social worker" system [a high
human goal]. Subexpert systems to aid overworked humans need not
apply. Can we build such a system?
Lastly, the above comment falls into the category of social implications of
AI. I am uncertain about all the AI issues I have raised, but it
seems to me that if corporations get the ideas of making such systems
for things like credit ratings [yes, I know you can argue they do this now],
or in worse cases to support "evil" governments, then ..... My interest
is in the rule system when "rules" are broken: the welfare worker decides
to give assistance, or the student gets the scholarship, when the rules
say NO.
I'm confused, and all explanations are welcome. Cite Turing's original
paper if you would like to show what area I'm getting wrong.
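To make the "broken rules" case concrete, here is a minimal sketch in
Python (the rules, thresholds, and field names are all hypothetical) of
a rule system whose verdict a human caseworker may override, with the
override logged rather than hidden:

    # Hypothetical sketch: a benefits rule system with a logged
    # human override path.
    RULES = [
        ("income_over_limit", lambda case: case["income"] > 10000),
        ("missing_documents", lambda case: not case["documents"]),
    ]

    def rule_verdict(case):
        # Deny if any rule fires; report which rules fired.
        fired = [name for name, test in RULES if test(case)]
        return ("deny" if fired else "grant"), fired

    def decide(case, override=None):
        decision, fired = rule_verdict(case)
        if override is not None and override != decision:
            # The interesting case: the worker "breaks" the rules.
            return {"decision": override, "rules_said": decision,
                    "fired": fired, "overridden": True}
        return {"decision": decision, "rules_said": decision,
                "fired": fired, "overridden": False}

    # The caseworker grants assistance although the rules say NO:
    print(decide({"income": 12000, "documents": True}, override="grant"))

The logged overrides then become the data for studying when and why the
rules get broken.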
--eugene miya
NASA Ames Research Center
{hplabs,hao,dual,ihnp4,vortex}!ames!amelia!eugene
eugene@ames-nas.ARPA
------------------------------
Date: Fri, 22 Nov 85 14:58 EST
From: Mukhop <mukhop%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Modeling Prejudice
> From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
> Subject: Removing Prejudice
> Concerning removing prejudice .......
> I think modeling prejudice accurately is the mandate for AI
> systems, not removing it.
Modeling prejudice accurately is the mandate for AI systems
inasmuch as prejudicial inferencing embodies default reasoning and
generalizing. Admittedly, these are powerful techniques, but they
can sometimes cause unsound inferences. Over-generalization from a
small sample size and upward inheritance of defaults (most sports cars
are two-seaters => most cars are two-seaters) are error-prone.
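The hazard is easy to exhibit. Here is a minimal sketch in Python (the
classes and property values are hypothetical) of why defaults inherit
soundly downward but not upward:

    # Hypothetical sketch: class defaults inherit downward along the
    # superclass chain; lifting them upward would be unsound.
    defaults = {
        "car":        {"wheels": 4},
        "sports_car": {"seats": 2},   # a subclass of "car"
    }
    superclass = {"sports_car": "car"}

    def lookup(cls, prop):
        # Downward inheritance: fall back to superclass defaults.
        while cls is not None:
            if prop in defaults.get(cls, {}):
                return defaults[cls][prop]
            cls = superclass.get(cls)
        return None

    print(lookup("sports_car", "wheels"))  # 4: inherited downward, sound
    print(lookup("car", "seats"))          # None: copying seats=2 upward
    # onto "car" would conclude that most cars are two-seaters.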
Modeling those aspects of prejudicial reasoning that cause
irrational behavior is certainly not mandated for common-sense
reasoning in AI systems. After all, some people exhibit more (robust)
common sense than others and it would be worthwhile modeling a clear
thinker. The upward inheritance of defaults may actually be used with
great advantage, but the clear thinker is aware of its limitations
and uses it with caution.
My original submission regarding prejudice and
rumor addresses the design issues of a higher level control structure
for selectively invoking the appropriate reasoning techniques. This
requires a good knowledge of the degradation criteria for each
technique--an important component of common sense.
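One shape such a control structure might take, as a rough sketch in
Python (the techniques and degradation tests are hypothetical
stand-ins):

    # Hypothetical sketch: a control layer that invokes a reasoning
    # technique only when its known degradation criteria do not apply.
    def generalize(problem):
        return "conclusion by generalization"

    def apply_defaults(problem):
        return "conclusion by default reasoning"

    TECHNIQUES = [
        # (technique, predicate that is True when the technique degrades)
        (generalize,     lambda p: p["sample_size"] < 30),
        (apply_defaults, lambda p: p["inheritance"] == "upward"),
    ]

    def reason(problem):
        for technique, degrades in TECHNIQUES:
            if not degrades(problem):
                return technique(problem)
        return "no safe technique: reason from first principles"

    print(reason({"sample_size": 12, "inheritance": "downward"}))
    # Generalization is skipped (sample too small); default reasoning
    # is safe here and is invoked.

The degradation predicates stand in for the degradation criteria above;
making them accurate is where the common sense lives.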
Uttam Mukhopadhyay
Computer Science Dept.
GM Research Labs
Warren, MI 48090-9055
Phone: (313) 575-2105
------------------------------
End of AIList Digest
********************