AIList Digest            Monday, 18 Jul 1988       Volume 8 : Issue 12 

Today's Topics:

does AI kill? - Responses to the Vincennes downing of an Iranian jetliner

----------------------------------------------------------------------

Date: 12 Jul 88 22:21:17 GMT
From: amdahl!nsc!daisy!klee@ames.arpa (Ken Lee)
Subject: does AI kill?

This appeared in my local paper yesterday. I think it raises some
serious ethical questions for the artificial intelligence R&D community.
------------------------------
COMPUTERS SUSPECTED OF WRONG CONCLUSION
from Washington Post, July 11, 1988

Computer-generated mistakes aboard the USS Vincennes may lie at the root
of the downing of Iran Air Flight 655 last week, according to senior
military officials being briefed on the disaster.

If this is the case, it raises the possibility that the 280 Iranian
passengers and crew may have been the first known victims of "artificial
intelligence,"
the technique of letting machines go beyond monitoring to
actually making deductions and recommendations to humans.

The cruiser's high-tech radar system, receivers and computers - known as
the Aegis battle management system - not only can tell the skipper what
is out there in the sky or water beyond his eyesight but also can deduce
for him whether the unseen object is friend or foe and say so in words
displayed on the console.

This time, said the military officials, the computers' programming could
not deal with the ambiguities of the airline flight and made the wrong
deduction, reached the wrong conclusion and recommended the wrong
solution to the skipper of the Vincennes, Capt. Will Rogers III.

The officials said Rogers believed the machines - which wrongly
identified the approaching plane as hostile - and fired two missiles at
the passenger plane, knocking it out of the sky over the Strait of
Hormuz.
------------------------------

The article continues with evidence and logic the AI should have used to
conclude that the plane was not hostile.

Some obvious questions right now are:
1. is AI theory useful for meaningful real world applications?
2. is AI engineering capable of generating reliable applications?
3. should AI be used for life-and-death applications like this?

Comments?

Ken
--
uucp: {ames!atari, ucbvax!imagen, nsc, pyramid, uunet}!daisy!klee
arpanet: atari!daisy!klee@ames.arc.nasa.gov

STOP CONTRA AID - BOYCOTT COCAINE

------------------------------

Date: 13 Jul 88 15:23:00 GMT
From: uxe.cso.uiuc.edu!morgan@uxc.cso.uiuc.edu
Subject: Re: does AI kill?


I think these questions are frivolous. First of all, there is nothing in
the article that says AI was involved. Second, even if there were, the
responsibility for using the information and firing the missile is the
captain's. The worst you could say is that some humans may have oversold
the captain and maybe the whole navy on the reliability of the information
the system provides. That might turn out historically to be related to
the penchant of some people in AI for grandiose exaggeration. But that's
a fact about human scientists.

And if you follow the reasoning behind these questions consistently,
you can find plenty of historical evidence to substitute 'engineering'
for 'AI' in the three questions at the end.
I take that to suggest that the reasoning is faulty.

Clearly the responsibility for the killing of the people in the Iranian
airliner falls on human Americans, not on some computer.

At the same time, one might plausibly interpret the Post article as
a good argument against any scheme that removes human judgment from the
decision process, like Reagan's lunatic fantasies of SDI.

------------------------------

Date: 13 Jul 88 19:18:47 GMT
From: ucsdhub!hp-sdd!ncr-sd!ncrlnk!rd1632!king@ucsd.edu (James King)
Subject: RE: AI and Killing (Long and unstructured)


Does AI Kill?

Some of my opinions:

In my understanding of the Aegis system there is NO direct connection between
the fire control and the detection systems. The indirect connection between
these two modules is now (and hopefully will remain for some time) a human
being. A knowledge-based system interface to sensor units on the Vincennes
was not responsible for killing anyone. It was a human's responsibility
to make the decision, based on multiple sensors and decision
paths (experience), whether or not to fire on the jet.

The computer system onboard the Vincennes was responsible for evaluating
sensor readings and providing recommendations as to what each reading meant
and possibly what action to take. Is this fundamentally any different
from employing a trained human to interpret radar signals and asking for
their opinion on a situation?

There are two answers to the above. One is that there is little difference
in the input or the output, whether it comes from a human or from the
computer system. The other is that the difference IS in how each arrives at
the output. The computer is limited by the rules programmed into it and by
the fact that it may not possess a complete understanding of the situation
(i.e. area tensions, ongoing stress in battle situations, prior situational
experience that includes "gut" reactions or intuition, etc.). Of course
speed of execution is critical, but that's another topic. The human can
rely on emotions, peripheral knowledge, intuition, etc. to make a decision
or a recommendation. But the point is that each is concluding something
about the sensor output and reporting it to the commander, period. (A toy
sketch below illustrates how narrow a fixed-rule system's view can be.)
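
To make the contrast concrete, here is a toy sketch of such a fixed-rule
evaluator, in Python. Every rule name and threshold below is invented for
illustration; nothing here reflects the actual Aegis software.

    def classify_contact(contact):
        # Rule 1: a military transponder reply is treated as suspect.
        if contact.get("transponder") == "military":
            return "ASSUME HOSTILE"
        # Rule 2: fast and descending reads like an attack profile.
        if contact.get("trend") == "descending" and \
           contact.get("speed_knots", 0) > 350:
            return "ASSUME HOSTILE"
        # Rule 3: on a published airway at cruise altitude is probably civil.
        if contact.get("on_airway") and contact.get("altitude_ft", 0) > 10000:
            return "ASSUME FRIENDLY"
        # No rule fired: the program has nothing more to say.  It cannot
        # weigh area tensions, battle stress, or intuition as a human can.
        return "UNKNOWN"

    # An ambiguous contact falls through the rules and gets a flat answer.
    print(classify_contact({"transponder": "civilian", "trend": "climbing",
                            "speed_knots": 300, "on_airway": False,
                            "altitude_ft": 7000}))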

I feel that the system contributed to the terrible mistake by providing an
inaccurate interpretation of sensor signals and making incorrect
recommendations to the commander - BUT it was still a human decision
that caused the firing. BTW, I do not (at present) condemn the commander
for the decision he made - as best I can tell, the ship's captain made the
best decision he could under the circumstances (danger to his ship) and
through interpretation of his sensor evaluations (Aegis being one type).
But I disagree with placing oneself in that position of killing another.

Our problems (besides having our fingers on the trigger) stem from the
facts that we have to have a trigger in the first place, and that we have
to rely on sensory input to make a decision - where certain sensory inputs
may be in error. The "system" is at fault - not just one component of that
system. To correct the system we fix the components. So suppose we make the
Aegis system "smarter," so that it contains knowledge about the pattern that
an Airbus gives as a cross section (or signature) on a radar sensor - what
then? We have refined the system somewhat. Do we keep refining these systems
continuously so they grow through experience as we do?

Oh no! We've started another discussion topic.

--------------------------------------------------------------------------

A couple thoughts on Mr Lee's questions:

- Whether AI can produce real applications:
- The conventional/commercial view of AI is that of expert systems.
Expert systems in and of themselves are just software systems,
programmed in computer languages like any other software system.
Some pseudo-differences are based on how we view the development
of this software. We are now more aware of the need for embedding
experts' knowledge in these systems and providing automated
methods for "reasoning" through this information.
- With that awareness, people ARE developing systems that use expert
knowledge in very focused domains (e.g. fault diagnosis).
- The same problem exists in a nuclear plant. Did the fault diagnosis
system properly diagnose the problem and report it? More
importantly, did the operator act/reason on the diagnosis properly?
- Where would the blame end? Is it the expert system's fault, or the
sensor's fault - or is it human error?
- Systems that learn and reason beyond their originally programmed
functionality have not been developed/deployed, but we'll see ...


- Whether AI can be used in life-death situations:

- If you were in a situation in which a decision had to be made within
seconds, i.e. most life and death situations, would you:
1. Rely on a toss of a coin?
2. Make a "shot in the dark" decision?
3. Make a quickly reasoned decision based on two or three inferences in
your mind?
OR
4. Use the decision of a computer (if it had knowledge of the situation's
domain and could perform thousands of logical inferences/second)?
- One gets you even odds. Two gets you a random number for the odds.
Three gives you slightly better odds based on minimal decision making.
And four provides you with a recommendation based on the knowledge of
maybe a set of experts and with the speed of computer processing.
(A toy simulation below makes this comparison concrete.)
- If you're an emotional person you probably pick two. Maybe if you
have a quick, "accessible" hunch you pick three. But if you're a
logical, disciplined person you would go with the greatest backing,
which is four (and a combination of one through three if the commander
is experienced!).
- Which one characterizes a commander in the Navy?
- Personally I'm not sure which I'd take.
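
To make the odds comparison concrete, here is a toy Monte Carlo sketch in
Python. The accuracy figures (0.6 for quick human reasoning, 0.8 for a
knowledgeable computer) are pure assumptions for illustration, not
measurements of any real commander or system.

    import random

    def coin_toss():                   # option 1: even odds
        return random.random() < 0.5

    def shot_in_the_dark():            # option 2: a fresh random accuracy
        return random.random() < random.random()   # each time (avg ~50%)

    def quick_reasoning():             # option 3: assumed slight edge
        return random.random() < 0.6

    def computer_recommendation():     # option 4: assumed expert knowledge
        return random.random() < 0.8

    TRIALS = 100000
    for decide in (coin_toss, shot_in_the_dark, quick_reasoning,
                   computer_recommendation):
        correct = sum(decide() for _ in range(TRIALS))
        print(f"{decide.__name__}: {correct / TRIALS:.1%} correct")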


Jim King
NCR Corporation
j.a.king@dayton.ncr.com

Remember these ideas do not represent the position or ideas of my
employer.

------------------------------

Date: 13 Jul 88 22:07:19 GMT
From: ssc-vax!ted@beaver.cs.washington.edu (Ted Jardine)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:
> This appeared in my local paper yesterday. I think it raises some
> serious ethical questions for the artificial intelligence R&D community.
> ... [Washington Post article about Aegis battle management computer
> system being held responsible for providing erroneous conclusions
> which led to the downing of the Iranian airliner -- omitted]
>
> The article continues with evidence and logic the AI should have used to
> conclude that the plane was not hostile.
>
> Some obvious questions right now are:
> 1. is AI theory useful for meaningful real world applications?
> 2. is AI engineering capable of generating reliable applications?
> 3. should AI be used for life-and-death applications like this?
> Comments?

First, to claim that the Aegis Battle Management system has an AI component
is patently ridiculous. I'm not suggesting that this is Ken's claim, but it
does appear to be the claim of the Washington Post article. What Aegis is
capable of doing is deriving conclusions based on an analysis of the radar
sensor information. Its conclusions, while I wouldn't consider them AI, may
be so considered by someone else, but without a firm basis. Let's first
agree that a mistake was made. And let's also agree that innocent lives were
lost, and tragically so. The key issue here, I believe, is that tentative
conclusions were generated based on partial information. Nothing but good
design and good usage will prevent this from occurring, regardless of
whether the system is AI or not. I believe that a properly created AI system
would have made visible both the conclusion and its tentative nature.
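
As a minimal sketch of what "making the tentative nature visible" might
mean in practice, consider the Python fragment below. The evidence weights
and field names are invented; the point is only that the output carries
the conclusion, the confidence, and the gaps together.

    def assess(contact):
        evidence, score = [], 0.0
        if contact.get("iff_reply") == "military":
            evidence.append("military IFF reply (+0.4)")
            score += 0.4
        if contact.get("off_airway"):
            evidence.append("off published airway (+0.2)")
            score += 0.2
        if contact.get("radio_silent"):
            evidence.append("no response to hails (+0.2)")
            score += 0.2
        # Report what the system does NOT know alongside its conclusion.
        missing = [f for f in ("altitude_trend", "radar_size")
                   if f not in contact]
        return {"label": "POSSIBLY HOSTILE" if score >= 0.5 else "UNRESOLVED",
                "confidence": round(score, 2),
                "evidence": evidence,
                "unknown": missing}

    # Partial information yields a visibly tentative answer, not a verdict.
    print(assess({"iff_reply": "military", "radio_silent": True}))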
I believe that AI systems can be built for meaningful real world
applications. But there is a very real pitfall waiting along the road to
producing such a system.

It's the pitfall that permits us to invest some twenty years of time and some
multiple thousands of dollars (hundreds of thousands?) into the training and
education of a person with a doctorate in a scientific or engineering
discipline, but not to permit a similar investment into the creation of the
knowledge base for an AI system. Most of the people I have spoken to want AI,
are convinced that they need AI, but when I say that it costs money, time and
effort just like anything else, the backpedaling goes from 0 to 60 mph faster
than a Porsche or Maserati!

I think we need to be concerned about this issue. But I hope we can avoid
dumping AI technology down the drain because it won't give us GOOD answers
unless we INVEST some sweat and some money. (Down off the soap box, boy! :-)

--
TJ {With Amazing Grace} The Piper
aka Ted Jardine CFI-ASME/I
Usenet: ...uw-beaver!ssc-vax!ted
Internet: ted@boeing.com

------------------------------

Date: 14 Jul 88 00:06:10 GMT
From: nyser!cmx!billo@itsgw.rpi.edu (Bill O)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>Computer-generated mistakes aboard the USS Vincennes may lie at the root
>...
>If this is the case, it raises the possibility that the 280 Iranian
>passengers and crew may have been the first known victims of "artificial
>intelligence,"
the technique of letting machines go beyond monitoring to
>actually making deductions and recommendations to humans.
>...
>is out there in the sky or water beyond his eyesight but also can deduce
>for him whether the unseen object is friend or foe and say so in words
>displayed on the console.
>

This is an interesting question -- it touches on the whole issue of
accountability in the use of AI. If an AI system makes a
recommendation that is followed (by humans or machines), and the end
result is that humans get hurt (physically, financially,
psychologically, or whatever), who is accountable:

1) the human(s) who wrote the AI system

2) the human(s) who followed the injurious recommendation (or
who, by inaction, allowed machines to follow the
recommendation)

3) (perhaps absurdly) the AI program itself, considered as a
responsible entity (in which case I guess it will have to
be "executed" -- pun intended).

4) no one

However, as interesting as this question is (and I'm not sure where
the discussion of it should be), let's not jump to the conclusion that
AI was involved in the Vincennes incident. We cannot assume that the
Post's writers know what AI is, and the "top military officials" may
also be confused, or may have ulterior motives for blaming AI. Maybe
AI is involved, maybe it isn't. For instance, a system that simply
matches radar image size and/or characteristics -- probably coupled
with information from the IFF transponder signal -- to the words
friend or foe "printed on the screen" is very likely not AI by most
definitions. Perhaps the Iranians were the first victims of "table
look-up" (although I have my doubts about "first"). Does anyone out
there know about Aegis -- does it use AI? (Alas, this is probably
classified.)
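
For what it's worth, pure table look-up identification would amount to
something like the Python below. The table entries are invented; the point
is that nothing resembling inference happens.

    # Invented table: (transponder mode, radar size class) -> displayed word.
    LOOKUP = {
        ("mode_II",  "small"): "FOE",     # military mode, fighter-sized
        ("mode_III", "large"): "FRIEND",  # civil mode, airliner-sized
    }

    def identify(mode, size):
        # Anything not literally in the table is simply unknown; no
        # reasoning happens, which is why this hardly qualifies as AI.
        return LOOKUP.get((mode, size), "UNKNOWN")

    print(identify("mode_III", "large"))  # -> FRIEND
    print(identify("mode_II",  "large"))  # -> UNKNOWN, the ambiguous case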

Bill O'Farrell, Northeast Parallel Architectures Center at Syracuse University
(billo@cmx.npac.syr.edu)

------------------------------

Date: 14 Jul 88 06:32:58 GMT
From: portal!cup.portal.com!tony_mak_makonnen@uunet.uu.net
Subject: Re: does AI kill?

No, AI does not kill, but AI people do. The very people who can
write computer programs for you should be the last to decide how,
when and how much the computer should compute. The reductive process
coexists in the human brain with the synthetic process. The human in the
loop can override a long computation by bringing in factors that could not
practically be foreseen: "why did the Dubai tower say ...?", "why is the
other cruiser reporting a different altitude?"; these small doubts from all
over the environment can trigger experiences in the brain that countermand
the neat, calculated decision. Ultimately, the computer is equipped with a
sensory system that is very poor compared to the human brain's. From an
extremely small slice of what is out there we expect a real-life conclusion.

------------------------------

Date: 14 Jul 88 14:17:52 GMT
From: uflorida!novavax!proxftl!tomh@gatech.edu (Tom Holroyd)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP>, klee@daisy.UUCP (Ken Lee) writes:
> Computer-generated mistakes aboard the USS Vincennes may lie at the root
> of the downing of Iran Air Flight 655 last week, according to senior
> military officials being briefed on the disaster.
> ...
> The officials said Rogers believed the machines - which wrongly
> identified the approaching plane as hostile - and fired two missiles at
> the passenger plane, knocking it out of the sky over the Strait of
> Hormuz.
> ...
> Some obvious questions right now are:
> 1. is AI theory useful for meaningful real world applications?
> 2. is AI engineering capable of generating reliable applications?
> 3. should AI be used for life-and-death applications like this?

Let's face it. That radar system *was* designed to kill. It was only
doing its job. In a real combat situation, you can't afford to make the
mistake of *not* shooting down the enemy, so you err on the side of
shooting down friends. War zones are dangerous places.

Now, before y'all start firing cruise missiles at me, I am *NOT*, I repeat
NOT praising the system that killed 280 people. What I am saying is that
it wasn't the fault of the computer program that incorrectly identified
the airliner as hostile. The blame lies entirely on the captain of the
USS Vincennes and his superiors for using the system in a zone where
commercial flights are flying.

The question is not whether AI should be used for life-and-death applications,
but whether it should be switched on in a situation like that.

In my opinion.

P.S. And it could have been done on purpose, by one side or the other.

Tom Holroyd
UUCP: {uflorida,uunet}!novavax!proxftl!tomh

The white knight is talking backwards.

------------------------------

Date: 14 Jul 88 15:20:22 GMT
From: ockerbloom-john@yale-zoo.arpa (John A. Ockerbloom)
Subject: Re: AI and Life & Death Situations

In article <496@rd1632.Dayton.NCR.COM> James King writes:
>- Whether AI can be used in life-death situations:
>
> - If you were in a situation in which a decision had to be made within
> seconds, i.e. most life and death situations, would you:
> 1. Rely on a toss of a coin?
> 2. Make a "
shot in the dark" decision?
> 3. Make a quickly reasoned decision based on two or three inferences in
> your mind?
> OR
> 4. Use the decision of a computer (if it had knowledge of the situation's
> domain and could perform thousands of logical inferences/second)?
> - One gets you even odds. Two gets you a random number for the odds.
> Three gives you slightly better odds based on minimal decision making.
> And four provides you with a recommendation based on the knowledge of
> maybe a set of experts and with the speed of computer processing.
> - If you're an emotional person you probably pick two. Maybe if you
> have a quick, "
accessable" hunch you pick three. But if you're a
> logical, disciplined, person you would go with the greatest backing
> which is four (and a combination of one through three if the commander
> is experienced!).

I don't think this is a full description of the choices. If you indeed
have a great deal of expertise in the area, you will have a very large set
of explicit and implicit inferences to work from, fine-tuned over years
of experience. You will also have a good idea about the relative
importance of different facts and rules, and can thereby find the relevant
decision paths very quickly. In short, your mental decision would be
based on "
deep" knowledge of the situation, and not just on "two or three
inferences."

Marketing hype aside, it is very difficult to get a computer program
to learn from experience and pick out relevant details in a complex
problem. It's much easier just to give it a "
shallow" form of
knowledge in a set of inference rules. In a high-pressure situation,
I would not have time to find out *how* a given computer program arrived
at a decision, unless I was very familiar with its workings to begin with.
So if I were experienced myself, I'd trust my own judgment over the
program's in a life-or-death scenario.
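
A shallow rule system can at least be made to show *how* it reached its
answer -- the kind of trace an operator would need time to read. Here is a
toy Python sketch, with invented rules and weights:

    RULES = [
        ("off_airway",   0.3, "deviation from the civil air corridor"),
        ("radio_silent", 0.3, "no answer to repeated hails"),
        ("descending",   0.4, "descending, attack-like flight profile"),
    ]

    def infer(facts):
        score, trace = 0.0, []
        for fact, weight, reason in RULES:
            if facts.get(fact):
                score += weight
                trace.append(reason)
        verdict = "HOSTILE" if score >= 0.6 else "NOT CONFIRMED"
        return verdict, trace

    verdict, trace = infer({"off_airway": True, "radio_silent": True})
    print(verdict)                 # HOSTILE (score 0.6, at the threshold)
    for reason in trace:
        print("  because:", reason)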

John Ockerbloom
------------------------------------------------------------------------------
ockerbloom@cs.yale.EDU ...!{harvard,cmcl2,decvax}!yale!ockerbloom
ockerbloom@yalecs.BITNET Box 5323 Yale Station, New Haven, CT 06520

------------------------------

Date: 14 Jul 88 15:27:56 GMT
From: rti!bdrc!jcl@mcnc.org (John C. Lusth)
Subject: Re: does AI kill?

In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
<This appeared in my local paper yesterday. I think it raises some
<serious ethical questions for the artificial intelligence R&D community.
<------------------------------
<COMPUTERS SUSPECTED OF WRONG CONCLUSION
<from Washington Post, July 11, 1988
<
<Computer-generated mistakes aboard the USS Vincennes may lie at the root
<of the downing of Iran Air Flight 655 last week, according to senior
<military officials being briefed on the disaster.
<
text of article omitted...
<
<The article continues with evidence and logic the AI should have used to
<conclude that the plane was not hostile.
<
<Some obvious questions right now are:
< 1. is AI theory useful for meaningful real world applications?
< 2. is AI engineering capable of generating reliable applications?
< 3. should AI be used for life-and-death applications like this?
<
<Comments?

While I am certainly no expert on the Aegis system, I have my doubts
as to whether Artificial Intelligence techniques (rule-based technology,
especially) were used in constructing the system. While certain parts
of the military are hot on AI, overall I don't think they actually trust
it enough to deploy it as yet.

If what I suspect is correct (does anyone out there know otherwise?), perhaps
the developers of Aegis could have avoided this disaster by incorporating
a rule-based system rather than a less robust methodology such as decision
trees or decision support systems.

John C. Lusth
Becton Dickinson Research Center
RTP, NC

...!mcnc!bdrc!jcl

------------------------------

Date: 14 Jul 88 17:24:37 GMT
From: ut-emx!juniper!wizard@sally.utexas.edu (John Onorato)
Subject: Re: AI and Killing


The situation is perfectly clear to me. See, the Aegis system (which the
Vincennes used to track the airbus) does not tell the operator the size
or any other identifying thing about the planes that it is tracking. It
just gives the operator information that it gets from the transponder in
the belly of the plane. The jet that got shot down had both civilian and
military transponders, as it was also being used as a troop transport from
time to time. Therefore, the captain of the Vincennes DID NOT KNOW that
the object was a civilian one -- the pilot of the airbus did not answer any
hails, he was deviating from the normal flight path for civilian planes
crossing the gulf, and the airbus was late; there were no planes scheduled
to be overhead at the time it was flying (it was 27 minutes late, I think).
Therefore, the captain, seeing that this COULD POSSIBLY BE a hostile
plane, blew it out of the sky. I think he did the right thing... I feel
remorse and regret for the lives that were lost, but Captain Will Rogers
was just acting to save the lives of himself and his crew.

The computer is not at fault here. Of course, SOME of the blame can be put
on it by saying that it didn't show the size, etc. of the airbus (which
would then have identified it as a civilian plane). However, this oversight
is a design fault, not an equipment fault. It (Aegis) was never designed
to be used in a half-peace/half-war situation like today's Gulf.
I do not feel that the blame can be placed on Captain Will Rogers' head,
either, for he was just acting on the information he had. If it is
anyone's fault, it is the airbus pilot's... he was late, he deviated
from the accepted flight path, he was carrying two sets of transponders,
and he never answered any hails from the Vincennes.

That's MY view, anyway. If anyone cares to differ, go right ahead.


wizard


--
|--------------------|------------------------------| Joy is in the
|wizard@juniper.UUCP | juniper!wizard@emx.utexas.edu| ears that hear...
|juniper!wizard | ut-emx!juniper!wizard |
|--------------------|------------------------------| -- Stephen R Donaldson

------------------------------

End of AIList Digest
********************
