AIList Digest Wednesday, 20 Jul 1988 Volume 8 : Issue 17
Today's Topics:
Does AI kill? -- Third in a series ...
----------------------------------------------------------------------
Date: 18 Jul 88 12:53:30 GMT
From: linus!marsh@gatech.edu (Ralph Marshall)
Subject: Re: does AI kill?
In article <4449@fluke.COM> kurt@tc.fluke.COM (Kurt Guntheroth) writes:
>I am surprised that nobody (in this newsgroup anyway) has pointed out yet
>that AEGIS is a sort of limited domain prototype of the Star Wars software.
>Same mission (identify friend or foe, target selection, weapon selection and
>aiming, etc.) Same problem (identifying and classifying threats based on
>indirect evidence like radar signatures perceived electronically at a
>distance). And what is scary, the same tendency to identify everything as a
>threat. Only when Star Wars perceives some Airbus as a threat, it will
>initiate Armageddon. Even if all it does is shoot down the plane, there
>won't be any human beings in the loop (though a lot of good they did on the
>Vincennes (sp?)).
>
I had the same reaction when I heard about the chain of events that
caused the AEGIS system to screw up. Basically it was a system DESIGN problem,
in that the system was not really designed to handle a mix of civilian and military
targets in the same area. Obviously, in the type of large scale war for
which AEGIS was designed nobody is going to be flying commercial jets
into a battle zone, so you don't have to account for that when you decide
how to identify unknown targets. I see much the same problem with Star Wars,
in that even if you can actually build the thing so that it works properly
without any system-level testing, you are going to have a hell of a time
deciding what the operational environment is going to be like. If in fact
the US/USSR combined mission to Mars goes forward, can you imagine the
number of orbiting and launched "targets" the Soviet Union will have for the
system to deal with?
Another point that I haven't heard discussed is one that was raised
in the current Time magazine: Captain Rogers received permission from the
Admiral (I think) who was in charge of the battle group _before_ firing the
missiles. This is a pretty decent example of proper decision-making procedures
for lethal situations, since several humans had to concur with the analysis
before the airplane was shot down.
---------------------------------------------------------------------------
Ralph Marshall (marsh@mitre-bedford.arpa)
Disclaimer: Often wrong but never in doubt... All of these opinions
are mine, so don't gripe to my employer if you don't like them.
------------------------------
Date: 18 Jul 88 13:43:51 GMT
From: att!whuts!spf@bloom-beacon.mit.edu (Steve Frysinger of Blue
Feather Farm)
Subject: Re: Re: does AI kill?
> Is anybody saying that firing the missile was the wrong decision under
> the circumstances? The ship was, after all, under attack by Iranian
> forces at the time, and the plane was flying in an unusual manner
> for a civilian aircraft (though not for a military one). Is there
> any basis for claiming the Captain would have (or should have) made
> a different decision had the computer not even been there?
I think each of us will have to answer this for ourselves, as the
Captain of the cruiser will for the rest of his life. Perhaps one
way to approach it is to consider an alternate scenario.
Suppose the Captain had not fired on the aircraft. And suppose the
jet was then intentionally crashed into the ship (behavior seen in
WWII, and made plausible by other Iranian suicide missions and the
fact that Iranian forces were already engaging the ship). Would
we now be criticizing the Captain for the death of his men by NOT
firing?
As I said, we each have to deal with this for ourselves.
Steve
------------------------------
Date: 18 Jul 88 16:32:05 GMT
From: fluke!kurt@beaver.cs.washington.edu (Kurt Guntheroth)
Subject: Re: does AI kill?
>> In article <4449@fluke.COM>, kurt@tc.fluke.COM (Kurt Guntheroth) writes:
>> Only when Star Wars perceives some Airbus as a threat, it will
>> initiate Armageddon. Even if all it does is shoot down the plane, there
>> won't be any human beings in the loop (though a lot of good they did on the
>> Vincennes (sp?)).
> Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
> There are some major differences. One is that the time scale will be a little
> longer. Another is that it is very unlikely that _one_ missile will be fired.
> A third is that I believe that one can almost guarantee that commercial
> airliners will identify themselves when asked by the military.
1. Huh? Time scale longer? That's not what the Star Wars folks are saying.
2. What about a power with a limited first-strike capability? Maybe they
launch ten missiles on a cloudy day, and we only see one.
3. The Airbus didn't respond. KAL 007 didn't respond. History is against you.
> The AI aspects of the AEGIS system are designed for an open ocean war.
> That it did not avoid a situation not anticipated...is not an adequate
> criticism of the designers.
On the contrary, this is a perfectly valid criticism of the design. For the AEGIS system to
make effective life-and-death decisions for the battle units it protects, it
must be flexible about the circumstances of engagement. Why is AEGIS
specified for open ocean warfare? Because that is the simplest possible
situation. Nothing in the entire radar sphere but your ships, your planes,
and hostile targets. No commercial traffic, no land masses concealing
launchers, reducing time scales, and limiting target visibility. What does
this have to do with Star Wars? No simplifying assumptions may be made for
Star Wars (or any effective computerized area defense system). It has to
work under worst case assumptions. It has to be reliable. And for Star
Wars at least, it must not make mistakes that cause retaliatory launches.
I don't know enough about defense to say whether these requirements are even
mutually consistent. Can a system even be imagined that could effectively
neutralize threats under all circumstances, and still always avoid types of
mistakes that kill large numbers of people accidentally? What are the
tradeoff curves like? What do the consumers (military and civilian)
consider reasonable confidence levels?
It also does no good to say "The AEGIS isn't really AI, so the question
is moot." Calling a complex piece of software AI is currently a marketing
decision. Academic AI types have at least as much stake as anyone else in
NOT calling certain commercial or military products AI. It would cheapen
their own work if just ANYBODY could do AI. In fact, I don't even think
AI-oid programs should be CALLED AI. It should be left to the individual
user to decide if the thing is intelligent, the way we evaluate the
behavior of pets and politicians, on a case by case basis.
------------------------------
Date: 18 Jul 88 18:17:32 GMT
From: otter!sn@hplabs.hp.com (Srinivas Nedunuri)
Subject: Re: Did AI kill? (was Re: does AI kill?)
/ otter:comp.ai / tws@beach.cis.ufl.edu (Thomas Sarver) /
Thomas Sarver writes:
>The point that everyone is missing is that there is a federal regulation that
>makes certain that no computer has complete decision control over any
>military component. As the article says, the computer RECOMMENDED that the
> ~~~~~~~~~~~
>blip was an enemy target. The operator was at fault for not ascertaining the
>computer's recommendation.
Perhaps this is also undesirable, given the current state of AI
technology. Even a recommendation amounts to the program having made some
decision. It seems to me that the proper place for AI (if AI was used) is
in filtering the mass of information that would normally overwhelm a human -
in fact, not only filtering but also collecting this information and presenting
it in a more amenable form, based on simple, robust, won't-usually-fail
heuristics. In this way it is clear that AI offers an advantage: a
human simply could not take in all the information in its original form and
come to a sensible decision in a reasonable time.
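As a purely illustrative sketch of that "collect, filter, and present, but
don't decide" role - not the AEGIS software, and with every field name,
threshold, and weight invented for the example - the idea in Python might
look like this:

from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    range_km: float            # distance from own ship
    closing_speed_mps: float   # positive = inbound
    altitude_m: float
    civil_squawk: bool         # replied on a civilian transponder mode

def urgency(t):
    """Simple, robust heuristics; higher score = show the operator sooner."""
    score = 0.0
    if t.closing_speed_mps > 0:
        score += min(t.closing_speed_mps / 300.0, 1.0)   # fast and inbound
    if t.range_km < 40.0:
        score += 1.0                                      # close
    if t.altitude_m < 3000.0:
        score += 0.5                                      # low, attack-like profile
    if t.civil_squawk:
        score -= 1.0                                      # evidence of an airliner
    return score

def picture_for_operator(tracks, top_n=5):
    """Summarize the air picture: the top_n most urgent tracks; the decision stays with the human."""
    return sorted(tracks, key=urgency, reverse=True)[:top_n]

The point of such a sketch is that the program only ranks and condenses what
the operator sees; it never issues a fire/no-fire recommendation.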
We don't know yet what actually happened on the Vincennes but the
computer's recommendation could well have swayed the Captain's decision,
psychologically.
------------------------------
Date: 19 Jul 88 00:36:35 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu
(Stephen Smoliar)
Subject: Re: does AI kill?
In article <143100002@occrsh.ATT.COM> tas@occrsh.ATT.COM writes:
>
> Let's face it, I am sure ultimately it will be easier to place the
> blame on a computer program (and thus on the supplier) than on a
> single individual. Isn't that kind of the way things work, or am
> I being cynical?
>
If you want to consider "the way things work," then I suppose we have to
go back to the whole issue of blame as it has developed in what we choose
to call our civilization. We all-too-humans are not happy with complicated
answers, particularly when they are trying to explain something bad. We
like our answers to be simple, and we like any evil to be explained in
terms of some single cause which can usually be attributed to a single
individual. This excuse for rational thought probably reached its nadir
with the formulation of the doctrine of original sin and the principal
assignment of blame to the first woman. Earlier societies realized
that it was easier to lay all blame on some dispensable animal (hence, the
term scapegoat) than to pick on any human . . . particularly when any one
man or woman might just as likely be the subject of blame as any other.
Artificial intelligence has now given us a new scapegoat. We, as a society,
can spare ourselves all the detailed and intricate thought which goes into
understanding how a plane full of innocent people can be brought to a fiery
ruin by dismissing the whole affair as a computer error. J. Presper Eckert,
who gave the world both ENIAC and UNIVAC, used to say that man was capable
of going to extraordinary lengths just to avoid thinking. When it comes to
thinking about disastrous mistakes, the Aegis disaster has demonstrated, if
nothing else, just how right Eckert was.
------------------------------
Date: 19 Jul 88 03:02:06 GMT
From: anand@vax1.acs.udel.edu (Anand Iyengar)
Subject: Re: does AI kill?
In article <4471@fluke.COM> kurt@tc.fluke.COM (Kurt Guntheroth) writes:
>contradictory. Can a system even be imagined that could effectively
>neutralize threats under all circumstances, and still always avoid types of
>mistakes that kill large numbers of people accidentally? What are the
>tradeoff curves like? What do the consumers (military and civilian)
>consider reasonable confidence levels?
It seems that even people can't do this effectively: look at the
justice system.
Anand.
------------------------------
Date: 19 Jul 88 14:23:48 GMT
From: att!whuts!spf@bloom-beacon.mit.edu (Steve Frysinger of Blue
Feather Farm)
Subject: Re: AI kills?
>In article <4563@whuts.UUCP> you write:
>>Suppose the Captain had not fired on the aircraft. And suppose the
>>jet was then intentionally crashed into the ship (behavior seen in
>>WWII, and made plausible by other Iranian suicide missions and the
>>fact that Iranian forces were already engaging the ship). Would
>>we now be criticizing the Captain for the death of his men by NOT
>>firing?
>
>Do you really think the captain and all his men would have stood there
>mesmerized for the several minutes it would take for an Irani suicide
>flight to come in and crash?
Yes, if you accept the hypothesis that they had decided not to fire on
the (apparently) civilian aircraft, especially if a deceptive ploy was
used (e.g. disabled flight, &c.). They wouldn't have realized its
hostile intent until several SECONDS before the aircraft would
impact. Put all the missiles you want into it; the fuel-laden (and possibly
explosives-laden) fuselage of the jet airliner would still have a high
probability of impact, with attendant loss of life.
>I mean, *think* for at least 10 seconds
>about the actual situation before posting such obvious nonsense.
Apart from the needlessly offensive tone of your remarks, I believe you
should pay some attention to this yourself. Every recent Naval surface
casualty has exactly these elements. Your blind faith in Naval weapon
systems is entirely unjustified - ask someone who knows, or maybe go
serve some time in the Med. yourself.
>Kamikazes might have worked in WWII, but the AEGIS, even if it can't tell
>the difference between a commercial airliner on a scheduled flight and
>an F-14 in combat, has proven its ability to blow the hell out of nearby
>aircraft.
No. It has proven its ability to disable/destroy the aircraft, not
vaporize the aircraft. If you research Kamikaze casualties in WWII
you'll find that significant damage was done by aircraft which had
been hit several times and whose pilots were presumed dead in the air.
I cannot comment on the capabilities of the weapons available to the
AEGIS fire control; I only encourage you to research the open literature
(at least) before you make such assumptions.
> There was absolutely nothing for him to fear.
Right. Stand on the bridge, in the midst of hostile fire, with an
adversary known to use civilian and military resources in deceptive
ways, and THEN tell me there was nothing for him to fear. You might
be able to critique football this way, but don't comment on war.
At first blush I categorized your position as bullshit, but then it
occurred to me that it might only be wishful thinking. The uninitiated
public in general WANTS to believe our technology is capable of meeting
the threats. You're not to blame for believing it; our leaders try to
make you believe it so you'll keep paying for it. Your ignorance is,
therefore, innocent. In the future, however, please do not corrupt
a constructive discussion with hostile remarks. If you can't
participate constructively, please keep your thoughts to yourself.
Steve
P.S. Incidentally, we haven't even touched on the possibility of the
airliner having been equipped with missiles, quite an easy thing
to do. I don't claim the Captain did the RIGHT thing; I just think
the populace (as usual) has 20/20 hindsight, but hasn't got the
slightest idea about what the situation was like in REAL-TIME. It's
always a good idea to look at both sides, folks.
------------------------------
Date: 19 Jul 88 09:13:24 PDT (Tuesday)
From: Rodney Hoffman <Hoffman.es@Xerox.COM>
Subject: Re: does AI kill?
The July 18 Los Angeles Times carries an op-ed piece by Peter D. Zimmerman, a
physicist who is a senior associate at the Carnegie Endowment for International
Peace and director of its Project on SDI Technology and Policy:
MAN IN LOOP CAN ONLY BE AS FLAWLESS AS COMPUTERS.
[In the Iranian Airbus shootdown,] the computers aboard ship use
artificial intelligence programs to unscramble the torrent of information
pouring from the phased array radars. These computers decided
that the incoming Airbus was most probably a hostile aircraft, told
the skipper, and he ordered his defenses to blast the bogey (target)
out of the sky. The machine did what it was supposed to, given the
programs in its memory. The captain simply accepted the machine's
judgment, and acted on it....
Despite the fact that the Aegis system has been exhaustively tested at
the RCA lab in New Jersey and has been at sea for years, it still failed
to make the right decision the first time an occasion to fire a live
round arose. The consequences of a similar failure in a "Star Wars"
situation could lead to the destruction of much of the civilized world.
[Descriptions of reasonable scenarios ....]
The advocates of strategic defense can argue, perhaps plausibly, that
we have now learned our lesson. The computers must be more sophisticated,
they will say. More simulations must be run and more cases studied so
that the artificial intelligence guidelines are more precise.
But the real lesson from the tragedy in the Persian Gulf is that
computers, no matter how smart, are fallible. Sensors, no matter how
good, will often transmit conflicting information. The danger is not
that we will fail to prepare the machines to cope with expected situations.
It is the absolute certainty that crucial events will be ones
we have not anticipated.
Congress thought we could prevent a strategic tragedy by insisting that
all architectures for strategic defense have the man in the loop. We
now know the bitter truth that the man will be captive to the computer,
unable to exercise independent judgment because he will have no independent
information; he will have to rely upon the recommendations of his
computer adviser. It is another reason why strategic defense systems
will increase instability, pushing the world closer to holocaust --
not further away.
- - - - -
I'm not at all sure that Aegis really uses much AI. But many lessons implicit
in Zimmerman's piece are well-taken. Among them:
* The blind faith many people place in computer analysis is rarely
justified. (This of course includes the hype the promoters use to
sell systems to military buyers, to politicians, and to voters.
Perhaps the question should be "Does hype kill?")
* Congress's "man in the loop" mandate is an unthinking palliative,
not worth much, and it shouldn't lull people into thinking the problem
is fixed.
* To have a hope of being effective, "people in the loop" need additional
information and training and options.
* Life-critical computer systems need stringent testing by disinterested
parties (including operational testing whenever feasible).
* Many, perhaps most, real combat situations cannot be anticipated.
* The hazards inherent in Star Wars should rule out its development.
-- Rodney Hoffman
------------------------------
Date: 19 Jul 88 09:34 PDT
From: Harley Davis <HDavis.pa@Xerox.COM>
Subject: Does AI Kill?
I used to work as the artificial intelligence community of the Radar Systems
Division at Raytheon Co., the primary contractor for the Aegis detection radar.
(Yes, that's correct - I was the only one in the community.) Unless the use of
AI in Aegis was kept extremely classified from everyone there, it did not use
any of the techniques we would normally call AI, including rule-based expert
systems. However, it probably used statistical/Bayesian techniques to interpret
and correlate the data from the transponder, the direct signals, etc. to come up
with a friend/foe analysis. This analysis is simplified by the fact that our
own jets give off special transponder signals.
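As a rough sketch of what such a statistical/Bayesian friend-or-foe
correlation might look like - assuming nothing about the real Aegis code,
and with a prior and likelihoods that are made-up numbers used only to show
the arithmetic - the core is just a naive-Bayes update over independent
pieces of evidence:

def posterior_hostile(prior_hostile, lik_given_hostile, lik_given_friendly):
    """Naive-Bayes update: P(hostile | evidence), assuming independent evidence items."""
    p_h = prior_hostile
    p_f = 1.0 - prior_hostile
    for lh, lf in zip(lik_given_hostile, lik_given_friendly):
        p_h *= lh
        p_f *= lf
    return p_h / (p_h + p_f)

# Example: the contact replies on a civil transponder mode (far likelier for a
# friendly aircraft) but is flying toward the ship (somewhat likelier for a
# hostile one). All numbers are illustrative.
p = posterior_hostile(
    prior_hostile=0.05,
    lik_given_hostile=[0.10, 0.70],    # P(civil squawk | hostile), P(inbound | hostile)
    lik_given_friendly=[0.90, 0.40],   # P(civil squawk | friendly), P(inbound | friendly)
)
print("P(hostile | evidence) = %.3f" % p)   # about 0.010: the transponder evidence dominates

With numbers like these the transponder reply swamps the flight-profile
evidence, which is exactly why the quality of the correlated inputs, not the
sophistication of the label "AI", is what matters.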
But I don't think this changes the nature of the question - if anything, it's
even more horrifying that our military decision makers rely on programs ~even
less~ sophisticated than the most primitive AI systems.
-- Harley Davis
HDavis.PA@Xerox.Com
------------------------------
Date: Tue, 19 Jul 88 17:50 PDT
From: Gavan Duffy <Gavan@SAMSON.CADR.DIALNET.SYMBOLICS.COM>
Subject: Re: does AI kill?
No, AI does not kill, but AI-people do.
Only if they have free will.
------------------------------
End of AIList Digest
********************