AIList Digest Monday, 18 Jul 1988 Volume 8 : Issue 13
Today's Topics:
does AI kill? - Continued
----------------------------------------------------------------------
Date: 15 Jul 88 07:03:30 GMT
From: heathcliff!alvidrez@columbia.edu (Jeff Alvidrez)
Subject: Re: AI and Killing
In the case of the Iranian airliner snafu, to say that AI kills just opens
up the whole "Guns don't kill people..." can of worms.
We all know the obvious: even if someone does point the finger at an expert
system, EVEN if it has direct control of the means of destruction (which I
don't believe is the case with Aegis), the responsibility will always be
passed to some human agent, whether it be those who put the gun in the
computer's power or those who designed the system. No one wants to take
this kind of responsibility; who would? So it is passed on until there
is no conceivable way it could rest with anyone else (something like
the old Conrad cartoon with Carter pointing his finger at Ford, Ford
pointing to Nixon, Nixon pointing to (above, in the clouds), Johnson,
and so on, until the buck reaches Washington, and he points back at
Carter).
With a gun, it is quite clear where the responsibility lies, with the
finger on the trigger. But with machines doing the work for us, it is no
longer so evident. For the Iranian airliner, who goes up against the wall?
Rogers? "I was just going on what the computer told me..." The people
behind Aegis? "Rogers used his own judgment; Aegis was intended only as
an advisory system". The machine adds a level of indirection which makes
way for a lot of finger-pointing but no real accountability. I don't
think we'll see much of that this time around, with public opinion of
the Iranians being what it is, but wait until it's someone else's plane
("Mr. President, we've got this Air-Traffic Controller expert system we'd
like you to see... an excellent opportunity to put this strike to rest...").
That is why the issue of the DOD use of AI is important: like all the other
tools of war we have developed through centuries of practice, AI allows
us to be even more detached from the consequences of our actions. Soon,
we will not even need a finger to push the button, and our machines will
do the dirty work. No blood on my fingers, so why should I care?
Now that we have established our raw destructive capabilities quite
clearly (and to the point of absurdity, as we talk of kill-ratios and
measure predicted casualties of a single weapon in the millions), AI is
the next logical step. And like gun control, trying to avoid this is just
sticking your head in the sand. If the technology is there, SOMEONE will
manage to use it, no matter how much effort is put into suppressing it.
One thing I have noticed is the vast amount of AI research funded by
DOD grants, more so (at least from what I have seen) than in other CS
fields. Is there any doubt as to what use the fruits of this research
will go? Certainly the possibilities for peaceful uses of any knowledge
gained are staggering, but I think it is quite clear what the DOD wants
it for.
They are, after all, in only one business.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Jeff Alvidrez
alvidrez@heathcliff.cs.columbia.edu
The opinions expressed in this article are fictional. Any resemblance
they may bear to real opinions (especially those of Columbia University)
is purely coincidental.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
------------------------------
Date: 15 Jul 88 13:02:28 GMT
From: mailrus!uflorida!beach.cis.ufl.edu!tws@ohio-state.arpa (Thomas
Sarver)
Subject: Did AI kill? (was Re: does AI kill?)
In article <2091@ssc-vax.UUCP> ted@ssc-vax.UUCP (Ted Jardine) writes:
>
>First, to claim that the Aegis Battle Management system has an AI component
>is patently ridiculous. I'm not suggesting that this is Ken's claim, but it
>does appear to be the claim of the Washington Post article.
>
>It's the pitfall that permits us to invest some twenty years of time and some
>multiple thousands of dollars (hundreds of thousands?) into the training and
>education of a person with a Doctorate in a scientific or engineering
>discipline
>but not to permit a similar investment into the creation of the knowledge base
>for an AI system.
>
>TJ {With Amazing Grace} The Piper
>aka Ted Jardine CFI-ASME/I
>Usenet: ...uw-beaver!ssc-vax!ted
>Internet: ted@boeing.com
--
The point that everyone is missing is that there is a federal regulation that
makes certain that no computer has complete decision control over any
military component. As the article says, the computer RECOMMENDED that the
blip was an enemy target. The operator was at fault for not verifying the
computer's recommendation.
I was a bit surprised that Ted Jardine from Boeing didn't bring this up in
his comment.
As for the other stuff about investing in an AI program: I think there need
to be sound, informed guidelines for determining whether a program can enter
a particular duty.
1) People aren't given immediate access to decision-making procedures;
   neither should a computer be.
2) However, there are certain assumptions one can make about a person that
   one can't make about a computer.
3) The most important module of an AI program is the one that says "I DON'T
   KNOW, you take over."
4) The second most important is the one that says, "I think it's blah blah
   WITH CERTAINTY X." (A sketch of 3 and 4 follows below.)
5) Just as there are military procedures for relieving humans of their
   decision-making status, there should be some way to do so for the computer.
Summary: No, AI did not kill. The operator didn't look any farther than the
screen.
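For illustration only, here is a minimal sketch (Python, with invented names,
labels, and thresholds; not a description of the actual Aegis software or of
any real DoD guideline) of what points 3 and 4 above amount to: an advisory
classifier that reports its certainty to the operator and explicitly defers
when that certainty is too low.

```python
# Hypothetical sketch of guidelines 3 and 4 above.  Every name, label, and
# threshold here is invented for illustration; this models no real system.

from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str        # e.g. "hostile", "commercial", "unknown"
    certainty: float  # 0.0 .. 1.0, always reported to the operator
    defer: bool       # True means "I DON'T KNOW, you take over"

ABSTAIN_THRESHOLD = 0.85  # below this the program must hand control back

def classify_track(scores: dict[str, float]) -> Recommendation:
    """Turn per-hypothesis scores into an advisory recommendation.

    The program never acts on its own conclusion; it only reports
    "I think it's <label> WITH CERTAINTY <x>" or defers outright.
    """
    label, certainty = max(scores.items(), key=lambda kv: kv[1])
    if certainty < ABSTAIN_THRESHOLD:
        return Recommendation("unknown", certainty, defer=True)
    return Recommendation(label, certainty, defer=False)

if __name__ == "__main__":
    # Ambiguous evidence: the right output is a deferral, not a verdict.
    rec = classify_track({"hostile": 0.55, "commercial": 0.45})
    if rec.defer:
        print("I DON'T KNOW, you take over (certainty %.2f)" % rec.certainty)
    else:
        print("I think it's %s WITH CERTAINTY %.2f" % (rec.label, rec.certainty))
```

Point 1 survives in the sketch as well: the program only recommends, and the
decision to fire stays with a person.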
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
But hey, it's the best country in the world!
Thomas W. Sarver
"The complexity of a system is proportional to the factorial of its atoms. One
can only hope to minimize the complexity of the micro-system in which one
finds oneself."
-TWS
Addendum: "... or migrate to a less complex micro-system."
------------------------------
Date: 15 Jul 88 13:29:00 GMT
From: att!occrsh!occrsh.ATT.COM!tas@bloom-beacon.mit.edu
Subject: Re: does AI kill?
>no AI does not kill, but AI-people do. The very people that can
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Good point!
Here is a question: why blame the system when the human in the loop
makes the final decision? I could understand if the Aegis
system had interpreted the incoming plane as hostile AND fired the
missiles, but it did not.
If the captain relied solely on the information given to him by the
Aegis system, then why have the human in the loop? The idea, as I always
thought, was for the human to be able to add in unforeseen factors not
accounted for in the programming of the Aegis system.
Let's face it, I am sure ultimately it will be easier to place the
blame on a computer program (and thus on the supplier) than on a
single individual. Isn't that kind of the way things work, or am
I being cynical?
Tom
------------------------------
Date: 15 Jul 88 15:58:48 GMT
From: fluke!kurt@beaver.cs.washington.edu (Kurt Guntheroth)
Subject: Re: does AI kill?
I am surprised that nobody (in this newsgroup anyway) has pointed out yet
that AEGIS is a sort of limited domain prototype of the Star Wars software.
Same mission (identify friend or foe, target selection, weapon selection and
aiming, etc.). Same problem (identifying and classifying threats based on
indirect evidence like radar signatures perceived electronically at a
distance). And what is scary, the same tendency to identify everything as a
threat. Only when Star Wars perceives some Airbus as a threat, it will
initiate Armageddon. Even if all it does is shoot down the plane, there
won't be any human beings in the loop (though a lot of good they did on the
Vincennes (sp?)).
I will believe in Star Wars only once they can demonstrate that AEGIS works
under realistic battlefield conditions. The history of these systems is
really bad. Remember the Sheffield, smoked in the Falklands War because
their Defense computer identified an incoming Exocet missile as friendly
because France is a NATO ally? What was the name of our other AEGIS cruiser
that took a missile in the gulf, because they didn't have their guns turned on,
because their own copter pilots didn't like the way the guns tracked them in
and out?
------------------------------
Date: 15 Jul 88 19:21:13 GMT
From: l.cc.purdue.edu!cik@k.cc.purdue.edu (Herman Rubin)
Subject: Re: does AI kill?
In article <4449@fluke.COM>, kurt@tc.fluke.COM (Kurt Guntheroth) writes:
............
> Only when Star Wars perceives some Airbus as a threat, it will
> initiate Armageddon. Even if all it does is shoot down the plane, there
> won't be any human beings in the loop (though a lot of good they did on the
> Vincennes (sp?)).
.............
There are some major differences. One is that the time scale will be a little
longer. Another is that it is very unlikely that _one_ missile will be fired.
One can argue that that is a possible ploy, but a single missile from the USSR
could bring retaliation with or without SDI. A third is that I believe that
one can almost guarantee that commercial airliners will identify themselves when
asked by the military.
The AI aspects of the AEGIS system are designed for an open-ocean war with
many possible enemy aircraft. That it did not avoid a situation in a narrow
strait that was never anticipated is not an adequate criticism of the
designers. However, it is not AI, nor do I think that there are many AI
systems, although many are so called. The mode of failure is precisely the
point: I define intelligence to be the ability to handle a totally unforeseen
situation, and I see no way that a deterministic system can be intelligent.
SDI will not be intelligent; it will do what it is told, not what one thinks
it has been told. This is the nature of computers at the present time.
--
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN 47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)
------------------------------
Date: 15 Jul 88 19:35:00 GMT
From: smythe@iuvax.cs.indiana.edu
Subject: Re: does AI kill?
/* Written 10:58 am Jul 15, 1988 by kurt@fluke in iuvax:comp.ai */
- [star wars stuff deleted]
-I will believe in Star Wars only once they can demonstrate that AEGIS works
-under realistic battlefield conditions. The history of these systems is
-really bad. Remember the Sheffield, smoked in the Falklands War because
-their Defense computer identified an incoming Exocet missile as friendly
-because France is a NATO ally? What was the name of our other AEGIS cruiser
-that took a missile in the gulf, because they didn't have their guns turned on
-because their own copter pilots didn't like the way the guns tracked them in
-and out.
-/* End of text from iuvax:comp.ai */
Let's try to get the facts straight. In the Falklands conflict the
British lost two destroyers. One because they never saw the missile
coming until it was too late. It is very hard to shoot down an
Exocet. In the other case, the problem was that the air defense
system was using two separate ships, one to do fire control
calculations and the other to actually fire the missile. The ship that
was lost had the fire control computer. It would not send the command
to shoot down the incoming missile because there was another British ship
in the way. The ship in the way was actually the one that was to fire the
surface-to-air missile. Screwy. I don't know which event involved the
Sheffield, but there was no misidentification in either case.
The USS Stark, the ship hit by the Iraqi-fired Exocet, is not an AEGIS
cruiser at all but a missile frigate, a smaller ship without the
sophisticated weapons systems found on the cruiser. The captain did
not activate the close-support system because he did not think the
Iraqi jet was a threat. Because of this some of his men died. This
incident is now used as a training exercise for ship commanders and
crew. In both the Stark's case and the Vincennes' case the captains
made mistakes and people died. In both cases the captains (or
officers in charge, in the case of the Stark) had deactivated the
automatic systems. On the Stark, it may have saved lives. On the
Vincennes, the tragedy would have occurred sooner.
I don't really think that AI technology is ready for either of these
systems. Both decisions involved weighing the risks of losing human
lives against conflicting and incorrect information, something that AI
systems do not yet do well. It is clear that even humans will make
mistakes in these cases, and will likely continue to do so until the
quality and reliability of their information improves.
Erich Smythe
Indiana University
smythe@iuvax.cs.indiana.edu
iuvax!smythe
------------------------------
Date: 16 Jul 88 00:06:21 GMT
From: garth!smryan@unix.sri.com (Steven Ryan)
Subject: Re: AI and Killing
Many years ago, Vaughn Bodé wrote about machines that took over the
world and exterminated humanity (The Junkwaffel Papers, et cetera).
Of course, they were just cartoons.
------------------------------
Date: 16 Jul 88 07:26:46 GMT
From: portal!cup.portal.com!tony_mak_makonnen@uunet.uu.net
Subject: Re: AI and Killing
responding to a response by David Gagliano: I already disavow every word
I wrote, since in committing to a few lines I had to compromise a lot.
What is permanent is what we can learn from the experience. It is easy to
accept that there is a problem in the human-binary relation; we do not
have neat guides on the proper mix. What is apparent to me is that the
expanded discrete input the computer allows expands rather than reduces
the qualitative input required of the human component. You could say the
human is required to become a faster and more intelligent thinker.
While the computer takes up the discursive or calculative function from
the brain, one has a sense that it shifts a much heavier burden onto the
brain's synthetic functions: imagination, intuition, that which seems to
go on subliminally. I feel this area must be addressed before we conclude
that computers are getting too fast for us to allow their practical use in
real-time decision making. While we wonder what aspect of mind the computer
is pushing us to understand, the problem will remain.
We can assume that the human component must form a bond, a feel
for what the info on the display means, just as a police officer
might with a police dog: the formation of tacit knowledge which
may or may not be capable of explication. To use an expression,
the processing of display info must become second nature. We are
back to the immensely obvious requirement of first-hand experience
and the following questions, among others. Can the skipper take
the attitude that there is always someone else who will read the
machine for him? What happens when he is really the receptor of
info processed by the radarman? Can the Navy afford the type
of individual it will take to effectively integrate computer info
into decisions in real time? Gee, at this point I am not really
sure this means anything, but what the heck....
------------------------------
Date: 16 Jul 88 13:37:58 GMT
From: ruffwork@cs.orst.edu (Ritchey Ruff)
Subject: Re: does AI kill? (Human in the loop)
In article <7333@cup.portal.com> tony_mak_makonnen@cup.portal.com writes:
>[...] The human in the
>loop could override a long computation by bringing in factors that could
>not practically be foreseen: 'why did the Dubai tower say..?, 'why is the...
Ah, yes! The DoD says "sure, we have humans in the loop." Yet most
of the programs (like IFF) are of the type where the computer says
"press the red button" and the human in the loop presses the red
button! (In this case: IFF says it's a foe, so shoot it down...)
Everything I've read thus far (and I might have missed something ;-)
implies the IFF was the deciding factor...
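A hedged, purely illustrative sketch of the distinction (assumed, simplified
logic; this is not a description of the real procedures): a human "in the
loop" only matters if the loop lets independent cues override the IFF call,
rather than merely forwarding it.

```python
# Illustrative only: a rubber-stamp loop versus a loop in which the human
# can actually override the IFF output.  All rules and cues are invented.

def rubber_stamp(iff_call: str) -> str:
    """Degenerate 'human in the loop': whatever IFF says is what happens."""
    return "engage" if iff_call == "foe" else "hold"

def human_override(iff_call: str, cues: dict[str, bool]) -> str:
    """The human weighs cues the program was never given before acting."""
    contradicting = (cues.get("on_commercial_airway", False)
                     or cues.get("civilian_transponder", False))
    if iff_call == "foe" and contradicting:
        return "hold and re-interrogate"  # unforeseen factors outweigh IFF
    return "engage" if iff_call == "foe" else "hold"

if __name__ == "__main__":
    cues = {"on_commercial_airway": True, "civilian_transponder": False}
    print(rubber_stamp("foe"))           # -> engage
    print(human_override("foe", cues))   # -> hold and re-interrogate
```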
--ritchey ruff ruffwork@cs.orst.edu -or- ...!tektronix!orstcs!ruffwork
------------------------------
Date: 16 Jul 88 14:29:29 GMT
From: sunybcs!stewart@rutgers.edu (Norman R. Stewart)
Subject: Re: does AI kill?
Is anybody saying that firing the missile was the wrong decision under
the circumstances? The ship was, after all, under attack by Iranian
forces at the time, and the plane was flying in an unusual manner
for a civilian aircraft (though not for a military one). Is there
any basis for claiming the Captain would have (or should have) made
a different decision had the computer not even been there?
While I'm at it, the Iranians began attacking defenseless commercial
ships in international waters, killing innocent crew members, and
destroying non-military targets (and dumping how much crude oil into the
water?). Criticizing the American Navy for coming to defend these
ships is like saying that if I see someone getting raped or mugged
I should ignore it if it is not happening in my own yard.
The Iranians created the situation, let them live with it.
Norman R. Stewart Jr. * How much more suffering is
C.S. Grad - SUNYAB * caused by the thought of death
internet: stewart@cs.buffalo.edu * than by death itself!
bitnet: stewart@sunybcs.bitnet * Will Durant
------------------------------
Date: 17 Jul 88 14:07:08 GMT
From: uvaarpa!virginia!uvacs!cfh6r@umd5.umd.edu (Carl F. Huber)
Subject: Re: does AI kill?
In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>This appeared in my local paper yesterday. I think it raises some
>serious ethical questions for the artificial intelligence R&D community.
>------------------------------
>COMPUTERS SUSPECTED OF WRONG CONCLUSION
>from Washington Post, July 11, 1988
>
>Computer-generated mistakes aboard the USS Vincennes may lie at the root
[other excerpts from an article in Ken's local paper deleted]
This sounds more like the same old story of blaming the computer. Also,
it is not clear where the "intelligence" comes into play here,
artificial or otherwise (not-so-:-). It really sounds like the user was
not very well trained to use the program, and the program may not have been
informative enough, but this also is not presented in the article.
I don't see any new ethical questions being raised at all. I see a lot of
organic material going through the air-conditioning at the Pentagon.
-carl huber
------------------------------
End of AIList Digest
********************