AIList Digest            Tuesday, 26 Jul 1988      Volume 8 : Issue 24 

Today's Topics:

Does AI kill? Fifth in a series ...

----------------------------------------------------------------------

Date: 20 Jul 88 19:31:09 GMT
From: hpl-opus!hpccc!hp-sde!hpfcdc!hpislx!hpmtlx!ed@hplabs.hp.com
($Ed Schlotzhauer)
Subject: Re: Re: AI kills?

Steve Frysinger brings a valuable viewpoint into this discussion. It sure
is easier to be a self-righteous Monday-morning quarterback when *I'm* not
the one in a battle zone having to make an immediate, life-and-death decision.

I am reading the book "Very Special Intelligence" by Patrick Beesly. It is
the story of the British Admiralty's Operational Intelligence Center during
WWII. It shows vividly how a major part of the process of making sense of
the bits and pieces of intelligence data relies on the experience, intuition,
hunches, and just plain luck of the intelligence officers. Granted our
"modern" techniques are not nearly so "primitive" (I wonder), but the same
must be true today.

I haven't finished the book yet, but two observations that immediately come to
mind are:

1. In a war situation, you almost *never* have complete or even
factual information at the time that decisions *must* be made.

2. I would *never* trust a machine to make operational recommendations.

Ed Schlotzhauer

------------------------------

Date: Wed, 21 Jul 1988 20:00
From: Ferhat DJAVIDAN <DJAVI@TRBOUN>
Subject: Re: Re: does AI kill? by <stewart@sunybcs.bitnet>

In article V8 #13 <stewart@sunybcs.bitnet> Norman R. Stewart "Jr" writes:

>While I'm at it, the Iranians began attacking defenseless commercial
>ships in international waters, killing innocent crew members, and
>destroying non-military targets (and dumping how much crude oil into
>water?). Criticizing the American Navy for coming to defend these
>ships is like saying that if I see someone getting raped or mugged
>I should ignore it if it is not happening in my own yard.
>
>The Iranians created the situation, let them live with it.
>
>
>
>Norman R. Stewart Jr. * How much more suffering is
>C.S. Grad - SUNYAB * caused by the thought of death
>internet: stewart@cs.buffalo.edu * than by death itself!
>bitnet: stewart@sunybcs.bitnet * Will Durant

I am very sorry, but this is not a suitable subject for AIList.
We do not discuss the politics, economics, or strategies of the USA
on this list.
According to you (I mean NORMAN), the USA sees herself as the world's
body-guard and has the power to kill. In some cases this is true.
But only a fanatic would say
>The Iranians created the situation, let them live with it.
First of all, this is wrong. Iraq started the war, and with it the situation.
I think you heard wrong; correct yourself.
Also, both sides are still buying some of their weapons directly from the USA.
I don't want to say "killing innocent people is the job of the USA".
Please don't push me to do this.
Humanity will never forget Vietnam, Cuba, Nicaragua, Palestine, ...
or the indirect results that have occurred because of guns made in the U.S.A.

Please don't discuss anything except AI on this list, or else God knows what!?

P.S. Good news (or bad news, for you): the acceptance of peace by Iran.

Ferhat DJAVIDAN
<djavi@trboun.bitnet>

------------------------------

Date: Fri, 22 Jul 88 10:26 EDT
From: <MORRISET%URVAX.BITNET@MITVMA.MIT.EDU> (Cow Lover)
Subject: Re: Does AI kill?


Thinking on down the line...

Suppose we eventually construct an artificially intelligent
"entity." It thinks, "feels", etc... Suppose it kills someone
because it "dislikes" them. Should the builder of the entity
be held responsible?

Just a thought...

Greg Morrisett
BITNET%"MORRISET@URVAX"

------------------------------

Date: 22 Jul 88 18:29:35 GMT
From: smithj@marlin.nosc.mil (James Smith)
Subject: Re: does AI kill?

In article <754@amelia.nas.nasa.gov> Michael S. Fischbein writes:
>Nonsense. The Navy Tactical Data System (NTDS)... had very similar
>capabilities as far as tracking and individual's displays go.

Yes and no. While the individual display consoles are similar in
function, the manner in which the data is displayed is quite different.
With an NTDS display, you get both radar video and computer-generated
symbols; AEGIS presents computer-generated symbology ONLY. Therefore,
the operator doesn't have any raw video to attempt to classify a target
by size, quality, etc. (As a side note, it is VERY difficult to
accurately classify a target solely on the basis of its radar return -
the characteristics of a radar return depend on so many factors
(e.g. target orientation, target altitude and range (fade zones),
external target loads (fuel tanks, ordnance, etc)) that it is simply
not true to say 'an Airbus 300 will have a larger radar blip than an
F-14'.)
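
To make the point concrete, here is a toy back-of-the-envelope sketch
(Python, with entirely invented numbers - none of these figures come from
real radar cross-section data) of how the received return varies with
aspect and range, so that a physically larger aircraft need not produce
the larger blip:

    # Toy illustration only: invented radar cross-section (RCS) values,
    # not real aircraft data.  Received power follows the radar range
    # equation: P_r is proportional to RCS / R**4.
    import math

    def received_power(rcs_m2, range_km):
        """Relative received power (arbitrary units)."""
        return rcs_m2 / (range_km ** 4)

    def effective_rcs(nose_on_rcs, broadside_rcs, aspect_deg):
        """Crude aspect-dependent RCS: nose-on at 0 deg, broadside at 90 deg."""
        w = math.sin(math.radians(aspect_deg)) ** 2
        return (1 - w) * nose_on_rcs + w * broadside_rcs

    # Hypothetical case: a large airliner seen nearly nose-on at long range
    # versus a small fighter seen nearly broadside at shorter range.
    airliner = received_power(effective_rcs(5.0, 100.0, aspect_deg=5), range_km=70)
    fighter  = received_power(effective_rcs(3.0, 25.0, aspect_deg=80), range_km=35)

    print(f"airliner return: {airliner:.2e}")   # comes out smaller...
    print(f"fighter  return: {fighter:.2e}")    # ...than this one

With these made-up numbers the fighter's return comes out dozens of times
stronger than the airliner's, which is exactly why blip size alone is a
poor discriminator.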

>I would appreciate any reference that compares the SPY-1 to the SPS-48
>and shows significantly greater precision to either.

I can find no _specific_ references in the unclassified literature;
however, it is commonly accepted that the SPY-1 radar has significantly
greater accuracy in both bearing (azimuth and elevation) and range
than other operational systems. In addition, the SPY-1 provides the
weapon control systems with several target updates per second, as
opposed to the roughly 5-second update intervals of the SPS-48 (Jane's
Weapon Systems, 1987-1988). The SPS-48(E), while providing an improvement
in performance over the older SPS-48(C), does not significantly alter
this.
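
As a rough worked example of why the update rate matters (my own
illustrative numbers, not system specifications), the distance a target
covers between track updates scales directly with the update interval:

    # Illustrative only: how far a target travels between radar track updates.
    target_speed_mps = 300.0   # assumed closing speed, roughly Mach 0.9

    for label, update_interval_s in [("~5 s between updates", 5.0),
                                     ("several updates per second (say 4/s)", 0.25)]:
        drift_m = target_speed_mps * update_interval_s
        print(f"{label}: target moves ~{drift_m:.0f} m between updates")

At 300 m/s that is roughly 1500 m of uncorrected motion between 5-second
updates, versus under 100 m when the track is refreshed several times a
second - a very different starting point for a fire-control solution.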


+-------------------------------------------------------------+
| Jim Smith                     | If we weren't crazy,        |
| (619) 553-3704                | we'd all go insane          |
|                               |                             |
| UUCP: nosc!marlin!smithj      |    - Jimmy Buffett          |
| DDN: smithj@marlin.nosc.MIL   |                             |
+-------------------------------------------------------------+

------------------------------

Date: 22 Jul 88 22:19:22 GMT
From: smithj@marlin.nosc.mil (James Smith)
Subject: Re: AI...Shoot/No Shoot

In article <854@lakesys.UUCP> tomk@lakesys.UUCP (Tom Kopp) writes:
>Does anybody know if Naval Command Candidates go through similar
>testing (re: Shoot/No-Shoot decision)?

Yes, they do, starting as junior officers 'playing' a naval warfare
simulation known as "NAVTAG". NAVTAG is used to teach basic tactics of
sensor and weapon employment. Though somewhat restricted in terms of
the types of platforms it can simulate, within these limitations
NAVTAG does a fairly good job of duplicating the characteristics of
existing shipboard systems, and provides a useful tool in the
education of young naval officers.

Before becoming the commanding officer of a warship (e.g. the
USS Vincennes), an officer will have served as "Tactical Action
Officer" (TAO) during a previous shipboard tour. This individual has the
authority to release (shoot) ship's weapons in the absence of the
CO. TAO school involves a lot of simulated combat, and forces the
candidate to make numerous shoot/no-shoot decisions under conditions
as stressful as can be attained in a school (simulator) environment.
In addition, before assuming command, a prospective commanding officer
goes through a series of schools designed to hone his knowledge/decision-
making skills in critical areas.

>I still don't understand WHY the computers mis-lead the crew as to the
>type of aircraft...

Read article 660 on comp.society.futures.

Also, IAF 655 was 'squawking' MODE-2 IFF, which is used EXCLUSIVELY
for military aircraft/ships. The code they had set had, apparently, been
previously correlated with an Iranian F-14; thus, the computer made
the correlation between the aircraft taking off from Bandar Abbas
(which is known to be used as a military airbase) and the MODE-2 IFF.
This, coupled with the lack of response to radio challenges on both
civil and military channels, resulted in the aircraft being declared
'hostile' and subsequently destroyed.
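
A crude way to picture that chain of correlations - purely a toy sketch,
with rule names, codes, and thresholds invented for illustration and not
taken from any actual combat-system doctrine - is a handful of if-then
rules over the track data:

    # Toy sketch of rule-based track classification.  Every code, rule,
    # and threshold here is invented; this is NOT the AEGIS logic.

    def classify_track(track, military_mode2_codes):
        reasons = []

        if track.get("mode2_code") in military_mode2_codes:
            reasons.append("Mode-2 IFF code previously correlated with a military aircraft")

        if track.get("origin_is_military_airbase"):
            reasons.append("departed an airfield known to be used by military aircraft")

        if not track.get("responded_to_challenges", False):
            reasons.append("no response to challenges on civil or military channels")

        # Several independent indications push the track to 'assumed hostile';
        # otherwise it stays 'unknown'.
        label = "assumed hostile" if len(reasons) >= 3 else "unknown"
        return label, reasons

    track = {
        "mode2_code": "1100",                # hypothetical code value
        "origin_is_military_airbase": True,  # e.g. a dual-use field
        "responded_to_challenges": False,
    }

    label, reasons = classify_track(track, military_mode2_codes={"1100"})
    print(label)
    for r in reasons:
        print(" -", r)

The point of the toy version is simply that each rule is individually
reasonable, yet a mis-set transponder code or a missed radio call can
tip the combined result the wrong way.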

------------------------------

Date: Sat, 23 Jul 88 17:36 EST
From: steven horst 219-289-9067
<GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Does AI kill? (long)

I must say I was taken by surprise by the flurry of messages about
the tragic destruction of a commercial plane in the Persian Gulf.
But what made the strongest impression was the *defensive* tone
of a number of the messages. The basic defensive themes seemed to be:
(1) The tracking system wasn't AI technology,
(2) Even if it WAS AI technology, IT didn't shoot down the plane
(3) Even if it WAS AI and DID shoot down the plane, we mustn't
let people use that as a reason for shutting off our money.

Now all of these are serious and reasonable points, and merit some
discussion. I think we ought to be careful, though, that we
don't just rationalize away the very real questions of moral
responsibility involved in designing systems that can affect (and
indeed terminate) the lives of many, many people. These questions
arise for AI systems and non-AI systems, for military systems and
commercial expert systems.

Let's start simple. We've all been annoyed by design flaws
in comparatively simple and extensively tested commercial software,
and those who have done programming for other users know how hard
it is to idiotproof programs much simpler than those needed by the
military and by large private corporations. If we look at expert
systems, we are faced with additional difficulties: if the history of
AI has shown anything, it has shown that "reducing" human reasoning
to a set of rules, even within a very circumscribed domain, is much
harder than people in AI 30 years ago imagined.

But of course most programs don't have life-and-death consequences.
If WORD has bugs, Microsoft loses money, but nobody dies. If SAM
can't handle some questions about stories, the Yale group gets a grant
to work on PAM. But much of the money that supports AI research
comes from DOD, and the obvious implication is that what we design
may be used in ways that result in dire consequences. And it really
won't do to say, "Well, that isn't OUR fault....after all, LOTS of
things can be used to hurt people. But if somebody gets hit by a car,
it isn't the fault of the guy on the assembly line." First of all,
sometimes it IS the car company's fault (as was argued against Audi).
But more to the point, the moral responsibility we undertake in
supplying a product increases with the seriousness of the consequences
of error and with the uncertainty of proper performance. (Of course
even the "proper" performance of weapons systems involves the designer
in some moral responsibility.) And the track record of very large
programs designed by large teams - often with no one on the team
knowing the whole system inside and out - is quite spotty, especially
when the system cannot be tested under realistic conditions.

My aim here is to suggest that lots of people in AI (and other
computer-related fields) are working on projects that can affect lots
of people somewhere down the road, and that there are some very
real questions about whether a given project is RIGHT - questions
which we have a right and even an obligation to ask of ourselves, and
not to leave for the people supplying the funding. Programs that
can result in the death or injury of human beings are not morally
neutral. Nor are programs that affect privacy or the distribution
of power or wealth. We won't all agree on what is good, what is
necessary evil and what is unpardonable, but that doesn't mean we
shouldn't take very serious account of how our projects are
INTENDED to be used, how they might be used in ways we don't intend,
how flaws we overlook may result in tragic consequences, and how
a user who lacks our knowledge or uses our product in a context it
was not designed to deal with can cause grave harm.

Doctors, lawyers, military professionals and many other
professionals whose decisions affect other people's lives have ethical
codes. They don't always live up to them, but there is some sense
of taking ethical questions seriously AS A PROFESSION. It is good to
see groups emerging like Computer Professionals for Social
Responsibility. Perhaps it is time for those of us who work in AI
or in computer-related fields to take a serious interest, AS A
PROFESSION, in ethical questions.
--Steve Horst      BITNET address....gkmarh@irishmvs
                   SURFACE MAIL......Department of Philosophy
                                     Notre Dame, IN 46556

------------------------------

Date: 25 Jul 88 17:35:13 GMT
From: rochester!ken@cu-arpa.cs.cornell.edu (Ken Yap)
Subject: Re: does AI kill?

I find this discussion as interesting as the next person but it is
straying from the charter of these newsgroups. Could we please continue
in soc.politics.arms-d or talk.politics?

Ken




[Editor's note:

I couldn't agree more.

There is a time and a place for discussions like this. It is
*not* in a large multi-network mailing list (gatewayed to several
thousand readers world-wide) intended for discussion of AI-related
topics.


- nick]

------------------------------

End of AIList Digest
********************
