AIList Digest            Sunday, 22 Sep 1985      Volume 3 : Issue 126

Today's Topics:
  Expert Systems - Hype
----------------------------------------------------------------------
Date: Sat 21 Sep 85 17:40:46-PDT
From: Gary Martins <GARY@SRI-CSLA.ARPA>
Subject: Are We Ready For This?
In reading and replying to Bill Anderson's message in a recent edition
of AIList, it occurred to me that there may be broader interest out there
in a discussion of objectivity and truthfulness in the
fields of "AI", "expert systems", "knowledge engineering", etc.
I have repeatedly observed the following at conferences, symposia,
briefings, etc., and in "AI" journals and magazines:
- non-existent systems are described and debated as if
they were real
- unsuccessful developments are described as operationally
validated
- expensive and ineffective methods and "tools" are said to
solve difficult problems in computing
This happens too often to be accidental or aberrational. Those
responsible include highly placed figures from some of our most
prestigious institutions.
It would greatly interest me to understand:
- why does this happen?
- is this good or bad for "AI"?
- does this happen in all high-tech fields, or is "AI"
unique?
- what can or should be done about it, and by whom?
Would AIList readers and contributors like to take up these issues,
or is this set of topics too "sensitive" for the community to address
at this time?
Many thanks!
Gary R. Martins
------------------------------
Date: Fri 20 Sep 85 20:33:22-PDT
From: Gary Martins <GARY@SRI-CSLA.ARPA>
Subject: Phony claims in AI
Bill -
I read your recent AIList comment with a mixture of amusement and
amazement. Will you pardon my asking: where have you been? So far
as I have been able to probe, the entire edifice of "AI" -- and most
especially "expert systems" -- is built on a foundation of the most
outrageous baloney. At an endless procession of conferences,
symposia, press briefings, etc., the most extravagant claims are
relentlessly presented for the performance of a number of "AI"
systems. I have personally been associated with several of these
developments, and I can testify that not a single one of them can
honestly be said to "work" in any authentic sense of that word.
There are apparently a few utterly trivial systems that "work", after a
fashion. But the astronomical development costs of these toys should
make any rational user shrink from following a similar path.
Yet, despite these plain and rather obvious facts, military and industrial
audiences are daily told about the "successes" of "AI" and "expert
systems". The sorts of claims you refer to are entirely ordinary and
routine in this context.
One has to wonder: is this what modern science and technology are all
about? Or is "AI" just a special case of extreme dishonesty?
Gary R. Martins
PS -- Yes, send me the references, just for the record!
------------------------------
Date: Fri, 20 Sep 85 23:50:57 PDT
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: AI and Satellite Control
As one of the military officers (Capt, USAF) standing guard against
over-enthusiastic claims by DoD contractors, I would like to assure
readers of this net that the USAF is not as gullible as Bill Anderson
fears in his recent message.
My job for the past year has been to develop an architecture,
compatible with SDI, that incorporates Expert Systems as appropriate.
Beau Shields conveyed the essence of our thinking in his recent
article published in Computer Design: Expert Systems assist experts
with tasks those experts already understand well, providing more time
for them to handle the tough problems.
To my knowledge, no one is seriously considering cutting
people out of the loop, any more than you can completely replace
the engine in your car with a turbocharger. We (the Air Force
Satellite Control Facility, Plans & Programs Division) plan to issue
the next version of our masterplan in late December, which will
describe our thinking in detail. If you can come up with a good
reason why I should send you a copy, send a request to:
AFSCF/XRP
Attn: Capt Richard Jennings
PO Box 3430
Sunnyvale, CA 94088-3430
on letterhead requesting a copy of REVISION G, and stating your reason.
The document is not classified, but for obvious reasons, distribution
is limited, and I may have to argue for your copy on a case by case
basis.
People have to do a lot more than write fiction about their
expert systems to get our serious interest. Rome Air Development
Center and the AFSCF both now have Symbolics 3670s to evaluate
these systems: RADC in the context of the current state of the art,
and the AFSCF in the context of satellite control. The AFSCF is
also in the process of putting IBM PC ATs into operations areas to
evaluate the PC-AT as a delivery vehicle in the context of distributed
shells such as KEE.
One large defense contractor, also known for dominating the
PC marketplace, as in *** PC, has tried for 4 years to solve one
of our operational problems *using their own funds* without acceptance.
Back to GE: they are visiting us next week and will get a chance
to show their system to the people who fly satellites for a living if
they sufficiently impress the Plans and Programs people.
Regards, Rich.
PS. Standard Disclaimer: I definitely hold opinions, but I am not always
successful at conveying them to my employer -- so they should be considered
my own personal opinions.
PPS. Perhaps I have been going after a gnat with a terminal air defense
system, but the concerns raised by Bill Anderson seemed to be a good
lead-in to a request for information (by me) about what we should be
doing.
ARPA: jennings@aerospace
AV: 799-6427
ATT: 408 744-6427
sNAIL: AFSCF/XRP(Jennings), PO Box 3430, Sunnyvale CA, 94088-3430.
------------------------------
Date: Sat, 21 Sep 1985 03:57 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: AIList Digest V3 #125
I am amused at Bill Anderson's flame, in view of his righteous indignation
about an AI author's boasts about the merits of AI programming.
    It's simply ludicrous to state that current AI systems are better
    in battlefield situations than humans.
From what we could see of the paper in question, there was no
suggestion about being better than humans. The suggestion was that AI
systems are better than conventional software.
    What was the last AI system that could drive a tank, carry on a
    conversation, and fix a broken radio whilst under enemy fire?
To my knowledge, ONLY AI systems, so far, can drive cars, carry on
conversations, and debug electronic systems. They don't do these jobs
very well yet, but they're coming along -- and have no competition in
those areas from any other kind of software. So one could certainly
say that Anderson's representation is "appalling".
    The second comment is equally misleading. To contrast "formal and
    algorithmic" with "robust" seems to imply that algorithms and
    formal procedures are inherently not robust. On what is this claim
    based? It sounds like a recipe for unreliable software to me.
I think it is a fair claim. This is because it appears infeasible to
prove the correctness of non-trivially complex programs, especially
because of the profound weakness of all known formal specification
languages, without which formal systems are entirely impotent. In
such circumstances, AI methods which employ many kinds of redundancy
and plausibility tests, e.g., comparing plans to models, are the best
methods available. They have a long way to go, but I suspect that,
for sufficiently complicated jobs, even the crude "expert systems" of
today are already ahead of all other practical forms of commercial or
theoretical programming methods.
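To make that concrete, here is a minimal sketch of what I mean by
comparing plans to models; every name and number in it is invented for
illustration, not taken from any real system:

  # Simulate a candidate plan against several crude world models and
  # accept it only if every model predicts it ends near the goal.  No
  # proof of correctness is attempted; agreement among independent,
  # redundant checks stands in for one.

  def simulate(plan, state, step_model):
      """Run the plan through a one-dimensional world model."""
      for move in plan:
          state = step_model(state, move)
      return state

  def plausible(plan, start, goal, models, tolerance=0.5):
      """Redundant plausibility test: every model must agree."""
      return all(abs(simulate(plan, start, m) - goal) <= tolerance
                 for m in models)

  # Two deliberately different toy models of the same motion:
  ideal = lambda state, move: state + move        # frictionless world
  lossy = lambda state, move: state + 0.9 * move  # 10% slippage

  print(plausible([2.0, 2.0, 1.0], start=0.0, goal=5.0,
                  models=[ideal, lossy]))         # -> True

A plan endorsed by only one of the models is rejected; that crude
redundancy is exactly what formal proof cannot yet replace.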
    How can someone write this stuff? I know, to make money. But if
    this is the kind of information that is presented to the
    military, and upon which they make decisions, then how can we
    expect any kind of fair assessment of the possible projects in
    the Strategic Computing (and Defense) Initiatives? How can this
    kind of misinformation be rebutted?
This is a complicated subject, and that kind of accusation will not
clarify it.
I have not heard enough good ideas to be convinced, incidentally, that
SDI is a feasible replacement for MAD. But I certainly am dismayed
by the strange arguments I've heard from the CS community about why
the hard problem with it is "software". It seems to me that the hard
problem is centered on the proposed defensive weapons, and if
they're any good (which I doubt) then aiming and controlling them
should not be unusually difficult. The arguments I've seen to the
contrary all seem to be political flames.
So is this one, I suppose; hence I don't plan to defend what I've said
here in more messages.
------------------------------
Date: Sat, 21 Sep 85 10:35 EST
From: Carole D Hafner <hafner%northeastern.csnet@CSNET-RELAY.ARPA>
Subject: Honesty and the SDI: Reply to Bill Anderson
I was pleased to see Bill Anderson's comments on AILIST, and I admire his
courage for publicly criticizing the "Expert Battlefield Management System"
research. He is not alone - there are many people in the AI field and
Computer Science in general who view these projects - with their definite
schedules for implementation of the various components - with alarm.
However, to say or imply that unscrupulous people are making these promises
just to "make money" is an oversimplification.
From the beginning of Computer Science as a separate subject in the early
1960s (or perhaps a bit earlier), much if not most of the research funding
has come from the DOD. This is especially true for AI. This is not the
case in most other Western countries, where there are specific agencies
for funding scientific research, and the military services are more focused
on military projects.
Over the years, the DOD-supported research groups provided a wonderful
environment to pursue AI research - the most money, the best equipment, access
to ARPANET, etc. And the DOD allowed the researchers a great deal of freedom
in pursuing their interests. As a result, many of the best researchers ended
up working in these groups.
I have always felt that this is, structurally, a very dangerous situation both
for Computer Science and for the country. For a very simple reason: if the
livelihood of a majority of AI researchers is dependent on the DOD, how can we
have a free and open debate on the use of computers in the military? Now it
looks as if some of my fears are coming true. I know some of the researchers
involved in SDI have severe misgivings about it - but if you have a 20-30
person group who all depend on the DOD for their jobs, it's a tough situation.
I personally would like to see groups such as "Computer Professionals for
Social Responsibility" attack the structural problem rather than
"Star Wars" per se.
It is likely that the promises being made for the battlefield management
systems will not be fulfilled - since researchers in industrial labs are still
struggling with the problem of recognizing stationary objects in a "bin of
parts", and most industrial vision systems have to use special lighting
to recognize separate parts moving on a conveyor belt. Furthermore, speech
understanding systems require ideal conditions and even then only work with
very limited vocabularies. So it's interesting to wonder how the description
of the complex events taking place during the battle will be communicated to
the computer.
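As a toy illustration of how narrow "very limited vocabularies" really
are (the word list and actions below are invented, not drawn from any
real system), such an interface reduces to matching an utterance
against a short fixed table, and anything as fluid as a battle
description simply falls through:

  # A hypothetical limited-vocabulary command interface: the machine
  # matches an utterance against a small fixed word list; anything
  # outside that list is rejected outright.

  VOCABULARY = {
      "launch": "BEGIN LAUNCH SEQUENCE",
      "abort":  "ABORT CURRENT SEQUENCE",
      "status": "REPORT SYSTEM STATUS",
  }

  def recognize(utterance):
      """Return the action for a known word, or None otherwise."""
      return VOCABULARY.get(utterance.strip().lower())

  print(recognize("Status"))  # -> REPORT SYSTEM STATUS
  print(recognize("two armored columns advancing from the northeast"))
  # -> None: a realistic battlefield report falls outside the vocabulary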
Perhaps the field of AI is fortunate that the SDI is happening now instead of
20 years from now. We have a chance to get our research funded on a different
basis, so that this unfortunate situation won't happen again.
Carole Hafner
College of Computer Science
Northeastern University
csnet: hafner@northeastern
------------------------------
End of AIList Digest
********************