AIList Digest            Tuesday, 29 Nov 1983     Volume 1 : Issue 105 

Today's Topics:
AI - Challenge & Responses & Query
----------------------------------------------------------------------

Date: 21 Nov 1983 12:25-PST
From: dietz%usc-cse%USC-ECL@SRI-NIC
Reply-to: dietz%USC-ECL@SRI-NIC
Subject: Re: The AI Challenge

I too am skeptical about expert systems. Their attraction seems to be
as a kind of intellectual dustbin into which difficulties can be swept.
Have a hard problem that you don't know (or that no one knows) how to
solve? Build an expert system for it.

Ken Laws' idea of an expert system as a very modular, hackable program
is interesting. A theory or methodology of how to hack programs would
be interesting and useful, but I fear it would just become another AI
spinoff.

------------------------------

Date: Wed 23 Nov 83 18:02:11-PST
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: response to response to challenge

Tom,

I thought you made some good points in your response to Ralph
Johnson in the AIList, but one of your claims is unsupported, important,
and quite possibly wrong. The claim I refer to is

"Expert systems can be built, debugged, and maintained more cheaply
than other complicated systems. And hence, they can be targeted at
applications for which previous technology was barely adequate."

I would be delighted if this could be shown to be true, because I
would very much like to show friends/clients in industry how to use AI to
solve their problems more cheaply.

However, there are no formal studies that compare a system built
using AI methods to one built using other methods, and no studies that
have attempted to control for other causes of differences in ease of
building, debugging, and maintaining, such as differences in
programmer experience, programming language, or the use (or not) of
structured programming techniques.

Given the lack of controlled, reproducible tests of the effectiveness
of AI methods for program development, we have fallen back on
qualitative, intuitive arguments. The same sorts of arguments have
been, and still are, made for structured programming, application
generators, fourth-generation languages, high-level languages, and
Ada. While there is some truth in the various claims about improved
programmer productivity, they have too often been overblown into The
Solution To All Our Problems. The same is happening with the claim
that AI is cheaper than any other method.

A much more reasonable statement is that AI methods may turn out
to be cheaper / faster / otherwise better than other methods if anyone ever
actually builds an effective and economically viable expert system.

My own guess is that it is easier to develop AI systems because we
have been working in a LISP programming environment that has provided
tools like interpreted code, interactive debugging/tracing/editing,
Masterscope analysis, and so on. These points were made quite nicely
in Beau Sheil's recent article in Datamation ("Power Tools for
Programmers"). None of these tools is intrinsic to AI.

Many military and industry managers who are supporting AI work are
going to be very disillusioned in a few years when AI doesn't deliver what
has been promised. Unsupported claims about the efficacy of AI aren't
going to help: they could hurt our credibility, and thereby our
funding and our ability to continue the basic research.

Mike Walker
WALKER@SUMEX-AIM.ARPA

------------------------------

Date: Fri 25 Nov 83 17:40:44-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: response to response to challenge

Mike,

While I would certainly welcome the kinds of controlled studies that
you sketched in your msg, I think my claim is correct and can be
supported. Virtually every expert system that has been built has been
targeted at tasks that were previously untouched by computing
technology. I claim that the reason for this is that the proper
programming methodology was needed before these tasks could be
addressed. I think the key parts of that methodology are (a) a
modular, explicit representation of knowledge, (b) careful separation
of this knowledge from the inference engine, and (c) an
expert-centered approach in which extensive interviews with experts
replace attempts by computer people to impose a normative,
mathematical theory on the domain.
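
To make (a) and (b) concrete, here is a minimal sketch, in modern
Python and with invented toy rules rather than anything from a real
expert system: the domain knowledge is plain data, the engine below
knows nothing about the domain, and an explanation facility falls out
of the inference trace for free.

    RULES = [
        # Illustrative toy rules only, not from any real system.
        ({"fever", "stiff_neck"}, "possible_meningitis"),
        ({"possible_meningitis"}, "recommend_lumbar_puncture"),
    ]

    def forward_chain(facts, rules):
        """Generic engine: apply rules until nothing new is concluded."""
        facts = set(facts)
        trace = []                 # kept so the system can explain itself
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append((premises, conclusion))
                    changed = True
        return facts, trace

    facts, trace = forward_chain({"fever", "stiff_neck"}, RULES)
    for premises, conclusion in trace:
        print(conclusion, "because", sorted(premises))

Because the knowledge is data rather than code, swapping in a new rule
set re-targets the engine to a new domain without touching the engine
itself.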

Since there are virtually no cases where expert systems and
"traditional" systems have been built to perform the same task, it is
difficult to support this claim. If we look at the history of
computers in medicine, however, I think it supports my claim.
Before expert systems techniques were available, many people
had attempted to build computational tools for physicians. But these
tools were often burdened with normative theories and ignored the
clinical aspects of disease
diagnosis. I blame these deficiencies on the lack of an
"expert-centered" approach. These programs were also difficult to
maintain and could not produce explanations because they did not
separate domain knowledge from the inference engine.

I did not claim anywhere in my msg that expert systems techniques are
"The Solution to All Our Problems". Certainly there are problems for
which knowledge programming techniques are superior. But there are
many more for which they are too expensive, too slow, or simply
inappropriate. It would be absurd to write an operating system in
EMYCIN, for example! The programming advances that would allow
operating systems to be written and debugged easily are still
undiscovered.

You credit fancy LISP environments for making expert systems easy to
write, debug, and maintain. I would certainly agree: The development
of good systems for symbolic computing was an essential prerequisite.
However, the level of program description and interpretation in EMYCIN
is much higher than that provided by the Interlisp system. And the
"expert-centered" approach was not developed until Ted Shortliffe's
dissertation.

You make a very good point in your last paragraph:

Many military and industry managers who are supporting AI work
are going to be very disillusioned in a few years when AI
doesn't deliver what has been promised. Unsupported claims
about the efficacy of AI aren't going to help: they could hurt
our credibility, and thereby our funding and our ability to
continue the basic research.

AI (at least in Japan) has "promised" speech understanding, language
translation, etc., all under the rubric of "knowledge-based systems".
Existing expert-systems techniques cannot solve these problems. We
need much more research to determine what things CAN be accomplished
with existing technology. And we need much more research to continue
the development of the technology. (I think these are much more
important research topics than comparative studies of expert-systems
technology vs. other programming techniques.)

But there is no point in minimizing our successes. My original
message was in response to an accusation that AI had no merit.
I chose what I thought was AI's most solid contribution: an improved
programming methodology for a certain class of problems.

--Tom

------------------------------

Date: Fri 25 Nov 83 17:52:47-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Clarifying my "AI Challenge"

Although I've written three messages on this topic already, I guess
I've never really addressed Ralph Johnson's main question:

My question, though, is whether AI is really going to change
the world any more than the rest of computer science is
already doing. Are the great promises of AI going to be
fulfilled?

My answer: I don't know. I view "the great promises" as goals, not
promises. If you are a physicalist and believe that human beings are
merely complex machines, then AI should in principle succeed.
However, I don't know if present AI approaches will turn out to be
successful. Who knows? Maybe the human brain is too complex to ever
be understood by the human brain. That would be interesting to
demonstrate!

--Tom

------------------------------

Date: 24 Nov 83 5:00:32-PST (Thu)
From: pur-ee!uiucdcs!smu!leff @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: uiucdcs.4118


There was a recent discussion of an AI project done at ONR on
determining the cause of a chemical spill in a large chemical plant
with various ducts, pipes, manholes, etc. I argued that the thing was
just an application of graph algorithms and searching techniques.

(That project was a demonstration of what an AI team could do in three
days, as part of a challenge from ONR, and quite possibly is not
representative.)
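
To make the "just graph search" reading concrete, here is a minimal
sketch in Python, with an invented toy topology: model the plant as a
directed graph of ducts and pipes, and the candidate sources for a
spill observed at a node are exactly the nodes reached by a reverse
breadth-first search.

    from collections import deque

    PLANT = {                  # edge u -> v means "u drains into v"
        "tank_a": ["junction_1"],
        "tank_b": ["junction_1"],
        "junction_1": ["manhole_3"],
        "duct_7": ["manhole_3"],
    }

    def upstream_sources(plant, observed_at):
        """All nodes from which a spill could reach observed_at."""
        reverse = {}
        for u, targets in plant.items():
            for v in targets:
                reverse.setdefault(v, []).append(u)
        seen, queue = {observed_at}, deque([observed_at])
        while queue:
            node = queue.popleft()
            for upstream in reverse.get(node, []):
                if upstream not in seen:
                    seen.add(upstream)
                    queue.append(upstream)
        return seen - {observed_at}

    print(sorted(upstream_sources(PLANT, "manhole_3")))
    # -> ['duct_7', 'junction_1', 'tank_a', 'tank_b']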

Theorem proving using resolution is something that someone with just
a normal algorithms background would not simply come up with "as an
application of normal algorithms." Using if-then rules might be cast
as a search of the type you might see in an algorithms book, although
I don't expect the average CS person with a background in algorithms
to come up with that application on his own; once it was pointed out,
though, it would be quite intuitive.
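
For readers who have not seen resolution, a single step of it is easy
to state, though, as noted above, not something one would reinvent
from an algorithms course. A minimal sketch in Python with invented
clauses; a refutation prover repeats this step until the empty clause
(a contradiction) appears.

    # A clause is a frozenset of literals; a literal is (name, polarity).

    def resolve(c1, c2):
        """Return all resolvents of two clauses (possibly none)."""
        out = []
        for (name, pol) in c1:
            if (name, not pol) in c2:
                out.append(frozenset((c1 - {(name, pol)}) |
                                     (c2 - {(name, not pol)})))
        return out

    # (P or Q) and (not-P or R) resolve to (Q or R):
    c1 = frozenset([("P", True), ("Q", True)])
    c2 = frozenset([("P", False), ("R", True)])
    print(resolve(c1, c2))   # one resolvent: {('Q', True), ('R', True)}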

One interesting note is that although most of the AI stuff is done in
LISP, a big theorem proving program discussed by Wos at a recent IEEE
meeting here was written in PASCAL. It did some very interesting things.
One point that was made is that they submitted a paper to a logic journal.
Although the journal agreed the results were worth publishing, the "computer
stuff" had to go.

Continuing this rambling aside: some people submitted results in
mechanical engineering obtained with a symbolic manipulator,
referencing the use of the program only in a footnote. The poor
referee conscientiously tried to duplicate the derivations manually.
Finally he noticed the reference and sent a letter back saying that
the use of symbolic manipulation by computer must be stated in the
covering letter.

Getting back to the original subject, I had a discussion with someone
doing research on daemons. After he explained to me what daemons were,
I came to the conclusion that they were a fancy name for what you
described as a hack. A straightforward application of theorem proving
or if-then rule techniques would be inefficient or otherwise
infeasible, so one puts in an exception to handle a certain kind of
case. What is the difference between that and an error handler for
zero divides, as opposed to putting a check everywhere one does a
division?
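
One way to picture the analogy is this minimal sketch in Python (the
Frame class and its slots are invented for illustration): a daemon is
a procedure attached to a slot that fires automatically whenever the
slot is written, just as a zero-divide handler fires without an
explicit check at every division.

    class Frame:
        def __init__(self):
            object.__setattr__(self, "daemons", {})  # slot -> procedure
            object.__setattr__(self, "slots", {})

        def attach_daemon(self, slot, proc):
            self.daemons[slot] = proc

        def __setattr__(self, slot, value):       # every write lands here
            self.slots[slot] = value
            if slot in self.daemons:               # the daemon fires as a
                self.daemons[slot](value)          # side effect of the write

    block = Frame()
    block.attach_daemon("support", lambda v: print("support is now", v))
    block.support = "table"  # prints; no explicit check at the call site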

On the subject of hacking: there was a DATAMATION article, "Real
Programmers Don't Use PASCAL," in which the author complained about
the demise of the person who would modify a program on the fly using
the switch register, etc. He remarked at the end that some of the
debugging techniques in LISP AI environments were starting to look
like the old-style techniques of assembler hackers.

------------------------------

Date: 24 Nov 83 22:29:44-PST (Thu)
From: pur-ee!notes @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: pur-ee.1148

As an aside to this discussion, I'm curious as to just what everyone
thinks of when they think of AI.

I am a student at Purdue, which has absolutely nothing in the way of
courses on what *I* consider AI. I have done a little bit of reading
on natural language processing, but other than that, I haven't had
much of anything in the way of instruction on this stuff, so maybe I'm
way off base here, but when I think of AI, I primarily think of:

1) Natural Language Processing, first and foremost. In
this, I include being able to "read" it and understand
it, along with being able to "speak" it.
2) Computers "knowing" things - i.e., stuff along the
lines of the famous "blocks world", where the "computer"
has notions of pyramids, boxes, etc.
3) Computers/programs which can pass the Turing test (I've
always thought that ELIZA sort of passes this test, at
least in the sense that lots of people actually think
the computer understood their problems).
4) Learning programs, like the tic-tac-toe programs that
remember that "that" didn't work out, only on a much
more grandiose scale (a small sketch of this sort of
rote learning appears just after this list).
5) Speech recognition and understanding (see #1).
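
As promised in item 4 above, here is a minimal sketch, in Python and
with the board representation and game loop left abstract, of that
kind of rote learning: record every position along a lost game and
steer away from moves that would reproduce a remembered loss.

    import random

    losing_positions = set()   # positions that "didn't work out"

    def choose_move(position, legal_moves, apply_move):
        """Prefer moves whose resulting position is not a known loss."""
        safe = [m for m in legal_moves
                if apply_move(position, m) not in losing_positions]
        return random.choice(safe or legal_moves)

    def learn_from_loss(positions_seen):
        """After a lost game, remember every position along the way."""
        losing_positions.update(positions_seen)

    # Toy demo with opaque string "positions":
    learn_from_loss({"bad"})
    print(choose_move("start", ["a", "b"],
                      lambda pos, m: "bad" if m == "a" else "ok"))
    # always "b": move "a" would recreate a remembered losing position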

For some reason, I don't think of pattern recognition (like analyzing
satellite data) as AI. After all, it seems to me that this stuff is
mostly just "if <cond 1> it's trees, if <cond 2> it's a road, etc.",
which doesn't really seem like "intelligence".

[If it were that easy, I'd be out of a job. -- KIL]

What do you think of when I say "Artificial Intelligence"? Note that
I'm NOT asking for a definition of AI; I don't think there is one. I
just want to know what you consider AI, and what you consider "other"
stuff.

Another question -- assuming the (very) hypothetical situation where
computers and their programs could be made to be "infinitely" intelligent,
what is your "dream program" that you'd love to see written, even though
it realistically will probably never be possible? Jokingly, I've always
said that my dream is to write a "compiler that does what I meant, not
what I said".

--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue

------------------------------

End of AIList Digest
********************
