
AIList Digest            Friday, 30 Oct 1987      Volume 5 : Issue 254 

Today's Topics:
Comments - Methodology & The Success of AI

----------------------------------------------------------------------

Date: 26 Oct 87 19:26:01 GMT
From: rosevax!rose3!starfire!merlyn@uunet.uu.net (Brian Westley)
Subject: Re: The Success of AI

In one article...
> But AI research could at least be disciplined to study the existing work
> on the phenomena they seek to study. Exploratory, anarchic,
> uninformed, self-indulgent research at public expense could be stopped.

and, in another article...
>..Thus, I am not avoiding hard work; I am avoiding
>*fruitless* work...
> --
> Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN

Tell me, how do you know WHICH AI methods WILL BE fruitless? You certainly
must know, for you to call it anarchic, uninformed, and self-indulgent (but why
'exploratory' is used as a put-down, I'll never know - I guess Gilbert already
knows how to build thinking machines, and just won't tell us).

Research is like advertising - most of the money spent is fruitless, but
you won't KNOW that until after you've TRIED it. (Of course it isn't
entirely wasted; you now know what doesn't work).

Fortunately, you have not convinced me, nor many other people, that your
view is to be held paramount and that all other avenues of work are doomed
to failure.

By the way, I am not interested in duplicating or otherwise developing
models of how humans think; I am interested in building machines that
think. You may as well tell a submarine designer how difficult it is
to build artificial gills - it's irrelevant.

---
Merlyn LeRoy
"Anything a computer can do is immediately removed from those activities
that require thinking, such as calculations, chess, and medical diagnoses."


------------------------------

Date: 28 Oct 87 15:06:21 GMT
From: ig!uwmcsd1!uwm-cs!litow@jade.Berkeley.EDU (Dr. B. Litow)
Subject: AI

Recently postings have focused on the topic: 'AI - success or failure'. Some
postings have been concerned with epistemological or metaphysical matters.
Other postings have taken the view that AI is a vast collection of design
problems for which much of the metaphysical worry is irrelevant. Based
upon its history and current state it seems to me that AI is an area of
applied computer science largely aimed at design problems. I think that
AI is an unfortunate moniker because AI work is basically fuzzy programming
(more accurately the design of systems supporting fuzzier and fuzzier
programming) where the term 'fuzzy' is not being used in a pejorative sense.

All of the automation issues in AI work are support issues for really fuzzy
programming i.e. where humans can extend the interface with automata so
that human/automata interaction becomes increasingly complex and
'undisciplined'. Thus in a large sense AI is the frontier part of software
science. It could be claimed that at some stage of extension the interface
becomes so complex (by human standards at the time) that cognition can be
ascribed to the systems. Personally I doubt this will happen. On the other
hand the free use of play-like interfaces must have unforeseeable and
gigantic consequences for humans. This is where I see the importance of AI.

I distinguish between cognitive studies and AI. The metaphysics belongs to
the former, not the latter.

------------------------------

Date: 28 Oct 87 18:04:45 GMT
From: tgd@orstcs.cs.orst.edu (Tom Dietterich)
Subject: Re: Lenat's AM program


David West (dwt@zippy.eecs.umich.edu) writes:

Some possible contributing reasons for this sort of difficulty in AI:

1) The practitioners of AI routinely lack access at the nuts-and-bolts level
to the products of others' work. (At a talk he gave here three years ago,
Lenat said that he was preparing a distribution version of AM. Has
anyone heard whether it is available? I haven't.) Perhaps widespread
availability and use of Common Lisp will change this. Perhaps not.

In the biological sciences, publication of an article reporting a new
clone obligates the author to provide that clone to other researchers
for non-commercial purposes. I think we need a similar policy in
computer science. Publication of a description of a system should
obligate the author to provide listings of the system (a running
system is probably too much to ask for) to other researchers on a
non-disclosure basis.

2) The supporting institutions (and most practitioners) have little
patience for anything as unexciting and 'unproductive' as slow,
painstaking post-mortems.
3) We still have no fruitful paradigm for intelligence and discovery.
4) We are still, for the most part, too insecure to discuss difficulties
and failures in ways that enable others as well as ourselves to learn
from them. (See an article on the front page of the NYTimes book review
two or three weeks ago for a review of a book claiming that twentieth-
century science writing in general is fundamentally misleading in this
respect.)

I disagree with these other points. I think the cause of the problem
is lack of methodological training for AI and CS researchers. Anyone
could have reimplemented an approximation of AM based on the published
papers anytime in the past decade. I think the fact that people are
now beginning to do this is a sign that AI is becoming
methodologically healthier. A good example is the paper Planning for
Conjunctive Goals by D. Chapman in Artificial Intelligence, Vol 32,
No. 3, which provides a critical review and rational reconstruction of
the NOAH planning system. I encourage all students who are looking
for dissertation projects to consider doing work of this kind.

--Tom

------------------------------

Date: Thu 29 Oct 87 00:25:55-PST
From: Ken Laws <Laws@KL.SRI.Com>
Subject: Gilding the Lemon

Tom Dietterich suggests that AI students should consider doing
critical reviews and rational reconstructions of previous AI
systems. [There, isn't a paraphrase better than a lengthy
quotation?] I wouldn't discourage such activities for those
who relish them, but I disagree that this is the best way for
AI to proceed AT THE PRESENT TIME.

Rigorous critical analysis is necessary in a mature field where
deep understanding is needed to avoid the false paths explored
by previous researchers. I don't claim that shallow understanding
is preferable in AI, but I do claim that it is adequate.

AI should not be compared to current Biology or Psychology, but
to the heyday of mechanical invention epitomized by Edison. We
do need the cognitive scientists and logicians, but progress in
AI is driven by the hackers and the graduate students who "don't know
any better" than to attempt the unreasonable.

Progress also comes from applications -- very seldom from theory.
The "neats" have been worrying for years (centuries?) about temporal
logics, but there has been more payoff from GPSS and SIMSCRIPT (and
SPICE and other simulation systems) than from all the debates over
consistent point and interval representations. The applied systems
are ultimately limited by their ontologies, but they are useful up to
a point. A distant point.
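The simulation payoff is easy to see in miniature. A minimal sketch of the
kind of computation such systems automate (a single-server FIFO queue; the
function name and interface here are illustrative, not taken from GPSS or
SIMSCRIPT): it needs only clock arithmetic, with no point or interval
temporal logic in sight.

```python
def mm1_demo(arrivals, service_time):
    """Single-server FIFO queue: customers arriving at the given times
    are served one at a time, each taking service_time.  Returns the
    departure time of each customer, in arrival order."""
    free_at = 0.0          # time at which the server is next free
    departures = []
    for t in sorted(arrivals):
        # service starts when both customer and server are ready
        start = max(t, free_at)
        free_at = start + service_time
        departures.append(free_at)
    return departures

# Three arrivals, 2-unit service: the second and third must wait.
print(mm1_demo([0.0, 1.0, 2.0], 2.0))  # → [2.0, 4.0, 6.0]
```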

Most Ph.D. projects have the same flavor. A student studies the
latest AI proceedings to get a nifty idea, tries to solve all the
world's problems from his new viewpoint, and ultimately runs into
limitations. He publishes the interesting behaviors he was able
to generate and then goes on the lecture circuit looking for his
next employment. The published thesis illuminates a new corner of
mankind's search space, provided that the thesis advisor properly
steered the student away from previously explored territory.

An advisor who advocates duplicating prior work is cutting his
students' chances of fame and fortune from the discovery of the
one true path. It is always true that the published works can
be improved upon, but the original developer has already gotten
80% of the benefit with 20% of the work. Why should the student
butt his head against the same problems that stopped the original
work (be they theoretical or practical problems) when he could
attach his name to an entirely new approach?

I am not suggesting that "artificial intelligence" will ever be
achieved through one graduate student project or by any amount
of hacking. We do need scientific rigor. I am suggesting that we
must build hand-crank phonographs before inventing information
theory and we must study the properties of atoms before debating
quarks and strings. Only when we have exploited, or reached an impasse
on, all of the promising approaches will there be a high probability
that critical review of already-explored research will advance the
field faster than trying something new.


[Disclaimer: The views expressed herein do not apply to my own
field of computer vision, where I'm highly suspicious of any youngster
trying to solve all our problems by ignoring the accumulated knowledge
of the last twenty years. My own tendency is toward critical review
and selective integration of existing techniques. But then, I'm not
looking for a hot new Ph.D. topic.]

-- Ken Laws

------------------------------

Date: Wed, 28 Oct 87 08:36:34 -0200
From: Eitan Shterenbaum <eitan%wisdom.bitnet@jade.berkeley.edu>
Subject: Success of AI


Has it ever come into your mind that simulating/emulating the human brain is
an NP problem? (Why? Think!!!) Unless some smartass comes out with a proof
that NP=P, yar can forget de whole damn thing ...

Eitan Shterenbaum

(*
As far as I know one can't solve NP problems even with a super-duper
hardware, so building such machine is pointless (Unless we are living on
such machine ...) !
*)

Eitan
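[The intuition behind such NP claims is the exponential growth of
brute-force search. A sketch using subset-sum, chosen here merely as a
stock NP-complete problem and not because the source ties it to brain
simulation:]

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Try every subset of nums: 2**len(nums) candidates in the worst
    case.  Doubling the input length squares the search space, which
    is why exhaustive search becomes hopeless at scale.  Returns
    (found, number_of_subsets_checked)."""
    checked = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            checked += 1
            if sum(combo) == target:
                return True, checked
    return False, checked

# No subset of these six numbers sums to 35, so all 2**6 = 64 subsets
# are examined before giving up.
found, tried = subset_sum_bruteforce([3, 9, 8, 4, 5, 7], 35)
print(found, tried)  # → False 64
```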

------------------------------

Date: 29 Oct 87 13:15:51 GMT
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Reply-to: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: THE MIND


In article <8710120559.AA17517@ucbvax.Berkeley.EDU>
UUCJEFF@ECNCDC.BITNET writes:
>I read some of the MIND theories espoused in the Oct 2 list, and am
>frankly disappointed. All those debates are based on the Science vs
>Mysticism debates that were going on 10 years ago when I was an undergrad.

Isn't it a shame that so many people in AI are so ignorant of the
substance of these debates?

>5) AI should concern itself with solving problems, discovering new ways to
>solve and conceptialize problems. It is not as glamorous as making
>artificial souls, but more practical and fruitful.

Fortunately, this highly sensible view is attracting more support, and,
with luck, it should establish itself as the raison d'etre of AI
research. A change of name would help (viz. the demise of cybernetics),
despite the view of many old hands (e.g. Simon) that, while they wouldn't
have chosen the name, we are stuck with it now. I can't see how any
sensible person would want to stick with a term with such distasteful
connotations.

However, this orientation for post-AI advanced computer applications
research needs extension. It is not enough to develop new computerised
support for new problem solving techniques. Research is also needed
into the comprehensibility, ease of learning and validity of these
techniques. Determinants of their acceptability in real organisational
settings are also a vital research topic. Is research in medical expert
systems, for example, worth public funding when it seems that NO
medical expert system is being used in a real clinical setting? What
sorts of systems would be acceptable? Similarly, the theorem prover
based proof editors under development for software engineering seem to
require knowledge and skills which few practising software
professionals will have time to develop, so one can't really see proof
editors developing into real work tools until a major shift in their
underlying models occurs.

Such a user-oriented change of direction is a major problem for AI
researchers, as few of them seem to have any real experience of
successfully implementing a working system and installing it in a real
organisational setting, and then maintaining it. DEC's XCON is one of
the few examples. How much is PROSPECTOR used these days?
--
Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
JANET: gilbert@uk.ac.hw.hci ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
UUCP: ..{backbone}!mcvax!ukc!hwcs!hci!gilbert

------------------------------

Date: 29 Oct 87 00:23:00 GMT
From: uxc.cso.uiuc.edu!osiris.cso.uiuc.edu!goldfain@a.cs.uiuc.edu
Subject: Re: The Success of AI (continued, a


Who says that ping-pong, or table tennis isn't a sport? Ever been to China?

------------------------------

End of AIList Digest
********************
