AIList Digest Monday, 9 Nov 1987 Volume 5 : Issue 263
Today's Topics:
Comments - NP Completeness & Research Methodology & AI Languages
----------------------------------------------------------------------
Date: 5 Nov 87 14:31:12 GMT
From: eitan%WISDOM.BITNET@wiscvm.wisc.edu (Eitan Shterenbaum)
Reply-to: eitan%H@wiscvm.arpa (Eitan Shterenbaum)
Subject: Re: Success of AI
In article <> honavar@speedy.wisc.edu (A Buggy AI Program) writes:
>
>Discovering that a problem is NP-complete is usually just the
>beginning of the work on the problem. The knowledge that a problem is
>NP-complete provides valuable information on the lines of attack that
>have the greatest potential for success. We can concentrate on algorithms
>that are not guaranteed to run in polynomial time but do so most
>of the time or those that give approximate solutions in polynomial time.
>After all, the human brain does come up with approximate (reasonably good)
>solutions to a lot of the perceptual tasks although the solution may not
>always be the best possible. Knowing that a problem is NP-complete only
>tells us that the chances of finding a polynomial time solution are minimal
>(unless P=NP).
>
You are right and so am I:
a) There are no known polynomial-time algorithms that solve
NP-complete problems.
b) There are approximate and probabilistic *partial* solutions for
NP-complete problems.
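As a concrete instance of (b) -- a sketch of my own, not from the
original posting -- here is the classic greedy approximation for
minimum vertex cover, in C. Vertex cover is NP-complete, yet this
loop runs in polynomial time and is guaranteed to return a cover at
most twice the optimum size.

    #include <stdio.h>

    #define MAXV 100

    int edge[MAXV][MAXV];   /* adjacency matrix of the input graph */
    int in_cover[MAXV];     /* output: 1 if vertex is in the cover */

    void approx_vertex_cover(int nv)
    {
        int u, v;
        for (u = 0; u < nv; u++)
            for (v = u + 1; v < nv; v++)
                if (edge[u][v] && !in_cover[u] && !in_cover[v]) {
                    /* (u,v) is uncovered: take BOTH endpoints; any
                       optimal cover must contain at least one */
                    in_cover[u] = in_cover[v] = 1;
                }
    }

    int main(void)
    {
        int i;
        /* toy graph: triangle 0-1-2 plus pendant edge 2-3 */
        edge[0][1] = edge[1][0] = 1;
        edge[1][2] = edge[2][1] = 1;
        edge[2][0] = edge[0][2] = 1;
        edge[2][3] = edge[3][2] = 1;
        approx_vertex_cover(4);
        for (i = 0; i < 4; i++)
            if (in_cover[i]) printf("vertex %d in cover\n", i);
        return 0;
    }

The edges it picks form a maximal matching, and since any cover needs
at least one endpoint of every matched edge, taking both endpoints at
most doubles the optimum.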
As to the claim "the brain does it, so why shouldn't the computer?" -
it seems to me that you forget that the brain is built rather
differently from a von Neumann machine ... It's a distributed
environment, one not built on boolean algebra. I can hardly believe
that, even with partial solutions to all the complicated NP-complete
problems that emulating a brain raises, one could present a working
program. If you were able to emulate a mouse's brain, you'd become a
legend in your own lifetime!
Anyway, no one can emulate a system for which there are no
specifications. If the neurobiologists were to provide them, then
you'd have something to start with.
And last - computers aren't meta-capable machines; they have
constraints. Not every problem has an answer, and not every answer
makes sense. NP-complete problems are the best example.
Eitan Shterenbaum
------------------------------
Date: Tue, 03 Nov 87 07:57:49 PST
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Reply-to: smoliar@vaxa.isi.edu.UUCP (Stephen Smoliar)
Subject: Re: Gilding the Lemon
In article <12346288066.15.LAWS@KL.SRI.Com> Laws@KL.SRI.COM (Ken Laws) writes:
>
>Progress also comes from applications -- very seldom from theory.
A very good point, indeed: Bill Swartout and I were recently discussing
the issue of the respective contributions of engineering and science.
There is a "classical" view that science is responsible for those
fundamental principles without which engineering could "do its thing."
However, whence come those principles? If we look at history, we see
that, in most fields, engineers are "doing their thing" long before
science has established those principles. Of course things don't always
go as smoothly as one would like. This pre-scientific stage of engineering
often involves sometimes-it-works-sometimes-it-doesn't experiences; but
the engineering practices are still useful. Often a major contribution
of the discovery of the underlying scientific principles is a better
understanding of WHEN "it doesn't work" and WHY that is so. Then
engineering takes over again to determine what is to be done about
those situations in which things don't work. At the risk of being
called on too broad a generality, I would like to posit that science
is concerned with the explanation of observed phenomena, while engineering
is concerned with achieving phenomena with certain desired properties.
From this point of view, engineering provides the very substance from
which scientific thought feeds.
I fear that what is lacking in the AI community is a respect for the
distinction between these two approaches. A student is likely to get
a taste of both points of view in his education, but that does not
necessarily mean that he will develop an appreciation for the merits
of each or the ways in which they relate to each other. As a consequence,
he may very well become very quickly channeled along a narrow path
involving the synthesis of some new artifact. If he has any form
of success, then he assumes that all his thesis requires is that he
write up his results.
I hope there is some agreement that theses which arise from this process
are often "underwhelming" (to say the least). These are usually rather
hefty tomes which devote significant page space to the twists and turns
in the path that leads to the student's achievement. There is also usually
a rather heavy chapter which surveys the literature, so that the student
can demonstrate the front along which his work has advanced. However,
such retrospective views tend to concentrate more on the artifacts of
the past than on the principles behind those artifacts.
Is it too much to ask that doctoral research in AI combine the elements
of both engineering and science? I have nothing against that intensely
focused activity which leads up to a new artifact. I just worry that
students tend to think the work is done once the artifact is achieved.
However, this is the completion of an engineering phase. Frustrating
as it may sound, I do not think the doctoral student is done yet. He
should now embark upon some fundamental portion of a scientific phase.
Now that he has something that works, he should investigate WHY it
works; and THIS is where the literature search should have its true
value. Given a set of hypothesized principles regarding the behavior
of his own artifact, how applicable are those principles to those
artifacts which have gone before? Once such an investigation has been
pursued, the student can write a thesis which provides a balanced diet
of both engineering and science.
------------------------------
Date: 3 Nov 87 18:31:13 GMT
From: gary%roland@sdcsvax.ucsd.edu (Gary Cottrell)
Reply-to: roland!gary@sdcsvax.ucsd.edu (Gary Cottrell)
Subject: Re: Gilding the Lemon
Note that the article Tom was referring to (David Chapman's "Planning
for Conjunctive Goals", AIJ 32 No. 3) is based on a MASTER's Thesis:
Even if Ken objects to Ph.D. theses being rational reconstructions, he
may be less inclined to object to Master's theses in this vein. Of course,
this is probably equivalent to a PhD thesis at n-k other places, where
k is some small integer.
gary cottrell
cse dept
ucsd
------------------------------
Date: 5 Nov 87 17:13:39 GMT
From: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Reply-to: Gilbert Cockton <mcvax!hci.hw.ac.uk!gilbert@uunet.uu.net>
Subject: Re: Gilding the Lemon
In article <12346288066.15.LAWS@KL.SRI.Com> Laws@KL.SRI.COM (Ken Laws) writes:
>......, but there has been more payoff from GPSS and SIMSCRIPT (and
>SPICE and other simulation systems)
e.g.?
>Most Ph.D. projects have the same flavor. A student ...
>... publishes the interesting behaviors he was able to generate
e.g.?
> ... we must build hand-crank phonographs before inventing information
>theory and we must study the properties of atoms before debating
>quarks and strings.
Inadmissible until it can be established that such relationships exist
in the study of intelligence - there may be only information theory
and quarks, in which case you have to head right for them now.
Anything else is liable to be a social construct of limited generality.
Most work today in fact suggests that EVERYTHING is going to be a social
construct, even the quarks. Analogies with the physical world do not
necessarily hold for the mental world, any more than animism does for the
physical world.
>An advisor who advocates duplicating prior work is cutting his
>students' chances of fame and fortune from the discovery of the
>one true path. .... Why should the student
>work (be they theoretical or practical problems) when he could
>attach his name to an entirely new approach?
The aim of PhD studies is to advance knowledge, not individuals.
This amounts to gross self-indulgence where I come from. I recognise
that most people in AI come from somewhere else though :-)
Perhaps there are no new approaches, perhaps the set of all imaginable
metaphysics, epistemology and ontology is closed. In the History of
Ideas, one rarely sees anything with no similar antecedents. More
problematic for AI, the real shifts of thinkers like Machiavelli, Bacon,
Hume, Marx and Freud did not involve PhD studies centred on computer
programming. I really do think that the *ABSENCE* of a computer is more
likely to produce new approaches, as the computational paradigm
severely limits what you can do, just as the experimental paradigm of
psychology puts many areas of study beyond the pale.
--
Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
JANET: gilbert@uk.ac.hw.hci ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
UUCP: ..{backbone}!mcvax!ukc!hwcs!hci!gilbert
------------------------------
Date: Fri, 6 Nov 87 15:32:30 WET
From: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Reply-to: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Subject: FORTRAN
After the recent discussion on AIList I feel compelled to admit that I wrote
the entry on FORTRAN for the Catalogue of AI techniques, and that it was
originally intended as a joke.
However, after subsequent exposure to Common Lisp, I'm not so sure....
Martin Merry
HP Labs Bristol Research Centre
------------------------------
Date: 05 Nov 87 12:03:55 EST (Thu)
From: sas@bfly-vax.bbn.com
Subject: FORTRAN for list processing
Check out Douglas K. Smith's article: An Introduction to the
List-Processing Language SLIP (anthologized in Rosen's 1960's classic
Programming Systems and Languages).
SLIP is a list processing language system distinguished by the
symmetry of its lists; each element is linked to both its
predecessor and its successor. It differs from most list
processing languages in that it does not represent an
independent language, but is intended to be embedded in a
general purpose [sic] language such as FORTRAN. Thus the
flexibility of the latter is combined with the specific
facility for manipulating lists. This paper will describe
SLIP as embedded in FORTRAN IV.
SLIP was developed by Professor Joseph Weizenbaum of MIT.
His original paper [1], published in 1963 while he was at
General Electric, presents a complete documentation of the
system, including a FORTRAN listing and a statement of the
underlying philosophy. The system has been implemented at
several installations, finding application in the symbolic
manipulation of algebraic expressions [2], [3], [4], and in
other areas [5].
[1] Weizenbaum, J.: Symmetric List Processor, Comm. ACM, p. 524,
Sept. 1963.
[5] Weizenbaum, J.: ELIZA - A Computer Program for the Study of Natural
Language Communication Between Man and Machine, Comm. ACM,
p. 36, Jan. 1966.
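A symmetric list is just what we would now call a doubly-linked list.
A minimal sketch in C of the idea (names are my own, not SLIP's):

    #include <stdlib.h>

    struct cell {
        struct cell *lnk_left;   /* predecessor */
        struct cell *lnk_right;  /* successor   */
        int          datum;
    };

    /* insert a new cell carrying d immediately to the right of c */
    struct cell *insert_right(struct cell *c, int d)
    {
        struct cell *n = malloc(sizeof *n);
        n->datum     = d;
        n->lnk_left  = c;
        n->lnk_right = c->lnk_right;
        if (c->lnk_right) c->lnk_right->lnk_left = n;
        c->lnk_right = n;
        return n;
    }

    /* unlink c in O(1): both neighbours are at hand, no search */
    void remove_cell(struct cell *c)
    {
        if (c->lnk_left)  c->lnk_left->lnk_right = c->lnk_right;
        if (c->lnk_right) c->lnk_right->lnk_left = c->lnk_left;
        free(c);
    }

The symmetry is the whole point: a list can be walked in either
direction, and a cell can be deleted without hunting down its
predecessor.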
Gee - I've even heard of ELIZA!
Seth
------------------------------
Date: 5 Nov 87 09:46:20 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: In Defense of FORTRAN
In any discussion where C and Fortran are defended as languages
for doing AI, if only they provided the constructs that Lisp and
Prolog already provide, I am reminded of the old Yiddish saying
(here poorly transliterated) ``Wenn mein Bubba zul huben
Bietzem, vol tzi gevain mein Zayda'' -- literally, ``If my
grandmother had testicles, she would have been my grandfather.''
Or, loosely, ``IF is a big word.''
Date: Mon 2 Nov 87 14:29:09-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
* * *
The problem with AI languages is neither their capability nor
their efficiency, but the way that they limit thought. * * *
Exactly so. Using Fortran or any language where you have to
spend mental energy thinking about the issues that Lisp and
Prolog already handle ``cuts your chances of fame and fortune
from the discovery of the one true path,'' to quote an earlier
contributor. Fortran's a fine language for writing programs
where the problem is well understood, but it's just a lousy
language for tackling new problems in. This doesn't just go for
academic research, either; same goes for doing applications that
have never been tackled before.
------------------------------
Date: Thu 5 Nov 87 08:55:59-PST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Re: In Defense of FORTRAN
Good points.
I happen to program in C and have built a software environment that
does provide many of the capabilities of LISP. It has taken me many
years, and I would not recommend that others follow this path.
My real point, though, was that LISP and PROLOG are also at too low
a level. The Lisp Machine environment, with its 10,000 predefined
functions, is a big factor in the productivity of LISP hackers. If
similar (or much better!) libraries were available to FORTRAN hackers,
similar productivity would be observed. LISP does permit many clever
programming techniques, as documented in Abelson and Sussman's book,
but a great deal can be done with the simple conditionals, loops,
and other control structures of a language like FORTRAN.
The AI community is spending too much time reprogramming graph search
algorithms, connected-component extraction, cluster analysis, and
hundreds of other solved problems. Automated programming isn't coming
to our rescue. As Fred Brooks has pointed out, algorithm development
is one of the most intricate, convoluted activities ever devised;
software development tools are not going to make the complexities
vanish. New parallel architectures will tempt us toward brute-force
solutions, ultimately leaving us without solutions. It's time we
recognize that sharable, documented subroutine libraries are essential
if AI programs are ever to be developed for real-world problems.
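To make the point concrete, here is a sketch (my own illustration,
with invented names) of what one entry in such a library might look
like in C: the search routine is written and documented once, and
each new problem is plugged in through a pair of procedures.

    #include <stdio.h>

    #define MAXN 1000

    typedef struct {
        int (*neighbors)(int node, int *out); /* fill out[], return count */
        int (*is_goal)(int node);
    } problem_t;

    /* breadth-first search over nodes numbered 0..MAXN-1;
       returns the first goal node found, or -1 */
    int bfs(const problem_t *p, int start)
    {
        int queue[MAXN], seen[MAXN] = {0}, out[64];
        int head = 0, tail = 0, i, n;

        queue[tail++] = start;
        seen[start] = 1;
        while (head < tail) {
            int node = queue[head++];
            if (p->is_goal(node)) return node;
            n = p->neighbors(node, out);
            for (i = 0; i < n; i++)
                if (!seen[out[i]] && tail < MAXN) {
                    seen[out[i]] = 1;
                    queue[tail++] = out[i];
                }
        }
        return -1;
    }

    /* toy use: nodes 0..9 in a line, looking for node 7 */
    static int nbrs(int v, int *out) { out[0] = v + 1; return v < 9; }
    static int goal(int v) { return v == 7; }

    int main(void)
    {
        problem_t line = { nbrs, goal };
        printf("found: %d\n", bfs(&line, 0));
        return 0;
    }

The same bfs would serve every project that needs it; only the two
problem-specific procedures change from one use to the next.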
Such subroutines, which I envision in an object-oriented style, should
be the language of AI. Learned papers would discuss improvements to the
primitive routines or sophisticated ways of coordinating them, seldom
both together -- just as an earlier generation separated A* and
garbage collection. This would make it easier for others to repeat
important work on other computer systems, aiding scientific verification
and tech transfer as well as facilitating creativity.
-- Ken Laws
[This applies particularly in my own field of computer vision, where many
graduate students and engineers spend years reinventing I/O code, display
drivers, and simple image transformations. Trivial tasks such as mapping
buffers into display windows cease to be trivial if attempted with any
pretense to generality. Code is not transportable and even images are
seldom shared. The situation may not be so bad in mainstream AI research,
although I see evidence that it is.]
------------------------------
End of AIList Digest
********************