AIList Digest            Friday, 28 Feb 1986       Volume 4 : Issue 41 

Today's Topics:
Query - CERES and CASCADE Projects,
Literature - Prolog Books & Lisp & Dreyfus on Skill Acquisition,
Philosophy - The Dreyfus Controversy

----------------------------------------------------------------------

Date: 26 Feb 86 20:07:41 GMT
From: hplabs!turtlevax!weitek!kens@ucbvax.berkeley.edu (Ken Stanley)
Subject: Request for info on CERES and/or the CASCADE project

Can anyone tell me anything about CERES, POLO, LASCAR or the CASCADE project?
Is the CASCADE project state of the art or just an effort to catch up
to work in the U.S.?
I know nothing about any of the above. Hence, simple responses and
references would be the most helpful.
Ken Stanley weitek!kens

------------------------------

Date: 28-Feb-1986 0843
From: kevin%logic.DEC@decwrl.DEC.COM (Kevin LaRue -- You can hack
anything you want with TECO and DDT)
Subject: Re: Prolog Books


``Introduction to Logic Programming''
Christopher John Hogger
Academic Press, Inc.
1984
ISBN 0-12-352092-4

------------------------------

Date: 28-Feb-1986 1129
From: kevin%logic.DEC@decwrl.DEC.COM (Kevin LaRue -- You can hack
anything you want with TECO and DDT)
Subject: Re: Lisp in the classroom.


Lisp is the language used in the undergraduate introductory course of the CS
curriculum at Syracuse University. In the past there wasn't a textbook for the
course; I believe that they are using Winston and Horn's ``Lisp'' now.

------------------------------

Date: Thu 27 Feb 86 23:34:38-PST
From: Sang K. Cha <ChaSK@SU-SUSHI.ARPA>
Subject: Dreyfus on Skill Acquisition

[Forwarded from the Stanford bboard by Laws@SRI-AI.]


[...]

Actually, the five-stage developmental model of skill acquisition that
Hubert Dreyfus stressed in his talk abstract appears in the following
paper by Stuart Dreyfus:

"Formal Models vs. Human Situational Understanding: Inherent Limitations
on the Modelling of Business Expertise,"
Office Technology and People, 1 (1982), 133-165,
by Stuart Dreyfus, Dept. of IE & OR, UC Berkeley


-- Sang

------------------------------

Date: 18 Feb 86 23:45:53 GMT
From: decwrl!glacier!kestrel!ladkin@ucbvax.berkeley.edu (Peter Ladkin)
Subject: Re: Re: "self-styled philosophers"

(ladkin on Dreyfus)
> > He is also a professional philosopher, holding a chair at
> > U.C. Berkeley. His criticisms of AI claims are thoroughly thought
> > through, with a rigor that a potential critic of his views would
> > do well to emulate. He has done AI great service by forcing
> > practitioners to be more self-critical. AAAI should award him
> > distinguished membership!

(benjamin)
> Baloney.
> [comments on Dreyfus on chess .....]
> It seems arrogant
> for him to reach conclusions about fields in which he is not
> accomplished. This applies to both chess and AI.

Before you cry *baloney*, how about addressing the issue?
As I pointed out, but you deleted, his major argument is that
there are some areas of human experience related to intelligence
which do not appear amenable to machine mimicry.
Do you (or anyone) think that this statement is obviously false?
(Negate it and see if that sounds right).
People reach (good and bad) conclusions about fields in which
they are not accomplished all the time. That's how AI got started,
and that's how computers got invented.
Why is it that people get so heated about criticism of AI that
they stoop to name-calling rather than addressing the points made?
(That question has probably also been asked by Dreyfus).

Peter Ladkin

------------------------------

Date: 20 Feb 86 04:27:50 GMT
From: tektronix!uw-beaver!uw-june!jon@ucbvax.berkeley.edu (Jon Jacky)
Subject: Re: Technology Review article

> (Technology Review cover says...)
> After 25 years Artificial Intelligence has failed to live up to its promise
> and there is no evidence that it ever will.

Most of the comment in this newsgroup has addressed the second clause in
this provocative statement. I think the first clause is more important, and
it is indisputable. The value of the Dreyfus brothers' article is to
remind readers that when AI advocates make specific predictions, they are
often over-optimistic. Personally, I do not find all of the Dreyfuses'
speculations convincing. So what? AI work does not get funded
to settle philosophical arguments, but because the funders hope to derive
specific benefits. In particular, the DARPA Strategic Computing Program,
the largest source of funds for AI work in the country,
asserts that specific technologies (rule based expert systems, parallel
processing) will deliver specific results (unmanned vehicles that can
drive at 40 km/hr through battlefields, natural language systems with
10,000 word vocabularies) at a specific time (the early 1990's). One
lesson of the article is that people should regard such claims
skeptically.
Jonathan Jacky, ...!ssc-vax!uw-beaver!uw-june!jon or jon@uw-june
University of Washington

------------------------------

Date: 20 Feb 86 19:35:05 GMT
From: ihnp4!ihwpt!olaf@ucbvax.berkeley.edu (olaf henjum)
Subject: Re: "self-styled philosophers"

Is there any other kind of "lover of wisdom" than a "self-styled" one?
-- Olaf Henjum (ihnp4!ihwpt!olaf)
(and, of course, my opinions are strictly my own ...)

------------------------------

Date: 20 Feb 86 18:26:12 GMT
From: decvax!genrad!panda!talcott!harvard!bbnccv!bbncc5!mfidelman@ucbvax.berkeley.edu (Miles Fidelman)
Subject: Re: Technology Review article

About 14 years ago Hubert Dreyfus wrote a paper titled "Why Computers Can't
Play Chess". Immediately thereafter, someone at the MIT AI Lab challenged
Dreyfus to play one of the chess programs, which trounced him royally. The
result was an MIT AI Lab memo titled "The Artificial Intelligence of Hubert
Dreyfus, or Why Dreyfus Can't Play Chess".
The document was hilarious. If anyone still has a copy, I'd like to arrange
a xerox of it.
Miles Fidelman (mfidelman@bbncc5.arpa)

------------------------------

Date: 20 Feb 86 18:28:27 GMT
From: amdcad!amdimage!prls!philabs!dpb@ucbvax.berkeley.edu (Paul Benjamin)
Subject: Re: Re: Re: "self-styled philosophers"

> (ladkin on Dreyfus)
> > > He is also a professional philosopher, holding a chair at
> > > U.C. Berkeley. His criticisms of AI claims are thoroughly thought
> > > through, with a rigor that a potential critic of his views would
> > > do well to emulate. He has done AI great service by forcing
> > > practitioners to be more self-critical. AAAI should award him
> > > distinguished membership!
> (benjamin)
> > Baloney.
> > [comments on Dreyfus on chess .....]
> > It seems arrogant
> > for him to reach conclusions about fields in which he is not
> > accomplished. This applies to both chess and AI.
>
> Before you cry *baloney*, how about addressing the issue?
> As I pointed out, but you deleted, his major argument is that
> there are some areas of human experience related to intelligence
> which do not appear amenable to machine mimicry.
> Do you (or anyone) think that this statement is obviously false?
> (Negate it and see if that sounds right).
>
> Why is it that people get so heated about criticism of AI that
> they stoop to name-calling rather than addressing the points made?
> (That question has probably also been asked by Dreyfus).
>
> Peter Ladkin

I DID address the issue. I deleted your reference because reproducing
entire postings leads to extremely long articles. But I am addressing
his argument about areas of human experience which supposedly will
never be amenable to machine implementation. My whole point, which I
thought was rather obvious, is that he conjures up examples which are
poorly thought out, and experiments which are poorly executed. Thus,
his entire analysis is worthless to any investigators in the field.
I would welcome any analysis which would point out areas which I should
not waste time investigating. I receive this sort of input occasionally,
in the form of "it is better to investigate this than that, for this reason"
and this is very helpful. I certainly don't love wasting time looking at
dead ends. If Dreyfus' work were carefully constructed, it could be very
valuable. But all I see when I read his stuff is vague hypotheses, backed
up with bad research.
So I am not calling him names. I am characterizing his research, and
therefore AM addressing the issue.
Paul Benjamin

------------------------------

Date: Sun, 23 Feb 86 18:21:59 PST
From: albert@kim.berkeley.edu (Anthony Albert)
Reply-to: albert@kim.berkeley.edu (Anthony Albert)
Subject: Re: Technology Review article

In article <8602110348.2860@redwood.UUCP>, ucdavis!lll-crg!amdcad!amd!hplabs!fortune!redwood!rpw3@ucbvax.berkeley.edu (Rob Warnock) writes:
>
>
>+
>| The [Technology Review] article was written by the Dreyfuss brothers, who
>| claim... that people do not learn to ride a bike by being told how to do
>| it, but by a trial and error method that isn't represented symbolically.
>+
>
>Hmmm... Something for these guys to look at is Seymour Papert's work in
>teaching such skills as bicycle riding, juggling, etc. by *verbal* and
>*written* means.
>That's not to say that some trial-and-error practice is not needed, but that
>there is a lot more that can be done analytically than is commonly assumed.

The Dreyfuses (?) understand that learning can occur analytically and
consciously at first. But in the stages from beginner to expert, the actions
become less and less conscious. I imagine Mr. Warnock's juggling (mentioned
further on in the article) followed the same path; when practicing a skill,
one doesn't think about it constantly; one lets it blend into the background.

Anthony Albert
..!ucbvax!kim!albert
albert@kim.berkeley.edu

------------------------------

End of AIList Digest
********************
