AIList Digest Volume 5 Issue 176

AIList Digest            Sunday, 12 Jul 1987      Volume 5 : Issue 176 

Today's Topics:
Binding - Interactive Fiction List,
Philosophy of Science - Is AI a Science

----------------------------------------------------------------------

Date: 8 Jul 87 20:16:34 GMT
From: engst@tcgould.tn.cornell.edu (Adam C. Engst)
Subject: Re: Interactive fiction


For those of you who cannot (or don't want to) read the Usenet or Bitnet
discussion groups on interactive fiction, we are back in mailing list form.
If you want to send mail to the list, the address is:
>>>> gamemasters@parcvax.xerox.com <<<<
Just include "Interactive fiction" on the Subject line so the moderator can
separate it out from the adventure game discussion messages. If you want to
add yourself to the mailing list (so you get digests every day or so), send a
request to:
>>>> gamemasters-request@parcvax.xerox.com <<<<
and ask to be added. You can also ask to be deleted from the list, ask for
archived mail, or report a mailer failure at the request address. I will be
sending the messages that come from Bitnet and Usenet as well, so everyone
will have access to all the messages. If anyone has any questions, just
email me at either of the addresses below and I'll try to help. Thanks a
lot for the discussion up to now, and I hope that it will improve even more
with the increased audience.

Adam C. Engst

engst@tcgould.tn.cornell.edu
pv9y@cornella.bitnet

------------------------------

Date: 9 Jul 87 14:37 PDT
From: Tony Wilkie /DAC/ <TLW.MDC@OFFICE-1.ARPA>
Subject: Is AI a Science? A Pragmatic Test Offered!

I'm inclined to believe that Don Norman is right, and that AI is not a science;
which is okay, there being a number of perfectly good, self-respecting fields
of study out there that are not sciences.

Still, it's likely that sensitivities have been offended and a defense is to
be anticipated. In lieu of a more respectable and formal argument in defense
of AI being a science, I am prepared to steal from William James and proffer a
pragmatic test. The rationale is as follows:

1. Grant moneys are issued by various public and private agencies for the
support of research in both sciences and non-sciences.

2. Issuing agencies are generally authorized to finance projects falling
within their scope of study only.

3. These agencies have some criteria for determining what appropriate
projects are.

THEREFORE:

4. Any projects funded by an agency as a science (e.g. NSF) are science
projects reflecting scientific work (except for method or instrumentation
projects).

The challenge, then, is to find any researcher working on an AI project funded
by a science-supportive agency.

If only it were all this easy...

Tony Wilkie <TLW.MDC@Office-1.ARPA>

------------------------------

Date: Fri, 10 Jul 87 10:34:39 n
From: Paul Davis <DAVIS%EMBL.BITNET@wiscvm.wisc.edu>
Subject: AI, science & Don Norman


Briefly - seems to me that most everyone (including DN himself) has
missed out on two key points. First, after Searle, there isn't only
*one* AI but two (Searle's strong and weak AI): the first is a suitable
target of DN's critique since its whole raison d'etre can be summed up
in its idea of AI as `cognitive science', i.e., that computer science is
a way to approach an understanding of what *existing* intelligent systems
do and how they do it. However, let us not forget `weak' AI, which makes
no such claims - there is no assumption that the products of weak AI
function analogously to "real" intelligent systems, only that they
are capable of doing X by some means or another.

Second, given that `strong' AI *does* claim to have some intimate
relationship with cognitive science, it's worth asking "is there any
other way to study the brain/mind?" Don Norman castigates (probably
correctly) AI
for not being a science, but he also fails to point out the likely
impossibility of any non-AI-stimulated approaches ever coming to terms
with the complexity of the brain. AI models are *NOT* testable!!
Just imagine that a keen AI worker comes up with an implementation
of his/her model of human brain activity, and that this implementation
is so good, and so powerful that it saunters through Mr. Harnad's TTT
like a knife through butter... it is vital to see that there is very
little information in this result bearing on the question "is this
the correct model of the brain?" The ONLY way to confirm (test)
a `strong' AI model is to demonstrate functionally equivalent hardware
behaviour, and psychology is a century or more from being able to do this.
Norman seems right to castigate AI workers for excessive speculation
unsupported by `real experiments', and undoubtedly, if the aim of
`strong' AI is ever to succeed, then we *must* know what it is that
we are trying to model; but he should also recognize that AI
cannot be tested or developed as other sciences are, simply because it is
unique in studying one domain (computers) with the idea of understanding
another (the brain). When AI *is* a science, it will be called psychology.

too long..,

paul davis

EMBL, Heidelberg, FRG

bitnet: davis@embl arpa: davis%embl.bitnet@wiscvm.wisc.edu
uucp: ...!psuvax1!embl.bitnet!davis

------------------------------

Date: Fri 10 Jul 87 09:43:03-PDT
From: Douglas Edwards <EDWARDS@Stripe.SRI.Com>
Subject: Don Norman on AI as nonscience

Don Norman assumes that he knows enough about scientific methods to
assert that AI doesn't use them.

I don't believe that he, or anyone else, has a good general
characterization of how science discovers what it discovers.
Especially, I don't believe that he has used scientific methods in
determining what scientific methods are. Attempts at characterizing
the methods of science typically come from intuitive reflection, or
from philosophy, not from science. There are some questions we have
to make educated guesses at, because scientific answers are not yet
available.

Norman's attack on AI is vitiated by the same weakness that vitiated
Dresher and Hornstein's earlier attack on AI. The critics'
characterizations of scientific methods are far *less* firmly grounded
than most assertions being made from within the discipline being
attacked.

Among intuitive and philosophical theories of scientific method--the
only kind yet available--a priori reasoning of the type used in AI
plays a prominent role. Exactly what relation such a priori reasoning
must have to experimental data is very much an open question.

My own background is in philosophy. I have gotten involved in AI
partly because I believe, on intuitive grounds, that it *is* a
science, and that it has a better shot at giving rise to a truly
scientific characterization of scientific methods than philosophy,
psychology, linguistics, or neuroscience. (I am not saying anything
against interdisciplinary cross-fertilization.) I am now trying to
work out a logical characterization of hypothesis formation.

Douglas D. Edwards
EK225
SRI International
333 Ravenswood Ave.
Menlo Park CA 94025
(edwards@warbucks.sri.com)
(edwards@stripe.sri.com)

------------------------------

Date: 10 Jul 87 18:37:00 GMT
From: jbn@glacier.STANFORD.EDU (John B. Nagle)
Reply-to: jbn@glacier.UUCP (John B. Nagle)
Subject: Re: AIList Digest V5 #171


In article <8707062225.AA18518@brillig.umd.edu> hendler@BRILLIG.UMD.EDU
(Jim Hendler) writes:

>When I publish work on planning and
>claim ``my system makes better choices than <name of favorite
>planning program's>'' I cannot verify this other than by showing
>some examples that my system handles that <other>'s can't. But of
>course, there is no way of establishing that <other> couldn't do
>examples mine can't and etc. Instead we can end up forming camps of
>beliefs (the standard proof methodology in AI) and arguing -- sometimes
>for the better, sometimes for the worse.

Of course there's a way of "establishing that <other> couldn't do
examples mine can't and etc."
You have somebody try the same problems on
both systems. That's why you need to bring the work up to the point that others
can try your software and evaluate your work. Others must repeat your
experiments and confirm your results. That's how science is done.
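The head-to-head comparison described above can be sketched as a small test
harness: run both systems on the same problem set and tabulate which problems
each one handles. The solvers and problems below are invented stand-ins for
illustration, not real planning systems.

```python
# Hypothetical comparison harness: try every problem on both solvers
# and record which problems each one handles.

def compare_planners(solver_a, solver_b, problems):
    """Run both solvers on the same problems and tabulate outcomes."""
    results = {"both": [], "only_a": [], "only_b": [], "neither": []}
    for problem in problems:
        a_ok = solver_a(problem)
        b_ok = solver_b(problem)
        if a_ok and b_ok:
            results["both"].append(problem)
        elif a_ok:
            results["only_a"].append(problem)
        elif b_ok:
            results["only_b"].append(problem)
        else:
            results["neither"].append(problem)
    return results

# Toy stand-ins: "planner A" handles problems of size < 5,
# "planner B" handles even-sized problems.
plan_a = lambda p: p < 5
plan_b = lambda p: p % 2 == 0

outcome = compare_planners(plan_a, plan_b, range(8))
```

The point of the "neither" and "only" buckets is exactly the one made above:
the claim "mine handles examples yours can't" becomes checkable once someone
else can run the same problems through both systems.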

I work on planning myself. But I'm not publishing yet. My planning
system is connected to a robot and the plans generated are carried out in the
physical world. This keeps me honest. I have simple demos running now;
the first videotaping session was last month, and I expect to have more
interesting demos later this year. Then I'll publish. I'll also distribute
the code and the video.

So shut up until you can demo.

John Nagle

------------------------------

Date: Fri, 10 Jul 87 20:32:07 GMT
From: Caroline Knight <cdfk%hplb.csnet@RELAY.CS.NET>
Subject: AI applications

This is sort of growing out from the discussion on whether AI is a
science or not, although I'm more concerned with the status of AI
applications.

Ever since AI applications started to catch on there has been a
growing divide between those who build software as some form of
experiment (no comment on the degree of scientific method applied) and
those who are building software *FOR ACTUAL USE* using techniques
associated with AI.

Many people try to go about the second as though it were the first.
This is not so: an experimental piece of software has every right to
be "toy" in all those dimensions which can be shown to be unnecessary
for testing the hypotheses. A fancy interface with graphics does not
necessarily make this into a usable system. However most pieces of
software built to do a job have potential users some of whom can be
consulted right from the start.

I am not the first person to notice this, I know. See, for instance,
Woods' work on human strengths and weaknesses, Alty and Coombes'
alternative paradigm for expert systems, or Kidd's work on expert
systems answering the wrong questions (sorry, I haven't the refs to
hand - if you want them let me know and I'll dig them out).

I think I have a good name for it: complementary intelligence. By this
I mean complementary to human intelligence. I am not assuming that the
programmed part of the system need be seen as intelligent at all.
However this does not mean that it has nothing to do with AI or
cognitive psychology:

AI can help build up the computer's strengths and define what
will be weaknesses for some time yet.

Cog psy can help define what humans' strengths and weaknesses
are.

Somehow we then have to work out how to put this information together
to support people doing various tasks. It is currently much easier to
produce a usable system if the whole task can be given to a machine;
the real challenge for complementary intelligence is in how to share
tasks between people and computers.

All application work benefits from some form of systems analysis or
problem definition. This is quite different from describing a system
to show off a new theory. It also allows the builder to consider the
people issues:

Job satisfaction - if the tool doesn't enrich the job, how are you
going to persuade the users to adopt it?

Efficient sharing of tasks - just because you can automate some
part does not mean you should!

Redesign of process?

I could go on for ages about this. But back to the main point about
whether AI is a science or not.

AI is a rather fuzzy area to consider as a science. Various sub-parts
might well have gained that status. For instance, vision has good
criteria against which to measure the success of a hypothesis.
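As a toy illustration of the kind of measurable criterion vision enjoys, a
segmentation hypothesis can be scored against ground truth by simple per-pixel
agreement. The labels below are invented for illustration; real evaluations
use richer metrics, but the principle is the same.

```python
# Score a segmentation hypothesis against ground truth by the
# fraction of pixels on which the two label arrays agree.

def pixel_accuracy(predicted, ground_truth):
    """Fraction of pixels where the hypothesis matches the truth."""
    assert len(predicted) == len(ground_truth)
    matches = sum(p == g for p, g in zip(predicted, ground_truth))
    return matches / len(ground_truth)

truth = [0, 0, 1, 1, 1, 0]        # invented ground-truth labels
hypothesis = [0, 1, 1, 1, 0, 0]   # invented model output
score = pixel_accuracy(hypothesis, truth)
```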

I suggest that the area that I am calling complementary intelligence
consists of both a science and an engineering discipline. It is a
science in which experiments such as those of cog psy can be applied.
They are hard to make clear cut but so are many others (didn't you
ever have a standard classroom physics experiment fail at school?).
It is engineering because it must build a product.

And if we want to start a new debate, how about whether it is more
profitable to apply engineering methods to software production or to
consider it an art? I recently saw a film of Picasso painting in
front of a camera, and I could see more parallels with some of the
excellent hackers I've observed than with what I've seen of engineers
at work. (This is valid AI material rather than just a software
engineering issue because it is about how people work, and anyone
interested in creating the next generation of programmer's assistants
must have some views on the subject!)

Caroline Knight (This is my personal view.)
Hewlett-Packard Ltd
Bristol, UK

------------------------------

Date: 11 Jul 87 04:48:04 GMT
From: isis!csm9a!japplega@seismo.CSS.GOV (Joe Applegate)
Subject: Re: Why AI is not a science


> From jlc@goanna.OZ.AU.UUCP Sat Feb 5 23:28:16 206
>
> May be AI is such unorthodox Science, or perhaps an Art.
> Let us keep AI this way!

I'm not sure there is any maybe about it! AI development is, in my humble
opinion, the most creative expression of the programmer's art. Any
semi-educated fool can code a program... but the creation of a useful,
productivity-enhancing application or system is far more art than science!
This is even more true in AI development: a query-and-answer style expert
system can be coded in BASIC by a high school hacker... but the true
application for AI is in sophisticated applications that employ high
quality presentation techniques that eliminate the ambiguities so often
present in a text-only presentation.
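For what it's worth, the query-and-answer style system dismissed above really
is only a few lines of code. Here is a minimal sketch (in Python rather than
BASIC, with invented rules) of the first-matching-rule scheme such programs
typically use; the rule base and advice strings are hypothetical.

```python
# A minimal query-and-answer "expert system": a flat list of if-then
# rules matched against a dictionary of yes/no answers. The first
# rule whose conditions all hold supplies the advice.

RULES = [
    ({"leaks_oil": True, "engine_starts": False}, "check the head gasket"),
    ({"engine_starts": False}, "check the battery"),
    ({"leaks_oil": True}, "check the oil pan seal"),
]

def diagnose(answers):
    """Return the advice of the first rule whose conditions all match."""
    for conditions, advice in RULES:
        if all(answers.get(key) == value for key, value in conditions.items()):
            return advice
    return "no rule applies"
```

Everything hard about a real expert system - knowledge acquisition,
explanation, and the presentation issues raised above - lies outside this
trivial matching loop, which is precisely the poster's point.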

One benefit of the advent of the personal computer is the redirection of
software product development away from the data-driven environment of DP
and accounting and towards the presentation-style environment of the non-DP
professional. Fortunately, most AI development systems are acknowledging
this trend by providing graphical interfaces.

Art mimics science and the application of science is an art!

Joe Applegate - Colorado School of Mines Computing Center
{seismo, hplabs}!hao!isis!csm9a!japplega
or
SYSOP @ M.O.M. AI BBS - (303) 273-3989 - 300/1200/2400 8-N-1 24 hrs.

*** UNIX is a philosophy, not an operating system ***
*** BUT it is a registered trademark of AT&T, so get off my back ***

------------------------------

End of AIList Digest
********************
