AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 171 

Today's Topics:
Queries - AI Expert Source for Hopfield Nets &
Liability of Expert System Developers,
Programming - Software Reuse,
Scientific Method - Psychology vs. AI & Why AI is not a Science

----------------------------------------------------------------------

Date: 2 Jul 87 20:45:24 GMT
From: ucsdhub!dcdwest!benson@sdcsvax.ucsd.edu (Peter Benson)
Subject: AI Expert source for Hopfield Nets

I am looking for the source mentioned in Bill Thompson's
article on Hopfield Nets in the July, 1987 issue of
AI Expert magazine. At one time, someone was posting all the
sources, but that has apparently stopped. Could that person,
or some like-minded citizen, post the source for this
Travelling Salesman solution?

Thanks in advance !!
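
For readers without the July issue at hand, below is a minimal sketch
of the Hopfield-Tank formulation of the Travelling Salesman Problem,
written in Python with NumPy rather than taken from the magazine's
listing. The penalty weights, step size, and iteration count are
illustrative assumptions, not the article's published settings.

    import numpy as np

    def hopfield_tsp(dist, steps=2000, A=500.0, B=500.0, C=200.0, D=500.0):
        """dist: an (n, n) symmetric matrix of inter-city distances.
        Returns an (n, n) activation matrix v; v[x, i] near 1 means
        'visit city x at tour position i'."""
        dist = np.asarray(dist, dtype=float)
        n = len(dist)
        u = np.random.uniform(-0.1, 0.1, (n, n))    # internal neuron states
        for _ in range(steps):
            v = 0.5 * (1.0 + np.tanh(u))            # sigmoid activations
            row = v.sum(axis=1, keepdims=True) - v  # city in two positions
            col = v.sum(axis=0, keepdims=True) - v  # two cities, one position
            glob = v.sum() - n                      # exactly n units active
            # distance term couples each city to its tour neighbors
            tour = dist @ (np.roll(v, -1, axis=1) + np.roll(v, 1, axis=1))
            du = -u - A * row - B * col - C * glob - D * tour
            u += 0.01 * du                          # Euler step on the dynamics
        return 0.5 * (1.0 + np.tanh(u))

Reading a tour out of the final activations (ideally one active unit
per row and per column) works as in Hopfield and Tank's paper;
convergence to a legal tour is notoriously sensitive to the constants
above.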

--
Peter Benson | ITT Defense Communications Division
(619)578-3080 | 10060 Carroll Canyon Road
ucbvax!sdcsvax!dcdwest!benson | San Diego, CA 92131
dcdwest!benson@SDCSVAX.EDU |

------------------------------

Date: 5 Jul 87 22:00:58 GMT
From: bloom-beacon!bolasov@husc6.harvard.edu (Benjamin I Olasov)
Subject: Liability of Expert System Developers

I'm told that a hearing is now underway which would set a legal precedent
for determining the extent of liability to be borne by software developers
for the performance of expert systems authored by them. Does anyone have
details on this?

------------------------------

Date: 4 Jul 87 21:19:48 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Subject: Re: Software Reuse -- do we really know what it is ? (long)


The trouble with this idea is that we have no good way to express
algorithms "abstractly". Much effort was put into attempting to do so
in the late 1970s, when it looked as if program verification was going to
work. We know now that algebraic specifications (of the Parnas/SRI type)
are only marginally shorter than the programs they specify, and much
less readable. Mechanical verification that programs match formal
specifications turned out not to be particularly useful for this reason.
(It is, however, quite possible; a few working systems have been
constructed, including one built by myself, described in ACM POPL '83,
and several others).
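
To give a feel for that claim, here is a compressed example: a toy
stack together with Parnas/SRI-flavor algebraic axioms, written as
executable Python checks purely for illustration. Even in this
trivial case the axioms are barely shorter than the implementation
they specify.

    def empty():        return []
    def push(s, x):     return s + [x]
    def pop(s):         return s[:-1]
    def top(s):         return s[-1]
    def is_empty(s):    return s == []

    # Algebraic axioms in the Parnas/SRI style -- note they are about
    # the same size as the implementation above:
    def check_axioms(s, x):
        assert pop(push(s, x)) == s
        assert top(push(s, x)) == x
        assert is_empty(empty())
        assert not is_empty(push(s, x))

    check_axioms(empty(), 1)
    check_axioms(push(empty(), 7), 2)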

We will have an acceptable notation for algorithms when each algorithm
in Knuth's "Art of Computer Programming" is available in machine-readable form
and can be used without manual modification for most applications for which
the algorithm is applicable. As an exercise for the reader, try writing
a few of Knuth's algorithms as Ada generics and make them available to
others, and find out if they can use them without modifying the source
text of the generics.
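
A taste of the exercise, in Python rather than Ada generics, using
Knuth's binary search (Algorithm B, TAOCP Vol. 3, Section 6.2.1) as
an arbitrary pick: even this tiny routine forces interface choices --
the key function, the not-found convention, the treatment of
duplicates -- that a would-be reuser is likely to want changed.

    def binary_search(items, target, key=lambda x: x):
        """Knuth's Algorithm B: return the index of target in the
        sorted sequence items, or -1 if it is absent."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            k = key(items[mid])
            if k < target:
                lo = mid + 1
            elif k > target:
                hi = mid - 1
            else:
                return mid
        return -1

    binary_search([1, 3, 5, 8, 13], 8)   # -> 3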

In practice, there now is a modest industry in reusable software
components; see the ads in any issue of Computer Language. Worth noting
is that most of these components are in C.

John Nagle

------------------------------

Date: 02 Jul 87 09:55:35 EDT (Thu)
From: sas@bfly-vax.bbn.com
Subject: Don Norman's comments on time perception and AI philosophizing


Actually, many studies have been done on time perception. One rather
interesting one, reported some years back in Science, showed that time
and size scale together: smaller models (mannequins in a model office
setting) appear to move faster. It was a neat paper to read.

I agree that AI suffers from a decidedly non-scientific approach.
Even when theoretical physicists flame about liberated quarks and the
anthropic principle, they usually have some experiments in mind. In
the AI world we get thousands of bytes on the "symbol grounding
problem" and very little evidence that symbols have anything to do
with intelligence and thought. (How's that for Drano[tm] on troubled
waters?)

There have been a lot of neat papers on animal (and human) learning
coming out lately. Maybe the biological brain hackers will get us
somewhere - at least they look for evidence.

Probably overstating my case,
Seth

------------------------------

Date: Thu 2 Jul 87 12:10:08-PDT
From: PAT <HAYES@SPAR-20.ARPA>
Subject: Re: AIList Digest V5 #165

HEY, DON!!! RIGHT ON!

Pat Hayes


[Donald Norman, I presume. -- KIL]

------------------------------

Date: 3 Jul 87 18:01:33 GMT
From: nosc!humu!uhccux!stampe@sdcsvax.ucsd.edu (David Stampe)
Subject: Re: On how AI answers psychological issues

norman%ics@SDCSVAX.UCSD.EDU (Donald A. Norman) writes:
> Thinking about "how the mind works" is fun, but not science, not
> the way to get to the correct answer.

In fact it's the ONLY way to get the correct answer. Experiments
don't design themselves, and they don't interpret their own results.

We don't see with outward eyes or hear with outward ears alone. The
outward perception or behavior does not exist without the inward one.
If you practice your remembered violin in your imagination, while your
actual violin is being repaired, you, as well as the violin, may sound
much better when the repairs are finished.

I am a linguist. I write a tongue twister on the board that my
students haven't heard before: 'Unique New York Unique New York
Unique New York....' They watch silently, but when I ask them what
errors this tongue twister induces, they immediately name the very
errors I discovered before class, when I tried to pronounce it aloud.
You didn't have to say it aloud, either, did you?

It is not introspection that is AI's trouble. It is that an expert
system, for example, isn't likely to model expertise correctly until
it is designed by someone who is himself the expert, or who knows how
to discover the nature of the expert's typically unconscious wisdom.
Linguistics has struggled for over a century to develop tools for
learning how human beings acquire and use language. It seems likely
that a comparable struggle will be required to learn how the expert
diagnostician, welder, draftsman, or reference librarian does what he
or she does.

I often feel that when a good student of language takes a job building
a natural language interface for some AI project, in her work --
though it may be viewed by others in the project as marginal, if not
menial -- she is more likely to turn up something of scientific import
than are those working on the core of the project. This is just
because she has spent years learning to learn how experts -- in this
case language users -- do what they do. On the other hand, she is not
likely to believe that programs can realistically model much of the
human linguistic faculty, at least in the imaginable future. For
example, computer parsers presuppose grammars. But it is not clear
whether children, the only devices so far known to have mastered any
natural language, come equipped with any analogous utilities.
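
A toy illustration of that last point (the grammar and lexicon below
are invented solely for the example): even the most naive parser must
be handed an explicit grammar before it can do anything at all, which
is exactly the presupposition in question.

    # Toy grammar: S -> NP VP ; NP -> Det N ; VP -> V NP
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["Det", "N"]],
        "VP": [["V", "NP"]],
    }
    LEXICON = {"the": "Det", "a": "Det", "dog": "N", "cat": "N", "saw": "V"}

    def parse(symbol, words, i=0):
        """Naive top-down parse: return (tree, next_index) or None."""
        if symbol in GRAMMAR:
            for rule in GRAMMAR[symbol]:
                children, j = [], i
                for part in rule:
                    result = parse(part, words, j)
                    if result is None:
                        break
                    tree, j = result
                    children.append(tree)
                else:                     # every part of the rule matched
                    return (symbol, children), j
            return None
        if i < len(words) and LEXICON.get(words[i]) == symbol:
            return (symbol, words[i]), i + 1
        return None

    tree, _ = parse("S", "the dog saw a cat".split())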

David Stampe, Linguistics, Univ. of Hawaii

------------------------------

Date: Thu, 2 Jul 87 22:36:05 edt
From: amsler@flash.bellcore.com (Robert Amsler)
Subject: Re: thinking about thinking not being science


I think Don Norman's argument is true for cognitive psychologists,
but may not be true for AI researchers. The reason is that the two
groups seek different answers. If AI were only the task of finding
out how people work, then it would be valid to regard armchair
reasoning as an invalid form of speculation. One can study
people directly (this is the old ``stop arguing over the number of
teeth in a horse's mouth and go outside and count them'' argument).
However, some AI researchers are really engineers at heart. The
question then is not how do people work, but how could processes
providing comparable performance quality to those of humans be made
to work in technological implementations. `Could' is important.
Airplanes are clearly not very good imitations of birds. They are
too big, for one thing. They have wheels instead of feet, and the
list goes on and on (no feathers!). Speculating about flight might
lead to building other types of aircraft (as certainly those now
humorous old films of early aviation experiments show), but it would
certainly be a bad procedure to follow to understand birds and how
they fly. Speculating about why the $6M man appears as he does
while running is a tad off the beaten path for AILIST, but that
process of speculation is hardly worthless for arriving at novel
means of representing memory or perception FOR COMPUTER SYSTEMS.

Let's not squabble over the wrong issue. The problem is that the
imagery of the $6M man's running is just too weak as a springboard for
much directed thought and the messages (including my own earlier
reply) are just rambling off in directions more appropriate to
SF-Lovers than AILIST. I do agree that the CURRENT discussion isn't
likely to lead anywhere--but not that the method of armchair
speculation is invalid in AI.

------------------------------

Date: Fri, 3 Jul 87 07:29:41 pdt
From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)
Subject: Why AI is not a science


A private message to me in response to my recent AIList posting,
coupled with general observations, has led me to realize why so many
of us otherwise friendly folks in the sciences that neighbor AI can be
so frustrated with AI's casual attitude toward theory: AI is not a
science, and its practitioners are woefully untutored in scientific
method.

At the recent MIT conference on Foundations of AI, Nils Nilsson stated
that AI was not a science, that it had no empirical content, nor
claims to empirical content, that it said nothing of any empirical
value. AI, stated Nilsson, was engineering. No more, no less. (And
with that statement he left to catch an airplane, stopping further
discussion.) I objected to the statement, but now that I consider it
more deeply, I believe it to be correct and to reflect the
dissatisfaction people like me (i.e., "real scientists") feel with AI.
The problem is that most folks in AI think they are scientists and
think they have the competence to pronounce scientific theories about
almost any topic, but especially about psychology, neuroscience, or
language. Note that perfectly sensible disciplines such as
mathematics and philosophy are also not sciences, at least not in the
normal interpretation of that word. It is no crime not to be a
science. The crime is to think you are one when you aren't.

AI worries a lot about methods and techniques, with many books and
articles devoted to these issues. But by methods and techniques I
mean such topics as the representation of knowledge, logic,
programming, control structures, etc. None of these methods says
anything about content. And there is the flaw: nobody in the field of
Artificial Intelligence speaks of what it means to study intelligence,
of what scientific methods are appropriate, what empirical methods are
relevant, what theories mean, and how they are to be tested. All the
other sciences worry a lot about these issues, about methodology,
about the meaning of theory and what the appropriate data collection
methods might be. AI is not a science in this sense of the word.
Read any standard text on AI: Nilsson or Winston or Rich or even the
multi-volume handbook. Nothing on what it means to test a theory, to
compare it with others, nothing on what constitutes evidence, or on
how to conduct experiments. Look at any science and you will find
lots of books on experimental method, on the evaluation of theory.
That is why statistics are so important in psychology or biology or
physics, or why counterexamples are so important in linguistics. Not
a word on these issues in AI.

The result is that practitioners of AI have no experience in the
complexity of experimental data, no understanding of scientific
method. They feel content to argue their points through rhetoric,
example, and the demonstration of programs that mimic behavior thought
to be relevant. Formal proof methods are used to describe the formal
power of systems, but this rigor in the mathematical analysis is not
matched by any similar rigor of theoretical analysis and evaluation
for the content.

This is why other sciences think that folks in AI are off-the-wall,
uneducated in scientific methodology (the truth is that they are), and
completely incompetent at the doing of science, no matter how
brilliant at the development of mathematics of representation or
formal programming methods. AI will contribute to the A, but will
not contribute to the I unless and until it becomes a science and
develops an appreciation for the experimental methods of science. AI
might very well develop its own methods -- I am not trying to argue
that existing methods of existing sciences are necessarily appropriate
-- but at the moment, there is only clever argumentation and proof
through made-up example (the technical expression for this is "thought
experiment" or "Gedanken experiment"). Gedanken experiments are not
accepted methods in science: they are merely suggestive, a source of
ideas, not evidence in the end.

don norman

Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa {decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
norman@sdics.ucsd.edu norman%sdics.ucsd.edu@RELAY.CS.NET

------------------------------

End of AIList Digest
********************
