AIList Digest             Friday, 5 Feb 1988       Volume 6 : Issue 26 

Today's Topics:
Methodology - Two Extreme Approaches to AI & AI vs. Linguistics,
Expert Systems - Interviewing Experts

----------------------------------------------------------------------

Date: 01 Feb 88 1153 PST
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: two extreme approaches to AI

1. The logic approach (which I follow).

Understand the common sense world well enough to express in
a suitable logical language the facts known to a person. Also
express the reasoning methods as some kind of generalized logical
inference. More details are in my Daedalus paper.
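
To give the flavor of what "facts in a logical language plus generalized
inference" might look like, here is a minimal sketch (hypothetical code,
not taken from the Daedalus paper; the predicates, individuals, and rules
are invented, and serious common-sense reasoning would need nonmonotonic
machinery that this toy omits):

    # A minimal sketch: common-sense facts as ground atoms plus
    # Horn-clause-style rules, with naive forward-chaining inference.

    facts = {("bird", "Tweety"), ("penguin", "Opus")}

    # Each rule: (premise patterns, conclusion pattern); "?x" is a variable.
    rules = [
        ([("penguin", "?x")], ("bird", "?x")),
        ([("bird", "?x")], ("can_fly", "?x")),   # a default; exceptions are not handled
    ]

    def match(pattern, fact, env):
        """Unify a flat pattern against a ground fact, extending the bindings env."""
        if len(pattern) != len(fact):
            return None
        env = dict(env)
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                if env.get(p, f) != f:
                    return None
                env[p] = f
            elif p != f:
                return None
        return env

    def forward_chain(facts, rules):
        """Apply every rule repeatedly until no new facts can be derived."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                envs = [{}]
                for premise in premises:
                    envs = [e2 for e in envs for f in facts
                            if (e2 := match(premise, f, e)) is not None]
                for env in envs:
                    new_fact = tuple(env.get(t, t) for t in conclusion)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
        return facts

    print(forward_chain(set(facts), rules))
    # derives ("bird", "Opus"), ("can_fly", "Tweety"), ("can_fly", "Opus")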

2. Using nano-technology to make an instrumented person.
(This approach was suggested by Drexler's book and by
Eve Lewis's commentary in AILIST. It may even be what
she is suggesting).

Sequence the human genome. Which one? Mostly they're the
same, but let the researcher sequence his own. Understand
embryology well enough to let the sequenced genome develop
in a computer to birth. Provide an environment adequate for
human development. It doesn't have to be very good, since
people who are deaf, dumb and blind still manage to develop
intelligence. Now the researcher can have multiple copies
of this artificial human - himself more or less. Because
it is a program running in a superduper computer, he can
put in science programs that find what structures correspond
to facts about the world and to particular behaviors. It is
as though we could observe every synaptic event. Experiments
could be made that involve modifying the structure, blocking
signals at various points and injecting new signals.

Even with the instrumented person, there would be a huge scientific
task in understanding the behavior. Perhaps it could be solved.

My exposition of the "instrumented man" approach is rather
schematic. Doing it as described above would take a long time, especially
the part about understanding embryology. Clever people, serious about
making it work, would discover shortcuts. Even so, I'll continue
to bet on the logic approach.

3. Other approaches. I don't even want to imply that the above two
are the main approaches. I only needed to list two to make my main
point.

How shall we compare these approaches? The Dreyfuses
use the metaphor "AI at the crossroads again". This is wrong.
AI isn't a person that can only go one way. The headline should
be "A new entrant in the AI race" - to the extent that they
regard connectionism as new, or "An old horse re-enters the
AI race" to the extent that they regard it as a continuation
of earlier work. There is no a priori reason why both approaches
won't win, given enough time. Still others are viable.

However, experience since the 1950s shows that AI is
a difficult problem, and it is very likely that fully understanding
intelligence may take of the order of a hundred years. Therefore,
the winning approach is likely to be tens of years ahead of the
also-rans.

The Dreyfuses don't actually work in AI. Therefore, they take
this "Let's you and him fight" approach by babbling about a crossroads.
They don't worry about dissipating researchers' energy in writing articles
about why other researchers are on the wrong track and shouldn't be
supported. Naturally there will still be rivalry for funds, and even more
important, to attract the next generation of researchers. (The
connectionists have reached a new level in this latter rivalry with their
summer schools on connectionism). However, let this rivalry mainly take
the form of advancing one's own approach rather than denouncing others.
(I said "mainly" not "exclusively". Critical writing is also important,
especially if it takes the form of "Here's a problem that I think gives
your approach difficulty for the following reasons. How do you propose to
solve it?" I hope my forthcoming BBS commentary on Smolensky's "The
Proper Treatment of Connectionism" will be taken in this spirit.)

The trouble is "AI at the Crossroads" suggests that partisans of each
approach should try to grab all the money by zapping all rivals.
Just remember that in the Communist Manifesto, Marx and Engels mentioned
another possible outcome to a class struggle than the one they
advocated - "the common ruin of the contending classes".

------------------------------

Date: 3 Feb 88 02:37:00 GMT
From: alan!tutiya@labrea.stanford.edu (Syun Tutiya)
Subject: Re: words order in English and Japanese

In article <3725@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>Still,
>it seems to have little to do with the problems that AI researchers busy
>themselves with. And it has everything to do with what language
>scholars busy themselves with. Perhaps the participants realize
>instinctively that their views make more sense in this newsgroup.

I am no AI researcher or language scholar, so I find it interesting to
learn that even in AI there can be an argument as to whether this or
that is a proper subject. Does what AI researchers are busy with define
the proper domain of AI research? People who answer yes to this question
can safely be said to live in an established discipline called AI.

But if AI research is to be something which aims at a theory of
intelligence, whether human or machine, I would say the interests of AI
and those of philosophy are almost coextensive.

I do not mind anyone taking the above as a joke, but the following seems
to be a real problem for both AI researchers and language scholars.

A myth has it that variation across languages is a matter of what is
called parameter setting: the same inborn universal linguistic faculty,
modified only within a preset range of parameters, with that faculty
taken to be largely independent of other human faculties. AI research,
on the other hand, seems to rest on the assumption that all kinds of
intellectual faculty are realized in essentially the same manner. So it
is not unnatural for an AI researcher to try to come up with a "theory"
which "explains" what one of the human faculties is like, an endeavor
that sounds very odd and unnatural to well-educated language scholars.
(A toy sketch of the parameter-setting picture follows below.)
Nakashima's original theory may have no grain of truth, I agree, but the
ensuing exchange of opinions revealed, at least to me, that the AI
researchers of netland have lost the real challenging spirit their
precursors shared when they embarked on the project of AI.
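
As a toy illustration of the parameter-setting picture sketched above
(the "head_final" switch and the function are invented for this example
and make no claim to linguistic adequacy):

    # Toy sketch: a single invented "head_final" parameter flips a verb phrase
    # between English-like verb-object order and Japanese-like object-verb order.

    def verb_phrase(verb, obj, head_final):
        return f"{obj} {verb}" if head_final else f"{verb} {obj}"

    print(verb_phrase("read", "the book", head_final=False))  # English-like: "read the book"
    print(verb_phrase("yomu", "hon-o", head_final=True))      # Japanese-like: "hon-o yomu"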

Sorry for unproductive, onlooker-like comments.

Syun
(tutiya@csli.stanford.edu)
[The fact that I share nationality and affiliation with Nakashima
has nothing to do with the above comments.]

------------------------------

Date: 30 Jan 88 17:34:31 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen Smoliar)
Subject: interviewing experts


I just read the article, "How to Talk to an Expert," by Steven E. Evanson
in the February 1988 issue of AI EXPERT. While I do not expect profound
technical insights from this magazine, I found certain portions of this
article sufficiently contrary to my own experiences that I decided a
bit of flaming was in order. Mr. Evanson is credited as being "a
practicing psychologist in Monterey, Calif., who works in the expert
systems area." Let me being with the observation that I am NOT a
practicing psychologist, nor is my training in psychology. What I
write will be based primarily on the four years of experience I had
at the Schlumberger-Doll Research Laboratory in Ridgefield, Connecticut
during which I had considerable opportunity to interact with a wide
variety of field experts and to attempt to implement the results of
those interactions in the form of software.

Mr. Evanson dwells on many approaches to getting an expert to explain
himself. For the most part, he addresses himself to the appropriate sorts
of probing questions the interviewer should ask. Unfortunately, one may
conclude from Mr. Evanson's text that such interviewing is a unilateral
process. The knowledge engineer "prompts" the expert and records what
he has to say. Such a practice misses out on the fact that experts are
capable of listening, too. If a knowledge engineer is discussing how an
expert solves a particular problem, then it is not only valuable, but
probably also important, that the interviewer be able to "play back" the
expert's solution without blindly mimicking it. In other words, if the
interviewer can explain the solution back to the expert in a way the
expert finds acceptable, then both parties can agree that the information
has been transferred. This seems to be the most effective way to deal
with one of Mr. Evanson's more important observations:

    It is very important for the interviewer to understand
    how the expert thinks about the problem and not assume
    or project his or her favored modes of thinking into the
    expert's verbal reports.

Maintaining bilateral communication is paramount in any encounter with an
expert. Mr. Evanson makes the following observation:

    Shallowness of breathing or eyes that appear to defocus
    and glaze over may also be associated with internal
    visual images.

Unfortunately, it may also indicate that the expert is at a loss at that
stage of the interview. It may be that he has encountered an intractable
problem, but another possibility is that he really has not processed a
question from the interviewer and can't figure out how to reply. If
the interviewer cannot distinguish "deep thought" from "being at a loss,"
he is likely to get rather confused with his data. Mr. Evanson would have
done better to cultivate an appreciation of this point.

It is also important to recognize that much of what Mr. Evanson has to say
is opinion which is not necessarily shared "across the board." For
example:

    As experts report how they are solving a problem, they
    translate internal experiences into language. Thus
    language becomes a tool for representing the experiences
    of the expert.

While this seems rather apparent at face value, we should bear in mind that
it is not necessarily consistent with some of the approaches to reasoning
which have been advanced by researchers such as Marvin Minsky in his work
on memory models. The fact is that often language can be a rather poor
medium for accounting for one's behavior. This is why I believe that it
is important that a knowledge engineer should raise himself to the level
of novice in the problem domain being investigated before he even begins
to think about what his expert system is going to look like. It is more
important for him to internalize problem solving experiences than to simply
document them.

In light of these observations, the sample interview Mr. Evanson provides
does not serve as a particularly shining example. He claims that he began
an interview with a family practice physician with the following question:

    Can you please describe how you go about making decisions
    with a common complaint you might see frequently in your
    practice?

This immediately gets things off on the wrong foot. One should begin with
specific problem solving experiences. The most successful reported interviews
with physicians have always begun with a specific case study. If the
interviewer does not know how to formulate such a case study, then he
is not ready to interview yet. Indeed, Mr. Evanson essentially documents
that he began with the wrong question without explicitly realizing it:

    This question elicited several minutes of interesting
    unstructured examples of general medical principles,
    data-gathering techniques, and the importance of a
    thorough examination but remained essentially unanswered.
    The question was repeated three or four times with
    slightly different phrasing with little result.

From this point on, the level of credibility of Mr. Evanson's account
goes downhill. Ultimately, the reader of this article is left with
a potentially damaging false impression of what interviewing an expert
entails.

One important point I observed at Schlumberger is that initial interviews
often tend to be highly frustrating and not necessarily that fruitful.
They are necessary nonetheless, if only for the anthropological task of
establishing a shared vocabulary. However, once that vocabulary has been
set, the
burden is on the knowledge engineer to demonstrate the ability to use
it. Thus, the important thing is to be able to internalize some initial
problem solving experience enough so that it can be implemented. At
this point, the expert is in a position to do something he is very good
at: criticizing the performance of an inferior. Experts are much better
at picking apart the inadequacies of a program which claims to solve
problems than at articulating the underlying principles of solution.
Thus, the best way to get information out of an expert is often to
give him some novice software to criticize. Perhaps Mr. Evanson has
never built any such software for himself, in which case this aspect
of interacting with an expert may never have occurred to him.
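
To make that last suggestion concrete, here is the flavor of "novice
software" I have in mind; the domain and rules below are invented purely
for illustration, and the point is precisely that an expert will find
them easy to tear apart:

    # Hypothetical "novice" prototype: a few naive if-then rules for a toy
    # diagnostic task. It is meant to be criticized by the expert, not to be right.

    def novice_diagnosis(findings):
        """findings: dict mapping invented observation names to values."""
        if findings.get("fever") and findings.get("cough"):
            return "flu"                    # expert: "far too coarse; what about pneumonia?"
        if findings.get("fever"):
            return "unspecified infection"  # expert: "you never asked about duration"
        return "no significant finding"     # expert: "absence of fever rules out very little"

    # Walking the expert through concrete runs invites pointed criticism of each rule.
    print(novice_diagnosis({"fever": True, "cough": True}))
    print(novice_diagnosis({"fever": True}))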

------------------------------

End of AIList Digest
********************
