AIList Digest Saturday, 7 Mar 1987 Volume 5 : Issue 71
Today's Topics:
News - IJCAI Research Excellence Award,
Expert Systems - Explanations,
Philosophy - Consciousness
----------------------------------------------------------------------
Date: Fri, 6 Mar 87 09:35:16 GMT
From: Alan Bundy <bundy%aiva.edinburgh.ac.uk@Cs.Ucl.AC.UK>
Subject: Announcement: IJCAI Research Excellence Award
THE 1987 IJCAI AWARD FOR RESEARCH EXCELLENCE
I regret to announce that the IJCAI-87 Awards Committee,
having considered all the candidates nominated for the Research
Excellence Award, have decided not to make an award.
The Award is given in recognition of an Artificial
Intelligence scientist who has carried out a program of research of
consistently high quality yielding several substantial results. The
first recipient of this award was John McCarthy in 1985. In the
opinion of the Awards Committee none of the nominated candidates
reached the high standard required. Several members of the Committee
afterwards suggested candidates that, in their opinion, did reach the
required standard, but who had not been nominated.
Nominations for the Award were invited from all in the
artificial intelligence international community. The Award Committee
was the union of the Programme, Conference and Advisory Committees of
IJCAI-87 and the Board of Trustees of IJCAII, with nominees excluded.
It is the sincere hope of all the Committee that, in future
years, a greater effort will be made by the artificial intelligence
community to nominate suitable candidates.
Alan Bundy
IJCAI-87 Conference Chairman
------------------------------
Date: Fri, 6 Mar 87 9:42:17 EST
From: Bruce Nevin <bnevin@cch.bbn.com>
Subject: Re: dear Abby
The bounds of a field are subject to redefinition. Many established
fields of today were interdisciplinary in the past. Thus, `when not to
step past them' is a complex matter.
Is it possible for users of an expert system to ask for information outside
its domain, and for it to answer naively, overstepping its proper bounds?
Has anyone worked with this level of meta-expertise? For instance, are there
systems that address multiple domains and select the appropriate one (or
combination) by deduction from interactions with the user?
Bruce Nevin
bn@cch.bbn.com
(This is my own personal communication, and in no way expresses or
implies anything about the opinions of my employer, its clients, etc.)
------------------------------
Date: Thu, 05 Mar 87 08:13:33 EST
From: sriram@ATHENA.MIT.EDU
Subject: Expert Systems and Explanations
Knowledge-based system technology is a programming methodology that
facilitates the incorporation of "human or expert" knowledge. Hence the
criterion that an explanation facility is a must for a knowledge-based
system (or an expert system, once you add the expert's knowledge) is
open to question.
In fact, our experience in building expert systems indicates that
the end user is the least interested in seeing things like:
Rule XX concluded that YY is true.
That output is useful for debugging purposes. For the explanations
to be accepted by end users, a more robust ENGLISH translation should be
provided (see, for example, Swartout's work). Further, I feel the selling
point for any expert system will be the USER INTERFACE (with nice graphics).
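To make the point concrete, here is a minimal sketch of such a
translation layer (all rule names and wording are invented for
illustration, not taken from any system mentioned here):

    # A raw rule trace, useful only for debugging, paired with an
    # English paraphrase for end users. Everything here is invented.
    RULES = {
        "rule-42": {
            "concludes": "bearing-worn",
            "english": "The bearing is probably worn, because the spindle "
                       "temperature is high and the vibration is rising.",
        },
    }

    def explain(rule_id, audience="user"):
        rule = RULES[rule_id]
        if audience == "developer":
            # "Rule XX concluded that YY is true."
            return "%s concluded that %s is true." % (rule_id, rule["concludes"])
        return rule["english"]

    print(explain("rule-42", audience="developer"))
    print(explain("rule-42"))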
Sriram
------------------------------
Date: 5 Mar 87 15:08:41 GMT
From: ihnp4!ihuxv!arrgh@ucbvax.Berkeley.EDU (Hill)
Subject: Explanations in expert systems
I have to put my $0.02 into the expert systems discussion.
In real life, an expert system probably will not be used unless it possesses
a sound explanation facility. For most users, this does not mean merely
dumping rules or whatever, e.g., "the system is trying to satisfy rule-518", but
rather being able to turn the knowledge encoded in each unit of representation
into meaningful natural language.
An example may make this requirement clearer. One of the systems I have
built is the Michael Reese-IIT Stroke Consultant. This program is a large
neurology expert system designed to assist house physicians with the
diagnosis and treatment of stroke.
One of the treatments this system recommends is to prep the patient for surgery,
take him into the OR, remove the back of his head, and proceed to dig around
in the cerebellum for a hematoma.
Naturally enough, any reasonable physician will want to ask the machine "WHY?"
it recommends such a radical treatment, and expect an answer in a form that
a physician (not a computer scientist) can understand. The explanation system
will furnish: an English statement of the problem, e.g., "diagnosis is
hemorrhage into the cerebellum", and justifications for the treatment, e.g.,
"Evacuation of cerebellar hematoma is recommended because it greatly reduces
mortality when the following signs are present... Refer to the following
references [references to the neurology literature are cited]."
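A hypothetical sketch of how such an explanation record might be
structured (the field names and phrasing are illustrative only, not
taken from the actual system):

    # Hypothetical structure for a treatment explanation; wording and
    # fields are invented, not from the Michael Reese-IIT system.
    recommendation = {
        "diagnosis": "hemorrhage into the cerebellum",
        "treatment": "evacuation of cerebellar hematoma",
        "justification": "it greatly reduces mortality when the "
                         "following signs are present ...",
        "references": ["[citations to the neurology literature]"],
    }

    def why(rec):
        # Answer the physician's "WHY?" in plain English, with citations.
        return ("%s is recommended because %s Refer to the following "
                "references: %s" % (rec["treatment"].capitalize(),
                                    rec["justification"],
                                    "; ".join(rec["references"])))

    print(why(recommendation))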
Let's take a more common case. Last spring, I built an expert system that
is designed to diagnose problems in candy wrapping machinery. In fact, if
you eat candy bars, you have almost certainly eaten candy wrapped on one of
these machines. The operators of these machines needed additional help
in diagnosing and troubleshooting problems in this new equipment, and we
built an expert system for this specific task.
Machine operators, unlike many of you, have absolutely no understanding of
production rules, and moreover, they are not interested. This system had to be
able to furnish the following explanations on-line at all times: (1) how to
use the system itself, (2) how the candy wrapping equipment was supposed to
operate (an on-line tutorial on the machine), (3) how to answer the questions
the system was asking, e.g., where IC 9 pin 7 was on the Micro controller "A"
board, and (4) an explanation of the reasoning the system was using at that
time.
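A minimal sketch of such a help facility (the topic names and canned
text are invented; only the four categories come from the description
above):

    # The four kinds of on-line explanation behind one help command.
    # Topic names and text are invented for illustration.
    HELP = {
        "system":    "To answer a question, type the number of your choice ...",
        "equipment": "Tutorial: the wrapping film feeds from the left spool ...",
        "question":  "IC 9 pin 7 is on the Micro controller 'A' board ...",
        "reasoning": "Currently checking the film-feed sensor, because the "
                     "jam alarm is active ...",
    }

    def on_line_help(topic):
        # Available at any prompt, at all times.
        return HELP.get(topic, "Help topics: " + ", ".join(sorted(HELP)))

    print(on_line_help("reasoning"))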
The moral of this rather long posting is that if you want to build expert
systems that will actually be used by real people you will need a good
explanation facility. While this is necessary, it is, of course, not
sufficient. The knowledge engineer will need good debugging facilities
(something not provided on most tools today).
Hope this clears up some confusion.
--
Howard Hill, Ph.D.
------------------------------
Date: 6 Mar 87 23:01:45 GMT
From: cbatt!osu-eddie!tanner@ucbvax.Berkeley.EDU (Mike Tanner)
Subject: Re: Explanations in expert systems
In article <1800@ihuxv.ATT.COM> arrgh@ihuxv.ATT.COM (Hill) writes:
>
>In real life, an expert system probably will not be used unless it possesses
>a sound explanation facility. For most users, this does not mean merely
>dumping rules or whatever, e.g., "the system is trying to satisfy rule-518",
>but rather being able to turn the knowledge encoded in each unit of
>representation into meaningful natural language.
>
While I agree that spitting out rules is generally inadequate for
explanation, I disagree that explanations *must* be in natural language.
For some kinds of explanation, drawing and pointing are more useful.
"I think the wonkus is broken. Try replacing it."
"Wonkus!? What's that?"
"Take a look at the zweeble smasher. See this gizmo? That's the wonkus."
I'm not saying natural language is useless. But the above interaction
would have taken a lot more words without the picture. (With the
picture it might have needed no words at all. But I don't know what a
zweeble smasher is, much less how to draw one.) Sometimes a picture
really is worth a thousand words.
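One way such pointing might be implemented (a sketch only; the part
names and coordinates are invented, and no particular graphics system
is assumed):

    # Pointing instead of describing: map part names to regions of a
    # schematic so the interface can highlight them while answering.
    REGIONS = {                        # (x, y, width, height) in pixels
        "zweeble smasher": (60, 20, 180, 120),
        "wonkus":          (140, 70, 30, 25),
    }

    def point_at(part):
        # "See this gizmo? That's the wonkus."
        x, y, w, h = REGIONS[part]
        return {"highlight": (x, y, w, h),
                "caption": "This is the %s." % part}

    print(point_at("wonkus"))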
Keep in mind that when you talk about explanation as giving back rules
you're assuming expert systems are simple, flat rule-bases. This is
not necessarily true. If all your expert system knows is rules then:
(a) the system isn't doing anything interesting
OR
(b) you're actually using the rule language as a general
purpose programming language (because rules qua rules
don't give you the control features needed to navigate a
large knowledge base)
In case (a), there's no need to worry about real-world usefulness. In
case (b), there should be no surprise that the rules themselves are
not informative explainers, any more than a listing of code would, in
general, be an explanation of a program.
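To make the contrast concrete, here is an invented sketch of a genuine
domain rule next to a "rule" that is really just control flow; dumped
at a user, the second kind explains nothing:

    # A domain rule carries knowledge a user might want explained; a
    # control rule merely sequences the program. Both are invented.
    domain_rule = {
        "if":   ["temperature above 104F", "stiff neck"],
        "then": "consider meningitis",
    }
    control_rule = {
        "if":   ["phase is 'collect-data'"],
        "then": "set phase to 'diagnose'",   # pure bookkeeping
    }

    # Shown to a user, the first is at least meaningful; the second is
    # as opaque as any other line of program code.
    for rule in (domain_rule, control_rule):
        print("IF %s THEN %s" % (" AND ".join(rule["if"]), rule["then"]))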
-- mike
ARPA: tanner@ohio-state.arpa
UUCP: ...cbosgd!osu-eddie!tanner
------------------------------
Date: Thu, 5 Mar 87 09:24:51 pst
From: Eugene Miya N. <eugene@ames-pioneer.arpa>
Subject: Consciousness
Do not confuse consciousness with memory. Consciousness is not
a dualistic phenomenon, as your "speculation" (your word) tends
to imply. Consider that you did not mention the subconscious (explicitly),
but you did mention a dual unconscious.
Your comments on memory could also be refined by the cognitive
literature, such as the distinction between recall, recognition, and the two
other types of memory test I am forgetting. You should also make a
distinction between forgetting and interference (this is good).
My suggestion is for you to visit a nearby college or university and
get some literature on cognition (of which I am NOT a proponent).
From the Rock of Ages Home for Retired Hackers:
--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"You trust the `reply' command with all those different mailers out there?"
"Send mail, avoid follow-ups. If enough, I'll summarize."
{hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene
------------------------------
Date: Fri, 06 Mar 87 12:01:36 n
From: DAVIS%EMBL.BITNET@wiscvm.wisc.edu
Subject: philosophy - consciousness
Could an unconscious machine be a good psychologist?
****************************************************
During the recent discussions on consciousness, Stevan Harnad has,
in the face of many claims about its role/origin, given us the demanding
question "well, if X is achieved *with* consciousness, why couldn't it
be accomplished *without*?" (I hope I have understood this much correctly).
I think that many of those arguing with Harnad, myself included, have not
appreciated the full implications of this question - I wish now to give
one example of "an X" designed to at least point in the direction of an
answer to Harnad's question.
I hope that Stevan would accept, as a relatively axiomatic truth,
that for complex systems (e.g., ourselves, future computer systems), interaction
and 'social development' are a *good thing*. That is to say, a system will
do better if it can interact with others (particularly of its kind), and
even more so if such interactions are capable of development towards
structures resembling 'societies'. We can justify this simply on the grounds
of efficiency, information exchange, and altruistically-based mutual survival
arrangements (helping each other out). I think that this is as true of computer
systems as of human beings, although current implementations lack any real
capacity for self-development.
Given this axiom - that complex systems will do better if they
interact - we may return to the hypothesis of Armstrong, recently raised
by M.Brilliant on the ailist, that the selective advantage conferred by
being conscious is connected with the ability to form developing social
systems. Harnad's question in this context (previously raised) is "why couldn't
an unconscious TTT-indistinguishable automaton accomplish the same thing?".
So, let's look at this proposition. In order to accomplish meaningful
social interactions in a way that opens up such relations to future development
it is necessary to be able to predict - not, of course with 100% accuracy,
but to an extent that permits mutual acts to occur without running through
all the verbal preliminaries every time (conceptually similar to installing
preamble macros in TeX - a facetious statement!). Our ability to do this
is described in everyday experience as 'understanding other people', and
permits us to avoid asking the boss for a raise when he is obviously in
a foul mood.
Rephrasing Harnad's question in an even more specific (and revealing)
manner, we now have "why couldn't an unconscious TTT-indistinguishable
automaton make similarly good predictions about other conscious objects?".
We now have a useful fusion of biological, psychological and computer terms.
What sort of computer systems do we know of that are able to make predictions?
Although the exact definition is currently under debate (see the list), it
seems that we may subsume such systems under the general term "expert systems" -
used here in the most general sense of an electronic device with access
to a knowledge base and some method of drawing conclusions given this knowledge
and a specific query or situation. I hope that Stevan will go along with
this as a possible description of his TTT-indistinguishable automaton.
So, could such a system 'understand' other people? I believe that
it could not, for the following reasons. As sophisticated as this 'inference
engine' may be, its methods of reasoning must still, even in some high-level
sense, be instantiated by its designers. Moreover, its knowledge base is
expandable only by observation of the world. To behave in a way that was
TTT-indistinguishable from a human in its capacity to 'understand' people,
this automaton would either (1) have to have a built-in model of human
psychology or (2) be capable of collecting information that enabled it to
form its own model over time.
Here we have reached the kernel of the problem. Do we have, or are
we ever likely to have, our own model of human psychology that is capable
of being implemented on a computer? Obviously, this is open to debate, but
I think not. The human approach to psychology seems to me to be incapable
of developing in a context which does not take the participation and prior
knowledge of the psychologist into consideration. As sophisticated as it
gets, I feel (though you're welcome to try and change my mind) that psychology
will always be like a dictionary - you look up the meaning of one word,
and find you have to know 30 others to understand what it means. Alternatively,
suppose that our fabulous machine were to try and 'figure it out for itself'.
It will very soon run into a problem. When it asks someone why they did
something, it will receive a reply which often involves a reference to an
'inner self' - a world which, as any good psychologist will tell you, has
its own rules, its own objects and its own interactions. The machine asks,
and asks, observes and observes - will it ever be able to put together a
picture of the 'inner life' of these conscious humans?
And now we are at the end. It's obviously a statement of faith, but
I believe that what consciousness gives us is the ability to do just what
this machine cannot - to be a good psychologist. It makes this possible
by allowing us to *compare and contrast* our own behaviour and 'inner self'
with others' behaviour - and hence to make the leap of understanding that
gives rise to the possibility of meaningful social interaction and development.
We have our *own* picture of 'inner life' (this is not meant to be mystical!)
and hence we have no need to develop such a model by inference. I do
not believe (now!) that an unconscious device could, and hence
I do not think that it is possible, even in principle, to build an unconscious
TTT-indistinguishable automaton that is capable of interacting with conscious
objects.
Thank you, and good night.
Paul Davis
wetmail: embl, postfach 10.2209, 6900 heidelberg, west germany
netmail: davis@embl.bitnet
petmail: homing pigeons to .......
------------------------------
End of AIList Digest
********************