AIList Digest Wednesday, 12 Nov 1986 Volume 4 : Issue 258
Today's Topics:
Query - Uncertainty and Belief in Expert Systems,
Seminars - Is Probability Adequate? (MIT) &
Uncertain Data Management (IBM) &
Qualitative Reasoning about Mechanisms (SMU) &
Diagnostic Systems (SMU) &
Programming Descriptive Analogies by Example (Rutgers) &
Explicit Contextual Knowledge in Learning (Rutgers) &
Verification in Model-Based Recognition (MIT) &
Formalizing the Notion of Context (SU),
Conference - 2nd Knowledge Acquisition Workshop
----------------------------------------------------------------------
Date: 5 NOV 86 20:37-EST
From: PJURKAT%SITVXA.BITNET@WISCVM.WISC.EDU
Subject: SPRING RESEARCH SEMINAR AT STEVENS INSTITUTE OF TECHNOLOGY
In association with two other faculty members of the Department of Management,
I plan to offer a semester-long research seminar in the Spring 1987 semester,
entitled
REPRESENTATION OF UNCERTAINTY AND BELIEF IN EXPERT SYSTEMS
To be covered are representations based on Bayesian theory, statistical
inference and sampling distributions, discriminant functions, Shafer's theory
of evidence, and fuzzy set theory. Participants will be asked to concentrate
on finding and testing evidence which supports (or not) any of these theories
as actually being related to the way experts deal with uncertainty and belief.
The other faculty will review the representation work of cognitive science and
experimental psychology.
This note is to ask readers to pass on any recent work in these areas,
particularly any experimental evidence on the actual workings of experts.
We have the Kahneman, Slovic and Tversky book "Judgment under uncertainty:
Heuristics and biases", published in 1982.
I will post any interesting ideas and work that comes out of the seminar.
Thank you for your consideration. Peter Jurkat (pjurkat@sitvxa)
------------------------------
Date: Wed 5 Nov 86 15:50:40-EST
From: Rosemary B. Hegg <ROSIE@XX.LCS.MIT.EDU>
Subject: Seminar - Is Probability Adequate? (MIT)
UNCERTAINTY SEMINAR ON MONDAY
Date: Monday, November 10, 1986
Time: 3.45 pm...Refreshments
4.00 pm...Lecture
Place: NE43-512A
UNCERTAINTY IN AI:
IS PROBABILITY EPISTEMOLOGICALLY AND HEURISTICALLY ADEQUATE?
MAX HENRION
Carnegie Mellon
New schemes for representing uncertainty continue to
proliferate, and the debate about their relative merits seems to
be heating up. I shall examine several criteria for comparing
probabilistic representations to the alternatives. I shall
argue that criticisms of the epistemological adequacy of
probability have been misplaced. Indeed there are several
important kinds of inference under uncertainty which are
produced naturally from coherent probabilistic schemes, but are
hard or impossible for alternatives. These include combining
dependent evidence, integrating diagnostic and predictive
reasoning, and "explaining away" symptoms. Encoding uncertain
knowledge in predictive or causal form, as in Bayes' Networks,
has important advantages over the currently more popular
diagnostic rules, as used in Mycin-like systems, which confound
knowledge about the domain and about inference methods.
Suggestions that artificial systems should try to simulate human
inference strategies, with all their documented biases and
errors, seem ill-advised. There is increasing evidence that
popular non-probabilistic schemes, including Mycin Certainty
Factors and Fuzzy Set Theory, perform quite poorly under some
circumstances. Even if one accepts the superiority of
probability on epistemological grounds, the question of its
heuristic adequacy remains. Recent work by Judea Pearl and
myself uses stochastic simulation and probabilistic logic for
propagating uncertainties through multiply connected Bayes'
networks. This aims to produce probabilistic schemes that are
both general and computationally tractable.
HOST: PROF. PETER SZOLOVITS
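To make the "explaining away" pattern concrete, here is a minimal sketch (not
from the talk itself; all probabilities are invented): two independent binary
causes of one effect, with the posterior of one cause computed by enumerating
the joint distribution. Observing the effect raises belief in the first cause;
additionally observing the second cause lowers it again.

    from itertools import product

    p_c1, p_c2 = 0.1, 0.1              # invented prior probabilities of the two causes

    def p_effect(c1, c2):              # invented conditional probability of the effect
        if c1 and c2: return 0.99
        if c1 or c2:  return 0.90
        return 0.01

    def joint(c1, c2, e):
        p = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2)
        pe = p_effect(c1, c2)
        return p * (pe if e else 1 - pe)

    def posterior_c1(evidence):        # P(C1 = true | evidence); evidence maps names to booleans
        num = den = 0.0
        for c1, c2, e in product([True, False], repeat=3):
            world = {"c1": c1, "c2": c2, "e": e}
            if any(world[k] != v for k, v in evidence.items()):
                continue
            pr = joint(c1, c2, e)
            den += pr
            if c1:
                num += pr
        return num / den

    print(posterior_c1({"e": True}))              # observing the effect makes C1 likely (~0.5)
    print(posterior_c1({"e": True, "c2": True}))  # also observing C2 "explains away" C1 (~0.11)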
------------------------------
Date: Thu, 06 Nov 86 10:19:35 PST
From: IBM Almaden Research Center Calendar <CALENDAR@IBM.COM>
Subject: Seminar - Uncertain Data Management (IBM)
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099
UNCERTAIN DATA MANAGEMENT
L. A. Zadeh, Computer Science Division, University of California, Berkeley
Computer Science Sem. Wed., Nov. 12 10:00 A.M. Room: Rear Audit.
The issue of data uncertainty has not received much attention in the
literature of database management even though the information resident
in a database is frequently incomplete, imprecise or not totally
reliable. Classical probability-based methods are of limited
effectiveness in dealing with data uncertainty, largely because the
needed joint probabilities are not known. Among the approaches which
are more effective are (a) support logic programming which is
Prolog-based, and (b) probabilistic logic. In our approach,
uncertainty is modeled by (a) allowing the entries in a table to be
set-valued or, more generally, to be characterized as possibility
distributions, and (b) interpreting a column as a source of evidence
which may be fused with other columns. This model is closely related
to the Dempster-Shafer theory of evidence and provides a conceptually
simple method for dealing with some of the important types of
uncertainty. In its full generality, the problem of uncertain data
management is quite complex and far from solution at this juncture.
Host: S. P. Ghosh
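One way to picture the approach (a rough sketch of my own, not the speaker's
system; the record and values are invented): a table entry is stored as a
possibility distribution over candidate values, and two columns treated as
separate evidence sources are fused by pointwise minimum and renormalization.

    def fuse(source_a, source_b):
        """Conjunctively combine two possibility distributions over the same domain."""
        combined = {v: min(source_a.get(v, 0.0), source_b.get(v, 0.0))
                    for v in set(source_a) | set(source_b)}
        peak = max(combined.values())
        if peak == 0.0:
            raise ValueError("the two sources are completely conflicting")
        return {v: p / peak for v, p in combined.items()}   # renormalize to height 1

    # Hypothetical record: two columns give partial evidence about a patient's age band.
    age_from_chart   = {"20-39": 1.0, "40-59": 0.7, "60+": 0.1}
    age_from_history = {"40-59": 1.0, "60+": 0.8}
    print(fuse(age_from_chart, age_from_history))   # "40-59" fully possible, the rest discounted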
------------------------------
Date: Wed, 10 Oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Qualitative Reasoning about Mechanisms (SMU)
Dr. Benjamin Kuipers, Qualitative Reasoning About Mechanisms
10:00 AM, Friday, 7 November 1986
The first generation of diagnostic expert systems is based on a simple
model of knowledge: weighted links between observations and diagnoses.
Experience with these systems has revealed a number of limitations in
their performance due to the fact that they do not understand the
mechanism by which a particular fault causes the associated
observations. Recently developed methods for qualitative reasoning
about these underlying mechanisms show promise of being able to extend
the understanding, and hence the power, of diagnostic systems. The
fundamental inference in qualitative reasoning derives the behavior of
a mechanism from a description of its structure. Since both structure
and behavior are represented in qualitative terms, this is essentially
a qualitative abstraction of differential equations. I will derive in
detail the QSIM approach to qualitative reasoning, and demonstrate a
medical example in which QSIM predicts the behavior of a healthy
mechanism, the "broken" mechanism corresponding to a particular
disease, and the response of that broken mechanism to therapy.
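A toy rendering of this qualitative style of description (my sketch, not QSIM
itself): each quantity is summarized by a sign or direction of change, the
qualitative sum of two signs can be ambiguous, and a constraint filter keeps
only the directions of change consistent with the structure.

    INC, STD, DEC = "+", "0", "-"      # qualitative directions: increasing, steady, decreasing

    def sign_add(a, b):
        """Qualitative sum of two signs; None means the result is ambiguous."""
        if a == STD: return b
        if b == STD: return a
        return a if a == b else None

    def consistent(netflow_sign, level_direction):
        """d(level)/dt must agree with the sign of (inflow - outflow)."""
        return netflow_sign is None or netflow_sign == level_direction

    # Bathtub: inflow contributes "+", outflow contributes "-", so the net flow is
    # ambiguous and all three behaviors of the water level survive the filter.
    net = sign_add(INC, DEC)
    print([d for d in (INC, STD, DEC) if consistent(net, d)])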
------------------------------
Date: Wed, 10 Oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Diagnostic Systems (SMU)
Dr. William P. C. Ho
Department of Computer Science and Engineering
Southern Methodist University
IEEE Computer Society Meeting, October 23, 1986
Diagnosis is the process of determining the cause (a set of one or more
physical component faults, the "hypothesis") given the effect (a set of
one or more behavior deviations, the "signature"), for a given mechanism.
Ambiguity in interpreting fault signatures is the diagnosis problem.
I am developing an approach for functional diagnosis of multiple
component faults in mechanisms based on the "constraint satisfaction"
paradigm (as opposed to "heuristic search" or "hypothesize and test").
Component faults and behavior deviations are both represented
qualitatively by a set of 5 possible state values. Diagnostic
reasoning is performed with these representations based on an effect
calculus which quickly combines several single-fault effects into one
multiple-fault effect, without simulation. Diagnostic
reasoning, encapsulated in a set of logical inference rules, is used
to generate constraints, as implications of observed effects, which
prune away subspaces of inconsistent hypotheses. The result is a
complete set of consistent hypotheses which can explain all of the
observed effects.
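The flavor of the constraint-satisfaction step can be sketched as follows (an
illustration of the general paradigm, not Dr. Ho's effect calculus; the
components, state names, and constraints are invented): every assignment of a
qualitative state to each component is a candidate hypothesis, and constraints
derived from observed effects prune the inconsistent ones.

    from itertools import product

    STATES = ["much-low", "low", "normal", "high", "much-high"]   # an assumed 5-value set
    COMPONENTS = ["pump", "valve"]

    # Constraints implied by (invented) observed effects: the observed pressure drop
    # rules out the all-normal hypothesis and any elevated pump output.
    constraints = [
        lambda h: not all(h[c] == "normal" for c in COMPONENTS),
        lambda h: h["pump"] not in ("high", "much-high"),
    ]

    hypotheses = [dict(zip(COMPONENTS, assignment))
                  for assignment in product(STATES, repeat=len(COMPONENTS))]
    consistent = [h for h in hypotheses if all(c(h) for c in constraints)]
    print(len(hypotheses), "->", len(consistent), "hypotheses remain consistent")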
------------------------------
Date: 9 Nov 86 14:00:17 EST
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Programming Descriptive Analogies by Example
(Rutgers)
On Tuesday November 25th, Henry Lieberman of MIT will speak on
"Programming Descriptive Analogies by Example". The abstract follows.
(The exact time will be decided later - it will probably be
10 AM in Hill-250.)
Programming Descriptive Analogies By Example
Henry Lieberman
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
This paper describes a system for "programming by analogy", called Likewise.
Using this new approach to interactive knowledge acquisition, a programmer
presents specific examples and points out which aspects of the examples are
"slippable" to more general situations. The system constructs a general
rule which can then be applied to "analogous" examples. Given a new
example, the system can then construct an analogy with the old example by
trying to instantiate new descriptions which correspond to the descriptions
constructed for the first example. If a new example doesn't fit an old
concept exactly, a concept can be generalized or specialized incrementally
to make the analogy go through. Midway between "programming by example" and
inductive inference programs, Likewise attacks the more modest goal of being
able to communicate to the computer an analogy which is already understood
by a person. Its operation on a typical concept learning task is presented
in detail.
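In rough outline (my reading of the abstract, not code from Likewise), the idea
can be pictured like this: the user marks which aspects of an example are
slippable, those aspects become variables in the rule, and a new example yields
an analogy when the fixed aspects still match.

    def generalize(example, slippable):
        """Replace slippable attributes with wildcards; keep the rest as fixed constraints."""
        return {k: ("?" if k in slippable else v) for k, v in example.items()}

    def instantiate(rule, new_example):
        """Return the analogy bindings if the fixed parts match, otherwise None."""
        bindings = {}
        for k, v in rule.items():
            if v == "?":
                bindings[k] = new_example.get(k)
            elif new_example.get(k) != v:
                return None
        return bindings

    seed = {"shape": "square", "color": "red", "size": "small"}
    rule = generalize(seed, slippable={"color", "size"})        # the shape must stay "square"
    print(instantiate(rule, {"shape": "square", "color": "blue", "size": "large"}))
    print(instantiate(rule, {"shape": "circle", "color": "red", "size": "small"}))   # None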
------------------------------
Date: 9 Nov 86 15:44:30 EST
From: Smadar <KEDAR-CABELLI@RED.RUTGERS.EDU>
Subject: Seminar - Explicit Contextual Knowledge in Learning (Rutgers)
Reminder: Dissertation Defense for Rich Keller
Time and Place: Thursday, Nov. 13, 1:30 p.m., Hill 423
Committee: Tom Mitchell (chair)
Thorne McCarty
Lou Steinberg
Jack Mostow
Abstract:
The Role of Explicit Contextual Knowledge in
Learning Concepts to Improve Performance
Richard M. Keller
(KELLER@RED.RUTGERS.EDU)
This dissertation addresses some of the difficulties encountered
when using artificial intelligence-based, inductive concept learning
methods to improve an existing system's performance. The underlying
problem is that inductive methods are insensitive to changes in the
system being improved by learning. This insensitivity is due to the
manner in which contextual knowledge is represented in an inductive
system. Contextual knowledge consists of knowledge about the context
in which concept learning takes place, including knowledge about the
desired form and content of concept descriptions to be learned (target
concept knowledge), and knowledge about the system to be improved by
learning and the type of improvement desired (performance system
knowledge). A considerable amount of contextual knowledge is
"compiled" by an inductive system's designers into its data structures
and procedures. Unfortunately, in this compiled form, it is difficult
for the learning system to modify its contextual knowledge to
accommodate changes in the learning context over time.
This research investigates the advantages of making contextual
knowledge explicit in a concept learning system by representing that
knowledge directly, in terms of explicit declarative structures. The
thesis of this research is that aside from facilitating adaptation to
change, explicit contextual knowledge is useful in addressing two
additional problems with inductive systems. First, most inductive
systems are unable to learn approximate concept descriptions, even
when approximation is necessary or desirable to improve performance.
Second, the capability of a learning system to generate its own
concept learning tasks appears to be outside the scope of current
inductive systems.
To investigate the thesis, this study introduces an alternative
concept learning framework -- the concept operationalization framework
-- that requires various types of contextual knowledge as explicit
inputs. To test this new framework, an existing inductive concept
learning system (the LEX system [Mitchell et al. 81]) was rewritten as
a concept operationalization system (the MetaLEX system). This
dissertation describes the design of MetaLEX and reports results of
several experiments performed to test the system. Results confirm the
utility of explicit contextual knowledge, and suggest possible
improvements in the representations and methods used by the system.
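One way to picture "contextual knowledge as an explicit input" (a schematic
sketch only; the field names are invented and this is not MetaLEX): the learner
receives declarative descriptions of the target concept requirements and of the
performance system, rather than having them compiled into its procedures.

    from dataclasses import dataclass

    @dataclass
    class TargetConceptKnowledge:
        description_vocabulary: list      # terms the learned description may use
        allow_approximation: bool         # may the description misclassify some instances?

    @dataclass
    class PerformanceSystemKnowledge:
        task: str                         # what the performance system does
        improvement_goal: str             # e.g. "reduce search during problem solving"

    def operationalize(examples, target, performance):
        # A real system would consult both knowledge structures while generalizing;
        # because they are explicit inputs, they can change without rewriting the learner.
        raise NotImplementedError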
------------------------------
Date: Mon, 10 Nov 1986 21:08 EST
From: JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - Verification in Model-Based Recognition (MIT)
THE USE OF VERIFICATION IN MODEL-BASED RECOGNITION
David Clemens, MIT AI Lab
The recognition of objects in images involves a gigantic and complex
search through a library of models. Even for a single model, the
correspondence between parts of the model and parts of the image can
be difficult, especially if parts of the object may be occluded in the
image. Verification is a general search strategy which can reduce the
amount of processing required to find the best image/model match, but
it cannot guarantee that the best match has been found. Verification
is the Test phase of the familiar Hypothesize and Test paradigm, and
is commonly used in the last stages of recognition to weed out final
hypotheses. However, the concept can be applied more generally and
used to drive the recognition process at much earlier stages. Also
called "hypothesis-driven" recognition, this approach allows a more
focused search for evidence to support, invalidate, or modify a
hypothesis, thus decreasing the amount of data processed and improving
the accuracy of the interpretation. Unfortunately, it requires a
commitment to a finite set of initial hypotheses which must include an
early version of correct hypotheses. Thus, there are trade-offs
between hypothesis-driven modules and "data-driven" modules, which
simply process all data uniformly without committing to early
hypotheses. Several recognition systems will be discussed in this
context, demonstrating the strengths and weaknesses of the two basic
approaches applied to visual object recognition.
Thursday, November 13, 4pm
NE43 8th floor playroom
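A generic hypothesize-and-test skeleton (illustrative only; the propose and
verify functions are placeholders, not the speaker's system) shows where
verification enters: a small set of initial hypotheses is proposed from salient
features, and verification scores each candidate against the image, so data
processing stays focused but the true best match can be missed.

    def recognize(image, models, propose, verify, threshold=0.8):
        """Return the best (model, pose) pair whose verification score clears the threshold."""
        best, best_score = None, 0.0
        for model in models:
            for pose in propose(image, model):        # hypothesis-driven: only a few candidates
                score = verify(image, model, pose)    # focused search for supporting evidence
                if score > best_score:
                    best, best_score = (model, pose), score
        return best if best_score >= threshold else None   # no guarantee the true best was found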
------------------------------
Date: 10 Nov 86 1108 PST
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - Formalizing the Notion of Context (SU)
Commonsense and Non-Monotonic Reasoning Seminar
FORMALIZING THE NOTION OF CONTEXT
John McCarthy
Thursday, November 13, 4pm
MJH 252
Getting a general database of common sense knowledge and
expressing it in logic requires formalizing the notion of context.
Since no context is absolutely general, any context must be elaboration
tolerant and we discuss this notion. Another formalism that seems
useful involves entering and leaving contexts; this is a generalization
of natural deduction.
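To give the flavor (a toy rendering of my own, not the formalism of the talk):
a proposition can be asserted to hold in a context, written ist(c, p), and
entering and leaving contexts behaves like introducing and discharging
assumptions in natural deduction.

    facts = set()                       # assertions of the form ("ist", context, proposition)
    context_stack = ["outer"]           # currently entered contexts, innermost last

    def assert_ist(context, prop):
        facts.add(("ist", context, prop))

    def enter(context):
        context_stack.append(context)

    def leave():
        context_stack.pop()

    def holds(prop):
        """A proposition holds if it is asserted in some currently entered context."""
        return any(("ist", c, prop) in facts for c in context_stack)

    assert_ist("sherlock-holmes-stories", "detective(Holmes)")
    enter("sherlock-holmes-stories")
    print(holds("detective(Holmes)"))   # True inside the story context
    leave()
    print(holds("detective(Holmes)"))   # False once the context has been left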
------------------------------
Date: Mon, 10 Nov 86 10:57:39 pst
From: bcsaic!john@june.cs.washington.edu
Subject: Conference - 2nd Knowledge Acquisition Workshop
Call for Participation:
2ND KNOWLEDGE ACQUISITION FOR KNOWLEDGE-BASED SYSTEMS WORKSHOP
Sponsored by the:
AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE (AAAI)
Banff, Canada
October 19-23, 1987
A problem in the process of building knowledge-based systems is acquiring
appropriate problem solving knowledge. The objective of this workshop is to
assemble theoreticians and practitioners of AI who recognize the need for
developing systems that assist the knowledge acquisition process.
To encourage vigorous interaction and exchange of ideas the workshop will be
kept small - about 40 participants. There will be individual presentations
and ample time for technical discussions. An attempt will be made to define
the state-of-the-art and the future research needs. Attendance will be
limited to those presenting their work, one author per paper.
Papers are invited for consideration in all aspects of knowledge acquisition
for knowledge-based systems, including (but not restricted to)
o Transfer of expertise - systems that obtain knowledge from experts.
o Transfer of expertise - manual knowledge acquisition methods and
techniques.
o Apprenticeship learning systems.
o Issues in cognition and expertise that affect the knowledge
acquisition process.
o Induction of knowledge from examples.
o Knowledge acquisition methodology and training.
Five copies of an abstract (up to 8 pages) or a full-length paper (up to 20
pages) should be sent to John Boose before April 15, 1987. Acceptance notices
will be mailed by June 15. Full papers (20 pages) should be returned to the
chairman by September 15, 1987, so that they may be bound together for
distribution at the workshop.
Ideal abstracts and papers will make pragmatic or theoretical contributions
supported by a computer implementation, and explain them clearly in the
context of existing knowledge acquisition literature. Variations will be
considered if they make a clear contribution to the field (for example,
comparative analyses, major implementations or extensions, or other analyses
of existing techniques).
Workshop Co-chairmen:
Send papers via US mail to:
John Boose Brian Gaines
Advanced Technology Center Department of Computer Science
Boeing Computer Services University of Calgary
PO Box 24346 2500 University Dr. NW
Seattle, Washington, USA 98124 Calgary, Alberta, Canada T2N 1N4
Send papers via express mail to:
John Boose
Advanced Technology Center
Boeing Computer Services, Bldg. 33.07
2760 160th Ave. SE
Bellevue, Washington, USA 98008
Program and Local Arrangements Committee:
Jeffrey Bradshaw, Boeing Computer Services
B. Chandrasekaran, Ohio State University
Catherine Kitto, Boeing Computer Services
Sandra Marcus, Boeing Computer Services
John McDermott, Carnegie-Mellon University
Ryszard Michalski, University of Illinois
Mildred Shaw, University of Calgary
------------------------------
End of AIList Digest
********************