NL-KR Digest (7/16/87 16:50:36) Volume 3 Number 4
Today's Topics:
An Unsearchable Problem of Elementary Human Behavior
Seminar - Automated Process Planning using Abstraction (CMU)
Seminar - Planning Actions with Context-Dependent Effects (SRI)
Talk on Visual Formalisms, by David Harel
Conference - SIMULATION AND AI
----------------------------------------------------------------------
Date: Tue, 14 Jul 87 17:21 EDT
From: berke@CS.UCLA.EDU
Subject: An Unsearchable Problem of Elementary Human Behavior
An Unsearchable Problem of Elementary Human Behavior
Peter Berke
UCLA Computer Science Department
The Artificial Intelligence assumption that all human behavior
can eventually be mimicked by computer behavior has been stated
in various ways. Since Newell stated his Problem Space
Hypothesis in 1980, it has taken on a clearer, and thus more
refutable, form. Newell stated his hypothesis thus:
"The fundamental organizational unit of all human goal-oriented
symbolic activity is the problem space." - Newell, 1980.
In the 1980 work, Newell says his claim "hedges on whether
all cognitive activity is symbolic." Laird, Rosenbloom, and
Newell (1985) ignore this hedge and the qualification "goal-
oriented symbolic" when they propose: "Our approach to
developing a general learning mechanism is based on the
hypothesis that all complex behavior - which includes
behavior concerned with learning - occurs as search in problem
spaces." They reference Newell (1980), but their claim is larger
than Newell's original claim.
The purpose of this note is to show that, to be true, Newell's
hypothesis must be taken to mean just that goal-search in a
state-space is a formalism that is equivalent to computing. Then
Newell's Problem Space Hypothesis is simply a true theorem. The
reader is invited to sketch a proof of the mutual
simulatability of Turing computation and a process of goal-search
in a state space. Such a proof has been constructed for every
other prospective universal formalism, e.g., lambda calculus,
recursive function theory, and Post tag systems. That such
universal formalisms are equivalent in this sense led Church
(1936, footnote 3) to speculate that human calculating activity
can be given no more general a characterization.
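As a hint of one direction of such a proof, a Turing-machine
computation can itself be recast as goal-search in a state space: a
state is a machine configuration, each state's successors are given by
the transition function, and the goal is any halting configuration.
The sketch below (in Python; the one-rule machine is a deliberately
trivial invention for illustration, not anything of Newell's or
Church's) makes this concrete.

from collections import deque

def goal_search(initial, successors, is_goal):
    # Generic goal-search in a state space (breadth-first).
    frontier, seen = deque([initial]), {initial}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# A configuration is (control state, tape, head position).  The toy
# machine below writes a 1 over a 0 and halts.
delta = {('q0', '0'): ('halt', '1', +1)}

def tm_successors(config):
    state, tape, head = config
    if state == 'halt' or (state, tape[head]) not in delta:
        return []                              # halting configuration
    new_state, symbol, move = delta[(state, tape[head])]
    return [(new_state, tape[:head] + symbol + tape[head + 1:], head + move)]

print(goal_search(('q0', '0', 0), tm_successors, lambda c: c[0] == 'halt'))
# -> ('halt', '1', 1)

The reverse direction, that such a search is itself Turing-computable
whenever its successor and goal tests are, is the routine half of the
exercise.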
But human behavior is not restricted to calculating activity
(though it seems that at least some human behavior is
calculating). If the Problem Space Hypothesis is taken to
be a stronger statement, that is, as a statement about human
behavior rather than about the formalism of goal-search in a
state-space, then I claim that the following counter-example
shows it to be false.
Understanding a name is an inherently unsearchable problem; it
cannot be represented as search in a state or problem space.
Well, it can be so represented, but then it is not the same
problem. In searching our states for our goal we are solving a
different problem than the original one.
To understand that understanding is (or how it can be) inherently
unsearchable, it is necessary to distinguish between ambiguity
and equivocacy. At first the distinction seems contrived, but
it is required by the assumption that there are discrete
objects called 'names' that have discrete meaning (some other
associated object or objects, see Church 1986, Berke 1987).
An equivocal word/image has more than one clear meaning; an
ambiguous word/image has none. What is usually meant by the
phrase "lexical ambiguity" is semantic equivocacy. Equivocacy
occurs even in formal languages and systems, though in setting up
a formal system one aims to avoid equivocacy. For example, an
expression in a computer language may be equivocal ("of equal
voices"), such as: 'IF a THEN IF b THEN c ELSE d'. The whole
expression is equivocal depending on which 'IF' the 'ELSE' is
paired with. In this case there are two clear meanings, one for
each choice of 'IF'.
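For concreteness, the two readings can be written out explicitly, for
example in Python, whose indentation forces the pairing to be chosen;
a, b, c, and d are the placeholders from the expression above, given
trivial invented values so the sketch runs.

# Placeholder values for the parts of the expression.
a, b = True, False
def c(): print("c")
def d(): print("d")

# Reading 1: the ELSE pairs with the inner IF.
if a:
    if b:
        c()
    else:
        d()          # prints "d" when a is true and b is false

# Reading 2: the ELSE pairs with the outer IF.
if a:
    if b:
        c()
else:
    d()              # reached only when a is false

With a true and b false, the first reading performs d and the second
performs nothing; that difference is exactly the choice of 'IF' left
open by the original expression.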
On the other hand, 'ELSE', taken in isolation, is ambiguous
("like both"): its meaning is not one or many alternatives, but
it is like all of them. [The reader, especially one who may
claim that 'ELSE' has no meaning in isolation, may find it
valuable to pause at this point to write down what 'ELSE' means.
Several good attempts can be generated in very little time,
especially with the aid of a dictionary.]
Resolving equivocacy can be represented as search in a state
space; it may very well BE search in a state space. Resolving
ambiguity cannot be represented as search in a state space.
Resolving environmental ambiguity is the problem-formulation
stage of decision making; resolving objective ambiguity is the
object-recognition phase of perception.
The difference between ambiguity and equivocacy is a reason why
object-recognition and problem-formulation are difficult
programming and management problems, only iteratively
approximable by computation or rational thought. A state space
is, by definition, equivocal rather than ambiguous. If we
confuse ambiguity with equivocacy, ambiguity resolution may
seem like search in goal space, but this ignores the process of
reducing an ambiguous situation to an equivocal one much
the way Turing (1936) consciously ignores the transition of a
light switch from OFF to ON.
A digital process can approximate an analog process, yet we
distinguish the digital process from the analog one. Similarly,
an equivocal problem can approximate an ambiguous problem, but
the approximating problem differs from the approximated one.
Even if a bank of mini-switches can simulate a larger light
switch moving from OFF to ON, we don't evade the problem of
switch transition, we push it "down" a level, and then ignore
it. Even if we can simulate an ambiguity by a host of
equivocacies, we don't thereby remove ambiguity, we push it
"down" a level, and then ignore it.
Ambiguity resolution cannot be accomplished by goal-search in a
state space. At best it can be pushed down some levels.
Ambiguity must still be resolved at the lower levels. It doesn't
just go away; ambiguity resolution is the process of it going
away. Representation may require ambiguity resolution, so the
general problem of representing something (e.g., problem solving,
understanding a name) as goal-search in a state space cannot
be represented as goal-search in a state space.
This leads me to suspect what may be a stronger result:
"Representing something" in a given formalism cannot be
represented in that formalism. For example, "representing a
thought in words," that is, expression, cannot be represented in
words. "What it is to be a word" cannot be expressed in words.
Thus there can be no definition of 'word', nor, therefore, of 'language'.
Understanding a word, if it relies on some representation of
"what it is to be a word" in words, cannot be represented in
words.
The meaning of a word is in this way precluded from being (or
being adequately represented by) other words. This agrees with
our daily observations that "the meaning of a word" in a
dictionary is incomplete. Not all words need be impossible to
define completely; it is enough for this argument that some of
them are.
It also agrees with Church's 1950 arguments on the contradictions
inherent in taking words to be the meaning of other words.
If understanding cannot be represented in words, it can never be
well-defined and cannot be programmed. In programming, we can
and must ignore the low-level process of bit-recognition because
it is, and must be, implemented in hardware. Similarly,
hardware must process ambiguities into equivocacies for
subsequent "logical" processing.
We are thus precluded from saying how understanding works, but
that does not preclude us from understanding. Understanding a
word can be learned, as humans demonstrate daily. Thus
learning is not exhausted by any (word-expressed) formalism.
One example of a formalism that does not exhaust learning
behavior is computation as defined (put into words) by Turing.
Another is goal-search in a state-space as defined (put into
words) by Newell.
References:
Berke, P., "Naming and Knowledge: Implications of Church's
Arguments about Knowledge Representation," in revision for
publication, 1987.
Church, A., "An Unsolvable Problem of Elementary Number Theory"
(presented to the American Mathematical Society, April 19, 1935),
American Journal of Mathematics, 58 (1936), 345-363.
Church, A., "On Carnap's Analysis of Statements of Assertion
and Belief," Analysis, 10:5, pp. 97-99, April 1950.
Church, A., "Intensionality and the Paradox of the Name
Relation," Journal of Symbolic Logic, 1986.
Laird, J.E., P.S. Rosenbloom, and A. Newell, "Towards Chunking as
a General Learning Mechanism," Technical Report CMU-CS-85-110,
Carnegie-Mellon University, 1985.
Newell, A., "Reasoning, Problem Solving, and Decision Processes:
The Problem Space as a Fundamental Category," Chapter 35 in R.
Nickerson (ed.), Attention and Performance VIII, Erlbaum, 1980.
Turing, A.M., "On Computable Numbers, with an Application to the
Entscheidungsproblem," Proceedings of the London Mathematical
Society, 42-2 (1936-7), 230-265; correction, ibid., 43 (1937),
544-546.
------------------------------
Date: Tue, 30 Jun 87 12:21 EDT
From: Marcella.Zaragoza@isl1.ri.cmu.edu
Subject: Seminar - Automated Process Planning using Abstraction (CMU)
[Excerpted from AIList]
SPECIAL SEMINAR
TOPIC: AUTOMATED PROCESS PLANNING USING HIERARCHICAL ABSTRACTION *
WHO: Dana S. Nau
Computer Science Department and Institute for
Advanced Computer Studies, University of Maryland, and
Factory Automation Systems Division, National Bureau of Standards
WHEN: Monday, July 6, 10:00-11:30 a.m.
WHERE: WeH 4623
ABSTRACT
SIPS is a system which uses AI techniques to decide what machining
operations to use in the creation of metal parts. SIPS generates its
plans completely from scratch, using the specification of the part to be
produced and knowledge about the intrinsic capabilities of each
manufacturing operation.
Rather than using a rule-based approach to knowledge representation,
SIPS uses a hierarchical abstraction technique called hierarchical knowledge
clustering. Problem-solving knowledge is organized in a taxonomic hierarchy
using frames, and problem solving is done using an adaptation of Branch and
Bound.
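(As a rough, invented illustration of the general shape of such a
search, and not SIPS's actual representation or code, the Python
sketch below runs branch-and-bound over a small made-up taxonomy of
operations, each class carrying an optimistic lower-bound cost.)

import heapq

# Invented toy taxonomy: name -> (lower-bound cost, subclasses).
# Leaves are concrete operations whose bound equals their true cost.
taxonomy = {
    'machining':   (1.0, ['hole-making', 'milling']),
    'hole-making': (2.0, ['drilling', 'boring']),
    'milling':     (4.0, ['end-milling']),
    'drilling':    (3.0, []),
    'boring':      (5.0, []),
    'end-milling': (4.5, []),
}

def branch_and_bound(root):
    # Best-first expansion of the cheapest frontier node.  Because a
    # class's bound never exceeds the cost of any operation beneath it,
    # the first leaf popped is a cheapest concrete operation.
    frontier = [(taxonomy[root][0], root)]
    while frontier:
        bound, node = heapq.heappop(frontier)
        children = taxonomy[node][1]
        if not children:
            return node, bound
        for child in children:
            heapq.heappush(frontier, (taxonomy[child][0], child))
    return None

print(branch_and_bound('machining'))   # -> ('drilling', 3.0)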
The development of SIPS was done with two long-term goals in mind:
the use of AI techniques to develop a practical generative process planning
system, and the investigation of fundamental AI issues in representing and
reasoning about three-dimensional objects. SIPS represents an important
step toward these goals, and a number of extensions and enhancements to SIPS
are either underway or planned. SIPS is currently being integrated into the
Automated Manufacturing Research Facility (AMRF) project at the National
Bureau of Standards.
* This work has been supported in part by the following sources: an NSF
Presidential Young Investigator Award to Dana Nau, NSF Grant NSFD
CDR-85-00108 to the University of Maryland Systems Research Center, IBM
Research, General Motors Research Laboratories, and Martin Marietta
Laboratories.
------------------------------
Date: Tue, 30 Jun 87 14:52 EDT
From: Amy Lansky <lansky@venice.ai.sri.com>
Subject: Seminar - Planning Actions with Context-Dependent Effects (SRI)
[Excerpted from AIList]
SYNTHESIZING PLANS THAT CONTAIN ACTIONS
WITH CONTEXT-DEPENDENT EFFECTS
Edwin P.D. Pednault (VAX135!EPDP@UCBVAX.BERKELEY.EDU)
Knowledge Systems Research Department
AT&T Bell Laboratories
Crawfords Corner Road
Holmdel, NJ 07733
11:00 AM, MONDAY, July 6
SRI International, Building E, Room EJ228
In this talk, I will present an approach to solving planning problems
that involve actions whose effects depend on the state of the world at
the time the actions are performed. To solve such problems, the idea
of a secondary precondition is introduced. A secondary precondition
for an action is a condition that must be true at the time the action
is performed for the action to have its desired effect. By imposing
the appropriate secondary precondition as an additional precondition
to an action, we can coerce that action to preserve a desired
condition or to cause a desired condition to become true. I will
demonstrate the use of secondary preconditions and show how they can
be derived from the specification of a planning problem in a
completely general and domain-independent fashion.
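(A rough illustration of the idea, using the familiar toy briefcase
domain rather than anything taken from the talk: moving the briefcase
has the context-dependent effect of also moving whatever happens to be
inside it, so the secondary precondition for the action to achieve
"the paycheck is at the office" is that the paycheck be in the
briefcase when the action is performed. The Python sketch below is
invented for illustration only.)

def move_briefcase(state, dest):
    # Context-dependent effects: the briefcase goes to dest, and each
    # object also goes to dest *if* it is currently in the briefcase.
    at = dict(state['at'])
    at['briefcase'] = dest
    for obj in state['in_briefcase']:
        at[obj] = dest
    return {'at': at, 'in_briefcase': set(state['in_briefcase'])}

def secondary_precondition(obj):
    # Condition that must hold when the action is performed for it to
    # have the desired effect of bringing obj along to the destination.
    return lambda state: obj in state['in_briefcase']

state = {'at': {'briefcase': 'home', 'paycheck': 'home'},
         'in_briefcase': {'paycheck'}}

# Imposing the secondary precondition coerces the action to cause the
# desired condition "paycheck at office".
if secondary_precondition('paycheck')(state):
    state = move_briefcase(state, 'office')

print(state['at'])   # -> {'briefcase': 'office', 'paycheck': 'office'}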
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: Thu, 2 Jul 87 13:15 EDT
From: Yishai Feldman <YISHAI%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Talk on Visual Formalisms, by David Harel
DATE: Wednesday, July 8, 1987
TIME: Refreshments 4:00 P.M.
Seminar 4:15 P.M.
PLACE: NE43-512A
ON VISUAL FORMALISMS
David Harel
Weizmann Inst., Rehovot, Israel
(visiting CMU for 1986-7)
A general diagrammatic object, the higraph, is presented. Higraphs
borrow and extend ideas from Euler circles, Venn diagrams, graphs and
hypergraphs. They constitute a visual formalism for describing various
kinds of complex entities, particularly those that involve many sets
of objects having intricate structural (i.e., set-theoretic)
interrelationships as well as additional relations of a dynamic,
causal, or other nature.
Higraphs appear to have many applications, as well as a rich theory
that awaits further research. We shall exhibit a number of
applications in database theory (entity-relationship diagrams),
artificial intelligence (semantic and associative nets) and the
specification of complex concurrent reactive systems (state-transition
diagrams). As far as the latter goes, a higraph-based extension of
conventional state-transition diagrams, statecharts, will be described
in the talk in some detail. Statecharts support clear, precise and
compact descriptions, and have been used in the design of a number of
large real-world systems.
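(As a rough, invented sketch of the basic ingredients, not Harel's own
definitions: a higraph differs from an ordinary graph in allowing
blobs to contain other blobs, denoting set inclusion, with edges
permitted between blobs at any level; the orthogonal partitioning of
blobs that higraphs also provide is omitted here. The Python below is
illustrative only.)

from dataclasses import dataclass, field

@dataclass
class Blob:
    # A higraph node; "depth" means a blob may contain other blobs.
    name: str
    children: list = field(default_factory=list)

    def contains(self, other):
        return other in self.children or any(
            child.contains(other) for child in self.children)

# A statechart-like fragment: superstate "on" contains two substates.
idle, running = Blob('idle'), Blob('running')
on, off = Blob('on', [idle, running]), Blob('off')

# Edges may connect blobs at any level of containment.
edges = [(off, on, 'power'),        # entering the superstate
         (idle, running, 'start'),  # transition between substates
         (on, off, 'power')]        # leaving "on" from any of its substates

print(on.contains(running))   # -> True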
HOST: Professor Nancy A. Lynch
------------------------------
Date: Thu, 25 Jun 87 10:50 EDT
From: Paul Fishwick <fishwick%bikini.cis.ufl.edu@RELAY.CS.NET>
Subject: Conference - SIMULATION AND AI
ANNOUNCEMENT AND CALL FOR PAPERS
SIMULATION AND ARTIFICIAL INTELLIGENCE CONFERENCE
Part of the 1988 SCS MultiConference
San Diego, CA Feb 3-5, 1988
Paper and Special Session Proposals should be sent to SCS (Society for
Computer Simulation) by July 15, 1987 [note: the deadline has been extended].
Some suggested topics are listed below:
Relation between AI and Simulation
Intelligent Simulation Environments
Knowledge-Based Simulation
Decision Support Systems
Qualitative Simulation (there will be a panel discussion on this topic)
Simulation in AI
Ada and AI and Simulation
Aerospace Applications
Biomedical Applications
Expert Systems in Emergency Planning
Automatic Model Generation
Expert Systems
Learning Systems
Natural Language Processing
Robotics
Speech Recognition
Vision
AI Hardware/Workstations
AI Programming Languages
AI/ES Software Tools
A paper proposal should be submitted (approx. 300 words) to:
SCS
P.O. Box 17900
San Diego, CA 92117-7900
People attending the AI and Simulation workshop at AAAI and others interested
in AI and Simulation are strongly encouraged to attend!
Paul Fishwick
University of Florida
CSNET: fishwick@ufl.edu
------------------------------
End of NL-KR Digest
*******************