AIList Digest            Friday, 15 Nov 1985      Volume 3 : Issue 170 

Today's Topics:
Queries - Semantic Networks & Reason Maintenance System (or TMS),
Representation - Conceptual Dependency and Predicate Calculus,
New BBoard - TI Explorer,
Hype - The Business World Flames as Well as We Do!,
Inference - Rumor, Prejudice, and Uncertainty &
Abduction and AI in Space Exploration

----------------------------------------------------------------------

Date: Mon, 11 Nov 85 09:24:07 EST
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: another request for help

When our system crashed, I also lost the address of a guy in Europe
(Switzerland, I think) who wanted info on semantic networks. I'd
greatly appreciate help on recovering his address. Thanks.

------------------------------

Date: Wed, 13 Nov 85 13:32:10 EST
From: munnari!basser.oz!anand@seismo.CSS.GOV
Subject: Reason Maintenance System(or TMS)

Has anyone implemented an RMS using PROLOG? I would like to know the
pros and cons of such an implementation compared with one in LISP.
Thank you in advance.
-- Anand S. Rao

------------------------------

Date: Wed, 13 Nov 85 13:28:33 EST
From: munnari!basser.oz!anand@seismo.CSS.GOV
Subject: Conceptual dependency & predicate calculus

Perhaps the best work on linking CD and predicate calculus is by John Sowa.
Refer to his book 'Conceptual Structures: Information Processing in Mind
and Machine'. (Reviewed in the AI journal, Sept. 1985.) --- Anand S. Rao

------------------------------

Date: Tue, 12 Nov 85 13:19:23 gmt
From: Patrick Hayes <spar!hayes@decwrl.DEC.COM>
Subject: CD into PC

In response to Bob Stine's request concerning translating CD notation into
1PC: this should by now be regarded as a routine exercise, surely. Since
logic doesn't have such ideas as physical transfer already incorporated into
it, one has to translate into 1PC extended by the choice of a particular
vocabulary of relations, etc., and this can be done in several ways (n-place
relations instead of (n-1)-place function symbols, for example): take your
choice. You will need predicates such as PTRANS, of course, but also relations
(or whatever) corresponding to the various colors of funny-arrow used in CD.

There is a standard way to transform graphical notations into a tree-structured
notation such as 1PC or LISP: each node in the graph becomes a name in the
language, and each link in the graph becomes an assertion that some relation
(which one depends on the color of the link) holds between the entities
named. In this way the graph maps into a conjunction of atomic assertions
in a vocabulary which is just about as simple or complex as that used in the
graphical language.

Several notational tricks can add variety to this simple idea; for example,
instead of mapping <node1>link2<node3> into relation2(thing1,thing3),
one can use exists x. Isrelation(x,type2) & Holds(x,thing1,thing3). This
enables one to write general rules about a number of link types in a few
compact axioms. Ask any experienced logic programmer for more ideas.
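
As a concrete illustration, here is a minimal sketch of that node/link-to-
conjunction mapping, written in Python rather than a logic language; the
tiny CD-style graph, its link names, and the relation vocabulary are all
invented for the example:

    # Each node of the (invented) CD graph becomes a constant name; each
    # coloured link becomes an atomic assertion between the nodes it joins.
    links = [
        ("act1", "isa",       "PTRANS"),   # the action node is a PTRANS
        ("act1", "actor",     "John"),
        ("act1", "object",    "Book1"),
        ("act1", "recipient", "Mary"),
    ]

    # Direct form: one predicate per link colour, e.g. actor(act1,John).
    direct = [f"{rel}({a},{b})" for (a, rel, b) in links]
    print(" & ".join(direct))

    # Reified form: exists x. Isrelation(x,type) & Holds(x,a,b), which lets
    # general axioms quantify over link types in a few compact formulas.
    reified = [f"exists x. Isrelation(x,{rel}) & Holds(x,{a},{b})"
               for (a, rel, b) in links]
    for formula in reified:
        print(formula)

Either form yields just the conjunction of atomic assertions described above;
nothing beyond the choice of vocabulary is added.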
Now, this just translates CD into 1PC notation, of course. To get the
inferential power of CD one then needs to translate the inference rules into
1PC axioms written in the appropriate notation. If you can find the CD
inference rules written out clearly somewhere, this should be straightforward.
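
For instance (this rule is invented for illustration, not taken from the CD
literature), a rule such as "a PTRANS of an object to a recipient results in
the recipient possessing the object" becomes one universally quantified
axiom, which a naive forward chainer can then apply to link triples like
those in the sketch above:

    # Axiom (invented): forall a,o,r.
    #   isa(a,PTRANS) & object(a,o) & recipient(a,r) -> possess(r,o)
    facts = {("act1", "isa",       "PTRANS"),
             ("act1", "object",    "Book1"),
             ("act1", "recipient", "Mary")}

    def apply_ptrans_rule(facts):
        """One naive forward-chaining step for the axiom above."""
        derived = set()
        for (a, rel, t) in facts:
            if rel == "isa" and t == "PTRANS":
                objs = [o for (a2, r2, o) in facts
                        if a2 == a and r2 == "object"]
                recs = [p for (a2, r2, p) in facts
                        if a2 == a and r2 == "recipient"]
                derived |= {(r, "possess", o) for r in recs for o in objs}
        return derived

    print(apply_ptrans_rule(facts))   # {('Mary', 'possess', 'Book1')}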

One might ask whether such a translation actually captures the meaning of CD
adequately. Unfortunately, as (to the best of my knowledge) CD notation has
never been supplied with a clear semantics, this would have to remain a
matter for subjective judgement.

A last observation: if you check the published accounts of MARGIE, one of the
early demonstration systems using CD, you will find that one-third of it was
a program which manipulated CD graphs so as to draw conclusions. In order
to do this, it first translated them into a tree-like notation similar to
that obtained by the above technique.

Pat Hayes

------------------------------

Date: 6 Nov 85 11:04:37 EST
From: Kevin.Neel@ISL1.RI.CMU.EDU
Subject: TI Explorer bbs

[Forwarded from the CMU bboard by Laws@SRI-AI.]

The following was posted on netnews:

>Date: Fri 13 Sep 85 15:16:25-PDT
>From: Richard Acuff <Acuff@SUMEX-AIM.ARPA>
>Subject: New Lists for TI Explorer Discussion

In order to facilitate information exchange among DARPA sponsored
projects using TI Explorers, two ArpaNet mailing lists are being
created. INFO-EXPLORER will be used for general information
distribution, such as operational questions, or announcing new
generally available packages or tools. BUG-EXPLORER will be used to
report problems with Explorer software, as well as fixes. Requests to
be added to or deleted from these lists should be sent to
INFO-EXPLORER-REQUEST or BUG-EXPLORER-REQUEST, respectively. All
addresses are at SUMEX-AIM.ARPA. These lists signify no commitment
from Texas Instruments or Stanford University. Indeed, there is no
guarantee that TI representatives will read the lists. The idea of
the lists is to provide communication among the users of Explorers.

-- Rich Acuff
Stanford KSL

[...]

------------------------------

Date: 11 Nov 85 1549 PST
From: Dick Gabriel <RPG@SU-AI.ARPA>
Subject: The Business World Flames as Well as We Do!

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

You might think that in moving to the business world I've given
up on the joy of seeing first-class flaming in my normal
environment - business ethics and all that.

Wrong!

The following is a quote from a story about Clarity Software Corp's
new ad (soon to appear). Clarity is introducing a product called
``Logic Line-1,'' which is a natural language data retrieval system.
The ad compares their product to competing AI products. They say,
apparently about AI programmers:

``Luckily, we won't have to worry about their rancid cells
polluting mankind's gene pool very long anyhow. Such
brain-damaged geeks tend to die young. If you've recently
spent money on artificial intelligence software, you might be
wishing that a few programmers had croaked before writing
that blithering swill they named AI and palmed off onto you.''

-rpg-

------------------------------

Date: Mon, 11 Nov 85 14:30 EST
From: Mukhop <mukhop%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Rumor, prejudice and the management of uncertainty

AI research in recent years has extensively dealt with the
management of uncertainty. A reasonable approach is to model human
mechanisms for knowledge maintenance. However, these mechanisms are
not perfect since they are vulnerable to rumor and prejudice. Both
traits are universal; the object(s) of rumor or prejudice is a
function of the culture and the times.
Rumoring is illustrated in the following scenario: A passes some
information to B and C, who in turn communicate it to others, and so on.
It is possible for a person to receive the same information from several
sources and consequently have a lot of confidence in its truth.
The underlying uncertainty management calculus seems to be flawed
since it ignores the fact that these sources are not independent.
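
A minimal sketch of this double-counting problem, with hypothetical numbers
and a simple "noisy-OR" combination rule standing in for whatever calculus a
given system actually uses:

    def combine_as_if_independent(confidences):
        """Belief in the proposition if every report were an independent source."""
        disbelief = 1.0
        for c in confidences:
            disbelief *= (1.0 - c)
        return 1.0 - disbelief

    # Five people each repeat the same rumor with confidence 0.6 ...
    print(combine_as_if_independent([0.6] * 5))   # ~0.99: greatly inflated

    # ... but if all five reports trace back to a single original source,
    # only that source's confidence should count.
    print(combine_as_if_independent([0.6]))       # 0.6
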
I would like to see some discussions on the following:

1) In any current AI system, is the test for independence of sources
made prior to updating the uncertainty metric associated with a
proposition? This seems to be especially relevant to
Distributed AI systems.

2) Can someone suggest a model or scenario for prejudice? This may
lead to a test to rid AI systems of it.

3) The human knowledge maintenance system (HKMS) seems to update
knowledge in a reasonable manner when the information is
received from independent sources but behaves erratically when
the sources are not independent. Similarly, do the features of
the HKMS that cause prejudicial reasoning under some circumstances
lead to sound conclusions when certain conditions are met? How else
could the HKMS have evolved in such a way?

4) The human visual system allows optical illusions to be formed,
but is near-perfect for most routine activities (the bedouin
who regularly observes mirages may beg to differ). It has also
had more time to evolve. Is it conceivable that the HKMS will
evolve in time so that it will be robust in the face of rumor
and eliminate prejudicial reasoning? Or is it important to
retain these traits to ensure "the survival of the fittest"?

Uttam Mukhopadhyay
Computer Science Dept.
General Motors Research Labs.
Warren, MI 48090-9055
Phone: (313) 575-2105


[One model of prejudice is based on our propensity for prototype-based
reasoning, combined with our tendency to focus on and remember the more
extreme characteristics of prototypes. The fewer individuals we have seen
from a population, the more certain we are that they are representative.
The work of Kahneman and Tversky seems relevant. -- KIL]

------------------------------

Date: 13 Nov 1985 00:39-EST
From: ISAACSON@USC-ISI.ARPA
Subject: Abduction & AI in space exploration

To my knowledge, abductive inference received some serious
attention from NASA in the early 1980's. There is a heavy volume:
ADVANCED AUTOMATION FOR SPACE MISSIONS, NASA Conference
Publication 2255, Proceedings of the 1980 NASA/ASEE Summer Study,
University of Santa Clara, CA [published end of 1982].

A certain "Space Exploration" team handled, among other things,
futuristic requirements for advanced machine intelligence. (The
task was to design a mission to Titan sometime around the year
2000.) The whole question of abduction and hypothesis-formation was
made a central issue, in competition with "expert systems" soft-
pedaled by certain vested interests. The final "Conclusions and
Recommended Technology Priorities" places the following
recommendation at No. 1:

(1) Machine intelligence systems with automatic hypothesis-
formation [i.e., abduction - jdi] capability are necessary for
autonomous examination of unknown environments. This capacity is
highly desirable for efficient exploration of the Solar System
and is essential for the ultimate investigation of other star
systems. [p. 381]

(Some well-known peddlers of expert systems actually wanted to
send one of their expert systems over there, until confronted by
the question of whose expertise they were going to package into
the explorer...)

That recommendation is derived from the Space Exploration report,
pp. 39-76. That report, on p. 68, states the following conclusion:

Required machine intelligence technologies include:

* Autonomous processing (essentially no programming)

* Autonomous "dynamic" memory

* Autonomous error-correction

* Inherently parallel processing

* Abductive/dialectic logical capabilities

* General capacity for acquisition and recognition of patterns

* Universal "Turing Machine" computability

In the "Technology Assessment" section there are the following
recommendations [p. 351]:

6.2.4 Initial Directions for NASA

Several research tasks can be undertaken immediately by NASA
which have the potential of contributing to the development of a
fully automated hypothesis formulating ability needed for future
space missions:

(1) Continue to develop the perspective and theoretical basis for
machine intelligence which holds that (a) machine intelligence
and especially machine learning rest on a capability for
autonomous hypothesis formation, (b) three distinct patterns of
inference underlie hypothesis formulation - analytic, inductive,
and abductive inference, and (c) solving the problem of
mechanizing abductive inference is the key to implementing
successful machine learning systems. (This work should focus on
abductive inference and begin laying the foundations for a theory
of abductive inference in machine intelligence applications.)

(2) Draw upon the emerging theory of abductive inference to
establish a terminology for referring to abductive inference and
its role in machine intelligence and learning.

(3) Use this terminology to translate the emerging theory of
abductive inference into the terminology of state-of-the-art AI;
use these translations to connect abductive inference research
needs with current AI work that touches on abduction, e.g.,
nonmonotonic logic; and then discuss these connections within the
AI community. (The point of such an exercise is to identify
those aspects of current AI work which can contribute to the
achievement of mechanized and autonomous abductive inference
systems, and to identify a sequence of research steps that the AI
community can take towards this goal.)

(4) Research proposals for specific machine intelligence projects
should explain how the proposed project contributes to the
ultimate goal of autonomous machine intelligence systems which
learn by means of analytic, inductive, and abductive inferences.
Enough is now known about the terms of this criterion to
distinguish between projects which satisfy it and those which do
not.

------------------------------

End of AIList Digest
********************
