AIList Digest Volume 7 Issue 041


AIList Digest           Wednesday, 22 Jun 1988     Volume 7 : Issue 41 

Today's Topics:

Representation languages
AI language
determinism a dead issue?
Ding an sich
The primacy of scientific physical reality?
Cognitive AI vs Expert Systems

----------------------------------------------------------------------

Date: Mon, 20 Jun 88 08:16:19 PDT
From: smoliar@vaxa.isi.edu (Stephen Smoliar)
Subject: Re: representation languages

In article <19880615061555.7.NICK@INTERLAKEN.LCS.MIT.EDU> Ian
Dickinson writes:
>Date: Tue, 14 Jun 88 05:42 EDT
>From: Ian Dickinson <ijd%otter.lb.hp.co.uk@RELAY.CS.NET>
>To: ailist@mc.lcs.mit.edu
>Subject: Re: representation languages
>
>Whilst I have no doubt that these systems [KEE and ART] are useful
>today, _I_ as an application developer want to see a representation
>system that is maximally small whilst giving me the power that I need.
>The philosophy I would like to see adopted is:
> o define conceptual representations that allow applications to be
>   written at the maximum level of abstraction (eg generic tasks)
> o define the intermediate representations (frames, rules, sets ..)
>   that are needed to implement the conceptual structures
> o choose a subset of these representations that can be maximally
>   tightly integrated with the base language of your choice (which
>   would not be Lisp in my choice)
>
These are admirable desiderata, but they may not be sufficient to stave off
the dreaded "good feature explosion." Rather, this malady is a consequence
of a desire we seem to have that our representation systems both RECORD
and REASON ABOUT "units" of knowledge (whatever those units may be).
(Pace Mark Stefik; I know I have lifted the name of a knowledge
representation system in my choice of words.) We take it for granted
that we want both facilities. If all we were doing was recording, all
we would have would be a data base; and if all we were doing was
reasoning, all we would have would be a theorem prover. I would claim
that our cultural expectations of a knowledge representation system
have grown out of a desire to assimilate these two capabilities.

Unfortunately, both capabilities turn out to be extremely demanding of
computational resources. As a result, it has been demonstrated that
even some of the simplest attempts to find a viable middle ground can
easily lead to computational intractability (particularly if a clean
semantic foundation is one of your desiderata). As a result, now may
be a good time to question whether or not the sort of "homogeneous
assimilation" of recording and reasoning which is to be found in many
knowledge representation systems is such a good thing. Perhaps it would
be more desirable to have TWO facilities which handle record keeping and
reasoning as independent tasks and which communicate through a protocol
which does not impede their interaction.

Here at ISI we have been exploring means by which expert systems can give
adequate explanatory accounts of their own behavior. We have discovered
that an important element in the service of such explanation is a
TERMINOLOGICAL FOUNDATION, which amounts to a means by which all
symbols which are used as part of the problem solving apparatus of
the expert system also have a semantic support which links them to
the text generation facilities required in explanation. Thus, for
example, "fever" is not treated simply as a symbolic variable which
gets set to T if a patient's temperature is more than 100 degrees
Fahrenheit but may then get set back to NIL if it is discovered that
the patient had been drinking hot coffee just before the nurse took
his temperature; rather it is a "word" which serves as a key to certain
knowledge about patient conditions, as well as knowledge about how it may
be detected and knowledge about its consequences. In keeping with the
aforementioned attempt to separate the concerns of recording and reasoning,
we have developed a facility (currently called HI-FI) for recording
such terminological knowledge in such a way as to SUPPORT (but not
necessarily PERFORM) subsequent reasoning.

In pursuing this approach, we have developed a set of "terminological
building blocks," which seem to be at least partially sympathetic to Ian
Dickinson's philosophy. Here is a quick outline:

ACTIONS are the "verbs" of the terminological foundation.
They provide the basis for the expression of both the statements
of problems and the statements of solution methods. While they
definitely have a "generic" quality, I am not yet sure that they
bear much resemblance to Chandrasekaran's generic tasks. Since
this material is relatively new, that possibility remains to be
investigated.

TYPES are "nouns" which designate classes (i.e. categories of
entities). They intentionally bear resemblance to both frame
descriptions and object classes which may be found in object-
oriented languages.

INSTANCES are the entities which are members of classes. An
instance may be a member of several classes. However,
determining whether or not a given instance is a member
of a given class may often be a matter of reasoning rather
than retrieval from a base of recorded facts.

PROPERTIES are unary predicates applied to instances.

RELATIONS are binary predicates applied to instances.

ASSERTIONS are sentences in a typed predicate calculus whose
types are the type classes. These sentences are "about" instances;
but they may incorporate expressions of descriptions of types,
properties, and relations.

A major application of assertions is the representation of DEFINITIONAL
FORMS. These are sentences which establish necessary and/or sufficient
conditions for membership in a type class or for the satisfaction of a
property or relation. These assertions are the major link to the reasoning
facility.
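One way to make these building blocks concrete is a toy data-structure
sketch. To be clear, this is my own hypothetical rendering, not HI-FI's
actual interface (which is not given here); it only RECORDS facts, in
keeping with the separation from reasoning argued for above, and every
name in it is illustrative.

```python
# A hypothetical sketch of the building blocks above as plain Python
# data structures. All names are illustrative, not HI-FI's interface.

class Type:
    """A 'noun' designating a class of entities."""
    def __init__(self, name):
        self.name = name

class Instance:
    """An entity; it may be a member of several types at once."""
    def __init__(self, name, types=()):
        self.name = name
        self.types = set(t.name for t in types)

# The record-keeping side: properties (unary predicates) and relations
# (binary predicates) are simply stored as facts, supporting but not
# performing subsequent reasoning.
facts = set()

def record_property(prop, inst):          # PROPERTY: unary predicate
    facts.add((prop, inst.name))

def record_relation(rel, a, b):           # RELATION: binary predicate
    facts.add((rel, a.name, b.name))

patient = Type("Patient")
condition = Type("Condition")
smith = Instance("smith", [patient])
fever = Instance("fever", [condition])

# "fever" is a term linked to other knowledge, not a boolean flag:
record_relation("has-condition", smith, fever)
record_property("elevated-temperature", smith)

assert ("has-condition", "smith", "fever") in facts
```

A separate reasoning facility would consume these recorded facts through
whatever protocol the two components agree on.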

The above is a sketch of work which has just gotten under way. However,
the approach has already been pursued in some test cases concerned with
reasoning about digital circuits; and it appears to be promising. I
plan to have more disciplined documentation of our work ready in the near
future; but while I am engaged in preparing such documents, I am
interested in any feedback regarding similar (or contrary) approaches.

------------------------------

Date: Tue, 21 Jun 88 08:23 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: AI language

Distribution-File:
AILIST@AI.AI.MIT.EDU

In a recent AIList issue, Pat Hayes wished to get a list of desirable
features for an AI language.

My opinion is that we need new theoretical formalisms for expressing
intelligence and the world of a human being for the basis of new AI
languages.

It seems to me that almost all successful programming languages have a
good background formalism. APL has Iverson's array notation. Modern
Lisp (CommonLOOPS and Zetalisp) has several: functional
programming, the idea of the list, object-oriented programming. The
Algol/Pascal family of languages has the idea of expressing the
language unambiguously with the Backus-Naur notation.

Andy Ylikoski

------------------------------

Date: 21 Jun 88 17:34:40 GMT
From: umix!umich!eecs.umich.edu!itivax!dhw@uunet.UU.NET (David H.
West)
Subject: Re: determinism a dead issue?


In a previous article, Bruce E. Nevin writes:
> Is the notion of determinism not deeply undercut by developments in
> study of nonlinearity and Chaos?

No. (Or, if you prefer, "it depends".) I take it that "determinism"
is for present purposes equivalent to "predictability". Then:
1) Nonlinearity is strictly irrelevant - it just makes the math more
difficult, but determinism requires only the existence (*) of a
solution, not that it be easy to compute, or that it be computed
at all;
2) Chaos means (in the continuum view of things) that some quantity
has with respect to some initial parameter a derivative the
magnitude of which becomes unbounded for large times, i.e.
adjacent trajectories diverge. All this means is that to predict
further into the future, one needs increasingly precise knowledge
of the initial conditions. Infinite precision suffices ;-) for
infinite-time prediction. Remember, Laplace assumed he could
have the positions and velocities of every
particle in the universe. Anyone who grants that would be
niggardly to refuse infinite precision.
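Point (2) is easy to see numerically. As a hedge: the logistic map below
is my own stock example of a chaotic system, not something the poster
mentions; it just shows adjacent trajectories diverging.

```python
# Numerical illustration of point (2): adjacent trajectories of a
# chaotic map diverge, so predicting further into the future demands
# increasingly precise knowledge of the initial conditions.
# The logistic map at r = 4 is a standard chaotic example.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12   # two nearly identical initial conditions
gaps = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

assert gaps[0] < 1e-9     # still indistinguishable after one step...
assert max(gaps) > 1e-3   # ...but the trajectories separate dramatically
```

The gap grows roughly exponentially until it saturates at the size of
the attractor, which is exactly the "need more precision to predict
further" phenomenon described above.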

Determinism *is* undercut by quantum mechanics, but that's
encouraging only to those who identify randomness with freedom.

(*) There are epistemological problems here. One can certainly
prove things about the existence of solutions to equations, but
notwithstanding that we know some equations that describe the world
to some degree of approximation, it is clearly impossible for finite
beings to prove or know that (P:) "equations (currently known or
otherwise) describe the world exactly". Such beings (e.g. us) can
believe P or not, as it suits them (or as they are determined ;-).

> Is it the case that systems
> involving nonlinearity always involve feedback or feedforward loops?

Any system worthy of the name has loops, and linearity is only a
special case, so this is a good bet, but there are counterexamples
(see below).

> (Isn't it mutual effect of the values
> of two or more variables on one another that makes an equation
> nonlinear, and isn't that a way of expressing feedback or feedforward?

No. Consider the nonlinear equation y=sqrt(x).

> Is it that
> nonlinear systems are not error correcting? Or perhaps that they are
> analog rather than digital systems? Are massively parallel systems
> nonlinear, or do they tend to be? Does the distinction apply to now
> familiar characterizations of brain hemisphere specialization?

The answer is probably "not necessarily".

> This has relevance to how an AI based on deterministic, linear systems
> can do what nonlinear organisms do.

Whose AI is based on *linear* systems? Logic circuits are nonlinear,
semantic networks are nonlinear, connectionist networks are
nonlinear...

------------------------------

Date: 20 Jun 88 0322 PDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: Ding an sich

I want to defend the extreme point of view that it is both
meaningful and possible that the basic structure of the
world is unknowable. It is also possible that it is
knowable. It just depends on how much of the structure
of the world happens to interact with us. This is like
Kant's "Ding an sich" (the thing in itself) except that
I gather that Kant considered "Ding an sich" as unknowable
in principle, whereas I only consider that it might be
unknowable.

The basis of this position is the notion of evolution
of intelligent beings in a world not created for their
scientific convenience. There is no mathematical theorem
requiring that if a world evolves intelligent beings,
these beings must be in a position to discover all its
laws.

To illustrate this idea, consider the Life cellular
automaton proposed by John Horton Conway and studied
by him and various M.I.T. hackers and others. It's
described in Winning Ways by Berlekamp, Conway and
Guy.

Associated with each point of the two dimensional
integer lattice is a state that takes values 0 and
1. The state of a point at time t+1 is determined
by its state at time t and the states at time t of
its eight neighbors. Namely, if the number of
neighbors in state 1 is less than two or more than
three, its state at time t+1 is 0. If it has exactly two
neighbors in state 1, its state remains as it was.
If it has exactly 3 neighbors in state 1, its
new state is 1.

There is a configuration of five cells in state 1 (with neighbors
in state 0) called a glider, which reproduces itself displaced one
cell diagonally every four units of time. There is a configuration called a glider
gun that emits gliders. There are configurations that thin out
streams of gliders from a glider gun. There are configurations
that take two streams of gliders as inputs and perform logical
operations (regarding the presence of a glider
at a given time in the stream as 1 and its absence
as 0) on them producing a new stream. Thinned streams can
cross each other and serve as wires conducting signals.
This permits the construction of general purpose computers
in the Life plane.
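The rule and the glider's displacement can both be checked in a few
lines. The set-of-live-cells representation and the function name below
are my own, not from Winning Ways:

```python
from collections import Counter

def step(live):
    """One generation of the Life rule described above.
    `live` is the set of (x, y) lattice points in state 1."""
    # Count, for every cell, how many of its eight neighbors are in state 1.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # State 1 at t+1: exactly three neighbors in state 1, or exactly
    # two neighbors in state 1 while already in state 1.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The five-cell glider; after four generations it reappears shifted
# one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

The assertion at the end confirms the displacement property that glider
guns and glider-stream logic depend on.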

The Life automaton wasn't designed to admit computers. The
discovery that it did was made by hacking. Configurations
that can serve as general purpose computers can be made
in a variety of ways. The way indicated above and more
fully described in Berlekamp et al. is only one.

Now suppose that one or more interacting Life computers are
programmed to be physicists, i.e. to attempt to discover
the fundamental physics of their world. There is no reason
to expect a mathematical theorem about cellular automata
in general or the Life cellular automaton in particular
that says that a physicist program will be able to discover
that the fundamental physics of its world is the Life
cellular automaton.

It requires some extra attention in the design of the computer to make
sure that it has any capability to observe at all, and some that can
observe will be unable to observe enough detail. Of course, we could
program a Life computer to simulate some other "second level" cellular
automaton that admits computers, and give the "second level computer" only
the ability to observe the "second level world". In that case, it surely
couldn't find any evidence for its world being the Life cellular
automaton. Indeed the Life automaton could simulate exceedingly slowly
any theory we like of our 3+1 dimensional world.

If a Life world physicist is provided with too narrow a philosophy
of science, and some of the consensual reality theories may indeed
be that narrow, it might not regard the hypothesis that its physics
is the Life world as meaningful. There may be Life world physicists who
regard it as meaningful and Life world philosophers of science
interacting with them who try to forbid it.

This illustrates what I mean by metaepistemology. Metaepistemology must
relate what knowledge is possible for intelligent beings in a world to
the structure of that world and to the physical structures and
computational programs that support scientific activity.

The traditional methods of philosophy of science are too weak to discuss
these matters, because they don't take into account how the structure of
the world and the structure of its intelligences affect what science is
possible. There is no more guarantee that the structure of our
world is observable than that Fermat's last theorem is decidable
in Peano arithmetic. Physicists are always proposing theories
of fundamental physics whose testability depends on the correctness
of other theories and the development of new apparatus. For example,
some of the current GUT theories predict unification of the
force laws at energies of 10^15 MeV, and there is no current
idea of how an accelerator producing such an energy might
be physically possible.

I have received messages asking me if the metaepistemology I propose
is like what has been proposed by Kant and other philosophers
or even by Winograd and Flores. As far as I can tell it's not,
and all those mentioned are subject to the criticism of the
previous paragraph.

------------------------------

Date: Mon 20 Jun 88 10:20:14-PDT
From: Mike Dante <DANTE@EDWARDS-2060.ARPA>
Subject: Re: AIList Digest V7 #39

I can't resist replying to George McKee's insistence that "one description
of the collective experience of humanity ... outranks all the alternatives ...
(that is, the ) primacy of scientific physical reality". The statement is of
course true only if you exclude from the "collective experience of humanity"
all of history, aesthetics, human relationships, and self understanding. But
I think you must also exclude George's own belief that we are only a
"quantitative step away" from "telling us what we need to know." This non-
scientific belief was held by Marx, Freud, etc., etc., all of whom wished to
believe that the crystal purity and certainty of the scientific method had
solved mankind's ills. It seems to me that the evidence necessary to support
this belief would be at least some demonstrated success showing we were only
some sort of "quantitative step away" from knowing how to close prisons and
mental hospitals, and abolish greed, fear, and war. So far I see no scientific
evidence that we have more than a laundry list of things to try, and a much
longer list of things that have been tried and have failed. To extrapolate
from the limited (though exciting and important) successes of the scientific
method in these fields to an assertion that we are only quantitatively distant
from describing "the collective experience of humanity" seems to me a great
deal less justified than the belief that was expressed by the Dean of American
Science in the 19th Century, that all of Physics had been learned and all that
was left was quantitative improvements.

------------------------------

Date: 21 Jun 88 03:02:26 GMT
From: krulwich-bruce@yale-zoo.arpa (Bruce Krulwich)
Subject: Re: Cognitive AI vs Expert Systems

In a previous post, I claimed that there were differences between people
doing "hard AI" (trying to achieve serious understanding and intelligence)
and "soft AI" (trying to achieve intelligent behavior).

dg1v+@ANDREW.CMU.EDU (David Greene) responds:
>Since my researchs concerns developing knowledge acquisition approaches (via
>machine learning) to address real world environments, I'm well aquainted with
>not only the above literature, but psych, cog psych, JDM (judgement and
>decision making), and BDT (behavioral decision theory).
>
>While I suspect AI researchers who work in Expert System might resent being
>excluded from work in "serious intelligence", I think my point is that, for a
>given phenomenon, multiple viewpoints from different disciplines (literature)
>can provide important breadth and insights.

I agree fully, and I think you'll find this in the references section of a
lot of "hard AI" research work. (As a matter of fact, a fair number of
researchers in "hard AI" are professors in, or have degrees in, psychology,
linguistics, etc.) I'm sorry if my post seemed insulting -- it wasn't
intended that way. I truly believe, however, that there are differences in
the research goals, methods, and results of the two different areas. That's
not a judgement, but it is a difference.

Bruce Krulwich

------------------------------

End of AIList Digest
********************
