AIList Digest            Sunday, 24 Jul 1988       Volume 8 : Issue 23 

Today's Topics:

Philosophy:

lim(facts about the world) -> ding an sich ?
Critique of Systems Theory
Are all Reasoning Systems Inconsistent?
Generality in Artificial Intelligence
Metaepistemology and unknowability

----------------------------------------------------------------------

Date: Sun, 17 Jul 88 11:23:37 EDT
From: George McKee <mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET>
Subject: lim(facts about the world) -> ding an sich ?

In AIList Digest v7 #41 John McCarthy wrote
> There is no reason to expect a mathematical theorem about
> cellular automata in general or the Life cellular automaton
> in particular that says that a physicist program will be able
> to discover that the fundamental physics of its world is the
> Life cellular automaton.

This may be true if one thinks in terms of cellular automata, but
if one thinks in other terms that are convertible to statements
about automata, say the lambda calculus, the possible existence of
such a theorem is not such a far-fetched idea. I'm enough of a
newcomer to this kind of thinking myself that I won't pretend
to understand this in exhaustive detail, but the concepts seem
to fit together well enough at low resolution...

One of the most fascinating results to come out of work in
denotational semantics concerns the sequence of statements that
represents each step in the evaluation of a lambda expression. Not
only do the steps follow one another in a way that makes it proper
to say the expressions are related by a partial order, but that
partial order has enough additional structure that it can be called
a continuous lattice, where the definition of "continuous" is closely
related to the definition that topologists use to describe more
familiar sorts of mathematical things, like surfaces in space. How
close "closely related" has to be in order to be convincing is
unclear to me at this time.
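
To give the flavor of this concretely, here is a tiny sketch (my
illustration, not part of the original argument; Python used for
convenience) of the standard picture behind that result: a chain of
ever-more-defined approximations whose least upper bound is a
recursive function.

  # The chain bottom <= F(bottom) <= F(F(bottom)) <= ... whose least
  # upper bound is the factorial function.  BOTTOM plays the role of
  # the undefined element at the base of the lattice.
  BOTTOM = None

  def F(f):
      """One step of the factorial functional: extends f by one input."""
      def g(n):
          if n == 0:
              return 1
          prev = f(n - 1)
          return BOTTOM if prev is BOTTOM else n * prev
      return g

  def bottom_fn(n):
      return BOTTOM              # the least element: defined nowhere

  approx = bottom_fn
  for _ in range(5):             # climb five steps up the chain
      approx = F(approx)
  print([approx(n) for n in range(6)])   # [1, 1, 2, 6, 24, None]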

It's this property of continuity that makes people much more
comfortable with calling the lambda calculus a "calculus" than
they used to be, and forms the basis for the rest of this argument.
(Duke Briscoe's remarks in v8 #2 suggest that he may be thinking
along these lines as well.) It means that a reasoning system based
on the lambda calculus is halfway to being able to model real systems.
Without going into quantum mechanics, which would lead to a discussion
of a different aspect of computability, real systems, in addition to
being continuous, are also dense. That is, given an appropriate
definition of "nearby", it's always possible to find or generate
a new element between any two nearby elements (between any two
rationals, for instance, lies their mean). In this sense,
real systems contain an infinite amount of detail.

The real numbers, of course, contain infinitely many values,
like pi or the square root of 2, that fail to be computable
in the sense that they can only be fully actualized by
nonterminating computations. But a system like the lambda
calculus, which is able to evaluate data as programs, doesn't
have to compute forever in order to claim to know about
irrational numbers. Such a system can represent dense structures
even though the representational system itself may not be dense.
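
To illustrate that last point (again my sketch, not the poster's), a
finite program can stand for the square root of 2 by lazily producing
its approximations; the representation is finite even though fully
actualizing the number would never terminate.

  # A finite representation of an irrational number: an unbounded
  # stream of rationals converging to sqrt(2) via Newton's iteration.
  from fractions import Fraction

  def sqrt2_approximations():
      x = Fraction(3, 2)             # initial guess
      while True:
          yield x                    # each yield improves on the last
          x = (x + 2 / x) / 2        # Newton's step for x^2 = 2

  gen = sqrt2_approximations()
  for _ in range(4):
      print(float(next(gen)))
  # 1.5  1.41666...  1.41421568...  1.4142135623746899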

The issue of density is not so important in a cellular automaton
universe as it is in our own, where at human scales of measurement
the world is in fact dense, and a physics of partial differential
equations based on real numbers has been marvelously successful.

Things become really interesting when one considers a device made
of dense physical material, functioning as a digital, non-dense
computer system, attempting to discover and represent its own structure.
The device, at the physical level, is itself a ding an sich.
At the representational level, a finite representation can exist that
is not the ding an sich, but can approximate its behavior and explain
its structure to whatever degree of accuracy and detail you want,
given enough time. Can a device (or evolutionary series of devices)
that starts out without a representation of the world devise a
series of progressively more accurate representations? As long as
the structure of the world (the ding an sich, not the (tentative)
representation) is consistent from time to time and place to place,
I can't see why not. After all, this is just an abstract, semantical
way of looking at learning.

But what about the last step, recognizing the convergence of the
series of world-models and representing the limit of that series,
i.e. representing *the* ding an sich rather than a set of
approximations? The properties of continuity and density
in the lambda calculus suggest that enough analogies with the
differential calculus might exist to make this plausible, and
that farther on, a sufficiently self-referential analog computer
(the human brain may be one of this type) might be able to "compile"
the representation back into a form suitable for direct action.
My knowledge of either kind of calculus is not deep enough to
allow me to do much more than guess about how to express this
rigorously.

In other words, even though it may not be possible to
duplicate the universe in calculo (why bother, when the world is
there to be examined?), it seems to me that it's likely to be
possible to _understand_ its principles of organization, no
matter how discrete your logic is. Acting congruently with that
understanding is a different question.
- George McKee
NU Computer Science

------------------------------

Date: 19 Jul 88 05:40:06 GMT
From: bingvaxu!vu0112@cs.buffalo.edu
Reply-to: vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn)
Subject: Re: Philosophy: Critique of Systems Theory


In a previous article, larry@VLSI.JPL.NASA.GOV writes:
>Using Gilbert Cockton's references to works critical of systems theory, over
>the last month I've spent a few afternoons in the CalTech and UCLA libraries
>tracing down those and other criticisms.

I'm very sorry to have missed the original discussion. Gilbert: could
you re-mail to me?

>General Systems Theory was founded by biologist Ludwig von Bertalanffy in
>the late '40s. It drew heavily on biology, borrowed from many areas, and
>promised a grand unified theory of all the sciences.

The newer term is "Systems Science." For example, I study in a Systems
Science department (one of a very few in the country), and the
International Society for General Systems Research is changing its name
to the Int. Soc. for the Systems Sciences.

>The ideas gained
>momentum till in the early '70s in the "humanics" or "soft sciences" it had
>reached fad proportions.

Sad but true.

>What seems to have happened is that the more optimistic promises of GST
>failed and lost it the support of most of its followers. Its more success-
>ful ideas were pre-empted by several fields. These include control theory
>in engineering, taxonomy of organizations in management, and the origins of
>psychosis in social psychology.

We should not lose sight of the fact that "Systems Science" and
"Cybernetics" are different views of the same field. They followed
the same course of development, especially in Norbert Wiener's
career. With the rise of chaos theory, fractals, connectionism,
family therapy, global politics, and so many other things,
GST/Cybernetics is implicitly achieving the kinds of results it
always claimed. The body of GST work stands as a testament to the
vision of those who could see the future of science, even though
they couldn't claim a corner of it for themselves.

>For me the main benefit of GST has been a personally satisfactory resolution
>of the reduction paradox, which follows.
> [ excellent description omitted ]

Defending the discipline, as I do, is a very difficult task,
because it is not clear that it is a discipline in the traditional
sense. While it has a body of knowledge and a variety of specific
claims about the world, and especially about dialectical philosophy,
it is inherently interdisciplinary. George Klir, one of my teachers,
describes it as a "second dimension" of science, studying the
similarities of systems across system types. This in itself
addresses the problem of reduction by talking about systems at
different scales.

>This is what many physicists have done with the conflict between
>quantum and wave views of energy.

I refer you to an article I am currently reading, by another of my
professors, Howard Pattee: "The Complementarity Principle in Biological
and Social Structures," _Journal of Social and Biological Structures_,
vol. 1, 1978.

>New kinds of systems exhibit synergy:
>attributes and abilities not observed in any of their constituent elements.
>But where do these attributes/abilities come from?

Some systems scientists claim emergent phenomena in the traditional
sense. Others say that that concept is not necessary; rather,
"emergent" phenomena are just a matter of observing at multiple
scales. The physical unity of a rock is a physical property of the
electrical "synergy" of its constituent atoms; the same holds for a
hurricane, an organism, an economy, or a society, only with different
constituents. In dynamical systems it is common for there to be a
complex interplay between global and local effects and phenomena.

--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: Tue, 19 Jul 88 10:16:06 EDT
From: jon@XN.LL.MIT.EDU (Jonathan Leivent)
Subject: Are all Reasoning Systems Inconsistent?


Within any (finite) reasoning system, it is possible to construct a
sentence S from any proposition A such that S = (S -> A), using
Godel-like methods to establish the recursion. However, such a
sentence leads to the inconsistent conclusion that A is true - any A!

1. S = (S -> A) ; the definition of S, true by construction
2. S -> (S -> A) ; a weaker form of 1.
[U = V, so U -> V]
3. S -> S ; an obvious tautology
4. S -> (S ^ (S -> A)) ; from 2. and 3. by conjunction of the consequents
[U -> V and U -> W, so U -> (V ^ W)]
5. (S ^ (S -> A)) -> A ; modus ponens
6. S -> A ; from 4. and 5. by transitivity of ->
[U -> V and V -> W, so U -> W]
7. S ; from 1. and 6.
[U = V and V, so U]
8. A ; from 6. and 7. by modus ponens
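
(As a mechanical check, here is a minimal sketch in Lean 4, taking the
equation of step 1 as a hypothesis h : S ↔ (S → A) rather than
constructing it; given that hypothesis, the derivation above does go
through and yields A for an arbitrary proposition A.)

  -- Given the fixed point h : S ↔ (S → A) as a hypothesis,
  -- steps 1-8 above yield A.
  example (S A : Prop) (h : S ↔ (S → A)) : A :=
    -- step 7: S holds, because S → A follows from S via h
    have s : S := h.mpr (fun s => (h.mp s) s)
    -- step 8: apply S → A (from h) to S itself
    (h.mp s) s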

Am I doing something wrong, or did logic just fall down the rabbit hole?



-- Jon Leivent

------------------------------

Date: Thu, 21 Jul 88 16:05:59 EDT
From: jon@XN.LL.MIT.EDU (Jonathan Leivent)
Subject: more 'Are all Reasoning Systems Inconsistent?'


I have found a mistake in my original proof. Here is a revised version that
should more aptly be titled "Are all Reasoning Systems Impractical?":


Godel's method of creating self-referential sentences allows us to create the
sentence S such that S = (P(n) -> A), where A is any proposition and P(x) is
true if the sentence represented by the Godel number x is provable in this
reasoning system. The self-reference comes from the fact that S can be so
constructed that n is the Godel number of S, hence P(n) asserts the
provability of S. Originally, I managed to induce inconsistency in a
reasoning system by using the sentence S = (S -> A), the inconsistency being
that A is proven true regardless of what proposition it is (even a false one
would do). The subtle mistake with that proof is that such a sentence is not
constructible by Godel numbering techniques. The statement S = (P(n) -> A),
where n is the Godel number of S itself, is constructible, and yields rather
grave consequences:

1.  S = (P(n) -> A)                ; definition of S, true by construction
                                     [n is the Godel number of S itself]
2.  P(n) -> S                      ; if something is provable, then it is true
                                     [the definition of P]
3.  P(n) -> (P(n) -> A)            ; from 1 and 2 by substitution for S
4.  P(n) -> P(n)                   ; tautology
5.  P(n) -> (P(n) ^ (P(n) -> A))   ; conjunction of the consequents in 3 and 4
6.  (P(n) ^ (P(n) -> A)) -> A      ; modus ponens
7.  P(n) -> A                      ; transitivity of -> from 5 and 6
8.  S                              ; from 1 and 7
9.  P(n)                           ; the fact that steps 1 thru 8 prove S
                                     is sufficient to prove P(n)
10. A                              ; from 7 and 9 by modus ponens
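
(As a mechanical check, here is a minimal Lean 4 sketch of the same
derivation, abstracting the provability predicate into a hypothetical
Box : Prop → Prop and taking steps 1, 2, and 9 as hypotheses. Note
that stating step 9 as the implication S → Box S is stronger than the
usual necessitation rule, which applies only to sentences actually
proven.)

  -- A sketch of steps 1-10 with provability abstracted; Box, fix,
  -- refl, and nec are hypothetical names for this illustration.
  example (Box : Prop → Prop) (S A : Prop)
      (fix  : S ↔ (Box S → A))   -- step 1: the Godel fixed point
      (refl : Box S → S)         -- step 2: the suspect axiom
      (nec  : S → Box S)         -- step 9, stated as an implication
      : A :=
    -- steps 3-7: from Box S, get S by refl, hence Box S → A by fix
    have step7 : Box S → A := fun bs => (fix.mp (refl bs)) bs
    -- step 8: S follows from the fixed point and step 7
    have step8 : S := fix.mpr step7
    -- step 10: combine
    step7 (nec step8)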

So it seems on the surface that the same inconsistency is constructible:
regardless of what A is, it can be proven true. However, the conclusion in
step 9 that P(n) is true, based on the derivation of S in steps 1 thru 8,
combined with the axiom P(n) -> S used in step 2, may be the source of the
inconsistency. Perhaps, in order for P(n) to imply S, it must not lead to
inconsistency (a proof is not a proof if it leads to a contradiction). This
insistence seems quite self-serving, but it does the trick: the derivation
of S in steps 1 thru 8 is not a proof of S because it "eventually" leads to
inconsistency in step 10, hence step 9 is not valid. (Note that if A is
instantiated to a true proposition, then no contradiction is reached; only
if A is false or uninstantiated is there a contradiction.)

We seem to have saved the day, except for one thing: we are requiring that a
true proof of any statement involve an exhaustive search for inconsistency
(contradictions). The penalty is that this forces reasoning systems to take
infinite time to generate "true" theorems (otherwise, there may be a
contradiction lurking under the next stone). There is no simple heuristic
for determining that the search for inconsistency can end. Some would
suggest that only theorems that make statements about a reasoning system's
own proof procedure are in doubt, but the above theorem can be transformed
isomorphically into a theorem entirely about number theory, with no
reference to a proof procedure (using Godel numbering techniques again),
and the new theorem would still have the same problems. So any reasoning
system that can do things in finite time should be doubted (its theorems
may be true, but then again ...).
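
To make that parenthetical remark about Godel numbering concrete, here
is a toy encoding (illustrative only, not the construction used above):
map the i-th symbol of a sentence to the exponent of the i-th prime,
so every sentence receives a unique number and talk about sentences
becomes talk about numbers.

  # A toy Godel numbering: distinct sentences get distinct numbers,
  # and the encoding is mechanically invertible by factoring.
  def primes():
      n, found = 2, []
      while True:
          if all(n % p for p in found):
              found.append(n)
              yield n
          n += 1

  def godel_number(sentence):
      g = 1
      for p, ch in zip(primes(), sentence):
          g *= p ** ord(ch)
      return g

  print(godel_number("S=(P(n)->A)"))   # a large, unique natural number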

Leivent's Theorem: Doubt all theorems (including this one).

There's something rather sinister about this: how can this theorem be
disproven? If one succeeds in proving the contrary in finite time, perhaps
the theorem is still true, and the proof to the contrary would eventually lead
to a contradiction. Perfect proof is, in practice, impossible.

-- Jon Leivent

------------------------------

Date: 20 Jul 88 08:32 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Generality in Artificial Intelligence

Steve Easterbrook gives us an old idea: that the way to get all the knowledge
our systems need is to let them experience it for themselves. It doesn't work
like that, however, for several reasons.

First, most of our common sense isn't in episodic memory. That candles burn
more often than tables isn't something from episodic memory, for example. Or
is the suggestion that we only store episodes and do the inductive
generalisations whenever we need to, by remarkably efficient internal access
machinery? Apart from the engineering difficulties (I can imagine 1PC being
reinvented as a handy device to save memory), this has the problem that lots
of what we know CAN'T be induced from experiences. I've never been to Africa
or ancient Rome, for example, but I know a fair bit about them.

But the more serious problem is that the very idea of storing experiences
assumes that there is some way to encode the experiences, some episodic
memory which can represent episodes. I don't mean one which knows how to
index them efficiently, just one that can put them into memory in the first
place. You know that, in your `..experience, candles are burnt much more
often than tables.' How are all those experiences of candle-combustion
represented in your head? Put your wee robot in front of a candle, and what
happens in its head? Now think of all the other stuff your episodic memory
has to be able to represent. How is this representing done? Maybe after a
while following this thought you will begin to see McCarthy's footsteps on
the trail in front of you.

Pat Hayes

------------------------------

Date: Thu, 21 Jul 88 21:19 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: metaepistemology and unknowability

Distribution-File:
AILIST@AI.AI.MIT.EDU

In AIList Digest V8 #9, David Chess <CHESS@ibm.com> writes:

>Can anyone complete the sentence "The actual world is unknowable to
>us, because we have only descriptions/representations of it, and not..."?

I may have misused the word "unknowable". I'm applying a mechanistic
model of human thinking: it is an electrochemical process, with
neuron activation patterns representing the objects one thinks of.
The heart of the matter is whether you can say a person or a robot
*knows* something if all it has is a representation, which may be
right or wrong, and there is no way for it to get absolute knowledge.
Well, the philosophy of science has a lot to say about describing
reality with a theory or a model.

Note that there are two kinds of models here. The human brain
utilizes electrochemical, intracranial models without our being aware
of it; the philosophy of science involves written theories and
models, which are easy to examine, manipulate and communicate.

I would say that the actual world is unknowable to us because we have
only descriptions of it, and not any kind of absolutely correct,
totally reliable information about it.

>(I would tend to claim that "knowing" is just (roughly) "having
> the right kind of descriptions/representations of", and that
> there's no genuine "unknowability" here; but that's another
> debate...)

The unknowability here is uncertainty about the actual state of the
world very much in the same sense as scientific theories are theories,
not pure, absolute truths.


Andy Ylikoski

------------------------------

End of AIList Digest
********************
