AIList Digest            Tuesday, 20 Dec 1988     Volume 8 : Issue 140 

Philosophy:

Epistemology of Common Sense
The homunculus and the orbiculus
The nine-nodes puzzle and AI

----------------------------------------------------------------------

Date: Mon, 28 Nov 88 08:41:36 EST
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: Epistemology of Common Sense

In AIList Digest for Wednesday, 23 Nov 1988 (Volume 8, Issue 131), we
see a message submitted Wed, 16 Nov 88 19:47 EDT by Stephen Dourson
(Dourson <"DPK100::DOURSON_SE%gmr.com"@RELAY.CS.NET>) in response to
mine in #125 (Mon, 7 Nov 88 11:21:08 EST), responding to
McCarthy (#121, 31 Oct 88 2154 PST).

Dourson presents a reductionist view that there are only physical facts,
and that

SD> "social 'facts'", "social
| conventions", and so-called "higher values", all . . . are a
| lot of floating fuzzy abstractions that signify nothing.

Unfortunately, I can't understand his message. Nor can I respond to it.
Nor can it constitute a response to mine. They cannot even be said to
constitute messages. The physical facts are that there are oddly shaped
icons projected in luminous phosphor on a rectangular surface in front
of me, but the means to recognize them as letters, words, sentences,
assumptions, conclusions, and so on are only fuzzy abstractions that
have no bearing on their interpretation and their value. :-)

Sure, my (hypothetical) robot can recognize physical facts. For
example, it has sensors to distinguish colors, among them red, yellow,
and green. But it also has to know that red in a certain context means
to stop the car. That is a social fact, a matter of convention and law.

SD> Knowledge of physical facts is an essential condition for knowing the
| value of things.

Perhaps you thought I was denying this? Social facts and physical facts
are not in some kind of competition.

Social facts are >>constituted of<< physical facts. This means that an
identifiable configuration of physical facts `counts as' a given social
fact. That hooded box hanging up there on the wire constitutes a
traffic signal. The box is a physical fact, the fact that it
constitutes a traffic signal is a social fact. The fact that I ought to
govern my behavior in accord with the color of the light it has turned
on is also a social fact. If I fail to do so, and a crossing truck
wrecks my car, that is a physical fact. But it constitutes a range of
social facts eventually involving liability, payment of damages, loss of
privilege, and so on.

Human behavior is rule-governed (more accurately, rule-oriented [Labov]).
What this means is that these constitutive relationships may be
formulated as rules. The literature to which I referred you
investigates these regularities of human behavior.

Language is an obvious example of social fact. Sounds and inscribed or
projected shapes are physical facts. As soon as you start talking about
phonemes (phonological contrasts of a language) and letters
(graphological contrasts), you have involved yourself in social facts
shared by the users of the given language. The contrast represented by
l vs r is a fact in English, but not in Japanese. The physical facts
are the same, but in one language they constitute a contrast that
distinguishes words from one another, in the other language they do not.

Sentences are not constituted directly of the sounds that form the
physical basis of speech; they are constituted of words, which are
constituted of morphemes, which are constituted of phonemes, which
finally are constituted of physical sounds. (Substitute your favorite
linguistic terminology; the principle holds.) This is entirely
characteristic of social facts: most social facts are constituted of
more primitive social facts by which the relation to physical facts is
mediated.

It is not simple. There is much ambiguity: a given behavior counts as
a persistent reminder for one person, as nagging for another. The fact
that these conventions are not ordinarily matters of conscious
introspection does not make their investigation easier. It even makes
it possible for people to deny that they exist. :-)

It is even crucial that some social facts be deniable or at least out of
conscious awareness: their inaccessibility is our only assurance of
sincerity. We assume that body language, for instance, does not lie.
Hence, our profound distrust of salesmen and actors.

Finally, this is not a matter of being a conformist, an individualist,
or what have you. Unless you are a hermit in a cave somewhere your
survival very much depends upon your facility with the social facts of
your life. And even the hermit probably talks to himself.

Bruce Nevin
bn@bbn.com
<usual_disclaimer>

------------------------------

Date: Tue, 13 Dec 88 21:24 PST
From: Philip E. Agre <AGRE@SPAR-20.ARPA>
Subject: The homunculus and the orbiculus.

[Nick -- Following our conversation back in October, I have dug the enclosed
bit of text out of an unpublished paper of mine (an AI Lab Working Paper as it
happens). If you think it would be appropriate for AIList feel free to include
it. I tend to write in two-page chunks, so I haven't really got anything much
shorter. Hope it's going well with you. Ciao. Phil]


In the old days, philosophers accused one another of associating with a sneaky
individual called a *homunculus*. From Latin, roughly ``little person.'' For
example, one philosopher's account of perception might involve the mental
construction of an entity that `resembled' the thing-perceived. Another
philosopher would object that this entity did nothing to explain perception
since it required another person, the homunculus, to look at it and determine
its identity and properties. Philosophers have been arguing about this issue
for centuries. Computational ideas have a natural appeal to philosophers who
care about such things because they let one envision ways of `discharging' the
homunculus by, for example, decomposing it into a hierarchy of ever-dumber
subsystems.

I think, though, that the tangled ruts of argument about homunculi distract
attention from a more subtle and telling issue. If the homunculus repeats in
miniature certain acts of its host, where does it perform these acts? The
little person lives in a little world, the host's surroundings reconstructed
in his or her head. This little world, I decided, deserves a Latin word of
its own. So I talked to my medievalist friend, who suggested *orbiculus*.
One way to say ``world'' is *orbis terrarum*, roughly ``earthly sphere.'' So
the orbiculus is one's world copied into one's head.

Where can we find orbiculi in AI? All over. A `world model' is precisely an
orbiculus; it's a model of the world inside your head. Or consider the slogan
of vision as `inverse optics': visual processing takes a retinal image and
reconstructs the world that produced it. Of course that's a metaphor. What's
constructed is a *representation* of that world. But the slogan would have us
judge that representation by its completeness, by the extent to which it is a
thorough re-presentation of the corresponding hunk of world.

You'll also find an orbiculus almost anywhere you see an AI person talk about
`reasoning about X'. This X might be solid objects, time-extended processes,
problem-solving situations, communicative interactions, or any of a hundred
other things. The phrase `reasoning about X' suggests a purely internal
cognitive process, as opposed to more active phrases like `using' or `acting
upon' or `working with' or `participating in' or `manipulating' X. Research
into `reasoning about X' almost invariably involves all-out representations of
X. These representations will be judged according to whether they encode all
the salient details of X in such a way that they can be efficiently recovered
and manipulated by computational processes. In fact, discussions of these
`reasoning' processes often employ metaphors of actual, literal manipulations
of X or its components. In practice, the algorithms performing these abstract
manipulations tend to require a choice between extremely restrictive
assumptions and gross computational intractability.

If you favor slogans like `using X' over slogans like `reasoning about X',
someone will ask you,

But we can reason about things that aren't right in front of us, can't we?

Mentalism offers seductively simple answers to many questions, and this is one
of them. According to mentalism and its orbicular slogans, reasoning about a
derailleur proceeds in the same way regardless of whether the derailleur is in
front of you or in the next room. Either way, you build and consult an
orbicular derailleur-model. If having the derailleur there helps you, it is
only by helping you build your model. This is, of course, contrary to common
experience. The first several times you try to reason about a derailleur it
has to be sitting right in front of you and you have to be able to look around
it, watch it working, poke at it, and take it apart -- long after you've been
exposed to enough information about it to build a model of it.

Why? One receives several answers to this sort of question, but my favorite
(and also the most common) is what I call the *gratuitous deficit* response.
For example,

Maybe you build a model but it decays.

Maybe there isn't enough capacity.

Look what we have here. We have a theory that makes everything easy by
postulating explicit encodings of every salient fact in the world. And then
we have this sort of lame excuse that pops up to disable the theory when
anyone notices its blatant empirical falsehood, without at the same time
admitting that anything is fundamentally wrong with it. Maybe the
computational complexity of reasoning with realistic world models is trying to
tell us something about why it's harder to think about a derailleur in the
next room. What exactly? It's hard to say. But that's to be expected.
Expecting it to be easy is a sign of addiction to the easy answers of the
orbiculi.

------------------------------

Date: 16 Dec 88 21:38:44 GMT
From: cs.utexas.edu!sm.unisys.com!aero!abbott@rutgers.edu (Russell
J. Abbott)
Subject: The nine-nodes puzzle and AI


Recently a friend and I were discussing the old puzzle that requires one
to draw a line consisting of (no more than) 4 line segments through the
following nine points without lifting the pencil from the paper or
retracing part of the path.


o  o  o


o  o  o


o  o  o




The trick, of course, is not to limit the line segments to the rectangle
defined by the corner points, for example:


o--o--o--/
|\      /
| \    /
o  o  o
|   \/
|   /\
o  o  o
| /
|/

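For the record, the solution drawn above can be checked mechanically.
The sketch below (mine, not part of the original puzzle statement)
places the nine dots at integer coordinates (0..2, 0..2) and verifies
that four connected segments, two of whose turning points lie outside
the square, pass through every dot:

```python
def on_segment(p, q, r):
    """True if point r lies on the closed segment from p to q
    (zero 2-D cross product plus a bounding-box check)."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    cross = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    if cross != 0:
        return False
    return min(px, qx) <= rx <= max(px, qx) and min(py, qy) <= ry <= max(py, qy)

dots = {(x, y) for x in range(3) for y in range(3)}

# The four strokes of the drawing, taking (0, 2) as the top-left dot.
# (3, 2) and (0, -1) are the two turning points outside the square.
segments = [
    ((0, 2), (3, 2)),   # right along the top row, past the corner
    ((3, 2), (0, -1)),  # diagonally down-left through (2, 1) and (1, 0)
    ((0, -1), (0, 2)),  # up the left-hand column
    ((0, 2), (2, 0)),   # diagonally through the centre to bottom-right
]

covered = {d for d in dots for p, q in segments if on_segment(p, q, d)}
print(covered == dots)  # True: four segments cover all nine dots
```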

When I first saw this puzzle, I felt cheated because the solution was
not within the bounds that I had assumed had been defined. My friend
(Dan Kalman) argues that the limitations that the naive solver imposes
on himself are not stated in the puzzle. Not only that, but much of the
work we do exists within unclearly specified contexts, and one aspect of
intelligence is to see that one is not limited by boundaries that aren't
really there.

How does this relate to AI? The general question is whether there is
some way of designing a program that can go beyond the implicit
boundaries of the problems it is supposed to solve. There is an
immediate difficulty since in expressing a problem to a program, one is
forced to specify the problem. So the question becomes: how can one
present this or similar problems to a problem solving program in such a
way that the possibility of discovering the insightful solution is
present, but the problem statement doesn't lead to it directly?

If this problem were presented as that of drawing four line segments
through 9 points in a plane, then the solution is easy. There is no
implicit limitation to the rectangle defined by the corner points.

An alternate formulation is to present the problem as 9 nodes in terms
of which one is supposed to define arcs for some yet-to-be-defined
graph. In these terms there is no solution (as the naive problem solver
suspects)--unless one adds additional nodes, which is the insight. This
second approach also requires that one define the notion of "collinear"
arcs. But if collinear arcs are defined only in terms of the given 9
nodes, there is no way to add new nodes that are collinear with any of
the existing nodes. So a more general notion of collinearity is needed,
e.g., in terms of the nodes as embedded in a plane.

So, suppose a problem solving program "knows" about both planes and
graphs and also knows that graphs may be defined in terms of points in a
plane. Two arcs may then be defined as collinear if their endpoints in
the plane are collinear. Then, one could present the problem as that of
building a directed graph that defines a path through the given 9 points
in a plane so that the path consists of a sequence of (no more than) 4
subpaths, each of which consists exclusively of collinear arcs. (Sounds
like a complex problem statement!)

The preceding appears to be a problem formulation that allows the
problem to be solved without giving the solution away. In order to
solve it the system must still come up with the idea of adding new
nodes; yet there is nothing in the problem statement that leads to the
operation of adding nodes.

It would appear, then, that a system could solve this problem only if its
repertoire of arc creation operations (which is what the problem
requires) includes the addition of new nodes. But if that were the
case, the insight is built in and hardly an insight. If the addition of
new nodes is not built into the system's repertoire of arc creation
operations, then is there some other heuristic that would lead to the
solution of this problem?

A natural question at this point is whether anyone knows of a system
that could deal with such a problem--on the level on which it is
intended.

-- Russ Abbott

------------------------------

Date: 17 Dec 88 17:47:33 GMT
From: buengc!bph@bu-cs.bu.edu (Blair P. Houghton)
Subject: Re: The nine-nodes puzzle and AI

In article <43353@aero.ARPA> abbott@aero.UUCP (Russell J. Abbott) writes:
>An alternate formulation is to present the problem as 9 nodes in terms
>of which one is supposed to define arcs for some yet-to-be-defined
>graph. In these terms there is no solution (as the naive problem solver
>suspects)--unless one adds additional nodes, which is the insight. This
>second approach also requires that one define the notion of "collinear"
>arcs. But if collinear arcs are defined only in terms of the given 9
>nodes, there is no way to add new nodes that are collinear with any of
>the existing nodes. So a more general notion of collinearity is needed,
>e.g., in terms of the nodes as embedded in a plane.

If I may, this sort of situation came up at the Pub the other day. We
settled on its being the distinction between the mathematical fields
of Topology and Geometry. I.e., the nodal description is strictly
topological, and the solution requires a geometrical formulation,
to wit, your statement "embedded in a plane".

It took a half-drunk MS candidate, a near-sober Ph.D., and an easily
sloshed Ph.D. to get the distinction straight.

>A natural question at this point is whether anyone knows of a system
>that could deal with such a problem--on the level on which it is
>intended.

Sorry, but I don't. It seems from your posting that even overeducated
humans have trouble with it, so an artificial system that could solve
it would be a curious bird, indeed.

--Blair
"Is Usenet therefore topological
or geometric, taking economics
(costs) into account? And does
it affect the cost of Harpoon
Winter Ale?"

------------------------------

End of AIList Digest
********************
