AIList Digest Volume 4 Issue 170

AIList Digest           Saturday, 19 Jul 1986     Volume 4 : Issue 170 

Today's Topics:
Philosophy - Creativity and Analogy & Life and Intelligence &
Gibson's Theory of Perception & Representationalist perception,
Humor - Circular Reasoning as a Tool

----------------------------------------------------------------------

Date: Tue, 15 Jul 86 09:31 EST
From: MUKHOP%RCSJJ%gmr.com@CSNET-RELAY.ARPA
Subject: Creativity and Analogy

I believe that Jay Weber and I mostly agree on the relation between an
abstraction and an analogy as well as the relation between the respective
spaces of abstractions and analogies (linguistic "slipperiness"
notwithstanding). What I disagreed with is the notion of some absolute
abstraction hierarchy implicit in Jay's comments:

> ... Each analogy corresponds to a node in an abstraction hierarchy which
> relates all of the sub-categories, SO THE SPACE OF ANALOGIES MAPS ONTO THE
> SPACE OF ABSTRACTIONS,.....

The distinction between an absolute abstraction hierarchy and multiple
abstraction lattices (the term I used in an earlier communication) is central
to the discussion of creativity, that is, if you accept that creativity is
the art of making INTERESTING analogies (or abstractions).
Implicit in this definition is a choice between candidate analogies--a choice
not available in an abstraction hierarchy. In all fairness, Jay never states
explicitly that the world can only be represented by a single abstraction
hierarchy.

> Proper scientists (by definition) do not construct theories about things
> that cannot be empirically examined, e.g. using structure mapping functions
> to model the communal descriptive definition of the English word
> "creativity". Scientists pick testable domains such as problem solving
> where you can test predictions of a particular theory with respect to
> correct problem solving.

I am surprised by Jay's definition of "proper scientists". As to modeling
the communal descriptive definition of "creativity", how else could one begin
to emulate this elusive property? I am surprised at his choice of a model
problem for "proper scientists"--something as general as
problem solving. If problem solving by induction or by analogy is a proper
domain, why isn't problem solving by "creativity" acceptable? The fact that
the word means slightly different things to different people does not justify
its exclusion from the class of "proper domains". It is fairly obvious that
we have similar perceptions about what the word "creativity" means--how
else could we be having this discussion?

> In the past, scientists have left debate over
> such concepts as "truth" and "beauty" to philosophers, and I think we
> should do the same with "creativity" and "intelligence".

Who are the "we" in this sentence? If "we" refers to the AIList, doesn't
that include philosophers interested in AI?

> In Cognitive Science, researchers have too often exaggerated the impact
> of their work through the careless and unscientific use of such terms.

What is the lesson to be learnt here? Do not use words like "creativity"
that sound pompous? If I want to develop a program that has this interesting
property I will need to give this property a name. What would be more natural
than "creativity"?

------------------------------

Date: Wed, 16 Jul 86 21:54:42 PDT
From: larry@Jpl-VLSI.ARPA
Subject: Definitions of Life, Intelligence, and Creativity


Yes, even defining "intelligence" and "creativity" is very difficult, much
less studying their referents scientifically. But I think it's possible.

General systems theory helps, despite some extravagances and errors its
followers have committed. (Stavros McKrakis pointed out a paper to me by
Berliner that discusses some of the worst.) It resolves the difference
between reductionism and mysticism in a useful way, by raising the status of
information to a physical metric as important as space, time, charge, etc.

GST focuses on the fact that when parts are bound together, interaction
effects bring into existence characteristics which none of the parts possess.
Science is organized around this, with physics concentrating on atomic and
subatomic domains, chemistry concentrating on molecular interactions, and
so on. The universe is divided up into layers of virtual machines, and for
the same reason we do it in computer science: intellectual parsimony. The
biologist, for instance, doesn't have to know whether the hydrogen atoms in
a sample of water have one, two, or three neutrons. Water functions much the
same regardless. (There ARE fascinating and subtle differences some
researchers are investigating.)

Definition (and investigation) of intelligence and creativity are bound up
with another "impossible to define" word: life. "Life" is a label I give to
systems which maintain their existence in hostile environments by continuously
remaking themselves. Over a period of time (sometimes quoted as seven years
for humans), each organism exchanges all of its individual atoms with the
environment. Yet it still "lives" and "has the same identity" because its
pattern is (essentially) the same.

Obviously each organism must somehow "know" the pattern it must maintain
and the safe limits for change before corrective action is taken. Biologists
have concluded that genes (and gene-like adjuncts outside them) don't contain
enough information. Studies point to the conclusion that some of this
information is stored in the universe itself, in the form of natural laws
for instance.

Additionally the organism must be able to sense itself, compare itself with
the desired pattern, and take action to correct for deviations. In some cases
it acts on its environment (pushing away a danger, for instance); in others it
acts on itself (say, standing tall and bristling to frighten attackers).
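The sense/compare/correct cycle described above is essentially a feedback loop. A minimal sketch in Python (the function and parameter names are illustrative assumptions, not anything from the digest):

```python
# A sketch of the sense/compare/correct loop: an "organism" knows its
# desired pattern and the safe limits for change, senses its current
# state, and takes corrective action until it is back within limits.

def maintain(state, desired, tolerance, gain=0.5, steps=20):
    """Repeatedly compare state with the desired pattern and correct."""
    for _ in range(steps):
        deviation = state - desired        # sense and compare
        if abs(deviation) <= tolerance:    # within safe limits: no action
            break
        state -= gain * deviation          # corrective action on itself
    return state

# Starting far from the desired pattern, the system settles near it.
final = maintain(state=10.0, desired=2.0, tolerance=0.1)
```

Each pass halves the deviation, so the system converges toward the desired pattern without ever representing anything beyond the pattern, the current state, and the tolerance.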

"Intelligence" I would define in very general terms: storing information that
describes an organism's external and internal universe, comparing and
otherwise processing information in the service of its survival and health, and
controlling its action. (Obviously, this definition could be formalised and
made more precise, but it will do as a first cut.)

It may be protested that these terms are too general, that too many things
would thus be classified as alive and/or intelligent. I would say that it's
more important to subclassify intelligence and study the interactions and
limits of different kinds of intelligence, to study the physical bases of
intelligence. I see nothing wrong with saying that a computer program of the
Game of Life is really alive (in a very restricted and limited sense which can
be couched in formal terms) or that a virus has (very limited, specific kinds)
of intelligence. I see it as useful parsimony that intelligence is defined as
a multi-dimensional continuum with protozoa near one end, humans in the middle
on many continua, and who knows what at the upper end(s).

"Creativity" is a particular kind of intelligence. It can be recognized by its
products: ideas, actions, or objects that did not exist before. This is not
an absolute criterion; it's not all that rare for even those we recognize
as geniuses to create the same idea independently (or as independently as
humans working in the same field can be). There are middle and low grades of
creativity as well: the same "Chicken Kiev" jokes conceived by hundreds of
people on the same day, for instance.

Obviously, these new things don't appear from nowhere. There are conservation
laws in thought as well as in physics (though very different ones). These
novelties are made up of percepts/concepts already in memory, selected and
bound to create a system with emergent properties that convince us (or don't)
that we've come across something original. (I've gone into the dynamics of
creativity in a previous message and won't repeat myself.)

Larry @ jpl-vlsi

------------------------------

Date: 15 Jul 1986 11:06 EDT
From: ihnp4!mtuxo!hfavr@ucbvax.berkeley.edu
Subject: Gibson's theory of perception

I have not read Kelley's book, but as a psychologist I am familiar with
Gibson's "environmental" (or "ecological") theory of perception. In the
standard contemporary conceptualization of perception, from which Gibson
dissented, the input to the perceptual process is thought to be the
sensory impression; for example, in visual perception, the pattern of
retinal stimulation. According to the standard theory, the task of the
perceptual system is to derive, from that pattern, a representation
whose features are analogous to those features of the environment which
originally caused the retinal pattern. If the perceptual system is
thought of as physically limited to the eye and the brain, the standard
view is close to being a logical necessity. It is from this
conceptualization that Gibson dissented.

In Gibson's view, the perceptual system is not limited to the confines
of the organism, but extends into the environment. In the course of its
evolution, the organism has assimilated physical mechanisms present in
its natural environment to function as integral parts of its perceptual
system. Thus, the perceptual processes implemented in the eye and the
brain have evolved to function as the back-end of an integral process of
perception that begins at the perceived object. In this view, the
natural light sources present in the environment, the reflective
properties of the surfaces of objects, and the optical characteristics
of the atmosphere are as much a part of the human perceptual system as
the eyes and the brain. Thus, the retinal stimulation pattern is not the
input to perception, but rather an internal stage in the process. The
input to the perceptual process is the object itself; the output is the
organism's awareness of the object. The information contained in this
awareness is the original, and not a re- (or transformed), presentation
of the object to consciousness.

According to Gibson, the experimental psychologist's laboratory use of
two-dimensional representations, tachistoscopic stimuli, illusions, and
other materials that were not part of the ecological environment in
which the human perceptual system evolved, amounts to studying the human
perceptual system with some of its key parts removed. This is rather
like trying to find out how a computer works after pulling out some of
its chips, or deducing normal physiology from the results of the
surgical removal of organs. To yield valid information, the results of
such experiments must be interpreted with special attention to the fact
that one is not studying an intact or properly functioning system.

Adam Reed (ihnp4!npois!adam)

------------------------------

Date: Wed 16 Jul 86 16:56:49-PDT
From: John Myers <JMYERS@SRI-STRIPE.ARPA>
Subject: Re: AIList Digest V4 #166

I do not believe a concept of self is required for perception of objects.
Concepts needed for the perception of objects include temporal consistency,
directional location, and differentiation; semantic labeling (i.e., "meaning"
or "naming") is also useful. None of these require the concept of a self
who is doing the perceiving.
The robots I work with have no concept of self, and yet they are quite
successful at perceiving objects in the world, constructing an internal world
model of the objects, and manipulating these objects using the model. (Note
that a "world model" consists of objects' locations and names--temporal
consistency is assumed, and differentiation is implicit. Superior world
models include spatial extent and perceptual features.) I would argue that
they are moving by "reflex"--without true understanding of the "meaning" of
their motions--but they certainly are able to truly perceive the world around
them. I believe lower-level life-forms (such as amoebas, perhaps ants) work
in the same manner. When such-and-such happens in the world, run program FOO
which makes certain motions with the effectors, which (happens to) result in
"useful things" getting accomplished.
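The reflex scheme just described, where a perceived event simply triggers an effector program with no self-model anywhere, can be sketched as a dispatch table. FOO is the digest's own placeholder; the events and the second program are hypothetical:

```python
# Reflex-style behavior: a mapping from perceived events to effector
# programs. There is no concept of a self doing the perceiving, only
# stimulus in, program out.

def foo():
    return "grasp object"      # motions that happen to be useful

def flee():
    return "retreat"

REFLEXES = {
    "object-in-view": foo,     # when such-and-such happens, run FOO
    "danger-near": flee,
}

def perceive_and_act(event):
    """Run the reflex program for an event, if any."""
    program = REFLEXES.get(event)
    return program() if program else "do nothing"
```

The table itself is the robot's entire "understanding": it perceives and responds appropriately without any entry representing the perceiver.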
I think this describes most of what consciousness is: (1) being able to
perceive things in the environment, (2) knowing the meaning of these things,
and (3) being able to respond in an appropriate manner. Notice that all of
these concepts are vague; different degrees of 1,2,3 represent different
degrees of consciousness.
Self-consciousness is more than consciousness.
The concept of self is not required for conscious beings, and it certainly
is not required for perception.
John Myers~~

------------------------------

Date: Thu, 17 Jul 86 18:10:29 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.ARPA>
Subject: Re: Representationalist perception

David Sher writes:
>I may be confused by this argument but as far as visual perception is
>concerned we are certainly not aware of the firing rates of our individual
>neurons. We are not even aware of the true wavelengths of the light that
>hits our eyes. We have special algorithms built into our visual hardware
>that implements an algorithm that decides based on global phenomena the
>color of the light in the room and automatically adjusts the colors of
>percieved objects to compensate (this is called color constancy). However
>this mechanism can be fooled. Given that we don't directly percieve
>the lightwaves hitting our eyes how can we be directly percieving objects
>in the world?

That's exactly the point. We DON'T perceive lightwaves, images or
neuron firing-rates; we directly perceive external objects. The light
waves, our eyes, and the neural mechanisms (which are MECHANISMS, not
algorithms) are not the objects of our perception; they are the MEANS
by which we perceive objects. This will seem implausible only if you
accept the diaphanous model of awareness.

Stephen Barnard writes:
>Consider what happens when we look at a realistic
>painting. We can, at one level, see it as a painting, or we can see
>it as a scene with no objective existence whatsoever. How could this
>perception possibly be interpreted as anything but an internal
>representation?

Sorry, I can't follow your argument. Of course, a realistic painting is
a representation; but it is not an INTERNAL representation. Gibson's
books do contain long discussions of paintings; but he specifically
distinguishes between looking at a painting (in which case you are
perceiving a representation of the object) and directly perceiving the
object itself.

>Gibson emphasized the richness of the visual stimulus,
>arguing that much more information was available from it than was
>generally realized. But to go from this observation to the conclusion
>that the stimulus is in all cases sufficient for perception is clearly
>not justified.

Gibson did not deny that there are SOME cases (for example, many
situations created in laboratories) in which the stimulus is
impoverished. His point was that these cases are the exception, rather
than the rule. Even if we agree that in those exceptional cases there
is some inference from background knowledge, this doesn't justify
concluding that in the normal cases, where the stimuli do uniquely
specify the external object, inference also goes on.

Since I can't possibly do justice to these issues in a short electronic
message, let me repeat my recommendation of Kelley's book. It
discusses all these issues in detail, and presents them very clearly.
I'm sure it will be of great value even to those who'll end up
disagreeing with its conclusions.

Eyal Mozes

BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ..!ucbvax!eyal%wisdom.bitnet

------------------------------

Date: Fri 11 Jul 86 10:42:05-CDT
From: David Throop <AI.THROOP@R20.UTEXAS.EDU>
Subject: Circular Reasoning as a Tool

CIRCULAR-REASONER: A Knowledge-Representation Tool for Couches

The classic question "How long will it take my brother-in-law and his
friend Larry to get the couch from the living room, around a tight corner
and into the guest bedroom?" has inspired several advances in AI knowledge
representation. The spatial and temporal aspects of the problem have
proved particularly difficult.
Early work in logic representations was able to show that (Couch X) could
be unified with (Furniture X) and push the intractable aspects back a level
of abstraction. Rule based systems were able to diagnose Larry's wrenched
back after the first attempt, and show that if anybody ever solved the
intractable spatial problems, they should leave the answer in the knowledge
base. Frame based systems showed that intractable problems could be pushed
back a further level through inheritance. Causal reasoning systems can
reason about all of the possible behaviors of the couch as it undergoes the
process of being shoved around the corner, and move the temporal and
spatial questions back into a meta-knowledge-base.
I propose to generalize these methods for pushing back hard problems. In
particular the program CIRCULAR-REASONER represents these four knowledge
representation systems as a linked list. This linked list can be NCONCed
to itself so that at each level, another representation is just around the
corner. Spatial and temporal aspects can be handled by routines that
access this list recursively, so that hard problems can be sent away and
never come back.
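For readers who want to run the joke: Lisp's NCONC destructively appends one list to another, so NCONCing a list to itself splices the tail back to the head. A Python sketch of CIRCULAR-REASONER's core data structure (class and function names are hypothetical):

```python
# The four representation systems as a linked list spliced to itself,
# so that from any level another representation is just around the corner.

class Node:
    def __init__(self, name):
        self.name = name
        self.next = None

def make_circular(names):
    """Build a singly linked list, then point the tail back at the head."""
    head = Node(names[0])
    node = head
    for name in names[1:]:
        node.next = Node(name)
        node = node.next
    node.next = head           # the NCONC-to-itself step
    return head

reasoner = make_circular(["logic", "rules", "frames", "causal"])

def push_back(node, levels):
    """Send a hard problem back the given number of levels."""
    for _ in range(levels):
        node = node.next
    return node.name
```

As promised, a problem pushed back four levels is right where it started, and a recursive traversal never terminates, so hard problems sent away this way indeed never come back.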

David Throop

------------------------------

End of AIList Digest
********************
