AIList Digest            Monday, 14 Jul 1986       Volume 4 : Issue 166
Today's Topics:
Philosophy - Representationalist Perception & Searle's Chinese Room
----------------------------------------------------------------------
Date: Fri, 11 Jul 86 17:03:37 edt
From: David Sher <sher@rochester.arpa>
Reply-to: sher@rochester.UUCP (David Sher)
Subject: Re: Representationalist Perception
In article <8607100457.AA12123@ucbvax.Berkeley.EDU> eyal@wisdom.BITNET
(Eyal mozes) writes:
>The "output" of perception (if such a term is appropriate) is our
>awareness. Realists claim that this awareness is directly of external
>objects. Representationalists, on the other hand, claim that we are
>directly aware only of internal representations, created by a process
>whose input are external objects; this means that we are aware of
>external objects only INDIRECTLY. That is the position Gibson and
>Kelley argue against, and I think they do understand it accurately.
I may be confused by this argument, but as far as visual perception is
concerned we are certainly not aware of the firing rates of our
individual neurons. We are not even aware of the true wavelengths of
the light that hits our eyes. Our visual hardware implements an
algorithm that decides, based on global phenomena, the color of the
light in the room and automatically adjusts the colors of perceived
objects to compensate (this is called color constancy). However, this
mechanism can be fooled. Given that we don't directly perceive the
lightwaves hitting our eyes, how can we be directly perceiving objects
in the world? Does "perceive" in this sense mean something different
from the way I am using it? I know that for ordinary people the only
images consciously accessible are quite heavily processed to
compensate for noise and light intensity and to take into account
known facts about the tendencies of objects to be continuous and to
fit into known shapes. I don't know how, under such circumstances, we
can be said to be directly aware of any form of visual input except
internal representations. My guess is that you are using words in a
technical way that has confused me. But perhaps you can clear this up.
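Concretely, the kind of global adjustment described above can be
sketched with the gray-world heuristic, one simple color-constancy
scheme. This is a minimal sketch, not a claim about the visual
system's actual algorithm; the function name and the assumption of a
single uniform illuminant are illustrative.

    import numpy as np

    def gray_world_correct(image):
        """Gray-world color constancy: assume the scene's average
        reflectance is achromatic, estimate the illuminant as the
        per-channel mean, and rescale the channels to cancel its tint.
        `image` is an H x W x 3 float array with values in [0, 1]."""
        # Estimate the illuminant color from the global channel means.
        illuminant = image.reshape(-1, 3).mean(axis=0)
        # Rescale each channel so its mean matches the overall gray level.
        corrected = image * (illuminant.mean() / illuminant)
        return np.clip(corrected, 0.0, 1.0)

Like the biological mechanism, this heuristic can be fooled: a scene
genuinely dominated by one color gets "corrected" toward gray.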
------------------------------
Date: Mon 14 Jul 86 10:09:34-PDT
From: Stephen Barnard <BARNARD@SRI-AI.ARPA>
Subject: perception (realist vs. representationalist position)
Maybe I've never really understood the arguments of the so-called
"perceptual realists" (Gibson, etc.), because their position that we
do not build internal representations of the objects of perception,
but rather perceive the world directly (whatever that means), seems
obviously wrong. Consider what happens when we look at a realistic
painting. We can, at one level, see it as a painting, or we can see
it as a scene with no objective existence whatsoever. How could this
perception possibly be interpreted as anything but an internal
representation?
In many or perhaps even all situations, the stimuli available to our
sense organs are insufficient to specify unique external objects. The
job of perception, as opposed to mere sensation, is to complement the
stimulus information to create a fleshed-out interpretation that is
consistent both with the stimulus and with our knowledge and
expectations. Gibson emphasized the richness of the visual stimulus,
arguing that much more information was available from it than was
generally realized. But to go from this observation to the conclusion
that the stimulus is in all cases sufficient for perception is clearly
not justified.
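A minimal numerical illustration of this underdetermination in the
visual case, using the standard pinhole projection model (the specific
numbers are arbitrary): distinct 3-D points can produce the identical
2-D stimulus.

    # Perspective projection with focal length 1: any point on the same
    # ray through the optical center lands on the same image location,
    # so the 2-D stimulus alone cannot recover depth.
    def project(x, y, z):
        return (x / z, y / z)

    print(project(1.0, 2.0, 4.0))   # (0.25, 0.5)
    print(project(2.0, 4.0, 8.0))   # (0.25, 0.5) -- same image point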
------------------------------
Date: Fri, 11 Jul 86 15:33:04 edt
From: Tom Scott <scott%bgsu.csnet@CSNET-RELAY.ARPA>
Subject: Knowledge is structured in consciousness
Two recent postings to this newsgroup by Eyal Mozes and Pat
Hayes on the (re)presentation of perception and knowledge in
integrated sensory/knowledge systems indicate the validity of
philosophy in the theoretical foundations of knowledge science, which
includes AI and knowledge engineering. I'd prefer not to make a
public choice between Mozes's and Hayes's positions, but I'm impressed
by the sincerity of their arguments and the way each connects
philosophy and technology.
Hayes remarks that "The question the representational position
must face is how such things (representations) can serve as percepts
in the overall cognitive framework." This is indeed a serious problem
facing the designers of fifth- and sixth-generation intelligent
systems. Here is a two-hundred-year-old approach to the problem, an
approach that not only can help the representationalists but can also
be of value to realist and idealist (re)constructions of knowledge
within the simulated consciousness of a knowledge system:
                          REPRESENTATION
                                |
               +----------------+----------------+
               |                                 |
          UNCONSCIOUS                        CONSCIOUS
        REPRESENTATION                    REPRESENTATION
            (AI/KE)                        (Perception)
               |                                 |
       +-------+-------+               +---------+---------+
       |       |       |               |                   |
     RULE    FRAME   LOGIC         OBJECTIVE           SUBJECTIVE
     BASED   BASED   BASED         PERCEPTION          PERCEPTION
                                  (Knowledge)          (Sensation)
                                       |
                             +---------+---------+
  Relates                    |                   |       Refers to the
  immediately to <------ INTUITION            CONCEPT --> object by means
  the object                                     |       of a feature
                                                 |       which several
                                       +---------+----+  things have in
                                       |              |  common
  Has its origin in                  PURE         EMPIRICAL
  the understanding alone <------  CONCEPT         CONCEPT
  (not in sensibility)             (Notion)
                                       |
  A concept of reason <------------- IDEA
  formed from notions
  and therefore transcending
  the possibility of experience
This taxonomy tree of mental (re)presentations in a knowledge
system was drawn by Jon Cunnyngham of Genan Intelligent Systems
(Columbus, Ohio) after a group discussion on the following passage
from Kant's "Critique of Pure Reason" (B376-77):
The genus is representation in general (repraesentatio).
Subordinate to it stands representation with consciousness
(perceptio). A perception which relates solely to the subject
as the modification of its state is sensation (sensatio), an
objective perception is knowledge (cognitio). This is either
intuition or concept (intuitus vel conceptus). The former
relates immediately to the object and is single, the latter
refers to it mediately by means of a feature which several
things may have in common. The concept is either an empirical
or a pure concept. The pure concept, in so far as it has its
origin in the understanding alone (not in the pure image of
sensibility), is called a notion. A concept formed from
notions and transcending the possibility of experience is an
idea or concept of reason. Anyone who has familiarised
himself with these distinctions must find it intolerable to
hear the representation of the colour, red, called an idea.
It ought not even to be called a concept of understanding, a
notion.
A word of caution about the translation: First, the German
"Anschauung" is translated into English as "intuition." Contrary to
what my wife would have you think, this word should not be taken in
the sense of "woman's intuition" but rather in the sense of "raw
intake" or "input." Second, although "Einbildung" comes over to
English naturally as "image," the imaging faculty ("Einbildungskraft")
should only with caution be designated in English by "imagination,"
especially when we consider that the transcendental role of this
faculty is the central organizing factor in Kant's theory of the
human(oid) knowledge system. Third, the Norman Kemp Smith edition,
available through St. Martin's Press in paperback for somewhere in
the neighborhood of $15.00, is the best English translation, despite
the little problems I've just pointed out regarding "Anschauung" and
"Einbildung." The other translations pale in comparison to Smith's.
In view of all this, I'd like to add to Hayes's challenge:
Yes, there is a problem in the integration of perceptual (or should we
say "sense-based") and intellectual systems. But the solution is
already indicated in Kant's reconstruction of the human(oid) knowledge
system by the equating of "objective perception," "knowledge," and
"cognitio" (which, by the way, may or may not be equivalent to the
English use of "cognition"). The problem can be pinpointed more
exactly in this way: how can we force the system's objects to obey the
a priori structures of consciousness that are necessary for empirical
consciousness (awareness) of intelligible objects in a world, given to
a self? (The construct of a self in a sense-based system of objective
knowledge may seem to be a luxury, but without a self there can be no
object, hence no objective perception, hence no knowledge.)
What do we have now? Do we have intelligent systems?
Perhaps. Do we have knowledgeable systems? Maybe. Are they
conscious? No. The Hauptsatz (principal proposition) for knowledge
science is this:
"Knowledge is structured in consciousness." So investigate
consciousness and the self in the human, and then you'll have a basis
for (re)constructing it in a computerized knowledge system.
One more diagram that may be of help in unravelling all this:
                     Understanding                      Sensibility
                           |                                 |
   Empirical:        Knowledge              Images
                     of objects    ------------------->   Objects

   ------------------------------+---------------------------------

   Transcendental:   Pure concepts          Schemas       Pure forms of
                     (categories)  ------------------->   intuition
                     and principles                       (space and time)
As was mentioned in an earlier posting to this newsgroup (V4 #157),
this diagram springs from a single sentence in the Critique (B74):
"Beide sind entweder rein, oder empirisch" (Both may be either pure
[transcendental] or empirical).
May I suggest that knowledge-system designers consider the
diagram in conjunction with the taxonomy tree of mental
representations. With these two diagrams in mind, two seminal
passages from the Critique (namely, B33-36 and B74-79) can now be
recognized for what they are: the basis for the design of integrated
sense/knowledge systems in the fifth and sixth generations. To be
sure, there is a lot of work to be done, but it can be done in a more
holistic way if the Critique is read as a design manual.
Tom Scott                      CSNET:   scott@bgsu
Dept. of Math. & Stat.         ARPANET: scott%bgsu@csnet-relay
Bowling Green State Univ.      UUCP:    cbosgd!osu-eddie!bgsuvax!scott
Bowling Green OH 43403-0221    ATT:     419-372-2636 (work)
------------------------------
Date: Sun, 13 Jul 86 23:16:27 PDT
From: kube%cogsci@berkeley.edu (Paul Kube)
Subject: Re: common sense
>From Newman.pasa@Xerox.COM, AIList Digest V4 #165:
>...All three appear
>to believe that there is some magical property of human intelligence
>(Searle and Dreyfus appear to believe that there is something special
>about the biological nature of human intelligence) which cannot be
>automated, but none can come up with a reason for why this is so.
>
>Comments?? I would particularly like to hear what you think Searle or
>Dreyfus would say to this.
Searle and Dreyfus agree that human intelligence is biological (and so
*not* magical), and in fact believe that artificial intelligences
probably can be created. What they doubt is that a class of currently
popular techniques for attempting to produce artificial intelligence
will succeed. Beyond this, the scope of their conclusions, and their
arguments for them, are pretty different. They have given reasons for
their views at length in various publications, so I hesitate to post
such a short summary, but here goes:
Dreyfus has been heavily influenced by the existential
phenomenologists Heidegger and Merleau-Ponty. This stuff is extremely
dense going, but the main idea seems to be a reaction against the
Platonic or Cartesian picture of intelligent behavior as being
necessarily rational, reasoned, and rule-described. Instead,
attention is called to the vast bulk of unreflective, fluent, adaptive
coping that constitutes most of human interaction with the world.
That the phenomenology of this kind of intelligent behavior shows it
to not be produced by reasoning about facts, or applying rules to
propositional representations, etc., and that every system designed to
produce such behavior by these means has been brittle and not
extensible, are reasons to suppose that (1) it's not done that way and
(2) it can't be done that way. (These considerations are not intended
to apply to systems which are only rule-described at a sufficiently
subpersonal level, say at the level of weights of neuronal
interconnections. Last I heard, Dreyfus thinks that some flavors of
connectionism might be on the right track.)
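To make the contrast concrete, here is a minimal sketch of a system
that is rule-described only at the subpersonal level of connection
weights; the perceptron and its delta-rule training are a standard
textbook choice, used purely as illustration.

    # A tiny perceptron learning logical OR.  The only "rules" in the
    # system are the numerical weight updates below, applied at the
    # level of individual connections; nothing propositional appears.
    weights = [0.0, 0.0]
    bias = 0.0
    lr = 0.5

    def output(x):
        s = weights[0] * x[0] + weights[1] * x[1] + bias
        return 1 if s > 0 else 0

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(10):                    # a few passes suffice for OR
        for x, target in data:
            error = target - output(x)     # local error signal
            weights[0] += lr * error * x[0]
            weights[1] += lr * error * x[1]
            bias += lr * error

    print([output(x) for x, _ in data])    # -> [0, 1, 1, 1]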
Searle, on the other hand, talks about intentional mental states
(states which have semantic content, i.e., which are `about'
something), not behavior. His (I guess by now kind of classic)
Chinese Room argument is intended to show that no formal structure of
states of the sort required to satisfy a computational description of
a system will guarantee that any of the system's states are
intentional. And if it's not the structure of the states that does
the trick, it's probably what the states are instantiated in, viz.
neurochemistry and neurophysiology, that lends them intentionality.
So, for Searle, if you want to build an artificial agent that will not
only behave intelligently but also really have beliefs, etc., you will
probably have to wire it up out of neurons, not transistors. (Anyway,
brains are the only kind of substance that we know of that produce
intentional states; Searle regards it as an open empirical question
whether it's possible to do it with silicon.)
Now you can think that these reasons are more or less awful, but it's
just not right to say that these guys have come up with no reasons at all.
Paul Kube
kube@berkeley.edu
...ucbvax!kube
------------------------------
Date: 14 Jul 86 09:42 PDT
From: Newman.pasa@Xerox.COM
Subject: Re: common sense
Thanks for the reply.
Dreyfus' view seems to have changed a bit since I last read anything of
his, so I will let that go. However, I suspect that what I am about to
say applies to him too.
I like your description of Searle's argument. It puts some things in a
clearer light than Searle's own stuff. However, I think that my point
still stands. Searle's argument seems to assume some "magical"
property (I really should be more careful when I use this term; please
understand that I mean only that the property is unexplained, and that
I find its existence highly unintuitive and unlikely) of biology that
allows neurons (governed by the laws of physics, probably entirely
deterministic) to produce a phenomenon (or epiphenomenon if you prefer
- intelligence) that is not producible by other deterministic systems.
What is this strange feature of neurobiology? What reason do we have
to believe that it exists, other than the fact that it must exist if
the Chinese Room argument is correct? I personally think it much more
likely that there is a flaw somewhere in the Chinese Room argument.
>>Dave
------------------------------
Date: Mon 14 Jul 86 09:51:27-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Searle's Chinese Room
There is a lengthy rebuttal to Searle's Chinese Room argument
as the cover story in the latest Abacus. Dr. Rappaport claims
that human understanding (of Chinese or anything else) is different
from machine understanding but that both are implementations of
an abstract concept, "Understanding". I find this weak on three
counts:
1) Any two related concepts share a central core; defining this as the
abstract concept of which each is an implementation is suspect. Try
to define "chair" or "game" by intersecting the definitions of class
members and you will end up with inconsistent or empty abstractions.
2) Saying that machines are capable of "machine understanding", and
hence of "Understanding", takes the heart out of the argument. Anyone
would agree that a computer can "understand" Chinese (or arithmetic)
in a mechanical sense, but that does not advance us toward agreement
on whether computers can be intelligent. The issue now becomes "Can
machines be given 'human' understanding?" The question is difficult
even to state in this framework.
3) Searle's challenge needn't have been ducked in this manner. I
believe the resolution of the Chinese Room paradox is that, although
Searle does not understand Chinese, Searle plus his hypothetical
algorithm for answering Chinese queries would constitute a >>system<<
that does understand Chinese. The Room understands, even though
neither Searle nor his written instruction set understands. By
analogy, I would say that Searle understands English even though his
brain circuitry (or homunculus or other wetware) does not.
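A toy sketch may make the systems point vivid. Neither the rule book
(an inert table) nor the operator (who mechanically matches symbol
shapes) understands Chinese, but whatever understanding there is
belongs to the two taken together. The dialogue entries here are
invented, and a lookup table is of course a caricature of Searle's
hypothetical algorithm.

    # The "room" is the operator plus the rule book, composed into one
    # system; the operator sees only uninterpreted symbol shapes.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "Fine, thanks."
        "你是谁?": "我是一个房间.",    # "Who are you?" -> "I am a room."
    }

    def operator(symbols, rule_book):
        # Pure shape matching; no semantics available to the operator.
        return rule_book.get(symbols, "请再说一遍.")  # "Please repeat."

    def chinese_room(query):
        return operator(query, RULE_BOOK)

    print(chinese_room("你好吗?"))     # -> 我很好, 谢谢.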
I have not read the literature surrounding Searle's argument, but I
do not believe this Abacus article has the final word.
-- Ken Laws
------------------------------
End of AIList Digest
********************