Copy Link
Add to Bookmark
Report
AIList Digest Volume 5 Issue 149
AIList Digest Tuesday, 16 Jun 1987 Volume 5 : Issue 149
Today's Topics:
Theory - The Symbol Grounding Problem
----------------------------------------------------------------------
Date: 14 Jun 87 15:13:34 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem
In article <843@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
>
> Intentionality and consciousness are not equivalent to behavioral
> capacity, but behavioral capacity is our only objective basis for
> inferring that they are present. Apart from behavioral considerations,
> there are also functional considerations: What kinds of internal
> processes (e.g., symbolic and nonsymbolic) look as if they might work?
> and why? and how? The grounding problem accordingly has functional aspects
> too. What are the right kinds of causal connections to ground a
> system? Yes, the test of successful grounding is the TTT, but that
> still leaves you with the problem of which kinds of connections are
> going to work. I've argued that top-down symbol systems hooked to
> transducers won't, and that certain hybrid bottom-up systems might. All
> these functional considerations concern how to ground symbols, they are
> distinct from (though ultimately, of course, dependent on) behavioral
> success, and they do have independent content.
Harnad's terminology has proved unreliable: analog doesn't mean
analog, invertible doesn't mean invertible, and so on. Maybe
top-down doesn't mean top-down either.
Suppose we create a visual transducer feeding into an image
processing module that could delineate edges, detect motion,
abstract shape, etc. This processor is to be built with a
hard-wired capability to detect "objects" without necessarily
finding symbols for them.
Next let's create a symbol bank, consisting of a large storage
area that can be partitioned into spaces for strings of
alphanumeric characters, with associated pointers, frames,
anything else you think will work to support a sophisiticated
knowledge base. The finite area means that memory will be
limited, but human memory can't really be infinite, either.
Next let's connect the two: any time the image processor finds
an object, the machine makes up a symbol for it. When it finds
another object, it makes up another symbol and links that symbol
to the symbols for any other objects that are related to it in
ways that it knows about (some of which might be hard-wired
primitives): proximity in time or space, similar shape, etc. It
also has to make up symbols for the relations it relies on to
link objects. I'm over my head here, but I don't think I'm
asking for anything we think is impossible. Basically, I'm
looking for an expert system that learns.
Now we decide whether we want to play a game, which is to make
the machine seem human, or whether we want the machine to
exhibit human behavior on the same basis as humans, that is, to
survive. For the game, the essential step is to make the
machine communicate with us both visually and verbally, so it
can translate the character strings it made up into English, so
we can understand it and it can understand us. For the survival
motivation, the machine needs a full set of receptors and
effectors, and an environment in which it can either survive or
perish, and if we built it right it will learn English for its
own reasons. It could also endanger our survival.
Now, Harnad, Weinstein, anyone: do you think this could work,
or do you think it could not work?
M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1
------------------------------
Date: 14 Jun 87 14:15:55 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem
In article <835@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:
>
> > Human visual processing is neither analog nor invertible.
>
> Nor understood nearly well enough to draw the former two conclusions,
> it seems to me. If you are taking the discreteness of neurons, the
> all-or-none nature of the action potential, and the transformation of
> stimulus intensity to firing frequency as your basis for concluding
> that visual processing is "digital," the basis is weak, and the
> analogy with electronic transduction strained.
No, I'm taking more than that as the basis. I don't have any
names handy, and I'm not a professional in neurobiology, but
I've seen many articles in Science and Scientific American
(including a classic paper titled something like "What the
frog's eye tells the frog's brain") that describe the flow of
visual information through the layers of the retina, and through
the layers of the visual cortex, with motion detection, edge
detection, orientation detection, etc., all going on in specific
neurons. Maybe a neurobiologist can give a good account of what
all that means, so we can guess whether computer image
processing could emulate it.
> > what is the intrinsic meaning of "intrinsically meaningful"?
> > The Turing test is an objectively verifiable criterion. How can
> > we objectively verify intrinsic meaningfulness?
>
> We cannot objectively verify intrinsic meaningfulness. The Turing test
> is the only available criterion. Yet we can make inferences...
I think that substantiates Weinstein's position: we're back to
the behavior-generating problem.
> ....: We
> know the difference between looking up a meaning in an English/English
> dictionary versus a Chinese/Chinese dictionary (if we are nonspeakers
> of Chinese): The former symbols are meaningful and the latter are
> not.
Not relevant. Intrinsically, words in both languages are
equally meaningful.
> > Using "analog" to mean "invertible" invites misunderstanding,
> > which invites irrelevant criticism.
>
> ..... I have acknowledged all
> along that the physically invertible/noninvertible distinction may
> turn out to be independent of the A/D distinction, although the
> overlap looks significant. And I'm doing my best to sort out the
> misunderstandings and irrelevant criticism...
Then please stop using the terms analog and digital.
>
> > Human (in general, vertebrate) visual processing is a dedicated
> > hardwired digital system. It employs data reduction to abstract such
> > features as motion, edges, and orientation of edges. It then forms a
> > map in which position is crudely analog to the visual plane, but
> > quantized. This map is sufficiently similar to maps used in image
> > processing machines so that I can almost imagine how symbols could be
> > generated from it.
>
> I am surprised that you state this with such confidence. In
> particular, do you really think that vertebrate vision is well enough
> understood functionally to draw such conclusions? ...
Yes. See above.
> ... And are you sure
> that the current hardware and signal-analytic concepts from electrical
> engineering are adequate to apply to what we do know of visual
> neurobiology, rather than being prima facie metaphors?
Not the hardware concepts. But I think some principles of
information theory are independent of the medium.
> > By the time it gets to perception, it is not invertible, except with
> > respect to what is perceived. Noninvertibility is demonstrated in
> > experiments in the identification of suspects. Witnesses can report
> > what they perceive, but they don't always perceive enough to invert
> > the perceived image and identify the object that gave rise to the
> > perception....
> > .... If I am right, human intelligence itself relies on neither
> > analog nor invertible symbol grounding, and therefore artificial
> > intelligence does not require it.
>
> I cannot follow your argument at all. Inability to categorize and identify
> is indeed evidence of a form of noninvertibility. But my theory never laid
> claim to complete invertibility throughout.....
First "analog" doesn't mean analog, and now "invertibility"
doesn't mean complete invertibility. These arguments are
getting too slippery for me.
> .... Categorization and identification
> itself *requires* selective non-invertibility: within-category differences
> must be ignored and diminished, while between-category differences must
> be selected and enhanced.
Well, that's the point I've been making. If non-invertibility
is essential to the way we process information, you can't say
non-invertibility would prevent a machine from emulating us.
Anybody can do hand-waving. To be convincing, abstract
reasoning must be rigidly self-consistent. Harnad's is not.
I haven't made any assertions as to what is possible. All
I'm saying is that Harnad has come nowhere near proving his
assertions, or even making clear what his assertions are.
M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1
------------------------------
Date: 15 Jun 87 01:43:55 GMT
From: berleant@sally.utexas.edu (Dan Berleant)
Subject: Re: The symbol grounding problem
It is interesting that some (presumably significant) visual processing
occurs by graded potentials without action potentials. Receptor cells
(rods & cones), 'horizontal cells' which process the graded output of
the receptors, and 'bipolar cells' which do further processing, use no
action potentials to do it. This seems to indicate the significance of
analog processing to vision.
There may also be significant invertibility at these early stages of
visual processing in the retina: One photon can cause several hundred
sodium channels in a rod cell to close. Such sensitivity suggests a need
for precise representation of visual stimuli which suggests the
representation might be invertible.
Furthermore, the retina cannot be viewed as a module, only loosely
coupled to the brain. The optic nerve, which does the coupling, has a
high bandwidth and thus carries much information simultaneously along
many fibers. In fact, the optic nerve carries a topographic
representation of the retina. To the degree that a topographic
representation is an iconic representation, the brain thus receives an
iconic representation of the visual field.
Furthermore, even central processing of visual information is
characterized by topographic representations. This suggests that iconic
representations are important to the later stages of perceptual
processing. Indeed, all of the sensory systems seem to rely on
topographic representations (particularly touch and hearing as well as
vision).
An interesting example in hearing is direction perception. Direction
seems to be, as I understand it, found by processing the difference in
time from when a sound reaches one ear to when it reaches the other, in
large part. The resulting direction is presumably an invertible
representation of that time difference.
Dan Berleant
UUCP: {gatech,ucbvax,ihnp4,seismo,kpno,ctvax}!ut-sally!berleant
ARPA: ai.berleant@r20.utexas.edu
------------------------------
Date: 14 Jun 87 15:03:51 GMT
From: harwood@cvl.umd.edu (David Harwood)
Subject: Re: The symbol grounding problem
In article <843@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
(... replying to Anders Weinstein ...who wonders "Where's the beef?" in
Steve Harnad's conceptual and terminological salad ...; uh - let me be first
to prophylactically remind us - lest there is any confusion and forfending
that he should perforce of intellectual scruple must need refer to his modest
accomplishments - Steve Harnad is editor of Behavioral and Brain Sciences,
and I am not, of course. We - all of us - enjoy reading such high-class
stuff...;-)
Anyway, Steve Harnad replies to A.W., re "Total Turing Tests",
behavior, and the (great AI) "symbol grounding problem":
>I think that this discussion has become repetitious, so I'm going to
>have to cut down on the words.
Praise the Lord - some insight - by itself, worthy of Pass of
the "Total Turing Test."
>... Our disagreement is not substantive.
>I am not a behaviorist. I am a methodological epiphenomenalist.
I'm not a behaviorist, you're not a behaviorist, he's not a
behaviorist too ... We are all methodological solipsists hereabouts
on this planet, having already, incorrigibly, failed the "Total Turing
Test" for genuine intergalactic First Class rational beings, but so what?
(Please, Steve - this is a NOT a test - I repeat - this is NOT a test of
your philosophical intelligence. It is an ACTUAL ALERT of your common
sense, not to mention, sense of humor. Please do not solicit BBS review of
this thesis...
>... Apart from behavioral considerations,
>there are also functional considerations: What kinds of internal
>processes (e.g., symbolic and nonsymbolic) look as if they might work?
>and why? and how? The grounding problem accordingly has functional aspects
>too. What are the right kinds of causal connections to ground a
>system? Yes, the test of successful grounding is the TTT, but that
>still leaves you with the problem of which kinds of connections are
>going to work. I've argued that top-down symbol systems hooked to
>transducers won't, and that certain hybrid bottom-up systems might. All
>these functional considerations concern how to ground symbols, they are
>distinct from (though ultimately, of course, dependent on) behavioral
>success, and they do have independent content.
>--
>
>Stevan Harnad (609) - 921 7771
>{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
>harnad%mind@princeton.csnet harnad@mind.Princeton.EDU
You know what is the real problem with your postings - it's
what I would call "the symbol grounding problem". You want to say obvious
things in the worst possible way, otherwise say abstract things in the
worst possible way.. And ignore what others say. Also, for purposes of
controversial public discussion, ignore scientific 'facts' (eg about
neurologic perceptual equivalence), and standard usage of scientific
terminology and interpretation of theories. (Not that these are sacrosanct.)
It seems to me that your particular "symbol grounding problem"
is indeed the the sine qua non of the Total Turing Test for "real"
philosophers of human cognition. As I said, we are all methodological
solipsists hereabouts. However, if you want AI funding from me - I want to
see what real computing system, using your own architecture and object code
of at least 1 megabytes, has been designed by you. Then we will see how
your "symbols" are actually grounded, using the standard, naive but effective
denotational semantics for the "symbols" of your intention, qua "methodological
epiphenomensist."
David Harwood
------------------------------
Date: 15 Jun 87 03:32:07 GMT
From: berleant@sally.utexas.edu (Dan Berleant)
Subject: Re: The symbol grounding problem
In article <835@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>We cannot objectively verify intrinsic meaningfulness. The Turing test
>is the only available criterion.
Yes, the Turing test is by definition subjective, and also subject to
variable results from hour to hour even from the same judge.
But I think I disagree that intrinsic meaningfulness cannot be
objectively verified. What about the model theory of logic?
Dan Berleant
UUCP: {gatech,ucbvax,ihnp4,seismo,kpno,ctvax}!ut-sally!berleant
ARPA: ai.berleant@r20.utexas.edu
------------------------------
End of AIList Digest
********************