AIList Digest            Monday, 29 Jun 1987      Volume 5 : Issue 156 

Today's Topics:
Theory - Symbol Grounding and Invertibility

----------------------------------------------------------------------

Date: Mon, 22 Jun 87 10:19:59 PDT
From: Neil Hunt <spar!hunt@decwrl.dec.com>
Subject: Symbol grounding and invertibility.

John Cugini <Cugini@icst-ecf.arpa> writes:

> What if there were a few-to-one transformation between the skin-level
> sensors ...
> My example was to suppose that #1:
> a combination of both red and green retinal receptors and #2 a yellow
> receptor BOTH generated the same iconic yellow.

We humans see the world (to a first order at least) through red, green and
blue receptors. We are thus unable to distinguish between light of a yellow
frequency, and a mixture of light of red and green frequencies, and we assign
to them a single token - yellow. However, if our visual apparatus were
equipped with yellow receptors as well, then these two input stimuli
would *appear* quite different, as indeed they are. In that case I think
it highly unlikely that we would use the same symbol to represent the
two cases.
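
To make the point concrete, here is a toy sketch (in Python; the receptor
sensitivity numbers are invented for illustration, not real photoreceptor
data). With only red, green and blue receptors, pure yellow light and a
red+green mixture produce identical responses; adding a yellow-tuned receptor
makes the two stimuli distinguishable.

    # Illustrative only: toy receptor sensitivities, not measured data.
    # A receptor's response is the dot product of its sensitivity curve with
    # the light spectrum, sampled in four bands: (red, yellow, green, blue).

    def responses(spectrum, receptors):
        """Return one response value per receptor for a given light spectrum."""
        return tuple(sum(s * w for s, w in zip(sens, spectrum))
                     for sens in receptors.values())

    pure_yellow    = (0.0, 1.0, 0.0, 0.0)   # energy in the yellow band only
    red_plus_green = (0.5, 0.0, 0.5, 0.0)   # a mixture of red and green light

    # Three-receptor eye: nothing peaks in the yellow band, so yellow light
    # merely stimulates the red and green receptors partially.
    trichromat = {
        'red':   (1.0, 0.5, 0.0, 0.0),
        'green': (0.0, 0.5, 1.0, 0.0),
        'blue':  (0.0, 0.0, 0.0, 1.0),
    }

    # Four-receptor eye: add a receptor tuned to the yellow band.
    tetrachromat = dict(trichromat, yellow=(0.0, 1.0, 0.0, 0.0))

    print(responses(pure_yellow, trichromat))      # (0.5, 0.5, 0.0)
    print(responses(red_plus_green, trichromat))   # (0.5, 0.5, 0.0)      -- identical
    print(responses(pure_yellow, tetrachromat))    # (0.5, 0.5, 0.0, 1.0)
    print(responses(red_plus_green, tetrachromat)) # (0.5, 0.5, 0.0, 0.0) -- now distinct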

Consider a species with only two classes of colour receptors, low
frequency and high frequency, roughly equivalent to our concepts of red
and blue, but with no middle frequency receptors (corresponding to the
human concept of green). Creatures of such a species, when shown pure
green light, would receive reduced levels from the receptors on each
side of the green frequency, thus receiving some combination of blue
and red signals. This would be indistinguishable from a mixture of blue
and red, which we call magenta. Such creatures might then reason
(incorrectly) about the possibility of having a middle frequency
receptor, supposing that there would still be a many-to-one mapping
between case #1, pure green light, and case #2, a mixture of red and
blue, and wonder how that affects questions of invertibility. As we
humans know, if these creatures had such a visual capability, they
would invent a new symbol distinguishing magenta from pure green, and
there would be no many-to-one mapping.

> Clearly this iconic representation is non-invertible back out to the
> sensory surfaces, but intuitively it seems like it would be grounded
> nonetheless - how about it?

The fallacy is that the iconic representation described is indeed
non-invertible, but it is also clearly not grounded, since if we had
yellow receptors we would be able to perceive a difference between the
two stimuli, and would require a new symbol for one of the colours.

Neil/.


----- End Forwarded Message -----

------------------------------

Date: 21 Jun 87 22:55:09 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <6670@diamond.BBN.COM>, aweinste@Diamond.BBN.COM (Anders
Weinstein) writes, with reference to article <861@mind.UUCP>
harnad@mind.UUCP (Stevan Harnad):
>
> Some of the things you say also suggest that you're attempting to resuscitate
> a form of classical empiricist sensory atomism, where the "atomic" symbols
> refer to sensory categories acquired "by acquaintance" and the meaning of
> complex symbols is built up from the atoms "by description". This approach
> has an honorable history in philosophy; unfortunately, no one has ever been
> able to make it work. In addition to the above considerations, the main
> problems seem to be: first, that no principled distinction can be made
> between the simple sensory concepts and the complex "theoretical" ones; and
> second, that very little that is interesting can be explicitly defined in
> sensory terms (try, for example, "chair").
>
I hope none of us are really trying to resuscitate classical philosophies,
because the object of this discussion is to learn how to use modern
technologies. To define an interesting object in sensory terms requires
an intermediary module between the sensory system and the symbolic system.

With a chair in the visual sensory field, the system will use hard-coded
nonlinear (decision-making) techniques to identify boundaries and shapes
of objects, and identify the properties that are invariant to rotation
and translation. A plain wooden chair and an overstuffed chair will be
different objects in these terms. But the system might also learn to
identify certain types of objects that move, i.e., those we call people.
If it notices that people assume the same position in association with
both chair-objects, it could decide to use the same category for both.

The key to this kind of classification is that the chair is not defined in
explicit sensory terms but in terms of filtered sensory input.
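
As a toy sketch of that idea (in Python; the feature names and objects are
invented for illustration), the symbol for "chair" is attached not to raw
sensory data but to the output of a feature extractor, so two sensorily very
different objects can fall under one label because they share the features
that matter.

    # Illustrative sketch only: hypothetical features, not a real vision system.
    # Stage 1 (hard-coded): reduce each object to rotation/translation-invariant,
    # behaviourally relevant features.
    # Stage 2 (learned): give one symbol to all objects with the same features.

    def extract_features(obj):
        """Stand-in for the hard-coded sensory front end."""
        return (obj['has_seat_surface'], obj['people_sit_on_it'])

    def categorize(objects):
        """Assign the same category to objects with identical filtered features."""
        categories = {}
        for name, obj in objects.items():
            categories.setdefault(extract_features(obj), []).append(name)
        return categories

    scene = {
        'plain_wooden_chair': {'has_seat_surface': True,  'people_sit_on_it': True},
        'overstuffed_chair':  {'has_seat_surface': True,  'people_sit_on_it': True},
        'table':              {'has_seat_surface': False, 'people_sit_on_it': False},
    }

    print(categorize(scene))
    # both chairs land in one category, the table in another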

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

P.S. Sorry for the double posting of my previous article.

------------------------------

Date: 20 Jun 87 02:17:09 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <861@mind.UUCP>, harnad@mind.UUCP writes:
> marty1@houdi.UUCP (M.BRILLIANT) asks:
>
> > what do you think is essential: (A) literally analog transformation,
> > (B) invertibility, or (C) preservation of significant relational
> > functions?
>
Let me see if I can correctly rephrase his answer:

(i) "discrimination" (pairwise same/different judgments) he associates
with iconic ("analog") representations, which he says have to be
invertible, and will ordinarily be really analog because "dedicated"
digital equivalents will be too complex.

(ii) for "identification" or "categorization" (sorting and labeling of
objects), he says only distinctive features need be extracted from the
sensory projection; this process is not invertible.

(iii) for "conscious problem-solving," etc., he says relation-preserving
symbolic representations would be optimal, if they are not "autonomous
(modular)"
but rather are grounded by deriving their atomic symbols
through the categorization process above.

(iv) to pass the Total Turing Test he wants all of the above, tied
together in the sequence described.

I agree with this formulation in most of its terms. But some of the
terms are confusing, in that if I accept what I think are good
definitions, I don't entirely agree with the statements above.

"Invertible/Analog": The property of invertibility is easy to visualize
for continuous functions. First, continuous functions are what I would
call "analog" transformations. They are at least locally image-forming
(iconic). Then, saying a continuous transformation is invertible, or
one-to-one, means it is monotonic, like a linear transformation, rather
than many-to-one like a parabolic transformation. That is, it is
unambiguously iconic.
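
A small numerical illustration of the distinction (in Python; the particular
functions are arbitrary examples): a monotonic continuous transformation is
one-to-one and can be inverted exactly, while a parabolic one maps two inputs
to the same output, so no inverse can tell them apart.

    # Illustrative only: arbitrary example functions.

    def monotonic(x):          # one-to-one, like a linear transformation
        return 2 * x + 1

    def monotonic_inverse(y):
        return (y - 1) / 2

    assert monotonic_inverse(monotonic(3)) == 3      # exact recovery

    def parabolic(x):          # many-to-one
        return x * x

    assert parabolic(-2.0) == parabolic(2.0) == 4.0  # two inputs, one output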

It might be argued that physical sensors can be ambiguously iconic,
e.g., an object seen in a half-silvered mirror. Harnad would argue
that the ambiguity is inherent in the physical scene, and is not
dependent on the sensor. I would agree with that if no human sensory
system ever gave ambiguous imaging of unambiguous objects. What about
the ambiguity of stereophonic location of sound sources? In that case
the imaging (i) is unambiguous; only the perception (ii) is ambiguous.

But physical sensors are also noisy. In mathematical terms, that noise
could be modeled as discontinuity, as many-to-one, as one-to-many, or
combinations of these. The noisy transformation is not invertible.
But a "physically analog" sensory process (as distinct from a digital
one) can be approximately modeled (to within the noise) by a continuous
transformation. The continuous approximation allows us to regard the
analog transformation as image-forming (iconic). But only the
continuous approximation is invertible.
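
A toy illustration of that last point (in Python; the sensor model and noise
level are invented): the noisy process itself cannot be inverted, but
inverting its continuous approximation recovers the input to within the noise.

    import random

    # Illustrative only: a toy "sensor" with additive noise.
    def sensor(x):
        return 2 * x + 1 + random.gauss(0, 0.01)   # the noisy, non-invertible process

    def model_inverse(y):
        return (y - 1) / 2                         # inverse of the continuous model

    x = 3.0
    x_recovered = model_inverse(sensor(x))
    assert abs(x_recovered - x) < 0.1              # approximate, not exact, inversion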

"Autonomous/Modular": The definition of "modular" is not clear to me.
I have Harnad's definition "not analogous to a top-down, autonomous
symbol-crunching module ... hardwired to peripheral modules."
The terms in the definition need defining themselves, and I think there
are too many of them.

I would rather look at the "hybrid" three-layer system and say it does
not have a "symbol-cruncher hardwired to peripheral modules" because
there is a feature extractor (and classifier) in between. The main
point is the presence or absence of the feature extractor.

The symbol-grounding problem arises because the symbols are discrete,
and therefore have to be associated with discrete objects or classes.
Without the feature extractor, there would be no way to derive discrete
objects from the sensory inputs. The feature extractor obviates the
symbol-grounding problem. I consider the "symbol-cruncher hardwired to
peripheral modules" to be not only a straw man but a dead horse.
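
A schematic sketch of that three-layer arrangement (in Python; every layer is
a trivial placeholder, not a proposal for how to build it): continuous sensory
input passes through a feature extractor that yields discrete classes, and
only those discrete classes reach the symbol system.

    # Schematic only: each layer is a placeholder.

    def sensory_projection(scene):
        """Layer 1: continuous, analog-like sensory input (here just numbers)."""
        return [float(v) for v in scene]

    def feature_extractor(projection):
        """Layer 2: reduce the continuous projection to discrete classes."""
        return ['large' if v > 0.5 else 'small' for v in projection]

    def symbol_system(categories):
        """Layer 3: manipulates discrete symbols; it never touches raw input."""
        return {sym: categories.count(sym) for sym in set(categories)}

    print(symbol_system(feature_extractor(sensory_projection([0.9, 0.2, 0.7]))))
    # {'large': 2, 'small': 1} -- the symbols refer to classes the middle layer derived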

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 26 Jun 87 04:38:02 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


John Cugini <Cugini@icst-ecf.arpa> on ailist@stripe.sri.com writes:

> What if there were a few-to-one transformation between the skin-level
> sensors (remember Harnad proposes "skin-and-in" invertibility
> as being necessary for grounding) and the (somewhat more internal)
> iconic representation. My example was to suppose that #1:
> a combination of both red and green retinal receptors and #2 a yellow
> receptor BOTH generated the same iconic yellow.
> Clearly this iconic representation is non-invertible back out to the
> sensory surfaces, but intuitively it seems like it would be grounded
> nonetheless - how about it?

Invertibility is a necessary condition for iconic representation, not
for grounding. Grounding symbolic representations (according to my
hypothesis) requires both iconic and categorical representations. The
latter are selective, many-to-few, invertible only in the features
they pick out and, most important, APPROXIMATE (e.g., as between
red-green and yellow in your example above). This point has by now
come up several times...
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 26 Jun 87 05:07:40 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem: McCarthy's query


In article 208 of comp.ai.digest: JMC@SAIL.STANFORD.EDU (John McCarthy)
asks:

> I imagine that the alleged point at issue and a few of the positions
> taken could be summarized for the benefit of those of us whose
> subjective probability that there is a real point at issue is too
> low to motivate studying the entire discussion but high enough to
> motivate reading a summary.

The point at issue concerns how symbols in a symbol-manipulative
approach to the modeling of mind can be grounded in something other
than more symbols so that their meanings and their connections to
objects can be independent of people's interpretations of them. One of
the positions taken was that connecting a purely symbolic module to
peripheral (transducer/effector) modules in the right way should be
all you need to ground the symbols. I suggested that all this is
likely to yield is more of the toy models that symbolic AI has produced
until now. To get human-scale (Total Turing Test) performance
capacity, a bottom-up hybrid nonsymbolic/symbolic system may be
needed, one in which the elementary symbols are the names of sensory
categories picked out by inductive (possibly connectionist) feature-filters
(categorical representations) and invertible analogs of sensory projections
(iconic representations). This model is described in "Categorical
Perception: The Groundwork of Cognition" (Cambridge University Press
1987, S. Harnad, ed., ISBN 0-521-26758-7). Other alternatives that have been
mentioned by others in the discussion included: (1) symbol-symbol "grounding"
is already enough and (2) connectionist nets already generate grounded
"symbols." If you want the entire file, I've saved it all...
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 26 Jun 87 17:19:29 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <914@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> Invertibility is a necessary condition for iconic representation, not
> for grounding. Grounding symbolic representations (according to my
> hypothesis) requires both iconic and categorical representations...

Syllogism:
(a) grounding ... requires ... iconic ... representation....
(b) invertibility is ... necessary ... for iconic representation.
(c) hence, grounding must require invertibility.

Why then does harnad say "invertibility is a necessary condition
for ..., NOT for grounding" (caps mine, of course)?

This discussion is getting hard to follow. Does it have to be carried
on simultaneously in both comp.ai and comp.cog-eng? Could harnad, who
seems to be the major participant, pick one?

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 26 Jun 87 18:03:26 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem: McCarthy's query

Will the proponents of the various views described below, and those
whose relevant views have not been described below, please stand up?

In article <915@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> In article 208 of comp.ai.digest: JMC@SAIL.STANFORD.EDU (John McCarthy)
> asks:
>
> > I imagine that the alleged point at issue and a few of the positions
> > taken could be summarized .....
>
> The point at issue concerns how symbols in a symbol-manipulative
> approach to the modeling of mind can be grounded in something other
> than more symbols so that their meanings and their connections to
> objects can be independent of people's interpretations of them.

> ..... One of
> the positions taken was that connecting a purely symbolic module to
> peripheral (transducer/effector) modules IN THE RIGHT WAY should be
> all you need to ground the symbols.

Caps mine. Position 1 is that the peripherals and the symbolic module
have to be connected in the right way. Harnad's position is that

> .... a bottom-up hybrid nonsymbolic/symbolic system may be
> needed, one in which the elementary symbols are the names of sensory
> categories picked out by inductive (possibly connectionist) feature-filters
> (categorical representations) and invertible analogs of sensory projections
> (iconic representations).....

This looks like a way to connect peripherals to a symbolic module. To
the extent that I understand it, I like it, except for the invertibility
condition. If it's the right way, it's a special case of position 1.
Harnad has called the "right way" of position 1 "top-down,"
"hard-wired," and other names, to distance himself from it. I'm not
sure there are any real proponents of position 1 in such a narrow
sense. I support position 1 in the wide sense, and I think Harnad does.

> ..... Other alternatives that have been
> mentioned by others in the discussion included: (1) symbol-symbol "grounding"
> is already enough ....

They don't care about the problem, so either they or we can go away.
They (and I) want this discussion to go to another newsgroup.

> ..... and (2) connectionist nets already generate grounded "symbols."

Is that a variant of Harnad's position, i.e., "(possibly connectionist)"?

I think the real subject of discussion is the definition of some of the
technical terms in Harnad's position, and the identification of which
elements are critical and which might be optional. Might some of the
disagreement disappear if the definitions were more concrete?

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************
