AIList Digest            Monday, 29 Jun 1987      Volume 5 : Issue 158 

Today's Topics:
Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 27 Jun 87 01:09:41 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem: McCarthy's query


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:

> But a "physically analog" sensory process (as distinct from a digital
> one) can be approximately modeled (to within the noise) by a continuous
> transformation. The continuous approximation allows us to regard the
> analog transformation as image-forming (iconic). But only the
> continuous approximation is invertible.

I have no quarrel with this, in fact I make much the same point --
that iconic representations are approximate too -- in the chapter
describing the three kinds of representation. Is there any reason for
expecting I would object?

> the "hybrid" three-layer system... does not have a "symbol-cruncher
> hardwired to peripheral modules"
because there is a feature extractor
> (and classifier) in between. The main point is the presence or
> absence of the feature extractor... The symbol-grounding problem
> arises because the symbols are discrete, and therefore have to be
> associated with discrete objects or classes. Without the feature
> extractor, there would be no way to derive discrete objects from the
> sensory inputs. The feature extractor obviates the symbol-grounding
> problem.

The problem certainly is not just that of discrete symbols needing to pick
out discrete objects. You are vastly underestimating the problem of
sensory categorization, sensory learning, and the relation between
lower and higher-order categories. Nor is it obvious that symbol manipulation
can still be regarded as just symbol manipulation when the atomic symbols
are constrained to be the labels of sensory categories. That's a
bottom-up constraint, and symbolic AI normally expects to float down
onto its sensors top-down. Imagine if your "setq" statements were
constrained by what your elementary symbols were connected to, and their
respective causal interrelations with other nonsymbolic sensory representations
and their associated labels.
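
To make the intended contrast concrete, here is a minimal sketch (mine, not
part of the original posting; the class and detector names are hypothetical)
of atomic symbols that can only be bound to sensory category detectors,
rather than assigned freely the way a "setq" binds a variable:

    class GroundedLexicon:
        """Atomic symbols may only be defined as labels of sensory category detectors."""
        def __init__(self):
            self.detectors = {}    # atomic symbol -> detector over nonsymbolic input
            self.composites = {}   # composite symbol -> constituent symbols

        def ground_atom(self, symbol, detector):
            self.detectors[symbol] = detector

        def define(self, symbol, constituents):
            # A composite is admissible only if every constituent is already grounded.
            for c in constituents:
                if c not in self.detectors and c not in self.composites:
                    raise ValueError("ungrounded constituent: " + c)
            self.composites[symbol] = list(constituents)

    lex = GroundedLexicon()
    lex.ground_atom("striped", lambda x: x["stripes"] > 0)
    lex.ground_atom("horse", lambda x: x["horse_likeness"] > 0.5)
    lex.define("zebra", ["striped", "horse"])     # fine: both constituents grounded
    # lex.define("unicorn", ["horned", "horse"])  # would raise: "horned" never grounded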

> Why does Harnad say "invertibility is a necessary condition
> for iconic representations..., NOT for grounding"


Because the original statement of mine that you quote was a reply to a
query about whether ALL representations had to be invertible for grounding.
(It was accompanied by alleged counterexamples -- grounded but noninvertible
percepts.) My reply indicated that only iconic ones had to be invertible,
but that both iconic and categorical (noninvertible) ones were needed to
ground symbols.

> Position 1 [on the symbol grounding problem] is that the peripherals
> and the symbolic module have to be connected in the right way. Harnad's
> position is... a special case of position 1.

I'm afraid not. I don't think there will be independent peripheral
modules and symbolic modules suitably interconnected in the hybrid
device that passes the Total Turing Test. I think a lot of what we
consider cognition will be going on in the nonsymbolic iconic and categorical
systems (discrimination, categorization, sensory learning and
generalization) and that symbol manipulation will be constrained in
ways that don't leave it in any way analogous to the notion of an
independent functional module, operating on its own terms (as in
standard AI), but connected at some critical point with the
nonsymbolic modules. When I spoke earlier of the "connections" of the
atomic symbols I had in mind something much more complexly
interdigitated and interdependent than can be captured by anything
that remotely resembles position 1. Position 1 is simply AI's pious
hope that a pure "top-down" approach can expect to meet up with a
bottom-up one somewhere in between. Mine is not a special case of
this; it's a rival.

> "...and (2) connectionist nets already generate grounded "symbols." Is
> that a variant of Harnad's position, i.e., "(possibly connectionist)"?

No. In my model connectionistic processes are just one possible
candidate for the mechanism that finds the features that will reliably
pick out a learned category. They would just be a component in the
categorical representational system. But there are much more ambitious
connectionistic views than that, for example, that connectionism can
usurp the role of symbolic representations altogether or (worse) that
they ARE symbolic (in some yet to be established sense). As far as I'm
concerned, the latter would entail a double grounding problem for
connectionism, the first to ground its interpretation of its states as
symbolic states, and then to ground the interpretations of the
symbolic states themselves (which is the standard symbol grounding problem).
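
A minimal sketch of that limited role (not from the original posting; the
perceptron rule and toy data here are only stand-ins for whatever
connectionist mechanism does the feature-finding):

    import random

    def train_category_detector(examples, labels, epochs=100, lr=0.1):
        """Perceptron-style stand-in for a connectionist feature-finder.
        Returns an all-or-none detector for one learned category."""
        n = len(examples[0])
        w = [random.uniform(-0.1, 0.1) for _ in range(n)]
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(examples, labels):        # y = 1 (member) or 0 (non-member)
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - pred
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    # Toy feature vectors standing in for outputs of the sensory (iconic) stage.
    is_bird = train_category_detector(
        [[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 1, 0]],
        [1, 1, 0, 0])
    # The detector's yes/no output is what gets the label; the net is only the
    # mechanism that found the features picking out the category.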

--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 27 Jun 87 14:32:42 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem: Correction re.
Approximationism


In responding to Cugini and Brilliant I misinterpreted a point that
the former had made and the latter reiterated. It's a point that's
come up before: What if the iconic representation -- the one that's
supposed to be invertible -- fails to preserve some objective property
of the sensory projection? For example, what if yellow and blue at the
receptor go into green at the icon? The reply is that an analog
representation is only analog in what it preserves, not in what it fails
to preserve. Icons are hence approximate too. If all retinal squares,
irrespective of color, go into gray icons, I have icons of the
squareness, but not of the colors. Or, to put it another way, the
grayness is approximate as between all the actual colors (and gray).

There is no requirement that all the features of the sensory
projection be preserved in icons; just that some of them should be --
enough to subserve our discrimination capacities. This is analogous to
the fact that the sensory projection itself need not (and does not,
and cannot) preserve all of the properties of the distal object. To
those it fails to preserve -- and that we cannot detect by instruments
or inference -- we are fated to remain "blind." But none of this
information loss in either sensory projections or icons (or, for that
matter, categorical representations) compromises groundedness. It just
means that our representations are doomed to be approximations.
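
The gray-square example can be put in a few lines of code (a sketch of my
own, not part of the posting): the transformation is analog, and hence
invertible, only with respect to the property it preserves.

    def to_icon(sensory_projection):
        # Keep the square's size exactly; collapse every color to gray.
        return {"side": sensory_projection["side"], "shade": "gray"}

    def invert_icon(icon):
        # Exact on the preserved property (squareness/size), approximate on color.
        return {"side": icon["side"], "color": "indeterminate (rendered gray)"}

    yellow_square = {"side": 3.0, "color": "yellow"}
    blue_square = {"side": 3.0, "color": "blue"}
    # Discrimination by size survives the transformation; discrimination by color
    # does not, because both projections map onto one and the same icon.
    assert to_icon(yellow_square) == to_icon(blue_square)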

Finally, it must be recalled that my grounding scheme is proposed in a
framework of methodological epiphenomenalism: It only tries to account
for performance capacity (discrimination, identification,
description), not qualitative experience. So "what it is like to see
yellow" is not part of my evidential burden: just what it takes to
discriminate, identify and describe colors as those who see yellow do...
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 27 Jun 87 13:22:19 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <917@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> ... blurred the distinction between the
> following: (a) the many all-or-none categories that are the real burden
> for an explanatory theory of categorization (a penguin, after all, be it
> ever so atypical a bird, ... is, after all, indeed a bird, and we know
> it, and can say so, with 100% accuracy every time, ....
> ... and (b) true "
graded" categories such as "big," "intelligent," ...

> ......
> "
games" are either (i) an all-or-none category, i.e., there is a "right" or
> "
wrong" of the matter, and we are able to sort accordingly, ...
> ... or (ii) "
games"
> are truly a fuzzy category, in which membership is arbitrary,
> uncertain, or a matter of degree. But if the latter, then games are
> simply not representative of the garden-variety all-or-none
> categorization capacity that we exercise when we categorize most
> objects, such as chairs, tables, birds....

Now, much of this discussion is out of my field, but (a) I would like
to share in the results, and (b) I understand membership in classes
like "
bird" and "chair."

I learned recently that I can't categorize chairs with 100% accuracy.
A chair used to be a thing that supported one person at the seat and
the back, and a stool had no back support. Then somebody invented a
thing that supported one person at the seat and the knees, but not the
back, and I didn't know what it was. As far as my sensory
categorization was concerned at the time, its distinctive features were
inadequate to classify it. Then somebody told me it was a chair. Its
membership in the class "
chair" was arbitrary. Now a "chair" in my
lexicon is a thing that supports the seat and either the back or the
knees.

Actually, I think I perceive most chairs by recognizing the object
first as a familiar thing like a kitchen chair, a wing chair, etc., and
then I name it with the generic name "
chair." I think Harnad would
recognize this process. The class is defined arbitrarily by inclusion
of specific members, not by features common to the class. It's not so
much a class of objects, as a class of classes....

If that is so, then "
bird" as a categorization of "penguin" is purely
symbolic, and hence is arbitrary, and once the arbitrariness is defined
out, that categorization is a logical, 100% accurate, deduction. The
class "
penguin" is closer to the primitives that we infer inductively
from sensory input.

But the identification of "
penguin" in a picture, or in the field, is
uncertain because the outlines may be blurred, hidden, etc. So there
is no place in the pre-symbolic processing of sensory input where 100%
accuracy is essential. (This being so, there is no requirement for
invertibility.)

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 28 Jun 87 17:52:03 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: The symbol grounding problem: Against Rosch & Wittgenstein


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:

> Why require 100% accuracy in all-or-none categorizing?... I learned
> recently that I can't categorize chairs with 100% accuracy.

This is a misunderstanding. The "
100% accuracy" refers to the
all-or-none-ness of the kinds of categories in question. The rival
theories in the Roschian tradition have claimed that many categories
(including "
bird" and "chair") do not have "defining" features. Instead,
membership is either fuzzy or a matter of degree (i.e., percent), being
based on degree of similarity to a prototype or to prior instances, or on
"
family resemblances" (as in Wittgenstein on "games"), etc.. I am directly
challenging this family of theories as not really providing a model for
categorization at all. The "
100% accuracy" refers to the fact that,
after all, we do succeed in performing all-or-none sorting and
labeling, and that membership assignment in these categories is not
graded or a matter of degree (although our speed and "typicality
ratings" may be).

I am not, of course, claiming that noise does not exist and that errors
may not occur under certain conditions. Perhaps I should have put it this way:
Categorization performance (with all-or-none categories) is highly reliable
(close to 100%) and MEMBERSHIP is 100%. Only speed/ease of categorization and
typicality ratings are a matter of degree. The underlying representation must
hence account for all-or-none categorization capacity itself first,
then worry about its fine-tuning.

This is not to deny that even all-or-none categorization may encounter
regions of uncertainty. Since ALL category representations in my model are
provisional and approximate (relative to the context of confusable
alternatives that have been sampled to date), it is always possible that
the categorizer will encounter an anomalous instance that he cannot classify
according to his current representation. The representation must
hence be revised and updated under these conditions, if ~100% accuracy
is to be re-attained. This still does not imply that membership is
fuzzy or a matter of degree, however, only that the (provisional
"
defining") features that will successfully sort the members must be revised
or extended. The approximation must be tightened. (Perhaps this is
what happened to you with your category "
chair.") The models for the
true graded (non-all-or-none) and fuzzy categories are, respectively,
"
big" and "beautiful."

> The class ["
chair," "bird"] is defined arbitrarily by inclusion
> of specific members, not by features common to the class. It's not so
> much a class of objects, as a class of classes.... If that is so,
> then "
bird" as a categorization of "penguin" is purely symbolic, and
> hence is arbitrary, and once the arbitrariness is defined
> out, that categorization is a logical, 100% accurate, deduction.
> The class "
penguin" is closer to the primitives that we infer
> inductively [?] from sensory input... But the identification of
> "
penguin" in a picture, or in the field, is uncertain because the
> outlines may be blurred, hidden, etc. So there is no place in the
> pre-symbolic processing of sensory input where 100% accuracy is
> essential. (This being so, there is no requirement for invertibility.)

First, most categories are not arbitrary. Physical and ecological
constraints govern them. (In the case of "chair," this includes the
Gibsonian "affordance" of whether they're something that can be sat
upon.) One of the constraints may be social convention (as in
stipulations of what we call what, and why), but for a
categorizer that must learn to sort and label correctly, that's just
another constraint to be satisfied. Perhaps what counts as a "game" will
turn out to depend largely on social stipulation, but that does not make
its constraints on categorization arbitrary: Unless we stipulate that
"
gameness" is a matter of degree, or that there are uncertain cases
that we have no way to classify as "
game" or "nongame," this category
is still an all-or-none one, governed by the features we stipulate.
(And I must repeat: Whether or not we can introspectively report the features
we are actually using is irrelevant. As long as reliable, consensual,
all-or-none categorization performance is going on, there must be a set of
underlying features governing it -- both with sensory and more
abstract categories. The categorization theorist's burden is to infer
or guess what those features really are.)

Nor is "
symbolic" synonymous with arbitrary. In my grounding scheme,
for example, the primitive categories are sensory, based on
nonsymbolic representations. The primitive symbols are then the names
of sensory categories; these can then go on to enter into combinations
in the form of symbolic descriptions. There is a very subtle "entry-point"
problem in investigating this bottom-up quasi-hierarchy, however:
Is a given input sensory or symbolic? And, somewhat independently, is
its categorization mediated by a sensory representation or a symbolic
one (or both, since there are complicated interrelations [especially
inclusion relations] between them, including redundancies and sometimes
even incoherencies)? The Roschian experimental and theoretical line of
work I am criticizing does not attempt to sort any of this out, and no
wonder, because it is not really modeling categorization performance
in the first place, just its fine tuning.

As to invertibility: I must again repeat, an iconic representation is
only analog in the properties of the sensory projection that it
preserves, not those it fails to preserve. Just as our successful
all-or-none categorization performance dictates that a reliable
feature set must have been selected, so our discrimination performance
dictates the minimal resolution capacity and invertibility there must be
in our iconic representations.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: Sun 28 Jun 87 15:27:22-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Subject: Fuzzy Symbolism

From: mind!harnad@princeton.edu (Stevan Harnad)

Finally, and perhaps most important: In bypassing the problem of
categorization capacity itself -- i.e., the problem of how devices
manage to categorize as correctly and successfully as they do, given
the inputs they have encountered -- in favor of its fine tuning, this
line of research has unhelpfully blurred the distinction between the
following: (a) the many all-or-none categories that are the real burden
for an explanatory theory of categorization (a penguin, after all, be it
ever so atypical a bird, and be it ever so time-consuming for us to judge
that it is indeed a bird, is, after all, indeed a bird, and we know
it, and can say so, with 100% accuracy every time, irrespective of
whether we can successfully introspect what features we are using to
say so) and (b) true "
graded" categories such as "big," "intelligent,"
etc. Let's face the all-or-none problem before we get fancy...

Is a mechanical rubber penguin a penguin? Is a dead or dismembered
penguin a penguin? How about a genetically damaged or altered penguin?
When does a penguin embryo become a penguin? When does it become a
bird? I think your example depends on circularities inherent in our
use of natural language. I can't unambiguously define the class of
penguins, so how can I be 100% certain that every penguin is a bird?
If, on the other hand, we are dealing only in abstractions, and the
only "
penguin" involved is a idealized living adult penguin bird, then
the question is a tautology. We would then be saying that we are 100%
certain that our abstraction satisfies its own sufficient conditions --
and even that could change if scientists someday discover incontrovertible
evidence that penguins are really fish.

In short, every category is a graded one except for those that we
postulate to be exact as part of their defining characteristics.


After writing the above, I saw the following reply:

I am not, of course, claiming that noise does not exist and that errors
may not occur under certain conditions. Perhaps I should have put it
this way: Categorization performance (with all-or-none categories) is
highly reliable (close to 100%) and MEMBERSHIP is 100%. Only
speed/ease of categorization and typicality ratings are a matter of
degree. The underlying representation must hence account for
all-or-none categorization capacity itself first, then worry about its
fine-tuning.

This is not to deny that even all-or-none categorization may encounter
regions of uncertainty. Since ALL category representations in my model are
provisional and approximate (relative to the context of confusable
alternatives that have been sampled to date), it is always possible that
the categorizer will encounter an anomalous instance that he cannot classify
according to his current representation. The representation must
hence be revised and updated under these conditions, if ~100% accuracy
is to be re-attained. This still does not imply that membership is
fuzzy or a matter of degree, however, only that the (provisional
"
defining") features that will successfully sort the members must be revised
or extended. The approximation must be tightened.

You are entitled to such an opinion, of course, but I do not accept the
position as proven. We do, of course, sort and categorize objects when
forced to do so. At the point of observable behavior, then, some kind
of noninvertible or symbolic categorization has taken place. Such
behavior, however, is distinct from any of the internal representations
that produce it. I can carry fuzzy and even conflicting representations
until -- and often long after -- the behavior is initiated. Even at
the instant of commitment, my representations need be unambiguous only
in the implicit sense that one interpretation is momentarily stronger
than the other -- if, indeed, the choice is not made at random.

It may also be true that I do reduce some representations to a single
neural firing or to some other unambiguous event -- e.g., when storing
a memory. I find this unlikely as a general model. Coarse coding,
graded or frequency encodings, and widespread activation seem better
models of what's going on. Symbolic reasoning exists in pure form
only on the printed page; our mental manipulation even of abstract
symbols is carried out with fuzzy reasoning apparatus.
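
A rough sketch of what coarse coding might look like (my illustration, not
Laws's; the tuning curves and threshold are arbitrary): the quantity is
carried by graded, overlapping activations, and a discrete verdict appears
only when behavior forces a choice.

    import math

    CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]   # preferred values of five broadly tuned units
    WIDTH = 0.3

    def encode(x):
        # Graded, overlapping activations: no single unit "is" the representation.
        return [math.exp(-((x - c) / WIDTH) ** 2) for c in CENTERS]

    def forced_choice(activations, threshold=0.5):
        # Commit only when pressed: take whichever interpretation is momentarily stronger.
        best = max(range(len(activations)), key=lambda i: activations[i])
        return CENTERS[best] if activations[best] >= threshold else None

    code = encode(0.62)            # fuzzy, distributed representation
    verdict = forced_choice(code)  # an all-or-none outcome only at the point of behavior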

-- Ken Laws

------------------------------

End of AIList Digest
********************
