AIList Digest           Wednesday, 1 Jul 1987     Volume 5 : Issue 163 

Today's Topics:
Theory - The Symbol Grounding Problem & Graded Categories

----------------------------------------------------------------------

Date: 28 Jun 87 23:56:43 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem....

In article <919@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:
> ......
> > .... The feature extractor obviates the symbol-grounding
> > problem.
>
> ..... You are vastly underestimating the problem of
> sensory categorization, sensory learning, and the relation between
> lower and higher-order categories. Nor is it obvious that symbol manipulation
> can still be regarded as just symbol manipulation when the atomic symbols
> are constrained to be the labels of sensory categories....

I still think we're having more trouble with terminology than we
would have with the concepts if we understood each other. To
get a little more concrete, how about walking through what a machine
might do in perceiving a chair?

I was just looking at a kitchen chair, a brown wooden kitchen
chair against a yellow wall, in side light from a window. Let's
let a machine train its camera on that object. Now either it
has a mechanical array of receptors and processors, like the
layers of cells in a retina, or it does a functionally
equivalent thing with sequential processing. What it has to do
is compare the brightness of neighboring points to find places
where there is contrast, find contrast in contiguous places so
as to form an outline, and find closed outlines to form objects.
There are some subtleties needed to find partly hidden objects,
but I'll just assume they're solved. There may also be an
interpretation of shadow gradations to perceive roundness.
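To make that first step concrete (compare neighboring brightnesses,
collect contiguous contrast into candidate outlines), here is a rough
Python sketch. The toy grid, the threshold, and the grouping scheme are
all invented for illustration, not a claim about what the retina-like
array actually computes.

THRESHOLD = 50  # minimum brightness difference to count as "contrast"

# 6x6 toy brightness grid: a bright patch (the "chair seat") on a dark wall
image = [
    [10, 10, 10, 10, 10, 10],
    [10, 200, 200, 200, 10, 10],
    [10, 200, 200, 200, 10, 10],
    [10, 200, 200, 200, 10, 10],
    [10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10],
]

def contrast_points(img, threshold=THRESHOLD):
    """Return pixels whose brightness differs sharply from a neighbor."""
    rows, cols = len(img), len(img[0])
    edges = set()
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):   # compare right and down neighbors
                r2, c2 = r + dr, c + dc
                if r2 < rows and c2 < cols and abs(img[r][c] - img[r2][c2]) >= threshold:
                    edges.add((r, c))
                    edges.add((r2, c2))
    return edges

def group_outlines(edges):
    """Group contiguous contrast points into candidate outlines (connected components)."""
    outlines, unvisited = [], set(edges)
    while unvisited:
        stack = [unvisited.pop()]
        component = set(stack)
        while stack:
            r, c = stack.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    p = (r + dr, c + dc)
                    if p in unvisited:
                        unvisited.remove(p)
                        component.add(p)
                        stack.append(p)
        outlines.append(component)
    return outlines

if __name__ == "__main__":
    for i, outline in enumerate(group_outlines(contrast_points(image))):
        print("candidate object %d: %d outline points" % (i, len(outline)))

Closed outlines, partly hidden objects, and shadow gradations are
exactly the parts skipped here.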

Now the machine has the outline of an object in 2 dimensions,
and maybe some clues to the 3rd dimension. There are CAD
programs that, given a complete description of an object in
3D, can draw any 2D view of it. How about reversing this
essentially deductive process to inductively find a 3D form that
would give rise to the 2D view the machine just saw? Let the
machine guess that most of the odd angles in the 2D view are
really right angles in 3D. Then, if the object is really
unfamiliar, let the machine walk around the chair, or pick it
up and turn it around, to refine its hypothesis.
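A rough sketch of that inversion, under invented assumptions (an
orthographic camera, a flat square seat, a single tilt parameter):
guess that the form is really rectangular in 3D and search for the
viewpoint whose projection best reproduces the 2D view the machine saw.

import math

# Observed 2D outline of the seat: shorter front-to-back than side-to-side.
# Guess that in 3D it is really a rectangle with right angles, i.e. a flat
# 1.0 x 1.0 seat seen at a tilt, and search for the tilt whose projection
# best matches the view.  Sizes and camera model are invented.
observed = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.5), (0.0, 0.5)]

SEAT_WIDTH, SEAT_DEPTH = 1.0, 1.0   # hypothesized true 3D rectangle

def project(width, depth, tilt):
    """Orthographic projection of a flat width x depth rectangle tilted by `tilt` radians."""
    d = depth * math.cos(tilt)      # depth foreshortens with the tilt
    return [(0.0, 0.0), (width, 0.0), (width, d), (0.0, d)]

def mismatch(view_a, view_b):
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(view_a, view_b))

best_tilt = min((math.radians(t) for t in range(90)),
                key=lambda t: mismatch(project(SEAT_WIDTH, SEAT_DEPTH, t), observed))
print("best guess: seat tilted about %.0f degrees from the camera" % math.degrees(best_tilt))

Walking around the chair, or picking it up, would simply contribute
more observed views to the mismatch score.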

Now the machine has a form. If the form is still unfamiliar,
let it ask, "What's that, Daddy?" Daddy says, "That's a chair."
The machine files that information away. Next time it sees a
similar form it says "Chair, Daddy, chair!" It still has to
learn about upholstered chairs, but give it time.
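A minimal sketch of that filing-away step, with made-up feature
vectors and a made-up similarity tolerance:

# Forms are reduced to a small feature vector; an unfamiliar form gets its
# label from the teacher, and a similar form seen later gets the stored label.

def similarity(a, b):
    return -sum((x - y) ** 2 for x, y in zip(a, b))   # higher is more similar

class LabelLearner:
    def __init__(self, tolerance=-0.5):
        self.memory = []              # list of (feature_vector, label)
        self.tolerance = tolerance

    def see(self, form, teacher=None):
        if self.memory:
            features, label = max(self.memory, key=lambda m: similarity(m[0], form))
            if similarity(features, form) >= self.tolerance:
                return "%s, Daddy, %s!" % (label.capitalize(), label)
        if teacher is None:
            return "What's that, Daddy?"
        self.memory.append((form, teacher))    # file the information away
        return "(learned: %s)" % teacher

learner = LabelLearner()
kitchen_chair = (4.0, 1.0, 0.0)   # e.g. legs, back, upholstery -- invented features
similar_chair = (4.0, 1.0, 0.2)
print(learner.see(kitchen_chair))             # unfamiliar: asks
print(learner.see(kitchen_chair, "chair"))    # Daddy answers; filed away
print(learner.see(similar_chair))             # similar form: "Chair, Daddy, chair!"

Upholstered chairs would at first be new, unmatched entries; give it time.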

That brings me to a question: do you really want this machine
to be so Totally Turing that it grows like a human, learns like
a human, and not only learns new objects, but, like a human born
at age zero, learns how to perceive objects? How much of its
abilities do you want to have wired in, and how much learned?

But back to the main question. I have skipped over a lot of
detail, but I think the outline can in principle be filled in
with technologies we can imagine even if we do not have them.
How much agreement do we have with this scenario? What are
the points of disagreement?

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 29 Jun 87 08:49:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: invertibility as a graded category ?


Harnad writes:

> In responding to Cugini and Brilliant I misinterpreted a point that
> the former had made and the latter reiterated. It's a point that's
> come up before: What if the iconic representation -- the one that's
> supposed to be invertible -- fails to preserve some objective property
> of the sensory projection? ... The reply is that an analog
> representation is only analog in what it preserves, not in what it fails
> to preserve. Icons are hence approximate too. ...
> There is no requirement that all the features of the sensory
> projection be preserved in icons; just that some of them should be --
> enough to subserve our discrimination capacities.
> ... But none of this
> information loss in either sensory projections or icons (or, for that
> matter, categorical representations) compromises groundedness. It just
> means that our representations are doomed to be approximations.

But then why say that icons, but not categorical representations or symbols,
are/must be invertible? (This was *your* original claim, after all.)
Isn't it just a vacuous tautology to claim that icons are invertible
wrt the information they preserve, but not wrt the information they
lose? How could it be otherwise? Aren't even symbols likewise
invertible in that weak sense?

(BTW, I quite agree that the information loss does not compromise
grounding - indeed my very point was that there is nothing especially
scandalous about non-invertible icons.)

Look, there's information loss (many to one mapping) at each stage of the game:

1. distal object

2. sensory projection

3. icons

4. categorical representation

5. symbols


It was you who seemed to claim that there was some special invertibility
between stages 2 and 3 - but now you claim for it invertibility in
only such a vitiated sense as to apply to all the stages.

So a) do you still claim that the transition between 2 and 3 is invertible
in some strong sense which would not be true of, say, [1 to 2] or [3 to 4], and
b) if so, what is that sense?

Perhaps you just want to say that the transition between 2 and 3 is usually
more invertible than the other transitions?
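To make the many-to-one point concrete, here is a toy pipeline (the
stages, features, and losses are invented for illustration): each stage
discards something, two distinct distal objects can end at the same
symbol, and every stage is trivially "invertible" over whatever it
happened to preserve.

def sensory_projection(distal):      # stage 1 -> 2: keeps only shape and brightness
    return (distal["shape"], round(distal["brightness"], 1))

def icon(projection):                # stage 2 -> 3: loses fine brightness detail
    shape, brightness = projection
    return (shape, "light" if brightness > 0.5 else "dark")

def categorical(ic):                 # stage 3 -> 4: keeps only the category feature
    shape, _ = ic
    return "sittable" if shape in ("chair", "stool") else "other"

def symbol(cat):                     # stage 4 -> 5: an arbitrary label
    return {"sittable": "CHAIR", "other": "THING"}[cat]

brown_chair  = {"shape": "chair", "brightness": 0.31, "grain": "oak"}
yellow_stool = {"shape": "stool", "brightness": 0.82, "grain": "pine"}

for obj in (brown_chair, yellow_stool):
    proj = sensory_projection(obj)
    print(obj["shape"], "->", proj, "->", icon(proj), "->", symbol(categorical(icon(proj))))

# Two distinct distal objects end at the same symbol; no stage can be
# inverted back to the wood grain it discarded, yet each stage is
# "invertible" with respect to what it kept.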

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 29 Jun 87 10:35:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: epistemological exception-taking


> > From me:
> >
> > What if there were a few-to-one transformation between the skin-level
> > sensors ...
> > My example was to suppose that #1:
> > a combination of both red and green retinal receptors and #2 a yellow
> > receptor BOTH generated the same iconic yellow.

> From: Neil Hunt <spar!hunt@decwrl.dec.com>
>
> We humans see the world (to a first order at least) through red, green and
> blue receptors. We are thus unable to distinguish between light of a yellow
> frequency, and a mixture of light of red and green frequencies, and we assign
> to them a single token - yellow. However, if our visual apparatus was
> equipped with yellow receptors as well, then these two input stimuli
> would *appear* quite different, as indeed they are. ...

Oh, really? How do you claim to know what the mental effect would be
of a hypothetical visual nerve apparatus? Do you know what it feels
like to be a bat?
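Whatever the answer to the phenomenal question, the receptor arithmetic
in Hunt's quoted claim can at least be sketched numerically. The
Gaussian sensitivity curves, peak wavelengths, and intensities below
are invented for illustration:

import math

def sensitivity(peak):
    return lambda wavelength: math.exp(-((wavelength - peak) / 40.0) ** 2)

R, G, Y = sensitivity(620), sensitivity(540), sensitivity(580)   # peaks in nm

# Solve for a red+green mixture (intensities a, b at 620nm and 540nm) giving
# the same R and G responses as unit-intensity pure yellow light at 580nm.
a11, a12, b1 = R(620), R(540), R(580)
a21, a22, b2 = G(620), G(540), G(580)
det = a11 * a22 - a12 * a21
a = (b1 * a22 - a12 * b2) / det
b = (a11 * b2 - a21 * b1) / det

for name, receptor in (("R", R), ("G", G), ("Y (hypothetical)", Y)):
    pure = receptor(580)
    mixture = a * receptor(620) + b * receptor(540)
    print("%-16s pure yellow = %.3f   red+green mix = %.3f" % (name, pure, mixture))

# R and G responses match by construction, so a three-receptor system gives
# both stimuli one token; the hypothetical fourth receptor responds
# differently to each, so only it could distinguish them.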

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 29 Jun 87 20:53:28 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem: Against Rosch & Wittgenstein


In article <931@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:
> > Why require 100% accuracy in all-or-none categorizing?... I learned
> > recently that I can't categorize chairs with 100% accuracy.
>
> This is a misunderstanding. The "100% accuracy" refers to the
> all-or-none-ness of the kinds of categories in question. The rival
> theories in the Roschian tradition have claimed that many categories
> (including "bird" and "chair") do not have "defining" features. Instead,
> membership is either fuzzy or a matter of degree (i.e., percent)....

OK: once I classify a thing as a chair, there are no two ways about it:
it's a chair. But there can be a stage when I can't decide. I
vacillate: "I think it's a chair." "Are you sure?" "No, I'm not sure,
maybe it's a bed."
I would never seriously say that I'm 40 percent
sure it's a chair, 50 percent sure it's a bed, and 10 percent sure it's an
unfamiliar object I've never seen before.

I think this is in agreement with Harnad when he says:

> Categorization performance (with all-or-none categories) is highly reliable
> (close to 100%) and MEMBERSHIP is 100%. Only speed/ease of categorization and
> typicality ratings are a matter of degree....
> This is not to deny that even all-or-none categorization may encounter
> regions of uncertainty. Since ALL category representations in my model are
> provisional and approximate ..... it is always possible that
> the categorizer will encounter an anomalous instance that he cannot classify
> according to his current representation.....
> ...... This still does not imply that membership is
> fuzzy or a matter of degree.....

So to pass the Total Turing Test, a machine should respond the way a
human does when faced with inadequate or paradoxical sensory data: it
should vacillate (or bluff, as some people do). In the presence of
uncertainty it will not make self-consistent statements about
uncertainty, but uncertain and possibly inconsistent statements about
absolute membership.
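A toy sketch of that behavior (the evidence numbers and the glance
sequence are invented): graded evidence inside, flat all-or-none
claims outside.

def report(chair_evidence, bed_evidence):
    """All-or-none report: no percentages, just a flat claim of membership."""
    if chair_evidence > bed_evidence:
        return "I think it's a chair."
    return "No, I'm not sure, maybe it's a bed."

# Each new glance at the ambiguous object shifts the graded evidence a little.
glances = [(0.52, 0.48), (0.47, 0.51), (0.55, 0.50), (0.46, 0.53)]
for chair_ev, bed_ev in glances:
    print(report(chair_ev, bed_ev))

# The output alternates between flat "chair" and "bed" claims -- uncertain
# and inconsistent statements about absolute membership, never "40% chair".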

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 29 Jun 87 23:34:58 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: The symbol grounding problem: "Fuzzy" categories?


In comp.ai.digest: Laws@STRIPE.SRI.COM (Ken Laws) asks re. "Fuzzy Symbolism":

> Is a mechanical rubber penguin a penguin?... dead...dismembered
> genetically damaged or altered...? When does a penguin embryo become
> a penguin?... I can't unambiguously define the class of penguins, so
> how can I be 100% certain that every penguin is a bird?... and even
> that could change if scientists someday discover incontrovertible
> evidence that penguins are really fish. In short, every category is a
> graded one except for those that we postulate to be exact as part of
> their defining characteristics.

I think you're raising the right questions, but favoring the wrong
answers. My response to this argument for graded or "fuzzy" categories
was that our representations are provisional and approximate. They
converge on the features that will reliably sort members from
nonmembers on the basis of the sample of confusable alternatives
encountered to date. Being always provisional and approximate, they
are always susceptible to revision should the context of confusable
alternatives be widened.
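As a toy sketch of that convergence (my own example, with invented
features, not a claim about the actual model): pick whatever feature
sorts members from nonmembers in the sample to date, and revise it
when the context of confusable alternatives widens.

CANDIDATE_FEATURES = ["flies", "has_feathers", "lays_eggs"]

def find_defining_feature(sample):
    """Return the first feature that sorts every bird from every non-bird in the sample."""
    for feature in CANDIDATE_FEATURES:
        if all(creature[feature] == (label == "bird") for creature, label in sample):
            return feature
    return None

sample = [
    ({"name": "sparrow", "flies": True,  "has_feathers": True,  "lays_eggs": True}, "bird"),
    ({"name": "trout",   "flies": False, "has_feathers": False, "lays_eggs": True}, "not-bird"),
]
print("provisional feature:", find_defining_feature(sample))   # "flies" works so far

# Widening the sample of confusable alternatives forces a revision...
sample.append(({"name": "penguin", "flies": False, "has_feathers": True,
                "lays_eggs": True}, "bird"))
print("revised feature:    ", find_defining_feature(sample))   # now "has_feathers"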

But look at the (not so hidden) essentialism in Ken's query: "how can I
be 100% certain that every penguin is a bird?" I never promised that!
We're not talking about ontological essences here, about the way things
"really are," from the God's Eye" or omniscient point of view! We're
just talking about how organisms and other devices can sort and label
APPEARANCES as accurately as they do, given the feedback and
experiential sample they get. And this sorting and labeling is
provisional, based on approximate representations that pick out
features that reliably handle the confusable alternatives sampled to
date. All science can do is tighten the approximation by widening the
alternatives (experimentally) or strengthening the features
(theoretically).

But provisionally, we do alright, and it's NOT because we sort things
as being what they are as a matter of degree. A penguin is 100% a bird
(on current evidence) -- no more or less a bird than a sparrow. If
tomorrow we find instances that make it better to sort and label them
as fish, then tomorrow's approximation will be better than today's,
but they'll then be 100% fish, and so on.

Note that I'm not denying that there are graded categories; just that
these aren't them. Examples of graded categories are: big,
intelligent, beautiful, feminine, etc.

> You are entitled to such an opinion, of course, but I do not
> accept the position as proven...

(Why opinion, by the way, rather than hypothesis, on the evidence and
logical considerations available? Nor will this hypothesis be proven:
just supported by further evidence and analysis, or else supplanted by
a rival hypothesis that accounts for the evidence better; or the
hypothesis and its supporting arguments may be shown to be incoherent
or imparsimonious...)

> ...We do, of course, sort and categorize objects when forced to do so.
> At the point of observable behavior, then, some kind of noninvertible
> or symbolic categorization has taken place. Such behavior, however,
> is distinct from any of the internal representations that produce it.
> I can carry fuzzy and even conflicting representations until -- and
> often long after -- the behavior is initiated. Even at the instant of
> commitment, my representations need be unambiguous only in the
> implicit sense that one interpretation is momentarily stronger than
> the other -- if, indeed, the choice is not made at random.

I can't follow some of this. Categorization is the performance
capacity under discussion here. ("Force" has nothing to do with it!)
And however accurately and reliably people can actually categorize things,
THAT'S how accurately our models must be able to do it under the same
conditions. If there's successful all-or-none performance, the
representational model must be able to generate it. How can the
behavior be "
distinct from" the representations that produce it?

This is not to say that representations will always be coherent, or
even that incoherent representations can't sometimes generate correct
categorization (up to a point). But I hardly think that the basis of
the bulk of our reliable all-or-none sorting and labeling will turn
out to be just a matter of momentary relative strengths -- or even
chance -- among graded representations. I think probabilistic mechanisms
are more likely to be involved in feature-finding in the training
phase (category learning) than in the steady-state phase, when
a (provisional) performance asymptote has been reached.

> It may also be true that I do reduce some representations to a single
> neural firing or to some other unambiguous event -- e.g., when storing
> a memory. I find this unlikely as a general model. Coarse coding,
> graded or frequency encodings, and widespread activation seem better
> models of what's going on. Symbolic reasoning exists in pure form
> only on the printed page; our mental manipulation even of abstract
> symbols is carried out with fuzzy reasoning apparatus.

Some of this sounds like implementational considerations rather than
representational ones. The question was: Do all-or-none categories
(such as "
bird") have "defining" features that can be used to sort
members from nonmembers at the level of accuracy (~100%) with which we
sort? However they are coded, I claim that those features MUST exist
in the inputs and must be detected and used by the categorizer. A
penguin is not a bird as a matter of degree, and the features that
reliably assign it to "bird" are not graded. Nor is "bird" a fuzzy
category such as "birdlike." And, yes, symbolic representations are
likely to be more apodictic (i.e., categorical) than nonsymbolic ones.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************
