AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 169 

Today's Topics:
Theory - "Fuzzy" Categories?

----------------------------------------------------------------------

Date: 2 Jul 87 01:44:00 GMT
From: ctnews!pyramid!prls!philabs!pwa-b!mmintl!franka@unix.sri.com
(Frank Adams)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <936@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
|The question was: Do all-or-none categories (such as "bird") have "defining"
|features that can be used to sort members from nonmembers at the level of
|accuracy (~100%) with which we sort? However they are coded, I claim that
|those features MUST exist in the inputs and must be detected and used by the
|categorizer. A penguin is not a bird as a matter of degree, and the features
|that reliably assign it to "bird" are not graded.

I don't see how this follows. It is quite possible to make all-or-none
judgements based on graded features. Thermostats, for example, do it all
the time. People do, too. The examples that come to mind as obviously
belonging in this category are all judgements about what action to take
based on such features, not acts of categorization. But then, we don't
understand how we categorize.

But to take an example of categorizing based on a graded feature: consider
a typical, unadorned, wooden kitchen chair. We have no problem categorizing
this as a "chair". Consider the same object, with no back. This is
clearly categorized as a "stool", and not a "chair". Now vary the size of
the back. With a one inch back, the object is clearly still a "stool"; with
a ten inch back, it is clearly a "chair"; somewhere in between is an
ambiguous point.

I would assert that we *do*, in fact, make "all-or-none" type distinctions
based precisely on graded distinctions. We have arbitrary (though vague)
cutoff points where we make the distinction; and those cutoff points are
chosen in such a way that ambiguous cases are rare to non-existent in our
experience[1].

In short, I see nothing about "all-or-none" categories which is not
explainable by arbitrary cutoffs of graded sensory data.
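
As a minimal sketch of what such an arbitrary cutoff might look like (in
Python, with invented numbers -- the cutoff value and the width of the
ambiguous band are assumptions for illustration, not measurements):

    # Sketch: an all-or-none label from one graded feature.  The cutoff
    # and the ambiguous band are arbitrary, chosen so that borderline
    # cases are rare in practice.
    def categorize_seat(back_height_inches):
        CUTOFF = 5.0   # assumed threshold between "stool" and "chair"
        BAND = 1.5     # half-width of the rarely-encountered grey zone
        if back_height_inches < CUTOFF - BAND:
            return "stool"
        if back_height_inches > CUTOFF + BAND:
            return "chair"
        return "ambiguous"   # the vague region between the two

    print(categorize_seat(1.0))    # stool
    print(categorize_seat(10.0))   # chair
    print(categorize_seat(5.5))    # ambiguous -- the in-between case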

---------------
[1] There are some categories where this strategy does not work. Colors are
a good example of this -- they vary over all of their range, with no very
rare points in it. In this case, we use instead the strategy of large
overlapping ranges -- two people may disagree on whether a color should be
described as "blue" or "green", but both will accept "blue-green" as a
description. The same underlying strategy applies: avoid borderline
situations.
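
A tiny sketch of that overlapping-ranges strategy (the hue boundaries
below are invented for illustration only):

    # Sketch: overlapping label ranges for a graded quantity (hue).
    # A borderline hue falls in the overlap and gets the compound label,
    # which both observers can accept.
    def color_labels(hue_degrees):
        ranges = {                      # deliberately overlapping
            "green":      (90.0, 170.0),
            "blue-green": (150.0, 200.0),
            "blue":       (180.0, 260.0),
        }
        return [name for name, (lo, hi) in ranges.items()
                if lo <= hue_degrees <= hi]

    print(color_labels(160.0))   # ['green', 'blue-green']
    print(color_labels(230.0))   # ['blue']
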
--

Frank Adams ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate 52 Oakland Ave North E. Hartford, CT 06108

------------------------------

Date: 3 Jul 87 12:43:39 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <958@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> On ailist cugini@icst-ecf.arpa writes:
> > why say that icons, but not categorical representations or symbols
> > are/must be invertible? Isn't it just a vacuous tautology to claim
> > that icons are invertible wrt to the information they preserve, but
> > not wrt the information they lose?... there's information loss (many
> > to one mapping) at each stage of the game ...

In Harnad's response he does not answer the question "why?" He
only repeats the statement with reference to his own model.

Harnad probably has either a real problem or a contribution to
the solution of one. But when he writes about it, the verbal
problems conceal it, because he insists on using symbols that
are neither grounded nor consensual. We make no progress unless
we learn what his terms mean, and either use them or avoid them.

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 3 Jul 87 19:26:40 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?


In Article 176-8 of comp.cog-eng: franka@mmintl.UUCP (Frank Adams)
of Multimate International, E. Hartford, CT. writes:

> I don't believe there are any truly "all-or-none" categories. There are
> always, at least potentially, ambiguous cases... no "100% accuracy
> every time"
... how do you know that "graded" categories are less
> fundamental than the other kind?

On the face of it, this sounds self-contradictory, since you state
that you don't believe "the other kind" exists. But let's talk common sense.
Most of our object categories are indeed all-or-none, not graded. A
penguin is not a bird as a matter of degree. It's a bird, period. And
if we're capable of making that judgment reliably and categorically,
then there must be something about our transactions with penguins that
allows us to do so. In the case of sensory categories, I'm claiming
that a sufficient set of sensory features is what allows us to make
reliable all-or-none judgments; and in the case of higher-order
categories, I claim they are grounded in the sensory ones (and their
features).

I don't deny that graded categories exist too (e.g., "big," "smart"), but
those are not the ones under consideration here. And, yes, I
hypothesize that all-or-none categories are more fundamental in the
problem of categorization and its underlying mechanisms than graded
categories. I also do not deny that regions of uncertainty (and even
arbitrariness) -- natural and contrived -- exist, but I do not think that
those regions are representative of the mechanisms underlying successful
categorization.

The book under discussion ("Categorical Perception: The Groundwork of
Cognition") is concerned with the problem of how graded sensory continua
become segmented into bounded all-or-none categories (e.g., colors,
semitones). This is accomplished by establishing upper and lower
thresholds for regions of the continuum. These thresholds, I must
point out, are FEATURES, and they are detected by feature-detectors.
The rest is a matter of grain: If you are speaking at the level of
resolution of our sensory acuity (the "jnd" or just-noticeable-difference),
then there is always a region of uncertainty at the border of a category,
dependent on the accuracy and sensitivity of the threshold-detector.
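
As a minimal sketch of that segmentation (Python; the boundary values
and the jnd-sized uncertainty band are purely illustrative assumptions,
not figures from the book):

    # Sketch: upper and lower thresholds segment a sensory continuum into
    # bounded categories, with a jnd-sized region of uncertainty wherever
    # a value falls too close to a threshold.
    JND = 2.0   # assumed resolution of the threshold-detector
    REGIONS = [("red", 0.0, 30.0), ("orange", 30.0, 60.0),
               ("yellow", 60.0, 90.0)]

    def identify(hue):
        for name, lower, upper in REGIONS:
            if lower + JND <= hue <= upper - JND:
                return name          # clearly inside the bounded region
        return "uncertain"           # within a jnd of some threshold

    print(identify(45.0))   # orange
    print(identify(31.0))   # uncertain: too close to the red/orange border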

But discrimination grain is not the right level of analysis for
questions about higher-order sensory categories, and all-or-none
categorization in general. The case for the putative "gradedness" of
"penguin"'s membership in the category "bird" is surely not being
based on the limits of sensory acuity. If it is, I'll concede at once,
and add that that sort of gradedness is trivial; the categorization
problem is concerned with identification grain, not discrimination grain.
All categories will of course be fuzzy at the limits of our sensory
resolution capacity. My own grounding hypothesis BEGINS with
bounded sensory categories (modulo threshold uncertainty) and attempts
to ground the rest of our category hierarchy bottom-up on those.

Finally, as I've stressed in responses to others, there's one other
form of category uncertainty I'm quite prepared to concede, but that
likewise fails to imply that category membership is a matter of
degree: All categories -- true graded ones as well as all-or-none ones
-- are provisional and approximate, relative to the context of
interconfusable members and nonmembers that have been sampled to date. If
the sample ever turns out to have been nonrepresentative, the feature-set that
was sufficient to generate successful sorting in the old context must
be revised and updated to handle the new, wider context. Anomalies and
ambiguities that had never occurred before must now be handled. But what
happens next (if all-or-none sorting performance can be successfully
re-attained at all) is just the same as with the initial category learning
in the old context: A set of features must be found that is sufficient to
subserve correct performance in the extended context. The approximation
must be tightened. This open-endedness of all of our categories, however, is
really just a symptom of inductive risk rather than of graded representations.
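
A toy sketch of that revision process (the items and features below are
invented; nothing here is meant as a real feature inventory):

    # Sketch: a provisional feature sorts the old sample; a wider sample
    # breaks the approximation; a new sufficient feature must be found.
    FEATURES = {
        "robin":   {"flies", "feathers", "beak"},
        "sparrow": {"flies", "feathers", "beak"},
        "dog":     {"fur"},
        "trout":   {"fins", "scales"},
        "penguin": {"feathers", "beak", "swims"},   # the later anomaly
    }
    OLD_SAMPLE = {"robin": True, "sparrow": True, "dog": False, "trout": False}

    def sufficient(feature, sample):
        # A feature is sufficient if it sorts the sample all-or-none.
        return all((feature in FEATURES[item]) == is_bird
                   for item, is_bird in sample.items())

    print(sufficient("flies", OLD_SAMPLE))      # True: works in old context
    NEW_SAMPLE = dict(OLD_SAMPLE, penguin=True)
    print(sufficient("flies", NEW_SAMPLE))      # False: approximation fails
    print(sufficient("feathers", NEW_SAMPLE))   # True: tightened feature-set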

> "Analog" means "invertible". The invertible properties of a
> representation are those properties which it preserves...[This
> sounds] tautologically true of *all* representations.

For the reply to this, see my response to Cugini, whose criticism you
cite. Sensory icons need only be invertible with the discriminable properties
of the sensory projection. There is no circularity in this. And in a dedicated
system invertibility at various stages may well be a matter of degree, but
this has nothing to do with the issue of graded/nongraded category membership,
which is much more concerned with selective NONinvertibility.

> It is quite possible to make all-or-none judgements based on graded
> features [e.g., thermostats]

Apart from (1) thresholds (which are features, and which I discussed
earlier), (2) probabilistic features so robust as to be effectively
all-or-none, and (3) gerrymandered examples (usually playing on the
finiteness of the cases sampled, and the underdetermination of the
winning feature set), can you give examples?

> "chair"... with no back... [is a] "stool"... Now vary the size
> of the back

The linguist Labov, with examples such as cup/bowl, specialized in
finding graded regions for seemingly all-or-none categories.
Categorization is always a context-dependent, "compared-to-what"
task. Features must reliably sort the members from the nonmembers
they can be confused with. Sometimes nature cooperates and gives us
natural discontinuities (horses could have graded continuously into
zebras). Where she does not, we have only one recourse left: an
all-or-none sensory threshold at some point in the continuum. One can
always generate a real or hypothetical continuum that would foil our
current feature-detectors and necessitate a threshold-detector. Such
cases are only interesting if they are representative of the actual
context of confusable alternatives that our category representation
must resolve. Otherwise they are not informative about our actual
current (provisional) feature-set.

> I see nothing about "all-or-none" categories which is not explainable
> by arbitrary cutoffs of graded sensory data... [and] avoid[ing]
> borderline situations.

Neither do I. (Most feature-detection problems, by the way, do not
arise from the need to place thresholds along true continua, but from
the problem of underdetermination: there are so many features that it
is hard to find a set that will reliably sort the confusable
alternatives into their proper all-or-none categories.)
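
A toy sketch of that underdetermination point (the sample and its binary
features are invented): with many candidate features and a small sample,
several different feature-sets sort perfectly, and the sample alone does
not pick the winner.

    # Sketch: brute-force search for feature subsets that perfectly sort
    # a small labelled sample -- typically many subsets succeed.
    from itertools import combinations

    SAMPLE = [                     # (binary feature vector, category)
        ((1, 0, 1, 1), "bird"),
        ((1, 1, 1, 0), "bird"),
        ((0, 0, 1, 1), "not-bird"),
        ((0, 1, 0, 0), "not-bird"),
    ]

    def sorts_perfectly(subset):
        seen = {}
        for feats, label in SAMPLE:
            key = tuple(feats[i] for i in subset)
            if seen.setdefault(key, label) != label:
                return False       # same projection, different categories
        return True

    n = len(SAMPLE[0][0])
    winners = [s for r in range(1, n + 1)
               for s in combinations(range(n), r) if sorts_perfectly(s)]
    print(winners)   # many "sufficient" subsets; the winner is underdetermined
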
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 00:51:01 GMT
From: sher@cs.rochester.edu (David Sher)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?

In article <967@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>Most of our object categories are indeed all-or-none, not graded. A
>penguin is not a bird as a matter of degree. It's a bird, period.

Just for the record, is this an offhand statement, or are you speaking
as an expert when you say most of our categories are all-or-none? Do
you have some psychology experiments that measure the size of human
category spaces and, using a metric on them, show that most categories
are of this form? Can I quote you on this? Personally, I have trouble
imagining how to test such a claim, but psychologists are clever
fellows.
--
-David Sher
sher@rochester
{ seismo , allegra }!rochester!sher

------------------------------

Date: 5 Jul 87 04:52:30 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem: "Fuzzy" categories?


In Article 185 of comp.cog-eng sher@rochester.arpa (David Sher) of U of
Rochester, CS Dept, Rochester, NY responded as follows to my claim that
"Most of our object categories are indeed all-or-none, not graded. A penguin
is not a bird as a matter of degree. It's a bird, period."
--

> Personally I have trouble imagining how to test such a claim...

Try sampling concrete nouns in a dictionary.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 05:29:02 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


In Article 184 of comp.cog-eng: adam@gec-mi-at.co.uk (Adam Quantrill)
of Marconi Instruments Ltd., St. Albans, UK writes:

> It seems to me that the Symbol Grounding problem is a red herring.
> If I took a partially self-learning program and data (P & D) that had
> learnt from a computer with 'sense organs', and ran it on a computer
> without, would the program's output become symbolically ungrounded?...
> [or] if I myself wrote P & D without running it on a computer at all?

This begs two of the central questions that have been raised in
this discussion: (1) Can one speak of grounding in a toy device (i.e.,
a device with performance capacities less than those needed to pass
the Total Turing Test)? (2) Could the TTT be passed by just a symbol
manipulating module connected to transducers and effectors? If a
device that could pass the TTT were cut off from its transducers, it
would be like the philosophers' "brain in a vat" -- which is not
obviously a digital computer running programs.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 5 Jul 87 02:47:25 GMT
From: ihnp4!twitch!homxb!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <605@gec-mi-at.co.uk>, adam@gec-mi-at.co.uk (Adam Quantrill) writes:
> It seems to me that the Symbol Grounding problem is a red herring.

As one who was drawn into a problem that is not my own, let me
try answering that disinterestedly. To begin with, a "red
herring" is something drawn across the trail that distracts the
pursuer from the real goal. Would Adam tell us what his real
goal is?

Actually, my own real goal, from which I was distracted by the
symbol grounding problem, was an expert system that would (like
Adam's last example) ground its symbols only in terminal I/O.
But that's a red herring in the symbol grounding problem.

> ..... If I took a partially self-learning program and data (P & D)
> that had learnt from a computer with 'sense organs', and ran it on a
> computer without, would the program's output become symbolically
> ungrounded?

No, because the symbolic data was (were?) learned from sensory
data to begin with - like a sighted person who became blind.

> Similarly, if I myself wrote P & D without running it on a computer
> at all, [and came] up with identical P & D by analysis. Does that
> make the original P & D running on the computer with
> 'sense organs' symbolically ungrounded?

No, as long as the original program learned its symbolic data
from its own sensory data, not by having them defined by a
person in terms of his or her sensory data.

> A computer can always interact via the keyboard & terminal
> screen (if those are the only 'sense organs'), grounding its
> internal symbols via people who react to the output, and provide
> further stimulus.

That's less challenging and less useful than true symbol
grounding. One problem that requires symbol grounding (more
useful and less ambitious than the Total Turing Test) is a
seeing-eye robot: a machine with artificial vision that could
guide a blind person by giving and taking verbal instructions.
It might use a Braille keyboard instead of speech, but the
"terminal I/O" must be "grounded" in visual data from, and
constructive interaction with, the tangible world. The robot
could learn words for its visual data by talking to people who
could see, but it would still have to relate the verbal symbols
to visual data, and give meaning to the symbols in terms of its
ultimate goal (keeping the blind person out of trouble).

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************
