AIList Digest             Monday, 6 Jul 1987      Volume 5 : Issue 168 

Today's Topics:
Policy - Hard Limit on Quotations,
Theory - The Symbol Grounding Problem & Against Rosch and Wittgenstein

----------------------------------------------------------------------

Date: Thu 2 Jul 87 09:41:54-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Subject: Hard Limit on Quotations

The "quotation problem" has become so prevalent across all of the
Usenet newsgroups that the gateway now rejects any message with more
quoted text than new text. If a message is rejected for this reason,
I am unlikely to clean it up and resend.

As I indicated last week, I think we could get along just fine
with more "I say ..." and less "You said ...". Paraphrases are
fine and even beneficial, but trying to reestablish the exact
context of each comment is not worth the hassle to the general
readership. Perhaps some of the hair splitting could be carried
on through private mail, with occasional reports to the list on
points of agreement and disagreement. Discussions of perception
and categorization are appropriate for AIList, but we cannot give
unlimited time and attention to any one topic.

I've engaged in "interpolated debate" myself, and have enjoyed
this characteristic mode of net discussions. I won't object to
occasional use, but I do get very tired of seeing the same text
quoted in message after message. I used to edit such repetitions
out of the digest, but I can't manage it with this traffic volume.
Please keep in mind that this is a broadcast channel and that many
readers have slow terminals or have to pay intercontinental
transmission fees. Words are money.

It seems that a consistent philosophy cannot be put forth in less
than a full book, or at least a BBS article, and that meaningful
rebuttals require similar length. We have been trying to cram this
through a linear channel, with swirls of debate spinning off from each
paragraph [yes, I know that's a contradiction], and there is no
evidence of convergence. Let's try to slow down for a while.

I would also recommend that messages be kept to a single topic,
even if that means (initially) that a full response to a previous
message must be split into parts. Separate discussion of grounding,
categorization, perception, etc., would be more palatable than the
current indivisible stream. I would like to sort the discussions,
if only for ease of meaningful retrieval, but can't do so if they
all carry the same subject line and mix of topics.

-- Ken

------------------------------

Date: Thu, 2 Jul 87 09:37:21 EDT
From: Alex Kass <kass-alex@YALE.ARPA>
Subject: AIList Digest V5 #163


Can't we bag this damn symbol grounding discussion already?

If it *must* continue, how about instituting a symbol grounding
newsgroup and freeing the majority of us poor AIList readers from the
burden of flipping past the symbol grounding stuff every morning?


-Alex

ARPA: Kass@yale
UUCP: {decvax,linus,seismo}!yale!kass
BITNET: kass@yalecs
US: Alex Kass
Yale University Computer Science Department
New Haven, CT 06520

------------------------------

Date: 2 Jul 87 05:19:05 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


smoliar@vaxa.isi.edu (Stephen Smoliar)
of the Information Sciences Institute writes:

> Consider the holographic model proposed by Karl Pribram in LANGUAGES
> OF THE BRAIN... as an alternative to [M.B. Brilliant's] symbol
> manipulation scenario.

Besides being unimplemented and hence untested in what they can and can't
do, holographic representations seem to inherit the same handicap as
all iconic representations: Being unique to each input and blending
continuously into one another, how can holograms generate
categorization rather than merely similarity gradients (in the hard
cases, where obvious natural gaps in the input variation don't solve
the problem for you a priori)? What seems necessary is active
feature-selection, based on feedback from success and failure in attempts
to learn to sort and label correctly, not merely passive filtering
based on natural similarities in the input.
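
To make the contrast concrete, here is a minimal sketch (in Python,
with all names invented for illustration) of such feedback-driven
feature selection: search for a feature subset that sorts the labeled
samples seen so far without error. It illustrates the logic only, not
any particular mechanism:

    from itertools import combinations

    def find_sufficient_features(samples, labels, n_features):
        # Search, smallest subsets first, for features whose values
        # alone sort every labeled sample seen so far without error.
        for size in range(1, n_features + 1):
            for subset in combinations(range(n_features), size):
                seen = {}
                consistent = True
                for vec, lab in zip(samples, labels):
                    key = tuple(vec[i] for i in subset)
                    if seen.setdefault(key, lab) != lab:
                        consistent = False   # same reduced input, two labels
                        break
                if consistent:
                    return subset   # provisional: a new sample may break it
        return None                 # no subset sorts the current sample

    # find_sufficient_features([(1,0,1), (1,1,0), (0,0,1)],
    #                          ["bird", "bird", "fish"], 3)  ->  (0,)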

> [A] difficulty seems to be in what it means to file something away if
> one's memory is simply one of experiences.

Episodic memory -- rote memory for input experiences -- has the same
liability as any purely iconic approach: It can't generate category
boundaries where there is significant interconfusability among
categories of episodes.

> Perhaps the difficulty is that the mind really doesn't want to
> assign a symbol to every experience immediately.

That's right. Maybe it's *categories* of experience that must first be
selectively assigned names, not each raw episode.

> Where does the symbol's name come from? How is the symbol actually
> "bound" to what it retrieves?

That's the categorization problem.

> The big unanswered question...[with respect to connectionism]
> would appear to be: will [it] all... scale upward?

Connectionism is one of the candidates for the feature-learning
mechanism. That it's (i) nonsymbolic, that it (ii) learns, and that it
(iii) uses the same general statistical algorithm across problem-types
(i.e., that it has generality rather than being ad hoc, like pure
symbolic AI) are connectionism's pluses. (That it's brainlike is not a
plus; nor is it true on current evidence, nor even relevant at this stage.)
But the real question is indeed: How much can it really do (i.e., will it
scale up)?
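
For concreteness, here is a toy version (Python, illustrative only) of
the kind of nonsymbolic statistical learner at issue: a single-layer
perceptron trained from labeled feedback. The scaling question is
exactly whether such devices can get beyond the simple, linearly
separable cases this one handles:

    def train_perceptron(samples, labels, epochs=100, lr=0.1):
        # samples: lists of numbers; labels: 0 or 1.
        n = len(samples[0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            mistakes = 0
            for x, y in zip(samples, labels):
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                if out != y:
                    mistakes += 1
                    for i in range(n):
                        w[i] += lr * (y - out) * x[i]   # error-driven update
                    b += lr * (y - out)
            if mistakes == 0:
                break        # all current samples labeled correctly
        return w, b          # weights play the role of selected features
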
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 2 Jul 87 04:36:37 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem: Against Rosch &
Wittgenstein


dgordon@teknowledge-vaxc.ARPA (Dan Gordon)
of Teknowledge, Inc., Palo Alto CA writes:

> finding an objective basis for a performance and getting a device to
> do it given the same inputs are two different things. We may be able
> to find an objective basis for a performance but be unable...to get a
> device to exhibit the same performance. And, I suppose, the converse
> is true: we may be able to get a device to mimic a performance without
> understanding the objective basis for the model

I agree with part of this. J.J. Gibson argued that the objective basis of much
of our sensorimotor performance is in stimulus invariants, but this
does not explain how we get a device (like ourselves) to find and use
those invariants and thereby generate the performance. I also agree that a
device (e.g., a connectionist network) may generate a performance
without our understanding quite how it does it (apart from the general
statistical algorithm it's using, in the case of nets). But the
point I am making is neither of these. It concerns whether performance
(correct all-or-none categorization) can be generated without an
objective basis (in the form of "defining" features) (a) existing and
(b) being used by any device that successfully generates the
performance. Whether or not we know what the objective basis is
and how it's used is another matter.

> There may in fact be categorization performances that a) do not use
> a set of underlying features; b) have an objective basis which is not
> feature-driven; and c) can only be simulated (in the strong sense) by
> a device which likewise does not use features. This is one of the
> central prongs of Wittgenstein's attack on the positivist approach to
> language, and although I am not completely convinced by his criticisms,
> I haven't run across any very convincing rejoinder.

Let's say I'm trying to provide the requisite rejoinder (in the special case of
all-or-none categorization, which is not unrelated to the problems of
language: naming and description). Wittgenstein's arguments were not governed
by a thoroughly modern constraint that has arisen from the possibility of
computer simulation and cognitive modeling. He was introspecting on
what the features defining, say, "games" might be, and he failed to
find a necessary and sufficient set, so he said there wasn't one. If
he had instead asked: "How, in principle, could a device categorize
'games' and 'nongames' successfully in every instance?" he would have had
to conclude that the inputs must provide an objective basis
which the device must find and use. Whether or not the device can
introspect and report what the objective basis is is another matter.

Another red herring in Wittgenstein's "family resemblance" metaphor was
the issue of negative and disjunctive features. Not-F is a perfectly good
feature. So is Not-F & Not-G, whose negation, by De Morgan's law, quite
naturally yields the disjunctive feature F-or-G. None of this is
tautologous. It just shows
up a certain arbitrary myopia there has been about what a "feature" is.
There's absolutely no reason to restrict "features" to monadic,
conjunctive features that subjects can report by introspection. The
problem in principle is whether there are any logical (and nonmagical)
alternatives to a feature-set sufficient to sort the confusable
alternatives correctly. I would argue that -- apart from contrived,
gerrymandered cases that no one would want to argue formed the real
basis of our ability to categorize -- there are none.
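
The point is mechanical. A toy Python sketch, with the detectable
properties invented purely for illustration:

    def has_ball(x): return "ball" in x     # hypothetical property F
    def has_board(x): return "board" in x   # hypothetical property G

    def not_F_and_not_G(x):                 # a perfectly good feature
        return (not has_ball(x)) and (not has_board(x))

    def F_or_G(x):                          # its negation, by De Morgan's law
        return not not_F_and_not_G(x)

    items = [{"ball"}, {"board"}, {"dice"}]
    print([F_or_G(i) for i in items])       # [True, True, False]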

Finally, in the special case of categorization, the criterion of "defining"
features also turns out to be a red herring. According to my own model,
categorization is always provisional and context-dependent (it depends on
what's needed to successfully sort the confusable alternatives sampled to date).
Hence an exhaustive "definition," good till doomsday and formulated from the
God's-eye viewpoint is not at issue, only an approximation that works now, and
can be revised and tightened if the context is ever widened by further
confusable alternatives that the current feature set would not be able to
sort correctly. The conflation of (1) features sufficient to generate the
current provisional (but successful) approximation and (2) some nebulous
"eternal," ontologically exact "defining" set (which I agree does not exist,
and may not even make sense, since categorization is always a relative,
"compared-to-what?" matter) has led to a multitude of spurious
misunderstandings -- foremost among them being the misconception that
our categories are all graded or fuzzy.
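
Schematically (hypothetical data, in the spirit of the feature-search
sketch earlier in this issue):

    # Two members of the sample so far, reduced to two detectable features:
    samples = [(1, 0), (0, 1)]
    labels = ["fish", "bird"]
    features = (0,)          # feature 0 alone sorts the sample so far

    # A confusable newcomer widens the context:
    samples.append((1, 1)); labels.append("bird")
    # Feature 0 now assigns the value 1 to both a "fish" and a "bird",
    # so the provisional set must be revised -- here to (0, 1), which
    # again sorts every sample seen to date.
    features = (0, 1)
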
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 2 Jul 87 15:51:40 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


On AIList, cugini@icst-ecf.arpa writes:

> why say that icons, but not categorical representations or symbols
> are/must be invertible? Isn't it just a vacuous tautology to claim
> that icons are invertible wrt to the information they preserve, but
> not wrt the information they lose?... there's information loss (many
> to one mapping) at each stage of the game: 1. distal object...
> 2. sensory projection... 3. icons... 4. categorical representation...
> 5. symbols... do you still claim that the transition between 2
> and 3 is invertible in some strong sense which would not be true of,
> say, [1 to 2] or [3 to 4], and if so, what is that sense?... Perhaps
> you just want to say that the transition between 2 and 3 is usually
> more invertible than the other transitions [i.e., invertibility as a
> graded category]?

[In keeping with Ken Laws' recommendation about minimizing quotation, I have
compressed this query as much as I could to make my reply intelligible.]

Iconic representations (IRs) must perform a very different function from
categorical representations (CRs) or symbolic representations (SRs).
In my model, IRs only subserve relative discrimination, similarity
judgment and sensory-sensory and sensory-motor matching. For all of
these kinds of task, traces of the sensory projection are needed for
purposes of relative comparison and matching. An analog of the sensory
projection *in the properties that are discriminable to the organism*
is my candidate for the kind of representation that will do the job
(i.e., generate the performance). There is no question of preserving
in the IR properties that are *not* discriminable to the organism.

As has been discussed before, there are two ways that IRs could in
principle be invertible (with the discriminable properties of the
sensory projection): by remaining structurally 1:1 with it or by going
into symbols via A/D and an encryption and decryption transformation in a
dedicated (hard-wired) system. I hypothesize that structural copies are
much more economical than dedicated symbols for generating discrimination
performance (and there is evidence that they are what the nervous system
actually uses). But in principle, you can get invertibility and generate
successful discrimination performance either way.
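
Schematically, in Python, with hypothetical numbers standing in for the
discriminable properties of the sensory projection:

    projection = [0.12, 0.87, 0.33]   # discriminable properties only

    icon = list(projection)           # route 1: structural 1:1 copy

    def encode(p, levels=1024):       # route 2: A/D in a dedicated system...
        return [round(v * levels) for v in p]

    def decode(c, levels=1024):       # ...with its hard-wired decoder
        return [v / levels for v in c]

    # Either route inverts: the discriminable properties come back out
    # (exactly, or to within the quantization resolution).
    restored = decode(encode(projection))
    assert all(abs(a - b) <= 0.5 / 1024 for a, b in zip(projection, restored))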

CRs need not -- indeed cannot -- be invertible with the sensory
projection because they must selectively discard all features except
those that are sufficient to guide successful categorization
performance (i.e., sorting and labeling, identification). Categorical
feature-detectors must discard most of the discriminable properties preserved
in IRs and selectively preserve only the invariant properties shared
by all members of a category that reliably distinguish them from
nonmembers. I have indicated, though, that this representation is
still nonsymbolic; the IR to CR transformation is many-to-few, but it
continues to be invertible in the invariant properties, hence it is
really "micro-iconic." It does not invert from the representation to
the sensory projection, but from the representation to invariant features of
the category. (You can call this invertibility a matter of degree if
you like, but I don't think it's very informative. The important
difference is functional: What it takes to generate discrimination
performance and what it takes to generate categorization
performance.)
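
Schematically (with the invariant indices assumed purely for
illustration):

    INVARIANT = (0, 2)          # suppose properties 0 and 2 are invariant

    def to_CR(icon):            # many-to-few, "micro-iconic"
        return tuple(icon[i] for i in INVARIANT)

    print(to_CR([0.12, 0.87, 0.33]))   # (0.12, 0.33): invariants preserved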

Finally, whatever invertibility SRs have is entirely parasitic on the
IRs and CRs in which they are grounded, because the elementary SRs out
of which the composite ones are put together are simply the names of
the categories that the CRs pick out. That's the whole point of this
grounding proposal.

I hope this explains what is invertible and why. (I do not understand your
question about the "invertibility" of the sensory projection to the distal
object, since the locus of that transformation is outside the head and hence
cannot be part of the internal representation that cognitive modeling is
concerned with.)

--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 2 Jul 87 01:19:35 GMT
From: ctnews!pyramid!prls!philabs!pwa-b!mmintl!franka@unix.sri.com
(Frank Adams)
Subject: Re: The symbol grounding problem: Correction re.
Approximationism

In article <923@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
|In responding to Cugini and Brilliant I misinterpreted a point that
|the former had made and the latter reiterated. It's a point that's
|come up before: What if the iconic representation -- the one that's
|supposed to be invertible -- fails to preserve some objective property
|of the sensory projection? For example, what if yellow and blue at the
|receptor go into green at the icon? The reply is that an analog
|representation is only analog in what it preserves, not in what it fails
|to preserve.

I'm afraid when I parse this, using the definitions Harnad uses, it comes
out as tautologically true of *all* representations.

"Analog" means "invertible". The invertible properties of a representation
are those properties which it preserves. Is there some strange meaning of
"preserve" being used here? Otherwise, I don't see how this statement has
any meaning.
--

Frank Adams ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate 52 Oakland Ave North E. Hartford, CT 06108

------------------------------

Date: 2 Jul 87 01:07:00 GMT
From: ctnews!pyramid!prls!philabs!pwa-b!mmintl!franka@unix.sri.com
(Frank Adams)
Subject: Re: The symbol grounding problem

In article <917@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
|Finally, and perhaps most important: In bypassing the problem of
|categorization capacity itself -- i.e., the problem of how devices
|manage to categorize as correctly and successfully as they do, given
|the inputs they have encountered -- in favor of its fine tuning, this
|line of research has unhelpfully blurred the distinction between the
|following: (a) the many all-or-none categories that are the real burden
|for an explanatory theory of categorization (a penguin, after all, be it
|ever so atypical a bird, and be it ever so time-consuming for us to judge
|that it is indeed a bird, is, after all, indeed a bird, and we know
|it, and can say so, with 100% accuracy every time, irrespective of
|whether we can successfully introspect what features we are using to
|say so) and (b) true "graded" categories such as "big," "intelligent,"
|etc. Let's face the all-or-none problem before we get fancy...

I don't believe there are any truly "all-or-none" categories. There are
always, at least potentially, ambiguous cases. There is no "100% accuracy
every time," and trying to theorize as though there were is likely to lead
to problems.

Second, and perhaps more to the point, how do you know that "graded"
categories are less fundamental than the other kind? Maybe it's the other
way around. Maybe we should try to understand graded
categories first, before we get fancy with the other kind. I'm not saying
this is the case; but until we actually have an accepted theory of
categorization, we won't know what the simplest route is to get there.
--

Frank Adams ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate 52 Oakland Ave North E. Hartford, CT 06108

------------------------------

End of AIList Digest
********************
