AIList Digest            Sunday, 21 Jun 1987      Volume 5 : Issue 154 

Today's Topics:
Theory - Symbol Grounding and Physical Invertibility

----------------------------------------------------------------------

Date: 16 Jun 87 1559 PDT
From: John McCarthy <JMC@SAIL.STANFORD.EDU>
Subject: Symbol Grounding Problem and Disputes

[In reply to message sent Mon 15 Jun 1987 23:23-PDT.]

This dispute strikes me as unnecessarily longwinded. I imagine that the
alleged point at issue and a few of the positions taken could be
summarized for the benefit of those of us whose subjective probability
that there is a real point at issue is too low to motivate studying the
entire discussion but high enough to motivate reading a summary.

------------------------------

Date: 16 Jun 87 17:41:50 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem (Reply to Ken Laws on
ailist)

In article <849@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> .... Invertibility could fail to capture the standard A/D distinction,
> but may be important in the special case of mind-modeling. Or it could
> turn out not to be useful at all....

So what do you think is essential: (A) literally analog transformation,
(B) invertibility, or (C) preservation of significant relational
functions?

> ..... what I've said about the grounding problem and the role
> of nonsymbolic representations (analog and categorical) would stand
> independently of my particular criterion for analog; substituting a more
> standard one leaves just about all of the argument intact.....

Where does that argument stand now? Can we restate it in terms whose
definitions we all agree on?

> ..... to get the requisite causality I'm looking
> for, the information must be interpretation-independent. Physical
> invertibility seems to give you that......

I think invertibility is too strong. It is sufficient, but not
necessary, for human-style information-processing. Real people forget
awesome amounts of detail, we misunderstand each other (our symbol
groundings are not fully invertible), and we thereby achieve levels of
communication that often, but not always, satisfy us.

Do you still say we only need transformations that are analog
(invertible) with respect to those features for which they are analog
(invertible)? That amounts to limited invertibility, and the next
essential step would be to identify the features that need
invertibility, as distinct from those that can be thrown away.

> Ken Laws <Laws@Stripe.SRI.Com> on ailist@Stripe.SRI.Com writes:
> > ... I am sure that methods for decoding both discrete and
> > continuous information in continuous signals are well studied.
>
> I would be interested to hear from those who are familiar with such work.
> It may be that some of it is relevant to cognitive and neural modeling
> and even the symbol grounding problems under discussion here.

I'm not up to date on these methods. But if you want to get responses
from experts, it might be well to be more specific. For monaural
sound, decoding can be done with Fourier methods that are in principle
continuous. For monocular vision, Fourier methods are used for image
enhancement to aid in human decoding, but I think machine decoding
depends on making the spatial dimensions discontinuous and comparing the
content of adjacent cells.
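
To make the two cases concrete, here is a minimal Python sketch (assuming
NumPy; the signal and image are synthetic stand-ins, not real speech or
vision data):

  import numpy as np

  # 1-D (monaural): recover the dominant frequencies of a signal via the FFT.
  fs = 8000                                  # sample rate in Hz
  t = np.arange(0, 1.0, 1.0 / fs)
  signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
  spectrum = np.fft.rfft(signal)
  freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
  peaks = np.sort(freqs[np.argsort(np.abs(spectrum))[-2:]])
  print("dominant frequencies (Hz):", peaks)        # ~ [440. 880.]

  # 2-D (monocular): compare adjacent discrete cells -- here a bare
  # finite-difference edge map over an 8x8 image.
  image = np.zeros((8, 8))
  image[:, 4:] = 1.0                         # a vertical step edge
  dx = np.abs(np.diff(image, axis=1))        # differences between adjacent cells
  print("edge between columns:", np.unique(np.nonzero(dx)[1]))   # -> [3]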

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: Wed 17 Jun 87 23:33:01-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Subject: Visual Decoding

From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)

For monaural
sound, decoding can be done with Fourier methods that are in principle
continuous. For monocular vision, Fourier methods are used for image
enhancement to aid in human decoding, but I think machine decoding
depends on making the spatial dimensions discontinuous and comparing the
content of adjacent cells.

Marty is right; one must be specific about the types of signals that are
carrying the information. Information theorists tend to work with
particular types of modulation (e.g., radar returns), but are interested
in the general principles of information transmission. Some of the
spread spectrum work is aimed at concealing evidence of modulation while
still being able to recover the encoded information.

Fourier techniques are particularly appropriate for speech processing
because sinusoidal waveforms (the basis of Fourier analysis) are the
eigenforms of acoustic channels. In other words, the sinusoidal components
of speech are transmitted relatively unharmed, although the phase relationships
between the components can be scrambled. Any process that decodes acoustic
signals must be prepared to deal with a little phase spreading. Other
1-D signals (e.g., spectrographic signatures of chemicals) may be composed
of Gaussian pulses or other basis forms. Yet others may be generated by
differential equations rather than composition or modulation of basis
functions. Decoding generally requires models of the generating process
and of the channel or sensing transformations, particularly if the latter
are invertible.
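
As a rough numerical illustration of the eigenform point (a toy FIR filter
standing in for an acoustic channel; assumes NumPy, and the numbers are
arbitrary):

  import numpy as np

  fs = 8000
  t = np.arange(0, 1.0, 1.0 / fs)
  x = np.sin(2 * np.pi * 300 * t)            # input sinusoid at 300 Hz

  h = np.array([0.5, 0.3, 0.2])              # toy linear time-invariant "channel"
  y = np.convolve(x, h, mode="same")         # channel output

  # The output energy stays concentrated at 300 Hz; only the component's
  # magnitude and phase have changed.
  X, Y = np.fft.rfft(x), np.fft.rfft(y)
  k = int(np.argmax(np.abs(X)))              # bin of the input tone
  print("same dominant bin:", k == int(np.argmax(np.abs(Y))))
  print("gain:", abs(Y[k] / X[k]))
  print("phase shift (rad):", np.angle(Y[k] / X[k]))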

Images are typically captured in discrete arrays, although we know that
biological retinas are neither limited to one kind of detector/resolution
nor so spatially regular. Discrete arrays are convenient, and the Nyquist
theorem (combined with the limited spatial resolution of typical imaging
systems) gives us assurance that we lose nothing below a specific minimum
frequency -- we can, if we wish, reconstruct the true image intensity at
any point in the image plane, regardless of its relationship to the pixel
centers. (In practice this interpolation is exceedingly difficult and is
almost never done -- but enough pixels are sampled to make interpolation
unnecessary for the types of discrimination we need to perform.) The
discrete pixel grid is often convenient but is not fundamental to the
enterprise of image analysis.
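
A minimal sketch of that reconstruction claim, with a synthetic band-limited
signal standing in for an image row (assumes NumPy):

  import numpy as np

  fs = 100.0                                 # sample rate, well above twice 5 Hz
  n = np.arange(100)                         # one second of sample indices
  f = lambda t: np.sin(2 * np.pi * 5.0 * t)  # band-limited "true intensity"
  samples = f(n / fs)

  def reconstruct(t):
      # Whittaker-Shannon (sinc) interpolation from the samples alone.
      return float(np.sum(samples * np.sinc(fs * t - n)))

  t0 = 0.3137                                # a point between sample centers
  print("true value:         ", f(t0))
  print("reconstructed value:", reconstruct(t0))  # small truncation error

This is the textbook interpolation formula; as noted above, in practice one
rarely bothers to carry it out.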

A difficulty in image analysis is that we rarely know the shapes of the
basis functions that carry the information; that, after all, is what we
are trying to determine by parsing a scene into objects. We do have
models of the optical channels, but they are generally noninvertible.
Our models of the generating processes (e.g., real-world scenes) are
exceedingly weak. We have some approaches to decoding these signals,
but nothing approaching the power of the human visual system except in
very special tasks (such as analysis of bubble chamber photographs).

-- Ken

------------------------------

Date: 17 Jun 87 08:02:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: symbol grounding and physical invertibility


I hate to nag but...

In all the high-falutin' philosophical give-and-take (of which, I admit,
I am actually quite fond) there's been no response to a much more
*specific* objection/question I raised earlier:

What if there were a few-to-one transformation between the skin-level
sensors (remember Harnad proposes "skin-and-in" invertibility
as being necessary for grounding) and the (somewhat more internal)
iconic representation? My example was to suppose that (1) a combination
of both red and green retinal receptors and (2) a yellow receptor BOTH
generated the same iconic yellow.

Clearly this iconic representation is non-invertible back out to the
sensory surfaces, but intuitively it seems like it would be grounded
nonetheless - how about it?
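
In code, the few-to-one case might be caricatured like this (the receptor
and label names are made up for illustration):

  def iconic_color(receptors):
      # Map skin-level receptor activity to an iconic color label.
      if receptors == {"red", "green"} or receptors == {"yellow"}:
          return "iconic-yellow"
      return "iconic-other"

  state_1 = {"red", "green"}       # red and green receptors firing together
  state_2 = {"yellow"}             # a dedicated yellow receptor firing
  assert iconic_color(state_1) == iconic_color(state_2) == "iconic-yellow"
  # Two distinct sensor-surface states collapse to one icon, so the icon
  # alone cannot be inverted back out to the sensory surface.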


John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 17 Jun 87 18:32:20 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem (Reply to Ken Laws on
ailist)

In article <849@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> As long as the requisite
>information-preserving mapping or "relational function" is in the head
>of the human interpreter, you do not have an invertible (hence analog)
>transformation. But as soon as the inverse function is wired in
>physically, producing a dedicated invertible transformation, you do
>have invertibility, ...

This seems to relate to a distinction between "physical invertibility" and
plain old invertibility, another of your points which I haven't understood.

I don't see any difference between "physical" and "merely theoretical"
invertibility. If a particular physical transformation of a signal is
invertible in theory, then I'd imagine we could always build a device to
perform the actual inversion if we wanted to. Such a device would of course
be a physical device; hence the invertibility would seem to count as
"physical," at least in the sense of "physically possible".

Surely you don't mean that a transformation-inversion capability must
actually be present in the device for it to count as "analog" in your sense.
(Else brains, for example, wouldn't count). So what difference are you trying
to capture with this distinction?

Anders Weinstein
BBN Labs

------------------------------

Date: 17 Jun 87 20:12:22 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


marty1@houdi.UUCP (M.BRILLIANT) asks:

> what do you think is essential: (A) literally analog transformation,
> (B) invertibility, or (C) preservation of significant relational
> functions?

Essential for what? For (i) generating the pairwise same/different judgments,
similarity judgments and matching that I've called, collectively,
"discrimination", and for which I've hypothesized that there are
iconic ("analog") representations? For that I think invertibility is
essential. (I think that in most real cases what is actually
physically invertible in my sense will also turn out to be "literally
analog" in a more standard sense. Dedicated digital equivalents that
would also have yielded invertibility would be like a Rube Goldberg
alternative; they would have a much bigger processing cost. But for my
purposes, the dedicated digital equivalent would in principle serve
just as well. Don't forget the *dedicated* constraint, though.)

For (ii) generating the reliable sorting and labeling of objects on the
basis of their sensory projections, which I've called collectively,
"identification" or "categorization"? For that I think only distinctive
features need to be extracted from the sensory projection. The rest need
not be invertible. Iconic representations are one-to-one with the
sensory projection; categorical representations are many-to-few.
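
The contrast might be caricatured in a few lines of Python (the transforms
and numbers are purely illustrative; assumes NumPy):

  import numpy as np

  projection = np.array([0.2, 0.9, 0.4, 0.7])      # toy sensory projection

  # Iconic: a dedicated, invertible (full-rank linear) transform; the
  # sensory projection can be recovered exactly.
  A = np.diag([2.0, 1.0, 3.0, 0.5])
  icon = A @ projection
  assert np.allclose(np.linalg.inv(A) @ icon, projection)   # one-to-one

  # Categorical: many-to-few; only an invariant feature (here, mean
  # intensity against a threshold) survives, and the rest is discarded.
  def categorize(proj, threshold=0.5):
      return "bright" if proj.mean() > threshold else "dark"

  print(categorize(projection))                    # a label; the detail is gone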

But if you're not talking about sensory discrimination or about
stimulus categorization but about, say, (iii) conscious problem-solving,
deduction, or linguistic description, then relation-preserving
symbolic representations would be optimal -- only the ones I advocate
would not be autonomous (modular). The atomic terms of which they were
composed would be the labels of categories in the above sense, and hence they
would be grounded in and constrained by the nonsymbolic representations.
They would preserve relations not just in virtue of their syntactic
form, as mediated by an interpretation; their meanings would be "fixed"
by their causal connections with the nonsymbolic representations that
ground their atoms.

But if your question concerns what I think is necessary to pass the
Total Turing Test (TTT), I think you need all of (i) - (iii), grounded
bottom-up in the way I've described.

> Where does [the symbol grounding] argument stand now? Can we
> restate it in terms whose definitions we all agree on?

The symbols of an autonomous symbol-manipulating module are
ungrounded. Their "meanings" depend on the mediation of human
interpretation. If an attempt is made to "ground" them merely by
linking the symbolic module with input/output modules in a dedicated
system, all you will ever get is toy models: Small, nonrepresentative,
nongeneralizable pieces of intelligent performance (a valid objective for
AI, by the way, but not for cognitive modeling). This is only a
conjecture, however, based on current toy performance models and the
kind of thing it takes to make them work. If a top-down symbolic
module linked to peripherals could successfully pass the TTT that way,
however, nothing would be left of the symbol grounding problem.

My own alternative has to do with the way symbolic models work (and
don't work). The hypothesis is that a hybrid symbolic/nonsymbolic
model along the lines sketched above will be needed in order to pass
the TTT. It will require a bottom-up, nonmodular grounding of its
symbolic representations in nonsymbolic representations: iconic
( = invertible with the sensory projection) and categorical ( = invertible
only with the invariant features of category members that are preserved
in the sensory projection and are sufficient to guide reliable
categorization).

> I think invertibility is too strong. It is sufficient, but not
> necessary, for human-style information-processing. Real people
> forget... misunderstand...

I think this is not the relevant form of evidence bearing on this
question. Sure we forget, etc., but the question concerns what it takes
to get it right when we actually do get it right. How do we discriminate,
categorize, identify and describe things as well as we do (TTT-level)
based on the sensory data we get? And I have to remind you again:
categorization involves at least as much selective *non*invertibility
as it does invertibility. Invertibility is needed where it's needed;
it's not needed everywhere, indeed it may even be a handicap (see
Luria's "Mind of a Mnemonist," which is about a person who seems to
have had such vivid, accurate and persisting eidetic imagery that he
couldn't selectively ignore or forget sensory details, and hence had
great difficulty categorizing, abstracting and generalizing; Borges
describes a similar case in "Funes the Memorious," and I discuss the
problem in "Metaphor and Mental Duality," a chapter in Simon & Scholes' (eds.)
"Language, Mind and Brain," Academic Press 1978).

> Do you still say [1] we only need transformations that are analog
> (invertible) with respect to those features for which they are analog
> (invertible)? That amounts to limited invertibility, and the next
> essential step would be [2] to identify the features that need
> invertibility, as distinct from those that can be thrown away.

Yes, I still say [1]. And yes, the category induction problem is [2].
Perhaps with the three-level division-of-labor I've described a
connectionist algorithm or some other inductive mechanism would be
able to find the invariant features that will subserve a sensory
categorization from a given sample of confusable alternatives. That's
the categorical representation.
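
One toy version of such an inductive mechanism (a plain perceptron on
synthetic data, assuming NumPy; it only illustrates converging on the
invariant feature, not any particular connectionist proposal):

  import numpy as np

  rng = np.random.default_rng(0)
  n = 200
  invariant = rng.integers(0, 2, n)          # the invariant feature
  noise = rng.random((n, 3))                 # confusable, irrelevant detail
  X = np.column_stack([invariant, noise]).astype(float)
  y = invariant                              # category label = invariant feature

  w = np.zeros(X.shape[1])                   # perceptron weights
  b = 0.0
  for _ in range(20):
      for xi, yi in zip(X, y):
          pred = 1 if xi @ w + b > 0 else 0
          w += (yi - pred) * xi
          b += (yi - pred)

  print("learned weights:", np.round(w, 2))  # the invariant column should dominate
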
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: Thu, 18 Jun 87 10:08:27 pdt
From: Ray Allis <ray@BOEING.COM>
Subject: The Symbol Grounding Answer

I have enjoyed the ailist's tone of rarified intellectual inquiry,
but lately I have begun to think the form of the question "What is
the solution to the Symbol Grounding Problem" has unduly influenced
the content of the answer, as in "How many angels can dance on the
head of a pin?"


You are solemnly discussing angels and pinheads.

There is no "Symbol Grounding Problem"; the things are *not* grounded.

The only relationship a symbol has with anything is that the physical
effects (electrical and chemical) of its perception in the brain of a
perceiver co-exist with the physical effects of other perceptions, and
are consequently associated in that individual's brain, and therefore
mind. It happens when we direct our baby's attention at a bovine and
clearly enunciate "COW". There is no more "physical invertibility" in
that case than there is between you and your name, and there is no other
physical relationship. And, as we computer hackers are wont to say,
"That's a feature, not a bug". It means we can and do "think" about
things and relationships which may not "exist". (BTW, it's even better!
You are right now using second-level symbols. The visual patterns you
are perceiving on paper or on a display screen are symbols for sounds,
which in turn are symbols for experiences.)

Last year's discussion of the definitions of "analog" and "digital" is
relevant to the present topic. In the paragraph above, the electrical
and chemical effects in the observer's brain are an *analogy* (we
hypothesize) of external "reality". These events are *determined* (we
believe) by that reality, i.e., for each external situation there is one
and only one electro-chemical state of the observer's brain. Now, the
brain effects appear pretty abstracted, or attenuated, so "complete
invertibility" is unlikely, but if we can devise a fancy enough brain,
it may be approachable. No such deterministic relationship holds between
external "reality" and symbols. As I noted above, symbols are related
to their referents by totally arbitrary association.

Thus, there is nothing subtle about the distinction between "analog"
and "digital"; they are two profoundly different things. The "digital"
side of an A/D relationship is *symbolic*. The relationship (we humans
create) between a symbol and a quantity is wholly arbitrary. The value
here is that we can use *deductive* relationships in our manipulation
of quantities, rather than, say, pouring water back and forth among a set
of containers to balance our bank account.

I am one of those convinced by such considerations that purely symbolic
means, which include most everything we do on digital computers, are
*insufficient in principle* to duplicate human behavior. And I have some
ideas about the additional things we need to investigate. (By the way,
whose behavior are we to duplicate? Genghis Khan? William Shakespeare?
Joe Sixpack? All of the above in one device? The Total Turing Test is
just academic obfuscation of "If it walks like a duck, and quacks like
a duck ...").

------------------------------

Date: 18 Jun 87 18:26:23 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <861@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> The atomic terms of which they were
>composed would be the labels of categories in the above sense, and hence they
>would be grounded in and constrained by the nonsymbolic representations.
>They would preserve relations not just in virtue of their syntactic
>form, as mediated by an interpretation; their meanings would be "fixed"
>by their causal connections with the nonsymbolic representations that
>ground their atoms.

I don't know how significant this is for your theory, but I think it's worth
emphasizing that the *semantic* meaning of a symbol is still left largely
unconstrained even after you take account of its "grounding" in perceptual
categorization. This is because what matters for intentional content is not
the objective property in the world that's being detected, but rather how the
subject *conceives* of that external property, a far more slippery notion.

This point is emphasized in a different context in the Churchlands' BBS reply
to Dretske's "Knowledge and the Flow of Information." To paraphrase one of
their examples: primitive people may be able to reliably categorize certain
large-scale atmospheric electrical discharges; nevertheless, the semantic
content of their corresponding states might be "Angry gods nearby" or some
such. Indeed, by varying their factual beliefs we could invent cases where
the semantic content of these states is just about anything you please.
Semantic content is a holistic matter.

Another well-known obstacle to moving from an objective to an intentional
description is that the latter contains an essentially normative component,
in that we must make some distinction between correct and erroneous
classification. For example, we'd probably like to say that a frog has a
fly-detector which is sometimes wrong, rather than a
"moving-spot-against-a-fixed-background" detector which is
infallible. Again, this distinction seems
to depend on fuzzy considerations about the purpose or functional role of the
concept in question.

Some of the things you say also suggest that you're attempting to resuscitate
a form of classical empiricist sensory atomism, where the "atomic" symbols
refer to sensory categories acquired "by acquaintance" and the meaning of
complex symbols is built up from the atoms "by description". This approach
has an honorable history in philosophy; unfortunately, no one has ever been
able to make it work. In addition to the above considerations, the main
problems seem to be: first, that no principled distinction can be made
between the simple sensory concepts and the complex "theoretical" ones; and
second, that very little that is interesting can be explicitly defined in
sensory terms (try, for example, "chair").

I realize the above considerations may not be relevant to your program -- I
just can't tell to what extent you expect it to shed any light on the problem
of explaining semantic content in naturalistic terms. In any case, I think
it's important to understand why this fundamental problem remains largely
untouched by such theories.

Anders Weinstein
BBN Labs

------------------------------

End of AIList Digest
********************
