AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 146 

Today's Topics:
Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 9 Jun 87 22:12:32 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <812@mind.UUCP> Stevan Harnad <harnad@mind.UUCP> replies:

With regard to physical invertibility and the A/D distinction:
>
>> a *digitized* image -- in your terms... is "analog" in the
>> information it preserves and not in the information lost. This
>> seems to me to be a very unhappy choice of terminology!
>
> For the time being, I've acknowledged that
>my invertibility criterion is, if not necessarily unhappy, somewhat
>surprising in its implications, for it implies (1) that being analog
>may be a matter of degree (i.e., degree of invertibility) and (2) even
>a classical digital system must be regarded as analog to a degree ...

Grumble. These consequences only *seem* surprising if we forget that you've
redefined "analog" in a non-standard manner; this is precisely I why I keep
harping on your terminology. Compare them with what you're really saying:
"physical invertibility is a matter of degree" or "a classical digital system
still employs physically invertible representations" -- both quite humdrum.

With regard to the symbolic AI approach to the "symbol-grounding problem":
>
>One proposal, as you note, is that a pure symbol-manipulating system can be
>"grounded" by merely hooking it up causally in the "right way" to the outside
>world with simple (modular) transducers and effectors. ... I have argued
>that [this approach] simply won't succeed in the long run (i.e., as we
>attempt to approach an asymptote of total human performance capacity ...)
>...In (1) a "toy" case ... the right causal connections could be wired
>according to the human encryption/decryption scheme: Inputs and outputs could
>be wired into their appropriate symbolic descriptions. ... But none but the
>most diehard symbolic functionalist would want to argue that such a simple
>toy model was "thinking," ... The reason is that we are capable of
>doing *so much more* -- and not by an assemblage of endless independent
>modules of essentially the same sort as these toy models, but by some sort of
>(2) integrated internal system. Could that "total" system be just an
>oversized toy model -- a symbol system with its interpretations "fixed" by a
>means analogous to these toy cases? I am conjecturing that it is not.

I think your reply may misunderstand the point of my objection. I'm not
trying to defend the intentionality of "toy" programs. I'm not even
particularly concerned to *defend* the symbolic approach to AI (I personally
don't even believe in it). I'm merely trying to determine exactly what your
argument against symbolic AI is.

I had thought, perhaps wrongly, that you were claiming that the
interpretations of systems conceived by symbolic AI must somehow
inevitably fail to be "grounded", and that only a system which employed
"analog" processing in the way you suggest would have the causal basis
required for fixing an interpretation. In response, I pointed out first that
advocates of the symbolic approach already understand that causal commerce
with the environment is necessary for intentionality: they envision the use
of complex perceptual systems to provide the requisite "grounding". So it's
not as though the symbolic approach is indifferent to this issue. And your
remarks against "toy" systems and "hard-wiring" the interpretations of the
inputs are plain unfair -- the symbolic approach doesn't belittle the
importance or complexity of what perceptual systems must be able to do. It is
in total agreement with you that a truly intentional system must be capable
of complex adaptive performance via the use of its sensory input -- it just
hypothesizes that symbolic processing is sufficient to achieve this.

And, as I tried to point out, there is just no reason that a modular,
all-digital system of the kind envisioned by the symbolic approach could not
be entirely "grounded" BY YOUR OWN THEORY OF "GROUNDEDNESS": it could employ
"physically inevertible" representations (only they would be digital ones),
from these it could induct reliable "feature filters" based on training (only
these would use digital rather than analog techniques), etc. I concluded that
the symbolic approach appears to handle your so-called "grounding problem"
every bit as well as any other method.
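
To make that point concrete, here is a minimal sketch (an illustration only, not code from either correspondent; the 12-bit sensor readings and the two-byte encoding are assumptions invented for the example): a digital representation can be perfectly invertible provided its encoding is lossless.

# A minimal sketch: a lossless digital encoding is fully invertible, so the
# invertibility criterion by itself does not rule out digital representations.
import numpy as np

readings = np.array([17, 403, 2048, 4095, 0, 999], dtype=np.uint16)  # assumed 12-bit sensor values

def encode(values):
    """Lossless digital encoding: store each reading as two big-endian bytes."""
    return values.astype(">u2").tobytes()

def decode(blob):
    """Recover the original readings exactly from the byte stream."""
    return np.frombuffer(blob, dtype=">u2").astype(np.uint16)

recovered = decode(encode(readings))
print(np.array_equal(readings, recovered))   # True: digital, yet fully invertible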

Now comes the reply that you are merely conjecturing that analog processing
may be required to realize the full range of human, as opposed to "toy",
performance -- in short, you think the symbolic approach just won't work.
But this is a completely different issue! It has nothing to do with some
mythical "symbol grounding" problem, at least as I understand it. It's just
the same old "intelligent-behavior-generating" problem which everyone in AI,
regardless of paradigm, is looking to solve.

From this reply, it seems to me that this alleged "symbol-grounding problem"
is a real red-herring (it misled me, at least). All you're saying is that
you suspect that mainstream AI's symbol system hypothesis is false, based on
its lack of conspicuous performance-generating successes. Obviously everyone
must recognize that this is a possibility -- the premise of symbolic AI is,
after all, only a hypothesis.

But I find this a much less interesting claim than I originally thought --
conjectures, after all, are cheap. It *would* be interesting if you could
show, as, say, the connectionist program is trying to, how analog processing
can work wonders that symbol-manipulation can't. But this would require
detailed research, not speculation. Until then, it remains a mystery why your
proposed approach should be regarded as any more promising than any other.

Anders Weinstein
BBN Labs

------------------------------

Date: 10 Jun 87 21:28:23 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Laboratories, Inc.,
Cambridge, MA writes:

> There's no [symbol] grounding problem, just the old
> behavior-generating problem

Before responding to the supporting arguments for this conclusion, let
me restate the matter in what I consider to be the right way. There is:
(1) the behavior-generating problem (what I have referred to as the problem of
devising a candidate that will pass the Total Turing Test), (2) the
symbol-grounding problem (the problem of how to make formal symbols
intrinsically meaningful, independent of our interpretations), and (3)
the conjecture (based on the existing empirical evidence and on
logical and methodological considerations) that (2) is responsible for
the failure of the top-down symbolic approach to solve (1).

>>my [SH's] invertibility criterion is, if not necessarily unhappy, somewhat
>>surprising in its implications, for it implies that (1) being analog may
>>be a matter of degree (i.e., degree of invertibility) and that (2) even
>>a classical digital system must be regarded as analog to a degree ...
>
> These consequences only *seem* surprising if we forget that you've
> redefined "analog" in a non-standard manner... you're really saying:
> "physical invertibility is a matter of degree" or "a classical digital
> system still employs physically invertible representations" -- both
> quite humdrum.

You've bypassed the three points I brought up in replying to your
challenge to my invertibility criterion for an analog transform the
last time: (1) the quantization in standard A/D is noninvertible, (2) a
representation can only be analog in what it preserves, not in what it
fails to preserve, and, in cognition at any rate, (3) the physical
shape of the signal may be what matters, not the "message" it
"carries." Add to this the surprising logical consequence that a
"dedicated" digital system (hardwired to its peripherals) would be
"analog" in its invertible inputs and outputs according to my
invertibility criterion, and you have a coherent distinction that conforms
well to some features of the classical A/D distinction, but that may prove
to diverge, as I acknowledged, sufficiently to make it an independent,
"non-standard" distinction, unique to cognition and neurobiology. Would it be
surprising if classical electrical engineering concepts did not turn
out to be just right for mind-modeling?
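
For concreteness, here is a minimal sketch of points (1) and (2) (an illustration only, not from either post; the gain and the level counts are arbitrary assumptions): a continuous transform can be undone exactly, whereas standard A/D quantization cannot, and finer quantization only shrinks the loss, which is why invertibility -- and hence "analog" on this criterion -- comes in degrees.

# A minimal sketch: quantization is noninvertible; an invertible ("analog" on
# the invertibility criterion) transform is not; the loss shrinks with finer
# quantization but never vanishes.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(-1.0, 1.0, size=8)        # stand-in for a continuous input

def analog_transform(x, gain=2.5):
    """An invertible transform: nothing about the input is lost."""
    return gain * x

def invert_analog(y, gain=2.5):
    return y / gain

def quantize(x, levels=4):
    """Standard A/D: map the signal onto a small number of discrete levels."""
    step = 2.0 / levels
    return np.round(x / step) * step

print(np.allclose(signal, invert_analog(analog_transform(signal))))  # True: invertible
print(np.allclose(signal, quantize(signal)))                         # False: quantization error is unrecoverable

# "Degree of invertibility": finer quantization loses less, but never nothing.
for levels in (4, 16, 256):
    print(levels, np.max(np.abs(signal - quantize(signal, levels))))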

> I [AW] had thought, perhaps wrongly, that you were claiming that the
> interpretations of systems conceived by symbolic AI must somehow
> inevitably fail to be "grounded", and that only a system which employed
> "analog" processing in the way you suggest would have the causal basis
> required for fixing an interpretation.

That is indeed what I'm claiming (although you've completely omitted
the role of the categorical representations, which are just as
critical to my scheme, as described in the CP book). But do make sure you
keep my "non-standard" definition of analog in mind, and recall that I'm
talking about asymptotic, human-scale performance, not toy systems.
Toy systems are trivially "groundable" (even by my definition of
"analog") by hard-wiring them into a dedicated input/output
system. But the problem of intrinsic meaningfulness does not arise for
toy models, only for devices that can pass the Total Turing Test (TTT).
[The argument here involves showing that to attribute intentionality to devices
that exhibit sub-TTT performance is not justified in the first place.]
The conjecture is accordingly that the modular solution (i.e., hardwiring an
autonomous top-down symbolic module to conventional peripheral modules
-- transducers and effectors) will simply not succeed in producing a candidate
that will be able to pass the Total Turing Test, and that the fault
lies with the autonomy (or modularity) of the symbolic module.

But I am not simply proposing an unexplicated "analog" solution to the
grounding problem either, for note that a dedicated modular system *would*
be analog according to my invertibility criterion! The conjecture is
that such a modular solution would not be able to meet the TTT
performance criterion, and the grounds for the conjecture are partly
inductive (extrapolating symbolic AI's performance failures), partly
logical and methodological (the grounding problem), and partly
theory and data-driven (psychophysical findings in human categorical
perception). My proposal is not that some undifferentiated,
non-standard "analog" processing must be going on. I am advocating a
specific hybrid bottom-up, symbolic/nonsymbolic rival to the pure
top-down symbolic approach (whether or not the latter is wedded to
peripheral modules), as described in the volume under discussion
("Categorical Perception: The Groundwork of Cognition," CUP 1987).

> advocates of the symbolic approach already understand that causal
> commerce with the environment is necessary for intentionality: they
> envision the use of complex perceptual systems to provide the
> requisite "grounding". So it's not as though the symbolic approach
> is indifferent to this issue.

This is the pious hope of the "top-down" approach: That suitably
"complex" perceptual systems will meet for a successful "hook-up"
somewhere in the middle. But simply reiterating it does not mean it
will be realized. The evidence to date suggests the opposite: That the
top-down approach will just generate more special-purpose toys, not a
general purpose, TTT-scale model of human performance capacity. Nor is
there any theory at all of what the requisite perceptual "complexity"
might be: The stereotype is still standard transducers that go from physical
energy via A/D conversion straight into symbols. Nor does "causal
commerce"
say anything: It leaves open anything from the modular
symbol-cruncher/transducer hookups of the kind that so far only seem
capable of generating toy models, to hybrid, nonmodular, bottom-up
models of the sort I would advocate. Perhaps it's in the specific
nature of the bottom-up grounding that the nature of the requisite
"complexity" and "causality" will be cashed in.

> your remarks against "toy" systems and "hard-wiring" the
> interpretations of the inputs are plain unfair -- the symbolic
> approach doesn't belittle the importance or complexity of what
> perceptual systems must be able to do. It is in total agreement
> with you that a truly intentional system must be capable of complex
> adaptive performance via the use of its sensory input -- it just
> hypothesizes that symbolic processing is sufficient to achieve this.

And I just hypothesize that it isn't. And I try to say why not (the
grounding problem and modularity) and what to do about it (bottom-up,
nonmodular grounding of symbolic representations in iconic and categorical
representations).
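
As a toy illustration of that bottom-up scheme (a sketch only, not the model in the CP book; the "dim"/"bright" categories, the threshold filter, and the label table are assumptions invented for the example): an iconic representation preserves the input, a categorical representation is a feature filter induced from labelled training samples, and the symbolic label is grounded in the filter's output rather than in an external interpretation.

# A minimal sketch of iconic -> categorical -> symbolic grounding.
import numpy as np

rng = np.random.default_rng(1)

def iconic(stimulus):
    """Iconic representation: a faithful (invertible) copy of the input."""
    return np.array(stimulus, dtype=float)

def induce_feature_filter(samples, labels):
    """Categorical representation: a feature filter induced from training.
    Here, a single threshold on mean intensity -- a deliberately toy filter."""
    means = np.array([s.mean() for s in samples])
    threshold = (means[labels == 0].mean() + means[labels == 1].mean()) / 2.0
    return lambda icon: int(icon.mean() > threshold)

# Toy training data: "dim" stimuli (category 0) vs "bright" stimuli (category 1).
samples = np.array([rng.uniform(0.0, 0.4, 16) for _ in range(20)]
                   + [rng.uniform(0.6, 1.0, 16) for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)

categorize = induce_feature_filter(samples, labels)
symbol_table = {0: "dim", 1: "bright"}          # symbols grounded in the filter's output

new_stimulus = rng.uniform(0.6, 1.0, 16)
print(symbol_table[categorize(iconic(new_stimulus))])   # -> "bright"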

> there is just no reason that a modular, all-digital system of the
> kind envisioned by the symbolic approach could not be entirely
> "grounded" BY YOUR OWN THEORY OF "GROUNDEDNESS": it could employ
> "physically inevertible" representations (only they would be digital
> ones), from these it could induct reliable "feature filters" based on
> training (only these would use digital rather than analog techniques),
> etc. ... the symbolic approach appears to handle your so-called
> "grounding problem" every bit as well as any other method.

First of all, as I indicated earlier, a dedicated top-down symbol-crunching
module hooked to peripherals would indeed be "grounded" in my sense --
if it had TTT-performance power. Nor is it *logically impossible* that
such a system could exist. But it certainly does not look likely on the
evidence. I think some of the reasons we were led (wrongly) to expect it were
the following:

(1) The original successes of symbolic AI in generating intelligent
performance: The initial rule-based, knowledge-driven toys were great
successes, compared to the alternatives (which, apart from some limited
feats of Perceptrons, were nonexistent). But now, after a generation of
toys that show no signs of converging on general principles and growing
up to TTT-size, the inductive evidence is pointing in the other direction:
More ad hoc toys is all we have grounds to expect.

(2) Symbol strings seemed such hopeful candidates for capturing mental
phenomena such as thoughts, knowledge, beliefs. Symbolic function seemed
like such a natural, distinct, nonphysical level for capturing the mind.
Easy come, easy go.

(3) We were persuaded by the power of computation -- Turing
equivalence and all that -- to suppose that computation
(symbol-crunching) just might *be* cognition. If every (discrete)
thing anyone or anything (including the mind) does is computationally
simulable, then maybe the computational functions capture the mental
functions? But the fact that something is computationally simulable
does not entail that it is implemented computationally (any more than
behavior that is *describable* as ruleful is necessarily following an
explicit rule). And some functions (such as transduction and causality)
cannot be implemented computationally at all.

(4) We were similarly persuaded by the power of digital coding -- the
fact that it can approximate analog coding as closely as we please
(and physics permits) -- to suppose that digital representations were
the only ones we needed to think about. But the fact that a digital
approximation is always possible does not entail that it is always
practical or optimal, nor that it is the one that is actually being
*used* (by, say, the brain). Some form of functionalism is probably
right, but it certainly need not be symbolic functionalism, or a
functionalism that is indifferent to whether a mental function or
representation is analog or digital: The type of implementation may
matter, both to the practical empirical problem of successfully
generating performance and to the untestable phenomenological problem of
capturing qualitative subjective experience. And some functions (let
me again add), such as transduction and (continuous) A/A, cannot be
implemented purely symbolically at all.

A good example to bear in mind is Shepard's mental rotation
experiments. On the face of it, the data seemed to suggest that
subjects were doing analog processing: In making same/different
judgments of pairs of successively presented 2-dimensional projections
of 3-dimensional, computer-generated, unfamiliar forms, subjects' reaction
times for saying "same" when one stimulus was in a standard orientation and
the other was rotated were proportional to the degree of rotation. The
diehard symbolists pointed out (correctly) that the proportionality,
instead of being due to the real-time analog rotation of a mental icon, could
have been produced by, say, (1) serially searching through the coordinates
of a digital grid on which the stimuli were represented, with more distant
numbers taking more incremental steps to reach, or by (2) doing
inferences on formal descriptions that became more complex (and hence
time-consuming) as the orientation became more eccentric. The point,
though, is that although digital/symbolic representations were indeed
possible, so were analog ones, and here the latter would certainly seem to be
more practical and parsimonious. And the fact of the matter -- namely,
which kinds of representations were *actually* used -- is certainly
not settled by pointing out that digital representations are always
*possible.*
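
A minimal sketch of the two accounts (an illustration only, not Shepard's model or data; the tick rate, grid step, and step cost are invented parameters): both the analog-rotation story and the digital grid-search story yield response times that grow with the angle of rotation, which is exactly why the chronometric data alone do not force a choice between them.

# Two toy accounts of reaction time as a function of rotation angle.
import math

def analog_rotation_time(angle_deg, deg_per_tick=10.0):
    """Analog account: rotate the icon a fixed amount per unit time,
    so time is directly proportional to angle."""
    return angle_deg / deg_per_tick

def digital_search_time(angle_deg, grid_step_deg=15.0, cost_per_step=1.5):
    """Digital account: serially step through grid positions between the
    standard orientation and the rotated one; more distant positions take
    more incremental steps."""
    steps = math.ceil(angle_deg / grid_step_deg)
    return steps * cost_per_step

for angle in (0, 30, 60, 90, 120, 150, 180):
    print(angle, analog_rotation_time(angle), digital_search_time(angle))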

Maybe a completely digital mind would have required a head the size of
New York State and polynomial evolutionary time in order to come into
existence -- who knows? Not to mention that it still couldn't do the
"A" in the A/D...

> [you] reply that you are merely conjecturing that analog processing
> may be required to realize the full range of human, as opposed to "toy",
> performance -- in short, you think the symbolic approach just won't
> work. But this... has nothing to do with some mythical "symbol
> grounding"
problem, at least as I understand it. It's just
> the same old "intelligent-behavior-generating" problem which everyone
> in AI, regardless of paradigm, is looking to solve... All you're
> saying is that you suspect that mainstream AI's symbol system
> hypothesis is false, based on its lack of conspicuous
> performance-generating successes. Obviously everyone must recognize
> that this is a possibility -- the premise of symbolic AI is, after
> all, only a hypothesis.

I'm not just saying I think the symbolic hypothesis is false. I'm
saying why I think it's false (ungroundedness) and I'm suggesting an
alternative (a bottom-up hybrid).

> But I find this a much less interesting claim than I originally
> thought -- conjectures, after all, are cheap. It *would* be
> interesting if you could show, as, say, the connectionist program
> is trying to, how analog processing can work wonders that
> symbol-manipulation can't. But this would require detailed research,
> not speculation. Until then, it remains a mystery why your proposed
> approach should be regarded as any more promising than any other.

Be patient. My hypotheses (which are not just spontaneous conjectures,
but are based on an evaluation of the available evidence, the theoretical
alternatives, and the logical and methodological problems involved)
will be tested. They even have a potential connectionist component (in
the induction of the features subserving categorization), although
connectionism comes in for criticism too. For now it would seem only
salutary to attempt to set cognitive modeling in directions that
differ from the unprofitable ones it has taken so far.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 11 Jun 87 15:24:22 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <828@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> aweinste@Diamond.BBN.COM (Anders Weinstein) of BBN Laboratories, Inc.,
> Cambridge, MA writes:
>
> > There's no [symbol] grounding problem, just the old
> > behavior-generating problem
>
> ..... There is:
> (1) the behavior-generating problem (what I have referred to as the problem of
> devising a candidate that will pass the Total Turing Test), (2) the
> symbol-grounding problem (the problem of how to make formal symbols
> intrinsically meaningful, independent of our interpretations), and (3) ...

Just incidentally, what is the intrinsic meaning of "intrinsically
meaningful"
? The Turing test is an objectively verifiable criterion.
How can we objectively verify intrinsic meaningfulness?

> .... Add to this the surprising logical consequence that a
> "dedicated" digital system (hardwired to its peripherals) would be
> "analog" in its invertible inputs and outputs according to my
> invertibility criterion, .....

Using "analog" to mean "invertible" invites misunderstanding, which
invites irrelevant criticism.

Human (in general, vertebrate) visual processing is a dedicated
hardwired digital system. It employs data reduction to abstract such
features as motion, edges, and orientation of edges. It then forms a
map in which position is crudely analog to the visual plane, but
quantized. This map is sufficiently similar to maps used in image
processing machines so that I can almost imagine how symbols could be
generated from it.

By the time it gets to perception, it is not invertible, except with
respect to what is perceived. Noninvertibility is demonstrated in
experiments in the identification of suspects. Witnesses can report
what they perceive, but they don't always perceive enough to invert
the perceived image and identify the object that gave rise to the
perception. If you don't agree, please give a concrete, objectively
verifiable definition of "invertibility" that can be used to refute my
conclusion.
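
A minimal sketch of that noninvertibility claim (an illustration only, not a model of vertebrate vision; the thresholded edge map is an assumed stand-in for the data reduction described above): once features are abstracted and quantized, distinct stimuli collapse onto the same perceptual map, so the map cannot be inverted back to the stimulus that produced it.

# A minimal sketch: feature abstraction plus quantization is many-to-one.
import numpy as np

def edge_map(image, threshold=0.5):
    """Crude data reduction: keep only where adjacent pixels differ strongly,
    quantized to 0/1."""
    diffs = np.abs(np.diff(image, axis=1))
    return (diffs > threshold).astype(int)

# Two different stimuli...
image_a = np.array([[0.10, 0.10, 0.90, 0.90],
                    [0.20, 0.20, 0.80, 0.80]])
image_b = np.array([[0.15, 0.05, 0.95, 0.85],
                    [0.25, 0.15, 0.85, 0.75]])

# ...that yield the identical perceptual map: the mapping is many-to-one,
# hence noninvertible with respect to the original stimulus.
print(np.array_equal(edge_map(image_a), edge_map(image_b)))   # True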

If I am right, human intelligence itself relies on neither analog nor
invertible symbol grounding, and therefore artificial intelligence
does not require it.

By the way, there is an even simpler argument: even the best of us can
engage in fuzzy thinking in which our symbols turn out not to be
grounded. Subjectively, we then admit that our symbols are not
intrinsically meaningful, though we had interpreted them as such.

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************
