AIList Digest           Thursday, 28 May 1987     Volume 5 : Issue 128 

Today's Topics:
Theory - Subsymbolic Pointers & IR Semantics & Symbol Grounding

----------------------------------------------------------------------

Date: Thu, 21 May 87 13:01:33 n
From: DAVIS%EMBL.BITNET@wiscvm.wisc.edu
Subject: framing problems


I'd like to say briefly that perhaps an even more astounding problem
than the one posed by Stevan Harnad is the means by which literate,
intelligent and interested persons can totally obscure the central
core of an idea through unnecessarily abstruse jargon.

If we're going to discuss the more philosophical end of AI (*yes please!*),
then we don't *have* to throw everyone else off the track by bogging
down the chat in a maze of terms intended to have *such* a precise meaning
that no one but the author can truly grasp what is intended.

Hofstadter's mention of the journal "Art-Language" in Metamagical Themas
should be more than just humorous - it should have warned us all about the
dangers of ridiculously narrow terminology.

thank you, and goodnight (yaaaawn!)

paul davis

netmail: davis@embl.bitnet

"and when calculating the promise, remember this - the real of the matter -
`to shatter tradition makes us *feel* free, but tradition, is a static
defence against a chaotic community, and what do we gain by destroying it'?"


------------------------------

Date: Thu, 21 May 87 12:56:10 PDT
From: rik%roland@sdcsvax.ucsd.edu (Rik Belew)
Subject: Subsymbolic pointers & IR Semantics


I now see why you consider my use of ``subsymbolic'' sloppy. It is
because you have a well thought out, concrete proposal for three distinct
representational levels that captures extremely well the distinctions I
was trying to make. In the main, I think I accept and even like your
``psycho-physically grounded'' symbol theory. I do have a few
questions, however.

\section{Icon/Pointer/Symbol $\neq$ Icon/Category/Symbol}

First, what evidence causes you to postulate iconic and categorical
representations as being distinct? Your distinction appears to rest
on differences between the types of task performance these two
representations each ``subserve.'' Apart from a relatively few
cognitive phenomena (short-term sensory storage, perhaps mental
imagery), I am aware of little evidence of ``... continuous, isomorphic
analogues of the sensory surfaces,'' which form the basis of your iconic
representations. In any case, I see great difficulty in distinguishing
between such representations and ``... constructive A/D filters which
preserve the invariant sensory features'' based simply on performance
at any particular task. More generally, could you motivate your
``subserve'' basis for classifying cognitive representations?

I use ``icon'' to mean much the same as your ``categorical
representations'' (which I'm sure will cause us no end of problems as
we discuss these issues!). These representations --- whatever they
are called --- are characterized by their direct, albeit statistical,
relationship with sensory features. This distinguishes icons from
``symbols'' which are representations without structural
correspondence with the environment.

Your more restricted notion of ``symbol'' seems to differ in two
major respects: its emphasis on the systematicity of symbols, and its
use of LABELS (of categories) as the atomic elements. I accept
the systematicity requirement, but I believe your labeling notion
confounds several important factors.

First, I believe you are using labels to mean POINTERS:
computationally efficient references to more elaborate and complete
representations. Such pointers correspond closely to Peirce's notion
of INDICES, and are valuable not only for pointing from symbols
to icons (the role you intend for labels) but also from one place in
the symbolic representation to another. Consider Peirce's view on the
primacy of pronouns.

However, I have come to use the term ``pointer'' instead of ``index''
because I also mean to refer to the vast economy of representation
afforded by such representational devices, as recognized by computer
science. Pointers have been an integral part of traditional data
structures since the beginning. Quillian's
use of TOKEN --> TYPE pointers is still a
classic example of their benefit to AI knowledge structures. More
recently, many connectionists have taken this pointer quality to be
what they mean by ``symbol.'' For example, Touretzky and Derthick say:
\begin{quotation}
Intelligence seems to require the ability to build complex structures
and to refer to them with simpler objects that may be passed among
processes easily. In this paper we use ``symbol'' to denote such
objects... Symbols in Lisp are mobile [one of five properties
Touretzky ascribes to symbols] because their addresses are easily
copied and passed around. In connectionist models where symbols are
identified with activity in particular units, symbols are not mobile.
[Touretzky \& Derthick, ``Symbol structures in connectionist
networks'' IEEE COMPCON 1987]
\end{quotation}
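
To make this pointer idea concrete, here is a minimal sketch of
Quillian-style TYPE/TOKEN pointers. The class names and the toy data
are my own illustrative inventions, not Quillian's notation:

\begin{verbatim}
# A TYPE node carries the full, elaborate description; each TOKEN is a
# cheap reference back to its TYPE (the TOKEN --> TYPE pointer).

class TypeNode:
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties        # the complete description

class TokenNode:
    def __init__(self, type_node):
        self.type = type_node               # the pointer itself

    def lookup(self, key):
        # A token stores nothing locally; it follows its pointer.
        return self.type.properties[key]

apple = TypeNode("apple", {"color": "red", "edible": True})
# Two occurrences of "apple" in a text: two tokens, one shared type.
t1, t2 = TokenNode(apple), TokenNode(apple)
assert t1.lookup("color") == t2.lookup("color")  # no description copied
\end{verbatim}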

A more sophisticated form of pointer has been discussed by Hinton as
what he calls a ``reduced description.'' The idea here is to allow the
pointer to contain some reduced version of the description to which it
is pointing. (For example, consider the use of tag bits in some
computer architectures that indicate whether the pointer address
refers to an integer, a real, a string, etc.) If the reduced
description is appropriately constructed, the pointer itself may
contain sufficient information and so the computational overhead of
following it to the full description can be avoided. In general,
however, it might seem impossible to construct such appropriate
reduced descriptions. But if a PROCESS view of cognition is
adopted, rather than relying on a STATIC structure to encode all
information, such generalized pointers become more conceivable:
reduced descriptions correspond to PARTIALLY ACTIVE
representations which, when more FULLY ACTIVE, lead to more
completely specified descriptions.
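
As a rough sketch of the reduced-description idea (the summary function
below is a hand-built stand-in of mine; Hinton's reduced descriptions
are distributed activity patterns, not stored tuples):

\begin{verbatim}
# A pointer that carries a reduced description of its referent. If the
# summary already settles a query, the full description is never fetched.

class ReducedPointer:
    def __init__(self, referent, summarize):
        self.referent = referent
        self.summary = summarize(referent)   # computed once, kept inline

    def answer(self, query):
        if query in self.summary:
            return self.summary[query]       # pointer never followed
        return self.referent[query]          # fall back to full description

full = {"kind": "integer", "value": 42, "provenance": "user input"}
p = ReducedPointer(full, lambda r: {"kind": r["kind"]})  # tag-bit analogue
p.answer("kind")    # answered from the summary alone
p.answer("value")   # requires following the pointer
\end{verbatim}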

The other feature of your labeling notion that intrigues me is
the naming activity it implies. This is where I see the issues
of language as becoming critical. I would go so far as to
propose that truly symbolic representations and language are
co-dependent. I believe we agree on this point. It is
important to point out that by claiming true symbol
manipulation arose only as a response to language, I do not
mean to belittle the cognitive abilities of pre-lingual
hominids. Current connectionist research is showing just how
powerful iconic (and perhaps categorical) representations can
be. By the same token, I use the term ``language'' broadly, to
include, for example, the behavior of other animals.

In summary, it seems to me that the aspect of symbols connectionism
needs most is something resembling pointers. More elaborate notions of
symbol introduce difficult semantic issues of language that can be
separated and addressed independently (see below). Without pointers,
connectionist systems will be restricted to ``iconic'' representations
whose close correspondence with the literal world severely limits their
ability to ``subserve'' most higher (non-lingual) cognitive functioning.

\section{Total Turing Test}

While I agree with the aims of your Total Turing Test (TTT),
viz. capturing the rich interrelated complexity characteristic
of human cognition, I have never found this direct comparison
to human performance helpful. A criterion of cognitive
adequacy that relies so heavily on comparison with humans
raises many tangential issues. I can imagine many questions
(e.g., regarding sex, drugs, rock and roll) that would easily
discriminate between human and machine. Yet I do not see such
questions illuminating issues in cognition.

On the other hand, I also want to avoid the ``... Searlian mysteries
about `intrinsic' vs. `derived' intentionality....'' Believe it or
not, it is exactly these considerations that have led me to the
information retrieval task domain. I did not motivate this well in my
last message and would like to give it another try.

\section{Semantics in information retrieval}

First, let's do our best to imagine providing an artificial cognitive
system (a robot) with the sort of grounding experience you and I both
believe necessary to full cognition. Let's give it video eyes,
microphone ears, feedback from its affectors, etc. And let's even
give it something approaching the same amount of time in this
environment that the developing child requires. I want to make two
comments on this Gedanken experiment. First, the corpus of experience
acquired by such a robot would be orders of magnitude more complex than
that of any system built today. Second, there is no doubt that even such
a complete system as this would have a radically different experience of
the world from our own. In short, I simply mean to highlight the huge
distance between the psycho-physical experience of any artificial
system and any human.

The communication barrier between the symbols of man and the
symbols of machine to which I referred in my last message is a
consequence of this distance. When we say ``apple'' I would
expect the symbol in our heads to have almost no correspondence
to the symbol ``apple'' in any computer. Since I see such a
correspondence as a necessary precondition to the development
of language, I am not hopeful that language between man and
machine can develop in the same fashion as language develops
within a species.

So the question for me becomes: how might we give a machine the
same rich corpus of experience (hence satisfying the total part
of your TTT) without relying on such direct experiential
contact with the world? The answer for me (at the moment) is
to begin at the level of WORDS. I view the enormous textual
databases of information retrieval (IR) systems as merely so
many words. I want to take this huge set of ``labels,''
attached by humans to their world, as my primitive experiential
database.

The task facing my system, then, is to look at and learn from this
world. This experience actually has two components. The textbase
itself provides the first source of information, viz., how authors use
and juxtapose words. The second, ongoing source of experience is the
interaction with IR users, in which people use these same words and
then react positively or negatively to my system's interpretation of
those words. The system then adapts its (connectionist) representation
of the words and documents so as to reflect what the consensus of its
users indicates by these words. In short, I am using the original
authors and the browsing users as the system's ``eyes'' into the human
world. I am curious to see what structural relationships arise among
these words, via low-level connectionist learning procedures, to
facilitate access to the IR database.
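
To fix ideas, here is a toy sketch of that feedback loop. The update
rule, learning rate, and retrieval scoring are assumptions of mine for
illustration, not a description of my actual (connectionist) system:

\begin{verbatim}
# Word--document link weights adapt to users' relevance judgments.

from collections import defaultdict

weights = defaultdict(float)   # (word, doc) -> association strength
RATE = 0.1

def retrieve(query_words, docs, k=3):
    scored = sorted(docs,
                    key=lambda d: sum(weights[(w, d)] for w in query_words),
                    reverse=True)
    return scored[:k]

def feedback(query_words, doc, relevant):
    # Browsing users act as the system's "eyes": strengthen links to
    # documents judged relevant, weaken links to those judged not.
    delta = RATE if relevant else -RATE
    for w in query_words:
        weights[(w, doc)] += delta

feedback(["symbol", "grounding"], "doc2", relevant=True)
retrieve(["symbol", "grounding"], ["doc1", "doc2", "doc3"], k=1)  # favors doc2
\end{verbatim}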

------------------------------

Date: Fri, 22 May 87 11:12:29 EDT
From: harnad@Princeton.EDU
Subject: Symbol Grounding - Pt. 1

This is part 1 of a response to a longish exchange on the symbol grounding
problem. Rik Belew <rik%roland@SDCSVAX.UCSD.EDU> asks:

> ... [1] what evidence causes you to postulate iconic and categorical
> representations as being distinct?... Apart from a relatively few
> cognitive phenomena (short-term sensory storage, perhaps mental
> imagery), I am aware of little evidence of "continuous, isomorphic
> analogues of the sensory surfaces" [your "iconic" representations].
> [2] I see great difficulty in distinguishing between such
> representations and "constructive A/D filters [`categorical'
> representations] which preserve the invariant sensory features" based
> simply on performance at any particular task. More generally, could
> you [3] motivate your ``subserve'' basis for classifying cognitive
> representations?

[1] First of all, short-term sensory storage does not seem to constitute
*little* evidence but considerable evidence. The tasks we can perform
after a stimulus is no longer present (such as comparing and matching)
force us to infer that there exist iconic traces. The alternative
hypothesis that the information is already a symbolic description at
this stage is simply not parsimonious and does not account for all the
data (e.g., Shepard's mental rotation effects). These short-term
effects do suggest that iconic representations may only be temporary
or transient, and that is entirely compatible with my model. Something
permanent is also going on, however, as the sensory exposure studies
suggest: Even if iconic traces are always stimulus-bound and
transient, they seem to have a long-term substrate too, because their
acuity and reliability increase with experience.

I would agree that the subjective phenomenology of mental imagery is very
weak evidence for long-term icons, but successful performance on some
perceptual tasks drawing on long-term memory is at least as economically
explained by the hypothesis that the icons are still accessible as by the
alternative that only symbolic descriptions are being used. In my
model, however, most long-term effects are mediated by the categorical
representations rather than the iconic ones. Iconic representations
are hypothesized largely to account for short-term perceptual
performance (same/difference judgment, relative comparisons,
similarity judgments, mental rotation, etc.). They are also, of
course, more compatible with subjective phenomenology (memory images
seem to be more like holistic sensory images than like selective
feature filters or symbol strings).

[2] The difference between isomorphic iconic representations (IRs)
and selective invariance filters (categorical representations, CRs)
is quite specific, although I must reiterate that CRs are really a
special form of "micro-icon." They are still sensory, but they are
selective, discarding most of the sensory variation and preserving
only the features that are invariant *within a specific context of
confusable alternatives*. (The key to my approach is that identifying
or categorizing something is never an *absolute* task but a relative,
context-dependent one: "What's that?" "Compared to What?") The only
"features" preserved in a CR are the ones that will serve as a reliable
basis for sorting the instances one has sampled into their respective
categories (as learned from feedback indicating correct or incorrect
categorizing). The "context" (of confusable alternatives), however, is
not a short-term phenomenon. Invariant features are provisional, and
always potentially revisable, but they are parts of a stable,
long-term category-representational system, one that is always being
extended and updated on the basis of new categorization tasks and
samples. It constitutes an ever-tightening approximation.

So the difference between IRs and CRs ("constructive A/D filters") is
that IRs are context-independent, depending only on the
comparison of raw sensory configurations and on any transformations that
rely on isomorphism with the unfiltered sensory configuration, whereas
CRs are context-dependent and depend on what confusable alternatives
have been sampled and must then be reliably identified in
isolation. The features on which this successful categorization is based
cannot be the holistic configural ones, which blend continuously into
one another; they are features specifically selected and abstracted to
subserve reliable categorization (within the context of alternatives
sampled to date). They may even be "constructive" features, in the sense
that they are picked out by performing an active operation -- sensory,
comparative or even logical -- on the sensory input. Apart from this invariant
basis for categorization (let's call these selectively abstracted features
"micro-iconic") all the rest of the iconic information is discarded from the
category filter.

[3] Having said all this, it is easy to motivate my "subserve" as you
request: IRs are the representations that subserve (= are required in
order to generate successful performance on) tasks that call for
holistic sensory comparisons and isomorphic transformations of
the unfiltered sensory trace (e.g., discrimination, matching,
similarity judgment) and CRs are the representations required to
generate successful performance on tasks that call for reliable
identification of confusable alternatives presented in isolation. As a
bonus, the latter provide the grounding for a third representational
system, symbolic representations (SRs), whose elementary symbols are
the labels of the bounded categories picked out by the CRs and
"fleshed out" by the IRs. These elementary symbols can then be
rulefully combined and recombined into symbolic descriptions which, in
virtue of their reducibility to grounded nonsymbolic representations,
can now refer to, describe, predict and explain objects and events in
the world.
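
For concreteness, here is a schematic sketch of the three levels; every
data structure below is an illustrative stand-in of my own, not part of
the model itself:

    # IR: a raw, unfiltered analogue of the sensory projection.
    sensory_icon = [0.9, 0.1, 0.4, 0.7]

    # CR: keep only the features that reliably sort the confusable
    # alternatives sampled so far. The feature set is provisional and
    # revisable; a fixed index set is used here only for brevity.
    invariant_features = [0, 3]

    def categorize(icon):
        cr = [icon[i] for i in invariant_features]
        return "apple" if cr[0] > 0.5 else "pear"   # category label

    # SR: elementary symbols are the category labels, which can then be
    # rulefully combined into grounded descriptions.
    label = categorize(sensory_icon)
    description = (label, "is", "red")

The only point of the sketch is the dependency: a symbol is grounded
because its atomic label bottoms out in a CR, which in turn filters an IR.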

Stevan Harnad
{seismo, psuvax1, bellcore, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.princeton.edu
(609)-921-7771

------------------------------

End of AIList Digest
********************
