AIList Digest Thursday, 2 Jul 1987 Volume 5 : Issue 166
Today's Topics:
Theory - Perception,
Policy - Quoting
----------------------------------------------------------------------
Date: 29 Jun 87 22:46:31 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen Smoliar)
Subject: Re: The symbol grounding problem....
In article <1194@houdi.UUCP> marty1@houdi.UUCP (M.BRILLIANT) writes:
>
>I was just looking at a kitchen chair, a brown wooden kitchen
>chair against a yellow wall, in side light from a window. Let's
>let a machine train its camera on that object. Now either it
>has a mechanical array of receptors and processors, like the
>layers of cells in a retina, or it does a functionally
>equivalent thing with sequential processing. What it has to do
>is compare the brightness of neighboring points to find places
>where there is contrast, find contrast in contiguous places so
>as to form an outline, and find closed outlines to form objects.
>There are some subtleties needed to find partly hidden objects,
>but I'll just assume they're solved. There may also be an
>interpretation of shadow gradations to perceive roundness.
>
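(As a rough illustration, the contrast-finding step in this scenario
amounts to something like the following Python sketch. The 0.2 threshold
and the right/down neighbor comparison are assumptions introduced for the
sketch, not part of the scenario; outline-closing and occlusion handling
are exactly the subtleties being waved off above.)

    import numpy as np

    def edge_map(brightness, threshold=0.2):
        # "Compare the brightness of neighboring points to find places
        # where there is contrast": mark pixels whose brightness differs
        # sharply from a right or lower neighbor.
        dx = np.abs(np.diff(brightness, axis=1))   # horizontal contrast
        dy = np.abs(np.diff(brightness, axis=0))   # vertical contrast
        edges = np.zeros(brightness.shape, dtype=bool)
        edges[:, :-1] |= dx > threshold
        edges[:-1, :] |= dy > threshold
        return edges

    # A dark chair-shaped region against a bright wall:
    scene = np.full((8, 8), 0.9)    # yellow wall, bright
    scene[2:7, 2:6] = 0.3           # brown chair, dark
    print(edge_map(scene).astype(int))   # 1s trace the outline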
I have been trying to keep my distance from this debate, but I would like
to insert a few observations regarding this scenario. In many ways, this
paragraph represents the "obvious" approach to perception, assuming that
one is dealing with a symbol manipulation system. However, other approaches
have been hypothesized. While their viability remains to be demonstrated,
it would be fair to say that, in the broad scope of perception in the real
world, the same may be said of symbol manipulation systems.
Consider the holographic model proposed by Karl Pribram in LANGUAGES OF THE
BRAIN. As I understand it, this model postulates that memory is a collection
of holographic transforms of experienced images. As new images are
experienced, the brain is capable of retrieving "best fits" from this
memory to form associations. Thus, the chair you see in the above
paragraph is recognized as a chair by virtue of the fact that it "fits"
other images of chairs you have seen in the past.
I'm not sure I buy this, but I'm at least willing to acknowledge it as
an alternative to your symbol manipulation scenario. The biggest problem
I have has to do with retrieval. As far as I understand, present holographic
retrieval works fine as long as you don't have to worry about little things
like change of scale, translation, or rotation. If this model is going to
work, then the retrieval process is going to have to be more powerful than
the current technology allows.
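For concreteness, one crude stand-in for this kind of best-fit retrieval
is matched filtering: store past images (or transforms of them) and
retrieve the one whose cross-correlation with the new input peaks
highest. The Python sketch below is an illustrative assumption, not
Pribram's actual mechanism; note that plain correlation happens to
tolerate translation, while rotation or a change of scale collapses the
peak, which is the retrieval weakness at issue.

    import numpy as np

    def correlation_peak(image, template):
        # Circular cross-correlation via FFT; the peak height measures
        # how well the template fits the image at its best alignment.
        f = np.fft.fft2(image)
        g = np.fft.fft2(template)
        return np.fft.ifft2(f * np.conj(g)).real.max()

    def best_fit(image, memory):
        # "Retrieve the best fit": score every stored trace, keep the
        # winner.  (Unnormalized -- a serious matched filter would
        # normalize the energies of image and template.)
        return max(memory, key=lambda name: correlation_peak(image, memory[name]))

    # memory = {"kitchen chair": chair_array, "cat": cat_array, ...}
    # label = best_fit(new_image, memory)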
The other problem relates to concept acquisition, as was postulated in
Brilliant's continuation of the scenario:
>
>Now the machine has a form. If the form is still unfamiliar,
>let it ask, "What's that, Daddy?" Daddy says, "That's a chair."
>The machine files that information away. Next time it sees a
>similar form it says "Chair, Daddy, chair!" It still has to
>learn about upholstered chairs, but give it time.
>
The difficulty seems to be in what it means to file something away if
one's memory is simply a store of experiences. Does the memory trace of the
chair experience include Daddy's voice saying "chair?" While I'm willing
to acknowledge a multi-media memory trace, this seems a bit pat. It
reminds me of Skinner's VERBAL BEHAVIOR, in which he claimed that one
learned the concept "beautiful" from the stimulus of hearing people say
"beautiful" in front of beautiful objects. This conjures up a vision of
people wandering around the Metropolitan Museum of Art, muttering
"beautiful" as they pass from gallery to gallery.
Perhaps the difficulty is that the mind really doesn't want to assign a
symbol to every experience immediately. Rather, following the model of
Holland et al., it is first necessary to build up some degree of
reinforcement which assures that a particular memory trace is actually
going to be retrieved relatively frequently (whatever that means).
In such a case, a symbol becomes a fast-access mechanism for
retrieval of that trace (or of a collection of common traces). However,
this gives rise to at least three questions for which I have no answer:
1. What are the criteria by which it is decided that such a
symbol is required for fast-access?
2. Where does the symbol's name come from?
3. How is the symbol actually "bound" to what it retrieves?
These would seem to be the sort of questions which might help to tie
this debate down to more concrete matters.
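As one small step toward concreteness, here is a toy Python rendering of
the symbol-as-fast-access idea. The promotion threshold, the use of the
teacher's word as the symbol's name, and the dictionary binding are all
assumptions of the sketch, none of them Holland et al.'s own machinery;
the comments flag where each of the three questions gets ducked.

    class TraceMemory:
        # Memory traces earn a symbol only after enough reinforcement.
        PROMOTION_THRESHOLD = 5   # question 1 ducked: arbitrary criterion

        def __init__(self):
            self.traces = []      # slow store: [trace, taught_name, hits]
            self.symbols = {}     # fast path: name -> trace

        def store(self, trace, taught_name):
            self.traces.append([trace, taught_name, 0])

        def retrieve(self, probe, similarity):
            # Slow path: best-fit search over every trace (cf. the
            # holographic retrieval sketched earlier); each retrieval
            # reinforces the winning trace.
            best = max(self.traces, key=lambda rec: similarity(probe, rec[0]))
            best[2] += 1
            if best[2] >= self.PROMOTION_THRESHOLD:
                # Question 2 ducked: the name is whatever Daddy said.
                # Question 3 ducked: "binding" is a dictionary entry.
                self.symbols[best[1]] = best[0]
            return best[0]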
Brilliant continues:
>That brings me to a question: do you really want this machine
>to be so Totally Turing that it grows like a human, learns like
>a human, and not only learns new objects, but, like a human born
>at age zero, learns how to perceive objects? How much of its
>abilities do you want to have wired in, and how much learned?
>
This would appear to be one of the directions in which connectionism is
leading. In a recent talk, Sejnowski spoke of "training" networks
for text-to-speech and backgammon, not programming them. On the
other hand, at the current level of his experiments, designing the network
is as important as training it; training can't begin until one has a
suitable architecture of nodes and connections. The big unanswered
question would appear to be: will all of this scale upward? That
is, is there ultimately some all-embracing architecture which includes
all the mini-architectures examined by connectionist experiments, and
enough more to accommodate the methodological epiphenomenalism of real
life?
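The design-versus-training distinction is easy to exhibit in miniature.
In the Python sketch below (a toy, not Sejnowski's NETtalk setup), the
architecture -- two inputs, four hidden units, one output, with bias
terms -- is fixed by hand before training starts; the loop only adjusts
weights, and with no hidden layer at all the same loop can never learn
XOR.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 4))   # designed: 2 inputs + bias -> 4 hidden
    W2 = rng.normal(size=(5, 1))   # designed: 4 hidden + bias -> 1 output

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets
    Xb = np.hstack([X, np.ones((4, 1))])               # append bias column

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):                 # training adjusts weights only
        h = sigmoid(Xb @ W1)                               # forward pass
        hb = np.hstack([h, np.ones((4, 1))])
        out = sigmoid(hb @ W2)
        d_out = (out - y) * out * (1 - out)                # backward pass
        d_h = (d_out @ W2[:-1].T) * h * (1 - h)
        W2 -= 0.5 * hb.T @ d_out                           # weight updates
        W1 -= 0.5 * Xb.T @ d_h

    print(out.round(2))   # approaches [0, 1, 1, 0]; training can stall
                          # in a poor minimum -- if so, reseed and rerun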
------------------------------
Date: 1 Jul 87 16:14:41 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem: Against Rosch & Wittgenstein
In article <949@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>> There is no reliable, consensual all-or-none categorization performance
>> without a set of underlying features? That sounds like a restatement of
>> the categorization theorist's credo rather than a thing that is so.
>
>If not, what is the objective basis for the performance? And how would
>you get a device to do it given the same inputs?
I think there's some confusion as to whether Harnad's claim is just an empty
tautology or a significant empirical claim. To wit: it's clear that we can
reliably recognize chairs from sensory input, and we don't do this by magic.
Hence, we can perhaps take it as trivially true that there are some
"features" of the input that are being detected. If we are taking this line
however, we have remember that it doesn't really say *anything* about the
operation of the mechanism -- it's just a fancy way of saying we can
recognize chairs.
On the other hand, it might be taken as a significant claim about the nature
of the chair-recognition device, viz., that we can understand its workings as
a process of actually parsing the input into a set of features and actually
comparing these against what is essentially some logical formula in
featurese. This *is* an empirical claim, and it is certainly dubitable:
there could be pattern recognition devices (holograms are one speculative
suggestion) which cannot be interestingly broken down into feature-detecting
parts.
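Stated as a program, the empirical reading says the recognizer is
organized like the following sketch; every predicate and threshold here
is an invented placeholder, and the point is only the shape of the claim.

    # A toy "percept": measured properties of the thing in view.
    def has_seat(x):          return x["flat_surface_cm2"] > 900
    def has_back(x):          return x["vertical_surface"]
    def sittable_height(x):   return 35 < x["seat_height_cm"] < 55

    def is_chair(x):
        # The "logical formula in featurese" that the empirical
        # reading attributes to the device:
        return has_seat(x) and has_back(x) and sittable_height(x)

    kitchen_chair = {"flat_surface_cm2": 1600, "vertical_surface": True,
                     "seat_height_cm": 45}
    print(is_chair(kitchen_chair))   # True

The dubitable part is whether any such decomposition exists in the device
at all: a holographic matcher of the kind sketched earlier can compute
the same input-output behavior with no component answering to has_seat.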
Anders Weinstein
BBN Labs
------------------------------
Date: 1 Jul 87 22:33:50 GMT
From: teknowledge-vaxc!dgordon@unix.sri.com (Dan Gordon)
Subject: Re: The symbol grounding problem: Against Rosch & Wittgenstein
In article <949@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>
>dgordon@teknowledge-vaxc.ARPA (Dan Gordon)
>of Teknowledge, Inc., Palo Alto CA writes:
>
>> There is no reliable, consensual all-or-none categorization performance
>> without a set of underlying features? That sounds like a restatement of
>> the categorization theorist's credo rather than a thing that is so.
>
>If not, what is the objective basis for the performance? And how would
>you get a device to do it given the same inputs?
Not a riposte, but some observations:
1) finding an objective basis for a performance and getting a device to
do it given the same inputs are two different things. We may be able
to find an objective basis for a performance but be unable (for merely
contingent reasons, like engineering problems, or for more fundamental
reasons) to get a device to exhibit the same performance. And, I suppose,
the converse is true: we may be able to get a device to mimic a
performance without understanding its objective basis (chess programs
seem to me to fall into this class).
2) There may in fact be categorization performances that a) do not use
a set of underlying features; b) have an objective basis which is not
feature-driven; and c) can only be simulated (in the strong sense) by
a device which likewise does not use features. This is one of the
central prongs of Wittgenstein's attack on the positivist approach to
language, and although I am not completely convinced by his criticisms,
I haven't run across any very convincing rejoinder.
Maybe more later, Dan Gordon
------------------------------
Date: 1 Jul 87 14:02:28 GMT
From: harwood@cvl.umd.edu (David Harwood)
Subject: Re: The symbol grounding problem - please start your own newsgroup
In article <950@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
[...replying to M.B. about something...]
>................................................ I do not see this
>intimate interrelationship -- between names and, on the one hand, the
>nonsymbolic representations that pick out the objects they refer to
>and, on the other hand, the higher-level symbolic descriptions into
>which they enter -- as being perspicuously described as a link between
>a pair of autonomous nonsymbolic and symbolic modules. The relationship is
>bottom-up and hybrid through and through, with the symbolic component
>derivative from, inextricably interdigitated with, and parasitic on the
>nonsymbolic.
Uh - let me get this straight. This is the conclusion of your
most recent posting on "the symbol grounding problem." In the first
poorly written sentence you criticize your bogeyman, saying he ain't
"perspicuous." Small wonder - you invented him for the purposes of
obscurantist controversy; no one else even believes in him, so far as I
can tell.
But wait - there is more. You say your bogeyman - he ain't
"perspicuous" (as if you weren't responsible for this). Then you go on
with what you consider, apparently, to be a "perspicuous" account of
the meaning of "names." So far as I can tell, this sentence is the most
full and "perspicuous" accounting yet, confirmed by everything you've
written on this subject (which I shall not need to quote, since it is fresh
on everyone's mind). You say, with inestimable "perspicuity," concerning
your own superior speculations about the meaning of names (which I quote,
since we have all day, day after day, for this): "The relationship is
bottom-up and hybrid through and through, with the symbolic component
derivative from, inextricably interdigitated with, and parasitic on the
nonsymbolic." A mouthful all right. Interdigitated with something, all right.
Could you please consider creating your own newsgroup, Mr. Harnad?
I don't know what your purpose is, except self-aggrandizement, but
I'm fairly sure it has nothing to do with computer science. There's
no discussion of algorithms or computing systems, nor even any logical
formality, in all this bullshit. And if we have to hear about the meaning of
names - why couldn't we hear from Saul Kripke instead of you? Then we
might learn something.
Why not create your own soapbox? I will never listen or bother.
I wouldn't even bother to read BBS, which you apparently edit (with
considerable help, no doubt), except that there you don't write all the
articles, as you do here.
-David Harwood
------------------------------
Date: Wed, 1 Jul 1987 13:28 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest V5 #163
Too much, already. This "symbol grounding" has gotten out of hand.
This is a network, not a private journal.
------------------------------
Date: Wed 1 Jul 87 22:02:55-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Reply-to: AIList-Request@STRIPE.SRI.COM
Subject: Policy on Quoting
Perhaps the discussion of philosophy/theory/perception would be
more palatable -- or even concise and understandable -- if we
refrained from quoting each other in the style of the old
Phil-Sci list. Quotations are often necessary, of course, but
the average reader can follow a discussion without each participant
echoing his predecessors. Those few who are really interested
in exact wordings can save the relevant back issues; I'll even
send copies on request.
On the whole, I think that this interchange has been conducted
admirably. My hope in making this suggestion is that participants
will spend less bandwidth attacking each other's semantics and more of
it constructing and presenting their own coherent positions. (It's OK
if we don't completely agree on terms such as "analog", as long as
each contributor builds a consistent world view that includes his own
Humpty-Dumpty variants.)
-- Ken
------------------------------
End of AIList Digest
********************