AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 150 

Today's Topics:
Binding - comp.theory Newsgroup,
Theory - Information Flow & The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 11 Jun 87 21:17:29 GMT
From: ramesh@cvl.umd.edu (Ramesh Sitaraman)
Subject: New newsgroup: comp.theory

To those of you who haven't already noticed ...

A new newsgroup, "comp.theory", has been created. This group presumably
deals with all aspects of theoretical computer science, including
complexity theory, algorithm analysis, logic and the theory of
computation, denotational semantics, computational geometry, and so on.


Make merry,

Ramesh

------------------------------

Date: Wed 10 Jun 87 12:31:39-EDT
From: Albert Boulanger <ABOULANGER@G.BBN.COM>
Subject: Re: Information flow discussions


Anthony Pelletier writes:

> P.S. I think a lot about information flow problems and would enjoy
> discussions on that... if anyone wants to chat.

For a real "juicy" discussion of information flow in non-linear
systems see:

"Strange Attractors, Chaotic Behavior, and Information Flow"
Robert Shaw, Z. Naturforsch. 36a, 80-112 (1981)

This paper discusses the information-flow characteristics of non-linear
systems in order to gain insight into how such systems self-organize.
(This self-organization aspect of non-linear dynamical systems is also
an aspect of neural networks. See, for example, Kohonen's work on
self-organizing feature maps in "Self-Organization and Associative
Memory", Springer-Verlag, 1984. These feature maps are a form of
unsupervised learning.)
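
For readers who haven't seen Kohonen's algorithm, here is a minimal
sketch of a self-organizing feature map in Python. It is only an
illustration: the grid size, Gaussian neighborhood, and exponential
decay schedules below are arbitrary choices of mine, not taken from
the book.

# Minimal sketch of a Kohonen self-organizing feature map (SOM).
# Each input pulls the best-matching unit and its grid neighbors
# toward itself, so nearby units come to respond to nearby inputs.
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 10, 10, 2              # map size and input dimensionality
weights = rng.random((grid_h, grid_w, dim))  # one weight vector per map unit

# Grid coordinates of every unit, used for neighborhood distances.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def train(data, epochs=20, lr0=0.5, radius0=5.0):
    """Unsupervised training loop over the data set."""
    global weights
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Exponentially decay the learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * np.exp(-3.0 * frac)
            radius = radius0 * np.exp(-3.0 * frac)

            # Best-matching unit: the unit whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)

            # Gaussian neighborhood on the grid, centered on that unit.
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2.0 * radius ** 2))

            # Move every unit toward x, weighted by its neighborhood value.
            weights += lr * h[..., None] * (x - weights)
            step += 1

# Example: learn a topographic map of points in the unit square.
data = rng.random((500, 2))
train(data)

After training, units that are adjacent on the grid end up with similar
weight vectors -- the topographic organization Kohonen describes --
without any supervision signal.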

Albert Boulanger
BBN Labs

------------------------------

Date: 15 Jun 87 02:37:00 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


In two consecutive postings, marty1@houdi.UUCP (M.BRILLIANT)
of AT&T Bell Laboratories, Holmdel, wrote:

> the flow of visual information through the layers of the retina,
> and through the layers of the visual cortex, with motion detection,
> edge detection, orientation detection, etc., all going on in specific
> neurons... Maybe a neurobiologist can give a good account of what
> all that means, so we can guess whether computer image
> processing could emulate it.

As I indicated the last time, neurobiologists don't *know* what all
those findings mean. It is not known how features are detected and by
what. The idea that single cells are doing the detecting is just a
theory fragment, and one that has currently fallen on hard times. Rivals
include distributed networks (of which the cell is just a component),
or spatial frequency detectors, or coding at some entirely different
level, such as continuous postsynaptic potentials, local circuits,
architectonic columns or neurochemistry. Some even think that the
multiple analog retinas at various levels of the visual system (12 on
each side, at last count) may have something to do with feature
extraction. One cannot just take current neurophysiological data and
replace the nonexistent theory by preconceptions from machine vision
-- especially not by way of justifying the machine-theoretic concepts.

>> >[SH:] my theory never laid claim to complete invertibility
>> >throughout.
>
> First "analog" doesn't mean analog, and now "invertibility"
> doesn't mean complete invertibility. These arguments are
> getting too slippery for me... If non-invertibility is essential
> to the way we process information, you can't say non-invertibility
> would prevent a machine from emulating us.

I have no idea what proposition you think you were debating here. I
had pointed out a problem with the top-down symbolic approach to
mind-modeling -- the symbol grounding problem -- which suggested that
symbolic representations would have to be grounded in nonsymbolic
representations. I had also sketched a model for categorization that
attempted to ground symbolic representations in two nonsymbolic kinds
of representations -- iconic (analog) representations and categorical
(feature-filtered) representations. I also proposed a criterion for
analog transformations -- invertibility. I never said that categorical
representations were invertible or that iconic representations were
the only nonsymbolic representations you needed to ground symbols. Indeed,
most of the CP book under discussion concerns categorical representations.
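
To make the invertibility criterion concrete, here is a small sketch
(my illustration, not anything from the CP book; the matrix and the
threshold are arbitrary). The "iconic" transform preserves enough
structure that its input can be recovered exactly; the "categorical"
transform filters the input down to a category-deciding feature, so
many different inputs map to the same output and nothing can be
inverted back.

# Invertibility as a criterion for an "analog" (iconic) transform.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4)) + 4 * np.eye(4)   # well-conditioned, hence invertible

def iconic(x):
    """Analog/iconic transform: structure-preserving and invertible."""
    return A @ x

def iconic_inverse(y):
    """Recover the original input exactly."""
    return np.linalg.solve(A, y)

def categorical(x, threshold=0.5):
    """Categorical transform: keep only the feature that decides
    membership, discarding all within-category variation."""
    return int(np.mean(x) > threshold)

x = rng.random(4)
assert np.allclose(iconic_inverse(iconic(x)), x)  # invertible: x is recoverable
label = categorical(x)                            # non-invertible: x is lost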

> All I'm saying is that Harnad has come nowhere near proving his
> assertions, or even making clear what his assertions are...
> Harnad's terminology has proved unreliable: analog doesn't mean
> analog, invertible doesn't mean invertible, and so on. Maybe
> top-down doesn't mean top-down either...
> Anybody can do hand-waving. To be convincing, abstract
> reasoning must be rigidly self-consistent. Harnad's is not.
> I haven't made any assertions as to what is possible.

Invertibility is my candidate criterion for an analog transform. Invertible
means invertible, top-down means top-down. Where further clarification is
needed, all one need do is ask.

Now here is M. B. Brilliant's "Recipe for a symbol-grounder" (not to be
confused with an assertion as to what is possible):

> Suppose we create a visual transducer... with hard-wired
> capability to detect "objects"... Next let's create a symbol bank...
> Next let's connect the two... I'm over my head here, but I don't
> think I'm asking for anything we think is impossible. Basically,
> I'm looking for an expert system that learns... the essential step
> is to make the machine communicate with us both visually and verbally,
> so it can translate the character strings it made up into English, so
> we can understand it and it can understand us. For the survival
> motivation, the machine needs a full set of receptors and
> effectors, and an environment in which it can either survive or
> perish, and if we built it right it will learn English for its
> own reasons. Now, Harnad, Weinstein, anyone: do you think this
> could work, or do you think it could not work?

Sounds like a conjecture about a system that would pass the TTT (Total
Turing Test). Unfortunately, the rest seems far too vague and
hypothetical to respond to.

If you want me to pay attention to further postings of yours, stay
temperate and respectful as I endeavor to do. Dismissive rhetoric will not
convince anyone, and will not elicit substantive discussion.

--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 15 Jun 87 05:21:36 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


berleant@ut-sally.UUCP (Dan Berleant) of U. Texas CS Dept., Austin, Texas
has posted this welcome reminder:

> the retina cannot be viewed as a module, only loosely
> coupled to the brain. The optic nerve, which does the coupling, has a
> high bandwidth and thus carries much information simultaneously along
> many fibers. In fact, the optic nerve carries a topographic
> representation of the retina. To the degree that a topographic
> representation is an iconic representation, the brain thus receives an
> iconic representation of the visual field.

> Furthermore, even central processing of visual information is
> characterized by topographic representations. This suggests that iconic
> representations are important to the later stages of perceptual
> processing. Indeed, all of the sensory systems seem to rely on
> topographic representations (particularly touch and hearing as well as
> vision).

As I mentioned in my last posting, at last count there were 12 pairs
of successively higher analog retinas in the visual system. No one yet
knows what function they perform, but they certainly suggest that it
is premature to dismiss the importance of analog representations in at
least one well-optimized system...

> Yes, the Turing test is by definition subjective, and also subject to
> variable results from hour to hour even from the same judge.
> But I think I disagree that intrinsic meaningfulness cannot be
> objectively verified. What about the model theory of logic?

In earlier postings I distinguished between two components of the
Turing Test. One is the formal, objective one: Getting a system to generate
all of our behavioral capacities. The second is the informal,
intuitive (and hence subjective) one: Can a person tell such a device
apart from a person? This version must be open-ended, and is no better
or worse than -- in fact, I argue it is identical to -- the
real-life turing-testing we do of one another in contending with the
"other minds" problem.

The subjective verification of intrinsic meaning, however, is not done
by means of the informal turing test. It is done from the first-person
point of view. Each of us knows that his symbols (his linguistic ones,
at any rate) are grounded, and refer to objects, rather than being
meaningless syntactic objects manipulated on the basis of their shapes.

I am not a model theorist, so the following reply may be inadequate, but it
seems to me that the semantic model for an uninterpreted formal system
in formal model-theoretic semantics is always yet another formal
object, only its symbols are of a different type from the symbols of the
system that is being interpreted. That seems true of *formal* models.
Of course, there are informal models, in which the intended interpretation
of a formal system corresponds to conceptual or even physical objects. We
can say that the intended interpretation of the primitive symbol tokens
and the axioms of formal number theory is "numbers," by which we mean
either our intuitive concept of numbers or whatever invariant physical
property quantities of objects share. But such informal interpretations
are not what formal model theory trades in. As far as I can tell,
formal models are not intrinsically grounded, but depend on our
concepts and our linking them to real objects. And of course the
intrinsic grounding of our concepts and our references to objects is
what we are attempting to capture in confronting the symbol grounding
problem.
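
As a toy illustration of the point (my own sketch, and not how a model
theorist would put it): an interpretation maps the symbols of a formal
language onto another formal object. Below, Python integers stand in
for the natural numbers, and the interpretation sends the constant
symbol '0' to the number 0 and the function symbol 's' to the successor
function.

# Terms of the object language are nested tuples, e.g. ('s', ('s', '0'))
# for s(s(0)). The interpretation maps each symbol to its denotation.
interpretation = {
    '0': 0,                  # the constant symbol '0' denotes the number 0
    's': lambda n: n + 1,    # the function symbol 's' denotes successor
}

def evaluate(term):
    """Map a term of the object language to its denotation in the model."""
    if term == '0':
        return interpretation['0']
    op, arg = term
    return interpretation[op](evaluate(arg))

# s(s(s(0))) denotes 3 -- but the model here is still just another
# formal object inside the machine; the grounding of *our* concept of
# three is exactly what remains to be explained.
assert evaluate(('s', ('s', ('s', '0')))) == 3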

I hope model theorists will correct me if I'm wrong. But even if the
model-theoretic interpretation of some formal symbol systems can truly
be regarded as the "objects" to which they refer, it is not clear that
this can be generalized to natural language or to the "language of
thought," which must, after all, have Total-Turing-Test scope, rather
than the scope of the circumscribed artificial languages of logic and
mathematics. Is there any indication that all that can be formalized
model-theoretically?
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

End of AIList Digest
********************
