AIList Digest           Saturday, 13 Aug 1988      Volume 8 : Issue 47 

Philosophy:

AI and the future of the society
Dual encoding, propositional memory and...
Self-reference in Natural Language
point of metalanguage in language
The Godless assumption

----------------------------------------------------------------------

Date: 5 Aug 88 17:24:30 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Reply-to: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: AI and the future of the society


Antti Ylikoski (YLIKOSKI@FINFUN.BITNET) writes:
>I once heard an (excellent) talk by a person working with Symbolics.
>(His name is Jim Spoerl.)
>
>One line by him especially remained in my mind:
>
>"What we can do, and animals cannot, is to process symbols.
>(Efficiently.)"
>
>In the human brain, there is a very complicated real-time symbol
>processing activity going on, and the science of Artificial
>Intelligence is in the process of getting to know and to model this
>activity.
>
>A very typical example of the human real-time symbol processing is
>what happens when a person drives a car. Sensory input is analyzed
>and symbols are formed of it: a traffic sign; a car driving in the
>same direction and passing; the speed being 50 mph. There is some
>theory building going on: that black car is in the fast lane and
>drives, I guess, some 10 mph faster than me, therefore I think it's
>going to pass me after about half a minute. To a certain extent, the
>driver's behaviour is rule-based: there is for example a rule saying
>that whenever you see a red traffic light in front of you you have to
>stop the car. (I remember someone said in AIList some time ago that
>rule-based systems are "synthetic", not similar to human information
>processing. I disagree.)
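
For concreteness, the red-light rule above might be sketched as a simple
production rule. The following Python fragment is purely illustrative; the
percept and action names are invented for the example.

    # Hypothetical production-rule sketch of the driving example above.
    def driving_rules(percepts):
        actions = []
        # Rule: whenever there is a red traffic light ahead, stop.
        if percepts.get("traffic_light_ahead") == "red":
            actions.append("stop")
        # "Theory building": a faster car behind in the fast lane
        # will pass after roughly gap / relative-speed seconds.
        delta_mph = percepts.get("fast_lane_speed_delta_mph", 0)
        gap_ft = percepts.get("fast_lane_gap_ft", 0)
        if delta_mph > 0:
            fps = delta_mph * 5280 / 3600      # mph -> feet per second
            actions.append(("expect_pass_in_s", gap_ft / fps))
        return actions

    # driving_rules({"fast_lane_speed_delta_mph": 10,
    #                "fast_lane_gap_ft": 440})
    # -> [("expect_pass_in_s", 30.0)], the "half a minute" above.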

As someone who works on automatic driving and robot navigation,
I have to question this. One notable fact is that animals are quite
good at running around without bumping into things. Horses are capable
of running with the herd over rough terrain within hours of birth.
("Horses of the Camargue" has some beautiful pictures of this.) This
leads one to suspect that the primary mechanisms are not based on
symbols or rules. Definitely, learning is not required. Horses are
born with the systems for walking, obstacle avoidance, running, standing up,
motion vision, foot placement, and small-obstacle jumping fully functional.

More likely, the basics of navigation are based on geometric
processing, or what some people like to call "spatial reasoning".
See Witkin and Kass's work at Schlumberger for some
idea of what this means. Oussama Khatib's approach to path planning
(1979) is also very relevant. Geometry has the advantage of being
compatible with the real world without abstraction. Recognize that
abstraction is not free. In real-world situations, as faced by robots,
the processing necessary to put the sensory data into a form where rule-based
approaches can even begin to operate is formidable, and in most non-trivial
cases is beyond the state of the art.
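
For a concrete flavor of what geometric processing can mean, here is a toy
potential-field step in the spirit of Khatib's approach: the goal attracts,
nearby obstacles repel, and the robot simply follows the resulting force.
The names and constants below are illustrative, not Khatib's actual
formulation.

    import math

    # One step of a toy potential-field planner (illustrative only).
    def field_step(pos, goal, obstacles,
                   k_att=1.0, k_rep=100.0, influence=5.0, step=0.1):
        # Attractive force pulls straight toward the goal.
        fx = k_att * (goal[0] - pos[0])
        fy = k_att * (goal[1] - pos[1])
        # Obstacles within their influence radius push away, harder
        # as the robot gets closer.
        for ox, oy in obstacles:
            dx, dy = pos[0] - ox, pos[1] - oy
            d = math.hypot(dx, dy)
            if 0 < d < influence:
                mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
                fx += mag * dx / d
                fy += mag * dy / d
        n = math.hypot(fx, fy) or 1.0
        return (pos[0] + step * fx / n, pos[1] + step * fy / n)

No rules fire and no symbols are formed; the geometry of the scene does all
the work. (The well-known weakness, local minima between obstacles, is part
of why this is a sketch and not a solution.)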

I would encourage people moving into the AI field to work in the
vision, spatial, and geometric domains. There are many problems that
need to be solved, and enough computational power is becoming available
to address them. Much of the impetus for the past concentration on highly
abstract domains came from the need to find problems that could be
addressed with modest computational resources. This is much less of a
problem today. We are beginning to have adequate tools.

Personally, I suspect that horse-level performance in navigation
and squirrel-level performance in manipulation can be achieved without
any component of the system using mathematical logic.

John Nagle

------------------------------

Date: 5 Aug 88 10:52 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Dual encoding, propositional memory and...


> (yes, I encode all my knowledge of a scene into
>little FOPC like tuples, honest) ......
>... thinking about bit level encoding
> protocols, ....

No: that's not the claim. You are here talking about the implementation level
of encoding; we are talking about the semantic level. You might encode your
epistemologically adequate (see McCarthy and Hayes 1969) representation in all
sorts of ways, perhaps as states of a connectionist network (although I
haven't yet seen a way in which it could really be done), probably not as lots
of little n-tuples (very inefficient). The point at issue was whether the
knowledge is encoded or not, and whether, if it is, we can make much progress
without thinking about how it does its representing.

>The dual coding theory, which normally distinguishes
>between iconic and semantic memory, has caused
>endless debate

Yes, but much of this debate has been between psychologists, and so has little
relevance to the issues we are discussing here. These trails aren't in
different directions from McCarthy's; they are in a different landscape. I've
had interesting arguments with some of them about such terms as `iconic
memory'. Are there iconic and propositional representations in the head? Of
course, says the psychologist: visualising produces different behaviour than
remembering, and that's what `different' MEANS. That's not what the AI
modeller means by `different', though. If one takes the iconic/semantic
distinction to refer to different ways in which information can be encoded,
then it isn't at all obvious that different behaviour means different
representations (though it certainly suggests different implementations).

>Pat's argument hinges on the demand that we think
>about something called representation (eh?) and then
>describe the encoding. The minute you are tricked...

Well now, let's be clear. The argument goes like this. People know things -
facts, let's say, but use a different word if you like - and their behavior is
influenced in important ways by the things they know and what they are able to
conclude from them. It seems reasonable to conclude that these facts that they
know are somehow encoded in their heads, i.e. that a change of knowledge-state
is a change of physical state. That's all the trickery involved in talking
about `representation', or being concerned with how knowledge is encoded. All
the rest is just science: guesses about how this encoding is done, observations
about good and bad ways to describe it, and so on.
Do you disagree with any of this, Gilbert? If so, what alternative account
would you suggest for describing, for example, whatever it is that we are doing
sending these messages to one another?

>PDP networks will work of course,...

Well, will they? Let's see them do some cognitive task of the sort McCarthy
has been aiming at. But it must be possible, I agree, to implement it all this
way, since we are ourselves walking, talking networks.

> ....but you can't of
>course IMAGINE the contents of the network, and thus
>they cannot be a representation

Sure they can be a(n implementation of a) representation. And sure we can
imagine the contents of the network: people do it all the time. When someone
shows me a network doing a bit of semantic memorising, you can bet they are
explaining to me how to imagine what's in the network.

When people who attack AI or the Computational Paradigm simultaneously tell me
that PDP networks are the answer, I know they haven't understood the point of
the representational idea. Go back and (re)read that old 1969 paper CAREFULLY,
Gilbert, before you find yourself writing a book like Dreyfus's.

Pat Hayes

------------------------------

Date: Tue, 9 Aug 88 10:55 CDT
From: <CMENZEL%TAMLSR.BITNET@MITVMA.MIT.EDU>
Subject: Self-reference in Natural Language

In AIList Digest vol. 8 num. 29 I claimed that Bruce Nevin's
(bnevin@cch.bbn.com) analysis of self-reference in natural language
entailed there was something semantically improper about sentences like
"This sentence is in English" and "This sentence is grammatical." My claim
was that they are wholly unproblematic, and hence that there was something
wrong with the analysis. In his lengthy and very interesting reply, Nevin
demurs:

> They are not "wholly unproblematical," they engender a double-take kind of
> reaction. Of course people can cope with paradox, I am merely accounting
> for the source of the paradox.

First, I'm dubious about whether they do engender the sort of double-take
Nevin refers to here. But even so, it's not at all clear to me what that's
supposed to signal. People sometimes have a similar reaction to sentences
containing several negatives, but for all that we wouldn't want to cast
aspersions on them. Second, Nevin seems to be implying that people have to
"cope with paradox" when they are confronted with the self-referential
sentences above. But again, even if we grant that they are problematic,
there's surely no paradox lurking anywhere nearby.

> If I say it in Modern Greek, where the noun followed by deictic can
> come last, the normal reading is still for "this" to refer to a nearby
> prior sentence in the discourse. The paradoxical reading has to be
> forced by isolating the sentence, usually in a discourse context like
> "The sentence /psema ine i frasi afti/, translated literally 'Falsehood
> it is the sentence this', is paradoxical because if I suppose that it
> is false, then it is truthful, and if I suppose it is truthful, then it
> is false."
> These are metalanguage statements about the sentence. The
> crux of the matter (which word order in English only makes easier to
> see), is that a sentence (or any utterance) cannot be a metalanguage
> statement about itself--cannot be at the same time a sentence in the
> object language (English or Greek) and in the metalanguage (the
> metalanguage that is a sublanguage of English or of Greek).

The assumption that there is a metalanguage/object language distinction in
ordinary language is carrying an awful lot of weight here. There's no
doubt we have to make and heed such a distinction when we're doing formal
semantics--where we've actually got a rigorously defined formal language,
and we're describing how it's to be interpreted in a formal model--but it's
not clear there is such a distinction to be made in natural language. It
seems to me far more natural just to say that English (for example), in
addition to containing terms that refer to planets, numbers, and the like,
also has terms that refer to elements of the language itself, and in the
limiting case, to expressions that contain those very terms. Granted, this
is precisely the capacity that leads us to paradox; but I would like to see
some evidence that this account is wrong other than the fact that it gets
us into trouble. That is, is there any linguistic intuition that Nevin can
cite to justify his account in addition to his claim that he's got a
solution to the paradoxes? Consider an analogy in set theory. Russell's
paradox initially engendered all sorts of confusion and consternation--the
intuitive assumption that for every property there is a corresponding set
of things that have the property leads to contradiction. Eventually, there
came something of an explanation in the form of the so-called {\it
iterative} conception of set--sets are collections that are "built up" from
some initial collection of atomic elements by certain operations. Some
properties pick out collections that could never be the result of any such
building-up process, and hence are not SETS. Not exactly airtight, but
there is an appealing intuition there. Is there anything analogous for
Nevin's account? I'm not being rhetorical here; there may well be, I just
haven't been able to think of any.
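
For reference, the contradiction can be written out in a line; this is just
the standard derivation, not anything specific to either account:

   Let $R = \{x : x \notin x\}$. Then for all $y$, $y \in R
   \leftrightarrow y \notin y$; instantiating $y$ with $R$ gives
   $R \in R \leftrightarrow R \notin R$, a contradiction. The iterative
   conception blocks this by denying that $\{x : x \notin x\}$ picks out
   any stage-by-stage constructible collection, hence any SET.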

Chris Menzel
cmenzel@tamlsr.bitnet
chris.menzel@lsr.tamu.edu

ps: In my first reply to Nevin I included two books for "Recommended
reading." This I fear came off looking as if I were patronizing him, which
was most definitely not my intent, and I apologize for the misimpression.
I have learned a great deal from both books (Martin's {\it Recent Essays on
Truth and the Liar Paradox} and Barwise and Etchemendy's {\it The Liar: An
Essay on Truth and Circularity}), and my recommendations were sincere, and
intended as a genuine contribution for the benefit of the readers of
AIList Digest.

------------------------------

Date: Tue, 9 Aug 88 16:07:24 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: point of metalanguage in language

We are talking about self-referential sentences of the type:

This sentence is { in English | grammatical | long | . . . }

I agree that these sentences are paradoxical only if the adjective is
something like `false, a lie'. You get paradox only when successive
readings contradict each other and must be reconciled because it is
after all but one sentence. The contradiction makes it impossible to
ignore the semantic problem of not being able to resolve referentials.

In the paradoxical case, the infinite regress of reading-tokens cannot
be ignored because of the contradiction. But even without the
contradiction between successive readings, you always get an effect that
you might call `referential reverberation', since to evaluate the truth
or appropriateness of a sentence containing a deictic you naturally
examine the thing to which it refers, which in these cases happens to be
the sentence itself. The rereading (checking out the referent) refers
again to itself. Most language users don't continue doing this for very
many iterations. Stopping runaway loops presumably has some adaptive
value for intelligent entities! Contrast the following case:

Cesuwi tini:maCQati. The preceding sentence is in Achumawi.

No reverberation, just one glance back. (Uppercase letters are for
glottalized stops.)
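
One way to picture the difference is as a referent-checking loop with a cap
on rereads, the cap playing the role of the adaptive loop-stopper just
mentioned. A purely illustrative Python sketch; the names are invented.

    # Toy model of `referential reverberation' (illustrative only).
    # referent_of maps a sentence to whatever its deictic points at.
    def resolve(sentence, referent_of, max_rereads=3):
        rereads = 0
        target = referent_of(sentence)
        # Self-reference: checking the referent rereads the sentence
        # itself, so without the cap this loop would never terminate.
        while target is sentence and rereads < max_rereads:
            rereads += 1
            target = referent_of(target)
        return target, rereads

For the Achumawi pair the loop exits after one glance back; for `This
sentence is in Greek' the cap is what stops the regress.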

I think self-referential sentences (sentences that refer to themselves
as a whole, not just to words or constructions in themselves) are
perhaps initially amusing and then later annoying to people because they
are rather a perversion of the machinery of deixis. Such sentences
occur only in the most artificial circumstances. Virtually anything
they can say about themselves is self-evident and therefore redundant.
Except for illustrating some oddities about language, they are
pointless, whether true or false. I mean, who cares that `Afti i frasi
ine st anglika' or `This sentence is in Greek' is false? To say such a
self-evident falsehood is foolishness without even the point of humor in
any circumstances that I can think of. This surely contributes to the
feeling of anomaly and is another reason why they are not `wholly
unproblematic,' but the real tale is in the process of resolving
referentials, as described previously.

There is a somewhat similar case in which simple falsehood goes usually
unnoticed, an oversight which is certainly not characteristic of
non-self-referential situations: I am thinking here of the familiar
notice on an otherwise blank page that says `this page intentionally
left blank'. This is like

    +-------------------+
    + This box is empty +
    +-------------------+

Is the preceding a sentence in English? It would not go unnoticed. One can
construct cases like

This sentence, about `psemata sta anglika,' is in English.
Afti i frasi, epi `falsehoods in Greek,' sta elinika ine.

but they too would scarcely go unnoticed. Hofstadter plays these
recursion games very nicely.

Multiple negatives do indeed involve similar processes, since negation
is a metalinguistic operator, one of denial. Such cases are problematic
because language users try to resolve them to a simple assertion or
denial. (Easier to do in languages or dialects admitting multiple
negation for intensive expression, as in West African languages and
Black English.) Things like `I don't disagree' and `not unlike the
denial of' are considered stylistically bad but are not semantically
flawed in the way that self-referential sentences are, because the
multiple rereadings have a limit rather than implying infinite regress.

One might argue that inability to resolve referentials is a matter of
performance rather than competence. That limb is open if anyone wants
to go out on it.

The point about natural language containing its own metalanguage is not
that it can be abused in degenerate cases, but rather that we need no
other a prioristic metalanguage for grammar and semantics. Indeed,
language users have recourse to no such external metalanguage for
learning and using their native language. If a description of a
language cannot be stated in the language itself (that is, in its
intrinsic, built-in metalanguage), then it is incorrect. It is bound to
introduce redundancy into the description over and above the redundancy
used by the language for informational purposes, and this has the status
of noise obscuring any account of the information in texts. Formal
notations may be convenient for computational and other purposes, but
they must be straightforward graphic variants of words and constructions
in the metalanguage that is a part of the language that they describe.
The question of the status of the metalanguage thus points to a
criterion for comparison of different descriptions as to their adequacy
for representing language and what it does. See e.g. Z. S. Harris
_Language and Information_. My review of this book should appear in
_Computational Linguistics_ 14.4, scheduled to be mailed in January
1989.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

PS: I have not read the books you cite but will look for them.

------------------------------

Date: Thu, 11 Aug 88 11:58:35 GMT
From: IT21%SYSB.SALFORD.AC.UK@MITVMA.MIT.EDU
Subject: The Godless assumption

In going through my backlog of AI Mail I found two rather careless
statements.

In article <445@proxftl.UUCP>, bill@proxftl.UUCP (T. William Wells)
writes:
> that,.... This means
> that I can test the validity of my definition of free will by
> normal scientific means and thus takes the problem of free will
> out of the religious and into the practical.

Why should 'religious' not also be 'practical'? Many people - especially
ordinary people, not AI researchers - would claim their 'religion' is
immensely 'practical'. I suggest the two things are not opposed. It may
be that many correspondents *assume* that religion is a total falsity or
irrelevance, but this assumption has not been proved correct, and many
people find strong empirical evidence otherwise.


Date: Sun, 03 Jul 88 03:47:51 EST
Jeff Coggshall <KLEMOSG%YALEVM.BITNET@MITVMA.MIT.EDU> writes
>Subject: Metaepistemology & Phil. of Science
> Once we assume that there is no priveledged source knowledge about
>the way things really are, then, it seems, we are left with either
>saying that "anything goes" ...

That there is 'no privileged source of knowledge' is a mere assumption
that has very little evidence to support it. And there are many who do
not make that assumption, believing in religious revelation. Many would
claim the Bible, for instance, is God's revelation to humankind. Some
other religions would make equivalent claims.
Therefore we cannot *assume* such things without the danger of writing
off a huge section of reality which our theories should fit.

Since the non-existence/irrelevance of God has not yet been proved, and many
claim to have strong empirical evidence of God's existence and
effectiveness in their lives, may I ask that correspondents think more
carefully before making statements like the two above.

Thank you,

Andrew Basden.

------------------------------

End of AIList Digest
********************
