AIList Digest            Sunday, 16 Dec 1984      Volume 2 : Issue 178 

Today's Topics:
Linguistics - Nonverbal Semantics
----------------------------------------------------------------------

Date: Fri, 14 Dec 84 13:13:04 EST
From: Bruce Nevin <bnevin@BBNCCH.ARPA>
Subject: Nonverbal Semantics [Long Message]


>It's . . . convenient to talk about natural language as if
>it's something "on its own". However, I view this attitude
>as scientifically unhealthy, since it leads to an
>overemphasis on linguistic structure. Surely the
>interesting questions about NL concern those cognitive
>processes involved in getting from NL to thoughts in memory
>and back out again to language. These processes involve
>forming models of what the speaker/listener knows, and
>applying world knowledge and context. NL structure plays
>only a small part in these overall processes, since the main
>ones involve knowledge application, memory interactions,
>memory search, inference, etc.

Dyer V2 #160

>Bravo, Dyer! As you suggest, there is indeed much to learn
>from the study of natural language -- but not about "natural
>language itself"; we can learn what kinds of manipulations
>and processes occur in the under-mind with enough frequency
>and significance that it turns out to be useful to signify
>them with surface language features.

>. . . All that is very fine. We should indeed study
>languages. But to "define" them is wrong. You define the
>things YOU invent; you study the things that already exist.
>. . . But when one confuses the two situations, as in the
>subjects of generative linguistics or linguistic competence
>-- ah, a mind is a terrible thing to waste, as today's
>natural language puts it.

Minsky V2 #162


I suspect that the antipathy to natural-language parsers, grammars, and
theories that we often encounter in AI literature reflects a healthy
revulsion from the excesses of generative linguistics. In all of its
many schismatic forms, generative grammar posits, as the secret inner
mechanism of language, one of various language-like systems that share
historical roots with programming languages, and uses natural-language
data only in a fragmentary and anecdotal way to advance or refute the
latest version. These systems can be quite hairy, but I am convinced
that the hair is mostly inside the heads of the theorists.

Any natural phenomenon, or any artifact of human culture, is a
legitimate object of study. Natural language is both an artifact of
human culture, and a natural phenomenon. There are some who are
studying language, as opposed to the grammatical machinery of
language-like systems.

I recently reviewed a book by the linguist from whom Noam Chomsky
learned about linguistic transformations (among other things). It will
appear in AJCL vol. 10 nos. 3 and 4 (a double issue). The following
excerpt gives an outline of the model of language he has developed:

I refer to the Harrisian model of language as `constructive
grammar' and to the Harrisian paradigm for linguistics as
`constructive linguistics'. A constructive grammar has
at least the following six characteristics:

1 The semantic primes are words in the language, a base
vocabulary that is a proper subset of the vocabulary of
the language as a whole.

2 Generation of sentences in the base is by word entry,
beginning with entry of (mostly concrete) base nouns.
The only condition for a word to enter is that its
argument requirement must be met by some previously
entering word or words, generally the last entry or
entries, which must not already be in the argument of
some other word. The base vocabulary has thus a few
simple classes of words:

N          base nouns with null argument
On, Onn    operators requiring base nouns as arguments
Oo, Ooo    operators requiring operators as arguments
Ono, Oon   operators requiring combinations of operators and base nouns
[NOTE: these are intended to be O with subscripts]

This does not exhaust the base vocabulary. In addition
to these, almost all of the operators require
morphophonemic insertion of `argument indicators' such
as -ing and that. (These were termed the `trace' of
`incremental transformations' in Harris 1965 and 1968.)

3 The base generates a sublanguage which is
informationally complete while containing no
paraphrases. This is at the expense of redundancy and
other stylistic awkwardness, so that utterances of any
complexity in the base sublanguage are unlikely to be
encountered in ordinary discourse. As in prior reports
of H's work, base sentences are all assertions, other
forms such as questions and imperatives being derived
from underlying performatives I ask, I request, and the
like.

4 A well-defined system of reductions yields the other
sentences of the language as paraphrases of base
sentences. The reductions were called the
`paraphrastic transformations', and `extended
morphophonemics' in earlier reports. They consist of
permutation of words (movement), zeroing, and
morphophonemic changes of phonological shape. Each
reduction leaves a `trace' so that the underlying
redundancies of the base sublanguage are
recoverable. Linearization of the operator-argument
dependencies--in English either `normal' SVO or a
`topicalizing' linear order--is accomplished by the
reduction system, not the base. The reduction system
includes much of what is in the lexicon in generative
grammar (cf. Gross 1979).


5 Metalinguistic information required for many
reductions, such as coreferentiality and lexical
identity, is expressed within the language by conjoined
metalanguage sentences, rather than by a separate
grammatical mechanism such as subscripts.
Similarly, `shared knowledge' contextual and pragmatic
information is expressed by conjoined sentences
(including ordinary dictionary definitions) that are
zeroable because of their redundancy. [Harris's book
Mathematical Structures of Language (Wiley 1968) shows
that the metalanguage of natural language necessarily
is contained within the language.]

6 The set of possible arguments for a given operator (or
vice-versa) is graded as to acceptability. These
gradings correspond with differences of meaning in the
base sublanguage, and thence in the whole language.
They diverge in detail from one sublanguage or
subject-matter domain to another. Equivalently, the
fuzzy set of `normal' cooccurrents for a given word
differs from one such domain to another within the base
sublanguage.

In informal, intuitive terms, a constructive grammar
generates sentences from the bottom up, beginning with word
entry, whereas a generative grammar generates sentences from
the top down, beginning with the abstract symbol S. The
grammatical apparatus of constructive grammar (the rules
together with their requirements and exceptions) is very
simple and parsimonious. H's underlying structures, the
rules for producing derived structures, and the structures
to be assigned to surface sentences are all well defined.
Consequently, H's argumentation about alternative ways of
dealing with problematic examples has a welcome concreteness
and specificity about it.

In particular, one may directly assess the semantic
wellformedness of base constructions and of each intermediate
stage of derivation, as well as the sentences ultimately
derived from them, because they are all sentences. By
contrast, in generative argumentation, definitions of base
structures and derived structures are always subject to
controversy because the chief principle for controlling them
is linguists' judgments of paraphrase relations among
sentences derived from them. Even if one could claim to
assess the semantic wellformedness of abstract underlying
structures, these are typically so ill-defined as to compel us
to rely almost totally on surface forms to choose among
alternative adjustments to the base or to the system of rules
for derivation. And as we all know, a seemingly minor tweak
in the base or derivation rules can and usually does have
major and largely unforeseen consequences for the surface
forms generated by the grammar.
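
As a rough illustration of characteristics 1 and 2 above (and of the
bottom-up character of generation by word entry), the following toy
sketch may help. It is my own simplification, not Harris's notation or
data: the little lexicon, the argument-matching, and the crude SVO
linearization with `that' as an argument indicator are all invented for
the example.

    # A toy, bottom-up generator: words enter one at a time, and an
    # operator may enter only if its argument requirement is met by the
    # most recent entries that are not yet arguments of some other word.
    # Argument classes: "n" = base noun, "o" = operator.

    LEXICON = {
        "John":  (),            # N: null argument
        "Mary":  (),            # N
        "sleep": ("n",),        # On: one base-noun argument
        "see":   ("n", "n"),    # Onn: two base-noun arguments
        "know":  ("n", "o"),    # Ono: a noun and an operator argument
        "probable": ("o",),     # Oo: one operator argument
    }

    class Entry:
        """A word that has entered, with the entries it took as arguments."""
        def __init__(self, word, args=()):
            self.word, self.args = word, args
            self.kind = "n" if not LEXICON[word] else "o"

    def enter(word, available):
        """Enter `word`, consuming the last available entries as its arguments."""
        req = LEXICON[word]
        args = available[-len(req):] if req else []
        if [a.kind for a in args] != list(req):
            raise ValueError(f"{word!r} cannot enter yet")
        del available[len(available) - len(req):]
        entry = Entry(word, tuple(args))
        available.append(entry)
        return entry

    def linearize(entry):
        """A crude SVO linearization, with `that' standing in for the
        morphophonemic argument indicators of characteristic 2."""
        if entry.kind == "n":
            return entry.word
        parts = [linearize(a) if a.kind == "n" else "that " + linearize(a)
                 for a in entry.args]
        if len(parts) == 1:
            return f"{parts[0]} {entry.word}s"          # Mary sleeps
        return f"{parts[0]} {entry.word}s {parts[1]}"   # John knows that ...

    available = []
    for w in ["John", "Mary", "sleep", "know"]:   # bottom-up entry order
        top = enter(w, available)
    print(linearize(top))   # -> John knows that Mary sleeps

The entry order John, Mary, sleep, know yields `John knows that Mary
sleeps'; the reductions and traces of characteristic 4 are not modeled
here.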

This model of language offers an interesting approach to the problem
brought up by Young in V2 #162, 174: how to represent the meaning of
words without (circularly) using words?

Most approaches amount to what I call `translation semantics': having
found a set of language-universal semantic primes, one translates
sentences of a given natural language into those primes and, voila'!,
one has represented the `meaning' of those NL sentences.

Let us ignore the difficulty of finding a set of semantic universals (a bit
of hubris there, what!). The `representation of meaning' is itself a
proposition in a more-or-less artificial language that has its own
presumably very simple syntax (several varieties of logic are promoted
as most suitable) and--yes--its own semantics. Logics boil `meaning'
down to sets of `values' on propositions, such as true/false.
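
A caricature, just to fix the idea (the primes, the example sentence,
and the toy model below are all invented): the output of such a
translation is itself an expression in a little artificial language,
with a syntax that can be checked and a semantics of its own, here
nothing more than truth values assigned to propositions.

    # The supposed language-universal semantic primes.
    PRIMES = {"CAUSE", "TRANSFER", "PAST"}

    # Intended as the `meaning' of "John gave Mary a book".
    REPRESENTATION = ("PAST",
                      ("CAUSE", "john",
                       ("TRANSFER", "book", "john", "mary")))

    def well_formed(expr):
        """The artificial language's own (very simple) syntax."""
        if isinstance(expr, str):
            return True                       # an individual constant
        head, *args = expr
        return head in PRIMES and all(well_formed(a) for a in args)

    # ... and the artificial language's own semantics: truth values
    # assigned to its propositions.
    MODEL = {REPRESENTATION: True}

    print(well_formed(REPRESENTATION), MODEL[REPRESENTATION])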

`But my system', rejoins Young, `uses actual nonverbal modalities, it
has real hooks into the neurological and cognitive processes that human
beings use to understand and manipulate not only language, but all other
experience as well'. That may be. But it leaves open the question to what
degree cognitive processes and even neurological processes are molded by
language and culture. (In Science 224:1325-1326 Nottebohm reports that
the part of the forebrain of adult canaries responsible for singing
becomes twice as large coincident with (a) increased testosterone and
(b) learning of songs. This is the same whether the testosterone
increase is annually in the Spring or experimentally, and the latter
even in females, who consequently learn to sing songs as if they were
males. Vrensen and Cardozo report in Brain Research 218:79-97
experiments indicating that both the size and shape of synapses in the
visual cortices of adult rabbits changed as a result of visual training.
Cotman and Nieto-Sampedro survey research on synapse growth and brain
plasticity in adult animals in Annual Review of Psychology 33:371-401.
Roger Walsh documents other research of this sort in his book Towards an
Ecology of Brain. Conventional wisdom of brain science, that no new
neurons are formed after infancy, is unwise.)

The padres of yore surveyed the primitive languages around their
missions and found so many degenerate forms of Latin. Their grammars
laid these languages on the Procrustean bed of inflections and
declensions in a way that we see today as obviously ethnocentric and
downright silly. We run the same risk today, because like those padres
we cannot easily step out of the cultural/cognitive matrix with which we
are analyzing and describing the world. Ask a fish to describe water:
the result is a valid `insider's view', but of limited use to nonfish.

Mr. Chomsky characterized his mentor in linguistics as an Empiricist and
himself as a Rationalist, and in the Oedipal struggle which ensued
mother Linguistics has got screwed. Given that systems based on
constituent analysis are inherently overstructured, with layers of
pseudo-hierarchy increasingly remote from the relatively concrete words
and morphemes of language, an innate language-learning device is
ineluctable: how else could a child learn all of that complexity in so
short a time on so little and so defective evidence? The child cannot
possibly be an Empiricist; she must be a Rationalist. Given a
biologically innate language-acquisition device, there must be a set of
linguistic universals that all children everywhere come into the world
just knowing, and all languages must be specialized realizations of
those archetypes--phenotypes of that genotype, as it were. (Chomsky did
not set out to `define' natural language but to explain it. It is
principally because his `underlying', `innate' constructs have a
connection to empirical data that is remote at best--rather like the
relation of a programmer's spec to compiled binary--that they appear
to be (are?) definitions.)

But consider a model in which the structure of language is actually
quite simple. Might the characteristics of that model not turn out to
be those of some general-purpose cognitive `module'? I believe Harris's
model, sketched above, presents us this opportunity.

Now about Jerry Fodor's book The Modularity of Mind, which Young mentions.
The following is from the review by Matthei in Language 60.4:979,

F presents a theory of the structure of the mind in which two
kinds of functionally distinguishable faculties exist:
`vertical' faculties (modules) and `horizontal' faculties
(central processes). . . . F identifies the modules with the
`input systems', whose function is to interpret information
about the outside world and to make it available to the central
cognitive processes. They include [five modules for] the
perceptual systems . . . and [one for] language. . . .

The central processes, as horizontal faculties, can be
functionally distinguished from modular processes because their
operations cross content domains. The paradigm example of their
operation is the fixation of belief, as in determining the truth
of a sentence. What one believes depends on an evaluation of
what one has seen, heard, etc., in light of background
information. . . .

. . . the condition for successful science is that
nature should have joints to carve it at: relatively
simple subsystems which can be artificially isolated and
which behave, in isolation, in something like the way
that they behave in situ. (128)

[The above, by the way, suggests that, while studying language
in isolation--severing its `joints' with other systems--may be
of limited interest to AI researchers seeking to model language
users' performance, rather than their competence, it is not
`scientifically unhealthy'. It also points to the central
problem of semantics, as Matthei points out . . .]

Modules, F says, satisfy this condition; central processes do
not. If true, this is bad news for those who wish to study
semantics. The burden which F puts on them is that they must
demonstrate that computational formalisms exist which can
overcome the problems he enumerates. These formalisms will have
to be invented, because F maintains that no existing formalisms
are capable of solving the problems.

I, too, feel that notions of modules and modularity, or at least Fodor's
attempt to consolidate them, make a great deal of sense. However, the
caveat about the study of semantics underscores my contention that
semantics properly must be based on an `acceptability model': a body of
knowledge stated in sentences in the informationally complete
sublanguage of Harris's base, whose acceptability is known. This is
akin to a `truth model' in alethic approaches to semantics in logic.
It is also very simply conceived of as a database such as is constructed
by Sager's LSP systems at NYU. We should note that the sentences of
this base sublanguage correspond very closely across languages (cf. e.g.
the English-Korean comparison in Harris's 1968 book), and that the
vocabulary of the base sublanguage is a subset of that of the whole
language (allowing for derivation, morphological degeneracy, and the
like), much closer to Young's categories than the vocabulary with which
he expresses so much frustration.
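
To make that contention a little more concrete, here is one way such an
acceptability model might be held--a minimal sketch of my own, not
Sager's LSP design or anything in Harris: operator-argument pairings of
the base sublanguage carry graded acceptability values per
subject-matter domain (characteristic 6 above), and a candidate base
sentence is assessed against that stored body of knowledge much as a
formula is assessed against a truth model. The operators, arguments,
domains, and gradings are invented for illustration.

    # Graded acceptability of operator-argument pairings in the base
    # sublanguage, stored per subject-matter domain.
    ACCEPTABILITY = {
        # (operator, arguments): {domain: grading in [0, 1]}
        ("flow", ("water",)):    {"everyday": 1.0, "hydrology": 1.0},
        ("flow", ("lava",)):     {"everyday": 0.7, "geology":   1.0},
        ("flow", ("argument",)): {"everyday": 0.4},   # metaphoric, lower grading
        ("sleep", ("stone",)):   {"everyday": 0.05},  # nonsense outside poetry
    }

    def assess(operator, args, domain="everyday", default=0.0):
        """Grade a candidate base sentence (an operator applied to its
        arguments) against the stored body of knowledge for a domain."""
        gradings = ACCEPTABILITY.get((operator, tuple(args)), {})
        return gradings.get(domain, default)

    print(assess("flow", ["lava"], domain="geology"))   # 1.0
    print(assess("flow", ["argument"]))                 # 0.4
    print(assess("sleep", ["stone"]))                   # 0.05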

There is one pointer I can give to another version of `translation
semantics' that probably satisfies Young's sense of `nonverbal':
Leonard Talmy developed an elaborate system for representing the
semantics and morphological derivation of some pretty diverse languages
in his (1974?) PhD dissertation at UC Berkeley. The languages included
Atsugewi (neighbor and cousin to the Native American language I worked
on), Spanish, and Yiddish. He went to SRI after graduation, but I have
no idea where he is now or what he is doing.

Bruce Nevin (bn@bbncch)

------------------------------

End of AIList Digest
********************
