NL-KR Digest Volume 04 No. 16

NL-KR Digest             (2/08/88 21:08:11)            Volume 4 Number 16 

Today's Topics:
Re: failure of TM
Re: Garden-path sentences
Re: words order in English and Japanese

Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Tue, 2 Feb 88 11:37 EST
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: failure of TM


In article <2132@pdn.UUCP> alan@pdn.UUCP (0000-Alan Lovejoy) writes:
>
>Generative grammar is analogous to an algebra. Just because no one can
>...
>Transformational grammar is analogous to a Calculus. There are many
>
These are interesting remarks to be made in connection with the article
by Maurice Gross, who is well-known as a mathematical linguist. Here is
an excerpt from his much-maligned article "On the Failure of Generative
Grammar"
(Language 55:883, 1979):

"Finally, let us recall Hjelmslev and his glossematics, whose
simple-minded formalism (the rediscovery of Boolean algebra for
kindergarten) permitted speculation about language quite independently
of any data. It appears that much generative work is imprinted with
these mystical aspects. It is well known that manipulating formulas of
logical or programming languages triggers, in the mind of professionals,
a compulsive feeling of satisfaction. Among linguists, this unhealthy
feeling is reinforced by a belief that such mechanisms explain, in some
deep (and as yet unfathomable) fashion, the functioning of human
thought."


Those who might be persuaded to think of Gross as some kind of crank who
doesn't know much about generative theory should take note of his
numerous publications in refereed journals. Before writing his
critique, he was the author of at least four books: Grammaire
transformationnelle du francais, Mathematical models in linguistics,
Methodes en syntaxe, and Grammaire transformationnelle du francais:
syntaxe du verbe. Among those acknowledged for review of his article
are well-known generativists such as James Hoard, Terry Langendoen,
Theodore Lightner, and Paul Postal. (This is not to say that they
endorse his views.) The article was written after Gross had been
engaged for several years with numerous co-workers in an effort to
produce a transformational-generative grammar "encompassing a
significant portion of French."
But don't be swayed by all of this.
His article wasn't worth a giggle :-).
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: {uw-june uw-beaver!ssc-vax}!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

Date: Tue, 2 Feb 88 15:33 EST
From: Paul Neubauer <neubauer@bsu-cs.UUCP>
Subject: Re: failure of TM

+ = alan@pdn
+And the fact that a TG
+has not YET been produced which completely describes any natural
+language should not be surprising: look how many CENTURIES passed
+from the invention of calculus to QED theory!!! TG is a YOUNG branch
+of mathematics. It may be centuries before some applied linguist will
+be able to use it to completely describe a natural language.

> = merrill@iuvax.cs.indiana.edu
>There is a fundamental difference between Calculus and TG. Calculus
>was invented to solve problems that it solved; transformational
>grammar was created to solve problems that it has failed to solve. To
>be sure, syntacticians can create grammars for small fragments of the
>written language, but there is no language---not even Turkish---for
^^^^^^^^^^^^^^^^? huh?
>which an adequate grammar has been constructed for a large fragment of
>the written language. No spoken language has been even roughly
>approximated. Wisely, I suspect, since, like this sentence, spoken
>language tends to be ungrammatical.
>
>The claim that "look how many centuries passed &c" is a traditional
>dodge, as commonly used in my own field of AI as it is in Linguistics.
>In either case, the appropriate response is: "if your tools are not
>yielding interesting results in the fields for which they were
>designed, you've got the wrong tools."
>Tools are designed to solve
>problems, and those which are designed, and then don't work
>immediately, should be discarded.
^^^^^^^^^^^

I somehow feel compelled to add my $.02 to this discussion and I have to
express (at least limited) agreement with Alan. Transformational (or any
other variety of) grammars ARE tools, but Merrill seems to have an extremely
distorted view of what they are tools FOR. My view is that they are tools
for understanding what language is all about. I contend that the business
of linguistics has, so far, been attempting to understand the nature of
language rather than trying to write complete descriptions of individual
languages. My own feeling is that Gross's criticism of transformational
grammars wrt traditional grammars is valid only to the extent that
traditional grammatical theory facilitated writing complete and adequate
grammars of natural language, i.e. not very.

In theory, of course, grammars can be written for natural languages.
However, anyone who has actually done serious work on one or more human
languages (and that certainly includes Greg Lee, who has also disparaged
Alan) cannot fail to be aware of how extensive a person's knowledge of
his/her language is. The amount of knowledge that must be codified to yield
a complete description of a language is mind-boggling. We are talking about
encyclopedic amounts of information here. Any serious linguist must be
aware of how many questions remain open in the matter of HOW to organize a
grammar of a language, and that presents a daunting prospect when it comes
down to the brass tacks of trying to write a complete and theoretically
consistent grammar of a particular language.

In practice, an intellectually honest linguist (even one who zealously
promotes a particular theory) has to admit (to him/herself, even if to no
one else) that the theoretical upheavals that have regularly wracked the
field make it unlikely that any particular theory will stand for very long.
This also contributes to the unattractiveness of spending years writing an
encyclopedically complete and theoretically consistent grammar of a
language. Any attempt to do so will surely teach one a great deal about the
limitations of the particular theory, but will also probably prove to be
virtually useless and unreadable to future generations of linguists.

I propose yet another insipid analogy: Linguistic theories are like
computer languages. All of the even reasonably explicit theories that we
have yet seen are more or less comparable to assembly languages for
different computers. The explicit grammars that I have seen have often
enough proved to be suitable for accurate (though fragmentary) description
of a language, but when someone wants to use the information contained in
the description in a better (or at least newer) grammar, the original
grammar has turned out to be about as portable as assembly language usually
is. We need high-level languages, but we do not have them and we will not
have them until we know a lot more about how to go about constructing them.
Traditional grammar has never reached even the level of explicitness of
modern (computer) specification languages. Encyclopedic grammars have been
written, but there is no obvious way of using the descriptions contained in
them. For the most part traditional encyclopedic grammars have been useful
principally as sources of data and the underlying traditional grammatical
theory has imposed a minimal amount of structure on that data. (This is not
to disparage the value of good data sources, but there is a difference
between data sources and complete, EXPLICIT grammars.)

In the terms of this metaphor, producing a grammar of a natural language
should seem comparable to writing a major operating system in assembly
language. Operating systems such as OS/360 have required hundreds of worker
years, and OS/360 almost broke IBM. It may not be obvious to the person in
the street who has not actually tried to analyze a major chunk of an actual
human language, but natural languages are at least as complex as large
commercial operating systems and much less well defined (at least for the
present). Nobody in their right minds should suggest that we spend many
millions of dollars writing a monstrosity of a grammar of {English, French,
Russian, ...} when we still have virtually no idea of what we expect it to
look like when we are finished and when the probability that it can be
ported to whatever improved theoretical basis we have come up with in the
meantime is next to nil.

In this light, Merrill's criticism that tools that "don't work
immediately should be discarded" seems ludicrous, at least if immediate
success is
interpreted as meaning immediate production of complete and explicit
grammars. If immediate success is interpreted more reasonably as meaning
that a grammatical theory enables us to improve our understanding of why we
have been having problems figuring out natural languages, then
transformational grammar was actually a roaring success immediately and it
was only 20 years later that its own problems became substantial enough to
cause many of its previous adherents to become disenchanted. Gross's
criticisms appeared at about that time and my hypothesis is that they
attracted as little response as they did less because they were valid (I
think at least this particular line of criticism was not) than because the
more widespread disenchantment with various other perceived problems of
transformational grammar resulted in a general shortage of enthusiasm.

None of the above should be construed as implying that I endorse
transformational grammar in any known form. There are lots of other reasons
to be dissatisfied with transformational grammar, but the lack of complete
and adequate grammars of any language is a "failing" that TG shares with all
other known theories of grammar and that "failing" is likely to continue to
prevail for a long time. If it is more difficult to construct a
transformational grammar of French than a traditional grammar, the most
plausible reason for this fact is the greater requirements for explicitness
that a transformational grammar imposes, and this is a point for
transformational (or at least generative) grammar, not against it.

--
Paul Neubauer neubauer@bsu-cs.UUCP
<backbones>!{iuvax,pur-ee,uunet}!bsu-cs!neubauer

------------------------------

Date: Thu, 4 Feb 88 12:40 EST
From: Greg Lee <lee@uhccux.UUCP>
Subject: Re: failure of TM

In article <2018@bsu-cs.UUCP> neubauer@bsu-cs.UUCP (Paul Neubauer) writes:
>+ = alan@pdn
>... Greg Lee, who has also disparaged Alan

No, no. I was just disagreeing. The view that syntactic theory is
a mathematical system is certainly respectable. Richard Montague proposed
this, and, in recent years, Paul Postal. I'm sorry if what I said
sounded nasty.

Paul goes on to suggest that we should abstract away from the
idiosyncrasies of the various syntactic theories in the way that
HLLs permit us to avoid assembler. That's also a view that I respect
but disagree with. The opposing, fortunately popular, view is
that we should try to approach assembler more closely, so as to
aim for an understanding of human language in terms of human
physiology. Someday.

Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: Thu, 4 Feb 88 12:16 EST
From: Rick Wojcik <ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu>
Subject: Re: Garden-path sentences

One classic article on garden-pathing is K.S. Lashley's "The Problem of
Serial Order in Behaviour" (Sol Saporta, ed., _Psycholinguistics: A Book
of Readings_, 1961; originally published in _Cerebral Mechanisms in
Behaviour_, 1951). When he read the paper to an audience, he
discussed the phrase "rapid writing" at one point. This primed the
audience for a later example in his text:
"Rapid writing with his uninjured hand saved from loss the contents
of the capsized canoe."

Of course, I should have spelled the word as 'righting', but this gives
you the way in which Lashley's audience actually heard the utterance.
Note that this GP sentence can only exist as a spoken, not a written,
sentence.

I think that the spoken/written issue is something of a red herring. It
is true that virtually any GP sentence can be resolved in spoken
English. However, Steedman ("Natural and unnatural language processing"
Jones & Wilks, eds. _Automatic Natural Language Parsing_, 1983) and
others have made the point that the GP effect seldom occurs in natural
language discourse--both spoken and written--because context serves to
resolve it. Not only intonation, but syntactic and pragmatic factors
serve to distract the listener from perceiving the immense amount of
ambiguity that would exist in virtually all sentences if they were
considered in isolation. While intonation can be used to resolve
ambiguity, there is no reason to believe that it always will be used.
So it is quite valid to study the GP phenomenon in both spoken and
written contexts.

The controversy in NLP circles revolves around Mitch Marcus' apparent
belief that his parsing methodology chokes on GP sentences in the same
way that humans choke on them. Steedman's point was that humans don't
normally choke on GP sentences because they don't normally perceive them
as such. Lashley's example was so amusing because the discourse context
had to be carefully contrived to produce the effect.
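As an illustrative aside (my sketch, not Marcus' actual parser or anything from the posts above): the way a deterministic parser "chokes" on a garden-path sentence can be mimicked with a toy tagger that commits to each word's most frequent part of speech and never backtracks. The mini-lexicon and tag set below are invented for the example.

```python
# Toy illustration of garden-path failure in a deterministic parser.
# The tagger greedily commits to the first (preferred) tag for each
# word and cannot revise earlier choices -- so "The old man the boats"
# comes out with no verb at all, even though the sentence is
# grammatical on the reading old=NOUN, man=VERB.

# Hypothetical mini-lexicon; the preferred tag is listed first.
LEXICON = {
    "the":   ["DET"],
    "old":   ["ADJ", "NOUN"],   # "the old" = elderly people
    "man":   ["NOUN", "VERB"],  # "to man the boats"
    "boats": ["NOUN"],
}

def greedy_tags(sentence):
    """Commit to the most frequent tag for every word, no backtracking."""
    return [LEXICON[w][0] for w in sentence.lower().split()]

def has_verb(tags):
    """A clause with no verb cannot be completed as a sentence."""
    return "VERB" in tags

tags = greedy_tags("The old man the boats")
print(tags)           # ['DET', 'ADJ', 'NOUN', 'DET', 'NOUN']
print(has_verb(tags)) # False -- the deterministic parse dead-ends
```

A parser with lookahead or reanalysis would recover by retagging "old" and "man"; the point of the sketch is only that a strictly deterministic left-to-right commitment reproduces the human garden-path effect.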
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: {uw-june uw-beaver!ssc-vax}!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

Date: Tue, 2 Feb 88 21:37 EST
From: Syun Tutiya <tutiya@alan.STANFORD.EDU>
Subject: Re: words order in English and Japanese

In article <3725@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
>Still,
>it seems to have little to do with the problems that AI researchers busy
>themselves with. And it has everything to do with what language
>scholars busy themselves with. Perhaps the participants realize
>instinctively that their views make more sense in this newsgroup.

I am no AI researcher or language scholar, so I find it interesting to
learn that even in AI there could be an argument/discussion as to
whether this is a proper subject or that is not. Does what AI
researchers are busy with define the proper domain of AI research?
People who answer yes to this question can safely be said to live in
an established discipline called AI.

But if AI research is to be something which aims at a theory about
intelligence, whether human or machine, I would say interests in AI
and those in philosophy are almost coextensive.

I do not mind anyone taking the above as a joke but the following
seems to be really a problem for both AI researchers and language
scholars.

A myth has it that variation in language is a matter of what is called
parameter setting: the same inborn universal linguistic faculty is
merely adjusted within a preset range of parameters. That linguistic
faculty is taken to be basically independent of other human faculties.
On the other hand, AI research seems to be based on the assumption that
all kinds of intellectual faculty are realized in essentially the same
manner. So it is not unnatural for an AI researcher to try to come up
with a "theory" which should "explain" what one of the human faculties
is like, an endeavor that sounds very odd and unnatural to well-educated
language scholars. Nakashima's original theory may have no grain of
truth, I agree, but the following exchange of opinions revealed, at
least to me, that AI researchers on the netland have lost the real
challenging spirit their precursors shared when they embarked on the
project of AI.
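As a small aside (my example, not from the post): the parameter-setting idea can be sketched as one universal combination rule plus a single binary switch. The head-direction parameter below, with English as head-initial and Japanese as head-final, is a standard textbook illustration; the function name and word forms are my own.

```python
# Sketch of "parameter setting": one universal phrase-building rule,
# with a single binary head-direction parameter, yields English-like
# verb-object order or Japanese-like object-verb order.

def verb_phrase(verb, obj, head_initial=True):
    """Combine a verb with its object according to the head parameter."""
    return f"{verb} {obj}" if head_initial else f"{obj} {verb}"

# English: head-initial
print(verb_phrase("eats", "sushi", head_initial=True))        # eats sushi
# Japanese: head-final
print(verb_phrase("tabemasu", "sushi-o", head_initial=False)) # sushi-o tabemasu
```

The sketch deliberately ignores everything the surrounding discussion is actually worried about (case marking, pragmatics, the independence of the language faculty); it only shows the formal shape of the "same rule, different parameter value" claim.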

Sorry for unproductive, onlooker-like comments.

Syun
(tutiya@csli.stanford.edu)
[The fact that I share the nationality and affiliation with Nakashima
has nothing to do with the above comments.]

------------------------------

Date: Thu, 4 Feb 88 11:13 EST
From: Rick Wojcik <ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu>
Subject: Re: words order in English and Japanese

In article <7390003@hpfclp.HP.COM> fritz@hpfclp.HP.COM (Gary Fritz) writes:
>
>I have been studying Japanese for well over a year now, and if there is
>one thing that is clear to me, it is that Japanese excels at vagueness
>and expression of one's mood. Many times my teacher (who speaks excellent
>English) has tried and failed to explain the subtleties involved in
>seemingly unimportant changes of phrasing. It appears that Japanese

I think that your problem with Japanese is the same one faced by all
language learners. There is nothing special about Japanese. Have you
ever tried to explain English to a Japanese or Russian speaker ;-? Try
explaining the difference between "John likes to ski" and "John likes
skiing". How about the distinction between "Eve gave Adam an apple" and
"Eve gave an apple to Adam"? There are reasons why English makes a
distinction between these constructions, but they are not readily
apparent, even to those well-versed in grammatical theory.
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: {uw-june uw-beaver!ssc-vax}!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

End of NL-KR Digest
*******************
