NL-KR Digest             (5/02/88 20:28:56)            Volume 4 Number 46 

Today's Topics:
Monthly Abstracts in AI
Advances in Linguistic Rhetoric
What are grammars (for)?
GPSG
know and believe
help

Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Mon, 18 Apr 88 07:21 EDT
From: Robin Boswell <robin%aiva.edinburgh.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Monthly Abstracts in AI

TURING INSTITUTE PRODUCT ANNOUNCEMENT - MONTHLY ABSTRACTS IN AI

Each issue contains 200 items selected from the latest conference
proceedings, research reports, journals and books on AI and related
topics, divided into 16 categories:

Expert Systems
Applications
Logic Programming
Advanced Computer Vision
Advanced Robotics
Pattern Recognition
Programming Languages and Software
Automatic Programming
Human-Computer Interaction
Hardware
Machine Learning
Natural Language
Cognitive Modelling
Knowledge Representation
Search Control and Planning
General


For a free sample copy and further information, please contact:

robin@turing.ac.uk

or

Jon Ritchie
Turing Institute
George House
36 North Hanover St.
Glasgow G1 2AD
U.K.

Tel: (041) 552-6400

------------------------------

Date: Mon, 18 Apr 88 12:15 EDT
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Advances in Linguistic Rhetoric


The current issue of Natural Language & Linguistic Theory (v.6 no.1 1988) contains
an article that should be of special interest to readers of this news group. It is
Paul Postal's "Advances in Linguistic Rhetoric" (pp. 129-137). Note that NLLT,
unlike Language, is approved reading for young linguists. Here is an excerpt from
the article:

"Great strides are being made in linguistic rhetoric, whose progress puts the
stasis in mere description and theorizing to shame. In the great rhetoric
laboratories of the north-eastern United States, defensive shields are being
perfected that can render any theory virtually impervious to factual corrosion."


Postal then goes on to describe such standardized rhetoric techniques as the Phantom
Theorem Move, the Phantom Principle Move, the Phantom Reference Move, and many
others. An added bonus is that virtually all of Postal's examples derive from
respected works in the GB literature. It should be required reading for anyone who
wishes to advance an argument in that theoretical paradigm.
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: {uw-june uw-beaver!ssc-vax}!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

Date: Wed, 20 Apr 88 12:00 EDT
From: Rick Wojcik <rwojcik@BOEING.COM>
Subject: What are grammars (for)?

RESPONSE TO: HESTVIK%BRANDEIS.BITNET@MITVMA.MIT.EDU

Arild Hestvik writes:

AH> ... You don't need any pragmatic context to decide that 'He
AH> likes John', with JOHN and HE coreferent is illformed...

Ah yes. That narcissistic fool John. Who does he like above all others?
He likes John. Who does he like to look at? He likes to look at John...

Note that "He likes himself" is also reasonable here. The point is that
no grammaticality judgments exist independently of stipulations about
language use. And that makes sense, since the grammar plays a direct role
in language production.

AH> ... the fact that [something] is ungrammatical TELLS US something very
AH> significant about the nature of human grammars, but we cannot discover what
AH> it is unless the grammar gives an analysis of the sentence.

I don't think we have any fundamental disagreement on what generative
grammars do. We are both well read in the literature. Of course
linguists use grammatical analyses of ill-formed strings to make points
about grammar development. My interest is in the processing of degenerate
language--how does generativism help us to understand this issue? I
believe that the competence/performance dichotomy, the basis of generative
grammar, impedes our understanding of language behavior.

AH> It's not an a priori requirement on generative grammar that it has a
AH> coherent position on the way it interacts with performance...

At least we can agree that generative grammar has no coherent position.

AH> ... Rather, this interaction is an empirical question and people are
AH> working on figuring it out...

This argument has lost some of its punch since the early sixties. In
computer science, the expression is Real Soon Now ;-).

AH> If you actually read any of the scientific literature, you would find that
AH> generative grammar is NOT meant to correspond to any real psychological
AH> PROCESS, but rather to real psychological KNOWLEDGE...

That is the official view with which we are all intimately acquainted. It
is patently absurd. There are two aspects to performance--comprehension
and production. We can all agree that there are psychological processes
or strategies which are needed to produce and understand language. It is
clear that language understanding cannot be rigidly governed by
grammatical processes, since we understand ungrammatical speech. On the
other hand, unless you believe in random generation and incredibly good
luck :-), every grammatical process is a real process of speech
production. I would say that most KNOWLEDGE of language follows from our
understanding of the circumstances under which we would USE it. (This is
not the same as claiming that everything which affects speech production
is a grammatical rule.) Most linguistic theories take the position that
grammars are neutral between production and perception, but the generative
literature almost never discusses language production. Linguistic
performance has become de facto language comprehension--a largely
agrammatical aspect of performance. The derivational complexity issue was
governed by experiments that measured comprehension, not production.

Finally, your attempt at formulating a rule of UG (pronounced "ugh" :-)
did not make much sense to me. You said "If AGR is rich, then you have
pro-drop."
I don't know how the child's brain is supposed to calculate
richness. There is also the question of when to use pro-drop and when not
to. I would propose an alternative. All children come equipped with
automatic pro-drop as a direct constraint on language production.
Children learning languages without pro-drop develop obligatory
pro-insertion.

------------------------------

Date: Tue, 26 Apr 88 12:19 EDT
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: What are grammars (for)?


Philip Resnik writes:
PR> ... I can easily come up with a context in which
PR> "He likes John" is *NOT* ill-formed: "John is such an egotist. He doesn't
PR> like his co-workers. He doesn't like his students. He likes John. Period."


I tried to submit the same type of example earlier, but the posting
failed. Anyway, I agree with your response to Arild. No grammaticality
judgments can be made independently of stipulations on language use.

Arild Hestvik writes:
AH> With pragmatic context you can make 'He-i likes John-i' understandable, as
AH> you point out. But I don't think this is the same as grammatical, since by
AH> grammatical we mean "wellformed by the grammar"...

The question is what we mean by grammaticality judgments. You claimed
earlier that 'He likes John' was not interpretable as coreference. Now
you want to stipulate that you only meant coreference as it applies in
binding theory. But that concept of coreference comes equipped with the
assumption that sentences occur in a 'null context'--out of the blue.
This is still a stipulation on language use. You can't build a reasonable
theory of grammar if you ignore the way in which grammatical structures
get used. If your goal is to construct a language-understanding system,
then your system will only apply to the contexts that you choose to
examine. And the 'null context' is not a very useful one. It only
seems to occur in conversations about linguistic theory :-).

From Arild Hestvik's earlier reply to me:
AH> It's not an a priori requirement on generative grammar that it has a
AH> coherent position on the way it interacts with performance...

At least we can agree that generative grammar has no coherent position.

AH> ... Rather, this interaction is an empirical question and people are
AH> working on figuring it out...

This argument has lost some of its punch since the early sixties. In
computer science, the expression is Real Soon Now ;-).

AH> If you actually read any of the scientific literature, you would find that
AH> generative grammar is NOT meant to correspond to any real psychological
AH> PROCESS, but rather to real psychological KNOWLEDGE...

That is the official view with which we are all acquainted. It
is patently absurd. There are two aspects to performance--comprehension
and production. We can all agree that there are psychological processes
or strategies which are needed to produce and understand language. It is
clear that language understanding cannot be rigidly governed by
grammatical processes, since we understand ungrammatical speech. On the
other hand, unless you believe in random generation and incredibly good
luck :-), every grammatical process is a real process of speech
production. I would say that most KNOWLEDGE of language follows from our
understanding of the circumstances under which we would USE it. (This is
not the same as claiming that everything which affects speech production
is a grammatical rule.) Most linguistic theories take the position that
grammars are neutral between production and perception, but the generative
literature almost never discusses language production. Linguistic
performance has become de facto language comprehension--a largely
agrammatical aspect of performance. The derivational complexity issue was
governed by experiments that measured comprehension, not production.

Finally, your attempt at formulating a rule of UG
did not make much sense to me. You said "If AGR is rich, then you have
pro-drop."
I don't know how the child's brain is supposed to calculate richness.
There is also the question of when to use pro-drop and when not to.
I would propose an alternative.
All children come equipped with automatic pro-drop as a direct constraint on
language production. Children learning languages without pro-drop develop
obligatory pro-insertion.
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: uw-beaver!ssc-vax!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

Date: Wed, 20 Apr 88 12:49 EDT
From: Pete Humphrey <pete@xochitl.UUCP>
Subject: GPSG

While looking at the description of STM2 in the analysis
of unbounded dependencies in Generalized Phrase Structure
Grammar (GKPS), I was unable to determine how missing
subject constructions like "The man that chased Fido returned"
are generated. As the authors point out, STM2 will not
apply to non-lexical ID rules. If the embedded clause is
not introduced as a subcategorized complement in a lexical
ID rule, how is it generated? I am probably overlooking
something obvious, but could someone explain how GPSG
handles this type of construction? Thanks.

Pete Humphrey
charon.unm.edu!xochitl!pete

------------------------------

Date: Fri, 22 Apr 88 12:52 EDT
From: Jeffrey Goldberg <goldberg@csli.STANFORD.EDU>
Subject: Re: GPSG

In article <117@xochitl.UUCP> pete@xochitl.UUCP (Pete Humphrey) writes:
>While looking at the description of STM2 in the analysis
>of unbounded dependencies in Generalized Phrase Structure
>Grammar (GKPS), I was unable to determine how missing
>subject constructions like "The man that chased Fido returned"
>are generated. As the authors point out, STM2 will not
>apply to non-lexical ID rules. If the embedded clause is
>not introduced as a subcategorized complement in a lexical
>ID rule, how is it generated? I am probably overlooking
>something obvious, but could someone explain how GPSG
>handles this type of construction? Thanks.
>
>Pete Humphrey
>charon.unm.edu!xochitl!pete

You are not really missing something obvious. You are missing a
counter-intuitive kludge. A paper presented at the 1988 meeting of
the Berkeley Linguistics Society by Nancy Wiegand enumerated why
the GPSG treatment is counter-intuitive, and also presented a new
kind of argument (from diachronic theory) that the analysis is
wrong. Professor Wiegand can be reached at
Nancy_Wiegand@um.cc.umich.edu.

The GKPS ("Generalized Phrase Structure Grammar" by Gazdar, Klein,
Pullum, and Sag) treatment of short relatives does not involve the
feature SLASH. If we take the NP

(1) the man who chased Fido

we get something like this (I don't have my copy of GKPS with me,
so please forgive errors in some of the details).

(1t) [NP [Det the]
         [N1 [N1 [N man]]
             [S[REL] [NP[REL] who]
                     [VP [V chased]
                         [NP Fido]]]]]

So, there is no SLASHing: the normal S --> NP VP phrase
structure rule is used for expanding the S[REL]. REL is really
short for [WH [WH-MORPH REL]]. WH is a FOOT feature, and there is
an FCR blocking WH on VPs. So, when you have an S[REL] expanding
with the normal S --> NP VP rule, the NP must be a relative.

Now, the trick should be clear. The wonderfully ambiguous word
'that' is also a relative pronoun in this construction! (This kind
of argument is made explicitly in an earlier GPSG paper, "Unbounded
Dependencies and Coordinate Structures" by Gerald Gazdar in LI
198[12].)
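
To make the mechanism concrete, here is a minimal Python sketch of
the analysis just described. The rule inventory and lexicon are my
own toy fragment, not the actual GKPS grammar: S[REL] is expanded by
the ordinary S --> NP VP rule, the REL foot feature sits on the NP
daughter, the FCR keeps it off VPs, and 'that' is listed in the
lexicon as a relative pronoun, so Pete's "The man that chased Fido
returned" parses with no SLASH at all.

from itertools import product

# Lexicon: word -> set of categories. Preterminal chains like
# N -> man, N1 -> N are collapsed for brevity. 'who' and 'that'
# are both relative pronouns here, i.e. NP[REL].
LEXICON = {
    "the":      {"Det"},
    "man":      {"N1"},
    "who":      {"NP[REL]"},
    "that":     {"NP[REL]"},   # the wonderfully ambiguous word
    "chased":   {"V"},
    "Fido":     {"NP"},
    "returned": {"VP"},        # intransitive VP, collapsed
}

# Binary rules, (left daughter, right daughter) -> mother. S[REL]
# is built by the *same* NP VP expansion as plain S; the REL foot
# feature is carried by the subject NP, never by the VP.
RULES = {
    ("NP", "VP"):      "S",
    ("NP[REL]", "VP"): "S[REL]",   # normal S -> NP VP, REL on the subject
    ("Det", "N1"):     "NP",
    ("N1", "S[REL]"):  "N1",       # relative clause adjoined to N1
    ("V", "NP"):       "VP",
}

def fcr_ok(cat):
    """The FCR: no REL (a WH foot feature) on VP categories."""
    return not (cat.startswith("VP") and "[REL]" in cat)

def parse(words):
    """CKY recognition over the toy grammar above."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {c for c in LEXICON[w] if fcr_ok(c)}
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            k = i + width
            for j in range(i + 1, k):
                for left, right in product(chart[i][j], chart[j][k]):
                    mother = RULES.get((left, right))
                    if mother and fcr_ok(mother):
                        chart[i][k].add(mother)
    return chart[0][n]

print(parse("the man that chased Fido returned".split()))
# -> {'S'}: the missing-subject relative comes out with no SLASH anywhere.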

As I have said, Wiegand 1988 adds a new argument against this
treatment of relative 'that'.

One hypothesized solution is to allow for VPs with
complementizers. Again it is counter-intuitive, but I think
that some motivation can be provided by looking at the
Scandinavian languages. The rules we (Chris Culy and I) have
come up with to handle the nitty-gritty details are not
pleasant to look at. I would send them to you if I could find
them among my files.

Another option is to junk the whole GPSG restriction on subject
extraction. This is not really possible, because the GPSG
restriction follows from too many things built into the theory.
Furthermore, there is a lot that is really nice in the theory.
Treating short subject extraction as not really extraction has some
nice consequences.

(2) Which glass of milk's been spilt?
(3) *Which glass of milk's Ben spelled?

The analysis also covers the "'that'-trace" effects neatly, is
really pretty when it comes to parasitic gaps, and the coordinate
structure facts come out nicely as well.

Problems arise, however. The GPSG account does nothing to rule out
the profoundly ungrammatical (4).

(4) *Who did pictures of hang on a wall?

Because of the Subject-Aux Inversion, the subject "pictures of __"
is a sister to the lexical head "did". Thus, what normally rules
out (nonparasitic) extraction from subject fails to apply here.

This was pointed out in print by Pollard in (I think) "Phrase
Structure Grammar Without Metarules", which is in one of the WCCFL
volumes. (He proposes an alternative to GPSG that doesn't have the
problem that GPSG does here, but also lacks the highly valued "rich
deductive structure" of GPSG.)

Another solution may be to give up the treatment of auxiliary verbs
in English as lexical heads. But I won't go into that here.

-jeff goldberg
--
Jeff Goldberg Internet: goldberg@csli.stanford.edu

------------------------------

Date: Mon, 25 Apr 88 11:32 EDT
From: Paul Neubauer <neubauer@bsu-cs.UUCP>
Subject: Re: GPSG

In article <3578@csli.STANFORD.EDU> goldberg@csli.UUCP (Jeffrey Goldberg) writes:
>Another option is to junk the whole GPSG restriction on subject
>extraction. This is not really possible, because the GPSG
>restriction follows from too many things built into the theory.
>Furthermore, there is a lot that is really nice in the theory.
>Treating short subject extraction as not really extraction has some
>nice consequences.
>
>(2) Which glass of milk's been spilt?
>(3) *Which glass of milk's Ben spelled?

HUH?? I hope this is a typo of some sort. I certainly agree that (3) is *,
but does that really mean anything? (4) strikes me as not much, if at all,
better.

(4a) ?*Ben's spelled a glass of milk.
b) ?*Ben has spelled ...

On the other hand, if we assume something sensible like:

(5) Ben's spilled a glass of milk.

then (6) strikes me as not so bad.

(6a) Which glass of milk's Ben spilled?
b) Which glass of milk has Ben spilled?

I admit that (6b) sounds a bit better to me in isolation, but I don't see
any relevance to that. I suspect that in the absence of context (either
linguistic or real-world) the example could be further improved by
substituting "whose" for "which" since it is comparatively unobvious how one
should identify a glass of milk (and 'by owner' seems as likely a means of
identification as any). If (as seems likely to me) "glass of" is irrelevant
to the construction in question, then

(7) Whose milk's Ben spilled this time?

strikes me as absolutely fine. The addition or removal of "glass of"
certainly ought to have no bearing on questions of (subject??) extraction,
if that is really the problem under discussion here.

--
Paul Neubauer neubauer@bsu-cs.UUCP
<backbones>!{iuvax,pur-ee,uunet}!bsu-cs!neubauer

------------------------------

Date: Mon, 25 Apr 88 23:50 EDT
From: Jeffrey Goldberg <goldberg@csli.STANFORD.EDU>
Subject: Re: GPSG

In article <2726@bsu-cs.UUCP> neubauer@bsu-cs.UUCP (Paul Neubauer) writes:
>In article <3578@csli.STANFORD.EDU> goldberg@csli.UUCP (Jeffrey Goldberg) writes:

>>(2) Which glass of milk's been spilt?
>>(3) *Which glass of milk's Ben spelled?

>HUH?? I hope this is a typo of some sort.

Yes, sorry. The examples should have been

(2') Which glass of milk's been spilled?
(3') *Which glass of milk's Ben spilled?

in contrast to

(2'') Which glass of milk has been spilled?
(3'') Which glass of milk has Ben spilled?

My typos were so bad as to seriously confuse the point. In (2') we
have the auxiliary 'has' contracting onto the questioned phrase
"which glass of milk". We already know that "has" can contract onto
subjects: "The glass of milk's been spilled." Given the GKPS
analysis of extraction, the phrase "which glass of milk" is in the
subject position in (2') but not in (3').

It is certainly true that there are a million other perfectly
plausible treatments of this cute fact (for those who get the
judgments in 2'-3'), but nothing is particularly compelling and it
does fit nicely with the GPSG treatment.
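
Since the generalization is compact, it can even be stated as a
one-line predicate. The little Python fragment below is my own
rendering of the claim, nothing more: "has" reduces to "'s" only onto
a host phrase in subject position, which under the GKPS analysis is
"which glass of milk" in (2') but not in (3').

def contraction_ok(host_in_subject_position):
    """'has' -> 's is licensed only onto a subject host."""
    return host_in_subject_position

# Whether the fronted phrase counts as the subject is exactly what
# the GKPS analysis decides; the booleans below encode its verdict.
examples = [
    ("Which glass of milk's been spilled?", True),    # (2'): host is the subject
    ("Which glass of milk's Ben spilled?",  False),   # (3'): host is a filler
]

for sentence, in_subject_position in examples:
    mark = "   " if contraction_ok(in_subject_position) else "*  "
    print(mark + sentence)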
--
Jeff Goldberg Internet: goldberg@csli.stanford.edu

------------------------------

Date: Sun, 24 Apr 88 02:59 EDT
From: Yuan Tsai <ching@uhccux.UUCP>
Subject: know and believe

The meaning of 'know' and 'believe' is intriguing. Scholars
who are concerned with their meaning tend to treat them as
semantic primitives and to consider them as unrelated. But I
think this is wrong. I suspect that the two words have
something in common.

Consider the following sentences:


(1) John knows that Mary is smart.
(2) John believes that Mary is smart.


Both (1) and (2) share the same basic meaning: 'John has a piece
of information in his mind that Mary is smart.' But they differ
in whether or not the actual speakers of (1) or (2) have the same
information (or if you like, proposition). The speaker of (1) is
committed to the information. The speaker of (2) is either
committed to the opposite of the information (in this case, 2 is
a false belief on the part of John from the speaker's viewpoint)
or he does NOT have the information at all (in this second
possible reading, 2 is a belief neutral to the speaker).
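
The proposal is easy to make explicit with a small data structure.
Here is a Python sketch (my own formalization of the paragraph
above, not anything standard in the literature): both verbs
contribute the same core, and the speaker's stance toward the
proposition is the only thing that varies.

from dataclasses import dataclass
from enum import Enum

class SpeakerStance(Enum):
    COMMITTED = "speaker also holds p"      # know
    NEUTRAL = "speaker takes no stance"     # neutral belief
    OPPOSED = "speaker holds not-p"         # false belief

@dataclass
class AttitudeReport:
    subject: str
    proposition: str        # the shared core: subject has p in mind
    stance: SpeakerStance   # what distinguishes know from believe

# (1) John knows that Mary is smart.
knows = AttitudeReport("John", "Mary is smart", SpeakerStance.COMMITTED)

# (2) John believes that Mary is smart -- two readings:
neutral_belief = AttitudeReport("John", "Mary is smart", SpeakerStance.NEUTRAL)
false_belief = AttitudeReport("John", "Mary is smart", SpeakerStance.OPPOSED)

for r in (knows, neutral_belief, false_belief):
    print(f"{r.subject} has '{r.proposition}' in mind; {r.stance.value}")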

For the two notions of 'know' and 'believe,' most languages,
I believe, have two distinct forms (as in English) or even three
(as in Mandarin Chinese: chudau 'know', siangsin 'neutral
belief', yiwei 'false belief'; or in Japanese: siru 'know',
sinziru '(neutral) belief', omoikomu 'false belief'), because they
are very important in indicating the reliability (to be exact,
the reliability according to the subjective view of the speaker)
of the information in exchange. But I think there may be some
languages which use the same form for both 'know' and 'believe',
but distinguish them with particles or suffixes. If the
morphology of a language is such that some words are formed by
agglutinating more basic morphemes together and if this language
has some systematic uses of particles or suffixes, then it is
possible that the partial overlap of 'know' and 'believe' will
be reflected in the same stem or morpheme and that the use of
particles or suffixes will reflect whether the complement is true
or not.

Does anyone know any languages of this kind?

------------------------------

Date: Thu, 28 Apr 88 14:46 EDT
From: James W. Meritt <jwm@stdc.jhuapl.edu>
Subject: help

Would someone please email/post proofs/disproofs/references to the following:

The Sapir-Whorf hypothesis, which I read as "The limits of your language are the limits of
your thoughts"
(corrections welcome)

"Any statement in one language can be translated into any other without loss"


I have seen these, and have no problems with one and immense difficulty
with the other. My training is in problem solving methodologies
(Operations research / general systems analysis) not linguistics, so
I would appreciate, and probably understand, suitable arguments/presentations.
(i.e. "it is commonly held that" and "I wish" are equally null.
Observables are terrific.)

Thank you.

P.S. Has anybody heard of anything coming from Brown's Loglan? I have the
lead-in language manual, but have seen no results.


Disclaimer: Individuals have opinions, organizations have policy.
Therefore, these opinions are mine and not any organizations!
Q.E.D.
jwm@aplvax.jhuapl.edu 128.244.65.5 (James W. Meritt)

------------------------------

End of NL-KR Digest
*******************
