NL-KR Digest (5/25/88 00:25:25) Volume 4 Number 53
Today's Topics:
Requests:
Swedish grammar
understanding bibliographic references
Looking for phonetics package for ibm compatible
Ongoing discussions:
On grammaticality judgements
subject extraction
Re: What are grammars (for)?
genderless 3rd person singular pronoun
Seminars:
From CSLI Calendar, May 19, 3:29
BBN AI/Education seminar -- Miriam Reiner.
CSLI Calendar addition
Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------
Date: Tue, 17 May 88 12:25 EDT
From: Jonas Mellin <mcvax!cs.exeter.ac.uk!jme@uunet.UU.NET>
Subject: Swedish grammar
I am an M.Sc. student in computer science doing my project work in
computational linguistics. I am going to build a system which
understands Swedish in a restricted domain.
I wonder:
a) has anybody done any work on Swedish grammar in computational
linguistics (e.g. morphological analysis)?
b) are there any ongoing projects on Swedish at the moment?
I would be grateful for fast answers.
Thanks,
Jonas Mellin, Department of Computer Science,
University of Exeter.
------------------------------
Date: Thu, 19 May 88 15:39 EDT
From: finin@PRC.Unisys.COM
Subject: understanding bibliographic references
We have a need to process bibliographic references, extracting the
relevant information encoded in them. That is, to take a reference like:
J. W. Wallis and Edward H. Shortliffe. Customizing
explanations using causal knowledge. In Bruce G. Buchanan and
Edward H. Shortliffe, editors, Rule-Based Expert Systems,
Addison-Wesley, Reading, MA, 1984.
and to produce a data structure something like:
((type bookChapter)
(author "J. W. Wallis and Edward H. Shortliffe")
(title "Customizing Explanations Using Causal Knowledge")
(book (title "Rule-Based Expert Systems")
(publisher "Addison-Wesley")
(editor "Bruce G. Buchanan and Edward H. Shortliffe")
(year "1984")
(address "Reading, MA")))
Put simply, we want to develop a system that does what BibTeX does,
but in reverse. It should work for references to a variety of types
of documents (e.g. journal articles, books, technical reports,
theses, etc.) and bibliographic styles. It should keep its
domain-independent knowledge (e.g. "Edward" is a given name, MA can
be an abbreviation for Massachusetts, which is the name of a state,
1984 is a plausible value for a year of publication, etc.) separate
from its domain-dependent knowledge (e.g. what IJCAI means, that BBN
is a company which has a technical report series, etc.). This would
ease porting it from one domain (e.g. AI) to another (e.g. fluid dynamics).
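To make the separation concrete, here is a minimal sketch of the idea in
Python (my own toy illustration, not an existing system); the field names,
patterns, and the little knowledge tables below are all hypothetical:

# Hypothetical sketch: a toy reference analyzer that keeps domain-independent
# knowledge (dates, state abbreviations) separate from domain-dependent
# knowledge (publishers), so the latter can be swapped when porting the
# analyzer to a new field.
import re

# Domain-independent knowledge: true of bibliographic references in general.
STATE_ABBREVS = {"MA": "Massachusetts", "CA": "California"}
YEAR_PATTERN = re.compile(r"\b(19|20)\d{2}\b")

# Domain-dependent knowledge: specific to one field (here, AI); replace this
# table to port the analyzer to, say, fluid dynamics.
KNOWN_PUBLISHERS = ["Addison-Wesley", "MIT Press"]

def analyze(reference):
    """Extract a few fields from a free-text reference string."""
    fields = {}
    year = YEAR_PATTERN.search(reference)
    if year:
        fields["year"] = year.group(0)
    for abbrev, state in STATE_ABBREVS.items():
        if re.search(rf"\b{abbrev}\b", reference):
            fields["address-state"] = state
    for publisher in KNOWN_PUBLISHERS:
        if publisher in reference:
            fields["publisher"] = publisher
    return fields

ref = ("J. W. Wallis and Edward H. Shortliffe. Customizing explanations "
       "using causal knowledge. In Bruce G. Buchanan and Edward H. "
       "Shortliffe, editors, Rule-Based Expert Systems, Addison-Wesley, "
       "Reading, MA, 1984.")
print(analyze(ref))
# -> {'year': '1984', 'address-state': 'Massachusetts', 'publisher': 'Addison-Wesley'}

A real system would of course need parsing and expectation-driven knowledge
far beyond this pattern lookup; the sketch only shows where the two kinds of
knowledge would live.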
Such a system would probably be an interesting application drawing on
aspects of computational linguistics (e.g. parsing, sub-language
theory, proper name recognition), and knowledge-based expert systems
(e.g. expectation-driven parsing, domain modeling). I'm interested in
getting pointers to any research on systems like this. I can't recall
hearing of any.
Tim
------------------------------
Date: Fri, 20 May 88 16:51 EDT
From: Steven Zepp <stevenz@carr.UUCP>
Subject: Looking for phonetics package for ibm compatible
I'm working with Ruth King, a linguistics professor here at York University,
on a sociolinguistic study of PEI Acadian. We are looking around for
a package (the less expensive the better) that will help manage
phonetically transcribed text. Here's what we're looking for:
---A text-editing/word-processing package that handles the standard IPA
(International Phonetic Association) alphabet, complete with the
standard diacritical marks.
---Good quality/readability on screen and dot-matrix printer
---Quick set up, easy to learn, and a reasonable run speed
---As inexpensive as possible (PD/Shareware would be very nice!)
---It should run on a Zenith pc (we're not interested in Mac software)
Please mail advice, recommendations, horror stories, etc., to
either/both of the addresses below (I will summarize and post if there
is interest). Thanks very much,
Steven
Ruth King (416) 736-5016 ext. 8731
Department of Languages, Literatures, and Linguistics
S505A Ross Building
York University RKING@YORKVM1.BITNET
4700 Keele Street RKING@VM1.YORKU.CA
Downsview, Ontario
M3J 1P3
Steven Zepp (416) 736-5376
Computer-Assisted Writing Centre
530 Scott Library
York University ...!{utzoo, mnetor, utgpu}!yunexus!writer!stevenz
4700 Keele Street stevenz@writer.yorku.UUCP
Downsview, Ontario stevenz@writer.yorku.ca
M3J 1P3
------------------------------
Date: Wed, 18 May 88 13:52 EDT
From: HESTVIK%BRANDEIS.BITNET@MITVMA.MIT.EDU
Subject: On grammaticality judgements
Rick Wojcik writes:
>AH> ... You don't need any pragmatic context to decide that 'He
>AH> likes John', with JOHN and HE coreferent is illformed...
>Ah yes. That narcissistic fool John. Who does he like above all others?
>He likes John. Who does he like to look at? He likes to look at John...
>Note that "He likes himself" is also reasonable here. The point is that
>no grammaticality judgments exist independently of stipulations about
>language use. And that makes sense, since the grammar plays a direct role
>in language production.
Of course grammaticality judgements are not independent of language use; in
fact, they are a case of language use. But this does not mean that we cannot
get beyond the obscuring factors of language use when trying to discover
the principles of the mental grammar.
The fact that you can create a context where the Binding Theory doesn't
work doesn't mean that the Binding Theory is wrong; it simply means that
you set up a lousy experiment which yielded garbage as a result. That is
very easy to do: you can do it to everything you learned in high-school
physics, simply by performing the experiment wrongly.
Of course, Wojcik's answer will now be that I'm doing the same thing: I
create a context where the Binding Theory works! And: How do I decide
which context is the "right" one??!?! Well, there is no right context, of
course; you can only be guided by what you believe to be the right theory.
For example, a theoretical linguist believes that there is such a thing
as knowledge of language that is independent of the use of it. Therefore,
when doing an experiment, he will try to construct sentences and perhaps
set up the context (and eliminate others, like Wojcik's) so that the judgement
reflects something about that knowledge, and not other things (like
language use, principles of discourse, pragmatics, etc.).
------------------------------
Date: Wed, 18 May 88 17:09 EDT
From: Chris Collins <collins@src.honeywell.com>
Subject: subject extraction
In a recent article Bob Frank writes:
>In a recent article Chris Collins writes:
>>
>>Date: Wed, 27 Apr 88 02:19 EDT
>>From: Chris Collins <collins@srcsip.UUCP>
>>Subject: subject extraction
>>
>>
>>There might be another piece of evidence that supports an analysis of
>>questions where the subject does not move. Consider the following
>>paradigm:
>> 1 who does John like?
>> 2 whom does John like?
>> 3 who likes John?
>> 4 ?* whom likes John?
>>
>>Suppose that an NP is assigned accusative case by virtue of its being
>>in the object position, and nominative case by virtue of its being in
>>the subject position. Then the case marking of the wh-pronoun in 3 and 4
>>indicates that 'who' is in the subject position. If on the other hand we
>>were to say that an empty category in the gap left by the wh-pronoun
>>transmitted case, then the equivalence of 1 and 2 is unexplained.
>
>I don't see that your argument really makes much sense. You claim that
>1 and 2 are equivalent. The wh-element (in 2) is presumably moved to COMP
>carrying with it its accusative case. However, why is this not the case with
>subjects? Might they be assigned nominative case in subject position and then
>moved to COMP taking their Case along for the ride? This would certainly
>explain why 3 is OK (there is a transmission of nominative case) and 4 is
>out ('whom' does not receive accusative case from anywhere).
>
>Notice that in neither situation (i.e. subject or object WH) must the
>wh-pronoun move for need of Case (as in NP-movement). In fact, Japanese
>has its Wh-pronouns in-situ. Why then are they moved to COMP in English?
>The standard story (in GB) has been for purposes of interpretation: The WH
>must have scope over its clause in logical form. In English, this requirement
>presumably holds at S-structure. In Japanese, then, Wh-movement is assumed
>to take place at Logical Form.
>Thus, Case has little to do with the position of Wh-pronouns (except of course
>from the fact that they must be part of a chain which is uniquely assigned
>Case).
As he correctly points out, 2, 3, and 4 have explanations on a GB
approach involving the transmission of Case. But he did not explain
why 1 is acceptable. The acceptability of 1 is what I was trying to
explain with the no-movement-of-the-subject analysis.
The idea was that since 'who' in 3 is in subject position, it receives
nominative case, whereas in 1 and 2, since the wh-element occupies no
grammatical function associated with a case, it doesn't receive any
and optionally takes either nominative or accusative as a matter of
phonetic realization.
There are other reasons for thinking that this is the wrong analysis.
Take the following sentences:
5. Who did Mary say likes John
6. * Whom did Mary say likes John
The relative unacceptability of 6 would be unexplained on the approach
that I took above: the wh-element does not occupy a grammatical
function associated with case, so it should be free to be realized
with either nominative or accusative case, contrary to fact.
I think this would be accounted for on a GB Case-transmission
approach, with the proviso that the accusative wh-pronouns in English
are 'who' and 'whom'. In this way 1 above would not be a case of a
wh-pronoun receiving no case, but rather of an accusative wh-pronoun
taking either one of the alternative realizations of accusative case.
A piece of evidence that there is no subject extraction might be the
following:
7. Who does John like?
8. Who likes John?
9. Who does like John?
10. The boy does like the girl. (emphatic)
11. Hardly had John left, when Mary arrived.
It seems to be a fact that when certain elements (wh-words, negative
polarity items) are placed at the front of a phrase, an auxiliary
element must appear in the phrase, as in 7 and 11.
Subject questions seem to be the exception, as in 8. (Note that when the
same question takes an emphatic reading, as in 9, the auxiliary is
produced, but its presence is not due to the fact that 9 is a question,
or that a wh-word has been fronted, but rather to the sentence being
said with emphasis, as in 10.)
The fact that in 8 there is no auxiliary might be explained on the
assumption that no movement takes place in subject questions:
there is then no element of the relevant kind (a fronted wh-word or
negative polarity item) at the front of the sentence to cause the
auxiliary to appear.
As you can see, I am not committed to either analysis of subject
questions (movement or no-movement); rather, I am just trying to see
what evidence is out there that supports either analysis.
Let's hear some arguments on this issue!!
Chris Collins Honeywell Systems and Research Center
voice: (612)782-7635 paper: 3660 Technology Drive, Minneapolis, MN 55418
Internet: collins@src.honeywell.com
------------------------------
Date: Fri, 20 May 88 12:36 EDT
From: rwojcik@BOEING.COM
Subject: Re: What are grammars (for)?
Arild Hestvik has written [in reply to my claim that "most KNOWLEDGE of
language follows from our understanding of the circumstances under which
we would USE it."]
AH> ... all English speakers [know] Subjacency (roughly that you cannot
AH> extract out of an NP and an S', or two S's). However, nobody ever
AH> heard a sentence violating it. How, then, can this knowledge arise from
AH> language use?
I agree with you that children can't learn language from positive examples
alone, and it would be devastating for my position if I were to say that
they did. I'm not sure how you got from my claim that linguistic
knowledge is based in strategies governing behavior to the view that those
strategies must be learned exclusively through positive examples. At the
very worst, I could fall back on the generativist refrain of "It's all too
complex to be learned, so it *must* be innate!" ;-) If a set of competence
strategies can be inherited, why can't a set of performance strategies?
"Performance parameters" has a nice alliterative feel to it :-). One
could just claim that the brain has evolved for language use. Sound
unreasonable?
As for constraints on movement, I see no a priori reason why they can't be
grounded in behavior. Let us propose that subjacency prevents speakers
from using sentences such as "Which books does John know where Bill
bought?" This sentence is certainly rendered more acceptable by the
insertion of a pronoun: "Which books does John know where Bill bought
them?" Languages with resumptive pronouns tend to 'like' these
constructions better. So it looks as if the constraint is designed to
prevent the occurrence of structures that are perceptually difficult. I
find it hard to imagine someone wanting to use such a sentence, but the
existence of island constraints might well serve a behavioral function--as
a kind of 'warning flag' that the message under construction is going to
be difficult to process.
But the motivation for such constraints is not the issue so much as the
question of how well-formedness intuitions arise. I claim that the
'grammar' is best thought of as a set of performance strategies, and that
intuitions arise from our general cognitive ability to examine behavioral
strategies introspectively. One constantly makes intuitive judgments about
all kinds of behavior--how far one can jump, what a proper dance step is,
whether someone is limping or walking properly, etc. Many of these
judgments are about the well-formedness of behavior. Do we have to set up a
dual set of constraints on knowledge and behavior for all types of
well-formedness judgments--or just linguistic well-formedness judgments?
-Rick Wojcik rwojcik@boeing.com
------------------------------
Date: Tue, 24 May 88 10:11 EDT
From: Rich Alpert <alpert@endor.harvard.edu>
Subject: genderless 3rd person singular pronoun
In article <5614@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
> [...]
> What we could really use is a truly
> genderless singular 3rd person pronoun such as that found in Finno-Ugric
> languages (Hungarian, Finnish, Estonian). We could borrow the word 'ta'
> (he/she/it), for example. How about it? I'll use ta if everyone else
> will. So now we say "Everyone flipped ta lid" instead of the sexist
> "Everyone flipped his lid." [...]
Is `ta' Finno-Ugric? `Ta' is the only (hence genderless) third person singular
pronoun in Mandarin Chinese. There exist distinct masculine and feminine
written forms, although the masculine form is appropriate for both. (As is the
case, too, with "you" in Mandarin Chinese.)
What about cases? "Everyone flipped ta's lid."?? "Give it to tam."??
(pronounced "Tom"?) Consider: "Someone is at the door. See who it is and ask
ta what ta wants."
In my household (which is English/Mandarin bilingual) it is not unheard of to
construct utterances such as "Ta hung up" (when answering the telephone a
second too late), although a complete Mandarin sentence is more likely, in
place of the common but odd English announcement, "They hung up."
I think this is an excellent idea, Rick. You have my vote. (In this case,
I'll vote with my lexicon rather than with my feet. ;-) )
Rich Alpert Aiken Computation Lab
alpert@endor.harvard.edu Harvard University
...{ihnp4!think, seismo}!harvard!alpert Cambridge, Mass 02138
------------------------------
Date: Wed, 18 May 88 20:14 EDT
From: Emma Pease <emma@russell.stanford.edu>
Subject: From CSLI Calendar, May 19, 3:29
How Complex is the Mapping between Semantics and Syntax:
Agents and Themes in Dutch
Annie Zaenen
(zaenen.pa@xerox.com)
May 19
Recently syntacticians have turned their attention again to the
correlations between the meaning of words and their syntactic
properties. A popular view is that the semantics of a verb is the
basis for a classification of its arguments into thematic roles and
that a hierarchy of these roles determines the grammatical realization
of these arguments (as subjects, objects, etc.). I will discuss some
data from Dutch that show that this picture has to be complicated in at
least two ways. First, one has to assume broader equivalence classes
that mediate between the semantically defined thematic roles and the
grammatical ones. Second, some of the phenomena that have been
analyzed in terms of thematic roles (or similar lexical notions that
can be thought of as representations of lexical aspect, i.e.,
Aktionsart) need to be analyzed as conditioned by sentence aspect.
--------------
NEXT WEEK'S CSLI TINLUNCH
Reading: "The Algebra of Events"
by Emmon Bach
Discussion led by Bob Carpenter
(carp@drifters.stanford.edu)
May 26
Emmon Bach claims that the "basic aim of this paper is to try and
elucidate this proportion: events:processes :: things:stuff." He
exploits the structural parallels between the domain of individuals
and events to propose a semantics for verbal aspect, and in particular
the progressive, identical to Godehard Link's semantics for mass and
count nominals.
We'll concentrate on the "Puzzles and Problems" section, which
deals with three unresolved issues. The first is the general
mechanism of languages for "packaging" objects into new objects and
"grinding" existing objects into their constituents. The second deals
with the relation between the partitive and the progressive and their
admission of real, but incomplete complements as in "part of a bridge"
and "was building a bridge" where the bridge was never built. The
final puzzle is the key, where the general ontological question of
object individuation and its relation to the attunement of agents is
brought out of the closet.
Time permitting, we can discuss some comments of Fred Landman's (in
"Groups," UMass ms) pertaining to the general topic of collectivity
and individuation, which is closely related to the notions of "actual"
situation in situation semantics.
--------------
NEXT WEEK'S CSLI SEMINAR
A Grammar for Tarski's World
Lauri Karttunen
(karttunen.pa@xerox.com)
May 26
Tarski's World is an educational Macintosh game for teaching
first-order logic, designed by Jon Barwise and John Etchemendy. To
play the game, the student creates a world of geometric objects. A
display window presents a 3-d view of the world. In a text window,
the student can type a formula of first-order logic and have it
verified with respect to the world.
This summer, we are planning to augment Tarski's World with a
natural-language interface. The new version of the program will also
translate between English and first-order logic. For example, it will
be able to tell the student that "Every cube is not small" means
either "Ax (cube(x) -> ~small(x))" or "~Ax (cube(x) -> small(x))" and
it can also translate logical formulas to English.
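As a rough illustration of what verifying such formulas against a world
involves, the two readings can be checked against a toy world of blocks
along the following lines (my own sketch in Python, not the actual Tarski's
World code; the world representation and predicate names are hypothetical):

# Hypothetical sketch: evaluating the two readings of "Every cube is not
# small" against a toy world of geometric objects.
world = [
    {"shape": "cube", "size": "large"},
    {"shape": "cube", "size": "small"},
    {"shape": "tetrahedron", "size": "small"},
]

def cube(x):
    return x["shape"] == "cube"

def small(x):
    return x["size"] == "small"

# Reading 1: Ax (cube(x) -> ~small(x))   -- no cube is small
reading1 = all((not small(x)) if cube(x) else True for x in world)

# Reading 2: ~Ax (cube(x) -> small(x))   -- not every cube is small
reading2 = not all(small(x) if cube(x) else True for x in world)

print(reading1)  # False: the world contains a small cube
print(reading2)  # True: the world also contains a large cube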
The grammar for Tarski's World is a categorial unification grammar
in the style of my "Radical Lexicalism" paper. A novel aspect of the
grammar is that translations of English words are layered structures.
When phrases are combined by function application, the functor phrase
selects the relevant layer of its own and its argument's translations to
produce an appropriate translation for the result. One advantage of
this approach is that a single entry for "is" covers all the uses of
the copula and one entry for a passive verb form gives the correct
translation for both agentless passives and full passives.
------------------------------
Date: Thu, 19 May 88 12:44 EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI/Education seminar -- Miriam Reiner.
BBN Science Development Program
AI Seminar Series Lecture
LABORATORY-BASED CONCEPTUAL CONFLICT, AND EXPLANATORY SIMULATIONS
AS A CATALYST FOR RESTRUCTURING PHYSICS KNOWLEDGE.
Miriam Reiner
Learning Research and Development Center (LRDC),
University of Pittsburgh
BBN Labs
10 Moulton Street
2nd floor large conference room
10:30 am, Tuesday May 24
The aim of this study is to identify the restructuring processes of
students' pre-science explanatory frameworks in the field of light.
The study considers restructuring resulting both from conceptual
conflicts and comparisons of experimental findings with results
predicted by explanatory interactive simulations. After identifying
the conceptual frameworks, learning-induced changes in students'
concepts were identified. This was done by means of a series of
laboratory experiments in which the IBM PC was used for real-time
analysis of experimental data. The data were represented in two forms
-- empirical and analytical. The final stage of the study dealt with
the establishment of new conceptual frameworks based on an analogy
between microwaves and light. A series of experiments and explanatory
simulations of microwaves has been developed. The predictions made by
the simulation are compared by students to results of real laboratory
experiments on light. Results and details will be presented.
------------------------------
Date: Mon, 23 May 88 19:31 EDT
From: Emma Pease <emma@csli.stanford.edu>
Subject: CSLI Calendar addition
CSLI COLLOQUIUM
Representation versus Interpretation
J. E. Fenstad
University of Oslo, Norway
Cordura Conference Room, 4:15, May 26
One basic assumption of the Montague approach is the compositionality
principle, i.e., the existence of a homomorphism from the "syntactic"
algebra to the "semantic" algebra. But various problematic aspects of
the "pull-back" from interpretation to linguistic forms argue for an
independent representational level. Another problematic aspect of the
Montague model is the extreme "constructionalism" of the approach,
i.e., everything is constructed by abstraction from individuals and
truth-values. In the talk I will give a survey of some recent work in
Oslo related to these problems.
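As a minimal illustration of the compositionality principle (my own toy
example in Python, not from the talk), interpretation is a homomorphism
when the meaning of a complex expression is computed by a fixed rule from
the meanings of its immediate parts:

# Toy illustration of compositionality: the interpretation of a syntax tree
# is determined by the interpretations of its parts and the way they combine.
LEXICON = {"two": 2, "three": 3, "plus": lambda a, b: a + b}

def interpret(tree):
    """Interpret a tree of the form word | (operator, left, right)."""
    if isinstance(tree, str):
        return LEXICON[tree]
    op, left, right = tree
    return LEXICON[op](interpret(left), interpret(right))

print(interpret(("plus", "two", ("plus", "three", "three"))))  # 8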
------------------------------
End of NL-KR Digest
*******************