AIList Digest            Friday, 14 Oct 1983       Volume 1 : Issue 77 

Today's Topics:
Natural Language - Semantic Chart Parsing & Macaroni & Grammars,
Games - Rog-O-Matic,
Seminar - Nau at UMaryland, Diagnostic Problem Solving
----------------------------------------------------------------------

Date: Wednesday, 12 October 1983 14:01:50 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: "Semantic chart parsing"

I should have made it clear in my previous note on the subject that
the phrase "semantic chart parsing" is a name I've coined to describe a
parser which uses the technique of syntactic chart parsing, but includes
semantic information right from the start. In a way, it's an attempt to
reconcile Schank-style immediate semantic interpretation with syntactically
oriented parsing, since both sources of information seem worthwhile.
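[A rough sketch of the idea, for readers who want something concrete: a
bottom-up chart parser whose edges carry a semantic value, with a semantic
compatibility test applied the moment two edges are combined. The grammar,
lexicon, and animacy test below are all invented for illustration; this is
not Frederking's actual system.]

```python
# Toy "semantic chart parsing": CKY-style chart parsing over binary
# rules, where every edge carries a semantic value and a semantic
# filter can veto a combination as soon as it is proposed.

GRAMMAR = {
    # (left-child category, right-child category) -> parent category
    ("Det", "N"): "NP",
    ("NP", "VP"): "S",
    ("V", "NP"): "VP",
}

LEXICON = {
    "the":  ("Det", None),
    "dog":  ("N", "dog"),
    "bone": ("N", "bone"),
    "ate":  ("V", "eat"),
}

ANIMATE = {"dog"}  # hypothetical semantic constraint: eaters are animate

def sem_ok(parent, left_sem, right_sem):
    """Reject semantically anomalous combinations immediately."""
    if parent == "S":                 # the subject of S must be animate
        return left_sem in ANIMATE
    return True

def combine(left_sem, right_sem):
    # Toy semantics: keep whichever child contributed content.
    return right_sem if left_sem is None else (left_sem, right_sem)

def parse(words):
    n = len(words)
    # chart[i][j] = set of (category, semantics) edges spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1].add(LEXICON[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lc, ls) in chart[i][k]:
                    for (rc, rs) in chart[k][j]:
                        parent = GRAMMAR.get((lc, rc))
                        if parent and sem_ok(parent, ls, rs):
                            chart[i][j].add((parent, combine(ls, rs)))
    return chart[0][n]

print(parse("the dog ate the bone".split()))
```

["the dog ate the bone" yields an S edge; "the bone ate the dog" is blocked
by the semantic filter before an S edge is ever built, which is the point of
mixing the two information sources.]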

------------------------------

Date: Wednesday, 12-Oct-83 17:52:33-BST
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Natural Language


There was rather more inflammation than information in the
exchanges between Dr Pereira and What's-His-Name-Who-Butchers-
Leprechauns. Possibly it's because I've only read one or two
[well, to be perfectly honest, three] papers on PHRAN and the
others in that PHamily, but I still can't see why it is that
their data structures aren't a grammar. Admittedly they don't
look much like rules in an XG, but then rules in an XG don't
look much like an ATN either, and no-one has qualms about
calling ATNs grammars. Can someone please explain in words
suitable for a 16-year-old child what makes phrasal analysis
so different from
     XGs (Extraposition Grammars; include DCGs in this),
     ATNs,
     Marcus-style parsers, and
     template-matching
that it is hailed as "solving" the parsing problem?
I have written grammars for tiny fragments of English in DCG,
ATN, and PIDGIN styles [the adverbs get me every time]. I am not
a linguist, and the coverage of these grammars was ludicrously
small. So my claim that I found it vastly easier to extend and
debug the DCG version [DCGs are very like EAGs] will probably be
dismissed with the contempt it deserves. Dr Pereira has published
his parser, and in other papers has published an XG interpreter.
I believe a micro-PHRAN has been published, and I would be grateful
for a pointer to it. Has anyone published a phrasal-analysis
grimoire (if the term "grammar" doesn't suit) with say >100 "things"
(I forget the right name for the data structures), and how can I
get a copy?
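[The sense in which such rule sets are "just data structures interpreted by
a generic procedure" can be sketched in a few lines. DCGs live naturally in
Prolog; the toy recognizer below stands in for one in Python, and its rules
and lexicon are invented. The point is that phrasal patterns, XG rules, and
ATNs are all "grammars" in exactly this sense.]

```python
# Grammar rules as plain data, interpreted by one generic recognizer.

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Name"]],
    "VP": [["V", "NP"], ["V"]],
}

WORDS = {
    "Det":  {"the", "a"},
    "N":    {"cat", "mat"},
    "Name": {"Fiona"},
    "V":    {"sat", "saw"},
}

def recognize(cat, words, i):
    """Yield every position j such that words[i:j] is a `cat`."""
    if cat in WORDS:                       # terminal category
        if i < len(words) and words[i] in WORDS[cat]:
            yield i + 1
        return
    for body in RULES.get(cat, []):        # try each alternative body
        positions = [i]
        for sub in body:
            positions = [j2 for j in positions
                            for j2 in recognize(sub, words, j)]
        yield from positions

def is_sentence(text):
    words = text.split()
    return len(words) in recognize("S", words, 0)

print(is_sentence("the cat sat"))
print(is_sentence("Fiona saw the cat"))
```

[Extending the coverage means adding entries to RULES and WORDS, not
touching the interpreter -- which is roughly the property being asked
about above.]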

People certainly can accept ill-formed sentences. But they DO
have quite definite notions of what is a well-formed sentence and
what is not. I was recently in a London Underground station, and
saw a Telecom poster. It was perfectly obvious that it was written
by an Englishman trying to write in American. It finally dawned on
me that he was using American vocabulary and English syntax. At
first sight the poster read easily enough, and the meaning came through.
But it was sufficiently strange to retain my attention until I saw what
was odd about it. Our judgements of grammaticality are as sensitive as
that. [I repeat, I am no linguist. I once came away from a talk by
Gazdar saying to one of my fellow students, who was writing a parser:
"This extraposition, I don't believe people do that."] I suggest that
people DO learn grammars, and what is more, they learn them in a form
that is not wholly unlike [note the caution] DCGs or ATNs. We know that
DCGs are learnable, given positive and negative instances. [Oh yes,
before someone jumps up and down and says that children don't get
negative instances, that is utter rubbish. When a child says something
and is corrected by an adult, is that not a negative instance? Of course
it is!] However, when people APPLY grammars for parsing, I suggest that
they use repair methods to match what they hear against what they
expect. [This is probably frames again.] These repair methods range
all the way from subconscious signal cleaning [coping with say a lisp]
to fully conscious attempts to handle "Colourless green ideas sleep
furiously". [Maybe parentheses like this are handled by a repair
mechanism?] If this is granted, some of the complexity required to
handle say ellipsis would move out of the grammar and into the repair
mechanisms. But if there is anything we know about human psychology,
it is that people DO have repair mechanisms. There is a lot of work
on how children learn mathematics [not just Brown & co], and it turns
out that children will go to extraordinary lengths to patch a buggy
hack rather than admit they don't know. So the fact that people can
cope with ungrammatical sentences is not evidence against grammars.

As evidence FOR grammars, I would like to offer Macaroni. Not
the comestible, the verse form. Strictly speaking, Macaroni is a
mixture of the vernacular and Latin, but since it is no longer
popular we can allow any mixture of languages. The odd thing about
Macaroni is that people can judge it grammatical or ungrammatical,
and what is more, can agree about their judgements as well as they
can agree about the vernacular or Latin taken separately. My Latin
is so rusty there is no iron left, so here is something else.

     [Prolog is]   [ho protos logos]   [en programmation logiciel]
       English           Greek                  French

This of course is (NP copula NP) PP, which is admissible in all
three languages, and the individual chunks are well-formed in their
several languages. The main thing about Macaroni is that when
two languages have a very similar syntactic class, such as NP,
a sentence which starts off in one language may rewrite that
category in the other language, and someone who speaks both languages
will judge it acceptable. Other ways of dividing up the sentence are
not judged acceptable, e.g.

Prolog estin ho protos mot en logic programmation

is just silly. S is very similar in most languages, which would account
for the acceptability of complete sentences in another language. N is
pretty similar too, and we feel no real difficulty with single isolated
words from other languages like "chutzpa" or "pyjama" or "mana". When
the syntactic classes are not such a good match, we feel rather more
uneasy about the mixture. For example, "[ka ora] [teenei tangata]"
and "[these men] [are well]" both say much the same thing, but because
the Maaori nominal phrase and the English noun phrase aren't all that
similar, "[teenei tangata] [are well]" seems strained.
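[The experiment proposed here can be simulated crudely: take two lexicons
that share one set of syntactic categories and run a single recognizer over
the merged vocabulary. A faithful model would permit a language switch only
at a shared-category boundary; this sketch merges at the lexical level for
brevity, and all words and rules are invented.]

```python
# Toy Macaroni recognizer: one shared rule set, two lexicons merged
# by category, so any constituent may be realized in either language.

SHARED_RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["Cop", "Adj"]],
}

LEXICONS = {
    "en": {"Det": {"the"}, "N": {"program"},
           "Cop": {"is"},  "Adj": {"correct"}},
    "fr": {"Det": {"le"},  "N": {"programme"},
           "Cop": {"est"}, "Adj": {"correct"}},
}

# Merge the lexicons category by category.
TERMS = {}
for lex in LEXICONS.values():
    for cat, ws in lex.items():
        TERMS.setdefault(cat, set()).update(ws)

def recognize(cat, words, i):
    """Yield every position j such that words[i:j] is a `cat`."""
    if cat in TERMS:
        if i < len(words) and words[i] in TERMS[cat]:
            yield i + 1
        return
    for body in SHARED_RULES[cat]:
        positions = [i]
        for sub in body:
            positions = [j2 for j in positions
                            for j2 in recognize(sub, words, j)]
        yield from positions

def macaroni_ok(text):
    words = text.split()
    return len(words) in recognize("S", words, 0)

print(macaroni_ok("le programme is correct"))   # French NP, English VP
print(macaroni_ok("the program est correct"))   # English NP, French VP
```

[Both mixed sentences are accepted because NP and VP line up across the two
lexicons; scrambling the words within a constituent still fails, as the
paragraph above predicts.]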

The fact that bilingual people have little or no difficulty with
Macaroni is just as much a fact as the fact that people in general have
little difficulty with mildly malformed sentences. Maybe they're the
same fact. But I think the former deserves as much attention as the
latter.
Does anyone have a parser with a grammar for English and a grammar
for [UK -> French or German; Canada -> French; USA -> Spanish] which use
the same categories as far as possible? Have a go at putting the two
together, and try it on some Macaroni. I suspect that if you have some
genuinely bilingual speakers to assist you, you will find it easier to
develop/correct the grammars together than separately. [This does not
hold for non-related languages. I would not expect English and Japanese
to mix well, but then I don't know any Japanese. Maybe it's worth trying.]

------------------------------

Date: Thu 13 Oct 83 11:07:26-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Dave Curry's request for a Simple English Grammar

I think the book "Natural Language Information
Processing" by Naomi Sager (Addison-Wesley, 1981) may be useful.
This book represents the results of the Linguistic String project
at New York University, and Dr. Sager is its director. The book
contains a BNF grammar of 400 or so rules for parsing English
sentences. It has been applied to medical text, such as
radiology reports and narrative documents in patient records.

Dave Wyland
WYLAND@SRI

------------------------------

Date: 11 Oct 83 19:41:39-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: utah-cs.1994

(Oh no, here he goes again! and with his water-cooled keyboard too!)

Yes, analysis of syntax alone cannot possibly work - as near as I can
tell, syntax-based parsers need an enormous amount of semantic processing,
which seems to be dismissed as "just pragmatics" or whatever. I'm
not an "in" member of the NLP community, so I haven't been able to
find out the facts, but I have a bad feeling that some of the well-known
NLP systems are gigantic hacks, whose syntactic analyzer is just a bag
hanging off the side, but about which all the papers are written. Mind
you, this is just a suspicion, and I welcome any disproof...

stan the l.h.
utah-cs!shebs

------------------------------

Date: 7 Oct 83 9:54:21-PDT (Fri)
From: decvax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: rayssd.187

date: 10/7/83

Yesterday I sent a suggestion that you look at Winograd's
new book on syntax. Upon reflection, I realized that there are
several aspects of syntax not clearly stated therein. In particular,
there is one aspect which you might wish to think about, if you
are interested in building models and using the 'expectations'
approach. This aspect has to do with the synergism of syntax and
semantics. The particular case which occurred to me is an example
of the specific ways that Latin grammar terminology is inappropriate
for English. In English, there is no 'present' tense in the intuitive
sense of that word. The stem of the verb (which Winograd calls the
'infinitive' form, in contrast to the traditional use of this term to
signify the 'to+stem' form) actually encodes the semantic concept
of 'indefinite habitual'. Thus, to say only 'I eat.' sounds
peculiar. When the stem is used alone, we expect a qualifier, as in
'I eat regularly', or 'I eat very little', or 'I eat every day'. In
this framework, there is a connection with the present, in the sense
that the process described is continuous, has existed in the past,
and is expected to continue in the future. Thus, what we call the
'present' is really a 'modal' form, and might better be described
as the 'present state of a continuing habitual process'. If we wish
to describe something related to our actual state at this time,
we use what I think of as the 'actual present', which is 'I am eating'.
Winograd hints at this, especially in Appendix B, in discussing verb
forms. However, he does not go into it in detail, so it might help
you understand better what's happening if you keep in mind the fact
that there exist specific underlying semantic functions being
implemented, which are in turn based on the type of information
to be conveyed and the subtlety of the distinctions desired. Knowing
this at the outset may help you decide the elements you wish to
model in a simplified program. It will certainly help if you
want to try the expectations technique. This is an ideal situation
in which to try a 'blackboard' type of expert system, where the
sensing, semantics, and parsing/generation engines operate in
parallel. Good luck!
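[The habitual/actual distinction described above can be made concrete with
a toy generator. The rules and the -ing suffixing below are deliberately
naive and invented for illustration only.]

```python
# Two readings of the English "present", rendered from a semantic
# choice rather than from Latin-style tense labels.

def present(subject, stem, aspect, qualifier=None):
    """Render the 'present' of `stem` for the given aspect."""
    if aspect == "habitual":
        # Bare stem encodes the indefinite habitual; a qualifier such
        # as "every day" is what makes it sound natural.
        parts = [subject, stem] + ([qualifier] if qualifier else [])
        return " ".join(parts)
    if aspect == "actual":
        # The "actual present" is the progressive (be + stem-ing).
        be = {"I": "am", "you": "are", "we": "are",
              "they": "are"}.get(subject, "is")
        return f"{subject} {be} {stem}ing"   # naive -ing; fine for 'eat'
    raise ValueError(f"unknown aspect: {aspect}")

print(present("I", "eat", "habitual", "every day"))  # I eat every day
print(present("I", "eat", "actual"))                 # I am eating
```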

A final note: if you would like to explore further a view
of grammar which totally dispenses with the terms and concepts of
Latin grammar, you might read "The Languages of Africa" (I think
that's the title), by William Welmer.

By the way! Does anyone out there know if Welmer ever published
his fascinating work on the memory of colors as a function of time?
Did it at least get stored in the archives at Berkeley?

Asa Simmons
rayssd!asa

------------------------------

Date: Thursday, 13 October 1983 22:24:18 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Total Winner


                     [ASCII-art victory banner]


Well, thanks to the modern miracles of parallel processing (i.e. using
the UUCPNet as one giant distributed processor) Rog-O-Matic became an
honest member of the Fighter's guild on October 10, 1983. This is the
fourth total victory for our Heuristic Hero, but the first time he has
done so without using a "Magic Arrow". This comes only a year and two
weeks after his first total victory. He will be two years old on
October 19. Happy Birthday!

Damon Permezel of Waterloo was the lucky user. Here is his announcement:

- - - - - - - -
Date: Mon, 10 Oct 83 20:35:22 PDT
From: allegra!watmath!dapermezel@Berkeley
Subject: total winner
To: mauldin@cmu-cs-a

It won! The lucky SOB started out with armour class of 1 and a (-1,0)
two handed sword (found right next to it on level 1). Numerous 'enchant
armour' scrolls were found, as well as a +2 ring of dexterity, +1 add
strength, and slow digestion, not to mention +1 protection. Luck had an
important part to play, as initial confrontations with 'U's got him
confused and almost killed, but for the timely stumbling onto the stairs
(while still confused). A scroll of teleportation was seen to be used to
advantage once, while it was pinned between 2 'X's in a corridor.
- - - - - - - -
Date: Thu, 13 Oct 83 10:58:26 PDT
From: allegra!watmath!dapermezel@Berkeley
To: mlm@cmu-cs-cad.ARPA
Subject: log

Unfortunately, I was not logging it. I did make sure that there
were several witnesses to the game, who could verify that it (It?)
was a total winner.
- - - - - - - -

The paper is still available; for a copy of "Rog-O-Matic: A Belligerent
Expert System", please send your physical address to "Mauldin@CMU-CS-A"
and include the phrase "paper request" in the subject line.

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
(412) 578-3065, mauldin@cmu-cs-a.

------------------------------

Date: 13 Oct 83 21:35:12 EDT (Thu)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: University of Maryland Colloquium

University of Maryland
Department of Computer Science
Colloquium

Monday, October 24 -- 4:00 PM
Room 2324 - Computer Science Building


A Formal Model of Diagnostic Problem Solving


Dana S. Nau
Computer Science Dept.
University of Maryland
College Park, Md.


Most expert computer systems are based on production rules, and to
some readers the terms "expert computer system" and "production rule
system" may seem almost synonymous. However, there are problem domains
for which the usual production rule techniques appear to be inadequate.

This talk presents a useful alternative to rule-based problem
solving: a formal model of diagnostic problem solving based on a
generalization of the set covering problem, and formalized algorithms
for diagnostic problem solving based on this model. The model and the
resulting algorithms have the following features:
(1) they capture several intuitively plausible features of human
diagnostic inference;
(2) they directly address the issue of multiple simultaneous causative
disorders;
(3) they can serve as a basis for expert systems for diagnostic problem
solving; and
(4) they provide a conceptual framework within which to view recent
work on diagnostic problem solving in general.
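[The set-covering idea can be sketched in a few lines. This is an
illustration of the generalized set covering formulation, not Nau's actual
algorithm, and the disorder and manifestation names are invented: each
disorder covers a set of manifestations, and a diagnosis is a smallest set
of disorders that jointly covers everything observed.]

```python
from itertools import combinations

CAUSES = {  # disorder -> manifestations it can cause (invented data)
    "d1": {"fever", "cough"},
    "d2": {"rash"},
    "d3": {"fever", "rash"},
}

def diagnoses(observed):
    """Return all minimum-cardinality sets of disorders covering
    every observed manifestation."""
    disorders = list(CAUSES)
    for size in range(1, len(disorders) + 1):
        covers = [set(combo)
                  for combo in combinations(disorders, size)
                  if observed <= set().union(*(CAUSES[d] for d in combo))]
        if covers:          # smallest size with any cover wins
            return covers
    return []

print(diagnoses({"fever", "cough", "rash"}))
```

[Note that for {fever, cough, rash} no single disorder suffices, so the
answer is a set of two-disorder covers -- feature (2) above, multiple
simultaneous causative disorders, falls out of the formulation directly.]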

Coffee and refreshments - Rm. 3316 - 3:30
------------------------------

End of AIList Digest
********************
