Copy Link
Add to Bookmark
Report

NL-KR Digest Volume 05 No. 12

eZine's profile picture
Published in 
NL KR Digest
 · 1 year ago

NL-KR Digest             (8/31/88 18:45:37)            Volume 5 Number 12 

Today's Topics:
English grammar (open/closed classes)
Re: Category Theory in AI
Re: Chomsky reference
Speech recognition with neural nets
Model-theoretic semantics

Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Sun, 14 Aug 88 01:34 EDT
From: Stephen D. Crocker <crocker@tis-w.arpa>
Subject: open versus closed classes of words in English grammar

McGuire replied to Nagle's query about open versus closed classes of
words in English grammar, viz nouns, verbs, adjectives and adverbs are
open and conjunctions, articles, prepositions, etc. are closed. He then
comments:

> While I'm familiar with this distinction, and think that it may have
> been around in linguistics for quite some while (Bernard Bloch maybe?),
> I don't remember it being used much. The only references that spring to
> mind are some studies in speech production and slips of the tongue done
> in the 70s by Anne Cunningham (she's a Brit though I'm not sure of her
> last name) and maybe Victoria Fromkin claiming that less errors are
> associated with closed class words and that they play some privileged role
> in speech_production/syntax/lexical_access/the_archetecture_of_the_mind.

I recall in the mid or late 60's reading about a parser built in the UK that
relied heavily on the closed classes -- I think the term was "functions words".
I believe the parser determined which class the other words were in, noun,
verb, etc., solely by the slots created from the function words. To that
parser, McGuire's four example sentences would be equivalent to

"Foo frobbed fie"
"Foo has frobbed fie"
"Foo might frob fie"
"Foo fums to frob fie"

The parser was exceedingly fast, but I don't remember any follow up from
this work. If pressed, I can probably find a reference, but I suspect
many readers of this digest are more familiar with the work than I.

In the speech understanding work of the early 70's, I found it interesting
that the functions words played a lesser role than might have been expected
because they tended to be unstressed when spoken and hence reduced in duration
and clarity. I don't recall whether they played a major role in any of the
later systems. It's evident that humans depend on these words and learn
new open class words from context created by a combination of the closed
class words and known meanings for the open class words elsewhere in the
sentence. This suggests that one attribute to look for in truly mature
speech understanding systems is reliable "hearing" of function words. I'd
be interested if anyone knows the current status of speech understanding
in this area.

Along somewhat separate lines, Balzer at ISI built a rudimentary parser for
English in the early 70's. It was aimed at extracting formal program specs
from an English specification. His key example was based heavily on
interpeting the closed classes and treating the open classes as variables.

------------------------------

Date: Mon, 15 Aug 88 17:42 EDT
From: HILLS%reston.unisys.com@RELAY.CS.NET
Subject: Re: English Grammar

In AI List V8 #35 John Nagle described a grammar which divided words into
four catagories and requested a reference for the list of 'special' words.

This may be related to the work of Miller, Newman, and Friedman of Harvard.
In 1958 they proposed that words should be divided into two classes which they
defined as follows:

We will call these two classes the "function words" and the "content
words"
. Function words include those which are traditionally called
articles, prepositions, pronouns, conjunctions, and auxillary verbs,
plus certain irregular forms. The function words have rather specific
syntactic functions which must, by and large, be known individually
to the speaker of English. The content words include those which are
traditionally called nouns, verbs, and adjectives, plus most of the
adverbs. It is relatively easy to add new content words to a language,
but the set of function words is much more resistant to inovations.


The list of function words is included in the book: 'Elements of Software
Science' by Maurice H. Hallstead, Elsevier, 1977. This list contains about
330 words. I suspect that the list of 'special words' sought by Nagle is
contained within this list of function words.

-- Fred Hills

------------------------------

Date: Tue, 23 Aug 88 11:39 EDT
From: GA3662%SIUCVMB.BITNET@CUNYVM.CUNY.EDU
Subject: English grammar (open/closed classes)

From: ga3662@siucvmb ('Geoff Nathan' to humans)

Further to John Nagle's question about the concepts of open and closed
classes. An excellent summarry of the set and concept for English can be
found in an ancient book: Charles C. Fries. The structure of English.
Harcourt Brace. 1952. This is a classic structuralist description of
English, with things like 'class A, B, 1, 2' etc. replacing traditional
'noun, verb' etc. labels. The distinction is also discussed in such works
as Gleason (Intro. to Descriptive Linguistics, and Linguistics and English
Grammar). While it has no formal place in most versions of generative
grammar, it does figure in Montague semantics (albeit indirectly).
Generally, open class items are not provided with a translation, except
something like the meaning of 'horse' is {horse'} (notation changed because
of the limitations of this keyboard). On the other hand, closed class
members such as 'the', 'is', 'may' etc. will get some translation into
intensional logic, complete with lambdas etc. Some would say that the
distinction is not useful for synchronic descriptions, since it is merely a
predictor of where new words are likely to come from. Further to Mcguire's
discussion, closed class members are *sometimes* terminal symbols in a PS
grammar (as opposed to being inserted under category symbols like N, V etc.)
The references to psycholinguistic investigations about this topic are by
Ann Cutler. Some of her work may be found in Linguistic Inquiry in articles
dealing with what she calls the 'mental lexicon'. Joan Bybee's book
'Morphology' suggests some semantic reasons why languages might put certain
categories in closed classes and others in open classes.

--
Geoff Nathan (ga3662@siucvmb)
Human Address: Department of Linguistics,
Southern Illinois University
Carbondale, IL, 62901

------------------------------

Date: Thu, 18 Aug 88 06:23 EDT
From: Jack Campin <jack@cs.glasgow.ac.uk>
Subject: Re: Category Theory in AI

geddis@atr-la.atr.junet (Donald F. Geddis) wrote:
>>dpb@philabs.philips.com (Paul Benjamin) writes:
>> Some of us here at Philips Laboratories are using universal
>> algebra, and more particularly category theory, to formalize
>> concepts in the areas of representation, inference and
>> learning.

>I'm familiar with those areas of AI, but not with category theory (or
>universal algebra, for that matter). Can anyone give a short summary for the
>layman of those two mathematical topics? And perhaps a pointer as to how
>they might be useful in formalizing certain AI concepts. Thanks!

A short summary is tricky without knowing your mathematical background and
maybe impossible for a real honest-to-goodness layman. A good book to start
with is Herrlich and Strecker's, but if you don't know what a group is, forget
it. Arbib and Manes' "Arrows, Structures and Functors" is also OK, but mainly
applies it to automata theory (not a booming enterprise these days).

Category theory generalizes the notions of "set" and "function", or more
generally "mathematical structure" and "mapping that preserves that structure"
(where the structures might be, say, n-dimensional Euclidean spaces, and the
mappings projections, embeddings and other distance-preserving functions).

Its aim is to describe classes of mathematical object (groups, topological
spaces, partially ordered sets, ...) by looking at the maps between them, and
then to describe relationships between these classes. It captures a lot of
otherwise indescribable mathematical notions of "unique" or "natural" objects
or maps in a class (the empty set, Descartes' construction of the Euclidean
plane as the "product" of two lines, the class of all possible strings in an
alphabet, ...).

The major application of it to computer science so far is in the semantics
of higher-order polymorphic type systems (which can't be described in set
theory). David Rydeheard and Rod Burstall have just published a book
"Computational Category Theory" that describes categorical constructions
algorithmically (in Standard ML) and has a useful bibliography.

But a lot of computer science literature that uses category theory does not
do so in an essential way; the commutative diagrams are just there to give
the authors some mathematical street cred.

I can't imagine what category theory has to contribute to knowledge
representation (though I can just about imagine it helping to describe
neural nets in a more abstract way). Can the philabs people say more
about what they're up to?

--
ARPA: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk USENET: jack@cs.glasgow.uucp
JANET:jack@uk.ac.glasgow.cs useBANGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack
Mail: Jack Campin, Computing Science Dept., Glasgow Univ., 17 Lilybank Gardens,
Glasgow G12 8QQ, SCOTLAND work 041 339 8855 x 6045; home 041 556 1878

------------------------------

Date: Mon, 22 Aug 88 11:35 EDT
From: Paul Benjamin <dpb@hen3ry.Philips.Com>
Subject: Re: Category Theory in AI


In article <1572@crete.cs.glasgow.ac.uk> jack@cs.glasgow.ac.uk (Jack Campin) writes:
>I can't imagine what category theory has to contribute to knowledge
>representation (though I can just about imagine it helping to describe
>neural nets in a more abstract way). Can the philabs people say more
>about what they're up to?

Well, not really, in a public forum. But Mike Lowry of the Kestrel
Institute has pointed out that a representation can be viewed as
a category, and a shift of representation as a morphism. The
question of whether this insight is very productive is open, but at
least it gives us a formal notion of representation, and we've
built on this some formal notions of abstraction and learning.
We'll let you know if this turns out to be fruitful.

Paul Benjamin

------------------------------

Date: Fri, 19 Aug 88 19:13 EDT
From: T. William Wells <bill@proxftl.UUCP>
Subject: Re: Chomsky reference


In article <4044@pdn.UUCP> colin@pdn.UUCP (Colin Kendall) writes:
: In article <573@proxftl.UUCP>, bill@proxftl.UUCP (T. William Wells) writes:
: > In article <6942@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:
: > :
: > : ... There is no evidence that the well-formedness judgments
: > : which people actually make are independent of semantics.
: >
: > People seem to be able to assign syntactic structure to those
: > Lewis Carroll poems.
:
: Only insofar as the semantics may be guessed at. Let's examine the
: famous opening lines from the most famous poem, 'Jabberwocky':

Which opens the can of worm labeled: "What is semantics?"

As the rest of your posting describes, some knowledge of the
world is assumed when assigning syntax. I distinguish the
semantics associated with the syntactic function of words (being
a noun, verb, adjective, etc.) from the semantics associated with
the meanings of the words. (B.T.W. Those categories are much
too coarse to describe the actual syntactic categories of words;
I'd suggest that, for example, each kind of verb form,
distinguished by its objects taken and the basic kind of action
being described, is a separate syntactic category.)

I think of this kind of semantic information as syntactic. I do
believe that we make use of semantics while forming syntactic
judgements in order to eliminate the ambiguity that would
otherwise result from the exclusion of that information.

A legitimate counter-argument supposes that, in order to properly
assign a syntactic structure to a sentence, one must elaborate
these categories to the extent that each contains only one word
(actually concept). In that case, the distinction I make is
meaningless.

------------------------------

Date: Tue, 23 Aug 88 14:32 EDT
From: Antti Ylikoski <ayl%hutds.hut.fi%FINGATE.BITNET@MITVMA.MIT.EDU>
Subject: Speech recognition with neural nets

In AIList Digest V8 #63,
att!chinet!mcdchg!clyde!watmath!watvlsi!watale!dixit@bloom-beacon.mit.edu
(Nibha Dixit) writes:

>Subject: Speech rec. using neural nets

>Is anyody out there looking at speech recognition using neural
>networks? There has been some amount of work done in pattern
>recognition for images, but is there anything specific being done
>about speech?

In the Helsinki University of Technology, in the Department of Technical
Physics, the group of Professor Teuvo Kohonen has been studying the
usage of neural nets for speech recognition for several years.

Professor Kohonen gave a talk on their results in the Finnish AI
symposium in this year. They have an experimental system which uses a
neural net board in a PC. I cannot remember whether the paper is
written in English or in Finnish, but should you wish to get the
symposium proceedings, contact

Finnish Artificial Intelligence Society (FAIS)
c/o Dr Antti Hautamaeki
HM & V Research
Helsinki, Finland

I understand Kohonen's results are comparable to other approaches to speech
recognition.

--- Andy

------------------------------

Date: Fri, 26 Aug 88 17:43 EDT
From: kurt geisel <kgeisel@nfsun.UUCP>
Subject: Re: Speech rec. using neural nets


Teuvo Kohonen describes success at Helsinki University with a
speaker-independent neural system which recognizes phonemes (the box spits
out phonemes, not words - you would still need a sophisticated parsing stage)
in the article "The 'Neural' Phonetic Typewriter" in the March 1988 issue of
the IEEE's _Computer_.

+--------------------------------------------------------------------------+
| Kurt Geisel, Intelligent Technology Group, Inc. |
| Bix: kgeisel |
| ARPA: kgeisel%nfsun@uunet.uu.net US Snail: |
| UUCP: uunet!nfsun!kgeisel 65 Lambeth Dr. |
| Pittsburgh, PA 15241 |
| If a rule fires and no one sees it, did it really fire? |
+--------------------------------------------------------------------------+

------------------------------

Date: Sun, 28 Aug 88 21:05 EDT
From: Kai-Fu.Lee@SPEECH2.CS.CMU.EDU
Subject: Speech rec. using neural nets

In response to Nibha Dixit's question about speech recognition using
neural networks, I would recommend the following two articles by
Richard Lippmann:

An Introduction to Computing with Neural Nets, IEEE ASSP Magazine,
Vol. 4, No. 2, April 1987.
Neural Nets for Computing, IEEE International Conference on Acoustics,
Speech, and Signal Processing (ICASSP), April, 1988.

The ICASSP conference proceedings contain quite a few interesting
papers on speech recognition with neural networks.

Kai-Fu Lee
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

------------------------------

Date: Sun, 28 Aug 88 03:06 EDT
From: Ching-Yuan Tsai <ching@uhccux.uhcc.hawaii.edu>
Subject: Model-theoretic semantics

I am interested in knowing whether model-theoretic semantics can
be used adequately as a tool to describe the meaning of natural
languages. At this moment, my opinion is that it cannot because there
is so much in human languages that cannot be formalized. For
instance, presupposition and propositional attitudes have long been
controversial and problematic; and I believe they are still not solved
to any satisfactory extent. In fact, I would say pragmatics is an
attempt to compensate for the inadequacies of model-theoretic
semantics in dealing with natural languages.

The main attraction of model-theoretic semantics is its
formalization and rigidness. And this alone makes a lot of people
believe that they can use it to provide a theory of truth and meaning
for natural languages.

My opinion above is vague and maybe biased, but I would like to
hear any comments, pros or cons, related to model-theoretic semantics.
So far, I could only find three papers sharing my view, and they are
listed below:

LePore, Ernest. 1983. What model theoretic semantics cannot do?
Synthese 54, pp. 167-187.

Jardine, Nicholas. 1975. Model theoretic semantics and natural
language. In Edward L. Keenan ed. Formal Semantics of Natural
Language, pp. 219-240. Cambridge: Cambridge Univ. Press.

Potts, Timothy C. 1975. Model theory and linguistics. In
Edward L. Keenan ed. Formal Semantics of Natural Language, pp.
241-250. Cambridge: Cambridge Univ. Press.


========================================================================

Bitnet: ching@uhccux.bitnet
Internet: ching@uhccux.uhcc.hawaii.edu

---- Ching-yuan Ken Tsai

------------------------------

Date: Sun, 28 Aug 88 09:37 EDT
From: Greg Lee <lee@uhccux.uhcc.hawaii.edu>
Subject: Re: Model-theoretic semantics


From article <2309@uhccux.uhcc.hawaii.edu>, by ching@uhccux.uhcc.hawaii.edu (Ching-Yuan Tsai):

> I am interested in knowing whether model-theoretic semantics can
>be used adequately as a tool to describe the meaning of natural
>languages.

The model-theoretic theories that I have seen are more appropriately
described as theories of types than as theories of semantics, since they
make reference to nothing other than types in the interpretations of
expressions. That makes them a species of syntactic theory, I take it,
so of course they are not adequate to describe meaning.

> At this moment, my opinion is that it cannot because there
>is so much in human languages that cannot be formalized.

Whatever reservations one might have about model-theoretic "semantics",
I don't think *this* is a problem. To the extent that facts of language
are known, they can be assigned a representation. To the extent that
a theory of the facts is specific, it can be formalized. Whether
there is any point to formalization is another matter. If you mean
that there is much in human languages that is not known or not
understood, of course that's true enough.

>... In fact, I would say pragmatics is an
>attempt to compensate for the inadequacies of model-theoretic
>semantics in dealing with natural languages.

Supposing 'semantics' to concern the reference of expressions, and
'pragmatics' to concern this as well as how people use expressions,
what you say here is true by definition, because the domain of pragmatics
properly includes that of semantics. You've just managed to phrase
it in a pejorative way.

> The main attraction of model-theoretic semantics is its
>formalization and rigidness. ...

I don't think model-theoretic semantics is formalized in any but
an occasional and incidental way. The form of the statements in
such theories has no special significance, so far as I know.
What do you mean by "rigidness"? If you mean that the theories
are somehow constrained so as to have empirical force, well, I
don't think so. Did you ever notice anyone propose a counterexample
to Montague Grammar? The general theory, I mean, as opposed to
example applications, such as the one in PTQ. (But if the
compositionality assumption were taken seriously, any multi-morpheme
idiom would serve as a counterexample, as would the existence
of subject-verb agreement.)

Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: Mon, 29 Aug 88 11:14 EDT
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: Model-theoretic semantics


In response to ching@uhccux.UUCP (Ching-Yuan Tsai):

The following is a major work that takes a stand against the use of model
theoretic semantics:

George Lakoff. 1987. Women, Fire, and Dangerous Things. Chicago
University Press.

--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: uw-beaver!ssc-vax!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

End of NL-KR Digest
*******************

← previous
next →
loading
sending ...
New to Neperos ? Sign Up for free
download Neperos App from Google Play
install Neperos as PWA

Let's discover also

Recent Articles

Recent Comments

Neperos cookies
This website uses cookies to store your preferences and improve the service. Cookies authorization will allow me and / or my partners to process personal data such as browsing behaviour.

By pressing OK you agree to the Terms of Service and acknowledge the Privacy Policy

By pressing REJECT you will be able to continue to use Neperos (like read articles or write comments) but some important cookies will not be set. This may affect certain features and functions of the platform.
OK
REJECT