NL-KR Digest             (2/26/87 17:19:16)            Volume 2 Number 10 

Today's Topics:
For the Record:
Seminar: Interjections: an Initial Inquiry
Seminar: Commonsense and Nonmonotonic Reasoning
From CSLI Calendar, Feb. 19, No.17
Seminar: Why Linguists Should Consider Working in Machine Translation
Seminar: From Spoken Words to Sentences

book announcement - Knowledge Systems and Prolog

----------------------------------------------------------------------

Date: Mon, 16 Feb 87 21:30:11 PST
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science Program)
Subject: Seminar: Interjections: an Initial Inquiry

BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237B

Tuesday, February 24, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30, 2515 Tolman Hall

``Interjections: an initial inquiry''
Richard A. Rhodes
Linguistics Department
UC Berkeley

Interjections have long been neglected in theoretical
linguistics, because they have little or no morphology and they
do not participate in syntactic relations. However, the study
of interjections sheds light on the two other major areas of
language structure: phonology and semantics.
The fact that the class of interjections is asyntactic means
that the connection between the phonological shape of an
interjection and its meaning is often direct. In fact the class
of interjections draws on a wider range of phonological
resources than the rest of the lexicon, yet it does not exploit
the resources it does employ in a paradigmatic way. This has
important consequences for a theory of phonology. The phonetic
material of which interjections consist ranges from `tame'
sounds, which are used to build the normal sort of lexical
material, through `wild' sounds, which speakers cannot generally
control other than in the interjections in which they occur.
This strongly supports the position that at least some forms are
stored at very low phonological levels. An even more important
result for phonological theory is that the evidence from
interjections shows that establishing phonological contrasts
rests on a pretheoretical judgment, built into all theories of
phonology either explicitly or implicitly, that only the tame
sounds matter.
An important consequence of the discovery of wildness for
cognitive theory is that, since `wild' sounds do not participate
fully in phonological systems, they must be at least partially
uncategorized.
Semantically, interjections fall into three classes:
expressive, relational, and predicational. Expressive
interjections convey information about the (emotional) state of
the speaker, e.g. ouch, whew. Relational interjections mediate
between speaker and listener, e.g. huh?, yes. Predicational
interjections render comment, e.g. yuck, uh oh. There is, of
course, significant pragmatic complication to this simple
taxonomy, but only by recognizing such distinctions can the
meanings of interjections be understood. Thus interjections are
semantically related to performatives and hence to the
linguistic notion of person.
A second semantically based fact about interjections is that
they are iconic in form. For example, ouch is louder and/or
longer for more intense pain. Again this points to further
non-categorized aspects of language.
---------------------------------------------------------------
UPCOMING TALKS
Feb 24: David Touretzky, Computer Science, Carnegie-Mellon
University (see attached announcement)
Mar 10: Fred Dretske, Philosophy, University of Wisconsin;
currently at UC Berkeley
Mar 31: Terry Sejnowski, Biophysics, Biology, Johns Hopkins;
California Institute of Technology
---------------------------------------------------------------
ELSEWHERE ON CAMPUS
SESAME Colloquium: Dana West of EMST will speak on "User
Knowledge Representation in the Design of Computer Instruction"
on Monday, Feb. 23 at 4:00 p.m. in 2515 Tolman.

------------------------------

Date: 18 Feb 87 0107 PST
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Commonsense and Nonmonotonic Reasoning Seminar

LOCALITY IN NON-MONOTONIC THEORIES

Benjamin Grosof

Thursday, February 19, 4pm
Bldg. 160, Room 161K

We define a new non-monotonic logical formalism that combines
what we argue to be the advantages of several previous ones. It
directly expresses both default beliefs, e.g. ``believe a bird
flies, unless you have information to the contrary'', and
preferences, a.k.a. priorities, among default beliefs, e.g.
``believe that ostriches do not fly more strongly than you
believe that birds do fly''. It has a strong semantics based on
first- and second-order logic, and maintains a unique theory,
unlike some non-monotonic formalisms that often split into a
large number of multiple extensions. It treats defaults and
priorities (and also ``fixture'' axioms) on a uniform footing
with ordinary, non-retractable axioms, so that, for example,
deriving them is well defined. The new formalism is most closely
related to circumscription.

Except for work on inheritance networks, previous work has mainly
treated non-monotonic theories as monoliths without much internal
structure.
We argue that locality is crucial in dealing with the central
difficulty: context-dependence, and conflicting interactions, among
beliefs. We define modular DECOMPOSITIONS of (non-monotonic) theories,
hierarchically, into sub-theories. The essential idea is to summarize
the relevant external context of a MODULE (a set of axioms, including
defaults), via imported and exported ``declarations'', in a manner
analogous to programming languages. We then discuss reformulating and
updating non-monotonic theories. We focus on PARTIAL
monotonicities. We suggest the advantages of partial monotonicities,
decompositions, and reformulations: 1) for structuring specifications,
e.g. knowledge engineering in large knowledge bases; 2) for designing
new inference algorithms; and 3) for improving the efficiency of
inference. All of this bears on a wide variety of
applications of non-monotonic theories.
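The priority idea in the abstract -- a stronger default overriding a
weaker one -- can be illustrated with a toy resolver. This sketch is
purely illustrative and is not Grosof's formalism; the data layout and
names are assumptions made for the example.

```python
# Each default: (priority, condition, conclusion). When two applicable
# defaults assert conflicting values for the same attribute, the one
# with the higher priority wins.
DEFAULTS = [
    (1, "bird",    ("flies", True)),    # default: birds fly
    (2, "ostrich", ("flies", False)),   # stronger: ostriches do not fly
]

def conclude(facts):
    """Apply the highest-priority applicable default per attribute."""
    beliefs, best = {}, {}
    for prio, cond, (attr, val) in DEFAULTS:
        if cond in facts and prio >= best.get(attr, 0):
            best[attr] = prio
            beliefs[attr] = val
    return beliefs

# An ostrich is also a bird; the higher-priority default prevails.
print(conclude({"bird", "ostrich"}))   # {'flies': False}
print(conclude({"bird"}))              # {'flies': True}
```

Unlike the proposed formalism, this toy keeps no semantics at all; it
only shows the override behavior the ostrich example describes.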

------------------------------

Date: Wed 18 Feb 87 17:32:47-PST
From: Emma Pease <Emma@CSLI.Stanford.EDU>
Subject: From CSLI Calendar, Feb. 19, No.17
Tel: (415) 723-3561

[Excerpted from CSLI Calendar]

Reading: "From Polysemy to Internal Semantic Change"
by Elizabeth C. Traugott
"On the Historical Relation between Mental and
Speech Act Verbs in English and Japanese"
(especially section 3)
by Elizabeth C. Traugott and Richard Dasher
Discussion led by Elizabeth C. Traugott
February 26

These two papers show that semantic change is highly regular. The
hypothesis that over time meanings tend to refer less to objective
situations and more to subjective ones, less to the described
situation and more to the discourse situation, is used to predict the
order of change and even reconstruct meaning changes backward from
synchronic polysemies in two domains: (i) scalar particles like
`just', `even', (ii) mental and speech act verbs such as `agree' and
`assume'.
I plan to discuss a slight revision of my hypothesis about semantic
change, and to focus on new evidence (provided largely by Kristin
Hanson) for the change from less to more subjective meanings in the
domain of modal verbs and adverbs.

------------------------------

Date: 19 Feb 1987 0820-EST
From: Rich Thomason <THOMASON@C.CS.CMU.EDU>
Subject: Seminar: Why Linguists Should Consider Working in Machine Translation

PITT LINGUISTICS DEPARTMENT SEMINAR

Time: 4:00 PM, Wednesday, Feb. 25
Place: 2818 Cathedral of Learning
Speaker: Sergei Nirenburg (Center for Machine Translation, CMU)
Title: "Why Linguists Should Consider Working
in Machine Translation (Again)"

Abstract

I will describe the state of the art in MT, in particular, the advent of
knowledge-based approaches and why it is better to go for a "comprehensive"
analysis of the meaning of the input text than to make do with shallow
analysis, even though the automation of the latter is much more immediately
feasible than the former. I will try to show how much of the work involved
in KBMT is really pure linguistic field work, albeit couched in such terms
that even computers could make use of its (the field work's) results.

------------------------------

Date: 24 Feb 87 16:03:40 EST
From: Rick.Kazman@cad.cs.cmu.edu
Subject: Seminar: From Spoken Words to Sentences

COMPUTATIONAL LINGUISTICS SEMINAR

Speaker: Alex Hauptmann, CMU
Date: Thursday, February 26
Time: 12:00 noon
Place: Porter Hall 125-C
Topic: From Spoken Words to Sentences

ABSTRACT:

In this talk I will describe the research of the speech
recognition group which deals with combining words into
sentences. Often several thousand candidate words are recognized
for a 5- to 10-word sentence. The problem is to pick the "right"
ones and make a sentence out of them. Compounding the problem are
completely missing words. Classic natural language parsing
techniques have failed in this environment.

First I will discuss the requirements placed on a natural
language parser that functions as part of a speech recognition
system. Then I will present some results and insights from using
case frame grammars, trigram scoring methods, and semantic
networks. Island-driven parsing is a bright star on the horizon,
but proves much more fragile in practice. Finally I will describe
our ideas for the future, and what is still missing.
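The trigram scoring the abstract mentions can be sketched as picking,
from the recognizer's candidate words at each position, the sequence
with the best trigram language-model score. The probabilities and the
exhaustive search below are made up for illustration; a real system
would estimate them from a corpus and search a much larger lattice.

```python
import math
from itertools import product

# Toy trigram probabilities (illustrative values, not from any corpus).
TRIGRAM_P = {
    ("<s>", "show", "me"): 0.8,
    ("show", "me", "flights"): 0.7,
    ("<s>", "shoe", "me"): 0.01,
    ("shoe", "me", "flights"): 0.01,
}

def trigram_score(words, floor=1e-6):
    """Sum of log trigram probabilities, with a floor for unseen trigrams."""
    padded = ["<s>", "<s>"] + list(words)
    return sum(math.log(TRIGRAM_P.get(tuple(padded[i:i + 3]), floor))
               for i in range(len(words)))

def best_sentence(lattice):
    """Exhaustively score every path through the word lattice (toy-sized)."""
    return max((list(path) for path in product(*lattice)),
               key=trigram_score)

# Candidate words per position, as a recognizer might propose them.
lattice = [["show", "shoe"], ["me"], ["flights"]]
print(best_sentence(lattice))  # ['show', 'me', 'flights']
```

The acoustically confusable "shoe" loses because its trigrams score far
lower, which is the sense in which trigram scoring picks the "right"
words.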

------------------------------

Date: 13 February 1987, 17:50:57 EST
From: Adrian Walker <ADRIAN@ibm.com>
Subject: book announcement - Knowledge Systems and Prolog

[Excerpted from AIList]

KNOWLEDGE SYSTEMS AND PROLOG
A LOGICAL APPROACH TO EXPERT SYSTEMS
and
NATURAL LANGUAGE PROCESSING

Adrian Walker (Ed.), Michael McCord,
John F. Sowa, Walter G. Wilson

Addison-Wesley, 1987

This book introduces Prolog and two important areas of Prolog
use -- expert systems and natural language processing systems
(together known as knowledge systems). The book covers basic and
more advanced Prolog programming, describes practical expert
systems and natural language processing in depth, and provides
an introduction to the formal basis in mathematical logic for
the meaning of Prolog programs.

HIGHLIGHTS

- Presents significant examples of knowledge systems, with
  useful parts of actual programs included.

- Describes important research results in expert systems,
  natural language processing, and logic programming.

- Integrates many trends in knowledge systems by bringing
  diverse representations of knowledge together in one
  practical framework.

- Though useful with any Prolog implementation, provides
  an introductory tutorial followed by advanced programming
  techniques for IBM Prolog.

TABLE OF CONTENTS

Chapter 1. Knowledge Systems: Principles and Practice (Adrian Walker )
1.1 What is a Knowledge System?
1.2 From General to Specific, and Back Again
1.3 Prolog and Logic Programming
1.4 Knowledge Representation
1.5 Getting the Computer to Understand English
1.6 Some Trends in Knowledge Acquisition
1.6.1 Learning by Being Told
1.6.2 Learning by Induction from Examples
1.6.3 Learning by Observation and Discovery
1.7 Summary

Chapter 2. A Prolog to Prolog (John Sowa)
2.1 Features of Prolog
2.1.1 Nonprocedural Programming
2.1.2 Facts and Predicates
2.1.3 Variables and Rules
2.1.4 Goals
2.1.5 Prolog Structures
2.1.6 Built-in Predicates
2.1.7 The Inference Engine
2.2 Pure Prolog
2.2.1 Solving Problems Stated in English
2.2.2 Subtle Properties of English
2.2.3 Representing Quantifiers
2.2.4 Choosing a Data Structure
2.2.5 Unification: Binding Values to Variables
2.2.6 List-Handling Predicates
2.2.7 Reversible Predicates
2.3 Procedural Prolog
2.3.1 Backtracking and Cuts
2.3.2 Saving Computed Values
2.3.3 Searching a State Space
2.3.4 Input/Output
2.3.5 String Handling
2.3.6 Changing Syntax
2.4 Performance and Optimization
2.4.1 Choosing an Algorithm
2.4.2 Generate and Test
2.4.3 Reordering the Generate and Test
2.4.4 Observations on the Method
Exercises
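Section 2.2.5's topic, unification, is the mechanism that binds Prolog
variables to values. A minimal sketch (in Python rather than Prolog, and
not the book's code) treats uppercase strings as variables and tuples as
compound terms; the occurs check is omitted for brevity.

```python
def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to their final value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# likes(X, mary) unifies with likes(john, Y) via {X: john, Y: mary}
print(unify(("likes", "X", "mary"), ("likes", "john", "Y")))
```

A real Prolog also makes bindings undoable on backtracking, which this
sketch ignores.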

Chapter 3. Programming Techniques in Prolog (Walter Wilson)
3.1 How to Structure Prolog Programs
3.1.1 Logic Programming Development Process
3.1.2 Declarative Style
3.1.3 Data Representation
3.1.4 Structuring and Verifying Recursive Programs
3.1.5 Control Structures
3.2 Techniques and Examples
3.2.1 Meta-level Programming
3.2.2 Graph Searching
3.2.3 Balanced Trees
3.2.4 Playing Games and Alpha-beta Pruning
3.2.5 Most-Specific Generalizations
3.3 Summary of Prolog Programming Principles
Exercises

Chapter 4. Expert Systems in Prolog (Adrian Walker)
4.1 Knowledge Representation and Use
4.1.1 Rules
4.1.2 Frames
4.1.3 Logic
4.1.4 Summary
4.2 Syllog: an Expert and Data System Shell
4.2.1 Introduction to Syllog
4.2.2 A Manufacturing Knowledge Base in Syllog
4.2.3 Inside the Syllog Shell
4.2.4 Summary of Syllog
4.3 Plantdoc
4.3.1 Using Plantdoc
4.3.2 The Plantdoc Inference Engine
4.3.3 Weighing the Evidence
4.3.4 Summary of Plantdoc
4.4 Generating Useful Explanations
4.4.1 Explaining Yes Answers, Stopping at a Negation
4.4.2 Explaining Yes and No Answers, Stopping at a Negation
4.4.3 Full Explanations of Both Yes and No Answers
4.5 Checking Incoming Knowledge
4.5.1 Subject-Independent Checking of Individual Rules
4.5.2 Subject-Independent Checking of the Knowledge Base
4.5.3 Subject-Dependent Checking of the Knowledge Base
4.6 Summary
Exercises
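Chapter 4's rule-based expert systems answer queries by backward
chaining and can explain their answers (Section 4.4). The following toy
sketch shows the idea only; the rules, the trace format, and all names
are invented for illustration and are not Syllog's notation.

```python
# goal -> list of rule bodies; each body is a conjunction of subgoals.
RULES = {
    "ship_order": [["part_in_stock", "order_approved"]],
    "order_approved": [["credit_ok"]],
}
FACTS = {"part_in_stock", "credit_ok"}

def prove(goal, trace=None):
    """Try to establish goal from FACTS and RULES, recording a why-trace."""
    trace = trace if trace is not None else []
    if goal in FACTS:
        trace.append(f"{goal}: known fact")
        return True
    for body in RULES.get(goal, []):
        if all(prove(sub, trace) for sub in body):
            trace.append(f"{goal}: by rule from {body}")
            return True
    trace.append(f"{goal}: not provable")
    return False

trace = []
print(prove("ship_order", trace))  # True
for line in trace:                 # a crude "explain yes" trace
    print(line)
```

The trace is a crude analogue of the chapter's "explaining yes answers";
explaining no answers requires recording failed branches as well.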

Chapter 5. Natural Language Processing in Prolog (Michael McCord)
5.1 The Logical Form Language
5.1.1 The Formation Rules for LFL
5.1.2 Verbs
5.1.3 Nouns
5.1.4 Determiners
5.1.5 Pronouns
5.1.6 Adverbs and the Notion of Focalizer
5.1.7 Adjectives
5.1.8 Prepositions
5.1.9 Conjunctions
5.1.10 Nonlexical Predicates in LFL
5.1.11 The Indexing Operator
5.2 Logic Grammars
5.2.1 Definite Clause Grammars
5.2.2 Modular Logic Grammars
5.3 Words
5.3.1 Tokenizing
5.3.2 Inflections
5.3.3 Slot Frames
5.3.4 Semantic Types
5.3.5 Lexical Look-up
5.4 Syntactic Constructions
5.4.1 Verb Phrases, Complements, and Adjuncts
5.4.2 Left Extraposition
5.4.3 Noun Phrases
5.4.4 Left-Recursive Constructions
5.5 Semantic Interpretation
5.5.1 The Top Level
5.5.2 Modification
5.5.3 Reshaping
5.5.4 A One-Pass Approach
5.6 Application to Question Answering
5.6.1 A Sample Database
5.6.2 Setting up the Lexicon
5.6.3 Translation to Executable Form
5.6.4 A Driver for Question Answering
Exercises

Chapter 6. Conclusions (Adrian Walker)

Appendix A. How to Use IBM Prolog (Adrian Walker & Walter Wilson)
A.1 A Simple Example
A.2 Detailed Programming of a Metainterpreter
A.3 Testing the Metainterpreter at the Terminal
A.4 VM/Prolog Input and Output
A.5 VM/Prolog and the VM Operating System
A.6 Tailoring VM/Prolog
A.7 Clause Names and Modules
A.8 Types, Expressions, and Sets
A.9 MVS/Prolog

Appendix B. Logical Basis for Prolog and Syllog (Adrian Walker)
B.1 Model Theory Provides the Declarative View
B.2 Logical Basis for Prolog without Negation
B.3 Logical Basis for Prolog with Negation
B.4 Further Techniques for Interpreting Knowledge

Bibliography

Author Index

Subject Index


The book can be ordered direct from Addison-Wesley. In the
USA, phone 617-944-3700, ask for the Order Department, and
quote title, authors, and Order Number ISBN 09044.

Adrian Walker
IBM T.J. Watson Research Center
PO Box 704
Yorktown Heights
NY 10598
Tel: 914-789-7806
Adrian @ IBM.COM

------------------------------

End of NL-KR Digest
*******************

