NL-KR Digest             (11/02/87 18:28:58)            Volume 3 Number 42 

Today's Topics:
Seminars:
SUNY Buffalo Cog. Sci. Colloq.: Contini-Morava
BBN AI Seminar Reminder -- Amy Lansky
Learning in Connectionist Networks
BBN AI Seminar -- Jeffrey Siskind
Speech Recognition Using Connectionist Networks (UNISYS)
From CSLI Calendar, Oct. 29, 3:5

Linguistics in Encyclopedia of AI

----------------------------------------------------------------------

Date: Fri, 23 Oct 87 14:51 EDT
From: William J. Rapaport <rapaport@cs.Buffalo.EDU>
Subject: SUNY Buffalo Cog. Sci. Colloq.: Contini-Morava


STATE UNIVERSITY OF NEW YORK AT BUFFALO

GRADUATE GROUP IN COGNITIVE SCIENCE

ELLEN CONTINI-MORAVA

Department of Linguistics
University of Virginia

TEMPORAL EXPLICITNESS IN EVENT CONTINUITY IN SWAHILI DISCOURSE

Swahili verb sequences consisting of an inflected form of `kuwa', "to
be", followed by another inflected verb are constructions in which
`kuwa' supplies deictic orientation for verbs whose orientation is not
obvious from context. Such explicit orientation is needed in situations
where there is a break in continuity between events, such as the
introduction of a new subject, that might cause difficulty in integrating a verb
into its context. The argument is supported by examples from texts and
quantitative data. It is suggested that the notion of event continuity
in discourse involves more than purely temporal relatedness between
events.

Friday, October 30, 1987
3:30 P.M.
Baldy 684, Amherst Campus

Informal discussion at 8:00 P.M. on Friday evening at David Zubin's
house, 157 Highland Ave., Buffalo. Call Bill Rapaport (Dept. of
Computer Science, 636-3193 or 3181) or Gail Bruder (Dept. of Psychology,
636-3676) for further information.

------------------------------

Date: Fri, 23 Oct 87 16:43 EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar Reminder -- Amy Lansky

BBN Science Development Program
AI Seminar Series Lecture

LOCALIZED EVENT-BASED REASONING FOR MULTIAGENT DOMAINS

Amy L. Lansky
Artificial Intelligence Center,
SRI International
(LANSKY@VENICE.AI.SRI.COM)

BBN Labs
10 Moulton Street
2nd floor large conference room
10:30 am, Monday October 26


This talk will present the GEM concurrency model and GEMPLAN, a
multiagent planner based on this model. Unlike standard state-based
AI representations, GEM is unique in its explicit emphasis on events
and domain structure -- a world domain is modeled as a set of regions
composed of interrelated events. Event-based temporal logic
constraints are then associated with each region to delimit legal
domain behavior. GEM's emphasis on constraints is directly reflected
in the architecture of the GEMPLAN planner -- it can be viewed as a
general purpose constraint satisfaction facility. Its task is to
construct a network of interrelated events that satisfies all
applicable regional constraints and also achieves some stated goal.

A key focus of our work has been on the use of localized techniques
for domain representation and reasoning. Such techniques partition
domain descriptions and reasoning tasks according to the regions of
activity within a domain. For example, GEM localizes the
applicability of domain constraints and also imposes additional
``locality constraints'' based on domain structure. Together,
constraint localization and locality constraints help solve several
aspects of the frame problem for multiagent domains. The GEMPLAN
planner also reflects the use of locality; its constraint satisfaction
search space is subdivided into regional planning search spaces. By
explicitly utilizing constraint localization, GEMPLAN can pinpoint and
rectify interactions among regional search spaces, thereby
reducing the burden of the ``interaction analysis'' ubiquitous in most
planning systems.
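
To make the localization idea concrete, here is a toy sketch in Python
(the names and data structures are invented for illustration; this is
not the GEMPLAN implementation):

    # Toy sketch of constraint localization: a domain is modeled as
    # regions of events, and each region carries only the constraints
    # that govern its own activity.

    class Region:
        def __init__(self, name, constraints):
            self.name = name
            self.constraints = constraints  # predicates over local events
            self.events = []                # the regional event network

        def satisfied(self):
            # Only local constraints are checked against local events;
            # a regional change never triggers global frame reasoning.
            return all(check(self.events) for check in self.constraints)

    def plan(regions, goal_holds, propose_event):
        """Grow regional event networks until the goal holds and every
        region satisfies its own constraints."""
        while not goal_holds(regions):
            region, event = propose_event(regions)  # extend one region
            region.events.append(event)
            if not region.satisfied():
                region.events.pop()     # repair stays within the region
        return regions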

------------------------------

Date: Sun, 25 Oct 87 17:34 EST
From: Michael Friendly <FRIENDLY%YORKVM1.BITNET@wiscvm.wisc.edu>
Subject: Seminar - Learning in Connectionist Networks


Cognitive Science Discussion Group

Speaker : Geoffrey Hinton, University of Toronto
Title : "Learning in connectionist networks"
Date : Friday, Oct. 30, 1pm
Location: Rm 207 Behavioural Science Bldg., York University
4700 Keele St., Downsview, Ont.

Abstract

Parallel networks of simple processing elements can be trained to compute
a function by being shown examples of input and output vectors. The
network stores its knowledge about the function as the strengths of
interactions between pairs of processors. For networks with many layers
of processors between the input and output, the learning procedure must
decide which aspects of the input vector need to be represented by the
internal processors. By choosing to represent important underlying
features of the task domain, the network can learn to model the function
efficiently in the strengths of the interactions between processors, and
it will then generalize appropriately when presented with new input
vectors. These parallel networks are much more similar to the brain than
conventional computers are, and they may provide insight into the basis
of natural intelligence.
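
As a concrete illustration of such a learning procedure, here is a
minimal sketch in Python with NumPy (illustrative only, not Hinton's
own procedure): a network with one layer of internal processors learns
the XOR function from example input/output vectors by gradient descent.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # interaction strengths
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(20000):
        h = sigmoid(X @ W1 + b1)       # internal (hidden) processors
        out = sigmoid(h @ W2 + b2)
        # Propagate the output error back to adjust the strengths of
        # the interactions between pairs of processors.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]]

After training, the internal processors typically come to encode
intermediate features of the input from which XOR is easily computed.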

------------------------------

Date: Tue, 27 Oct 87 10:38 EST
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar -- Jeffrey Siskind

BBN Science Development Program
AI Seminar Series Lecture

DEPENDENCY DIRECTED PROLOG

Jeffrey Mark Siskind
MIT Laboratory for Computer Science
(also: summer intern at Xerox PARC)
(Qobi@ZERMATT.LCS.MIT.EDU)

BBN Labs
10 Moulton Street
2nd floor large conference room
10:30 am, Tuesday November 3


In this talk I will describe an implementation of pure Prolog that uses
dependency-directed backtracking as a control strategy for pruning the
search space. The implementation uses a strategy whereby the Prolog
program is compiled into a finite set of templates that characterize a
potentially infinite boolean expression, one that is satisfiable iff
there is a proof of the goal query. These templates are incrementally
unraveled into a sequence of propositional CNF SAT problems and
represented in a TMS, which is used to find solutions by
dependency-directed backtracking. The technique can be extended to use
ATMS-like strategies for searching for multiple solutions simultaneously.
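
The unraveling strategy can be pictured with a toy sketch in Python
(invented representation, not Siskind's system; a plain DPLL search
stands in here for the TMS and its dependency-directed backtracking):

    # `templates(depth)` yields the CNF clauses contributed by one more
    # level of unraveling; each stage is a finite propositional SAT
    # problem, and in this toy reading a satisfiable stage signals a
    # proof of the goal.

    def dpll(clauses, assign=None):
        """Tiny satisfiability test; literals are ints, -v means not-v."""
        assign = dict(assign or {})
        remaining = []
        for clause in clauses:
            if any(assign.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                    # clause already satisfied
            rest = [lit for lit in clause if abs(lit) not in assign]
            if not rest:
                return None                 # clause falsified
            remaining.append(rest)
        if not remaining:
            return assign                   # found a model
        var = abs(remaining[0][0])          # branch on some open variable
        for value in (True, False):
            model = dpll(remaining, {**assign, var: value})
            if model is not None:
                return model
        return None

    def prove(templates, depth_limit):
        clauses = []
        for depth in range(depth_limit):
            clauses.extend(templates(depth))  # unravel one more level
            model = dpll(clauses)             # a fresh CNF SAT problem
            if model is not None:
                return model
        return None                           # inconclusive at this depth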

Two different strategies have been implemented for dealing with
unification. The first compiles the unification constraints into SAT
clauses and integrates them into the TMS along with the and/or goal tree
produced by unraveling the templates. The second uses a separate module
for doing unification at run time. This unifier is novel in that it
records dependencies and allows nonchronological retraction. The
interface protocol between the TMS and the unifier module has been
generalized to allow other "domains" of predicates, such as linear
arithmetic and simple linear inequalities, to be built into the system
while still preserving the soundness and completeness of the pure
logical interpretation of Prolog.
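
The flavor of the dependency-recording unifier can be conveyed by a
small sketch in Python (invented API; transitive dependencies and
cleanup after failed unifications are omitted for brevity):

    # Each binding carries a justification, so retraction can be
    # nonchronological: one premise's bindings are removed without
    # undoing later, unrelated ones.

    class Unifier:
        def __init__(self):
            self.bindings = {}          # var -> (value, justification)

        def walk(self, term):
            while isinstance(term, str) and term.startswith('?') \
                    and term in self.bindings:
                term = self.bindings[term][0]
            return term

        def unify(self, a, b, why):
            a, b = self.walk(a), self.walk(b)
            if a == b:
                return True
            for var, val in ((a, b), (b, a)):
                if isinstance(var, str) and var.startswith('?'):
                    self.bindings[var] = (val, why)  # record dependency
                    return True
            if isinstance(a, tuple) and isinstance(b, tuple) \
                    and len(a) == len(b):
                return all(self.unify(x, y, why) for x, y in zip(a, b))
            return False

        def retract(self, why):
            # Drop exactly the bindings justified by `why`, regardless
            # of the order in which they were made.
            self.bindings = {v: (val, j) for v, (val, j)
                             in self.bindings.items() if j != why}

    u = Unifier()
    u.unify(('f', '?X'), ('f', 'a'), why='clause1')
    u.unify('?Y', 'b', why='clause2')
    u.retract('clause1')    # ?X is unbound again; ?Y's binding survives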

In the talk, time permitting, I will discuss the search-pruning
advantages of this approach and its relation to previous approaches, the
implementation mechanism, and some recent work indicating the potential
applicability of this approach to parsing with disjunctive feature
structures, as is done in LFG and related grammar formalisms.

------------------------------

Date: Tue, 27 Oct 87 15:37 EST
From: Tim Finin <finin@bigburd.PRC.Unisys.COM>
Subject: Speech Recognition Using Connectionist Networks (UNISYS)


AI Seminar
UNISYS Knowledge Systems
Paoli Research Center
Paoli PA


SPEECH RECOGNITION USING CONNECTIONIST NETWORKS

Raymond Watrous
Siemens Corporate Research
and
University of Pennsylvania


The thesis of this research is that connectionist networks are
adequate models for the problem of acoustic phonetic speech
recognition by computer. Adequacy is defined as suitably high
recognition performance on a representative set of speech recognition
problems. Six acoustic phonetic problems are selected and discussed
in relation to a physiological theory of phonetics. It is argued that
the selected tasks are sufficiently representative and difficult to
constitute a reasonable test of adequacy.

A connectionist network is a fine-grained parallel distributed
processing configuration, in which simple processing elements are
interconnected by simple links. A connectionist network model for
speech recognition, called the TEMPORAL FLOW MODEL, has been defined.
The model incorporates link propagation delay and internal feedback to
express temporal relationships.
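
A single unit of this style can be sketched in a few lines of Python
(the weights and delay are invented, purely for illustration):

    import math

    def run_unit(inputs, w_in=1.5, w_fb=0.8, delay=2):
        """Output at time t depends on the input that arrived `delay`
        steps earlier (link propagation delay) and on the unit's own
        previous output (internal feedback)."""
        outputs = [0.0]
        for t in range(len(inputs)):
            delayed = inputs[t - delay] if t >= delay else 0.0
            net = w_in * delayed + w_fb * outputs[-1]
            outputs.append(1 / (1 + math.exp(-net)))  # logistic unit
        return outputs[1:]

    print(run_unit([1, 0, 0, 1, 0, 0]))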

It has been shown that temporal flow models can be 'trained' to
perform some speech recognition tasks successfully. A method of
'learning' using techniques of numerical nonlinear optimization has
been demonstrated for the minimal pair "no/go", and voiced stop
consonant discrimination in the context of various vowels. Methods for
extending these results to new problems are discussed.
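
Reduced to a toy, "learning as numerical nonlinear optimization" looks
like the following (Python with SciPy; the feature tracks and network
are invented stand-ins, not Watrous's data or model):

    import numpy as np
    from scipy.optimize import minimize

    no_token = [0.2, 0.9, 0.4, 0.1]  # stand-in acoustic feature tracks
    go_token = [0.9, 0.3, 0.8, 0.1]  # for the minimal pair "no"/"go"

    def final_output(weights, token):
        w_in, w_fb = weights
        out = 0.0
        for x in token:                 # feedback carries state in time
            out = np.tanh(w_in * x + w_fb * out)
        return out

    def error(weights):
        # Discrimination objective: output near 0 for "no", 1 for "go".
        return (final_output(weights, no_token) ** 2 +
                (final_output(weights, go_token) - 1) ** 2)

    result = minimize(error, x0=[0.1, 0.1], method="Nelder-Mead")
    print(result.x, error(result.x))    # weights found by the optimizer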

10:00am Wednesday, November 4, 1987
Cafeteria Conference Room
Unisys Paoli Research Center
Route 252 and Central Ave.
Paoli PA 19311

-- non-UNISYS visitors who are interested in attending should --
-- send email to finin@prc.unisys.com or call 215-648-7446 --

------------------------------

Date: Wed, 28 Oct 87 19:39 EST
From: Emma Pease <Emma@CSLI.Stanford.EDU>
Subject: From CSLI Calendar, Oct. 29, 3:5

[Excerpted from CSLI Calendar]

Reading: "Cognitive Significance and the New Theories of Reference"
by John Perry
Discussion led by Bob Moore
October 29

In this paper, John Perry replies to Howard Wettstein's article "Has
Semantics Rested on a Mistake?" Wettstein has argued that the New
Theory of Reference (actually a family of theories based on the notion
of direct reference) cannot handle puzzles posed by Frege concerning
the cognitive significance of language. Since Wettstein finds the
arguments for the New Theory absolutely convincing, he is driven to
the conclusion that semantics has nothing to say about cognitive
significance. Perry argues that this is an overly pessimistic
assessment, and that Frege's puzzles can be solved by drawing a
distinction between the conditions under which an utterance expresses
a true proposition and the proposition expressed. Perry's principal
claim is that, while the New Theorists have mainly concerned
themselves with the latter, it is the former that should be identified
with cognitive significance. Thus arguments designed to show that the
proposition expressed by an utterance cannot be its cognitive
significance are irrelevant, and a broader theory of semantics can and
should account for both. In the discussion, I would like to raise the
issue of whether getting the semantics of propositional attitude
reports right forces a tighter connection between cognitive
significance of an utterance and the proposition expressed by an
utterance than either Wettstein or Perry is prepared to allow for.

An Introduction to Situated Automata
Part II: Applications
Stan Rosenschein
November 5

This is the second of two lectures on the situated-automata approach
to the analysis and design of embedded systems. This approach seeks
to ground our understanding of embedded systems in a rigorous,
objective analysis of their informational properties, where
information is modeled mathematically in terms of correlations between
states of the system and conditions in the environment. In the first
talk we motivated the general framework, presented the central
mathematical ideas on how information is carried in the states of
automata, and related the mathematical properties of the model to key
theoretical issues in AI, including the nature of knowledge, its
representation in machines, the role of syntactic deduction,
"nonmonotonic" reasoning, and the relation of knowledge and action.
Some general technological implications of the approach, including
reduced reliance on conventional symbolic inference and increased
opportunities for parallelism, were discussed.
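
The correlational notion of information at the heart of the approach
can be made concrete with a toy sketch in Python (the thermostat
example is invented):

    # A machine state "carries the information that P" when, across the
    # possible runs of the system, being in that state reliably
    # co-occurs with condition P holding in the environment.

    def carries_information(runs, state, condition):
        """`runs` is a list of (machine_state, environment) snapshots."""
        relevant = [env for s, env in runs if s == state]
        return bool(relevant) and all(condition(env) for env in relevant)

    runs = [("HEAT_ON", {"temp": 12}),
            ("HEAT_ON", {"temp": 15}),
            ("IDLE",    {"temp": 22})]
    print(carries_information(runs, "HEAT_ON",
                              lambda e: e["temp"] < 18))  # True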

In the second lecture, I will describe the application of the
situated-automata perspective to specific problems arising in the
design of integrated intelligent agents, including problems of
perception, planning and action selection, and linguistic
communication.

------------------------------

Date: Fri, 30 Oct 87 15:05 EST
From: William J. Rapaport <rapaport@cs.Buffalo.EDU>
Subject: Linguistics in Encyclopedia of AI


I created the following for use in my natural-language understanding
course, and thought some other people might find it useful. It may be a
slightly idiosyncratic list: In order for it not to become a complete
list of _all_ articles in the _Encyclopedia_, I did not include articles
on knowledge representation or some others on topics that readers might
have thought were relevant. Please let me know if there are any
outright errors.

William J. Rapaport
Assistant Professor

Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260

(716) 636-3193, 3181

uucp: ..!{ames,boulder,decvax,rutgers}!sunybcs!rapaport
internet: rapaport@cs.buffalo.edu
[if that fails, try: rapaport%cs.buffalo.edu@relay.cs.net
or: rapaport@buffalo.csnet ]
bitnet: rapaport@sunybcs.bitnet

========================================================================

A GUIDE TO LINGUISTICS ARTICLES IN

The Encyclopedia of Artificial Intelligence

Stuart C. Shapiro (editor)
(John Wiley & Sons, 1987).

compiled by

William J. Rapaport

Department of Computer Science
SUNY Buffalo
Buffalo, NY 14260
rapaport@cs.buffalo.edu

Volume 1:

(1) Hull, J. J., "Character Recognition," pp. 82-88.

(2) Ballard, B., & Jones, M., "Computational Linguistics," pp. 133-151.

(3) Hardt, S., "Conceptual Dependency," pp. 194-199.

(4) Hindle, D., "Deep Structure," pp. 230-231.

(5) Scha, R.; Bruce, B. C.; & Polanyi, L., "Discourse Understanding," pp.
233-245.

(6) Woods, W. A., "Grammar, Augmented Transition Network," pp. 323-333.

(7) Bruce, B., & Moser, M. G., "Grammar, Case," pp. 333-339.

(8) Coelho, H., "Grammar, Definite Clause," pp. 339-342.

(9) Gazdar, G., "Grammar, Generalized Phrase Structure," pp. 342-344.

(10) Joshi, A., "Grammar, Phrase Structure," pp. 344-351.

(11) Burton, R., "Grammar, Semantic," pp. 351-353.

(12) Berwick, R., "Grammar, Transformational," pp. 353-361.

(13) Mallery, J. C.; Hurwitz, R.; & Duffy, G., "Hermeneutics," pp. 362-376.

(14) Hill, J. C., "Language Acquisition," pp. 443-452.

(15) Newmeyer, F. J., "Linguistics, Competence and Performance," pp. 503-
508.

(16) Wilks, Y., "Machine Translation," pp. 564-571.

(17) Tennant, H., "Menu-Based Natural Language," pp. 594-597.

(18) Koskenniemi, K., "Morphology," pp. 619-620.

(19) McDonald, D. D., "Natural-Language Generation," pp. 642-655.

(20) Bates, M., "Natural-Language Interfaces," pp. 655-660.

(21) Carbonell, J. G., & Hayes, P. J., "Natural-Language Understanding,"
pp. 660-677.

Volume 2:

(22) Petrick, S., "Parsing," pp. 687-696.

(23) Riesbeck, C. K., "Parsing, Expectation-Driven," pp. 696-701.

(24) Small, S. L., "Parsing, Word-Expert," pp. 701-708.

(25) Keyser, S. J., "Phonemes," pp. 744-746.

(26) Webber, B., "Question Answering," pp. 814-822.

(27) Dyer, M.; Cullingford, R.; & Alvarado, S., "Scripts," pp. 980-994.

(28) Smith, B. C., "Self-Reference," pp. 1005-1010.

(29) Sowa, J., "Semantic Networks," pp. 1011-1024.

(30) Hirst, G., "Semantics," pp. 1024-1029.

(31) Woods, W., "Semantics, Procedural," pp. 1029-1031.

(32) Allen, J. F., "Speech Acts," pp. 1062-1065.

(33) Allen, J., "Speech Recognition," pp. 1065-1070.

(34) Allen, J., "Speech Synthesis," pp. 1070-1076.

(35) Briscoe, E. J., "Speech Understanding," pp. 1076-1083.

(36) Lehnert, W. G., "Story Analysis," pp. 1090-1099.

------------------------------

End of NL-KR Digest
*******************
