NL-KR Digest             (3/29/88 20:04:29)            Volume 4 Number 37 

Today's Topics:
Software Wanted to Build a Mind
Seminar - A Formalization of Inheritance (Unisys)
Language & Cognition seminar
From CSLI Calendar, March 24, 3:21
Cognitive Science Colloquium
Seminar - DIOGENES: A Natural Language Generation System
CFP: Language and Language Acquisition Conference 4

Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Wed, 23 Mar 88 18:31 EST
From: POPX@VAX.OXFORD.AC.UK
Subject: Software Wanted to Build a Mind

[Excerpted from AIList]

From: Jocelyn Paine,
Experimental Psychology Department,
South Parks Road,
Oxford.

Janet Address: POPX @ OX.VAX



SOFTWARE WANTED TO BUILD A MIND


I'm trying to teach Oxford undergraduates an information-processing
model of psychology by giving them a computerised organism (named P1O,
after the course) whose "mind" they can watch and experiment with. To
do this, I've depicted how a mind might be built out of units, each
performing a simpler task than the original mind (my depiction is
loosely based on Dennett's "Toward a Cognitive Theory of
Consciousness"). Each of my units does some well-defined task: for
example, parsing, edge-detection, or conversion of a semantic
representation to text.

Now I have to implement each unit, and hook them together. The units are
not black boxes, but black boxes with windows: i.e. I intend that my
students can inspect and modify some of the representations in each box.
The units will be coded in Prolog or Pop-11, and run on VAX Poplog.
Taking the parser as an example: if it is built to use a Prolog definite
clause grammar, then my students should be able to: print the grammar;
watch the parser generate parse trees, and use the editor to walk round
them; change the grammar and see how this affects the response to
sentences.
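
For concreteness, here is the flavour of thing I mean: a toy DCG
fragment (the grammar and lexicon are invented purely for illustration;
the real grammar would be much larger):

    % a toy definite clause grammar that returns parse trees as Prolog terms
    sentence(s(NP, VP))     --> noun_phrase(NP), verb_phrase(VP).
    noun_phrase(np(Det, N)) --> determiner(Det), noun(N).
    verb_phrase(vp(V, NP))  --> verb(V), noun_phrase(NP).

    determiner(det(the))    --> [the].
    noun(n(robot))          --> [robot].
    noun(n(button))         --> [button].
    verb(v(sees))           --> [sees].

    % students could query it directly, e.g.
    % ?- phrase(sentence(Tree), [the, robot, sees, the, button]).
    % Tree = s(np(det(the), n(robot)), vp(v(sees), np(det(the), n(button))))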

P1O will live in a simulated world which it perceives by seeing objects
as sharp-edged images on a retina. This retina is a rectangular grid of
perhaps 2000 pixels, each pixel sensing either nothing or a spot of some
particular colour. One of the images will be that of P1O's manipulator,
which can detect whether it is touching an object. P1O can also perceive
loud noises, which direct its attention toward some non-localised region
of space. Finally, P1O can hear sentences (stored as a list of atoms in
its "auditory buffer"), and can treat them either as commands to be
obeyed, statements to be believed (if it trusts the speaker), or as
questions to be answered.
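
To give a feel for the data structures involved, the world might reach
P1O roughly like this (the predicate names and arities below are only
illustrative, not a commitment):

    % the retina: one fact per occupied pixel; unlisted pixels sense nothing
    % pixel(X, Y, Colour)
    pixel(12, 7, brown).
    pixel(13, 7, brown).

    % the auditory buffer: the last sentence heard, as a list of atoms
    auditory_buffer([pick, up, the, brown, button]).

    % a loud noise directs attention to a rough region of the retina
    attend_to(region(10, 20, 5, 15)).     % region(XMin, XMax, YMin, YMax)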

P1O's perceptual interpreter takes the images on its retina, and
converts them via edge-detection and boundary-detection into hypotheses
about the locations of types of objects. The interpreter then checks
these hypotheses for consistency with P1O's belief memory, determining
at the same time which individuals of each type it is seeing. Hypotheses
consistent with past beliefs are then put into the belief memory, as Prolog
propositions.
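
The last step might look roughly like this (a sketch only: belief/1 is
the belief memory, and incompatible/2 stands for whatever domain axioms
about conflicting beliefs are finally adopted):

    :- dynamic belief/1, incompatible/2.

    % accept_hypotheses(+Hypotheses)
    % keep only those object hypotheses that do not contradict what P1O
    % already believes, and add them to the belief memory as propositions
    accept_hypotheses([]).
    accept_hypotheses([H | Rest]) :-
        (   consistent_with_beliefs(H)
        ->  assertz(belief(H))
        ;   true                          % quietly discard conflicting ones
        ),
        accept_hypotheses(Rest).

    % H is acceptable if no current belief is declared incompatible with it
    consistent_with_beliefs(H) :-
        \+ ( belief(B), incompatible(H, B) ).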

The sentences which P1O hears are also converted into propositions, plus
a mood (question, command, or statement). This is done by generating a
parse tree, and then a propositional representation of the sentence's
meaning. Statements are checked for consistency with the belief memory
before being entered into it; questions cause the belief memory to be
searched; commands invoke P1O's planner, telling it for example to plan
a sequence of actions with which it can pick up the brown chocolate
button which it sees.
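
The dispatch on mood is then essentially a three-clause predicate,
something like the following (illustrative only: say/1 is a stub,
plan/2 and execute/1 stand for the planner and motor-control modules,
and consistent_with_beliefs/1 is the test sketched above):

    % respond(+Mood, +Proposition)
    respond(statement, P) :-
        (   consistent_with_beliefs(P)
        ->  assertz(belief(P))            % believe it
        ;   true                          % or quietly reject it
        ).
    respond(question, P) :-
        (   belief(P)
        ->  say(yes)
        ;   say(unknown)
        ).
    respond(command, P) :-
        plan(P, Actions),                 % invoke the planner
        execute(Actions).                 % hand the plan to motor control

    say(X) :- write(X), nl.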

These action sequences then go to P1O's motor control unit, which moves
the manipulator. This involves positional feedback - P1O moves a small
step at a time, and has to correct after each step.
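
In other words, something like the following loop (a sketch only:
sense_position/1 and step_towards/2 stand for the simulated sensors and
effectors):

    % move_to(+Target)
    % take one small step, re-sense where the manipulator actually ended
    % up, and keep correcting until it is close enough to the target
    move_to(Target) :-
        sense_position(Here),
        (   close_enough(Here, Target)
        ->  true
        ;   step_towards(Here, Target),   % one small motor step
            move_to(Target)               % re-sense and correct
        ).

    % "close enough" here means within one pixel, city-block distance
    close_enough(pos(X1, Y1), pos(X2, Y2)) :-
        abs(X1 - X2) + abs(Y1 - Y2) =< 1.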

P1O's simulated environment is responsible for tracking the manipulator,
and updating the retinal image accordingly. Students can also update the
image for themselves.

At the top level, P1O has some goals, which keep it active even in the
absence of commands from the student. The most important of these is to
search for food. The type of food sought depends on P1O's current
feeling of hunger, which depends in turn on what it has recently eaten.
The goals are processed by the top-level control module, which calls the
other modules as appropriate.
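
At its simplest, the top level could be a loop like this (illustrative
only: pending_command/1, current_hunger/1, food_for/2, plan/2 and
execute/1 stand for the other modules, respond/2 is the dispatch
sketched earlier, and the real control knowledge will certainly be
richer):

    % obey a pending command if there is one; otherwise pursue the
    % standing goal of finding food appropriate to the current hunger
    run :-
        (   pending_command(P)
        ->  respond(command, P)
        ;   current_hunger(Hunger),
            food_for(Hunger, Food),       % e.g. food_for(high, chocolate)
            plan(find(Food), Actions),
            execute(Actions)
        ),
        run.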



Above, I've described P1O as if I've already built it. I haven't, yet,
and I'm seeking Prolog or Pop-11 software to help. I'd also accept
software in other languages which can be translated easily. I'll enter
any software I receive into my Prolog library (see AILIST V5.279, 3rd
Dec 1987; IKBS Bulletin 87-32, 18 Dec 1987; the Winter 1987 AISB News)
for use by others.

So far, these are what I think I need most:

(1) LANGUAGE ANALYSIS:

(1.1) A grammar, and its parser, for some subset of English, in a
notation similar to DCGs (though it need not be _implemented_ as
DCGs). Preferably with parse trees as output, represented as Prolog
terms. The notation certainly doesn't have to be Prolog, though it may
be translatable thereto: it should be comprehensible to linguists who've
studied formal grammar.

(1.2) As above, but for the translation from parse trees into some kind
of meaning (preferably propositions, but possibly conceptual graphs,
Schankian CD, etc) represented as Prolog terms. I'm really not sure what
the clearest notation would be for beginners. (A toy example of the sort
of mapping I mean appears after item (1.4) below.)

(1.3) For teaching reasons, I'd prefer my analyser to be two-stage: parse,
and then convert the trees to some meaning. However, in case I can't do
this: one grammar and analyser which does both stages in one go. Perhaps
a chart parser using functional unification grammars?

(1.4) A morphological analyser, for splitting words into root, suffixes,
etc.
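
To make (1.2) concrete, here is a toy mapping from a parse tree (of the
shape produced by the fragment sketched earlier) onto a Prolog
proposition; the predicate name and the flat Verb(Subject, Object) form
are purely illustrative:

    % tree_to_meaning(+ParseTree, -Proposition)
    % a deliberately crude example: a transitive sentence becomes a
    % two-place Prolog term, ignoring determiners entirely
    tree_to_meaning(s(np(_, n(Subj)), vp(v(Verb), np(_, n(Obj)))), Prop) :-
        Prop =.. [Verb, Subj, Obj].

    % ?- tree_to_meaning(s(np(det(the), n(robot)),
    %                      vp(v(sees), np(det(the), n(button)))), P).
    % P = sees(robot, button)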

(2) VISION

(2.1) An edge-detector. This should take a 2-D character array as input,
and return a list of edges with their orientation. I'm content to limit
it to vertical and horizontal edges. It need not deal with fuzzy data,
since the images will be drawn by students, and not taken from the real
world. This can be in any algorithmic language: speed is fairly
important, and I can call most other languages from Poplog. (A naive
Prolog sketch of the interface I have in mind appears after item (2.2)
below.)

(2.2) A boundary-detector. This should take either the character array,
or the list of edges, and return a list of closed polygons. Again, it
can be in any algorithmic language.
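
To make the interface in (2.1) concrete, here is a deliberately naive
Prolog sketch (the real detector can of course be in any language;
nth1/3 comes from the usual list library):

    :- use_module(library(lists)).        % for nth1/3

    % edges(+Image, -Edges)
    % Image is a list of rows, each row a list of colour atoms ('none'
    % for an empty pixel).  Edges is a list of edge(Orientation, X, Y)
    % terms: a vertical edge at (X,Y) separates pixel (X,Y) from (X+1,Y),
    % and a horizontal edge separates pixel (X,Y) from (X,Y+1).
    edges(Image, Edges) :-
        findall(E, an_edge(Image, E), Edges).

    an_edge(Image, edge(vertical, X, Y)) :-
        nth1(Y, Image, Row),
        nth1(X, Row, A),
        X1 is X + 1,
        nth1(X1, Row, B),
        A \== B.

    an_edge(Image, edge(horizontal, X, Y)) :-
        nth1(Y, Image, RowA),
        Y1 is Y + 1,
        nth1(Y1, Image, RowB),
        nth1(X, RowA, A),
        nth1(X, RowB, B),
        A \== B.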

(3) SPEAKING

(3.1) A speech planner, which takes some meaning representation and
converts it into a list of words. This need not use the same grammar and
other knowledge as the language analyser (though it would be nicer if it
did).

(4) WINDOWING

(4.1) Any software for allowing the Poplog editor VED to display more
than two windows on the same screen, and for making VED highlight text.
Alternatively, Pop-11 routines which control cursor-addressable
terminals directly, bypassing VED, but still being able to do immediate
input of characters.

(5) OTHER

(5.1) If I model P1O's mind as co-operating experts, perhaps a
blackboard shell would be useful. Does anyone have a Prolog one?



I'd also like to hear from anyone who has other software they think
useful, or who has done this kind of thing already - surely I can't be
the first to try teaching in this way? In particular, does anyone have
ideas on how to manage the environment efficiently, and on what form the
knowledge in top-level control should take? I'll acknowledge any help in
the course documentation.


Jocelyn Paine

------------------------------

Date: Thu, 17 Mar 88 13:35 EST
From: finin@PRC.Unisys.COM
Subject: Seminar - A Formalization of Inheritance (Unisys)

AI SEMINAR
UNISYS PAOLI RESEARCH CENTER


A Formalization of Inheritance Hierarchies
with Exceptions and Multiple Ancestors

Lokendra Shastri
University of Pennsylvania

Many knowledge-based systems express domain knowledge in terms of a hierarchy
of concepts/frames, where each concept is a collection of attribute-value (or
slot-filler) pairs. Such information structures are variously referred to as
frame-based languages, semantic networks, inheritance hierarchies, etc. One
can associate two interesting classes of inference with such information
structures, namely, inheritance and classification. Attempts at formalizing
inheritance and classification, however, have been confounded by the presence
of conflicting attribute-values among related concepts. Such conflicting
information gives rise to the problems of exceptions and multiple inheritance
during inheritance, and partial matching during classification. Although
existing formalizations of inheritance hierarchies (e.g., those proposed by
Etherington and Reiter, and Touretzky) deal adequately with exceptions, they
do not address the problems of multiple inheritance and partial matching.
This talk presents a formalization of inheritance hierarchies based on the
principle of maximum entropy. The suggested formalization offers several
advantages: it admits necessary as well as default attribute-values, it deals
with conflicting information in a principled manner, and it solves the
problems of exceptions, multiple inheritance, as well as partial matching. It
can also be shown that there exists an extremely efficient realization of this
formalization.

2:00 pm Tuesday, March 22
Unisys Paoli Research Center
Route 252 and Central Ave.
Paoli PA 19311

-- non-Unisys visitors who are interested in attending should --
-- send email to finin@prc.unisys.com or call 215-648-7446 --

------------------------------

Date: Wed, 23 Mar 88 11:31 EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Language & Cognition seminar


BBN Science Development Program
Language & Cognition Seminar Series

PLANNING COHERENT MULTISENTENTIAL TEXT

Eduard Hovy
Information Sciences Institute of USC
4676 Admiralty Way
Marina del Rey, CA 90292-6695

BBN Laboratories Inc.
10 Moulton Street
Large Conference Room, 2nd Floor

10:30 a.m., Thursday, March 31, 1988



Abstract: Generating multisentential text is hard. Though most text
generators are capable of simply stringing together more than one sentence,
they cannot determine coherent order. Very few programs attempt to plan
out the structure of multisentential paragraphs.

Clearly, the key notion is coherence. The reason some paragraphs are
coherent is that the information in successive sentences follows some
pattern of inference or of knowledge with which the hearer is familiar,
so that the hearer is able to relate each part to the whole. To signal
such inferences, people usually link successive blocks of text in one
of a fixed set of ways. The inferential nature of such linkage was
noted by Hobbs in 1978. In 1982, McKeown built schemas (scripts) for
constructing some paragraphs with stereotypical structure. Around
the same time, after a wide-ranging linguistic study, Mann proposed
a relatively small number of intersentential relations that suffice to
bind together coherently most of the things people tend to speak about.

The talk will describe a prototype text structurer that is based on the
inferential ideas of Hobbs, uses Mann's relations, and is more general
than the schema applier built by McKeown. The structurer takes the form
of a standard hierarchical expansion planner, in which the relations
act as plans and their constraints on relation fillers (represented
in a formalism similar to Cohen and Levesque's work) as subgoals in the
expansion. The structurer is conceived as part of a general text planner,
but currently functions on its own. It is being tested in two domains:
database output and expert system explanation.

------------------------------

Date: Thu, 24 Mar 88 20:35 EST
From: Emma Pease <emma@russell.stanford.edu>
Subject: From CSLI Calendar, March 24, 3:21

[Excerpted from CSLI Calendar]

Panel Discussion on Compositionality
Per-Kristian Halvorsen, Stanley Peters, and Craige Roberts
March 31

Compositionality, conceived as a strong constraint on the relationship
between sentential structures and interpretations, has been one of the
central issues in semantic theory. Since Montague's seminal work on
this question, a number of analyses of specific interpretive problems
have called into question whether we can maintain compositionality as
a guiding principle in constructing semantic theories. And some
recent theories call into question in a more general way whether
compositionality is the kind of constraint we want on semantic theory.
These include theories which take seriously the contribution of
contextual information to interpretation, including situation
semantics and discourse representation theory, and also the recent
work by Fenstad, Halvorsen, Langholm, and van Benthem exploring
constraint-based interpretative theories operating on unification
grammars. In this panel discussion, we will briefly consider how
compositionality has generally been understood in the semantic
literature, give an overview of what we take to be the central
problems that call its utility into question, and discuss some
alternative conceptions of how semantic theory can be appropriately
constrained.

------------------------------

Date: Mon, 28 Mar 88 11:49 EST
From: Cognitive Science Center Office <CGS.Office@R20.UTEXAS.EDU>
Subject: Cognitive Science Colloquium

The Center for Cognitive Science presents

Kent Wittenburg, MCC
and
Robert Wall, Linguistics

"Eliminating Spurious Ambiguity
in Categorial Grammar"

Thursday, March 31, 1988
3:30 p.m.
Center for Cognitive Science
Geography Building (GRG) 220



ABSTRACT
Normal Forms for Categorial Grammars

A basic foundational belief for much of theoretical syntax, namely, that
derivational structure in parsing is isomorphic to constituent (and even
predicate-argument) structure, is rejected in some recent approaches to
Categorial Grammar (Steedman 1985, Dowty 1987, Moortgat 1987). In a
move paralleled by Lambek's extension (Lambek 1958) to the Ajdukiewicz
calculus (Ajdukiewicz 1935), these linguists propose reduction rules
such as function composition, commutativity, and type raising that
typically allow the combination of nonconstituents in derivations.
These rules, if allowed to apply across the board in the grammar,
typically lead to an explosive number of derivations that produce the
very same predicate-argument structures, a property that has been
labeled spurious ambiguity (Wittenburg 1986).

The solution we propose to this parsing problem is to find equivalent
forms of the grammars that do not have the spurious ambiguity property.
This sort of approach is paralleled in context-free parsing by finding
normal forms for context-free grammars that happen to be more efficient
for processing purposes. The desired normalized forms of the Categorial
Grammars would be characterized by the property that each derivation
would once again be associated with a distinct predicate-argument
structure. We might call this property, in contrast to structural
completeness of the Lambek calculus, "structural distinctness."
Wittenburg (1987) suggests that Categorial Grammars that employ
"predictive combinators" may fit the bill. An example of a derivation
employing a predictive form of function composition (pc>) is shown below
beside a derivation with composition (c>) as it would appear in the
original grammar.

  Derivation using predictive composition (pc>):

      what       did       John    eat
      S/(S/NP)   S/VP/NP   NP      VP/NP

      S/VP/NP   + NP     =>  S/VP          (a>)
      S/(S/NP)  + S/VP   =>  S/(VP/NP)     (pc>)
      S/(VP/NP) + VP/NP  =>  S             (a>)

  Derivation in the original grammar, using classic composition (c>):

      what       did       John    eat
      S/(S/NP)   S/VP/NP   NP      VP/NP

      S/VP/NP  + NP     =>  S/VP           (a>)
      S/VP     + VP/NP  =>  S/NP           (c>)
      S/(S/NP) + S/NP   =>  S              (a>)
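
Stated as Prolog clauses, purely for illustration (categories are
written in lower case so that Prolog reads them as terms; note that the
pc> clause is simply read off the example above and may not match
Wittenburg's general definition of the predictive combinators):

    % combine(+Rule, +LeftCategory, +RightCategory, -Result)
    combine('a>',  X/Y,     Y,   X).          % forward application
    combine('c>',  X/Y,     Y/Z, X/Z).        % forward composition
    combine('pc>', X/(Y/Z), Y/W, X/(W/Z)).    % predictive composition

    % the predictive (pc>) derivation above, step by step:
    % ?- combine('a>',  s/vp/np,  np, T1),    % did + John
    %    combine('pc>', s/(s/np), T1, T2),    % what + [did John]
    %    combine('a>',  T2, vp/np, S).        % ... + eat
    % T1 = s/vp, T2 = s/(vp/np), S = s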


In this talk we present mathematical results concerning Categorial
Grammars using predictive forms of function composition (G') and their
relation to a source grammar (G) that uses the classic form of function
composition. Among the results are that L(G) is a subset of L(G') but
that L(G') is not necessarily a subset of L(G) except under certain
additional restrictions.

------------------------------

Date: Tue, 29 Mar 88 15:37 EST
From: Anurag.Acharya@CENTRO.SOAR.CS.CMU.EDU
Subject: seminar - DIOGENES: A Natural Language Generation System


AI SEMINAR

TOPIC: DIOGENES: A Natural Language Generation System

SPEAKER: Sergei Nirenburg (412) 268-6593
109B BOM, Center for Machine Translation
Carnegie Mellon University, Pittsburgh, PA 15213
sergei@nl.cs.cmu.edu

WHEN: Tuesday, April 19, 1988 3:30pm

WHERE: Wean Hall 5409

ABSTRACT

I will describe DIOGENES, a natural language generation system under
development at the Center for Machine Translation at CMU. It is envisaged as
a component of a knowledge-based machine translation system. Two major
properties distinguish this system from other natural language generation
systems. First, the architecture chosen for this generator is a version of
the well-known blackboard approach to control. Second, unlike a majority of
current natural language generation systems, DIOGENES devotes substantial
attention to the problem of lexical selection, in addition to dealing with
syntactic realization tasks. In this talk I will describe a) the
distinguishing features of DIOGENES, b) the knowledge it uses, c) its
knowledge structures and architecture, and d) a sampling of DIOGENES lexical
selection algorithms. In order to illustrate the blackboard and the control
structure, I will trace a sample generation of a short text.


------------------------------

Date: Fri, 25 Mar 88 04:50 EST
From: Francis LOWENTHAL <PLOWEN%BMSUEM11.BITNET@CUNYVM.CUNY.EDU>
Subject: CFP: Language and Language Acquisition Conference 4


ANNOUNCING A CONFERENCE : LANGUAGE AND LANGUAGE ACQUISITION 4
=============================================================


CALL FOR PAPERS

FIRST ANNOUNCEMENT


Dear colleague,

It is my pleasure to invite you to the fourth
conference we are organizing on Language and Language Acquisition
at the University of Mons, Belgium.

The specific theme of this conference will be:
"LANGUAGE DEVELOPMENT AND COGNITIVE DEVELOPMENT"

Dates: August 22 to August 27, 1988
Place: University of Mons.

The aim of this meeting is to further interdisciplinary and
international collaboration among researchers connected in one
way or another with the field of communication and its underlying
logic: this includes studies concerning normal children as well as
handicapped subjects.

Five topics have been chosen: Mathematics, Philosophy,
Logic and Computer Sciences, Psycholinguistics, Psychology and
Medical Sciences. During the conference, each morning will be
devoted to two 45-minute lectures on one of these domains, and
to a broad discussion of all the papers already presented.
The afternoon will be devoted to short presentations by
panelists and to further discussion of the panel and
everything that preceded it.

There will be no parallel sessions and, as the organizers
want to encourage discussion between the participants as much as
possible, it has been decided to limit the number of participants
to 70. The selection procedure will be supervised by an
international committee.

Further information and registration forms can be
obtained by old-fashioned mail or by e-mail from:

F. LOWENTHAL
Universite de l'Etat a Mons
Laboratoire N.V.C.D.
Place du Parc, 20
B-7000 MONS (Belgium)
tel : (32)65.37.37.41
TELEX 57764 - UEMONS B
E-MAIL : PLOWEN@BMSUEM11.bitnet

Please feel free to pass this call for papers on
to other potentially interested researchers.

Thank you for your help and best wishes for 1988.

F. LOWENTHAL
JANUARY 7, 1988
Acknowledge-To: <PLOWEN@BMSUEM11>

------------------------------

End of NL-KR Digest
*******************
