NL-KR Digest             (5/28/87 17:33:56)            Volume 2 Number 44 

Today's Topics:
Conceptual Information Research
references re (approximate) structure matching
Re: S as an Object of Grammatical Study
NL vs AL
From CSLI Calendar, May 21, No.29
From CSLI Calendar, May 28, No.30

----------------------------------------------------------------------

Date: Mon, 25 May 87 12:51 EDT
From: Walter Bunch <mcvax!ukc!its63b!hwcs!aimmi!walt@seismo.css.gov>
Subject: Conceptual Information Research


I'm interested in researching knowledge representation as part of a PhD
program.

What universities are supporting research in the use and properties of
conceptual information, e.g. in light of Sowa's "Conceptual Structures"
(1984)?

I've read about some work going on at RMIT (Royal Melbourne Institute of
Technology).
I suppose that anyone grappling with frames/schemas/etc. could say they
are exploring the use of conceptual information, in a broad sense.

My interest is less in the application of schema-like data structures to
specific problems than in the manipulation of the structures themselves,
e.g. in generic conceptual recognition and generalization.
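
To make the sort of manipulation I mean concrete, here is a toy Common
Lisp sketch (the type hierarchy and all names are invented, not taken
from Sowa): generalization of two concepts to their least common
supertype, in the spirit of conceptual-graph operations.

    ;; Toy sketch only: an invented type hierarchy, not Sowa's.
    (defparameter *supertype*
      '((cat . mammal) (dog . mammal) (mammal . animal) (animal . entity)))

    (defun ancestors (type)
      "List TYPE and all of its supertypes, most specific first."
      (if type
          (cons type (ancestors (cdr (assoc type *supertype*))))
          nil))

    (defun generalize (a b)
      "Least common supertype of concepts A and B, or NIL if none."
      (find-if (lambda (s) (member s (ancestors b))) (ancestors a)))

    ;; (generalize 'cat 'dog) => MAMMAL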

As an aside: Is there anyone working with conceptual structures on
connectionist machines?

Thanks in advance,
Walt
--
Walter Bunch, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
UUCP: walt@uk.ac.hw.aimmi
ARPA: walt%aimmi.hw.ac.uk@cs.ucl.ac.uk
JANET: walt@uk.ac.hw.aimmi "Is that you, Dave?"

------------------------------

Date: Wed, 27 May 87 15:10 EDT
From: Roland Zito-Wolf <RJZ@JASPER.PALLADIAN.COM>
Subject: references re (approximate) structure matching

I am looking for references regarding the matching of complex structures
(matching on semantic networks or portions of networks) such as arise in
doing retrieval operations on knowledge bases so represented.
Since the general matching problem is most likely intractable, I'm
looking for approximate or incomplete techniques, such as partial match,
resource-bounded match, matches using preference rules, etc.
References which explore algorithms in detail, and implemented systems,
would be especially useful. For example, does anyone know of a detailed
description of the KRL matcher?
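
To pin down what I mean by a partial match, here is a toy Common Lisp
sketch (the representation and scoring are invented for illustration;
this is not the KRL matcher): score a query's overlap with a network
rather than demand an exact, intractable subgraph match.

    ;; Toy sketch only: networks as lists of (node relation node) triples.
    (defun partial-match-score (query network)
      "Fraction of QUERY triples literally present in NETWORK."
      (let ((hits (count-if (lambda (triple)
                              (member triple network :test #'equal))
                            query)))
        (/ hits (max 1 (length query)))))

    ;; (partial-match-score '((dog isa animal) (dog has tail))
    ;;                      '((dog isa animal) (cat isa animal)))
    ;; => 1/2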

Information on the more general problem of query/data-retrieval from
semantic networks would also be useful.

If there's sufficient interest, I'll post the results to the digest.
Thanks in advance.

Roland J. Zito-wolf
Palladian Software
4 Cambridge Center
Cambridge, Mass 02142
617-661-7171
RJZ%JASPER@LIVE-OAK.LCS.MIT.EDU

------------------------------

Date: Thu, 28 May 87 14:17 EDT
From: Bruce Nevin <bnevin@cch.bbn.com>
Subject: Re: S as an Object of Grammatical Study

In 2.41, David Pesetsky <PESETSKY%cs.umass.edu@RELAY.CS.NET> argues for
the validity of syntax research that concerns itself only with
sentential phenomena, and summarizes some nice work by Luigi Rizzi on
linguistic non-universals with respect to transposition, elision, and
wh-extraction of the subject.

If indeed we were discussing

DP> . . . the broad question of whether
DP> it is right to work on "sentence syntax" . . .

I can only agree that this is valid work. Of course! Indeed, it is a
truism that most constraints on word cooccurrence apply within the
boundary of the sentence.

My claim was only that syntax so construed is woefully incomplete, and
that many phenomena within the sentence cannot be understood--the issues
cannot even be stated--except in the context of a theory of discourse. I
also think that many of the theories of discourse (really, theories of
the social psychology of conversation) that are around today are not
terribly helpful precisely because they leave the syntactic structures
of discourses--the results of discourse analysis in the original sense
of the term--as a black box.

There is a deeper problem here, and that has to do with the relatively
recent practice (only about 30 years old) of relying on anecdotal
evidence in linguistics. THERE IS NO COMPLETE GENERATIVE GRAMMAR OF ANY
LANGUAGE. One has only fragments of grammars. The occasional attempts
at a grand synthesis (e.g. Stockwell, Schachter, and Partee's _The Major
Syntactic Structures of English_) have all failed.

This is the reason I emphasize the need for a comprehensive approach,
aiming for complete coverage from the outset. Attempts to generalize
from a seemingly successful fragment have not worked.

Re the linguistic phenomena that Rizzi discusses, I recommend studying
carefully section 3.1, `Linearizations and Transpositions', in Harris,
_A Grammar of English on Mathematical Principles_.

This 1982 book does describe a complete grammar of a natural language.
(Frawley's review in _Language_ is mistaken in a number of ways; for
example, Harris' operator grammar does not employ or depend upon
first-order predicate logic, and Harris is very much concerned with
establishing and using an empirically based semantic representation of
sentences and discourses. Cf. my review in _Computational Linguistics_
10.4.)

The principles given in this section for predicting what may be
extracted or elided from what may be transposed would apply in obvious
ways in a similarly comprehensive grammar of Italian, Spanish, or
Bani-Hassan Arabic, just as they have been applied in English and
French. Had Rizzi read
this, he might have arrived at his conclusions a couple of years
earlier.


Bruce Nevin
bn@cch.bbn.com

(This is my own personal communication, and in no way expresses or
implies anything about the opinions of my employer, its clients, etc.)

------------------------------

Date: Tue, 26 May 87 10:15 EDT
From: Michael T. Gately <gately%resbld%ti-csl.csnet@RELAY.CS.NET>
Subject: NL vs AL

I was thinking about the 'natural' vs. 'artificial' language discussion
and was beginning to believe that natural languages could be
characterized by people's use of them to communicate mental processes to
each other.

Of course you can see that this line of reasoning didn't work, but one
of the examples I came up with for its failure is interesting. I recall
being told a story about some LISP Machine programmers at MIT who used
to communicate in LISP. For example, a programmer might send a message
to another programmer reading "LUNCH-P." In the LISP language, a "-P"
suffix effectively asks a question - thus "Are you ready for lunch?"
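
For concreteness, here is the convention sketched in Common Lisp (this
definition is invented for illustration; the actual MIT usage was
presumably ad hoc):

    ;; By convention, a Lisp function named with a -P suffix is a
    ;; predicate, i.e., it answers a yes/no question.
    (defun lunch-p (hour)
      "Return T if HOUR (0-23) falls in a plausible lunch window."
      (and (>= hour 11) (<= hour 14)))

    ;; (lunch-p 12) => T
    ;; (lunch-p 9)  => NIL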

I also seem to remember reading a science fiction book in which two
computer types communicated in a language more like Pascal, although I
cannot remember any of the details. The book may have been _Valentina:
Soul in Sapphire_.

Regards,
Mike Gately
GATELY@TI-CSL.CSNET
--These comments are my own and do not reflect the beliefs of my employer.

[One can argue that such use of Lisp, or whatever, is still NL use: taken
literally, LUNCH-P should mean `is the object of type LUNCH?', which is not
what is asked. A great deal of inference about the intentions of the
`speaker' by the `hearer', most of it picked up from context, must go on for
the query to make semantic/pragmatic sense. Such inference does not go on in
(normal) programming languages. Thus we simply have a shorthand
natural-language utterance that depends on S and H sharing an understanding
that appending a P to the end of a noun makes it a question, similar to a
Lisp stylistic rule --- not a true case of a computer language being used for
human <-> human interaction. BWM]

------------------------------

Date: Thu, 21 May 87 11:50 EDT
From: Emma Pease <Emma@CSLI.Stanford.EDU>
Subject: From CSLI Calendar, May 21, No.29

[Excerpted from CSLI Calendar]

NEXT WEEK'S CSLI SEMINAR
Designing a Situated Inference Engine
Brian Smith
May 28

The premise is familiar: language and reasoning arise out of mutual
constraints between a representational system and its embedding
environment. The Situated Inference Engine (SIE) is a pilot project
exploring this perspective in the context of a simple scheduling
system. In this talk I will introduce the project, present the
language the system will use to "converse" with its users, and discuss
our current conception of its internal architecture.

NATURAL LANGUAGE PROCESSING
PANEL DISCUSSION
21 May 1987, 7:00 P.M.
Foothill College, Appreciation Hall, Room A61
(right next to the theater, next to the front stairs)

Panelists:
Jerrold Ginsparg or John Manferdelli, founders of Natural Language
Products, Berkeley, CA

Gary Hendrix, Founder and VP of Advanced Technology, Symantec Inc.

Ray Perrault, Director of NLP, AI Center, SRI International

Dan Flickenger, Manager of the NLP Project, Hewlett-Packard Labs

Organized by the local AI committee of the IEEE.

Admission is free.

------------------------------

Date: Thu, 28 May 87 11:08 EDT
From: Emma Pease <Emma@CSLI.Stanford.EDU>
Subject: From CSLI Calendar, May 28, No.30

[Excerpted from CSLI Calendar]

NEXT WEEK'S CSLI SEMINAR
Situated Language, Situated Processing
Susan Stucky
June 4

Here's a familiar way that natural-language processing systems get
built: you pick your favorite theory of language; then you code up a
grammar formalism that is appropriate to the theory (and, if you're
lucky, to computational implementation). Next, you implement a parser.
And finally, you put in place a mechanism for deriving a semantic
representation from the syntactic representation produced by the
parser. When you go to conferences or talk with your colleagues, the
discussions you have are about what sorts of formalism are most
appropriate for computational implementation--for example, which of
the various semantic representations (first-order logic, situation
semantics, story schemata, what have you) best do justice to natural
language. In short, the search is for the best theory and/or
formalism that does justice both to natural language and to the
machine.

There is growing evidence, however, that suggests the need for a
re-evaluation of that assumption. One might suppose that the biggest
challenge would come from fundamental differences between humans (and
hence, human language) and computers, and there have been some
arguments to this effect. But, oddly enough, there is actually an
argument that comes from a fundamental similarity in how language is
used and how computers are used. Language is efficient, we have
learned, in the sense that it depends on context for interpretation.
And so is computational state, or so it can be argued. On the other
hand, the context that determines the interpretation of the one is
not necessarily the context that determines the interpretation of the
other. This discrepancy is at the heart of things, I believe, and
should cause us to think hard about the relation between theories of
language and computational implementation, and then about the nature
of language use itself.

In the seminar, I will present the argument outlined above and, in
the context of the Situated Inference Engine Project (which you will
have heard about the week before), draw some of the consequences it
has for a theory of language use that will do justice to language,
machines, and, if we're on the right track, humans too.
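
For readers unfamiliar with the pipeline the abstract describes, here
is a schematic toy sketch in Common Lisp (every name is invented; it
stands in for no real system): a grammar formalism, a parser, and a
semantics step.

    ;; Schematic toy pipeline, invented for illustration. The "parser"
    ;; hard-codes a single rule rather than interpreting *grammar*.
    (defparameter *grammar* '((s (np vp)) (np (name)) (vp (verb))))

    (defun parse (words)
      "Stand-in parser: wrap a two-word sentence in an S tree."
      (list 's (list 'np (first words)) (list 'vp (second words))))

    (defun semantics (tree)
      "Stand-in semantic derivation: tree -> (predicate argument)."
      (list (second (third tree)) (second (second tree))))

    ;; (semantics (parse '(kim sleeps))) => (SLEEPS KIM)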

------------------------------

End of NL-KR Digest
*******************
