
NL-KR Digest Volume 03 No. 44


NL-KR Digest             (11/05/87 22:52:04)            Volume 3 Number 44 

Today's Topics:
WASP Parser
dynamic KB restructuring
Prolog Partial Evaluation ....
Statistical measures of Englishness
Re: false dichotomies
Langendoen and Postal (posted by: Berke)

Seminars:
A Free-Word Order Parser (MIT - past)
Experience, Memory, and Reasoning (MIT - past)

----------------------------------------------------------------------

Date: Tue, 27 Oct 87 05:48 EST
From: Donal O'Mahony <DOMAHONY%IRLEARN.BITNET@wiscvm.wisc.edu>
Subject: WASP Parser

Can anybody tell me where I can get hold of the source for a WASP parser
written in SCHEME (or any dialect of LISP) that I can use for instructional
purposes?

Donal O'Mahony OMAHONY@TCDCS.UUCP or DOMAHONY@IRLEARN.BITNET
Computer Science Dept.,
Trinity College,
Dublin 2,
Ireland.

------------------------------

Date: Mon, 2 Nov 87 13:49 EST
From: Len%AIP1%TSD%atc.bendix.com@RELAY.CS.NET
Subject: dynamic KB restructuring

Date: Mon, 2 Nov 87 14:51 EST
From: Len Moskowitz <Len@HEART-OF-GOLD>
Subject: dynamic KB restructuring
To: "3077::IN%\"NL-KR@cs.rochester.edu\""@TSD1
Message-ID: <871102145150.0.LEN@HEART-OF-GOLD>

In NL-KR Digest, Vol. 3 No. 41, Bruce Nevin (bnevin@cch.bbn.com) writes:

> Has there been any work on the problem of reorganizing knowledge bases
> in light of evidence for a different set of conceptual categories?
> I'm thinking of a `hindsight is better' realization that there is
> something seriously deficient or limiting in the system of concepts and
> links used at the beginning of building a KR system.

I've been working on something that may address a part of this. As
Dreyfus and Dreyfus point out, for the most part AI takes a microworld
perspective; for convenience, we make believe we completely understand what we
are representing. Thus, we build our KRs to be conceptual (i.e., categorical,
representing relations between concepts), expressing exactly the assumed
structure. When we find that the concept-organized structure we assumed is
inadequate, restructuring is difficult or undoable. Examples of the conceptual
approach are Kolodner's E-MOPs, Lebowitz's UNIMEM, and the more formal KRs
(e.g., KL-ONE, KRYPTON, KODIAK). For both Kolodner and Lebowitz, restructuring
of the knowledge bases in light of new knowledge (the process of recovering
from erroneous generalizations, or recovery from concept overextension due to
training instance presentation order) is a hairy, ad hoc operation that can
potentially lose information.

The limitations inherent in these models stem from organizing the KB
conceptually. We force the rich, detailed knowledge we acquire experientially
into abstract, sparse concepts, and throw away the knowledge not used to
arrive at the concept (i.e., the experiences). The "concept" becomes the
primary organizing factor in the KR. An alternative is to organize the KRs at
a lower level, an experiential one expressing the richness of an experience
(perceptual or otherwise). If we preserve the experiential information when we
transform it from experiential to conceptual levels, we should be better able
to reorganize the KB in light of new knowledge. And since the KB is not
organized by concept, we don't have to have just one version of a particular
concept. We can have as many as experience dictates, providing a measure of
diversity in our KB all along. This may reduce the need for the
recovery/reorganizing process.

This idea doesn't prescribe how to restructure the knowledge base. It
just provides a KR that may allow for it.
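
A minimal sketch of the contrast, in Python (class and feature names are
invented for the example, not taken from any of the systems above):
experiences are stored whole, and a "concept" is just a recomputable
grouping over them, so reorganizing the KB means regrouping rather than
undoing earlier generalizations.

    # Illustrative sketch only: every experience is kept verbatim; "concepts"
    # are derived groupings that can be discarded and rebuilt under a new
    # categorization scheme without losing the underlying experiences.

    class ExperientialKB:
        def __init__(self):
            self.experiences = []            # full, unabstracted records

        def record(self, experience):        # experience: a dict of features
            self.experiences.append(experience)

        def conceptualize(self, categorize):
            # Build a conceptual view under some categorization function.
            # A different categorize() gives a different view over the same
            # stored experiences; nothing has been thrown away.
            concepts = {}
            for e in self.experiences:
                concepts.setdefault(categorize(e), []).append(e)
            return concepts

    kb = ExperientialKB()
    kb.record({"animal": "sparrow", "flies": True,  "size": "small"})
    kb.record({"animal": "penguin", "flies": False, "size": "medium"})

    by_flight = kb.conceptualize(lambda e: "flier" if e["flies"] else "non-flier")
    by_size = kb.conceptualize(lambda e: e["size"])   # restructuring = regrouping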

Len Moskowitz
moskowitz%bendix.com@relay.cs.net (ARPAnet)
moskowitz@bendix.com (CSnet)
moskowit@rutgers.edu

------------------------------

Date: Tue, 3 Nov 87 14:50 EST
From: Arun Lakhotia <arun@mandrill.CWRU.Edu>
Subject: Prolog Partial Evaluation ....

I am organizing a mailing list of people interested in Partial
Evaluation in/with Prolog.

I already have a list of about 15 *groups* active in this area.
The list primarily consists of people who attended the workshop on
Partial Evaluation and Mixed Computation, Denmark Oct 17-24, 87.

If there are other people who would like to be included in the list,
please send me mail.

Folks interested in a more general group (logic and functional) may
get in touch with Olivier Danvy (..!mcvax!diku!danvy)

**** ESTABLISHING COMMUNICATION ****

For folks who gave me their names during the workshop:

I am sending mail to individuals to verify that the e-mail addresses
given to me work. Some of the mails have already started to
bounce. If you do not hear from me in the next week, please try to get
in touch with me.

Arun Lakhotia


arun@mandrill.cwru.edu
..!{cbosgd, decvax}!mandrill!arun


------------------------------

Date: Tue, 3 Nov 87 14:58 EST
From: Michael Friendly <FRIENDLY%YORKVM1.BITNET@wiscvm.wisc.edu>
Subject: Statistical measures of Englishness

I'm looking for statistical measures of Englishness one can compute
for non-word strings like BUNE or BNUE that capture the degree to
which the string is composed of familiar English letter sequences.

I have available the Kucera-Francis word frequency counts of written
English, from which I can obtain the frequency of occurrence of
the 27 characters A-Z and space, as well as frequencies of bigrams and
trigrams. Going beyond trigrams, however, is extremely difficult.

Simply using the average, say, bigram frequency of a letter string
is not very satisfactory, since an extremely unlikely bigram, like
VL or MB, can be balanced out by one or more extremely likely ones,
and the frequency distribution of bigram frequencies is extremely
long-tailed, much like a lognormal.

The frequency counts for bigrams in VLAD and BLAN below show what such
data look like:
1 VLAD V 6506
2 VLAD VL 5
3 VLAD LA 18443
4 VLAD AD 16284
5 VLAD D 105465

1 BLAN B 46279
2 BLAN BL 8898
3 BLAN LA 18443
4 BLAN AN 72699
5 BLAN N 87770

I've found one paper that proposes a measure based on log conditional
probabilities. (Travers & Olivier, ``Pronounceability and statistical
"Englishness" as determinants of letter identification'', Amer. J.
Psychol., 1978, 91, 523-538)

I'm wondering if someone can steer me to any other (more recent?)
papers on this topic.
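
As a rough illustration of the log-conditional-probability idea (Python; the
counts below are toy stand-ins, not the actual Kucera-Francis figures), one
can score a string by summing log P(next letter | previous letter), so that
a single rare bigram such as VL pulls the score down instead of being
averaged away:

    import math

    # Toy counts standing in for the Kucera-Francis figures; real unigram and
    # bigram counts (including the space character) would be plugged in here.
    unigram = {"V": 6506, "L": 30000, "A": 250000, "D": 105465,
               "B": 46279, "N": 87770}
    bigram = {"VL": 5, "LA": 18443, "AD": 16284, "BL": 8898, "AN": 72699}

    def log_englishness(word, smoothing=0.5):
        # Sum of log P(c[i] | c[i-1]), with crude add-half smoothing so
        # unseen bigrams get a small but nonzero probability.  Higher
        # (less negative) means more English-like.
        score = 0.0
        for prev, cur in zip(word, word[1:]):
            joint = bigram.get(prev + cur, 0) + smoothing
            marginal = unigram.get(prev, 0) + smoothing * 27   # 27 symbols
            score += math.log(joint / marginal)
        return score

    print(log_englishness("VLAD"))   # hurt badly by the rare VL
    print(log_englishness("BLAN"))   # all bigrams common, higher score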

------------------------------

Date: Tue, 27 Oct 87 02:29 EST
From: Sarge Gerbode <sarge@thirdi.UUCP>
Subject: Re: false dichotomies


In article <344@odyssey.ATT.COM> gls@odyssey.ATT.COM (g.l.sicherman) writes:
>> I don't think a concept is a pattern, but even if a concept were a pattern,
>> that still doesn't say that it would be perceptible. Lots of patterns are
>> only *inferred* and cannot be perceived.
>
>It's the rule that you infer, not the pattern. You must perceive a pattern
>before you can establish a rule!

I think I see what you mean. You seem to be defining "pattern" to mean a
perception. That's OK, so long as that usage is consistent. However,
sometimes one speaks of pattern in different ways, such as "a pattern of
crimes"
. One may not have perceived any of these crimes, but they appear to
have a certain regularity. Per what you have said, I imagine you would call
this an inferred rule, not a pattern. I can go along with this, although there
is a sense in which *any* perceived pattern involves a certain degree of
inference or interpretation of perception, not a pure perception.

>The nature of illusion was discussed here a few months ago. Do you know
>the story of the sage who dreamed he was a butterfly? Afterwards he was
>never sure that he was not a butterfly dreaming it was a sage.

But what one accepts as reality is not entirely arbitrary as this example
appears to suggest. Standards of consistency, as well as predictability,
order, aesthetics, and other considerations relevant to credibility, come
into play in determining which interpretation of his experience a person
accepts.

>The only difference it makes to _you_ is that you interpret what you
>perceive differently.

Another way of putting this is that one "steps back" from the previous
perception one had and sees it now as, not a perception, but a
misinterpretation of a "lower level" perception. For instance, one sees a
ghost in the dead of night. Then, when the light comes, one sees that it was
only a shirt on a chair. In looking at one's prior perception, one "steps
back" from the interpretation of it as a ghost and sees "ghost" as having been
a mere interpretation (in fact, an incorrect one) of the "lower-level"
perception of "a whitish shape in my room". That doesn't mean that it was
not originally a perception. But now one looks from a different viewpoint,
minus that interpretation. Any act of perception contains multiple levels of
interpretation, into which one can "step forward" or from which one can "step
back".

>The only difference it makes to others is that
>they may want to rely on what you say. If you shout, "Look out! Here
>come the flying pigs again!" they can choose between running for cover
>and looking foolish when a herd of swine walks past, or laughing, "Ha,
>ha! Sarge has been dreaming again!" and (just maybe) being killed by
>flying pigs.

Yes. One selects amongst possible interpretations of perceptions those that
are conducive to the establishment of a universe of discourse with other
people. If not, one is likely to be very lonely at best and at worst end up in
a back ward somewhere or dead.

>On the other hand, if you're dealing with Old Age people, you just say
>"I had a dream of flying pigs," and they answer, "We don't believe you!
>Where is your dream? Let's see it!"


Great point. Those who do not believe in telepathy demand telepathy as proof!

--
"Absolute knowledge means never having to change your mind."

Sarge Gerbode
Institute for Research in Metapsychology
950 Guinda St.
Palo Alto, CA 94301
UUCP: pyramid!thirdi!sarge

------------------------------

Date: Sun, 1 Nov 87 11:34 EST
From: berke@CS.UCLA.EDU
Subject: Langendoen and Postal (posted by: Berke)

I just read this fabulous book over the weekend, called "The Vastness
of Natural Languages,"
by D. Terence Langendoen and Paul M. Postal.

If you have read this, I have some questions, and could use some help,
especially on the more linguistic aspects of the book.

Are Langendoen or Postal on the net somewhere? They might be in England;
the publisher is Blackwell, 1984.

Their basic proof/conclusion holds that natural languages, as linguistics
construes them (as products of grammars), are what they call mega-collections,
which Quine calls proper classes, and which some people hold cannot exist. That is,
they maintain that (1) Sentences cannot be excluded from being of any,
even transfinite size, by the laws of a grammar, and (2) Collections of
these sentences are bigger than even the continuum. They are the size
of the collection of all sets: too big to be sets.

It's wonderfully written. Clear wording, proofs, etc. Good reading.
Help!

Regards,
Pete

------------------------------

Date: Mon, 2 Nov 87 12:00 EST
From: William J. Rapaport <rapaport@sunybcs.uucp>
Subject: Re: Langendoen and Postal (posted by: Berke)

In article <8941@shemp.UCLA.EDU> berke@CS.UCLA.EDU (Peter Berke) writes:
>I just read this fabulous book over the weekend, called "The Vastness
>of Natural Languages," by D. Terence Langendoen and Paul M. Postal.
>
>Are Langendoen or Postal on the net somewhere?

Langendoen used to be on the net as: tergc%cunyvm@wiscvm.wisc.edu

but he's moved to, I think, U of Arizona. Postal, I think, used to be
at IBM Watson.

------------------------------

Date: Mon, 2 Nov 87 15:23 EST
From: Russell Turpin <turpin@ut-sally.UUCP>
Subject: Re: Langendoen and Postal (posted by: Berke)

In article <8941@shemp.UCLA.EDU>, berke@CS.UCLA.EDU writes:
> I just read this fabulous book over the weekend, called "The Vastness
> of Natural Languages," by D. Terence Langendoen and Paul M. Postal.
> ...
>
> Their basic proof/conclusion holds that natural languages, as linguistics
> construes them (as products of grammars), are what they call mega-collections,
> Quine calls proper classes, and some people hold cannot exist. That is,
> they maintain that (1) Sentences cannot be excluded from being of any,
> even transfinite size, by the laws of a grammar, and (2) Collections of
> these sentences are bigger than even the continuum. They are the size
> of the collection of all sets: too big to be sets.

Let me switch contexts. I have not read the above-mentioned book,
but it seems to me that this claim is just plain wrong. I would
think a minimum requirement for a sentence in a natural language
is that some person who knows the language can read and
understand the sentence in a finite amount of time. This would
exclude any infinitely long sentences. Perhaps less obviously, it
also excludes infinite languages. The reason is that there will
never be more than a finite number of people (ET's included), and
that each will fail to parse sentences beyond some maximum
length, given a finite life for each. (I am not saying that
natural languages include only those sentences that are in fact
spoken and understood, but that only those sentences that could
be understood are included.)

In this view, infinite languages are solely a mathematical
construct.

Russell

------------------------------

Date: Tue, 3 Nov 87 06:42 EST
From: Greg Lee <lee@uhccux.UUCP>
Subject: Re: Langendoen and Postal (posted by: Berke)


In article <9445@ut-sally.UUCP> turpin@ut-sally.UUCP (Russell Turpin) writes:
>In article <8941@shemp.UCLA.EDU>, berke@CS.UCLA.EDU writes:
>> I just read this fabulous book over the weekend, called "The Vastness
>> of Natural Languages," by D. Terence Langendoen and Paul M. Postal.
>> ...
>
>Let me switch contexts. I have not read the above-mentioned book,
>but it seems to me that this claim is just plain wrong. I would
> ...
>also excludes infinite languages. The reason is that there will
>never be more than a finite number of people (ET's included), and
> ...
>Russell

Although the number of sentences in a natural language might be
finite, the most appropriate model for human language processing
might reasonably assume the contrary. Suppose, for instance, that
we wish to compare the complexities of various languages with
regard to how easily they could be used by humans, and that we
take the number of phrase structure rules in a phrase structure
grammar as a measure of such complexity. A grammar to generate
100,000 sentences of the pattern "Oh boy, oh boy, ...!" would be
much more complex than a grammar to generate an infinite number
of such sentences. And the pattern seems easy enough to learn ...
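
A concrete illustration (Python; purely a toy recognizer): one recursive
rule, or equivalently one Kleene star, covers the unbounded "Oh boy" pattern,
while a grammar limited to exactly 100,000 such sentences would need some
device that enumerates each permitted length.

    import re

    # A tiny recognizer for "Oh boy!", "Oh boy, oh boy!", and so on without
    # bound.  The single starred group plays the role of the one recursive
    # phrase structure rule.
    def is_oh_boy_sentence(s):
        return re.fullmatch(r"Oh boy(, oh boy)*!", s) is not None

    print(is_oh_boy_sentence("Oh boy!"))                    # True
    print(is_oh_boy_sentence("Oh boy, oh boy, oh boy!"))    # True, at any depth
    print(is_oh_boy_sentence("Oh boy, oh girl!"))           # False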

Concerning the length of sentences, I think Postal and Langendoen
are not very persuasive. Most of their arguments are to the
effect that previous attempts to demonstrate that
sentences cannot be of infinite length are incorrect. I think
they make that point very well. But obviously this is not
enough to show that one should assume some sentences of infinite
length.
Greg Lee, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: Tue, 3 Nov 87 09:56 EST
From: David J. Hutches <djh@beach.cis.ufl.edu>
Subject: Re: Langendoen and Postal (posted by: Berke)


In article <9445@ut-sally.UUCP> turpin@ut-sally.UUCP (Russell Turpin) writes:
>In article <8941@shemp.UCLA.EDU>, berke@CS.UCLA.EDU writes:
>> ... That is,
>> they maintain that (1) Sentences cannot be excluded from being of any,
>> even transfinite size, by the laws of a grammar, and (2) Collections of
>> these sentences are bigger than even the continuum. They are the size
>> of the collection of all sets: too big to be sets.
>
>... I would
>think a minimum requirement for a sentence in a natural language
>is that some person who knows the language can read and
>understand the sentence in a finite amount of time. This would
>exclude any infinitely long sentences.
>
>Russell

Because of the processing capabilities of human beings (actually, on a
person-by-person basis), sentences of greater and greater length (and
complexity) are more and more difficult to understand. Past a certain
point, a human being will go into cognitive overload when asked to
process a sentence which his or her capacities (short-term memory, stack
space, whatever you want to call it) are not designed to handle. What
the human being can, in practice, process and what is *possible* in a
language are two different things. I think that some theories of
language/grammar explain the production of grammatical sentences
by means of a generative model. In such a model, it
is possible to generate sentences of potentially infinite length, even
though it would not be possible for a human being to understand them.

== David J. Hutches CIS Department ==
== University of Florida ==
== Internet: djh@beach.cis.ufl.edu Gainesville, FL 32611 ==
== UUCP: ...{ihnp4,rutgers}!codas!ufcsv!ufcsg!djh (904) 335-8049 ==

------------------------------

Date: Mon, 5 Oct 87 18:46 EDT
From: Peter de Jong <DEJONG%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Cognitive Science Calendar

[Extracted from IRLIST]

Date: Friday, 2 October 1987 16:17-EDT
From: Paul Resnick <pr at ht.ai.mit.edu>
Re: AI Revolving Seminar-- Michael Kashket

Thursday, October 8, 4:00pm     Room: NE43, 8th floor Playroom

The Artificial Intelligence Lab
Revolving Seminar Series

A Free-Word Order Parser

Mike Kashket
(kash@oz.ai.mit.edu)

MIT AI Lab


Free-word order languages (where the words of a sentence may be spoken
in virtually any order with little effect on meaning) pose a great
problem for traditional natural language parsers. Standard, rule-based
parsers have operated with some degree of success on fixed-word order
languages (e.g., English), relying on the order between words to
drive the construction of the parse tree. In order to cover the varying
sequences of free word order, however, these parsers have had to use
grammars that contain one rule for each permutation of a sentence. The
result was a linguistically uninteresting parse that did not even
represent the basic distinction between the verb's subject and object.

A shift from rule-based to principle-based parsing seems to be the
answer. A parser grounded on a linguistically principled theory---in
this case, the recently developed Government-Binding theory---has a
grammar that consists of independent modules, each representing a
different facet of the language. For order phenomena two
representations are mandated: one that encodes linear precedence, and
one that encodes hierarchical, syntactic relations (such as subject and
object). In this scheme, linear ordering is represented only where it
is syntactically relevant.
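
By way of illustration only (Python; this is not the speaker's
implementation, and the case suffixes are invented, not real Warlpiri), the
separation can be pictured as reading grammatical relations off the
morphology, so that any permutation of the words yields the same
hierarchical structure:

    # Toy "case morphology": relations come from suffixes, never from order.
    CASE_SUFFIXES = {"-erg": "subject", "-abs": "object"}

    def parse_free_order(words):
        # Return grammatical relations without consulting linear order.
        relations = {}
        for w in words:
            for suffix, role in CASE_SUFFIXES.items():
                if w.endswith(suffix):
                    relations[role] = w[: -len(suffix)]
                    break
            else:
                relations["predicate"] = w
        return relations

    # Any permutation of the words yields the same relations:
    print(parse_free_order(["dog-erg", "sees", "cat-abs"]))
    print(parse_free_order(["cat-abs", "dog-erg", "sees"]))
    # both map dog -> subject, cat -> object, sees -> predicate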

This new parsing technique should also work for fixed-word order
languages. Here we take advantage of the parameters of GB theory.
The claim is that, rather than allowing unconstrained differences
between grammars, we can account for the variation among languages of
the world by encoding the grammar for each language with only a simple,
finite list of parameter settings. For ordering phenomena, there are
two parameters: the part of speech that identifies the subject and the
object, and whether words or morphemes are involved.

In this talk, I will present an implemented, GB-based parser that
handles Warlpiri, a free-word order aboriginal language of central
Australia. I will also discuss the promise of this approach for
handling fixed-order languages such as English.

Ngakarnanyarra nyanyi.
(All come.)

------------------------------

Date: Tue, 20 Oct 87 09:58 EDT
From: Peter de Jong <DEJONG%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Cognitive Science Calendar

[Extracted from IRList]

Date: Monday, 19 October 1987 17:51-EDT
From: Paul Resnick <pr at ht.ai.mit.edu>
Re: AI Revolving Seminar Thursday-- Janet Kolodner


Thursday, October 22, 4:00pm     Room: NE43, 8th floor Playroom

The Artificial Intelligence Lab
Revolving Seminar Series

Experience, Memory, and Reasoning

Janet Kolodner


Much of the reasoning people do is based on previous experiences
similar to their current situation. The process of using a previous
experience to reason about a current one is called case-based
reasoning. In case-based reasoning, a reasoner remembers a
previous case and then adapts it to fit the current situation.
A reasoner that uses case-based reasoning can take reasoning
shortcuts, avoid previously-made errors, and focus on important
parts of a problem and important knowledge that might otherwise
have been missed.

To build usable case-based reasoning systems on the computer,
we must discover how best to make case-based inferences,
how best to organize and retrieve cases in memory, and how to
integrate case-based reasoning with other reasoning methods.
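
A bare-bones sketch of the retrieve-and-adapt cycle described above (Python;
the similarity measure, the cases, and the adaptation rule are placeholder
assumptions, not material from the talk):

    # Retrieve the most similar stored case, then adapt its solution by
    # copying over any features the new problem specifies differently.
    def similarity(a, b):
        # Count features on which the two descriptions agree.
        return sum(a[k] == b[k] for k in set(a) & set(b))

    def solve(problem, case_library):
        best = max(case_library, key=lambda c: similarity(problem, c["problem"]))
        solution = dict(best["solution"])
        for k in solution:
            if k in problem:
                solution[k] = problem[k]
        return solution, best

    cases = [
        {"problem":  {"dish": "soup", "servings": 4, "diet": "vegetarian"},
         "solution": {"main": "tofu", "servings": 4}},
        {"problem":  {"dish": "stew", "servings": 2, "diet": "none"},
         "solution": {"main": "beef", "servings": 2}},
    ]

    new_problem = {"dish": "soup", "servings": 6, "diet": "vegetarian"}
    solution, retrieved = solve(new_problem, cases)
    print(solution)   # {'main': 'tofu', 'servings': 6}: soup case reused, adapted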

In this talk, I will present several case-based reasoning
methods and discuss some of the problems involved in developing
case-based problem-solving systems. Examples will come from
several expert and common-sense domains and from several
experimental programs.

------------------------------

End of NL-KR Digest
*******************
