NL-KR Digest             (1/05/88 00:08:23)            Volume 4 Number 2 

Today's Topics:
Language engineering
Computer Languages as Social/Philosophical Models
Early Linguistic Emphasis
Re: Linguistics & artificial language design ("linguistic science")
Re: Language Learning (a Turing test)

Seminar - Intelligent Agents as NL Interfaces
Seminar - The lexicon in human speech understanding
Seminar - Lang. & Cognition
Thiersch Seminar

Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Wed, 30 Dec 87 10:34 EST
From: WATKINS@rvax.ccit.arizona.edu
Subject: Language engineering

I believe there is some (very small-scale) language engineering going on
out there, on English idioms anyway--done by advertisers, mostly.

I am interpreting engineering as deliberate creation of new mechanisms
intended to serve a purpose specified in advance (though of course
serendipity applies). If one is simply tinkering with language to figure
out how it ticks and/or how it can tick, that's closer to
experimentation...though where language is concerned it's hard to construct
experiments with adequate controls, repeatability, and other
characteristics required by the physical sciences.

Thus one measure of an engineered product lies in its ultimate use to do
something else (fly, heal the injured, teach children, communicate
accurately); and the test of the product's success lies in the success of
that use. On these grounds I find such products as Esperanto and Loglan
marginally successful at best; most computer languages eminently
successful; and the products of a few (lucky? ingenious?) advertisers quite
good too.

But this attitude toward engineering makes us all engineers of a sort (more
or less deliberate, more or less successful). I engineer our family
dialect quite deliberately sometimes, as my husband engineers our door
latches for our small children. Maybe it would be useful to define more
closely what we mean when we talk of someone engineering a language
(bearing in mind also that many engineering projects are group efforts).

------------------------------

Date: Sun, 3 Jan 88 14:41 EST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Computer Languages as Social/Philosophical Models

As I recall earlier discussions of computer languages (or attempts
to avoid such discussions), the claim was made that since such languages
are artificial, there is nothing unknown about them and hence nothing
in need of study.

Since modern CLs were designed with BNF grammars in mind, there
doesn't seem to be much point in studying their syntax. Their
semantics as formal systems also seems well understood, although
subroutine calls and other interfaces create interesting complexities.
The linguistically interesting thing about these languages is, or
should be, their adequacy.
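
To see why the syntax side is considered closed, note that a BNF grammar
pins the syntax down completely, and a recognizer falls out of the grammar
almost mechanically. A minimal sketch in Python (the toy grammar and all
names are invented for illustration, not drawn from any particular
language):

    # Grammar:  expr ::= term (("+" | "-") term)*
    #           term ::= NUMBER | "(" expr ")"
    import re

    def tokenize(s):
        return re.findall(r"\d+|[-+()]", s)

    def parse_term(toks, i):
        # A term is a NUMBER or a parenthesized expr.
        if i < len(toks) and toks[i].isdigit():
            return i + 1
        if i < len(toks) and toks[i] == "(":
            i = parse_expr(toks, i + 1)
            if i < len(toks) and toks[i] == ")":
                return i + 1
        raise SyntaxError("bad term at token %d" % i)

    def parse_expr(toks, i=0):
        # One clause of the grammar, transcribed directly.
        i = parse_term(toks, i)
        while i < len(toks) and toks[i] in "+-":
            i = parse_term(toks, i + 1)
        return i

    toks = tokenize("(1+2)-3")
    assert parse_expr(toks) == len(toks)   # the whole string is an expr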

CLs and Computer Science are largely concerned with precise characterization
of process and information flow. LISP, in particular, has been touted
as an appropriate representation language for both structured knowledge
and operations on that knowledge -- a strong claim, similar to those
made for various logics.

The "connectionists" have recently begun making similar claims for
some of their networks. They do not offer well-developed languages
or logics yet, but are moving in that direction. Efforts are being
made to embed LISP in connectionist representations and vice versa.
Rather than wish the connectionists luck, I hope that they fail in the
unification but discover spatio-temporal representations of process
and information beyond anything yet available -- or at least
comparable to the power of natural language.

Another modern trend in computer science is the study of distributed
systems, either as interacting autonomous agents or as cooperating
processes. Contract nets and other models of human social
organizations are being tested and characterized, and someday we
may develop effective languages and logics for this work. This
experimental science should be of interest to many linguists.

Current computer languages are very limited compared to natural
ones, but in their own domains they model the structure and
possibilities of the real world better than do natural languages.
Would that we could develop such rigor in human thought and
communication, where algorithmic notions are poorly expressed.

The study of natural languages tends toward ever-increasing discussion
(knowledge?) of many inconsistent views of reality, both because these
views (i.e., languages) are inherently interesting and because we hope
that their intersection will lead to a consistent view of reality.
The design of computer languages tends toward ever-decreasing
complexity as we learn to combine the functions of older languages
in more powerful ones, thus homing in on the consistent view that
linguists and philosophers seek.

-- Ken Laws
------------------------------

Date: Sun, 3 Jan 88 15:27 EST
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Early Linguistic Emphasis

A hypothesis in cognitive psychology:

We all know that first/only children go to MIT and get listed in
Who's Who. This may be due to many environmental factors, particularly
those related to interaction with siblings. A linguistic influence
(for general IQ) has also been posited: that first children are exposed
to adult conversation whereas later children are exposed to childish
speech and thought patterns.

I would like to suggest another linguistic influence, one that might
be tested by experiment. I notice that my third child is a happy
baby, exchanging eye contact and grins with everyone around him. I
suspect that he is going to be a "people person" rather than a "nerd".
Is this genetic or environmental? What makes him different from my
first child, also a boy, who strikes me as more interested in the
physical world? A linguistic difference in their upbringing is that
the first words we are teaching the third child, after "mama" and
"dada", are the names of his siblings. We have also been pointing out
Big Bird and other characters on TV, but have made no special effort
to name the dozens (hundreds?) of toys lying around the house.

My firstborn, on the other hand, was taught the names of physical
objects long before other people or TV characters became important
in his life. Each new toy was presented as something special, to
be learned and studied. He was drilled on the alphabet rather than
learning it from simple exposure as my second child did. I will
never know whether differences between my children are due to such
differences in parental emphasis, but I hypothesize that such an
effect could be measured by a controlled experiment.

The linguistic experiment, then, involves the raising of two groups
of children who are matched for number of siblings and other
environmental factors. One group of parents and siblings would be
asked to stress object names, the other to stress people names.
Only the earliest linguistic training would be modified (e.g., the
first nine months), so parents needn't feel that their assigned role
might ruin the kid. Any significant difference in the kids' later
school performance or personality profiles could be attributed
to the early linguistic training.

Such experiments aren't my field, so I offer the idea gratis to
anyone who wants it. Perhaps such studies have already been done,
in which case I'd be interested in reading a summary.

-- Ken Laws

------------------------------

Date: Sun, 3 Jan 88 19:28 EST
From: Mark William Hopkins <markh@csd4.milw.wisc.edu>
Subject: Re: Linguistics & artificial language design ("linguistic science")


In article <37343@sun.uucp> landauer@sun.UUCP (Doug Landauer) writes:
>I'm not a linguist; sci.lang and comp.lang.* are just about the only
>reading I do in anything resembling linguistics. I'm ignoring the
>questions Walter raised about the study of language as a behavioral
>science (not because it's not a good question, but because it's not
>what I'm most interested in). I'm also almost ignoring John Chambers'
>questions and comments about computer languages, because I believe that
>computer languages are (currently) too small and too precisely
>specified to be particularly relevant to the study of the languages
>that people use to talk with one another. In fact, I don't even care
>whether linguistics is a science.
>
>Walter Rolandi thought ...
>> that the purpose of linguistics was to determine a causal
>> explanation of linguistic phenomena by means of the scientific method.
>
>My first reaction is that because languages are man-made, there *could

English is not man-made in the sense that Esperanto is. None of its features
was ever designed; everything arose as a product of historical evolution, by
accident, as it were. Languages tie in closely to cultures (though to a
lesser degree for world languages such as English, French, or Spanish). For
example, the classification of words in many languages has ties to the
underlying mythology and world-view of the culture in which the language is
spoken. Even the gender classification of German nouns may have arisen from
a mythological and religious tradition that regarded things as masculine or
feminine.

>be* much more of an engineering aspect to linguistics, or to some
>sub-branch of linguistics. Maybe "linguistics" is a specialized branch
>of the more general study of languages and of language features. There
>might be sort of a spectrum, from descriptive linguistics (what most
>linguists do today), through prescriptive (grammarians), through
>reformers (not too many of these around any more), to language
>engineers (language designers -- e.g., Zamenhof, Jesperson, James Cooke
>Brown, Dennis Ritchie, Niklaus Wirth, John Backus, et al.).
>
>
>The question I have is -- why do linguists show such little interest in
>language design? Why do linguists totally ignore Esperanto, Loglan,
>and all computer languages?

Perhaps I can offer an answer here. To design a language, a viable semantic
theory is necessary. I come from the "school of thought" that says that a
language is an instrument of communication before all else. Therefore, in
my frame of mind, the design of a language has to be based on a semantic
model before all else. The problem, in both Programming Languages and in
Linguistics, is that people have been concentrating too much on Syntax at
the expense of Semantics. It is the semantics that makes a language; the
syntax is just there as "sugar-coating".

Also, outside Computational Linguistics there are not very many people in
Linguistics with expertise in Computer Science or one of the Engineering
disciplines, or vice versa (I am one of the exceptions).

There is also the problem of where to start, as a natural language is a
very big system. Finally, there is the question of how to deal with the
"corrupting" influences of inevitable historical change and cultural and
religious factors. If these are not taken into consideration, you'll
end up designing another Latin.

I offer an answer to the question of where to start: start with the
adverbials that denote space, time, causality, and logic. As soon as one
can reconstruct the underlying semantics of this group, one has the
skeleton of a natural language.

Language design has been one of my pet projects for a very long time. In a
language like Hungarian, the space- and time-denoting words already show a
regularity that makes it possible to organise them into paradigms. This
needs to be done with other languages to come up with either language
Universals or a classification of the different types of semantics that
exist in today's languages.
A good place to start is English, with the following set of words:

TIME & SERIES:
    until, during, since, while,
    before, early, first,
    after, late, last,
    when, now, then,
    always, sometimes, never, ever,
    again, once, twice, thrice, often, rarely

SPACE:
    where, here, there,
    whence, hence, thence,
    whither, hither, thither,
    in, into, out, out of,
    on, onto, off, off of,
    at, to, from, away from,
    near, far, away,
    which way, toward, onward, upward,
    downward, forward, backward,
    everywhere, somewhere, nowhere, anywhere

LOGIC & CAUSALITY:
    unless, if, then,
    why, because, how, so, thus,
    wherefore, therefore,
    anyhow, somehow, no how, not
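
To make the kind of regularity meant here concrete, the where/here/there
series cross-classifies neatly by deixis and by spatial role. A minimal
sketch in Python (the grid labels are invented for illustration, not a
worked-out semantic theory):

    # Deictic space adverbs of English, one word per cell of the grid.
    space_paradigm = {
        ("interrogative", "location"): "where",
        ("interrogative", "goal"):     "whither",
        ("interrogative", "source"):   "whence",
        ("proximal",      "location"): "here",
        ("proximal",      "goal"):     "hither",
        ("proximal",      "source"):   "hence",
        ("distal",        "location"): "there",
        ("distal",        "goal"):     "thither",
        ("distal",        "source"):   "thence",
    }

    # The regularity: every cell of the deixis x role grid is filled.
    deixes = {d for d, _ in space_paradigm}
    roles = {r for _, r in space_paradigm}
    assert all((d, r) in space_paradigm for d in deixes for r in roles)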

I am not a linguist (meaning I have no PhD in Linguistics), but I have a
very broad background in the field, acquired on and off over the last 10 to
15 years. So, to answer your question, there IS somebody currently working
on language design.

Another answer to your question is that if you want to see something done,
you might have to do it yourself (designing a language, that is).


------------------------------

Date: Thu, 31 Dec 87 12:36 EST
From: Robert K. Coe <bobcoe@cca.CCA.COM>
Subject: Re: Language Learning (a Turing test)


In article <3111@bcsaic.UUCP> rwojcik@bcsaic.UUCP (Rick Wojcik) writes:

>So the issue is not just degree of fluency. Why is there such a
>general discrepancy between adult and child language learning? What
>is it that so severely damages our ability to acquire a foreign language?

I don't understand why the subscribers of this newsgroup find the answer to
this question so elusive. The young child readily learns the patterns of his
native language because he lacks preconceived notions of how a language ought
to sound. Once a set of speech patterns becomes ingrained over time, a person
tends to react to what he thinks he heard, based on his previous experience,
rather than on what he really heard. Phonetic distinctions that are not
phonemic in a speaker's native language are often overlooked. Examples in
American English include nasalization of vowels, aspiration of stops, and
devoicing of "l" and "r".

As far as the degree to which an adult can achieve command over a foreign
language is concerned, one should be careful not to draw unwarranted
conclusions from apparent "evidence". Note that when you hear someone whom you
don't know speak, you really have no idea what his native language is. Full
native command over more than one language may be more common than we think.
--
Robert K. Coe | bobcoe@cca.cca.com
Computer Corporation of America
4 Cambridge Center, Cambridge, Mass. 02142 | 617-492-8860, ext. 428


------------------------------

Date: Sat, 2 Jan 88 20:40 EST
From: Dr. Thomas Schlesinger <tos@psc90.UUCP>
Subject: Re: Language Learning (a Turing test)

The referenced posting ventured that there may be more second- (or
multi-?) native-language learning than is commonly thought.
AMEN! Think about Africa, e.g. Nigeria with its 200 languages plus
English; South Africa, where every Black person grows up with their
own Black language plus either Afrikaans or English, sometimes both.
Or India, with approximately 200-500 languages -- 14 official languages,
NOT counting English -- where vast numbers of people grow up bilingual
or multilingual from the start. One could go on and on... hundreds of
thousands in Islamic countries learn Arabic as a second language in
Koranic school at age 3 or 4 in addition to their mother tongue. My
own children were 3 and 4 respectively when I spent three years in
Frankfurt, Germany. It fascinated me to watch how they had no inkling
of their total bilinguality (yes, I know, a horrible "word"). But
when a German spoke to them they automatically answered in German, and
when an American spoke to them they answered in English. If I spoke
to them in the "wrong" language, i.e. German, they'd get angry at me.
But they didn't know why... they didn't really know what "languages"
were and that they were "bilingual." But I believe that literally
hundreds of millions of children in the world grow up that way.

------------------------------

Date: Tue, 22 Dec 87 19:19 EST
From: David N. Chin <chin@renoir.Berkeley.EDU>
Subject: Seminar - Intelligent Agents as NL Interfaces


Doctoral Thesis Seminar
1-3pm, Wednesday, December 23, 1987
206 Evans Hall

Intelligent Agents as a Basis for Natural Language Interfaces
David Ngi Chin

Computer Science Division
University of California, Berkeley
Berkeley, CA 94720

ABSTRACT

Typical natural language interfaces respond passively to the
user's commands and queries. They cannot volunteer information,
correct user misconceptions, or reject unethical requests. In order
to do these things, a system must be an intelligent agent. UC (UNIX
Consultant), a natural language system that helps the user solve
problems in using the UNIX operating system, is such an intelligent agent.

The agent component of UC is UCEgo. UCEgo provides UC with its
own goals and plans. By adopting different goals in different
situations, UCEgo creates and executes different plans, enabling it to
interact appropriately with the user. UCEgo adopts goals from its
themes, adopts sub-goals during planning, and adopts meta-goals for
dealing with goal interactions. It also adopts goals when it notices
that the user either lacks necessary knowledge or has incorrect
beliefs. In these cases, UCEgo plans to volunteer information or
correct the user's misconception as appropriate. These plans are
pre-stored skeletal plans that are indexed under the types of
situations in which they are typically useful. Plan suggestion
situations include the goal which the plan is used to achieve, the
preconditions of the plan, and appropriateness conditions for the
plan. Indexing plans by situations improves efficiency and allows UC
to respond appropriately to the user in real time. Detecting
situations in which a plan should be suggested or a goal adopted is
implemented using if-detected daemons.
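
A rough sketch of the indexing idea in Python may help; all names here
are hypothetical illustrations, not taken from the actual UC
implementation:

    # Skeletal plans indexed by the situations in which they apply.
    plans = []

    def register_plan(plan, detector):
        # detector: situation -> bool; it stands in for the goal,
        # preconditions, and appropriateness conditions of the plan.
        plans.append((detector, plan))

    def on_situation(situation):
        # The if-detected daemon: suggest every plan whose indexing
        # conditions hold in the current situation.
        return [plan for detector, plan in plans if detector(situation)]

    register_plan("volunteer-information",
                  lambda s: s.get("user-lacks-knowledge", False))
    register_plan("correct-misconception",
                  lambda s: s.get("user-belief-wrong", False))

    print(on_situation({"user-lacks-knowledge": True}))
    # ['volunteer-information']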

The user's knowledge and beliefs are modeled by the KNOME
(KNOwledge Model of Expertise) component of UC. KNOME is a
double-stereotype system which categorizes users by expertise and
categorizes UNIX facts by difficulty. KNOME deduces the user's level
of expertise during the dialog with the user.
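
The double-stereotype idea also lends itself to a small sketch; the
levels, facts, and update rule below are invented for illustration and
are not the actual KNOME mechanism:

    # Users binned by expertise, UNIX facts binned by difficulty; the
    # estimate of the user's level is revised as the dialog reveals
    # which facts he does or does not know.
    LEVELS = ["novice", "beginner", "intermediate", "expert"]
    difficulty = {"ls": 0, "pipes": 1, "chmod": 2, "signals": 3}

    def update_estimate(level, fact, user_knows_it):
        d = difficulty[fact]
        if user_knows_it:
            return max(level, d)          # knowing a hard fact raises it
        return min(level, max(d - 1, 0))  # missing an easy fact lowers it

    level = 0
    level = update_estimate(level, "pipes", True)
    print(LEVELS[level])   # beginner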

After UCEgo has selected a plan, it is refined through the
process of answer expression by the UCExpress component. UCExpress
first prunes the answer to avoid telling the user something that the
user already knows, and to mark where to use anaphora or ellipsis in
generation. UCExpress also uses specialized expository formats to
express different types of information in a clear, concise manner.
The result is ready for generation into English.

------------------------------

Date: Fri, 1 Jan 88 16:36 EST
From: Yigal Arens <arens@vaxa.isi.edu>
Subject: Seminar - The lexicon in human speech understanding


Speaker: William Marslen-Wilson, Cambridge
Title: Human speech understanding: The role of the lexicon
Place: ISI, 11th floor conference room
Time: Wednesday, January 6, 1988, 3-5pm

Biographical information:

William Marslen-Wilson received his Ph.D. in experimental
psycholinguistics from MIT in 1973. His principal interest is in the
comprehension of spoken language, and he has worked on several different
aspects of this problem, ranging from the nature of the
acoustic-phonetic interface with the mental lexicon, to the manner in
which listeners construct a mental model of the current discourse. He
taught at the University of Chicago from 1973 to 1977 and worked at
the Max-Planck Institute in Nijmegen from 1977 to 1982. Following a
spell teaching at the University of Cambridge, he returned to Nijmegen
as director from 1984 to 1987. He now works in the Medical Research
Council Applied Psychology Unit, in Cambridge, England.

Abstract

The process of spoken word-recognition breaks down into three basic
functions -- access, selection, and integration. Access concerns the
mapping of the speech input onto the representations of lexical
form, selection concerns the discrimination of the best-fitting
match to this input, and integration covers the mapping of syntactic
and semantic information at the lexical level onto higher levels of
processing. The lecture will present a "cohort"-based approach
to the lexical processing problem, showing how it embodies the
concepts of multiple access and multiple assessment, allowing a
maximally efficient recognition process based on the principle of
the contingency of perceptual choice.
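
The multiple-access idea is easy to picture in code: every word
compatible with the input so far is active, and the cohort shrinks as
each new segment arrives. A minimal sketch in Python, with a toy
lexicon and orthographic segments standing in for phonemes (a rendering
of the general idea only, not Marslen-Wilson's model):

    lexicon = ["trespass", "tread", "treadmill", "treasure", "trestle"]

    def recognize(segments):
        cohort = lexicon
        for i, seg in enumerate(segments):
            # multiple access: keep every word still matching the input
            cohort = [w for w in cohort if len(w) > i and w[i] == seg]
            if len(cohort) == 1:
                return cohort[0]   # recognized before the word's end
        return cohort              # still ambiguous

    print(recognize("tresp"))   # trespass, recognized at its fifth segment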

If anyone wants some background reading, I recommend my paper
"Functional parallelism in spoken word-recognition", Cognition, 25,
71-102, 1987.

------------------------------

Date: Mon, 4 Jan 88 10:02 EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Reminder - Lang. & Cognition Seminar

BBN Science Development Program
Language and Cognition Seminar


THE EMERGENCE OF UTTERANCE MEANING THROUGH SOCIAL INTERACTION

Charles and Marjorie Goodwin
Department of Anthropology
University of South Carolina
Columbia, South Carolina

BBN Laboratories
10 Moulton Street
Large Conference Room, 2nd Floor

10:30 a.m., Thursday, January 7, 1988


Abstract: Using micro-analysis of video-taped materials, we will
show how utterances (and the sentences made visible through them) are
shaped by ongoing processes of interaction between speaker and
recipient(s) occurring while the utterance is being spoken. The
emerging utterance is modified as various contingencies emerge within
the interaction. For example, as the speaker moves his or her gaze
from one possible recipient to another, the emerging sentence is
changed so that it remains appropriate to its recipient of the moment.
As the interaction unfolds, new segments are added to the emerging
utterance, other projected segments are deleted, and the emerging
meaning of the utterance is reconstructed. The utterance thus emerges
not from the actions of the speaker alone, but rather as the result of
a collaborative process of interaction that includes the active
participation of recipient(s) as well.

For information about this Seminar Series contact Livia Polanyi
at 873-3455 [lpolanyi@g.bbn.com]

------------------------------

Date: Mon, 4 Jan 88 10:08 EST
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Thiersch Seminar

BBN Science Development Program
Language & Cognition Seminar


PARSING WITH PRINCIPLES & PARAMETERS:
Prolegomena to a Universal Parser

Craig Thiersch
Kath. Universiteit Brabant
Tilburg, Netherlands

BBN Laboratories Inc.
10 Moulton Street
Large Conference Room, 2nd Floor

10:30 a.m., Wednesday, January 13, 1988


Abstract: We have constructed a pilot parser for natural languages based on
recent advances in linguistic theory, and using a different structural
concept from that of previous natural language parsers. Many
linguistic theories [e.g. Generalized Phrase Structure Grammar (GPSG)]
can be regarded as rule-oriented, in that they propose to characterize
the speaker's linguistic competence through a set of language-specific
rules. In contrast, some generative grammarians [now represented
chiefly by the Government/Binding (G/B) school] now hypothesize that a
speaker's competence consists of a set of (universal) principles
governing the potential phrase-structures available to human language
and conditions restricting their deformations and their semantic
properties rather than collections of language- (and construction-)
specific rules.

The apparent wide variety of constructions occurring in natural
languages is presumed to be the result of the interaction of these
general mechanisms with (a) idiosyncrasies of lexical items and (b)
language-specific parameters (or defaults), which are set early in the
child's language acquisition process. More specifically, we assume

[a] strict modularity of the components of the grammar;
[b] projection from the lexicon;
[c] use of general principles, rather than specific rules;
[d] parametrizability across languages.

We have built a first experimental pilot parser which adheres as
closely as possible to these basic premises as a foundation for future
research. We allow enough flexibility so that it is not bound, for
example, to a particular instantiation of G/B theory, but can be used
as a testing ground for various theoretical hypotheses in this general
framework, across a wide variety of languages.

This is joint work with Hans-Peter Kolb.
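
A toy illustration of the parameter idea, in Python: a single
projection routine covers both word orders, with one head-direction
switch set per language. (This is invented for this summary, grossly
simplified, and not taken from the actual Thiersch/Kolb parser.)

    def project(head, complement, head_initial):
        # X-bar style projection from the lexicon: head and complement,
        # ordered by a language-specific parameter rather than by a
        # construction-specific rule.
        return (head, complement) if head_initial else (complement, head)

    # English VPs are head-initial; Dutch subordinate-clause VPs are
    # head-final -- the same principle with a different setting.
    print(project("read", "the book", head_initial=True))    # ('read', 'the book')
    print(project("lezen", "het boek", head_initial=False))  # ('het boek', 'lezen')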

------------------------------

End of NL-KR Digest
*******************
