NL-KR Digest (10/04/88 20:44:15) Volume 5 Number 17
Today's Topics:
Theft or Honest Toil, (was Re: Pinker & Prince Reply (long version))
BBN AI Seminar -- Mary Harper
CfT: Machine Learning
SUNY Buffalo Linguistics Colloquia
CSLI Calendar, September 22, 4:1
Unisys AI Seminar: The SB-ONE Knowledge Representation Workbench
From CSLI Calendar, September 29, 4:2
Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------
Date: Fri, 2 Sep 88 18:35 EDT
From: Martin Taylor <mmt@client2.DRETOR.UUCP>
Subject: Theft or Honest Toil, (was Re: Pinker & Prince Reply (long version))
Harnad characterizes learning rules from a rule-provider as "theft",
whereas obtaining them by evaluation of the statistics of input data
is "honest toil". But the analogy is perhaps better in a different
domain: learning by evaluating the statistics of the environment
is like building up amino acids and other nutritious things from
inorganic molecules through photosynthesis, whereas obtaining rules
from rule-providers is like eating already built nutritious things.
One of the great advantages of language is that we CAN take advantage
of the regularities discovered in the data by other people. The rules
they tell us may be wrong, but to use them is easier than to discover
our own rules. It is hardly to be taken as an analogy to "theft".
If we look at early child learning, the "theft" question becomes:
Has evolution provided us with a set of rules that we do not have to
obtain from the data, so that we can later obtain more rules from
people who did themselves learn from data? Obviously, in some sense,
the answer is "yes": there are SOME innate rules regarding how we
interpret sensory input, even if those rules are so low-level as
to indicate how to put together a learning net. Obviously, also,
there are MANY rules that we have to get from the data and/or from
people who learned them from the data. The question then becomes
whether the "rules" regarding past-tense formation are of the innate
kind, of the data-induced kind, or of the passed-on kind.
My understanding of the developmental literature is that children
pass through three phases: (i) correct past-tense formation for those
verbs for which the child uses the past tense frequently; (ii) false
regularization, in which non-regular past tenses (went) are replaced
by regularized ones (goed); (iii) more-or-less correct past tense
formation, in which exceptions are properly used, AND novel or
neologized verbs are given regular past tenses (in some sense of
regular). This sequence suggests to me that the pattern does not
have any innate rule component. Initially, all words are separate,
in the sense that "went" is a different word from "go". Later,
relations among words are made (I will not say "noticed"), and
the notion of "go" becomes part of the notion of "went". Furthermore,
the notion of a root meaning with tense modification becomes part
of verbs in general. Again, I will not say that this is connected
with any kind of symbolic rule. It may be the development of net
nodes that are activated for root parts and for modifier parts of
words. It would be overly rash to claim either that rules are involved
or that they are not. In the final stage, the rule-like way of
obtaining past tenses is well established enough that the exceptions
can be clearly distinguished (whether statistically or otherwise is
again disputable).
One thing that seems perfectly clear is that humans are in general
capable of inducing rules in the sense that some people can verbalize
those rules. When such a person "teaches" a rule to a "student",
the student must, initially at least, apply it AS a rule. But even
in this case, it is not clear that skilled use of what has been learned
involves continuing to use the rule AS a rule. It may have served
to induce new node structures in a net.
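The three developmental phases described above can be made concrete with a toy sketch. This is purely illustrative and not a model anyone in this discussion proposed; the function, phase numbering, and lexicon are invented for the example. It shows how a memorized-forms-only stage, an overregularizing stage ("goed"), and a final stage that keeps exceptions while regularizing novel verbs ("wug" -> "wuged") each behave:

```python
# Toy sketch of the three phases of past-tense formation described above.
# All names, phases, and the tiny lexicon are illustrative assumptions.

IRREGULARS = {"go": "went", "eat": "ate", "see": "saw"}

def past_tense(verb, phase):
    """Return a past-tense form under a crude three-phase story."""
    if phase == 1:
        # Phase (i): only memorized, word-specific forms are available;
        # "went" and "go" are unrelated words, and unknown verbs get nothing.
        return IRREGULARS.get(verb)
    if phase == 2:
        # Phase (ii): a newly induced "add -ed" pattern overapplies,
        # producing overregularizations like "goed".
        return verb + "ed"
    # Phase (iii): exceptions are distinguished from the regular pattern,
    # while novel verbs still receive the regular form.
    return IRREGULARS.get(verb, verb + "ed")

for phase in (1, 2, 3):
    print(phase, past_tense("go", phase), past_tense("wug", phase))
```

Whether the third stage is implemented by symbolic rules or by net nodes is, as argued above, exactly what the sketch leaves open: the same input-output behavior is compatible with either story.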
In "The Psychology of Reading" (Academic Press, 1983), my wife and I
discussed such a sequence under the heading of "Three-phased Learning",
which we took to be a fairly general pattern in the learning of skilled
behaviour (such as reading). Phase 1 is the learning of large-scale
unique patterns. Phase 2 is the discovery of consistent sub-patterns
and consistent ways in which the sub-patterns relate to each other
(induction or acquisition of rules). Phase 3 is the incorporation
of these sub-elements and relational patterns into newly structured
global patterns--the acquisition of true skill.
"Theft," in Harnad's terms, can occur only as part of Phase 2. Both
Phase 1 and Phase 3 involve "honest toil." My feeling is that
current connectionist models are mainly appropriate to Phase 1,
and that symbolic approaches are mainly appropriate to Phase 2,
though there is necessarily overlap. If this is so, there should be no
contention between models using one or the other approach. They are both
correct, but under different circumstances.
--
Martin Taylor DCIEM, Box 2000, Downsview, Ontario, Canada M3M 3B9
uunet!mnetor!dciem!client1!mmt or mmt@zorac.arpa (416) 635-2048
------------------------------
Date: Tue, 13 Sep 88 15:58 EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar -- Mary Harper
BBN Science Development Program
AI Seminar Series Lecture
THE REPRESENTATION OF PRONOUNS AND DEFINITE NOUN PHRASES IN
LOGICAL FORM
Mary P. Harper
Brown University
Computer Science Dept.
(MPH%cs.brown.edu@RELAY.CS.NET)
BBN Labs
10 Moulton Street
2nd floor large conference room
10:30 am, Thursday September 15
Initially, I will discuss the representation of pronouns in logical
form. Two factors influence the representation of pronouns. The first
factor is computational. This factor imposes certain requirements on
the logical form representation of a pronoun. For example, the initial
representation of a pronoun in logical form should be derivable
before its antecedent is known. The antecedent, when determined,
should be specified in a way consistent with the initial representation of
the pronoun. The second factor is linguistic. This factor requires
that the representation for a pronoun should be capable of expressing
the range of behaviors of a pronoun in English, especially in the domain
of verb phrase ellipsis.
I will review past models of verb phrase ellipsis. These models do
not provide a representation of pronouns for computational purposes, and
accordingly fail to meet our computational requirements. Additionally, I will
show that these models fail to represent pronouns in a way which captures the
full range of behaviors of pronouns.
I will then propose a new representation for pronouns and show how this
representation meets our computational requirements while providing a better
model of pronouns in verb phrase ellipsis.
The representation of definite noun phrases will also be discussed. As in
the case of pronouns, there are two factors which influence this representation
(i.e. modeling definite behavior and obeying our computational guidelines).
I will discuss several examples which argue for representing definites as
functions in logical form before pronoun resolution is carried out. I will
discuss the actual representation I chose, and illustrate its use with an
example.
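The abstract's first computational requirement -- that a pronoun's logical-form representation be derivable before its antecedent is known, and that resolution later specify the antecedent consistently with it -- can be sketched in a few lines. The abstract gives no code, so every class and method name below is invented for illustration:

```python
# Illustrative sketch of the stated requirement: build the logical form
# with the pronoun as an unresolved placeholder, then resolve it later
# without rebuilding the form. Names here are assumptions, not the
# talk's actual representation.

class Pronoun:
    """A pronoun as an initially unresolved placeholder in logical form."""
    def __init__(self, surface):
        self.surface = surface
        self.antecedent = None            # filled in later by resolution

    def resolve(self, antecedent):
        # Resolution only *adds* information; the logical form built
        # earlier is untouched, keeping the two stages consistent.
        self.antecedent = antecedent

    def denotation(self):
        return self.antecedent if self.antecedent else "?" + self.surface

# "John left. He slept."  Build the form first, resolve afterwards.
he = Pronoun("he")
form = ("sleep", he)          # the logical form mentions the placeholder
he.resolve("John")
print(form[0], he.denotation())
```

Because the placeholder object is shared, any copy of the logical form (as in verb phrase ellipsis) sees the resolution too, which hints at why the representation choice matters for ellipsis.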
------------------------------
Date: Wed, 14 Sep 88 10:51 EDT
From: Alberto M. Segre <segre@gvax.cs.cornell.edu>
Subject: CfT: Machine Learning
Call for Topics:
Sixth International Workshop on Machine Learning
Cornell University
Ithaca, New York; U.S.A.
June 29 - July 1, 1989
The Sixth International Workshop on Machine Learning will be
held at Cornell University, from June 29 through July 1, 1989.
The workshop will be divided into four to six disjoint sessions,
each focusing on a different theme. Each session will be chaired
by a different member of the machine learning community, and will
consist of 30 to 50 participants invited on the basis of
abstracts submitted to the session chair. Plenary sessions will
be held for invited talks.
People interested in chairing one of the sessions should
submit a one-page proposal, stating the topic of the session,
sites at which research is currently done on this topic,
estimated attendance, format of the session, and their own
qualifications as session chair. Proposals should be submitted
by November 1, 1988 to the program chair:
Alberto Segre
Department of Computer Science
Cornell University, Upson Hall
Ithaca, NY 14853-7501 USA
Telephone: (607) 255-9196
Electronic mail should be addressed to "ml89@cs.cornell.edu" or
"segre@gvax.cs.cornell.edu". The organizing committee will
evaluate proposals on the basis of perceived demand and their
potential impact on the field. Topics will be announced by early
1989, at which time a call for papers will be issued. Partial
travel support may be available for some participants.
------------------------------
Date: Thu, 15 Sep 88 09:17 EDT
From: William J. Rapaport <rapaport@cs.Buffalo.EDU>
Subject: SUNY Buffalo Linguistics Colloquia
SUNY BUFFALO LINGUISTICS COLLOQUIA
Fall 1988
Co-sponsored by the Graduate Linguistics Club and the GSA
Wine and cheese to follow, courtesy of the Graduate Linguistics Club
=========================================================================
Thursday, Sept. 15
4:30 pm
684 Baldy
John Whitman
Cornell University
"Korean-Japanese Historical Comparisons"
=========================================================================
Friday, Sept. 16
3:00 pm
684 Baldy
John Whitman
Cornell University
"Lexical Passives and Causatives in Korean and Japanese"
=========================================================================
Friday, Sept. 30
3:00 pm
684 Baldy
Geoffrey Huck
University of Chicago Press
"Discontinuous Constituency: Fact or Artifact?"
=========================================================================
Friday, Oct. 14
3:00 pm
684 Baldy
Lou Ann Gerken
Dept. of Psychology
SUNY Buffalo
"What Children Don't Say: Competence or Performance?"
=========================================================================
Friday, Oct. 28
3:00 pm
684 Baldy
Donald G. Churma
Dept. of Linguistics
SUNY Buffalo
"On `On Geminates'"
=========================================================================
For further information, contact Donna Gerdts, Dept. of Linguistics,
SUNY Buffalo, (716) 636-2177, linger@ubvmsc.cc.buffalo.edu
------------------------------
Date: Thu, 22 Sep 88 13:16 EDT
From: Emma Pease <emma@csli.Stanford.EDU>
Subject: CSLI Calendar, September 22, 4:1
[Note: anyone interested in subscribing should send mail to
emma@csli.stanford.edu. Future forwardings will only include the seminar
abstracts. - BWM]
C S L I C A L E N D A R O F P U B L I C E V E N T S
_____________________________________________________________________________
22 September 1988 Stanford Vol. 4, No. 1
_____________________________________________________________________________
A weekly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
____________
[...]
--------------
ANNOUNCEMENT
CSLI Thursdays 1988-89
in the
Cordura Hall Conference Room
This year the traditional TINLunch time (12:00-1:15) will be used for
three different types of activities---TINLunches, TINLectures, and
TINLabs---in approximately equal numbers and on an approximately
rotating schedule. TINLunches will follow the familiar format;
TINLectures will feature talks by outside speakers; and TINLabs will
provide an introduction to research at CSLI. The TINLineup begins on
29 September with a TINLecture by Angelica Kratzer from the University
of Massachusetts; her talk will be on "Stage-level and Individual-level
Predicates." On 6 October Martin Kay will lead a TINLab on "What is
Unification Grammar?" Watch the CSLI Calendar for coming attractions.
CSLI SEMINAR (2:15-3:45) will focus each quarter on a particular
issue or problem. CSLI researchers and others will discuss the
bearing their work has on the issue.
The fall quarter seminar is being organized by Ivan Sag, Herb
Clark, and Jerry Hobbs; the issue is: The Resolution Problem for
Natural-Language Processing. How can communication proceed in light
of the fact that interpretation is radically underdetermined by
linguistic meaning?
Languages exhibit massive ambiguity:
I forgot how good beer tastes. [Structural ambiguity]
The pen is empty. [Lexical ambiguity]
Dukakis agrees to only three debates. [Scope ambiguity]
I saw her duck. [Lexical and structural ambiguity]
Kim likes Sandy more than Lou. [Ellipsis ambiguity]
And massive parametricity (expressions whose interpretation relies in
part on contextually fixed parameters):
He is crazy. (Who's he?)
John is in charge. (John who? In charge of what?)
She ran home afterwards. (After what?)
The nail is in the bowl. (Which nail? Nailed into the bowl, or just
inside of it?)
She cut the lawn/hair/cocaine/record. (What kind of cutting?)
John's book. (The book he owns?/wrote?/edited?)
The Resolution Problem for natural language is the question of how
diverse kinds of knowledge (e.g., knowledge of local domains, context
of utterance, the plans and goals of interlocutors, and knowledge of
the world at large) interact with linguistic knowledge to make
communication possible, even efficient. In this seminar, which will
include presentations by the instructors, by Michael Tanenhaus
(University of Rochester), and by David Rumelhart, we attempt to
clarify the nature of the Resolution Problem and to consider a
diversity of approaches toward a solution.
This course is listed as Linguistics 232, Psychology 229, and
Computer Science 379 and may be taken for 1-3 units by registered
Stanford students.
TEA will be served in the Ventura Lounge following these events at
3:45.
--------------
STASS SEMINAR
Counterfactual Reasoning
Angelica Kratzer, UMass Linguistics Department
The STASS seminar meets every other Thursday, 4:00-5:30, in the
Cordura Conference Room. Everybody is welcome.
------------------------------
Date: Tue, 27 Sep 88 11:21 EDT
From: finin@PRC.Unisys.COM
Subject: Unisys AI Seminar: The SB-ONE Knowledge Representation Workbench
AI SEMINAR
UNISYS PAOLI RESEARCH CENTER
The SB-ONE Knowledge Representation Workbench
Alfred Kobsa
International Computer Science Institute, Berkeley
(on leave from the University of Saarbruecken, West Germany)
The SB-ONE system is an integrated knowledge representation workbench for
conceptual knowledge which was specifically designed to meet the requirements
of the field of natural-language processing. The representational formalism
underlying the system is comparable to KL-ONE, although different in many
respects. A Tarskian semantics is given for the non-default part of it.
The user interface allows for a fully graphical definition of SB-ONE knowledge
bases. A consistency maintenance system checks for the syntactical
well-formedness of knowledge definitions. It rejects inconsistent entries, but
tolerates and records incomplete definitions. A partition mechanism allows for
the parallel processing of several knowledge bases, and for the inheritance of
(incomplete) knowledge structures between partitions.
The SB-ONE system is being employed in XTRA, a natural-language access system
to expert systems. The use of SB-ONE for meaning representation, user
modeling, and access to the expert system's frame knowledge base will be
briefly described.
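The announcement's two distinctive behaviors -- rejecting inconsistent entries while tolerating and recording incomplete definitions, and inheriting knowledge down a concept taxonomy -- can be sketched in a KL-ONE-flavored toy. This is not SB-ONE's actual interface (the abstract shows none); all names are invented:

```python
# A minimal KL-ONE-style taxonomy sketch, assuming invented names: an
# inconsistent definition is rejected, an incomplete one (parent not yet
# defined) is recorded, and role fillers are inherited from parents.

class Taxonomy:
    def __init__(self):
        self.parents = {}   # concept -> parent (may name an undefined concept)
        self.roles = {}     # concept -> {role: filler}

    def define(self, concept, parent=None, **roles):
        if concept == parent:
            # Reject an inconsistent entry outright.
            raise ValueError("inconsistent: concept cannot subsume itself")
        self.parents[concept] = parent
        self.roles[concept] = roles

    def incomplete(self):
        """Concepts whose parent is mentioned but not yet defined."""
        return [c for c, p in self.parents.items()
                if p is not None and p not in self.parents]

    def all_roles(self, concept):
        """Roles of a concept, inherited up the parent chain."""
        merged = {}
        while concept in self.parents:
            for role, filler in self.roles[concept].items():
                merged.setdefault(role, filler)   # local fillers win
            concept = self.parents[concept]
        return merged

kb = Taxonomy()
kb.define("Elephant", parent="Mammal", color="grey")  # Mammal undefined so far
print(kb.incomplete())                                # records the gap
kb.define("Mammal", parent=None, legs=4)
print(kb.all_roles("Elephant"))                       # inherits legs=4
```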
10:00am Friday, October 14
BIC Conference Room
Unisys Paoli Research Center
Route 252 and Central Ave.
Paoli PA 19311
-- non-Unisys visitors who are interested in attending should --
-- send email to finin@prc.unisys.com or call 215-648-7446 --
* COMING ATTRACTION: On October 19, Marilyn Arnott (PhD from Texas in *
* Chemistry) will speak on the topic of an expert system for predictive *
* toxicology. The seminar will be held at 2:00 PM in the BIC Conference *
* Room. An exact title and an abstract will be distributed when they *
* become available. *
------------------------------
Date: Wed, 28 Sep 88 19:58 EDT
From: Emma Pease <emma@csli.Stanford.EDU>
Subject: From CSLI Calendar, September 29, 4:2
NEXT WEEK'S CSLI TINLAB
What is Unification?
Martin Kay
(kay.pa@xerox.com)
October 6
Unification is an operation on a pair of objects---usually expressions
in a formal language---that yields a new object of the same kind. It
comes up in logic, programming, and in several theories of
linguistics. In particular, it comes up in the kinds of linguistic
theories that are most often incorporated in computer programs. This
is not because it makes for obviously "procedural" theories---quite
the contrary. Why, then, does it appeal so strongly to
computationalists?
I will try to answer this question after first attempting to convey
the basic intuition behind unification.
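The basic intuition can be previewed with a small sketch: unifying two feature structures (here, nested Python dicts) either merges them into an object of the same kind or fails on a clash. This is an illustration of the general idea only, not code from the talk:

```python
# Sketch of unification over feature structures represented as nested
# dicts: compatible structures merge, conflicting atoms make it fail.

def unify(a, b):
    """Unify two feature structures; return None on a clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, bval in b.items():
            if key in out:
                sub = unify(out[key], bval)
                if sub is None:
                    return None          # conflicting sub-structures
                out[key] = sub
            else:
                out[key] = bval          # information is only ever added
        return out
    return a if a == b else None         # atoms must match exactly

# Agreement in grammar: a verb's demands unify with its subject's features.
subj = {"agr": {"num": "sg", "per": 3}}
verb = {"agr": {"num": "sg"}}
print(unify(subj, verb))                 # merged structure of the same kind
print(unify(subj, {"agr": {"num": "pl"}}))   # number clash: no unifier
```

The appeal to computationalists hinted at above is visible even here: unification is order-independent and monotone (it only adds information), so it constrains rather than prescribes a processing order.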
______________
NOTE
Cordura Hall is our new building where we now hold our Thursday
events. It is on the corner of Panama Street and Campus Drive, next
to Ventura Hall, on the west side of the Campus.
------------------------------
End of NL-KR Digest
*******************