NL-KR Digest             (8/10/88 21:55:06)            Volume 5 Number 9 

Today's Topics:
Category Theory in AI

p. from Tom Bever on intuitions ...
Re: Chomsky reference [and: Generative model, Competence/performance]

Vacancy for two research positions at the University of Amsterdam

Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

Date: Mon, 8 Aug 88 17:25 EDT
From: Paul Benjamin <dpb@philabs.philips.com>
Subject: Category Theory in AI

Some of us here at Philips Laboratories are using universal
algebra, and more particularly category theory, to formalize
concepts in the areas of representation, inference and
learning. We are interested in finding others who are taking
a similar approach. If you are already working in this area,
please respond by email or USmail (please do not post), so
that we can form an informal network, and interchange
information.
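
As a purely illustrative sketch (not the Philips group's actual
formalism), one way to render the categorical vocabulary in code is
to treat levels of representation as objects and inference steps as
morphisms, chained by composition; the objects, arrows and data
below are invented for the example only.

# Invented example: objects are representation "levels", morphisms
# are inference steps between them, and composition chains steps
# subject to the usual typing constraint.

class Morphism:
    """An arrow between two objects, carrying a function on contents."""
    def __init__(self, name, source, target, fn):
        self.name, self.source, self.target, self.fn = name, source, target, fn

    def __call__(self, x):
        return self.fn(x)

def identity(obj):
    """The identity morphism every object of a category must have."""
    return Morphism("id_" + str(obj), obj, obj, lambda x: x)

def compose(g, f):
    """Composition g . f, defined only when the objects line up."""
    if f.target != g.source:
        raise ValueError("cannot compose " + g.name + " after " + f.name)
    return Morphism(g.name + "." + f.name, f.source, g.target,
                    lambda x: g(f(x)))

# Illustrative objects: three levels of representation.
raw, abstr, concl = "RawObservation", "AbstractDescription", "Conclusion"

# Illustrative morphisms: abstraction and inference as arrows.
abstract = Morphism("abstract", raw, abstr,
                    lambda d: {"shape": "blob" if d["pixels"] else None})
infer = Morphism("infer", abstr, concl,
                 lambda d: "object present" if d["shape"] else "empty scene")

pipeline = compose(infer, abstract)     # an arrow from raw to concl
print(pipeline({"pixels": [1, 0, 1]}))  # -> object present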

Paul Benjamin
Philips Laboratories
345 Scarborough Rd.
Briarcliff, NY 10510

{uunet,decvax}!philabs!dpb

------------------------------

Date: Tue, 9 Aug 88 14:50 EDT
From: Greg Lee <lee@uhccux.uhcc.hawaii.edu>
Subject: p. from Tom Bever on intuitions ...

The following is from Tom Bever, who says his mail-poster is
gefritzed:

\From BEVR@UORDBV.BITNET Tue Aug 9 07:47:12 1988
\Date: Tue, 9 Aug 88 13:29 EDT

Hello out there.

I just entered the modern world of newsgroupies. I am amazed at what I am
reading on sci.lang. Here's why.

There has been much discussion of the generative grammarian's nativist
paradigm, focussing largely around the role of linguistic intuitions.
The argument seems to be that there may well be rules that account for
such intuitions, but a theory of intuitions may not be profoundly related
to linguistic behavior or knowledge of usual kinds. That is, linguists
might be in the position of constructing a model of something which,
while human in origin, is artificially induced in a select few - rather
like a theory of transcendental meditation in yogis or tea-tasting in
professional teatasters.

It is simply mistaken to believe that linguistic intuitions are the only
kind of evidence which bear on linguistic theory. There are thousands of
experimental articles which have appeared since 1957 devoted to the
empirical investigation of linguistic models, using data other than
intuitions. For example, studies of aphasia examine the extent to which
grammatical speaking ability breaks down along lines predicted by
linguistic theory. Analysis of speech production errors in normals
depends on particular linguistic models. Language acquisition research
since the flood has been done within the context of a grammatical theory
- the data are primarily things that children choose to say, but also
experimentally probed perceptual capacities. Studies of second language
learning can and do probe for the role of maturation in governing
learning of particular grammatical structures. Studies of sentence
comprehension in adults use an ingenious array of techniques and data,
phoneme-monitoring, lexical decision, eye-dilation, evoked potentials,
reading time, lexical recognition and so on. Every such study
presupposes some kind of linguistic analysis of the stimulus which is
being tested.

While there are thousands of such studies, the vast majority of them have
not addressed generative theories in particular - most of the studies are
content with a generic brand of linguistics, primarily some variant of
phrase structure grammar. But many studies - probably hundreds - are
directly concerned with the 'psychological reality' of generative
theories, either directly, or indirectly via its explanatory role in
language behavior. Currently, a handful (ca. 25) of studies have appeared
which address, and purport to confirm, the relevance of distinctions made
uniquely within government and binding theory.

So, the linguist's preoccupation with linguistic intuitions is out of
convenience, not necessity. And, as a number of you have noted,
intuitions are a sometime thing - the basic problem with them is that
nobody knows how they work. This means that one can never be sure that a
given intuition directly reflects grammatical competence or knowledge of
some other kind. Experimental and observational studies of other kinds
are vital to extend the empirical basis of the enterprise, and to
discriminate between theories. And, there are plenty of such studies.

Of course, the number of studies is only a rumor about the importance of
a theory. The real question is whether the studies are correct, and
which theory they support. As in experimental physics or any other area
I've heard about, an individual experiment can almost always be
controversial - what counts is the edifice constructed to support a
particular theory. Time will tell. Maybe. Whatever the merits of any
particular linguistic model, there is very wide support across the board
that language behavior in many different forms utilizes or presupposes
rule-governed processes. And that is what the nativist shouting is
about, since many of the details of those processes are never presented
to the child.

Connectionism will not help here. I have yet to see a connectionist
model of anything viewable as rule-governed that does not presuppose the
relevant structures via some representational scheme, programmed device,
or manipulation of input data. Each model has to be dissected, to find
its Helpful Attributes for Connectionist Simulation. Often they are hard
to piece together, but the HACS are always there. And for a principled
reason: connectionism is yet another scheme for solving a multiple-
constraint problem via statistical analysis of masses of cases - i.e.,
for confirming particular hypotheses via induction. But if there is
anything that induction is notoriously bad at, it's in creating the
hypotheses to be confirmed. So, each model has to build in the relevant
hypotheses so that the data can confirm them.
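
A purely illustrative sketch of that point (the past-tense task and
the feature coding below are my own invention, not a model from the
literature): a toy perceptron "learns" to pick the /-t/ versus /-d/
past-tense allomorph only because the analyst has already coded each
verb by the voicing of its final segment, which is the one
phonological hypothesis the rule depends on. The network confirms
the hypothesis supplied in its input representation; it does not
create it.

import random

# Hand-built feature coding: 1.0 if the verb's final segment is
# voiced. This single feature *is* the phonological analysis the
# learner is supposed to "discover".
DATA = [
    ("walk", 0.0, -1),   # final /k/ voiceless  -> takes /-t/
    ("kiss", 0.0, -1),   # final /s/ voiceless  -> takes /-t/
    ("hug",  1.0, +1),   # final /g/ voiced     -> takes /-d/
    ("love", 1.0, +1),   # final /v/ voiced     -> takes /-d/
]

def train_perceptron(data, epochs=20, lr=0.5):
    """Classic perceptron updates on a one-feature input."""
    w, b = random.uniform(-0.1, 0.1), 0.0
    for _ in range(epochs):
        for _, x, y in data:
            pred = 1 if w * x + b > 0 else -1
            if pred != y:            # misclassified: nudge the weights
                w += lr * y * x
                b += lr * y
    return w, b

w, b = train_perceptron(DATA)
for verb, x, _ in DATA:
    choice = "/-d/" if w * x + b > 0 else "/-t/"
    print(verb, "->", choice)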

Which leaves us all, connectionists included, with the same old problem
to solve:

What kind of object IS language, anyway.

TgB

------------------------------

Date: Tue, 9 Aug 88 15:12 EDT
From: Greg Lee <lee@uhccux.uhcc.hawaii.edu>
Subject: Re: p. from Tom Bever on intuitions ...


From article <2220@uhccux.uhcc.hawaii.edu>, by BEVR@UORDBV.BITNET:
" ...
" to linguistic behavior or knowledge of usual kinds. That is, linguists
" might be in the position of constructing a model of something which,
" while human in origin, is artificially induced in a select few - rather
" like a theory of transcendental meditation in yogis or tea-tasting in
" professional teatasters.
" ...

If one is interested in the human smell mechanism, a study of tea tasters
seems to me like a fine place to start. As a matter of fact, I believe
one would find that professional tasters and smellers have been singled
out for special study in the literature on this topic. Disparagement
of introspective reports about language as not being "linguistic
behavior" or as not being worthy of scientific study has always
mystified me.

Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: Tue, 9 Aug 88 05:11 EDT
From: Celso Alvarez <sp299-ad@violet.berkeley.edu>
Subject: Re: Chomsky reference [and: Generative model, Competence/performance]


In article <3742@pbhyf.PacBell.COM> rob@pbhyf.PacBell.COM (Rob Bernardo) writes:

>What I'm more underlyingly getting at is this: Just because we humans
>have the capability to learn something (language, driving),
>doesn't mean (...) that any set of rules were learned (cf. grammar).
>The behavior may appear rule-governed (i.e. it is patterned in such a
              ^^^^^^^^^^
No. Behavior *is* rule-governed. The question is, what (types and sets
of) rules govern it. And how we can get a grasp of them.

>way as to be describable by rules we scientists come up with), but that
>doesn't mean that whatever it is I have in my brain that enables me to
>engage in the behavior includes something in the form of rules (...)

I think you are somehow misunderstanding the generative model.
Let's apply your criticism to a concept in the social sciences,
such as that of a 'dense, multiplex social network' (e.g. me, my
friends, and some of my friends' friends, all of us working together
in the same shoe factory). Have you ever seen one of these networks?
Do I *actually* interact with other members in my network along
straight black lines which connect a central circle (me) with the
rest? A network is a system of relationships. A grammar is too. The
formalization of a social network is usually a geometrical figure.
The formalization of a grammar is usually a set of hierarchized rules.
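
To make the analogy concrete, a minimal sketch (the people and the
grammar fragment below are invented for illustration): the network
is formalized as a graph of relationships, the grammar as a set of
rewrite rules, and neither data structure claims to be the thing it
models.

# A dense, multiplex social network, formalized as a graph:
# nodes are people, edges are the relationships linking them.
network = {
    "me":         {"friend_A", "friend_B", "coworker_C"},
    "friend_A":   {"me", "friend_B"},
    "friend_B":   {"me", "friend_A", "coworker_C"},
    "coworker_C": {"me", "friend_B"},
}

# A grammar fragment, formalized as hierarchically ordered rewrite
# rules: each category is related to the strings it expands to.
grammar = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["linguist"], ["grammar"]],
    "V":   [["studies"]],
}

# Nobody interacts along drawn lines, and nobody carries rewrite
# arrows in the head; both objects are formal models of systems of
# relationships.
print(len(network), "people;", len(grammar), "rule sets")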

Your clarifications about the relationships between mind and
brain do not contradict the fact that, in the construction of an
explanatory model, we can subsume the speaker's supposed mental
processes (or, rather, the formal representation of language
production processes) in a system of rules we call 'grammar'. To say
that a speaker has a grammar amounts to saying that a speaker can
learn a language and speak it. The operativeness of this grammar is
(old stuff) nothing other than its generative capability.

This is not exactly to say, though, that "rules are a convenient way
for us to talk about the regularities we observe in the behavior"
[R.B.] (for one thing, the value of rules goes far beyond that
of being 'convenient': they supposedly help us understand the
stuff we're messing around with). Unless you understand that
'behavior' also applies to cognitive behavior, in which case you
are right. We are forgetting, it seems, two things: (1) underlying
language production processes are also a type of (cognitive) behavior;
(2) both this cognitive behavior and 'linguistic' behavior
(=performance, =speech) are rule-governed. (Digression: look at
Hymes' initial model (around 1972) for the ethnographic study
of speech; curiously enough, it included a component that resembled
the formalistic approach of generative theory. Was Hymes
simply caught in the generative fever, or did he genuinely
believe that social behavior *can be formalized* (although it's
not yet recommendable!) by means of a set of rules?).

>... even supposing we simply *define* grammar as that knowledge which
>people bring to bear in making wellformedness judgements, how do you
>conclude that this grammar is involved in un-selfconscious language behavior?

You may be going too far here. Is your question of a theoretical,
epistemological, or metaphysical nature? How do you *know* or
"
conclude" that people think? Do angels have wings? Your objection
is legitimate, but it applies as well to any discipline oriented
toward the construction of an abstract model.

There is a certain degree of circularity in any theory (e.g. 'I've
built a grammar of language X; it generates well-formed sentences
in LX, which speakers of LX do or may produce; therefore, Grammar X
works and exists; therefore grammars exist'). The same applies
to the law of universal gravitation.

The problem is not the generativist attempt to construct a model
of competence. The problem is the generativists' insistence on
denying that they are operating with *data of behavior* (that
is, with *performance*). For one thing, when asked to produce
grammaticality judgements on a sentence like "Where is my hat?",
informants are not requested to judge whether the WH- movement is
grammatical (competence?), but, rather, whether /hwe@rIzmayhaet/,
as a possible or actual utterance, is correct (performance).

>[The structure of utterances] is not very much like that of
>hypothetical sentences that linguists use. Have you ever transcribed
>actual speech of someone talking for a few minutes? It's gruesome work
>because it is so terribly different from written language and from
>the generativist's hypothetical sentences.

Have you ever noticed how terribly different your hypothetical
phonetic transcriptions are from spectrograms of the same utterances,
and how remotely spectrograms resemble the sound waves we hear?

No generativist denies that some utterances are something very
different from sentences. No sociolinguist denies that neuronal
networks constitute the biological basis of thought and language
processes. But not many of these students of language are willing
to recognize the need to work toward an *integrated* (physiological,
psychological, and social) model of linguistic knowledge and production.
(I aim high, that's why I never get anywhere :-) ).

Celso Alvarez (sp299-ad@violet.berkeley.edu.UUCP)

------------------------------

Date: Tue, 9 Aug 88 16:11 EDT
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: Chomsky reference


In article <2723@mind.UUCP> apoorva@mind.UUCP (Apoorva Muralidhara) writes:
>There is obviously empirical evidence that people have grammars, as
>evidenced by their wellformedness judgements, which are independent of
>semantics and so on. This point is clear.

Not really. Let's try to keep the modularity issue separate from the
innateness issue. There is no evidence that the well-formedness judgments
which people actually make are independent of semantics. Did you
ever stop to think about just how people use grammars to make
well-formedness judgments? I don't see how it can be done without
bringing in semantic and pragmatic information. Anyway, the term
'grammar' is normally intended to include all components--semantics,
syntax, phonology, the kitchen sink--when one talks about innateness.
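
A minimal sketch of what a purely syntactic well-formedness judgment
would amount to (the grammar fragment and lexicon below are invented
for illustration): a recognizer over a toy phrase-structure grammar
accepts any string the rules generate, semantically odd or not,
which is exactly where such a judgment and a speaker's actual
judgment can come apart.

# Toy phrase-structure grammar; terminals are plain words.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "N"], ["N"]],
    "VP":  [["V", "Adv"], ["V"]],
    "Adj": [["colorless"], ["green"], ["hungry"]],
    "N":   [["ideas"], ["dogs"]],
    "V":   [["sleep"], ["bark"]],
    "Adv": [["furiously"], ["loudly"]],
}

def derives(symbol, words):
    """True if `symbol` rewrites to exactly the word sequence `words`."""
    if symbol not in GRAMMAR:                 # terminal symbol
        return list(words) == [symbol]
    return any(matches(rhs, words) for rhs in GRAMMAR[symbol])

def matches(rhs, words):
    """True if the symbol sequence `rhs` derives `words`."""
    if not rhs:
        return not words
    head, rest = rhs[0], rhs[1:]
    # try every split point for the first symbol
    return any(derives(head, words[:i]) and matches(rest, words[i:])
               for i in range(len(words) + 1))

for sentence in ["green ideas sleep furiously", "ideas green sleep"]:
    verdict = "well-formed" if derives("S", sentence.split()) else "ill-formed"
    print(sentence, "->", verdict, "(syntax only)")
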
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: uw-beaver!ssc-vax!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

Date: Tue, 9 Aug 88 20:08 EDT
From: Apoorva Muralidhara <apoorva@mind.UUCP>
Subject: Re: Chomsky reference

Rob Bernardo writes:

But how do you know if there is a separate thing that is the innate
language-learning facility? How do you know that it produces a
grammar? How do you know that a grammar is a set of grammaticality
judgments? How do you know that sentential structures are units of
language, rather than simply units linguists (and others biased by
*written* language) believe exist?

I write:

As I've explained before and will explain again, we know that there is
an innate language-learning faculty because of the *poverty of the
stimulus*--children simply do *not* receive enough data to account for
their projection onto a grammar consistent with that of the language
community. Hence there is some innate constraint on generative
capacity involved, which interacts with the data in a structured way
in order to produce a grammar. We call this the "innate
language-learning faculty." See?

Rob Bernardo writes:

Hm. You missed a point. I asked how you know there is a **separate**
thing that is the innate **language** learning facility. What I'm
getting at is that perhaps there is a facility (in the mass noun
sense), but you have been using "facility" in the count noun sense
to mean a *thing/object* that has that ability. I don't think you've
begun to show that there is such a facility (in the count noun sense),
i.e. one particular to language. After all, your reasoning could be
applied to any learned human behavior. I.e. "People learn how to drive.
.... There is an innate driving-learning facility." :-)

What I'm more underlyingly getting at is this: Just because we humans
have the capability to learn something (language, driving), doesn't mean
this capability is "housed" in some "facility". Nor does it mean that
any set of rules were learned (cf. grammar). The behavior may appear
rule-governed (i.e. it is patterned in such a way as to be describable
by rules we scientists come up with), but that doesn't mean that whatever
it is I have in my brain that enables me to engage in the behavior
includes something in the form of rules. Rules are a convenient way
for us to talk about the regularities we observe in the behavior.
Let's not confuse the map with the territory. (I think this overlaps
with what Clay was getting at with the mind and brain stuff. The mind
is a convenient way of talking about what we observe through introspection
and provides a convenient way for talking about what we suppose
goes on in the brain.)

I write:

No, you have missed a point. You see, it is you who are confusing the
map with the territory. You are absolutely right that the mind is a
convenient way of talking . . . etc. Similarly, when I said above "we
call this the language-learning faculty. See?" this is what I
meant--we *define* "language-learning faculty" to mean this innate
knowledge. There is no question of "separateness"--I have simply used
a separate word to describe it. Celso Alvarez's remarks are quite to
the point here. Let me reiterate--when we say "so and so is the
correct theory of UG," we mean that it is isomorphic to what is in the
brain *in the sense of ***generative capacity****. We are not
claiming that the brain actually contains photocopies of papers from
LINGUISTIC INQUIRY. Nor are we claiming that there are actually
little grayish organs in the mind labeled "Base Component,"
"Transformational Component," "PF," and "LF." (I am using GB as an
example, since I am most familiar with GB.) We are simply saying that
the brain has an innately restricted generative capacity for languages,
and this is a rule/principle-based description of that innate
knowledge. Your analogy to driving is flawed because my argument is
based on poverty of the stimulus. There is no demonstrable poverty of
stimulus for motor-vehicle driving specifically, so we do not speak of
a Motor-Vehicle Driving Component. However, I *do* claim (and I hope
most people will agree!) that there *is* inherently, wired into the
brain, a "
motor" faculty, for dealing with balance, motion,
coordination, muscle control, and so on. My impression is that most
of this, at least, is directly neurophysiologically confirmed, what
with those cerebellums and medulla oblongatas and such things (I don't
really know much about neurophysiology!)
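
A minimal sketch of that notion of equivalence (the two toy grammars
below are invented for illustration): grammars stated with different
rules and category labels can generate exactly the same strings, so
they are equivalent in generative capacity even though neither
notation is literally "in the head".

def generate(grammar, symbol, depth=6):
    """All terminal strings derivable from `symbol` within `depth` rewrites."""
    if symbol not in grammar:                 # terminal symbol
        return {(symbol,)}
    if depth == 0:
        return set()
    results = set()
    for rhs in grammar[symbol]:
        partials = {()}
        for sym in rhs:
            partials = {left + right
                        for left in partials
                        for right in generate(grammar, sym, depth - 1)}
        results |= partials
    return results

# Grammar A: sentences built from a subject/predicate split.
A = {"S":  [["NP", "VP"]],
     "NP": [["the", "cat"], ["the", "dog"]],
     "VP": [["sleeps"], ["barks"]]}

# Grammar B: same language, stated with different rules and labels.
B = {"S":      [["the", "Animal", "Action"]],
     "Animal": [["cat"], ["dog"]],
     "Action": [["sleeps"], ["barks"]]}

print(generate(A, "S") == generate(B, "S"))   # True: same generative capacity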

Apoorva brilliantly writes :-) :

There is obviously empirical evidence that people have grammars, as
evidenced by their wellformedness judgements, which are independent of
semantics and so on. This point is clear.

Rob Bernardo responds:

I don't see how this follows. Just because people can make
wellformedness judgements (and not even consistently from time to time
for *one* person), you suppose people have grammars. Well, even
supposing we simply *define* grammar as that knowledge which people
bring to bear in making wellformedness judgements, how do you conclude
that this grammar is involved in un-selfconscious language behavior?

I write:

I have said this before, and I will say it again (actually, I've said
*that* before, and I'll probably say *that* again, too, won't I? :-):
*competence*! not *performance*! Competence! Not performance!
Competence! not performance! We are *not* talking about *language
behavior*. That is *performance*.

Rob Bernardo writes:

Well, it seems clear to me that utterances have an intended (and
perceived) structure, but the structure is not very much like that of
hypothetical sentences that linguists use. Have you ever transcribed
actual speech of someone talking for a few minutes? It's gruesome work
because it is so terribly different from written language and from the
generativist's hypothetical sentences.

I write:

I think we did, in this phonology course I took. I disagree
entirely--generative grammarians certainly do not base their theories
on written language! Indeed! Generative grammar doesn't even
directly apply to written language. It is only directly related to
*spoken* language, and linguists use *spoken* language data. Often
from languages which are rarely, or not at all, written. It is
statements like this which make me think you have heard of generative
grammar as some sort of philosophical movement and are talking about
it abstractly--you know, a sort of "1: This guy Noam Chomsky claims
that people have an "innate language faculty" which ...., etc. ; 2:
Really? Sounds false to me. Probably based on written
language"--rather than actually seeing what generative grammarians do.
Maybe I'm wrong about this, but it seems to me that otherwise, while
your other questions may be valid, you wouldn't claim that generative
grammar is based on written language (!!!).

--Apoorva

------------------------------

Date: Tue, 9 Aug 88 20:13 EDT
From: Apoorva Muralidhara <apoorva@mind.UUCP>
Subject: Re: Chomsky reference [and: Generative model, Competence/performance]

Celso Alvarez writes:

The problem is not the generativist attempt to construct a model of
competence. The problem is the generativists' insistence on denying
that they are operating with *data of behavior* (that is, with
*performance*). For one thing, when asked to produce grammaticality
judgements on a sentence like "Where is my hat?", informants are not
requested to judge whether the WH- movement is grammatical
(competence?), but, rather, whether /hwe@rIzmayhaet/, as a possible or
actual utterance, is correct (performance).
                             ^
                             |
                             |_________ no, *competence*


(By the way, other than this, I liked your article. This is not
intended as a flame.)

--Apoorva

------------------------------

Date: Wed, 10 Aug 88 08:35 EDT
From: Greg Lee <lee@uhccux.uhcc.hawaii.edu>
Subject: Re: Chomsky reference


From article <2750@mind.UUCP>, by apoorva@mind.UUCP (Apoorva Muralidhara):
"
...
" meant--we *define* "language-learning faculty" to mean this innate
"
knowledge. There is no question of "separateness"--I have simply used
" a separate word to describe it. Celso Alvarez's remarks are quite to
"
...

I'm very happy to witness the withering of Chomsky's "language organ"
(if only on the net). Apoorva is closer to redemption than one
might guess -- keep it up, Rob and Celso!
Greg, lee@uhccux.uhcc.hawaii.edu

------------------------------

Date: Mon, 8 Aug 88 11:29 EDT
From: Wouter Jansweijer <jansweij@swivax.UUCP>
Subject: Vacancy for two research positions at the University of Amsterdam

Two research positions.

Position title: "toegevoegd (assistent) onderzoeker"

Background:
The University of Amsterdam, Department of Social Science
Informatics, is involved in a research project that is
funded by the Dutch Government. For this project the
University is a partner in "SPIN-KBS" (StimuleringsProjektteam
INformatica-onderzoek - Knowledge Based Systems). The other
partners in the project are the University of Twente, the
University of Rotterdam, and a consortium of industrial
partners.

The Department of Social Science Informatics takes charge of
one project (and in future possibly more). The aim of this
particular project is to do fault-diagnosis in complex
(technical) systems on the basis of a deep model. So, one
issue in this project is the representation of these
technical systems in structure and function. A second issue
is the derivation of behaviour from this representation. The
third issue is research on diagnostic strategies for
localizing faults in technical systems.
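
As a purely illustrative sketch of those three issues (this is not
the project's actual design, and the component model below is
invented): structure and function can be represented as components
with transfer functions, behaviour can be derived by propagating
values through that structure, and one simple diagnostic strategy
suspects the components feeding a discrepant measurement while
exonerating those that feed only measurements that came out right.

# Structure and function: each component names its inputs and a
# transfer function. The structure is assumed acyclic.
MODEL = {
    "pump":   {"inputs": ["power"],         "fn": lambda p: p * 0.8},
    "pipe":   {"inputs": ["pump"],          "fn": lambda f: f},
    "heater": {"inputs": ["pipe", "power"], "fn": lambda f, p: f + 0.1 * p},
}

def predict(model, sources):
    """Derive expected behaviour by propagating source values."""
    values = dict(sources)
    while len(values) < len(sources) + len(model):
        for name, comp in model.items():
            if name not in values and all(i in values for i in comp["inputs"]):
                values[name] = comp["fn"](*(values[i] for i in comp["inputs"]))
    return values

def influencers(model, name):
    """All components whose behaviour can affect the value at `name`."""
    result, stack = set(), [name]
    while stack:
        c = stack.pop()
        if c in model and c not in result:
            result.add(c)
            stack.extend(model[c]["inputs"])
    return result

def diagnose(model, sources, observations, tol=1e-6):
    """Suspect what feeds a wrong reading; exonerate what feeds right ones."""
    expected = predict(model, sources)
    suspects, exonerated = set(), set()
    for point, observed in observations.items():
        cone = influencers(model, point)
        if abs(observed - expected[point]) > tol:
            suspects |= cone
        else:
            exonerated |= cone
    return suspects - exonerated

faults = diagnose(MODEL,
                  sources={"power": 10.0},
                  observations={"pipe": 8.0, "heater": 7.5})
print("candidate faults:", faults)   # -> {'heater'}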

Qualifications:
A master's degree in Computer Science, Cognitive Science or
Psychology with experience in Artificial Intelligence
Research. A good working knowledge of Prolog or Lisp.

Conditions:
Starting date will be as soon as possible. Salary will be
commensurate with experience (ca. DFL. 3200 - 6200 / month).
The position will be filled as a 12-month appointment with a
possibility of renewal. It is possible to do a Ph.D. as part
of the project.

Further information:
Dr. Wouter Jansweijer or Prof. Dr. Bob Wielinga,
Dep. of Social Science Informatics,
University of Amsterdam,
Herengracht 196,
1016 BS Amsterdam,
the Netherlands.
telephone: +31 20 525 2152
email: jansweij@swivax.UUCP
--
Wouter Jansweijer Phone: (31)-20-245365
EMAIL: jansweij@swivax.UUCP {seismo,decvax,philabs}!mcvax!swivax!jansweij
SNAIL: Department of Social Science Informatics, University of Amsterdam,
Herengracht 196, NL-1016 BS Amsterdam, The Netherlands.

------------------------------

End of NL-KR Digest
*******************
