NL-KR Digest             (4/14/88 20:28:08)            Volume 4 Number 42 

Today's Topics:
Semantic Networks : HELP !!!
Sapir-Whorf
Request for software which performs morphological analysis
ELIZA in Prolog ?

What are grammars (for)?
Re: Representing archiphonemes

From CSLI Calendar, April 7, 3:23
BBN AI Seminar -- Jeff Van Baalen
AI Seminar: Dave Schaffer
From CSLI Calendar, April 14, 3:24
Lang. & Cognition Seminar

Submissions: NL-KR@CS.ROCHESTER.EDU
Requests, policy: NL-KR-REQUEST@CS.ROCHESTER.EDU
----------------------------------------------------------------------

From: rich <rich%EASBY.DURHAM.AC.UK@CUNYVM.CUNY.EDU>
Subject: Semantic Networks : HELP !!!

I am investigating search techniques over realistically sized
semantic net structures as part of the work for my Ph.D. thesis.
I would be very glad to hear from anyone who can provide a copy of a
large semantic network on which to test my techniques, as the time
overhead in building my own semantic net is prohibitive. The language
of implementation of the network and its content is immaterial - any
size of network would be most welcome.
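
For concreteness, here is a minimal sketch of the kind of structure and
search in question: a semantic net as a labeled directed graph, searched
breadth-first for a chain of arcs between two concepts. (Python; the toy
net and all names are illustrative, not a proposal for the actual
representation.)

    from collections import deque

    # A semantic net as a labeled directed graph: each node maps to a list
    # of (relation, target) arcs.  The net below is a toy example.
    net = {
        "canary": [("isa", "bird"), ("color", "yellow")],
        "bird":   [("isa", "animal"), ("has-part", "wings")],
        "animal": [("has-part", "skin")],
    }

    def find_path(net, start, goal):
        """Breadth-first search for a chain of arcs linking start to goal."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            node, path = queue.popleft()
            if node == goal:
                return path
            for relation, target in net.get(node, []):
                if target not in seen:
                    seen.add(target)
                    queue.append((target, path + [(node, relation, target)]))
        return None

    # find_path(net, "canary", "skin") returns the three-arc chain
    # canary -isa-> bird -isa-> animal -has-part-> skin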

I would also appreciate hearing from anyone who is undertaking work
on semantic network search strategies and the use of semantic network
structures as the basis for commercial databases. Hopefully we may be
able to help each other.

Thanks very much.........

Richard.


E-MAIL: UUCP ...!mcvax!ukc!easby!rich
ARPA rich%EASBY.DUR.AC.UK@CUNYVM.CUNY.EDU
BITNET rich%DUR.EASBY@AC.UK

FROM: Mr. R.M.Piercy,
Computing Dept,
S.E.A.S,
Science Laboratories,
South Road,
DURHAM DH1 3LE,
ENGLAND.

------------------------------

Date: Wed, 6 Apr 88 17:50 EDT
From: Starbuck <pearl@topaz.rutgers.edu>
Subject: Sapir-Whorf

Some people have mentioned the Sapir-Whorf hypothesis. What is it?

Can someone give me a quick explanation of it and where I might read
more about it?

Thanks,

Steve
NAME: Stephen Pearl (Starbuck) VOICE: (201)932-3465
UUCP: rutgers!topaz.rutgers.edu!pearl ARPA: pearl@topaz.rutgers.edu
US MAIL: LPO 12749 CN 5064, New Brunswick, NJ 08903
QUOTE: "Works for me!" -Rick Hunter (The Cop, not the Robotech Defender)
"What is Starbuck-ing?" -Adultress 19

------------------------------

Date: Thu, 7 Apr 88 09:10 EDT
From: fosli@ifi.uio.no
Subject: Request for software which performs morphological analysis

I'm working on a project to translate words and phrases from English to
Norwegian.

Anyone done something similar?

In particular I would like the following (both are sketched roughly below):
1. A morphological analyzer to get the root and the features for a given word.
2. A dictionary to use with 1.
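
To make the request concrete, here is a rough sketch of both pieces
together, assuming a simple strip-the-suffix analyzer backed by a root
dictionary. (Python; the lexicon entries and suffix rules are
hypothetical.)

    # A toy analyzer: strip a known suffix, look the stem up in a small
    # root dictionary, and return the root plus its features.  The lexicon
    # entries and suffix rules here are hypothetical placeholders.
    LEXICON = {"walk": "verb", "cat": "noun", "happy": "adj"}

    SUFFIXES = [           # (suffix, features it contributes)
        ("ing", {"form": "present-participle"}),
        ("ed",  {"tense": "past"}),
        ("s",   {"number": "plural-or-3sg"}),
        ("",    {}),       # bare root
    ]

    def analyze(word):
        """Return (root, features) for the first suffix rule that applies."""
        for suffix, features in SUFFIXES:
            stem = word[:-len(suffix)] if suffix else word
            if stem in LEXICON:
                return stem, dict(features, category=LEXICON[stem])
        return None

    # analyze("walking") -> ("walk", {"form": "present-participle",
    #                                 "category": "verb"})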

------------------------------

Date: Sun, 10 Apr 88 00:10 EDT
From: Kanna Rajan <CS_5331_13@uta.edu>
Subject: ELIZA in Prolog ?

Does anyone know of ELIZA being written in PROLOG (any version,
either on mainframes or desktops)? I would appreciate it if you could let
me know as soon as possible.
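
For reference, the core of ELIZA is just keyword matching, pronoun
reflection, and template filling, so a port is small in any language with
string handling. A minimal sketch of the mechanism (in Python here rather
than Prolog; the rules are illustrative, not Weizenbaum's originals):

    import re

    # ELIZA's core mechanism in miniature: match a keyword pattern, reflect
    # pronouns, and fill a canned response template.  The two rules below
    # are illustrative, not Weizenbaum's originals.
    RULES = [
        (re.compile(r"i am (.*)", re.I),   "Why do you say you are {0}?"),
        (re.compile(r"i need (.*)", re.I), "What would it mean to get {0}?"),
    ]
    REFLECT = {"my": "your", "me": "you", "i": "you", "am": "are"}

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                words = [REFLECT.get(w.lower(), w) for w in match.group(1).split()]
                return template.format(" ".join(words))
        return "Please go on."

    # respond("I am tired of my job")
    #   -> "Why do you say you are tired of your job?"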

Thanks in advance.

----Kanna Rajan
cs_5331_13@uta.edu
Computer Science Dept.
Univ. of Texas @ Arlington

------------------------------

Date: Thu, 7 Apr 88 12:22 EDT
From: HESTVIK%BRANDEIS.BITNET@MITVMA.MIT.EDU
Subject: What are grammars (for)?

Rick Wojcik writes:

>AH> ... But the sentence 'How did you wonder whether Bill
>AH> fixed the car', with the intended reading that 'how' is a question about
>AH> the manner of fixing...

RW>(Please continue to use the expression 'with the intended reading that'.
RW>It serves to remind us all that grammaticality judgments don't exist
RW>outside of pragmatic contexts.)

Not quite true. You don't need any pragmatic context to decide that 'He
likes John', with JOHN and HE coreferent, is ill-formed. The same goes for
the above sentence with 'how...' (etc.). An "intended reading" is not a
"pragmatic context".

>My understanding of generative
>grammar is that no structural analysis is possible for the intended
>reading. That is how it gets recognized as ungrammatical.

That's exactly wrong. (It's true of formal grammars that you learn about in
discrete mathematics, but not of the study of mentally represented
grammars.)
By having the grammar give structural analyses for illformed strings, we
are able to pinpoint and discover in a systematic fashion WHY they are
ungrammatical. If the grammar gave it no structural analysis, but simply
made a yes/no distinction between well-formed and ill-formed strings, then
we could never understand why it was ungrammatical; we would simply have to
list it as an irreducible fact (and end up with context-free PSGs).
In fact, by giving a structural analysis, we find that the WH-word is
related to a position in the embedded clause, and the relation crosses
another WH-word, and such crossings are generally bad (which is how it gets
recognized as ungrammatical), etc. etc.
I.e., the fact that it is ungrammatical TELLS US something very
significant about the nature of human grammars, but we cannot discover what
it is unless the grammar gives an analysis of the sentence.
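
The contrast can be put schematically in code: a recognizer that only says
yes/no versus one that returns an analysis naming the violated constraint.
(A toy Python sketch; the "analysis" is a drastic simplification of the
WH-crossing story, not a real parser.)

    # Schematic contrast: a bare recognizer answers only yes/no, while a
    # grammar that assigns structure can also report WHICH constraint an
    # ill-formed string violates.  The "analysis" below is a drastically
    # simplified stand-in for a real parse, not a serious proposal.
    WH_WORDS = {"how", "what", "who", "whether"}

    def analyze(sentence):
        """Hypothetical helper: record the fronted WH-word and any WH-words
        it crosses on the way to its gap in the embedded clause."""
        words = sentence.lower().rstrip("?").split()
        return {"fronted": words[0],
                "crossed": [w for w in words[1:] if w in WH_WORDS]}

    def judge(sentence):
        analysis = analyze(sentence)
        if analysis["fronted"] in WH_WORDS and analysis["crossed"]:
            return ("ungrammatical",
                    "WH-movement crosses %s" % ", ".join(analysis["crossed"]))
        return ("grammatical", None)

    # judge("How did you wonder whether Bill fixed the car")
    #   -> ("ungrammatical", "WH-movement crosses whether")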

Rick Wojcik and Stan Friesen write:

>>>AH> least for part of the field). Rather, the main interest is to try to
>>>AH> understand the very nature of grammars ... namely the
>>>AH> psychologically represented mechanism that underlies e.g. language
>>> acquisition
>>>AH> and language processing.

>>RW>Here we agree totally. This is why I believe that generative theory needs
>>RW>a coherent position on the way in which grammars interact with linguistic
>>RW>performance.

It's not an a priori requirement on generative grammar that it has a
coherent position on the way it interacts with performance. Rather, this
interaction is an empirical question and people are working on figuring it
out (see reference below). It is a fallacy to believe that we need to
understand everything about B to work on A. People understood the movements
of the planets before they understood what planets were! In linguistics it
is more the opposite: We have a pretty good understanding of what sentences
are, but we have (pretty much) no idea about how they are processed in the
brain.

>SF> Or why generative grammars must be thrown out, since they do not seem
>SF> to correspond to any real psychological process!

If you actually read any of the scientific literature, you would find that
generative grammar is NOT meant to correspond to any real psychological
PROCESS, but rather to real psychological KNOWLEDGE. The former view was a
misunderstanding in the early years by psychologists, called the
Derivational Theory of Complexity. They thought that a sentence with many
transformations in its derivation would be harder, or take longer, to
process than a sentence with few. This turned out to be wrong.
There are some reasons to think that there is not a direct correspondence
between the grammar and the processor. For starters, consider the fact that
a sentence like "This is the man that the cat that chased the rat that ate
the cheese scratched" is really hard to process; in fact, this is the kind
of sentence on which the processor actually FAILS. Do we want the grammar
to say that the sentence is ill-formed? Or consider the sentence "The child
seems sleeping." This sentence is perfectly well understood by English
speakers; they can assign it a specific meaning. Do we want linguistic
theory to say that it is well-formed?
Notice that the theory of grammar has as one of its tasks to aid a child
in acquiring language, i.e., to instruct the child on what a grammar may
possibly look like. For example, it may say that "If AGR is rich, then you
have pro-drop." This is a very different function from, say, parsing a
sentence. It's not very process-like.

To read about the relation between grammar and processing, see ch. 2
(pp. 35-82) in Robert Berwick & Amy Weinberg (1984), "The Grammatical Basis
of Linguistic Performance: Language Use and Acquisition," Cambridge, Mass.:
MIT Press. On the "how..." sentence and why it is ungrammatical, see, e.g.,
N. Chomsky, "Barriers," MIT Press, 1986.

Arild Hestvik
Department of Psychology
Brandeis University

------------------------------

Date: Sat, 9 Apr 88 10:56 EDT
From: John Chambers <jc@minya.UUCP>
Subject: Re: Representing archiphonemes


> I don't see why your chart of Spanish graphemes contains archiphonemes.
> Although 'm' in 'cambiar' cannot contrast with /n/, that does not make
> it a different sound from 'm' in 'matar'. I would maintain that
> positions of phonemic neutralization exist, but not archiphonemes.
> Can you think of any real justification for underspecifying segments
> that occur in neutralized positions? Bear in mind that no alphabetic
> writing system has special symbols for them. They seem to have no
> behavioral correlates. Of what use are they?

Maybe I misunderstand, but it seems to me that there are a few examples of
such representation of archiphonemes. For instance, in Russian, [v] and [f]
are definitely in contrast, albeit weakly. The writing system has a letter
for [f] (which looks like the Greek phi), and one for [v/f] (which looks
like 'B'); there is no letter for [v] alone. When you see the [v/f] letter
(which is usually transliterated as 'v', causing English-speaking people to
mispronounce Russian names), you have to determine from context which of the
two phonemes it stands for. When you see the 'f', you know it's [f].

There is confusion here with the representation of morpho-phonemes, such as
the English /-s/ suffix. This is written "s", even when pronounced [z].
But this isn't an example of an archiphoneme; it is rather a morpho-phoneme
conditioned by its environment.

The Russian v/f case is more of a real archiphoneme. The two sounds aren't
in contrast in "native" words; the contrast arises only from borrowings that
violate the voicing rules (which is easy, as v/f is the only
voiced/voiceless pair that isn't in contrast in the native vocabulary). An
example is the common name "Fyodor", derived from the Greek "Theodore". The
initial [f] violates the conditioning rules for the [v/f] archiphoneme, so
it's spelled with an 'f'.

There is even an example in English, though we use a digraph: The symbol
'th' stands for two different phonemes that *are* in weak contrast. A
thousand years ago, they weren't in contrast, but other changes (such as
loss of final weak vowels) have produced contrasts such as mouth/mouthe and
breath/breathe. Anyhow, it seems straightforward to classify 'th' as an
English archiphoneme, with a single symbol representing both phonemes, and
spelling conventions to distinguish them.

There are other examples. Germans use 's' to represent both [s] and [z],
with spelling conventions to disambiguate them. As in the above examples,
s/z are barely in contrast, so you don't often need separate symbols.
Mostly they just use 'ss' to represent [s] when the rules would give [z].

On the other hand, we haven't had any clear definitions of what constitutes
an "archiphoneme" or a "morphophoneme", so I could be using different
definitions than yours...

--
John Chambers <{adelie,ima,maynard,mit-eddie}!minya!{jc,root}> (617/484-6393)

------------------------------

Date: Mon, 11 Apr 88 12:18 EDT
From: Rick Wojcik <rwojcik@bcsaic.UUCP>
Subject: Re: Representing archiphonemes


In article <557@minya.UUCP> jc@minya.UUCP (John Chambers) writes:
>... we haven't had any clear definitions of what
>constitutes an "archiphoneme" or a "morphophoneme", so I could
>be using different definitions than yours...
>John Chambers <{adelie,ima,maynard,mit-eddie}!minya!{jc,root}> (617/484-6393)

Archiphonemes often get confused with morphophonemes. The key to
understanding them is the concept of automatic phonemic neutralization.
For example, English does not allow syllable-internal obstruent clusters
of mixed voicing. These are eliminated by a process of progressive voice
assimilation. Thus, we can have initial /sp/ clusters, but any
attempt to pronounce /sb-/ results in a devoicing of the /b/. Similarly,
plural /-z/ is automatically devoiced after voiceless obstruents. N.
Trubetzkoy proposed the use of capital letters to represent segments of
indeterminate voicing for cases such as this. Thus, {spill} might be
represented as /sPil/, and {cats} as /kaetZ/. Trubetzkoy took
archiphonemics, together with phonemics, to constitute phonological
theory. Morphophonemes stood for nonautomatic alternations between
distinct phonemes (e.g., /f/~/v/ in 'leaf/leaves'). Although Trubetzkoy
took morphonology to be a separate component of grammar from phonology,
most other phonologists came to conflate morphophonemes with
archiphonemes. Hence the confusion.
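
Read mechanically, Trubetzkoy's capital-letter notation amounts to a small
rule system. A toy sketch of resolving such archiphonemes to surface
segments by the automatic voicing rules just described (Python; the
segment classes are simplified):

    # Toy resolution of Trubetzkoy-style archiphonemes: a capital letter
    # stands for a segment of indeterminate voicing, realized here by
    # progressive assimilation to the preceding segment.  Segment classes
    # are simplified, and /ae/ is written as plain 'a'.
    VOICELESS = set("ptkfs")
    ARCHI = {"P": ("p", "b"), "T": ("t", "d"),
             "K": ("k", "g"), "Z": ("s", "z")}   # (voiceless, voiced)

    def surface(form):
        out = []
        for seg in form:
            if seg in ARCHI:
                voiceless, voiced = ARCHI[seg]
                prev = out[-1] if out else None
                out.append(voiceless if prev in VOICELESS else voiced)
            else:
                out.append(seg)
        return "".join(out)

    # surface("sPil") -> "spil"   ({spill}: P devoiced after voiceless /s/)
    # surface("katZ") -> "kats"   ({cats}: plural Z devoiced after /t/)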

JC> When you see the [v/f] letter (which is usually transliterated as
JC> 'v', causing English-speaking people to mispronounce Russian names),
JC> you have to determine from context which of the two phonemes it
JC> stands for. When you see the 'f', you know it's [f].

Final /v/ gets pronounced [f] in Russian by automatic final devoicing, so
one could use an archiphoneme /F/ to represent both phonemes in that
position. As for /f/, it gets voiced before words beginning with voiced
obstruents--e.g. 'graf byl' [grav byl]. So when you see {f}, you don't
necessarily know that it's [f]. English-speaking people mispronounce
Russian names because they lack the automatic final devoicing. There is
good evidence that Russians think they are pronouncing final /v/'s in
those words.

JC> There is confusion here with the representation of morpho-phonemes,
JC> such as the English /-s/ suffix. This is written "s", even when
JC> pronounced [z]. But this isn't an example of an archiphoneme; it
JC> is rather a morpho-phoneme conditioned by its environment.

No. It's a candidate for archiphoneme, if you believe in such things.
Some people only allow 'p' in 'spill' to be an archiphoneme, because it
doesn't involve alternations. So, by some lights, your calling the {-s}
suffix a morphophoneme is ok, because it involves alternations.

JC> Anyhow, it
JC> seems straightforward to classify 'th' as an English archiphoneme,
JC> with a single symbol representing both phonemes, and spelling
JC> conventions to distinguish them.

I know of no phonological theories that would take modern English 'th' to
be an archiphoneme. I can think of no cases in English where the dental
fricative occurs in a position of automatic neutralization.
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: {uw-june uw-beaver!ssc-vax}!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

Date: Wed, 6 Apr 88 20:09 EDT
From: Emma Pease <emma@russell.stanford.edu>
Subject: From CSLI Calendar, April 7, 3:23

[Excerpted, etc.]

THIS WEEK'S CSLI SEMINAR
The Texture of Intelligence
Alexis Manaster-Ramer
(amr@csli.stanford.edu)
April 7

A and B engage in conversation in French with a group of Frenchmen.
However, while A speaks passable French, he does not understand spoken
French well, and B understands colloquial French reasonably well, but
does not speak it. So, A does the listening and B does the talking,
communicating with each other in English when necessary. As far as
the French interlocutors are concerned, A+B "knows" French. What I
want to argue is that theories of intelligent human behavior should
adopt the Frenchmen's point of view.

Intelligence exists in culture. What seems to make human beings an
intelligent species, biologically, is that we have evolved the
ability--and the necessity--of living in a culture. In general, the
subject of the study of human intelligence must then be the interaction of
groups of people. As a result, a proper explanatory theory of
intelligent behavior must be HISTORICAL in nature (much as biology and
physics are historical sciences). While we need to understand how an
individual represents knowledge, reasons, speaks, etc., our theories
must also capture the fact that no individual is capable of
creating English or developing French cuisine, say, from scratch.
Whether we want cognitive science or AI, we should think of simulating
cultures evolving through time rather than individuals.

Obviously, many of the processes we need to model do take place
within individual human beings. These must be understood in terms of
the interaction of the different mechanisms, which are postulated to
account for specific patterns in the data rather than in terms of a
priori mental faculties such as "grammar," "world knowledge,"
"commonsense reasoning," etc. In studying the individual, we must
again develop theories that are historical (ontogenetic) in nature,
since people's reasoning and language use, for example, both seem to
depend to a large extent on how and when various skills and
information happen to be learned. Moreover, the components of the
theory of individuals cannot all be qualitatively alike. Some are
physical, others cognitive, and others in between (as in my theory of
TACTICS, the lowest level of language).

The theories of the individual, as well as those of the cultural,
phenomena can--and should--be formal (symbolic) without our having to
assume that the object being studied is symbolic and represented in
individual minds in symbols. The usual kinds of formalisms (say,
automata) can be used to model interactions among individuals or
cultural evolution or the states of a physical system such as the
vocal tract, for example, just as easily as they can to represent the
alleged cognitive faculties of individual people (like "grammar").
This approach seems to close the gap between the two main positions on
the nature and definition--as well as the possibility of artificial
simulation--of intelligence. We accept the "pessimistic" view on the
scope of the subject to be modeled but adopt the "optimistic" view on
the symbolic representation of the models (NOT of the objects of
study). Results in different areas emerge immediately from this
perspective, such as my work on tactics.

--------------
NEXT WEEK'S CSLI TINLUNCH
Reading: "Language and Interpretation:
Philosophical Reflections and Empirical Inquiry"

by Noam Chomsky
Discussion led by Sylvain Bromberger
(sylvain@csli.stanford.edu)
April 14

Once upon a time there were serious people who tried to figure out
what the heavenly spheres are made of. They never succeeded. There
are no heavenly spheres. Quine, Davidson, Dummett, and Putnam hold
views about language and its study that imply that much of what passes
for serious linguistics---at least at MIT---should be dismissed like
celestial sphereology, as based on delusion. These are prominent
philosophers. They should be right. Chomsky does not think that they
are. In this paper he tries to prove that they are mistaken. Are
they?

--------------
NEXT WEEK'S CSLI SEMINAR
On Acting Together: Joint Intentions for Intelligent Agents
Phil Cohen
(pcohen@ai.sri.com)
April 14

No one wants to build just one lonely autonomous agent. If we are
successful, we will want our creations to be available to help each
other, and us. In short, they should be able to act jointly with
other agents. Obvious examples of joint action in human society
include pushing a car, playing a duet, executing a pass play, engaging
in a dialogue, and doing research (for example, this research was done
jointly with Hector Levesque, Department of Computer Science,
University of Toronto). Analogues of such "team play" can easily be
created for any task requiring more than one agent for its
accomplishment, and for those in which agents need to divide up the
work.

In a recent paper, we argued that intention is a derived concept,
founded on the idea of an internal commitment, or "persistent goal,"
i.e., goals kept through time. In this talk, we develop an analogous
concept, that of a "joint commitment," that can serve as the basis of
a concept of joint intention. We show how joint commitments lead to
synchronization, agreements to commence action and to terminate,
individual actions by the collaborators, and communication. Finally,
we show how the analysis compares with recent proposals by Searle and
by Grosz and Sidner for describing joint intentionality.
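
The notion of a persistent goal has a natural operational reading, sketched
below with the caveat that Cohen and Levesque's actual account is a modal
logic, not a program (Python; all names are illustrative):

    # Toy operational reading of a persistent goal (the actual account is
    # a modal logic, not a program): the agent keeps acting toward the goal
    # until it believes the goal achieved or believes it impossible.
    def pursue(goal_holds, impossible, act, max_steps=100):
        for _ in range(max_steps):
            if goal_holds():
                return "achieved"       # commitment discharged
            if impossible():
                return "abandoned"      # commitment dropped for cause
            act()                       # otherwise the commitment persists
        return "still-committed"

    # Example: a goal of counting to three.
    # state = {"n": 0}
    # pursue(lambda: state["n"] >= 3, lambda: False,
    #        lambda: state.update(n=state["n"] + 1))   -> "achieved"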


------------------------------

Date: Fri, 8 Apr 88 14:25 EDT
From: Marc Vilain <MVILAIN@G.BBN.COM>
Subject: BBN AI Seminar -- Jeff Van Baalen

BBN Science Development Program
AI Seminar Series Lecture

REPRESENTATION DESIGN FOR PROBLEM SOLVING

Jeffrey Van Baalen
MIT AI Laboratory
(jvb@HT.AI.MIT.EDU)

BBN Labs
10 Moulton Street
2nd floor large conference room
10:30 am, Thursday April 14


It has long been acknowledged that having a good representation is key
in effective problem solving. But what is a ``good'' representation?
In this talk, I give an overview of a theory of representation design for
problem solving that answers this question for a class of problems called
analytical reasoning problems. These problems are typically very
difficult for general problem solvers, like theorem provers, to solve.
Yet people solve them comparatively easily by designing a specialized
representation for each problem and using it to aid the solution
process. The theory is motivated, in large part, by observations of the
problem solving behavior of people.

The implementation based on this theory takes as input a straightforward
predicate calculus translation of the problem, gathers any necessary
additional information, decides what to represent and how, designs the
representations, creates a LISP program that uses those representations,
and runs the program to produce a solution. The specialized representation
created is a structure whose syntax captures the semantics of the problem
domain and whose behavior enforces those semantics.
-------

------------------------------

Date: Tue, 12 Apr 88 08:33 EDT
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: AI Seminar: Dave Schaffer



BBN Science Development Program
AI Seminar Series

ADAPTIVE KNOWLEDGE REPRESENTATION: A CONTENT SENSITIVE
RECOMBINATION MECHANISM FOR GENETIC ALGORITHMS

J. David Schaffer
Philips Laboratories
North American Philips Corporation
Briarcliff Manor, New York


BBN Laboratories Inc.
10 Moulton Street
Large Conference Room, 2nd Floor

10:30 a.m., Tuesday, April 19, 1988

Abstract: This paper describes ongoing research on content sensitive
recombination operators for genetic algorithms. This line of inquiry is
motivated by the observation that biological chromosomes appear
to contain special nucleotide sequences whose job is to influence the
recombination of the expressible genes. We think of these as punctuation marks
telling the recombination operators how to do their job. Furthermore, we
assume that the distribution of these marks (part of the representation) in
a gene pool is determined by the same survival-of-the-fittest and genetic
recombination mechanisms that account for the distribution of the expressible
genes (the knowledge). A goal of this project is to devise such mechanisms
for genetic algorithms and thereby to link the adaptation of a representation
to the adaptation of its contents. We hope to do so in a way that capitalizes
on the intrinsically parallel behavior of the traditional genetic algorithm.
We anticipate benefits of this for machine learning.
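
As a rough illustration of the punctuation-mark idea (a sketch under my own
simplifying assumptions, not necessarily Schaffer's operator), a chromosome
can carry a parallel string of crossover marks, with recombination allowed
to cut only at marked loci, so that the marks themselves evolve:

    import random

    # Rough illustration, not necessarily Schaffer's actual operator: each
    # individual is (genes, marks), where marks is a parallel bit string of
    # crossover "punctuation".  Cut points are restricted to marked loci,
    # and marks are inherited with the genes, so they too are selected.
    def punctuated_crossover(parent_a, parent_b, rng=random):
        genes_a, marks_a = parent_a
        genes_b, marks_b = parent_b
        cuts = [i for i in range(1, len(genes_a))
                if marks_a[i] or marks_b[i]]
        if not cuts:
            return parent_a, parent_b      # no punctuation, no recombination
        cut = rng.choice(cuts)
        child1 = (genes_a[:cut] + genes_b[cut:], marks_a[:cut] + marks_b[cut:])
        child2 = (genes_b[:cut] + genes_a[cut:], marks_b[:cut] + marks_a[cut:])
        return child1, child2

    # With marks permitting cuts only at loci 3 and 4:
    # p1 = ([1, 1, 1, 1, 1, 1], [0, 0, 0, 1, 0, 0])
    # p2 = ([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0])
    # punctuated_crossover(p1, p2) cuts at locus 3 or 4 only.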

We describe one mechanism we have devised and present some empirical evidence
that suggests it may be as good as or better than a traditional genetic
algorithm across a range of search problems. We attempt to show that its
action does successfully adapt the search mechanics to the problem space
and provide the beginnings of a theory to explain its good performance.

------------------------------

Date: Wed, 13 Apr 88 20:05 EDT
From: Emma Pease <emma@russell.stanford.edu>
Subject: From CSLI Calendar, April 14, 3:24

[Excerpted from CSLI Calendar]

Connections between Linguistics and Computer Science:
Some Topics in the Mathematics of Language
Bill Rounds
(rounds@csli.stanford.edu)
University of Michigan
CSLI and Xerox PARC
April 21

In this talk I will discuss some similarities and analogies between
grammar formalisms, situation theory, database theory, and the modal
logic of programs. The focus of the talk will be on a simple graphical
representation of linguistic structures, essentially as state graphs
of nondeterministic finite automata, and I will describe several kinds
of logical statements useful for speaking about these structures. When
a construct for specifying structures recursively is added to the
basic logic, one obtains a fairly powerful declarative mechanism
similar to Prolog.

Unification of the extended structures can be thought of as forming
a join operation in a suitable ordering of the structures. It turns
out that in one such ordering, unification corresponds to taking the
join of database relations. This ordering has also proved useful in
the specification of concrete data types.
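
As a small concrete rendering of the unification-as-join idea (a sketch
only, not Rounds's formalism), feature structures can be modeled as nested
dictionaries, with unification computing their least upper bound when one
exists:

    # Feature structures as nested dicts, atomic values as leaves.
    # Unification returns the least structure subsuming both inputs, or
    # fails; reentrancy (shared substructure) is deliberately ignored.
    class UnificationFailure(Exception):
        pass

    def unify(f, g):
        if isinstance(f, dict) and isinstance(g, dict):
            result = dict(f)
            for key, value in g.items():
                result[key] = unify(result[key], value) if key in result else value
            return result
        if f == g:                # identical atoms unify with themselves
            return f
        raise UnificationFailure((f, g))

    # unify({"agr": {"num": "sg"}}, {"agr": {"per": 3}})
    #   -> {"agr": {"num": "sg", "per": 3}}
    # unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}})
    #   -> raises UnificationFailure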

Most of the talk will consist of examples and pictures, and only a
nodding familiarity with any of the above topics will be presumed.

------------------------------

Date: Thu, 14 Apr 88 14:11 EDT
From: Dori Wells <DWELLS@G.BBN.COM>
Subject: Lang. & Cognition Seminar

BBN Science Development Program
Language & Cognition Seminar Series

HOW LANGUAGE STRUCTURES ITS CONCEPTS: THE ROLE OF GRAMMAR

Leonard Talmy
Program in Cognitive Science
University of California, Berkeley

BBN Laboratories Inc.
10 Moulton Street
Large Conference Room, 2nd Floor

10:30 a.m., Wednesday, April 20, 1988


Abstract: A fundamental design feature of language is that it has two
subsystems, the open-class (lexical) and the closed-class (grammatical).
These subsystems perform complementary functions. In a sentence, the
open-class forms together contribute most of the *content* of the
total meaning expressed, while the closed-class forms together determine
the majority of its *structure*. Further, across the spectrum of
languages, all closed-class forms are under great semantic constraint:
they specify only certain concepts and categories of concepts, but not
others. These grammatical specifications, taken together, appear to
constitute the fundamental conceptual structuring system of language.
I explore the particular concepts and categories of concepts that
grammatical forms specify, the properties that these have in common
and that distinguish them from lexical specifications, the functions
served by this organization in language, and the relations of this
organization to the structuring systems of other cognitive domains such
as visual perception and reasoning. The greater issue, toward which this
study ultimately aims, is the general character of conceptual structure
in human cognition.

------------------------------

Date: Fri, 1 Apr 88 06:28 EST
From: Francis LOWENTHAL <PLOWEN%BMSUEM11.BITNET@CUNYVM.CUNY.EDU>

ANNOUNCING A CONFERENCE : LANGUAGE AND LANGUAGE ACQUISITION 4
=============================================================

CALL FOR PAPERS

FIRST ANNOUNCEMENT


Dear colleague,

I have the pleasure of inviting you to the fourth
conference we are organizing on Language and Language Acquisition
at the University of Mons, Belgium.

The specific theme of this conference will be:
"LANGUAGE DEVELOPMENT AND COGNITIVE DEVELOPMENT"

Date: From August 22 to August 27, 1988
Place: Mons University.

The aim of this meeting is to further an interdisciplinary
and international collaboration among researchers connected in one
way or another with the field of communication and its underlying
logic; this includes studies of normal children as well as of
handicapped subjects.

Five topics have been chosen: Mathematics, Philosophy,
Logic and Computer Science, Psycholinguistics, and Psychology and
Medical Sciences. During the conference, each morning will be
devoted to two 45-minute lectures on one of these domains and to a
wide-ranging discussion of all the papers already presented. The
afternoon will be devoted to short presentations by panelists and
to further discussion of the panel and everything that preceded it.

There will be no parallel sessions, and, as the organizers
want to encourage discussion among the participants as much as
possible, it has been decided to limit the number of participants
to 70. The selection procedure will be supervised by an
international committee.

Further information and registration forms can be
obtained by old-fashioned mail or by e-mail from:

F. LOWENTHAL
Universite de l'Etat a Mons
Laboratoire N.V.C.D.
Place du Parc, 20
B-7000 MONS (Belgium)
tel : (32)65.37.37.41
TELEX 57764 - UEMONS B
E-MAIL : PLOWEN@BMSUEM11

Please feel free to communicate this call for papers
to other potentially interested researchers.

Thank you for your help and best wishes for 1988.

F. LOWENTHAL
JANUARY 7, 1988


------------------------------

End of NL-KR Digest
*******************
