AIList Digest Volume 2 Issue 141

AIList Digest           Thursday, 18 Oct 1984     Volume 2 : Issue 141 

Today's Topics:
LISP - Common Lisp Flavors,
AI Tools - OPS5 & Benchmarks,
Linguistics - Language Evolution & Sastric Sanskrit & Man-Machine Language,
AI - The Two Cultures
----------------------------------------------------------------------

Date: Thursday, 18 Oct 1984 06:11:24-PDT
From: michon%closus.DEC@decwrl.ARPA (Brian Michon DTN: 283-7695 FPO/A-3)
Subject: Common Lisp Flavors

Is there a flavor package for Common Lisp yet?

------------------------------

Date: 18 Oct 84 02:26 PDT
From: JonL.pa@XEROX.ARPA
Subject: OPS5 & Benchmarks

Two points, inspired by issue #140:

1) Xerox has a "LispUsers" version of OPS5, which is an unsupported
transliteration from the public Franz version, of a year or so ago, into
Interlisp-D. As far as I know, this version is also in the public
domain. [Amos Barzilay and I did the translation "in a day or so",
but have no interest in further debugging/supporting it]


2) Richard Gabriel is out of the country at the moment; but I'd like
to take a paragraph or two to defend his benchmarking project, and
report on what I witnessed at the two panel sessions it
sponsored -- one at AAAI 83 and the other at AAAI 84. The latter was
attended by about 750+ persons (dwindling down to about 300+ in the
closing hours!). In 1983, no specific timing results were released,
partly because many of the machines under consideration were undergoing
a very rapid rate of development; in 1984, the audience got numbers
galore, more perhaps than they ever wanted to hear. I suspect that the
TI Explorer is also currently undergoing rapid development, and numbers
taken today may well be invalid tomorrow (Pentland mentioned that).
The point stressed over and over at the two panel sessions is that
most of these benchmarks were picked to monitor some very specific facet
of Lisp performance, and thus no single number could adequately compare
two machines. In the question/answer session of 1983, someone tried to
cajole such a simplistic ratio out of Dr Gabriel, and his reply is
worth reiterating: "Well, I'll tell you -- I have two machines here, and
on one of the benchmarks, they ran at the same speed; but on another
one, there was a factor of 13 difference between them. So, now, which
number do you want? One, or Thirteen?"
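Gabriel's anecdote can be made concrete with a toy calculation. The timings below are invented purely for illustration; they are not the actual benchmark figures:

```python
# Hypothetical timings (in seconds) for two machines on two of the
# benchmark facets discussed above; all numbers are invented.
timings = {
    "function-call":   {"A": 2.0, "B": 2.0},   # same speed
    "message-passing": {"A": 1.0, "B": 13.0},  # factor of 13 apart
}

for facet, t in timings.items():
    # The B/A ratio is a per-facet comparison, not a global one.
    print(f"{facet}: B/A = {t['B'] / t['A']:.0f}")

# The two facets give ratios of 1 and 13 -- any single
# "machine B is N times slower" figure would misrepresent
# at least one of them.
```

This is exactly why the panel kept insisting that no single number will do: each ratio answers a different question about a different facet of performance.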
One must also note that many of the more important facets for
personal workstations were ignored -- primarily, I think because it's so
hard to figure out a meaningful statistic to monitor for them, and
partly because I'm sure Dick wanted to limit somewhat the scope of his
project. How does paging figure into the numbers? If paging is
factored out, then what do the numbers mean for a user who is frequently
swapping? What about local area network access to shared facilities?
What about the effects of GC? I don't know anyone who would feel
comfortable with someone else's proposed mixture of "facets" into a
Whetstone kind of benchmark; it's entirely possible that the
variety of facet mixtures found in Lisp usage is much greater than that
found in Fortran usage. [Nevertheless, I seem to remember that the
several facets reported upon by Pentland are at the core of almost any
Lisp (or, rather, ZetaLisp-like Lisp) -- function call, message passing,
and Flavor creation -- so he's not entirely off the wall.]
In summary, I'd say that both manufacturers and discerning buyers
have benefited from the discussions brought about by the Lisp timings
project; the delay on publication of the (voluminous!) numbers has had
the good effect of reminding even those who don't want to be reminded
that *** a single number simply will not do ***, and that "the numbers",
without an understanding analysis, are meaningless. Several of the
manufacturers' representatives even admitted during the 1984 panel
sessions that their own priorities had been skewed by monitoring facets
involved in the Lisp system itself, and that seeing the RPG benchmarks
as "user" rather than "system" programs gave them a fresh look at the
areas that needed performance enhancements.


-- Jon L White --

------------------------------

Date: 18 October 1984 0646-PDT (Thursday)
From: mbr@nprdc
Reply-to: mbr@NPRDC
Subject: Re: Timings

I along with about 8 million others heard RPG (Richard Gabriel) talk
at AAAI this year and at the Lisp Conference both this year and 2
years ago, so the benchmarks are around. I dunno if he has the
results on line (or for that matter what his net address is--
he was at LLL doing Common Lisp for the S1 last I heard), but
someone in net land might know, and a summary could be posted to
AIList mayhaps?

Mark Rosenstein


[Dr. Gabriel is on the net, but I will let him announce his own
net address if he wishes to receive mail on this subject. -- KIL]

------------------------------

Date: 15 Oct 1984 09:40-EST
From: Todd.Kueny@CMU-CS-G.ARPA
Subject: Language Evolution - Comments

For what it's worth:

Any language in use by a significant number of speakers is under
constant evolution. When I studied ancient Greek only singular and
plural were taught; dual was considered useful only for very old texts,
e.g. Homer or before. The explanation for this was twofold:

1) as the language was used, it became cumbersome to worry about
dual when plural would suffice. The number of endings for
case, sex and so on is very large in ancient Greek; having
dual just made things more cumbersome.

2) similarly, as ancient Greek became modern Greek, case to a
large extent vanished. Why? Throughout its use, Greek
evolved many special forms for words which were heavily used,
e.g. "to be", presumably because no one took the time to speak
the complete original form, and so its written form changed.

I pose two further questions:

1) Why would singular, dual, and plural evolve in the first
place? Why not a tri and quad as well? Dual seems to be
(at least to me) very unnatural.

2) I would prefer English to ancient Greek principally because
of the lack of case endings and conjugations. It is very
difficult to express certain new ideas, e.g. the concept of a word
on its own with no sex or case, in such a language. Why
would anyone consider case useful?

-Todd K.

------------------------------

Date: 15 Oct 1984 09:52-PDT (Monday)
From: Rick Briggs <briggs@RIACS.ARPA>
Subject: Re: Language Evolution


Why do languages move away from case? Why did Sastric Sanskrit
die? I think the answer is basically entropy. The history of
language development points to a pattern in which linguists write
grammars and try to enforce the rules (organization), and the tendency
of the masses is to sacrifice elaborate case structures etc. for ease
of communication.
One of the reasons Panini codified the grammar of Sanskrit so
carefully is that he feared a degeneration of the language, as was
already evidenced by various "Prakrits" or inferior versions of
Sanskrit spoken by servants etc. The Sanskrit word for barbarian
was "mleccha" which means "one who doesn't speak Sanskrit"; culture
and high civilization were equated with language. Similarly English
"barbarian" is derived from the Greek for "one who makes noises like
baa baa", i.e. one who doesn't speak Greek.
Current Linguistics has begun to actually aid this entropy by
paying special attention to slang and casual usage (descriptive vs.
prescriptive). Without some negentropy from the linguists, I fear
that English will degenerate further.

Rick Briggs

------------------------------

Date: Monday, 15-Oct-84 19:32:13-BST
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.%edxa@ucl-cs.arpa>
Subject: Sastric Sanskrit again

Briggs' message of 9 Oct 84 makes things a lot clearer.
The first thing is that Sastric Sanskrit is an artificial language,
very like Fitch's "unambiguous English" subset (he is a philosopher
who has a paper showing that this rationalised dialect is clear
enough that you can do Natural Deduction proofs on it directly).

One thing he leaves unclear is case. How is having case
a contribution to unambiguity? What is the logical difference
between having a set of prepositions and having a set of cases?
Indeed, most languages that have cases have to augment them with
prepositions because the cases are just too vague. E.g. English
has a sort of possessive case "John's", but when we want to be
clear we have to say "of John" or "for John" or "from John" as
the case may be. Praise of Latin is especially confusing, when
you recall that (a) that language hasn't got a definite article
(it has got demonstratives) and (b) the results of a certain
church Council had to be stated in Greek because of that ambiguity.
If you can map surface case to semantic case, surely you can map
prepositions to semantic case?
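The rhetorical question above can be sketched as a trivial lookup. The prepositions and case labels here are an invented toy mapping, offered only to show that the translation is mechanical, not a claim about any particular linguistic theory:

```python
# Toy mapping from English prepositions to semantic-case labels,
# illustrating that prepositions can carry the same kind of
# information as surface case endings. Labels are illustrative only.
PREP_TO_CASE = {
    "of":   "genitive",
    "to":   "dative",
    "from": "ablative",
    "with": "instrumental",
    "in":   "locative",
}

def semantic_case(preposition):
    """Return a semantic-case label, or 'unknown' if unmapped."""
    return PREP_TO_CASE.get(preposition, "unknown")

print(semantic_case("from"))     # ablative
print(semantic_case("despite"))  # unknown
```

If surface cases can be mapped to semantic cases, a table like this does the same job for prepositions; the real difficulty in either direction is the residual vagueness, not the mapping itself.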

The second thing which Briggs makes clear is that Sastric
Sanskrit is unbelievably long-winded. I do not believe that it can
ever have been spontaneously spoken.

The third thing is that despite this it STILL isn't unambiguous,
and I can use his own example to prove it.

He gives the coding of "Caitra cooks rice in a pot", and
translates it back into English as "There is an activity (vyaapaara:),
subsisting in the pot, with agency residing in one substratum not
different from Caitra, which produces the softening which subsists in
rice." Is Caitra BOILING the rice or STEAMING it? It makes a
difference! Note that this doesn't prove that Sastric Sanskrit
can't describe the situation unambiguously, only that it contains at
least one ambiguous sentence. Then too, suppose I wanted to
translate this into Greek. I need to know whether or not to use
the middle voice. That is, is Caitra cooking the rice for HIMSELF,
or for someone ELSE? Whichever choice I make in my translation, I
run the risk of saying something which Briggs, writing Sastric
Sanskrit, did not intend. So it's ambiguous.

Now that Briggs has made things so much clearer, I would be
surprised indeed if AI couldn't learn a lot from the work that
went into the design of Sastric Sanskrit. Actually using their
formalism for large chunks of text must have taught its designers
a lot. Though if "blackbird" really is specified as "a colour-event
residing in a bird", the metaphysical assumptions underlying it
might not be immune to criticism.

A final point is that we NEED languages which are capable
of coding ambiguous propositions, as that may be what we want to
say. If Briggs sees Caitra cooking some rice in a pot, he may
not KNOW whether it is for Caitra or for another, so if Briggs
is going to tell me what he sees, he has to say something I may
regard as ambiguous. Similarly, when a child says "Daddy ball",
that ambiguity (give me the ball? bounce the ball? do something
surprising with the ball?) may be exactly what it means to say;
it may have no clearer idea than that it would like some activity
to take place involving Daddy and the ball. A language which is
incapable of ambiguous expression is suited only to describing
mathematics and other games.

------------------------------

Date: 16 Oct 84 11:01:49-CDT (Tue)
From: "Roland J. Stalfonovich" <rjs%okstate.csnet@csnet-relay.arpa>
Subject: AI Natural Language

Much has been said in the last few notes about old or forgotten human
languages. This brings up an interesting point.
Has anyone thought of making (or is there currently) a 'standard' language
for AI projects? Not a programming language, but rather a communication
language for interspecies communication, man to machine (that is the whole
hope of AI, after all).

Several good choices exist and have existed for several generations. The
languages of Esperanto and Unifon are two good choices for study.
Esperanto was devised around the turn of the century for the purpose of
becoming the international language of the world. To these ends it has
obviously failed. This does not, however, mean that it is without merit.
Its advantages of an organized verb conjugation and easy noun and pronoun
definition make it a good choice for an 'easily implemented' language.
Unifon is a simplification of English. It involves the replacement of the
26 characters of the English alphabet by a set of 40 characters representing
the 40 phonemes (thus the name) of the English language. This would allow
the implementation of the language for speech synthesis (a pet project of many
research groups).

There are many more languages, and I am sure that everyone has his or her
own favorite. But for the criteria of being easily implemented on a computer
in both the printed and spoken form, Esperanto and/or Unifon should be
seriously considered.

------------------------------

Date: Mon 15 Oct 84 11:22:35-PDT
From: BARNARD@SRI-AI.ARPA
Subject: The Two Cultures of AI

It seems to me that there are two quite separate traditions in AI.
One of them, which I suppose includes the large majority of AI
practitioners, is devoted to rule-based deductive methods for problem
solving and planning. (I would include most natural language
understanding work in this category, as well.) The other, which
occupies a distinctly minority position, is concerned with models of
perception --- especially visual perception. It is my experience that
the followers of these two traditions often have trouble
communicating.

I want to suggest that this communication problem is due to the
fundamental difference in the kinds of problems with which these two
groups of people are dealing. The difference, put simply, is that
"problem solving" is concerned with how to find solutions to
well-posed problems effectively, given a sufficient body of knowledge,
while "perception" is concerned with how to go beyond the information
given. The solution of a well-defined problem, once it is known, is
known for certain, assuming that the knowledge one begins with is
valid. Perception, on the other hand, is always equivocal. Our
visual ability to construct interpretations in terms of invariant
properties of physical objects (shapes, sizes, colors, etc.) is not
dependent on sufficient information, in the formal logical sense.

As a researcher in perception, I have to admit that I am often annoyed
when problem-solving types insist that their formal axiomatic methods
are universal in some sense, and that they essentially "define" what
AI is all about. No doubt they are equally annoyed when I complain
about the severe limitations of the deductive method as a model of
intelligence, and relentlessly promote the inductive method. I'll
end, therefore, with a plea for tolerance, and for a recognition that
intelligence may, and in fact must, incorporate both "ways of
knowing."

------------------------------

End of AIList Digest
********************
