AIList Digest Thursday, 12 Feb 1987 Volume 5 : Issue 40
Today's Topics:
Queries - Pattern Recognition/Graphs & Mac PD Prolog &
Print Driver Extension & DEC AI Workstation & J.M. Spivey,
Representation - Language Comparisons,
AI Tools - Against the Tide of Common LISP
----------------------------------------------------------------------
Date: Wed, 11 Feb 87 10:42:08 n
From: DAVIS@EMBL.BITNET
Subject: pattern recognition/graphs
Does anyone out there in the electronic village have any familiarity with
pattern recognition algorithms that are, or may be, of particular use in
identifying fuzzy predefined graph features? Whilst I have a couple of
approaches of my own, I'd be very interested to hear about any other
methods. I guess that 'template matching' with arbitrary match
coefficients is the most obvious, but any other offers?
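The 'template matching' approach mentioned above can be sketched roughly as
follows (a toy one-dimensional version: the function names, the particular
scoring formula, and the sample data are all my own illustrative assumptions,
not anything from the query):

```python
# Slide a template over a sampled curve and score each window by a
# normalized similarity measure; windows scoring above an arbitrary
# match coefficient count as occurrences of the feature.

def match_score(window, template):
    """Similarity in [0, 1]: 1 minus normalized mean absolute difference."""
    diffs = [abs(w - t) for w, t in zip(window, template)]
    scale = max(max(map(abs, window)), max(map(abs, template)), 1e-9)
    return 1.0 - (sum(diffs) / len(diffs)) / scale

def find_feature(curve, template, threshold=0.8):
    """Return (start_index, score) for each window scoring >= threshold."""
    n = len(template)
    hits = []
    for i in range(len(curve) - n + 1):
        s = match_score(curve[i:i + n], template)
        if s >= threshold:
            hits.append((i, s))
    return hits

# A 'peak' template located inside a flat curve:
curve = [0, 0, 1, 3, 1, 0, 0, 0]
template = [1, 3, 1]
print(find_feature(curve, template, threshold=0.95))  # -> [(2, 1.0)]
```

The choice of scoring formula is exactly the "arbitrary match coefficient"
problem: any monotone similarity measure will do, and the threshold controls
how fuzzy a feature may be and still match.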
netmail: davis@embl.bitnet (from uucp that's psuvax1!embl.bitnet!davis)
wetmail: embl, postfach 10.2209, 6900 Heidelberg, west germany
paul davis
------------------------------
Date: 10 Feb 87 22:51:12 GMT
From: mendozag@ee.ecn.purdue.edu (Grado)
Subject: Mac PD Prolog wanted
Does anyone have the sources for a PD Prolog for the Mac+?
How about any other PD Prolog, so I can port it to the Mac?
Please, let me know by e-mail.
Thanks in advance,
Victor M Grado
School of EE
Purdue University
West Lafayette, IN 47907
(317) 494-3494
mendozag@ecn.purdue.edu
pur-ee!mendozag
------------------------------
Date: Wed, 11 Feb 87 15:53 EST
From: DON%atc.bendix.com@RELAY.CS.NET
Subject: Print driver extension for HP Laser printers on LMI
Does anyone have a print driver adapted for the HP Laser printer
written for LMI, Symbolics, or TI explorer? I'm looking for
the ability to set tab stops and use multiple fonts.
[I like to be as helpful as possible, but several readers have
pointed out that termcap entries and other hardware queries
have nothing to do with AI. There are lists (SLUG@UTEXAS-20,
INFO-TI-EXPLORER@SUMEX-AIM, INFO-1100@SUMEX-AIM, WORKS@RUTGERS,
etc.) devoted to specific hardware and operating systems. -- KIL]
------------------------------
Date: Wed, 11 Feb 87 08:42 EST
From: DON%atc.bendix.com@RELAY.CS.NET
Subject: DEC AI Workstation
One of my colleagues is thinking of buying an AI workstation from
DEC. I have heard nothing good about them. However, the negative
remarks have not come from people who have actually used them. In
order to better advise my colleague, I would like to hear from people
who have used the workstations. Of particular interest to me are
remarks from people who have used the DEC workstation and one of the
standard Lisp workstations (XEROX, Symbolics, LMI, TI, Sun, Apollo).
What about the Lisp Sensitive Editor? Is that worth anything? How
does it compare to ZMACS?
Thank you,
Don Mitchell
------------------------------
Date: Wed, 11 Feb 87 09:35 EDT
From: Peter Heitman <HEITMAN%cs.umass.edu@RELAY.CS.NET>
Subject: Looking for J.M. Spivey, the author of Portable Prolog
Can anyone help me locate J.M. Spivey, the author of Portable Prolog?
He was at the University of York years ago and then went to Edinburgh
for a while after that. Any help tracking him down will be appreciated.
Peter Heitman
heitman@cs.umass.edu
------------------------------
Date: Wed, 11 Feb 87 12:35:50 pst
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: language comparisons
Walter Hamscher writes:
> Here's one way to characterize richness: A is richer than B if symbol
> structures in A can finitely denote facts (i.e., the interpreter can
> interpret as) that B can't.
I suppose the intention of `richer than' is to be an
asymmetric comparative. Thus, he needs to add some condition
such as:
A can also finitely denote all facts that B can
to rule out cases where both A is richer than B and
B is richer than A. A case of this would be first-order logic
and modal logic. Each may express conditions that are
inexpressible in the other (e.g. irreflexivity for modal logic,
well-cappedness for first-order logic).
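The amended comparison can be stated compactly in set terms (the notation
F(L) is mine, not Ladkin's or Hamscher's):

```latex
% Let F(L) be the set of facts finitely denotable in language L. Then
% define "A is richer than B" as strict containment of denotable facts:
A \succ B \;\iff\; F(B) \subseteq F(A) \ \wedge\ F(A) \setminus F(B) \neq \emptyset
% So defined, \succ is a strict partial order and hence asymmetric:
% A \succ B and B \succ A cannot both hold. First-order logic and modal
% logic then come out incomparable rather than each richer than the
% other, since each denotes facts the other cannot.
```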
peter ladkin
ladkin@kestrel.arpa
------------------------------
Date: Tue, 10 Feb 87 19:24:09 pst
From: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs)
Reply-to: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs)
Subject: Against the Tide of Common LISP
"Against the Tide of Common LISP"
Copyright (c) 1986, Jeffrey M. Jacobs, CONSART Systems Inc.,
P.O. Box 3016, Manhattan Beach, CA 90266 (213)376-3802
Bix ID: jeffjacobs, CIS Userid 75076,2603
Reproduction by electronic means is permitted, provided that it is not
for commercial gain, and that this copyright notice remains intact.
The following are from various correspondences and notes on Common LISP:
Since you were brave enough to ask about Common Lisp, sit down for my answer:
I think CL is the WORST thing that could possibly happen to LISP. In fact, I
consider it a language different from "true" LISP. CL has everything in the
world in it, usually in 3 different forms and 4 different flavors, with 6
different options. I think the only thing they left out was FEXPRs...
It is obviously intended to be a "compiled" language, not an interpreted
language. By nature it will be very slow; somebody would have to spend quite a
bit of time and $ to make a "fast" interpreted version (say for a VAX). The
grotesque complexity and plethora of data types presents incredible problems to
the developer; it was several years before Gold Hill had lexical scoping,
and NIL from MIT DOES NOT HAVE A GARBAGE COLLECTOR!!!!
It just eventually eats up its entire VAX/VMS virtual memory and dies...
Further, there are inconsistencies and flat out errors in the book. So many
things are left vague, poorly defined and "to the developer".
The entire INTERLISP arena is left out of the range of compatibility.
As a last shot; most of the fancy Expert Systems (KEE, ART) are implemented in
Common LISP. Once again we hear that LISP is "too slow" for such things, when
a large part of it is the use of Common LISP as opposed to a "faster" form
(i.e. such as with shallow dynamic binding and simpler LAMBDA variables; they
should have left the &aux, etc as macros). Every operation in CL is very
expensive in terms of CPU...
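The "faster form" alluded to, shallow dynamic binding, can be sketched
roughly as follows (a toy model; the names are illustrative, not from any
particular LISP implementation):

```python
# Shallow dynamic binding: each symbol keeps a single value cell, so a
# variable lookup is one table probe rather than a search up a chain of
# environments. Entering a binding saves the old cell contents on a
# stack; leaving restores it.

value_cell = {}   # symbol -> current dynamic value
save_stack = []   # (symbol, previous value) pairs, innermost last

MISSING = object()  # sentinel: symbol had no value before the binding

def bind(symbol, value):
    save_stack.append((symbol, value_cell.get(symbol, MISSING)))
    value_cell[symbol] = value

def unbind():
    symbol, old = save_stack.pop()
    if old is MISSING:
        del value_cell[symbol]
    else:
        value_cell[symbol] = old

def lookup(symbol):
    return value_cell[symbol]   # O(1), no environment search

bind('x', 1)        # outer binding
bind('x', 2)        # inner binding shadows it
print(lookup('x'))  # -> 2
unbind()
print(lookup('x'))  # -> 1
```

The speed comes from making lookup constant-time at the cost of extra work
on every bind and unbind, which is the usual trade against deep binding.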
______________________________________________________________
I forgot to mention the fact that I do NOT like lexical scoping in LISP; to
allow both dynamic and lexical makes the performance even worse. To me,
lexical scoping was and should be a compiler OPTIMIZATION, not an inherent
part of the language semantics. I can accept SCHEME, where you always know
that it's lexical, but CL could drive you crazy (especially if you were
testing/debugging other people's code).
This whole phenomenon is called "Techno-dazzle"; i.e. look at what a super
duper complex system that will do everything I can build. Who cares if it's
incredibly difficult and costly to build and understand, and that most of the
features will only get used because "they are there", driving up the cpu usage
and making the whole development process more costly...
BTW, I think the book is poorly written and assumes a great deal of knowledge
about LISP, and MACLISP in particular. I wouldn't give it to ANYBODY to learn
LISP.
...Not only does he assume you know a lot about LISP, he assumes you know a LOT
about half the other existing implementations to boot.
I am inclined to doubt that it is possible to write a good introductory text on
Common LISP; you d**n near need to understand ALL of it before you can start
to use it. There is nowhere near the basic underlying set of primitives (or
philosophy) to start with, as there is in Real LISP (RL vs CL). You'll notice
that there is almost NO defining of functions using LISP in the Steele book.
Yet one of the best things about Real LISP is the precise definition of a
function!
Even when using Common LISP (NIL), I deliberately use a subset. I'm always
amazed when I pick up the book; I always find something that makes me curse.
Friday I was in a bookstore and saw a new LISP book ("Looking at LISP", I
think, the author's name escapes me). The author uses SETF instead of SETQ,
stating that SETF will eventually replace SETQ and SET (!!). Thinking that
this was an error, I checked in Steele; lo and behold, tis true (sort of).
In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
of page 94! And it isn't even clear; if the variable is lexically bound AND
dynamically bound, which gets changed (or is it BOTH)? Who knows?
Where is the definitive reference?
"For consistency, it is legal to write (SETF)"; (a) in my book, that should be
an error, (b) if it's not an error, why isn't there a definition using the
appropriate & keywords? Consistency? Generating an "insufficient args"
error seems more consistent to me...
Care to explain this to a "beginner"? Not to mention that SETF is a
MACRO, by definition, which will always take longer to evaluate.
Then try explaining why SET only affects dynamic bindings (a most glaring
error, in my opinion). Again, how many years of training, understanding
and textbooks are suddenly rendered obsolete? How many books say
(SETQ X Y) is a convenient form of (SET (QUOTE X) Y)? Probably all
but two...
Then try to introduce them to DEFVAR, which may or may not get
evaluated who knows when! (And which aren't implemented correctly
very often, e.g. Franz Common and Gold Hill).
I don't think you can get 40% of the points in 4 readings! I'm constantly
amazed at what I find in there, and it's always the opposite of Real LISP!
MEMBER is a perfect example. I complained to David Betz (XLISP) that MEMBER
used EQ instead of EQUAL. I only checked about 4 books and manuals (UCILSP,
INTERLISP, IQLISP and a couple of others). David correctly pointed out that
CL defaults to EQ unless you use the keyword syntax. So years of training,
learning and ingrained habit go out the window. How many bugs
will this introduce? MEMQ wasn't good enough?
MEMBER isn't the only case...
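The MEMBER contrast complained about above can be emulated in modern terms
(a sketch only: the predicates are crude stand-ins for EQL and EQUAL, and
all function names are mine):

```python
# CL's MEMBER defaults to an EQL-like test (identity, or equality of
# like-typed numbers), where the older LISPs the author cites compared
# structure (EQUAL-like). The default decides whether a freshly-built
# list can ever be "found" in another list.

def eql(a, b):
    """Crude EQL stand-in: same object, or equal numbers of the same type."""
    return a is b or (type(a) == type(b)
                      and isinstance(a, (int, float)) and a == b)

def equal(a, b):
    """Crude EQUAL stand-in: structural equality."""
    return a == b

def member(item, lst, test=eql):
    """Return the tail of lst starting at the first match, else []."""
    for i, x in enumerate(lst):
        if test(item, x):
            return lst[i:]
    return []

pair = [1, 2]
lst = [[1, 2], [3, 4]]
print(member(pair, lst))              # EQL test: fresh list, no match -> []
print(member(pair, lst, test=equal))  # EQUAL test: structural match found
```

The silent change of default is exactly the kind of bug source the author
describes: code written against an EQUAL-style MEMBER keeps running, but
stops finding anything but identical objects.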
While I'm at it, let me pick on the book itself a little. Even though CL
translates lower case to upper case, every instance of LISP names, code,
examples, etc. is in **>> lower <<** case and lighter type. In fact,
everything that is not descriptive text is in lighter or smaller type.
It's VERY difficult to read just from the point of eye strain; instead of
the names and definitions leaping out to embed themselves in your brain,
you have to squint and strain, producing a nice avoidance response.
Not to mention that you can't skim it worth beans.
Although it's probably hopeless, I wish more implementors would take a stand
against COMMON LISP; I'm afraid that the challenge of "doing a COMMON LISP"
is more than most would-be implementors can resist. Even I occasionally find
myself thinking "how would I implement that"; fortunately I then ask myself
WHY?
Jeffrey M. Jacobs <UCILSP>
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802
CIS:75076,2603
BIX:jeffjacobs
USENET: jjacobs@well.UUCP
(originally written in late 1985 and early 1986; more to come RSN)
------------------------------
Date: Wed, 11 Feb 87 23:04:46 pst
From: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs)
Reply-to: well!jjacobs@lll-lcc.ARPA (Jeffrey Jacobs)
Subject: Re: Against the Tide of Common LISP
Some comments on "Against the Tide of Common LISP".
First, let me point out that this is a repeat of material that appeared
here last June. There are several reasons that I have repeated it:
1) To gauge the ongoing change in reaction over the past two years.
The first time parts of it appeared in 1985, the reaction was
uniformly pro-CL.
When it appeared last year, the results were 3:1 *against* CL, mostly
via Mail.
Now, being "Against the Tide..." is almost fashionable...
2) To lay the groundwork for some new material that is in progress
and will be ready RSN.
I did not edit it since it last appeared, so let me briefly repeat some
of the comments made last summer:
1. My complaint that "both dynamic and lexical makes the
performance" even worse refers *mainly* to interpreted code.
I have already pointed out that in compiled code the difference in
performance is insignificant.
2. The same thing applies to macros. In interpreted code, a
macro takes significantly more time to evaluate.
I do not believe that it
is acceptable for a macro in interpreted code to be destructively
expanded, except under user control.
3. SET has always been a nasty problem; CL didn't fix the problem,
it only changed it. Getting rid of it and using a new name would
have been better.
After all, maybe somebody *wants* SET to set a lexical variable if that's
what it gets...
I will, however, concede that CL's SET is indeed generally the desired
result.
4. CL did not fix the problems associated with dynamic vs lexical
scoping and compilation, it only compounded them. My comment
that
>"lexical scoping was and should be a compiler OPTIMIZATION"
is a *historical* viewpoint. In the 'early' days, it was recognized
that most well written code was written in such a manner that
it was an easy and effective optimization to treat variables as
being lexical/local in scope. The interpreter/compiler dichotomy
is effectively a *historical accident* rather than design or intent of the
early builders of LISP.
UCI LISP should have been released with the compiler default as
SPECIAL. If it had been, would everybody now have a different
perspective?
BTW, it is trivial for a compiler to default to dynamic scoping...
5. >I checked in Steele; lo and behold, tis true (sort of).
>In 2 2/3 pages devoted to SETF, there is >> 1 << line at the very bottom
>of page 94!
I was picking on the book, not the language. But thanks for all
the explanations anyway...
6. >"For consistency, it is legal to write (SETF)"
I have so much heartburn with SETF as a "primitive" that I'll save it
for another day.
7. >MEMBER used EQ instead of EQUAL.
Mea culpa, it uses EQL!
8. I only refer to Common LISP as defined in the Steele Book, and
to the Common LISP community's subsequent inability to make
any meaningful changes or create a subset. (Excluding current
ANSI efforts).
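The non-destructive expansion mentioned in point 2 above can be sketched
roughly: cache expansions in a side table instead of splicing them into the
source (the toy INCF-style macro and all names here are my own illustration):

```python
# Re-expanding a macro on every interpreted evaluation is where the
# cost goes; a non-destructive alternative remembers the expansion in
# a side table keyed by the form's identity, leaving the source intact.

expansions = {}    # id(form) -> expanded form; original form untouched
expand_count = 0   # how many times real expansion work was done

def expand_incf(form):
    """Treat ('incf', var) as sugar for ('setq', var, ('+', var, 1))."""
    global expand_count
    expand_count += 1
    _, var = form
    return ('setq', var, ('+', var, 1))

def eval_macro_form(form):
    cached = expansions.get(id(form))
    if cached is None:
        cached = expand_incf(form)
        expansions[id(form)] = cached   # remember; don't rewrite the source
    return cached

loop_body = ('incf', 'x')
for _ in range(3):
    eval_macro_form(loop_body)   # expansion work happens only once
print(expand_count)  # -> 1
```

Keying on object identity is only safe here because the form stays live for
the whole run; a real interpreter would more likely hang the expansion off
the cons cell itself, which is where the destructive temptation comes from.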
Some additional points:
1. Interpreter Performance
I believe that development under an interpreter provides
a substantially better development environment, and that
compiling should be a final step in development.
It is also one of LISP's major features that anonymous functions
get generated as non-compiled functions and must be interpreted.
As such, interpreter performance is important.
2. "Against the Tide of Common LISP"
The title expresses my 'agenda'. Common LISP is not a practical,
real world language.
It will result in the ongoing rejection of LISP by the real world; it is
too big and too expensive. To be accepted, LISP must be able to run
on general purpose, multi-user computers.
It is choking off acceptance of other avenues and paths of
development in the United States.
There must be a greater understanding of the problems, and benefits
of Common LISP, particularly by the 'naive' would-be user.
Selling it as the 'ultimate' LISP standard is dangerous and
self-defeating!
Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802
CIS:75076,2603
BIX:jeffjacobs
USENET: jjacobs@well.UUCP
------------------------------
End of AIList Digest
********************