AIList Digest           Wednesday, 21 Nov 1984    Volume 2 : Issue 158 

Today's Topics:
LISP - Public-Domain LISP & Lisp Performance Tools,
Expert Systems - Paradocs,
Algorithms & Theorem Proving - Karmarkar's Linear Programming Algorithm,
Seminars - Solving Problems in Equational Theories &
The Grand Tour (in Pattern Recognition)
----------------------------------------------------------------------

Date: 19-Nov-84 10:11:58-PST
From: mkm@FORD-WDL1.ARPA
Subject: Public-Domain LISP?

Is there a public domain copy of a LISP interpreter running around out there?
If so, I would like to know where, how to get it, etc.

Thanks,

Mike McNair
Ford Aerospace

------------------------------

Date: Monday, 19 Nov 1984 09:39:20-PST
From: cashman%what.DEC@decwrl.ARPA
Subject: Lisp performance tools

I am interested in pointers to any tools which have been developed for
measuring the performance of Lisp application programs (not measuring the
performance of Lisp systems themselves).

Paul Cashman (Cashman%what.DEC@DECWRL)

------------------------------

Date: Mon 19 Nov 84 18:25:33-PST
From: Joe Karnicky <KARNICKY@SU-SCORE.ARPA>
Subject: Paradocs Expert System

Has anyone out there heard of/seen/used a software system called Paradocs?
It is marketed by a company that has undergone several name changes and is
currently known as Cogensys (out of Farmington Hills, Mich.). The system
is described as a "judgement processing system" and is represented as being
able to combine inputs from several domain experts into a judgement base
which is then able to diagnose problems in the domain. The name of the fellow
who created the system is Buzz Berk (spelling uncertain).
I'd very much appreciate ***any*** information and/or opinions about the
value or performance of this system.
Sincerely,
Joe Karnicky
<KARNICKY@SCORE>

------------------------------

Date: 19 Nov 1984 1421-EST
From: Venkat Venkatasubramanian <VENKAT@CMU-CS-C.ARPA>
Subject: Karmarkar

[Forwarded from the CMU bboard by Laws@SRI-AI.]

There is a front page article in today's NY Times on Karmarkar and his
linear programming algorithm.

------------------------------

Date: 19 Nov 84 1204 PST
From: Martin Frost <ME@SU-AI.ARPA>
Subject: linear programming "breakthrough"

[Excerpted from the Stanford bboard by Laws@SRI-AI.]

18 Nov 84
By JAMES GLEICK
c.1984 N.Y. Times News Service
NEW YORK - A 28-year-old mathematician at AT&T Bell Laboratories has
made a startling theoretical breakthrough in the solving of [linear
programming] systems of equations. [...]
The Bell Labs mathematician, Dr. Narendra Karmarkar, has devised a
radically new procedure. [...]
The new Karmarkar approach exists so far only in rough computer
code. Its full value will be impossible to judge until it has been
tested experimentally on a wide range of problems. But those who have
tested the early versions at Bell Labs say that it already appears
many times faster than the simplex method, and the advantage grows
rapidly with more complicated problems. [...]
Karmarkar, the son and nephew of mathematicians, was born in
Gwalior, India, and grew up in Poona, near Bombay. He joined Bell
Labs last year after attending the California Institute of Technology
at Pasadena and getting his doctorate from the University of
California at Berkeley.
News of his discovery has been spreading through the computer
science community in preprinted copies of Karmarkar's paper and in
informal seminars. His paper is to be formally published in the
journal Combinatorica next month and will be a central topic at the
yearly meeting of the Operations Research Society of America this
week in Dallas. [...]
Mathematicians visualize such problems as complex geometric solids
with millions or billions of facets. Each corner of each facet
represents a possible solution. The task of the algorithm is to find
the best solution, say the corner at the top, without having to
calculate the location of every one.
The simplex method, devised by the mathematician George B. Dantzig
in 1947, in effect runs along the edges of the solid, checking one
corner after another but always heading in the direction of the best
solution. In practice it usually manages to get there efficiently
enough for most problems, as long as the number of variables is no
more than 15,000 or 20,000.
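
For today's reader, posing and solving such a linear program takes only a
few lines -- a minimal sketch assuming Python with a recent scipy; the toy
numbers are invented, not from the article:

    from scipy.optimize import linprog

    # Toy instance: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6,
    # and x, y >= 0.  linprog minimizes, so the objective is negated.
    result = linprog(c=[-3, -2],
                     A_ub=[[1, 1], [1, 3]],
                     b_ub=[4, 6],
                     bounds=[(0, None), (0, None)],
                     method="highs-ds")   # a simplex-family solver
    print(result.x)                       # the optimal corner, (4, 0)

The solver arrives at a corner of the feasible solid, exactly as described
above.
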
The Karmarkar algorithm, by contrast, takes a giant short cut,
plunging through the middle of the solid. After selecting an
arbitrary interior point, the algorithm warps the entire structure -
in essence, reshaping the problem - in a way designed to bring the
chosen point exactly into the center. The next step is to find a new
point in the direction of the best solution and to warp the structure
again, bringing the new point into the center.
''Unless you do this warping,'' Karmarkar said, ''the direction that
appears to give the best improvement each time is an illusion.''
The repeated transformations, based on a technique known as
projective geometry, lead rapidly to the best answer. Computer
scientists who have examined the method describe it as ingenious.
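
Karmarkar's projective algorithm itself is more involved, but its flavor is
conveyed by the simpler "affine scaling" relative sketched below -- a sketch
only, not the article's algorithm, assuming Python with numpy. Each pass
rescales coordinates so the current interior point becomes the all-ones
"center" of the positive orthant, steps against the cost direction projected
onto the constraints, and maps back:

    import numpy as np

    def affine_scaling(A, b, c, x, gamma=0.9, tol=1e-8, max_iter=100):
        # Minimize c.x subject to A x = b, x > 0, starting from a
        # strictly feasible interior point x (numpy float arrays).
        assert np.allclose(A @ x, b) and (x > 0).all()
        for _ in range(max_iter):
            D = np.diag(x)                   # sends x to the ones vector
            AD, Dc = A @ D, D @ c
            w = np.linalg.solve(AD @ AD.T, AD @ Dc)   # dual estimate
            p = Dc - AD.T @ w                # cost projected onto the
            if np.linalg.norm(p) < tol:      # feasible directions; a tiny
                break                        # p means near-optimal
            if p.max() <= 0:
                raise ValueError("problem looks unbounded")
            # Step toward lower cost, staying strictly inside the orthant.
            x = D @ (np.ones_like(x) - (gamma / p.max()) * p)
        return x
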
''It is very new and surprising - it has more than one theoretical
novelty,'' said Laszlo Babai, visiting professor of computer science
at the University of Chicago. ''The real surprise is that the two
things came together, the theoretical breakthrough and the practical
applicability.''
Dantzig, now professor of operations research and computer science
at Stanford University, cautioned that it was too early to assess
fully the usefulness of the Karmarkar method. ''We have to separate
theory from practice,'' he said. ''It is a remarkable theoretical
result and it has a lot of promise in it, but the results are not all
in yet.''
Many mathematicians interested in the theory of computer science
have long been dissatisfied with the simplex method, despite its
enormous practical success. This is because the program performs
poorly on problems designed specifically to test its weaknesses,
so-called worst possible case problems. [...]
But fortunately for computer science, the worst-case problems almost
never arise in the real world. ''You had to work hard to produce
these examples,'' Graham said. And the simplex method performs far
better on average than its worst-case limit would suggest.
Five years ago, a group of Soviet mathematicians devised a new
algorithm, the ellipsoid method, that handled those worst-case
problems far better than the simplex method. It was a theoretical
advance - but the ellipsoid had little practical significance because
its average performance was not much better than its worst-case
performance.
The Soviet discovery, however, stimulated a burst of activity on the
problem and led to Karmarkar's breakthrough. The new algorithm does
far better in the worst case, and the improvement appears to apply as
well to the kinds of problems of most interest to industry.
''For a long time the mind-set that the simplex method was the way
to do things may have blocked other methods from being tested,'' said
Dr. Richard Karp, professor of computer science at the University of
California at Berkeley. ''It comes as a big surprise that what might
have been just a curiosity, like the ellipsoid, turns out to have
such practical importance.''

------------------------------

Date: Tue 20 Nov 84 14:15:01-PST
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Re: new linear programming algorithm

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

This may have implications for automatic theorem proving. Since
Dick Karp is working on it, they will probably be explored. We may get
some really high-performance verification techniques someday.

------------------------------

Date: Tue, 20 Nov 84 16:17:41 pst
From: Vaughan Pratt <pratt@Navajo>
Subject: new linear programming algorithm and automatic theorem proving

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

As applied to program verification, the typical linear programming problems
that arise are almost all of the form, find any integer solution to a set
of inequations of the form x+c<y where x and y are variables and c is an
integer constant. (Different inequations are allowed different choices of
the two variables, i.e. there are more than two variables in the system as
a whole.) There is a simple algorithm for solving these (in either integers
or reals) having worst case O(n**3) (essentially Floyd's algorithm for all
shortest paths). The sets tend to be sparse, which can be taken advantage
of to get better than n**3 performance. The implementation is simple and the
constant factor is small.
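
A minimal sketch of this method (not Pratt's code; plain Python, with
invented names): each strict constraint x + c < y over the integers is
rewritten as x - y <= -(c + 1), which becomes a weighted edge of a
constraint graph; Floyd's algorithm then either exposes a negative cycle
(no solution) or yields shortest-path distances that serve as a feasible
integer assignment.

    INF = float('inf')

    def solve_difference_constraints(n, constraints):
        # Find integers x[0..n-1] with x[i] + c < x[j] for every triple
        # (i, c, j) in constraints, or return None if none exist.
        # x[i] + c < x[j]  <=>  x[i] - x[j] <= -(c + 1)  <=>  an edge
        # j -> i of weight -(c + 1).
        d = [[0 if r == s else INF for s in range(n)] for r in range(n)]
        for i, c, j in constraints:
            d[j][i] = min(d[j][i], -(c + 1))
        # Floyd's all-pairs shortest paths: the O(n**3) step noted above.
        for k in range(n):
            for u in range(n):
                for v in range(n):
                    if d[u][k] + d[k][v] < d[u][v]:
                        d[u][v] = d[u][k] + d[k][v]
        if any(d[u][u] < 0 for u in range(n)):
            return None            # negative cycle: system is infeasible
        # Column minima of the distance matrix satisfy every constraint.
        return [min(d[s][u] for s in range(n)) for u in range(n)]

    # Example: x0 + 1 < x1 and x1 + 2 < x2  ==>  [-5, -3, 0]
    print(solve_difference_constraints(3, [(0, 1, 1), (1, 2, 2)]))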

The reason this form crops up is that it is an alternative representation
for the inequational theory of successor and predecessor, which in turn
crops up since most arithmetic occurring in programs consists of subscript
incrementing and decrementing and checking against bounds. Programs whose
arithmetic goes beyond this theory also tend to go beyond the theory of + and
- by having * and / as well, i.e. the fraction of programs covered by
linear programming but not by the above trivial fragment of it is not that
large.

-v

------------------------------

Date: Tue, 20 Nov 84 17:26:48 pst
From: Moshe Vardi <vardi@diablo>
Subject: New Algorithm for Linear Programming

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

A preliminary report appeared in the proceedings of the last ACM Symp. on
Theory of Computing. It is a provably polynomial time algorithm, which unlike
Khachian's algorithm is a practical one. There are doubts among the experts
whether the algorithm is as revolutionary as the PR people say it is.

Moshe

------------------------------

Date: Sat 17 Nov 84 17:58:32-PST
From: Ole Lehrmann Madsen <MADSEN@SU-CSLI.ARPA>
Subject: Seminar - Solving Problems in Equational Theories

[Forwarded from the Stanford bboard by Laws@SRI-AI.]


CENTER FOR THE STUDY OF LANGUAGE AND INFORMATION
AREA C MEETING

Topic: REVE: a system for solving problems in equational theories,
based on term rewriting techniques.
Speaker: Jean-Pierre Jouannaud, Professor at University of NANCY, FRANCE,
on leave at SRI-International and CSLI.
Time: 1:30-3:30
Date: Wednesday, Nov. 21
Place: Ventura Seminar Room

Equational logic has been used by mathematicians for a very long time, and by
computer scientists only recently. Specifications in OBJ2, an "object-oriented"
language designed and implemented at SRI-International, use equations to
express relations between objects. To express computations in this logic,
equations are used in one direction, i.e. as rewrite rules. Making proofs with
rules in this logic requires the so-called "confluence" property, which
expresses that the result of a computation is unique, no matter in which order
the rules are applied. Proofs and computations are therefore integrated in a
very simple framework. When a set of rules does not have the confluence
property, it is augmented with new rules, using the so-called Knuth-Bendix
completion algorithm, until the property is satisfied. This algorithm requires
the set of rules to have the termination property, i.e., that no expression
can be rewritten forever. It has been proved that this algorithm makes it
possible to perform inductive proofs without explicitly invoking an induction
principle, and to solve equations (unification) in the corresponding
equational theory as well.
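
To make the rewriting idea concrete, here is a toy in Python (a sketch only;
REVE itself is written in CLU, and nothing below comes from it): terms are
nested tuples, variables are strings beginning with '?', and two rules define
addition on Peano numerals. Repeatedly applying the rules drives every term
to its unique normal form, which is exactly what confluence plus termination
guarantee.

    def match(pat, term, env=None):
        # Match pattern against term, extending the binding environment.
        env = dict(env or {})
        if isinstance(pat, str) and pat.startswith('?'):      # variable
            if pat in env:
                return env if env[pat] == term else None
            env[pat] = term
            return env
        if isinstance(pat, tuple) and isinstance(term, tuple) \
                and len(pat) == len(term):
            for p, t in zip(pat, term):
                env = match(p, t, env)
                if env is None:
                    return None
            return env
        return env if pat == term else None

    def subst(term, env):
        # Replace variables in term by their bindings.
        if isinstance(term, str) and term.startswith('?'):
            return env[term]
        if isinstance(term, tuple):
            return tuple(subst(t, env) for t in term)
        return term

    def normalize(term, rules):
        # Rewrite innermost-first until no rule applies; this terminates
        # only if the rule set has the termination property.
        if isinstance(term, tuple):
            term = tuple(normalize(t, rules) for t in term)
        for lhs, rhs in rules:
            env = match(lhs, term)
            if env is not None:
                return normalize(subst(rhs, env), rules)
        return term

    # Peano addition:  0 + y -> y,   s(x) + y -> s(x + y)
    RULES = [(('+', '0', '?y'), '?y'),
             (('+', ('s', '?x'), '?y'), ('s', ('+', '?x', '?y')))]

    two = ('s', ('s', '0'))
    print(normalize(('+', two, two), RULES))   # s(s(s(s(0)))) = four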

REVE1, developed by Pierre Lescanne during a leave at MIT, implements all
these concepts, including automated proofs of termination.

REVE2, developed by Randy Forgaard at MIT, provided REVE with a very
sophisticated user interface, including an undo command.

REVE3, developed by Claude and Helene Kirchner in NANCY, includes powerful
new features, mainly mixed sets of rules and equations for handling theories
that include permutative axioms.

All versions are developed in CLU and run on VAXes under Berkeley UNIX.

------------------------------

Date: Tue 20 Nov 84 22:59:35-PST
From: Art Owen <OWEN@SU-SCORE.ARPA>
Subject: Seminar - The Grand Tour

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

Laboratory for Computational Statistics Seminar

Time: 3:15pm Wednesday November 21st
Place: Sequoia Hall 114
Cookies: at 3:00 in the Lounge

Title: Hopes for the Grand Tour

by Daniel Asimov
Dept. of Computer Science
U.C. Berkeley

The grand tour is a technique for examining two-dimensional projections
of higher-dimensional objects. The tour essentially picks a trajectory
through the space of possible projections, while a data analyst watches
the corresponding 'movie' on a graphics terminal. The objective is to
pass near most of the possible projections as quickly as possible. It
is a tool for finding informative projections.
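
As a rough illustration (not Asimov's construction; a sketch assuming
Python with numpy), such a 'movie' can be generated by randomly walking
the projection plane and re-orthonormalizing at each step:

    import numpy as np

    def grand_tour_frames(data, steps=100, stride=0.1, seed=0):
        # Yield 2-D projections of the rows of `data`, nudging the
        # projection plane a little at each step -- a crude random-walk
        # stand-in for a genuine grand-tour trajectory.
        rng = np.random.default_rng(seed)
        d = data.shape[1]
        plane, _ = np.linalg.qr(rng.standard_normal((d, 2)))
        for _ in range(steps):
            plane, _ = np.linalg.qr(plane
                                    + stride * rng.standard_normal((d, 2)))
            yield data @ plane      # one frame of the 'movie'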

The talk will discuss the current state of grand tour research,
identifying desirable properties that a tour might have, indicating
which of these properties have been achieved, and pointing out
directions for future research.

That's 3:00, 21 Nov 84, Sequoia Hall 114

------------------------------

End of AIList Digest
********************
