AIList Digest           Thursday, 15 Nov 1984     Volume 2 : Issue 155 

Today's Topics:
News - Recent AI Articles,
AI Tools - FRL Source & New Lisp for VAXen,
Logic Programming - Compiling Logic to Functional Programs,
Algorithms - Malgorithms,
Seminar - Inductive Learning
----------------------------------------------------------------------

Date: Mon 12 Nov 84 15:30:28-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Recent AI Articles

Oscar Firschein has called my attention to three articles on AI in
the November issue of Datamation. The Overselling of Expert Systems,
by Gary R. Martins, is a devastating attack on the current crop of
production-rule interpreters. The Blossoming of European AI, by
Paul Tate, is an informative piece on expert systems development
in Europe that is much more positive about the approach, but ends
with the appeal "Please, be honest." AI and Software Engineering,
based on Robert Kowalski's 1983-84 SPL-Insight Award lecture, advocates
logic-based programming; I found the presentation discursive and
inconclusive, but there's a nice example concerning the expression
of British citizenship laws as logical rules.


Martins makes some very good points about current expert systems
and development shells (e.g., "the blackboard model of cooperating
expert processes" is just a longer name for the old COMMON storage
facility in FORTRAN), but he neglects hierarchical inference (as in
MYCIN and PROSPECTOR), learning and self-modification (as in AM/EURISKO),
and the benefits of new ways of looking at old problems (hypothesize
and test in DENDRAL, "parallel" activity of cooperating experts
in HEARSAY). He describes rule-based systems as clumsy, resource-hungry,
and unsuitable for complex applications, and regards them as a result
of confusing AI science (simple cognitive models, LTM and STM, etc.)
with engineering. He does favor the type of serious AI development
being pursued by DARPA, but seems to think that most of the current
"expert systems" will be limited to applications in the personal
computer market (applications that could have been coded just as
easily with decision tables, decision trees, or other methodologies).
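
[As a toy illustration of the "shared global storage" reading of the
blackboard model that Martins alludes to, here is a Common Lisp sketch;
the names are invented for this note and describe no particular system:]

;; The blackboard is just a global table that independent
;; "knowledge sources" read from and write to.
(defvar *blackboard* (make-hash-table :test #'equal))

(defun post (key value) (setf (gethash key *blackboard*) value))
(defun look (key) (gethash key *blackboard*))

;; Two hypothetical knowledge sources that communicate only through
;; the blackboard, never directly with each other.
(defun ks-double ()
  (let ((n (look "number")))
    (when n (post "doubled" (* 2 n)))))

(defun ks-report ()
  (let ((d (look "doubled")))
    (when d (format t "doubled value: ~a~%" d))))

;; The control component is simply a loop over the knowledge sources.
(defun run-blackboard ()
  (post "number" 21)
  (dolist (ks (list #'ks-double #'ks-report))
    (funcall ks)))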

Martins also tells why he thinks the few expert systems mentioned above
(and also R1/XCON-XSEL) have been so successful. His points are worth
considering:

1) Brilliant programmers.
2) Easy or carefully delimited problems.
3) Plenty of time and funding in a favorable environment.
4) Developers were not saddled with expert system tools!
They developed systems to fit problems, not the other way around.
5) Luck -- other promising systems didn't make it to the finish line.
6) Public relations; some of these wonders are less than the public
believes them to be.


For a much more positive outlook on expert systems, or at least on
knowledge-based systems, see Frederick Hayes-Roth's overview in
the October issue of IEEE Computer. (One minor typo: Figure 5
should have an instance of PROLOG changed to POPLOG.)

-- Ken Laws

------------------------------

Date: 13 Nov 1984 21:29-EST
From: milne <milne@wpafb-afita>
Subject: Frl Source


MIT FRL Available

I have become the "keeper of the source" for FRL, originally from
MIT and implemented in FranzLisp. The system includes a machine-readable
version of the manual and a demo of the Tower of Hanoi and an ATN.
I am happy to distribute the sources free of charge, subject to the
following conditions:
1. Although I will distribute it, I am not a maintainer of the
software. I do not guarantee it is free of bugs (but I think it is),
and I do not have time to fix problems.
2. I can write UNIX tar tapes only. Sorry, no mail or FTP
transfers. (The source is about 95 files.)
3. It includes a UNIX and a VMS make file, but I can write only
tar tapes.
4. To get a copy, send a blank tape to:
Dr. Rob Milne
AFIT/ENG
WPAFB, OH 45433
I will write the tape and send it back.
cheers,
Rob Milne
Director, AI Lab
Air Force Institute of Technology
milne@wpafb-afita

------------------------------

Date: Tue, 13 Nov 84 11:20:19 -0200
From: jaakov%wisdom.BITNET@Berkeley (Jacob Levy)
Subject: Announcement of new Lisp for UN*X 4.x VAXen

I don't know if this is the appropriate place for such an announcement,
but here goes, anyway :-


YLISP, a Coroutine-based Lisp System for VAXen.
-=============================================-

A friend of mine, Yitzhak Dimitrovski, and I wrote a Lisp
system for UN*X 4.x systems on VAXen. It has the following features :-

o - Coroutines and closures. The system uses these to implement
the user-interface, single-stepping and error-handling. It's
easy to write a scheduler and time-share YLISP between two or
more user programs. [A sketch of this idea follows the list below.]
o - Multiple-dimension arrays.
o - Multiple name spaces (oblists) arranged in a tree hierarchy.
This is similar to the Lisp Machine facility.
o - Defstruct structure definition package.
o - Flavors object-oriented programming tools.
o - User-extensible evaluator (it is possible to (re)define the
actions of 'eval', 'apply' and 'print' relative to all user-
and pre-defined types).
o - Integer arithmetic. No floating-point, sorry; I don't think
that's really necessary, but it *could* be hacked. No BIGNUMs
either.
o - Good user-interface with history, sophisticated error handling
and function-call and variable-assignment tracing facilities.
o - Extensive library of ported and user-contributed programs, such
as a variant of the Interlisp structure editor, 'loop' macro,
'mlisp' Pascal-like embedded language, etc.
o - Compiler that generates efficient native assembler code for
the VAXen. The compiler is provided as a separate program, due
to size considerations. The compiler is written entirely in
Lisp, of course.
o - Extensive online documentation, as well as a 400-page manual
describing the whole system from a programmer's point of view.
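
[YLISP's actual coroutine primitives are not shown in this announcement;
as a generic illustration of the time-sharing idea in the first item,
here is a Common Lisp sketch that fakes coroutines with closures.  All
names are hypothetical:]

;; Each "task" is a closure that performs one small step of work per
;; call and returns NIL once it is finished -- a stand-in for a real
;; coroutine that would suspend itself instead.
(defun make-counting-task (name limit)
  (let ((i 0))
    (lambda ()
      (when (< i limit)
        (incf i)
        (format t "~a: step ~a~%" name i)
        t))))

;; A minimal round-robin scheduler: give every live task one turn per
;; pass until all of them have finished.
(defun round-robin (tasks)
  (loop while tasks
        do (setf tasks (loop for task in tasks
                             when (funcall task)
                               collect task))))

;; Example: interleaves the output of tasks A and B.
;; (round-robin (list (make-counting-task "A" 3)
;;                    (make-counting-task "B" 2)))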

The system is named 'YLISP', and was written for 4.1 when we were
students at the Hebrew University of Jerusalem. Since then, Yitzhak has
left for the US and is currently a Ph.D. student in Prof. Schwartz's
Supercomputer group at Courant. I have continued to develop the system on
my own, and have ported it to UN*X 4.2.

I am looking for a site that is willing to handle the distribution
of this software from the US, by letting one FTP it from their computer.
Alternatively, I am also willing to supply people with magtapes of YLISP,
for the cost of the tape and handling charges (about $70 apiece). If you
are interested, please respond by electronic mail to one of the addresses
given below. I will be ready to start distributing the system in two
weeks' time.

Rusty Red (AKA Jacob Levy)

BITNET: jaakov@wisdom
CSNET and ARPA: jaakov%wisdom.bitnet@wiscvm.ARPA
UUCP: (if all else fails..) ..!decvax!humus!wisdom!jaakov

------------------------------

Date: Mon 12 Nov 84 23:22:28-MST
From: Uday Reddy <U-Reddy@UTAH-20>
Subject: Compiling Logic to Functional Programs

[Forwarded from the Prolog Digest by Laws@SRI-AI.]

The only work I know of on compiling logic to functions:

1. Bellia, Levi, Martelli: On compiling Prolog programs on
demand driven architectures, Logic Programming Workshop,
Albufeira, '83

2. Reddy: Transformation of logic programs to functional
programs, ISLP, Atlantic City, 84.

The two pieces of work are similar. They should be distinguished
from other pieces of work cited by Webster (Lindstrom and Panangaden,
Carlsson, Bruce Smith) which interpret logic in a functional language
rather than compile a logic language into a functional language.

The translation approach has limitations in that it needs mode
annotations (either from the programmer or chosen by the compiler)
and it cannot handle "logical variables". I don't know of any work
that overcomes these limitations. Personally, I believe they cannot
be overcome. One can probably prove this assertion, provided one
can formalize the difference between translation and interpretation.
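
[A minimal illustration of the translation idea, not taken from either
paper above: the standard append relation under a fixed mode annotation,
rendered in Common Lisp.]

;; Prolog:  append([], Ys, Ys).
;;          append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
;;
;; Under the mode append(+,+,-) -- first two arguments given, third
;; computed -- the relation compiles to an ordinary function:
(defun log-append (xs ys)
  (if (null xs)
      ys
      (cons (car xs) (log-append (cdr xs) ys))))
;;
;; Under other modes (say, only the third argument given) the same
;; clauses describe every way of splitting a list -- a one-to-many
;; relation with no single functional value -- and goals whose
;; arguments are only partially instantiated ("logical variables"
;; inside terms) have no direct functional counterpart, which is the
;; limitation noted above.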

Combinator calculus is equivalent to lambda calculus, and there are
translators available from one to the other. So, using combinators
neither simplifies nor complicates the problem.

-- Uday Reddy

------------------------------

Date: 12 Nov 84 08:56:04 PST (Monday)
From: Nick <NNicoll.ES@XEROX.ARPA>
Subject: Re: Badgorythms

What makes the following worse than a normal software badgorythm is that
it is implemented in the language compiler itself...

"Another reason for bloated code: increment a byte in memory can be
done in a single instruction, but they load the byte into a register,
extend it to a word, extend that to a long, add one, and then store the
low 8 bits of the long back into memory."

This gem came from the following msg:

From: <Devon@MIT-MC.ARPA>
Subject: Macintosh language benchmarks
To: homeier@AEROSPACE.ARPA, info-mac@SUMEX-AIM.ARPA

I have been using the [notoriously awful] Whitesmith C compiler
available from "software toolworks" or some similar name. It does work,
and there are header files defining all the data structures, and
interface files so you can make all the ROM calls. I haven't found any
serious bugs, but code bloat is amazing. One reason is that Apple's
linker is a crock that doesn't have the concept of scanning a
library! Instead it blithely loads everything contained in each library
file (which you must specify yourself -- blech!) regardless of whether
it is called for or not. Another reason for bloated code: incrementing a
byte in memory can be done in a single instruction, but they load the
byte into a register, extend it to a word, extend that to a long, add
one, and then store the low 8 bits of the long back into memory.


\\ Nick

------------------------------

Date: Mon 12 Nov 84 14:57:15-PST
From: Jean-Luc Bonnetain <BONNETAIN@SUMEX-AIM.ARPA>
Subject: malgorithms

I just received a msg from Andrei Broder (Broder@decwrl) saying that he and
Jorge Stolfi wrote a paper called "Pessimal Algorithms and Simplexity
Analysis," which is to appear in SIGACT News. Maybe people who expressed
interest in my msg will find this "joke paper" (Andrei's term) worth reading.

jean-luc

------------------------------

Date: 13 Nov 84 08:43 PST
From: Todd.pasa@XEROX.ARPA
Subject: Malgorisms (What malgorithms were before LISP)

Yet another class of malgorithms is generously provided by the
INTERLISP-D implementation of LISP ... algorithms that look like
malgorithms but really aren't. An example:

To strip an M-element list from the top of a larger list X, a seemingly
logical approach would be to take the first element of X, append it to
the second, and so on until the Mth. In INTERLISP-D, a faster way to
accomplish this is to take the difference of the larger list X and its
tail beginning at element M+1. In other words, to "subtract" the list
lacking the elements you want from the full list. The malgorithm code
appears as:

(LDIFF X (NTH X (ADD1 M)))

The "logical" code as:

(FOR I FROM 1 TO M COLLECT (CAR (NTH X I)))


As is shown below, the "malgorithm" is actually a faster way to
solve the problem. Timed executions for 100 sample runs yielded the
following results:


"Malgorithm" "Logical method"

M=4 .00114 sec. .00127
M=30 .00902 .0214
M=100 .0301 .170


The method breaks down when you try to extract sublists from
arbitrary positions inside larger lists ... execution of a "logical"
method similar to the above is MUCH faster. However, I am still amazed
that a malgorithm as seemingly ridiculous as this one can be so
efficient for even a special case.
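
[The timings are less surprising once the access pattern is spelled out.
A rough Common Lisp rendering of the two approaches, with CL's LDIFF and
NTHCDR standing in for the Interlisp calls:]

(defun take-by-ldiff (x m)
  ;; Walks the list once: LDIFF copies the elements of X that precede
  ;; the tail starting at element M+1, so the cost is O(M).
  (ldiff x (nthcdr m x)))

(defun take-naively (x m)
  ;; NTH rescans the list from the head on every iteration, so the
  ;; total cost grows as O(M^2) -- which is why the "logical" method
  ;; falls further behind as M grows.
  (loop for i from 0 below m collect (nth i x)))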


--- JohnnyT

"Things are more like they used to be now than they ever were"

------------------------------

Date: 11 Nov 1984 17:03 EST (Sun)
From: "Daniel S. Weld" <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Inductive Learning

[Forwarded from the MIT bboard by Laws@SRI-AI.]


Inductive Learning: Recent Theoretical and Experimental Results
Ryszard Michalski

Wednesday, November 14, 4:00pm, 8th floor playroom


Inductive learning is presented as a goal-oriented and resource-constrained
process of applying certain rules of inference to the initial observational
statements and hypotheses. This process involves a new type of
inference rule, called "generalization rules."
In contrast to truth-preserving deductive rules, inductive
generalization rules are falsity-preserving.
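
[A toy illustration of one falsity-preserving generalization rule, the
classic "dropping condition" rule.  The Common Lisp rendering and the
apple example are illustrative, not from the talk:]

;; A concept description is a conjunction of (attribute . value)
;; constraints; an instance is an alist of attribute values.
(defun covers-p (description instance)
  (every (lambda (constraint)
           (equal (cdr (assoc (car constraint) instance))
                  (cdr constraint)))
         description))

;; Dropping a conjunct can only enlarge the set of covered instances.
;; So a counterexample to "red & round => apple" (a red, round
;; non-apple) is also a counterexample to the generalized
;; "round => apple": falsity is preserved, truth is not.
(defun drop-condition (description attribute)
  (remove attribute description :key #'car))

;; (drop-condition '((color . red) (shape . round)) 'color)
;;   => ((shape . round))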

Two types of inductive learning are distinguished:
learning from examples (concept acquisition or reconstruction)
and learning by observation (concept formation and descriptive
generalization). Learning from
examples in turn divides into "instance-to-class" and
"part-to-whole" generalization.

We will briefly describe recent experiments with two inductive
learning systems:
1 - for learning from examples via incremental concept refinement, and
2 - for automated formation of classifications via conceptual clustering.

------------------------------

End of AIList Digest
********************
