AIList Digest Saturday, 11 Feb 1984 Volume 2 : Issue 14
Today's Topics:
Requests - SHRDLU & Spencer-Brown & Programming Tests & UNITS,
Replies - R1/XCON & AI Text & Lisp Machine Comparisons,
Seminars - Symbolic Supercomputer & Expert Systems & Multiagent Planning
----------------------------------------------------------------------
Date: Sun, 29 Jan 84 16:30:36 PST
From: Rutenberg.pa@PARC-MAXC.ARPA
Reply-to: Rutenberg.pa@PARC-MAXC.ARPA
Subject: does anyone have SHRDLU?
I'm looking for a copy of SHRDLU, ideally in
machine readable form although a listing
would also be fine.
If you have a copy or know of somebody
who does, please send me a message!
Thanks,
Mike
------------------------------
Date: Mon, 6 Feb 84 14:48:37 PST
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Re: AIList Digest V2 #12
I would dearly like to get in contact with G. Spencer-Brown. Can anyone
give me any kind of lead? I have tried his publisher, Bantam, and got
no results.
Thanks.
--Charlie
------------------------------
Date: Wed, 8 Feb 84 19:26:38 CST
From: Stan Barber <sob@rice>
Subject: Testing Programming Aptitude or Competence
I am interested in information on the following tests that have been or are
currently administered to determine Programming Aptitude or Competence.
1. Aptitude Assessment Battery:Programming (AABP) created by Jack M. Wolfe
and made available to employers only from Programming Specialists, Inc.
Brooklyn NY.
2. Programmer Aptitude/Compentence Test System sold by Haverly Systems,
Inc. (Introduced in 1970)
3. Computer Programmer Aptitude Battery by SRA (Science Research Associates),
Inc. (Examined by F. L. Schmidt et al. in Journal of Applied Psychology,
Volume 65 [1980], pp. 643-661)
4. CLEP Exam on Computers and Data Processing. The College Board and the
Educational Testing Service.
5. Graduate Record Exam Advanced Test in Computer Science by the Educational
Testing Service.
Please send the answers to the following questions if you have taken or
had experience with any of these tests:
1. How many scores were reported, and what titles were used, for the version
of the exam that you took?
2. Did you feel the test actually measured your ability to learn to
program or your current programming competence (that is, did you feel it
asked relevant questions)?
3. What are your general impressions about testing, and more specifically
about testing special abilities or skills (like programming, writing, etc.)?
I will package up the results and send them to Human-nets.
My thanks.
Stan Barber
Department of Psychology
Rice University
Houston TX 77251
sob@rice (arpanet, csnet)
sob.rice@rand-relay (broken arpa mailers)
...!{parsec,lbl-csam}!rice!sob (uucp)
(713) 660-9252 (bulletin board)
------------------------------
Date: 6 Feb 84 8:10:41-PST (Mon)
From: decvax!linus!vaxine!chb @ Ucb-Vax
Subject: UNITS request: Second Posting
Article-I.D.: vaxine.182
Good morning!
I am looking for a pointer to someone (or something) who is knowledgeable
about the features and the workings of the UNITS package, developed at
Stanford HPP. If you know something, or someone, and could drop me a note
(through mail) I would greatly appreciate it.
Thanks in advance.
Charlie Berg
...allegra!linus!vaxine!chb
------------------------------
Date: 5 Feb 84 20:28:09-PST (Sun)
From: hplabs!hpda!fortune!amd70!decwrl!daemon @ Ucb-Vax
Subject: DEC's expert system for configuring VAXen
Article-I.D.: decwrl.5447
[This is in response to an unpublished request about R1. -- KIL]
Just for the record - we changed the name from "R1" to "XCON" about a year
ago I think. It's a very useful system and is part of a family of expert
systems which assist us in the operation of various corporate divisions
(sales, service, manufacturing, installation).
Mark Palmer
Digital
(UUCP) {decvax, ucbvax, allegra}!decwrl!rhea!nacho!mpalmer
(ARPA) decwrl!rhea!nacho!mpalmer@Berkeley
decwrl!rhea!nacho!mpalmer@SU-Shasta
------------------------------
Date: 6 Feb 84 7:15:33-PST (Mon)
From: harpo!utah-cs!hansen @ Ucb-Vax
Subject: Re: AI made easy??
Article-I.D.: utah-cs.2473
I'd try Artificial Intelligence by Elaine Rich (McGraw-Hill). It's easy
reading, not too technical but gives a good overview to the novice.
Chuck Hansen {...!utah-cs}
------------------------------
Date: 5 Feb 84 8:48:26-PST (Sun)
From: hplabs!sdcrdcf!darrelj @ Ucb-Vax
Subject: Re: Lisp Machines
Article-I.D.: sdcrdcf.813
There really are no such things as reasonable benchmarks for systems as
different as the various Lisp machines and VAXen. Each machine has different
strengths and weaknesses. Here is a rough ranking of machines:
VAX 780 running Fortran/C standalone
Dorado (5 to 10X Dolphin)
LMI Lambda, Symbolics 3600, KL-10 MacLisp (2 to 3X Dolphin)
Dolphin, Dandelion, 780 VAX Interlisp, KL-10 Interlisp
Relative speeds are very rough, and dependent on application.
Notes: The Dandelion and Dolphin have 16-bit ALUs; as a result most
arithmetic is pretty slow (and things like transcendental functions are even
worse, because there's no way to do floating-point arithmetic without boxing
each intermediate result). There is quite a wide range of I/O bandwidth among
these machines (up to 530 Mbits/sec on a Dorado, 130 Mbits/sec on a Dolphin).
Strong points of various systems:
Xerox: a family of machines fully compatible at the core-image level,
spanning a wide range of price and performance (as low as $26k for a minimum
Dandelion, up to $150k for a heavily expanded Dorado). Further, with the
exception of some of the networking and all of the graphics, it is very
highly compatible with both Interlisp-10 and Interlisp-VAX (it's reasonable
to have a single set of sources with just a bit of conditional compilation).
Because they use a relatively old dialect, they also have a large and
well-debugged manual.
LMI and Symbolics (these are really fairly similar, as both are licensed from
the MIT Lisp machine work, and the principals are rival factions of the MIT
group that developed it): these have fairly large microcode stores, and as a
result more things are fast (e.g., many of the graphics primitives are
microcoded), so these are probably the machines for moby amounts of image
processing and graphics. There are also tools for compiling directly to
microcode for extra speed. These machines also contain a secondary bus such
as a Unibus or Multibus, so there is considerable flexibility in attaching
exotic hardware.
Weak points: Xerox machines have a proprietary bus, so there are very few
options (the philosophy is to hook anything else to the Ethernet). The MIT
machines speak a new dialect of Lisp that is only partially compatible with
MACLISP (though this did allow adding many nice features), and their cost is
too high to give everyone a machine.
The news item to which this is a response also asked about color displays.
Dolphin: 480x640x4 bits. The 4 bits go thru a color map to 24 bits.
Dorado: 480x640x(4 or 8 or 24 bits). The 4 or 8 bits go thru a color map to
24 bits. Lisp software does not currently support the 24 bit mode.
3600: they have one or two (the LM-2 had 512x512x?); around 1Kx1Kx(8, 16,
or 24) with a color map to 30 bits.
Dandelion: probably too little I/O bandwidth
Lambda: current brochure makes passing mention of optional standard and
high-res color displays.
Disclaimer: I probably have some bias toward Xerox, as SDC has several of
their machines (in part because we already had an application in Interlisp).
Darrel J. Van Buer, PhD
System Development Corp.
2500 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,sdccsu3,trw-unix}!sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA
------------------------------
Date: 6 Feb 84 16:40 PDT
From: Kandt.pasa@PARC-MAXC.ARPA
Subject: Lisp Machines
I have seen several benchmarks as a former Symbolics and current Xerox
employee. These benchmarks have typically compared the LM-2 with the
1100; they have even included actual or estimated(?) 3600, 1108, or 1132
performances. These benchmarks, however, have seldom been very
informative, because neither the actual test code nor a detailed
discussion of the implementation is provided. For example, is the test
on the Symbolics machine coded in Zetalisp or with the Interlisp
compatibility package? Or, in Interlisp, were the fast functions used
(FRPLACA vs. RPLACA)? (Zetalisp's RPLACA is equivalent to Interlisp's
FRPLACA, so if this transformation was not performed the benchmark would
favor the Symbolics machine.) What about efficiency issues such as block
compiling, compiler optimizers, or explicitly declaring variables? There
are also many other issues, such as what happens when the data set gets
very large in a real application instead of a toy benchmark, or, in
Zetalisp, whether you should turn the garbage collector on (it's not
normally on) and, when you do, what impact it has on performance. In
summary, be cautious about claims without thorough supporting evidence.
Also realize that each machine has its own strengths and weaknesses;
there is no definitive answer. Caveat emptor!
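As a minimal sketch of where the FRPLACA point bites (the loop and the
name SMASH are invented here for illustration, not taken from any actual
benchmark):

    ;; Destructively overwrite the CAR of every cell in a list.
    ;; Zetalisp's RPLACA does no type checking; Interlisp's RPLACA
    ;; does, and FRPLACA is the unchecked "fast" equivalent.
    (DEFUN SMASH (L)
      (DO ((TAIL L (CDR TAIL)))
          ((NULL TAIL) L)
        (RPLACA TAIL 0)))      ; under Interlisp, FRPLACA belongs here

Timing such an inner loop with the checked function on one machine and
the unchecked one on the other is exactly the sort of detail these
benchmark reports rarely state.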
------------------------------
Date: Sat, 4 Feb 84 19:24 EST
From: Thomas Knight <tk@MIT-MC.ARPA>
Subject: Concurrent Symbolic Supercomputer
[Forwarded by SASW@MIT-MC]
FAIM-1
Fairchild AI Machine #1
An Ultra-Concurrent Symbolic Supercomputer
by
Dr. A. L. Davis
Fairchild Laboratory for Artificial Intelligence Research
Friday, February 10, 1984
Presently, AI researchers are hampered in the development of large-scale
symbolic applications, such as expert systems, by the lack of machine
horsepower to execute application programs rapidly enough to make the
applications viable. The intent of the FAIM-1 machine is to provide 3 to 4
orders of magnitude performance improvement over that currently available on
today's large mainframe machines. The main source of the performance increase
is the exploitation of concurrency at the program, system, and architectural
levels.
In addition to the normal ancillary support activities, the work is being
carried on in 3 areas:
1. Language Design - a frame-based, object-oriented language is being
   designed which allows the programmer to express highly concurrent
   symbolic algorithms. The mechanism permits both logical and
   procedural programming styles within a unified message-based
   semantics. In addition, the programmer may provide strategic
   information which aids the system in managing the concurrency
   structure on the physical resource components of the machine.
2. Machine Architecture - the machine derives its power from the
   homogeneous replication of a medium-grain processor element. The
   element consists of a processor, a message delivery subsystem, and a
   parallel pattern-based memory subsystem known as the CxAM (Context
   Addressable Memory). Two variants of the CxAM design are being done
   at this time, targeted for fabrication on a sub-2-micron CMOS line.
   The connection topology for the replicated elements is a 3-axis,
   single-twist Hex plane, which has the advantages of planar wiring,
   easy extensibility, and variable off-surface bandwidth, and permits
   a variety of fault-tolerant designs. The Hex plane topology also
   permits nice hierarchical process growth without creating excess
   communication congestion, which would cause false synchronization in
   otherwise concurrent activities. In addition, the machine is being
   designed in the hope of an eventual wafer-scale integrated
   implementation.
3. Resource Allocation - with any concurrent system which does not
   require machine-dependent programming styles, there is a generic
   problem in efficiently mapping the concurrent activities extant in
   the program onto the multi-resource ensemble. The strategy employed
   in the FAIM-1 system is to analyze the static structure of the
   source program, transform it into a graph, and then, via a series of
   function-preserving graph transforms, produce a loadable version of
   the program which attempts to minimize communication cost while
   preserving the inherent concurrency structure (a toy sketch of this
   trade-off follows the list). A certain level of dynamic compensation
   is guided by programmer-supplied strategy information.
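As a toy sketch only of the communication/placement trade-off (a crude
greedy assignment in modern Common Lisp, invented for illustration; the
FAIM-1 allocator itself works by graph transforms, as described above):

    ;; Tasks are symbols; COMMS is a list of (A B W) edges giving the
    ;; communication weight W between tasks A and B.  Each of the N
    ;; processors holds at most CAP tasks (a crude stand-in for
    ;; preserving concurrency); within that limit, each task goes to
    ;; the processor that adds the least cross-processor traffic.
    (DEFUN PLACE-TASKS (TASKS COMMS N)
      (LET ((HOME (MAKE-HASH-TABLE))
            (LOADS (MAKE-ARRAY N :INITIAL-ELEMENT 0))
            (CAP (CEILING (LENGTH TASKS) N)))
        (DOLIST (TASK TASKS HOME)
          (LET ((BEST NIL) (BEST-COST NIL))
            (DOTIMES (P N)
              (WHEN (< (AREF LOADS P) CAP)
                (LET ((COST 0))
                  (DOLIST (EDGE COMMS)
                    (DESTRUCTURING-BIND (A B W) EDGE
                      (LET ((OTHER (COND ((EQ A TASK) B)
                                         ((EQ B TASK) A))))
                        ;; Charge for already-placed neighbors that
                        ;; would land on a different processor.
                        (WHEN (AND OTHER
                                   (GETHASH OTHER HOME)
                                   (/= (GETHASH OTHER HOME) P))
                          (INCF COST W)))))
                  (WHEN (OR (NULL BEST) (< COST BEST-COST))
                    (SETQ BEST P BEST-COST COST)))))
            (SETF (GETHASH TASK HOME) BEST)
            (INCF (AREF LOADS BEST))))))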
The talk will present an overview of the work we have done in these areas.
Host: Prof. Thomas Knight
------------------------------
Date: 8 Feb 84 15:59:49 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: III Seminar on Expert Systems this coming Tuesday...
[Reprinted from the Rutgers bboard.]
I I I SEMINAR
Title: Automation of Modeling, Simulation and Experimental
Design - An Expert System in Enzyme Kinetics
Speaker: Von-Wun Soo
Date: Tuesday, February 14, 1984, 1:30-2:30 PM
Location: Hill Center, Seventh floor lounge
Von-Wun Soo, a Ph.D. student in our department, will give an informal talk on
the thesis research he is proposing. This is his abstract:
We are proposing to develop a general knowledge engineering tool to
aid biomedical researchers in developing biological models and running
simulation experiments. Without such powerful tools, these tasks can be
tedious and costly. Our aim is to integrate the techniques used in
modeling, simulation, optimization, and experimental design within an
expert system approach. In addition, we propose to carry out experiments
on the processes of theory formation used by scientists.
Enzyme kinetics is the domain where we are concentrating our efforts.
However, our research goal is not restricted to this particular domain.
We will attempt to demonstrate with this special case how several new
ideas in expert problem solving, including automation of theory
formation, scientific discovery, experimental design, and knowledge
acquisition, can be further developed.
Four modules have been designed in parallel: PROKINAL, EPX, CED, and DISC.
PROKINAL is a model generator which simulates the qualitative reasoning
of the kineticists who conceptualize and postulate a reaction mechanism
for a set of experimental data. By using a general procedure known as
the King-Altman procedure to convert a mechanism topology into a rate
law function, and symbolic manipulation techniques to factor rate
constant terms into kinetic constant terms, PROKINAL yields a
corresponding FORTRAN function which computes the reaction rate.
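(As a concrete textbook instance, not taken from the talk: for the
minimal mechanism E + S <=> ES -> E + P, the King-Altman procedure
yields the familiar rate law

    v = Vmax [S] / (Km + [S])

where the kinetic constants Vmax and Km are factored combinations of
the underlying rate constants.)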
EPX is a model simulation aid, designed by combining EXPERT and
PENNZYME, which is intended to guide the novice user in using simulation
tools and interpreting the results. It will take the data and the
candidate model generated by PROKINAL and estimate the parameters by a
nonlinear least-squares fit.
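(Concretely, and again only as a standard illustration: given observed
rates v(i) at substrate concentrations S(i), such a fit chooses Vmax and
Km to minimize the sum over i of [v(i) - Vmax S(i)/(Km + S(i))]^2.)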
CED is an experimental design consultant which uses EXPERT to guide the
computation of experimental conditions. Knowledge of optimal design from
statistical analysis has been incorporated so that EXPERT can give
advice on the appropriate measurements and reduce the cost of
experimentation.
DISC is a discovery module which is now at the stage of theoretical
development. We wish to explore and simulate the behavior of scientific
discovery in enzyme kinetics research and use the results in automating
theory formation tasks.
------------------------------
Date: 09 Feb 84 2146 PST
From: Rod Brooks <ROD@SU-AI>
Subject: CSD Colloquium
[Reprinted from the Stanford bboard.]
CSD Colloquium
Tuesday 14th, 4:30pm Terman Aud
Michael P. Georgeff, SRI International
"Synthesizing Plans for Co-operating Agents"
Intelligent agents need to be able to plan their activities so that
they can assist one another with some tasks and avoid harmful
interactions on others. In most cases, this is best achieved by
communication between agents at execution time. This talk will discuss
a method for synthesizing a synchronized multi-agent plan to achieve
such cooperation between agents. The idea is first to form
independent plans for each individual agent, and then to insert
communication acts into these plans to synchronize the activities of
the agents. Conditions for freedom from interference and cooperative
behaviour are established. An efficient method of interaction and
safety analysis is then developed and used to identify critical
regions and points of synchronization in the plans. Finally,
communication primitives are inserted into the plans and a supervisor
process created to handle synchronization.
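As a toy sketch of the final insertion step only (the notation is
invented here for illustration and is not Georgeff's formalism, whose
interaction analysis is far more selective): given a linear plan in
which each step that uses a shared resource is marked (CRITICAL resource
action), one can bracket such steps with WAIT/SIGNAL communication acts
so that no two agents occupy the same critical region at once:

    ;; A plan is a list of steps.  Steps marked (CRITICAL res act)
    ;; are bracketed by (WAIT res) and (SIGNAL res); others pass
    ;; through unchanged.
    (DEFUN SYNCHRONIZE (PLAN)
      (MAPCAN (LAMBDA (STEP)
                (IF (AND (CONSP STEP) (EQ (CAR STEP) 'CRITICAL))
                    (LIST (LIST 'WAIT (SECOND STEP))
                          (THIRD STEP)
                          (LIST 'SIGNAL (SECOND STEP)))
                    (LIST STEP)))
              PLAN))

    ;; Example:
    ;;   (SYNCHRONIZE '((MOVE A) (CRITICAL DRILL (USE-DRILL A)) (MOVE B)))
    ;;   => ((MOVE A) (WAIT DRILL) (USE-DRILL A) (SIGNAL DRILL) (MOVE B))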
------------------------------
End of AIList Digest
********************