AIList Digest             Friday, 9 Sep 1983       Volume 1 : Issue 54 

Today's Topics:
Robotics - Walking Robot,
Fifth Generation - Book Review Discussion,
Methodology - Rational Psychology,
Lisp Availability - T,
Prolog - Lisp Based Prolog, Foolog
----------------------------------------------------------------------

Date: Fri 2 Sep 83 19:24:59-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Strong, agile robot

[Reprinted from the SCORE BBoard.]

There is a nice article in the current Robotics Age about an
outfit down in Anaheim (not Disney) that has built a six-legged robot
with its legs spaced radially around a circular core. Each leg has
three motors, and there are enough degrees of freedom in the system to
allow the robot to assume various postures, such as a low, tucked one
for tight spots, a tall one for looking around, and a wide one for
unstable surfaces. As a demonstration, they had the robot climb into
the back of a pickup truck, climb out, and then lift up the truck by
the rear end and move the truck around by walking while lifting the
truck.

It's not a heavy AI effort; this thing is a teleoperator
controlled by somebody with a joystick and some switches (although it
took considerable computer power to make it possible for one joystick
to control 18 motors in such a way that the robot can walk faster than
most people). Still, it begins to look like walking machines are
finally getting to the point where they are good for something. This
thing is about human sized and can lift 900 pounds; few people can do
that.

------------------------------

Date: 3 Sep 83 12:19:49-PDT (Sat)
From: harpo!eagle!mhuxt!mhuxh!mhuxr!mhuxv!akgua!emory!gatech!pwh@Ucb-Vax
Subject: Re: Fifth Generation (Book Review)
Article-I.D.: gatech.846

In response to Richard Treitel's comments about the Fifth Generation
book review recently posted:

*This* turkey, for one, has not heard of the "Alvey report."
Do tell...

I believe that part of your disagreement with the book reviewer stems
from the fact that you seem to be addressing different audiences. He,
a concerned but ignorant lay audience; you, the AI intelligentsia on
the net.

phil hutto


CSNET pwh@gatech
INTERNET pwh.gatech@udel-relay
UUCP ...!{allegra, sb1, ut-ngp, duke!mcnc!msdc}!gatech!pwh


p.s. - Please do elaborate on the Alvey Report. Sounds fascinating.

------------------------------

Date: Tue 6 Sep 83 14:24:28-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Fifth Generation (Book Review)

Phil,

I wish I were in a position to elaborate on the Alvey Report. Here's
all I know, as relayed by a friend of mine who is working back in
Britain:

As a response to either (i) the challenge/promise of the Information
Era or (ii) the announcement of a major Japanese effort to develop AI
systems, Mrs. Thatcher's government commissioned a Commission,
chaired by some guy named Alvey about whom I don't know anything
(though I suspect he is an academic of some stature, else he wouldn't
have been given the job). The mission of this Commission (or it may
have been a Committee) was to produce recommendations for national
policy, to be implemented probably by the Science and Engineering
Research Council. They found that while a few British universities
are doing quite good computer science, only one of them is doing AI
worth mentioning, namely Edinburgh, and even there, not too much of
it. (The reason for this is that an earlier Government commissioned
another Report on AI, which was written by Professor Sir James
Lighthill, an academic of some stature. Unfortunately he is a
mathematician specialising in fluid dynamics -- said to have designed
Concorde's wings, or some such -- and he concluded that the only bit
of decent work that had been done in AI to date was Terry Winograd's
thesis (just out) and that the field showed very little promise. As a
result of the Lighthill Report, AI was virtually a dirty word in
Britain for ten years. Most people still think it means artificial
insemination.) Alvey's group also found, what anyone could have told
the Government, that research on all sorts of advanced science and
technology was disgracefully stunted. So they recommended that a few
hundred million pounds of state and industrial funds be pumped into
research and education in AI, CS, and supporting fields. This
happened about a year ago, and the Gov't basically bought the whole
thing, with the result that certain segments of the academic job
market over there went straight from famine to feast (the reverse
change will occur pretty soon, I doubt not). It kind of remains to be
seen what industry will do, since we don't have a MITI.

I partly accept your criticism of my criticism of that review, but I
also believe that a journalist has an obligation not to publish
falsehoods, even if they are generally believed, and to do more than
re-hash the output of his colleagues into a form consistent with the
demands of the story he is "writing".

- Richard

------------------------------

Date: Sat 3 Sep 83 13:28:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational Psychology

I've just read Jon Doyle's paper "Rational Psychology" in the latest
AI Magazine. It's one of those papers you wish you (I wish) had
written yourself. The paper shows implicitly what is wrong with many
of the arguments in discussions on intelligence and language analysis
in this group. I am posting this as a starting shot in what I would
like to be a rational discussion of methodology. Any takers?

Fernando Pereira

PS. I have been a long-time fan of Truesdell's rational mechanics and
thermodynamics (being a victim of "black art" physics courses). Jon
Doyle's emphasis on Truesdell's methodology is for me particularly
welcome.


[The article in question is rather short, more of an inspirational
pep talk than a guide to the field. Could someone submit one
"rational argument" or other exemplar of the approach? Since I am
not familiar with the texts that Doyle cites, I am unable to discern
what he and Fernando would like us to discuss or how they would have
us go about it. -- KIL]

------------------------------

Date: 2 Sep 1983 11:26-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Availability of T


Yale has not yet decided on the means by which it will distribute
T to for-profit institutions, but it has been negotiating with a
few companies, including Cognitive Systems, Inc. To my knowledge
no final agreements have been signed, so right now, no one can sell
it. ...We do not want a high price tag to inhibit availability.

-- Jonathan Rees, T Project (REES@YALE) 31-Aug-83

About two days before you sent this to the digest, I received a
14-page T licensing agreement from Yale University's "Office of
Cooperative Research".

Prices ranged from $1K for an Apollo to $5K for a VAX 11/780 for
government contractors (e.g. us), with no software support or
technical assistance. The agreement does not actually say that
sources are provided, although that is implied in several places. A
rather murky trade secret clause was included in the contract.

It thus appears that T is already being marketed. These cost figures,
however, are approaching Scribe territory. Considering (a) the cost
of $5K per VAX CPU, (b) the wide variety of alternative LISPs
available for the VAX, and (c) the relatively small base of existing T
(or Scheme) software, perhaps Yale does "want a high price tag to
inhibit availability" after all....
asc

------------------------------

Date: Thursday, 1 September 1983 12:14:59 EDT
From: Brad.Allen@CMU-RI-ISL1
Subject: Lisp Based Prolog

[Reprinted from the Prolog Digest.]

I would like to voice disagreement with Fernando Pereira's implication
that Lisp Based Prologs are good only for pedagogical purposes. The
flipside of efficiency is usability, and until there are Prolog
systems with exploratory programming environments which exhibit the
same features as, say, Interlisp-D or Symbolics machines, there will be
a place for Lisp Based Prologs which can use such features as, e.g.,
bitmap graphics and calls to packages in other languages. Lisp Based
Prologs can fill the void between now and the point when software
accumulation in standard Prolog has caught up to that of Lisp ( if it
ever does ).

------------------------------

Date: Sat 3 Sep 83 10:51:22-PDT
From: Pereira@SRI-AI
Subject: Prolog in Lisp

[Reprinted from the Prolog Digest.]

Relying on ( inferior ) Prologs in Lisp is the best way of not
contributing to Prolog software accumulation. The large number of
tools that have been built at Edinburgh shows the advantages for the
whole Prolog community of sites 100% committed to building everything
in Prolog. By far the best debugging environment for Prolog programs
in use today is the one on the DEC-10/20 system, and that is written
entirely in Prolog. Its operation is very different, and much superior
for Prolog purposes, than all Prolog debuggers built on top of Lisp
debuggers that I have seen to date. Furthermore, integrating things
like screen management into a Prolog environment in a graceful way is
a challenging problem ( think of how long it took until flavors came
up as the way of building the graphics facilities on the MIT Lisp
machines ), which will also advance our understanding of computer
graphics ( I have written a paper on the subject, "Can drawing be
liberated from the von Neumann style?" ).

I am not saying that Prologs in Lisp are not to be used ( I use one
myself on the Symbolics Lisp machines ), but that a large number of
conceptual and language advances will be lost if we don't try to see
environmental tools in the light of logic programming.

-- Fernando Pereira

------------------------------

Date: Mon, 5 Sep 1983 03:39 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: Foolog

[Reprinted from the Prolog Digest.]

In Pereira's introduction to Foolog [a misunderstanding; see the next
article -- KIL] and my toy interpreter he says:

However, such simple interpreters ( even the
Abelson and Sussman one which is far better than
PiL ) are not a sufficient basis for the claim
that "it is easy to extend Lisp to do what Prolog
does." What Prolog "does" is not just to make
certain deductions in a certain order, but also to
make them very fast. Unfortunately, all Prologs in
Lisp I know of fail in this crucial aspect ( by
factors between 30 and 1000 ).

I never claimed that my little interpreter was more than a toy.
Its primary value is pedagogic in that it makes the operational
semantics of the pure part of Prolog clear. Regarding Foolog, I
would defend it in that it is relatively complete:

-- it contains cut, bagof, call, etc., and for I/O and arithmetic his
primitive called "lisp" is adequate. In the introduction he claims
that it runs at 75% of the speed of the DEC-10/20 Prolog interpreter.
If that makes it a toy, then all but 2 or 3 Prolog implementations
are toys.
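
[For illustration, here is a minimal sketch, in Common Lisp, of the
"pure part" of Prolog that such a toy interpreter makes clear: an
association-list substitution, structural unification, and depth-first
resolution over a clause database.  It is not PiL, Foolog, or
LM-Prolog, and every name in it is invented for this example.

    ;; Logic variables are symbols whose names begin with "?".
    (defvar *clauses* '()
      "Clause database: each entry is (head . body), body a list of goals.")

    (defun add-clause (head &rest body)
      (setf *clauses* (append *clauses* (list (cons head body)))))

    (defun var-p (x)
      (and (symbolp x)
           (plusp (length (symbol-name x)))
           (char= (char (symbol-name x) 0) #\?)))

    (defun walk (term subst)
      "Dereference TERM through the substitution SUBST (an alist)."
      (let ((b (and (var-p term) (assoc term subst))))
        (if b (walk (cdr b) subst) term)))

    (defun unify (a b subst)
      "Return an extended substitution, or :FAIL."
      (let ((a (walk a subst)) (b (walk b subst)))
        (cond ((eql a b) subst)
              ((var-p a) (cons (cons a b) subst))
              ((var-p b) (cons (cons b a) subst))
              ((and (consp a) (consp b))
               (let ((s (unify (car a) (car b) subst)))
                 (if (eq s :fail) :fail (unify (cdr a) (cdr b) s))))
              (t :fail))))

    (defun rename-term (term env)
      "Rename a clause's variables apart, using fresh symbols."
      (cond ((var-p term)
             (or (cdr (assoc term (car env)))
                 (let ((fresh (gensym "?")))
                   (push (cons term fresh) (car env))
                   fresh)))
            ((consp term) (cons (rename-term (car term) env)
                                (rename-term (cdr term) env)))
            (t term)))

    (defun solve (goals subst on-solution)
      "Depth-first, left-to-right resolution; calls ON-SOLUTION per proof."
      (if (null goals)
          (funcall on-solution subst)
          (dolist (clause *clauses*)
            (let* ((fresh (rename-term clause (list nil)))
                   (s (unify (first goals) (car fresh) subst)))
              (unless (eq s :fail)
                (solve (append (cdr fresh) (rest goals)) s on-solution))))))

    (defun resolve (term subst)
      "Substitute all bound variables in TERM, for printing answers."
      (let ((term (walk term subst)))
        (if (consp term)
            (cons (resolve (car term) subst) (resolve (cdr term) subst))
            term)))

Defining append/3 and running one query, purely for illustration:

    (add-clause '(append () ?ys ?ys))
    (add-clause '(append (?x . ?xs) ?ys (?x . ?zs)) '(append ?xs ?ys ?zs))

    (solve '((append ?a ?b (1 2 3))) '()
           (lambda (s)
             (format t "~&?a = ~a   ?b = ~a~%"
                     (resolve '?a s) (resolve '?b s))))

prints the four splittings of (1 2 3).  Nothing here addresses cut,
bagof, I/O, or speed -- which is exactly the gap the rest of this
discussion is about.]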

[Comment: I agree with Fernando Pereira and Ken that there are lots
and again lots of horribly slow Prologs floating around. But I do not
think that it is impossible to write a fast one in Lisp, even on a
standard computer. One of the latest versions of the Foolog
interpreters is actually slightly faster than Dec-10 Prolog when
measuring LIPS. The Foolog compiler I am working on compiled
naive-reverse to half the speed of compiled Dec-10 Prolog ( including
mode declarations ). The compiler opencodes unification, optimizes
tail recursion and uses determinism, and the code fits in about three
pages ( all of it is in Prolog, of course ). -- Martin Nilsson]

I tend to agree that too many claims are made for "one day wonders".
Just because I can implement most of Prolog in one day in Lisp
doesn't mean that the implementation is any good. I know because I
started almost two years ago with a very tiny implementation of
Prolog in Lisp. As I started to use it for serious applications it
grew to the point where today it's up to hundreds of pages of code (
the entire source code for the system comes to 230 Tops20 pages ).
The Prolog runs on Lisp Machines ( so we call it LM-Prolog ). Mats
Carlsson here in Uppsala wrote a compiler for it and it is a serious
implementation. It runs naive reverse of a 30-element list on a CADR
in less than 80 milliseconds (about 6250 LIPS). Lambdas and 3600s
typically run from 2 to 5 times faster than CADRs, so you can guess
how fast it'll run.
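
[For readers unfamiliar with the benchmark: LIPS means logical
inferences per second, and naive reverse of a 30-element list is
conventionally counted as 496 inferences, one per nrev or append
call.  A quick self-consistency check of the CADR figure, in Lisp
(the function name is made up for this note):

    (defun nrev-inferences (n)
      (+ (1+ n)                 ; nrev is called on lengths n, n-1, ..., 0
         (/ (* n (1+ n)) 2)))   ; append calls: 1 + 2 + ... + n

    (nrev-inferences 30)        ; => 496
    (/ 496 8/100)               ; 496 inferences in 0.080 s => 6200 LIPS

which is indeed "about 6250 LIPS".]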

Not only is LM-Prolog fast but it incorporates many important
innovations. It exploits the very rich programming environment of
Lisp Machines. The following is a short list of its features:

User Extensible Interpreter
    Extensible unification for implementing, e.g., parallelism
    and constraints

Optimizing Compiler
    Open compilation
    Tail recursion removal and automatic detection of determinacy
    Compiled unification with microcoded runtime support
    Efficient bi-directional interface to Lisp

Database Features
    User-controlled indexing
    Multiple databases (worlds)

Control Features
    Efficient conditionals
    Demand-driven computation of sets and bags

Access To Lisp Machine Features
    Full programming environment: Zwei editor, menus, windows,
    processes, networks, arithmetic ( arbitrary precision, floating,
    rational and complex numbers ), strings, arrays, I/O streams

Language Features
    Optional occur check
    Handling of cyclic structures
    Arbitrary arity

Compatibility Package
    Automatic translation from DEC-10 Prolog to LM-Prolog

Performance
    Compiled code: up to 6250 LIPS on a CADR
    Interpreted code: up to 500 LIPS

Availability
    LM-Prolog currently runs on LMI CADRs and Symbolics LM-2s.
    Soon to run on Lambdas.

Commercially available soon. For more information contact
Kenneth M. Kahn or Mats Carlsson.

Inquiries can be directed to:

KEN@MIT-OZ or

UPMAIL P. O. Box 2059
S-75002
Uppsala, Sweden

Phone +46-18-111925

------------------------------

Date: Tue 6 Sep 83 15:22:25-PDT
From: Pereira@SRI-AI
Subject: Misunderstanding

[Reprinted from the Prolog Digest.]

I'm sorry that my first note on Prologs in Lisp was construed as a
comment on Foolog, which appeared in the same Digest. In fact, my
note was sent to the digest BEFORE I knew Ken was submitting Foolog.
Therefore, it was not a comment on Foolog. As to LM-Prolog, I have a
few comments about its speed:

1. It depends essentially on the use of Lisp machine subprimitives and
a microcoded unification, which are beyond Lisp the language and the
Lisp environment in all but the MIT Lisp machines. If LM-Prolog can
be considered as "a Prolog in Lisp," then DEC-10/20 Prolog is a Prolog
in Prolog ...

2. To achieve that speed in determinate computation requires mapping
Prolog procedure calls into Lisp function calls, which leaves
backtracking in the lurch. The version of LM-Prolog I know of used
stack group switches for backtracking, which is orders of magnitude
slower than backtracking on the DEC-20 system.

3. Code compactness is sacrificed by compiling from Prolog into Lisp
with open-coded unification ( see the sketch below ). This is
important because it worsens the paging behavior of large programs.
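
[To make point 3 concrete: "open-coded unification" means that each
clause head compiles into explicit dereferences, tests and binds
instead of one call to a general unifier.  A hypothetical head
p(a, ?x) might come out roughly as below -- this is invented for
illustration, reusing WALK and VAR-P from the earlier sketch, and is
not actual LM-Prolog or Foolog compiler output:

    ;; Hypothetical open-coded head for the clause  p(a, ?x) :- ...
    (defun p/2-head-1 (arg1 arg2 subst)
      (declare (ignore arg2))          ; ?x simply names ARG2 in the body
      (let ((a1 (walk arg1 subst)))
        (cond ((eq a1 'a) subst)                       ; already the atom A
              ((var-p a1) (cons (cons a1 'a) subst))   ; unbound: bind to A
              (t :fail))))                             ; otherwise, fail
    ;; ... the compiled clause body would follow here, goal by goal.

Each head thus becomes a handful of Lisp forms rather than one call,
which is where both the speed and the bulk come from.]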

There are a lot of other issues in estimating the "real" efficiency of
Prolog systems, such as GC requirements and exact TRO discipline. For
example, using CONS space for runtime Prolog data structures is a
common technique that seems adequate when testing with naive reverse
of a 30 long list, but appears hopeless for programs that build
structure and backtrack a lot, because CONS space is not stack
allocated ( unless you use certain nonportable tricks, and even
then... ), and therefore is not reclaimed on backtracking ( one might
argue that Lisp programs for the same task have the same problem, but
efficient backtracking is precisely one of the major advantages of
good Prolog implementations ).
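
[One common answer to the reclamation problem described above is
destructive binding plus a trail: each binding is recorded, and
failure simply unwinds the trail, so backtracking reclaims storage
without waiting for the garbage collector.  A generic sketch follows;
the names LVAR, BIND, UNWIND-TRAIL and TRY-NEXT-CLAUSE are invented,
and this describes none of the systems above:

    (defstruct lvar (value 'unbound))   ; a mutable logic-variable cell

    (defvar *trail* '()
      "Stack of variables bound since the start of the computation.")

    (defun bind (var value)
      "Destructively bind VAR, remembering it so it can be undone."
      (setf (lvar-value var) value)
      (push var *trail*))

    (defun unwind-trail (mark)
      "Undo every binding made since MARK (an earlier value of *TRAIL*)."
      (loop until (eq *trail* mark)
            do (setf (lvar-value (pop *trail*)) 'unbound)))

    ;; At a choice point (TRY-NEXT-CLAUSE is hypothetical):
    ;;   (let ((mark *trail*))
    ;;     (or (try-next-clause)
    ;;         (progn (unwind-trail mark)   ; undo bindings on failure
    ;;                nil)))

The point is only that undoing bindings on failure is then cheap
pointer-popping work, unlike alist entries consed in the heap, which
must wait for the collector.]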

The current Lisp machines have exciting environment tools from which
Prolog users would like to benefit. I think that building Prolog
systems in Lisp will hit artificial performance and language barriers
well before the actual limits of the hardware employed are reached.
The approach I favor is to take the latest developments in Prolog
implementation and use them to build Prolog systems that coexist with
Lisp on those machines, but use all the hardware resources. I think
this is possible with a bit of cooperation from manufacturers, and I
have reasons to hope this will happen soon, and produce Prolog systems
with a performance far superior to DEC-20 Prolog.

Ken's approach may produce a tolerable system in the short term, but I
don't think it can ever reach the performance and functionality which
I think the new machines can deliver. Furthermore, there are big
differences between the requirements of experimental systems, with all
sorts of new goodies, and day-to-day systems that do the standard
things, but just much better. Ken's approach risks producing a system
that falls between these (conflicting) goals, leading to a much larger
implementation effort than is needed just for experimenting with
language extensions ( most of the time better done in Prolog ) or just
for a practical system.

-- Fernando Pereira

PS: For all it is worth, the source of DEC-20 Prolog is 177 pages of
Prolog and 139 of Macro-10 (at 1 instruction per line...). The system
comprises a full compiler, interpreter, debugger and run time system,
not using anything external besides operating system I/O calls. We
estimate it incorporates between 5 and 6 man years of effort.

According to Ken, LM-Prolog is 230 pages of Lisp and Prolog ...

------------------------------

End of AIList Digest
********************
