AIList Digest Volume 1 Issue 032


AIList Digest            Thursday, 4 Aug 1983      Volume 1 : Issue 32 

Today's Topics:
Graph Theory - Finding Clique Covers,
Knowledge Representation - Textnet,
Fifth Generation & Misc. - Opinion,
Lisp - Revised Maclisp Manual & Review of IQLisp
----------------------------------------------------------------------

Date: 2 Aug 83 11:14:51 EDT (Tue)
From: Dana S. Nau <dsn%umcp-cs@UDel-Relay>
Subject: A graph theory problem

The following graph theory problem has arisen in connection with some
AI research on computer-aided design and manufacturing:

Let H be a graph containing at least 3 vertices and having no
cycles of length 4. Find a smallest clique cover for H.

If there were no restrictions on the nature of H, the problem would be
NP-hard, but given the restrictions, it's unclear what its complexity
is. A couple of us here at Maryland have been puzzling over the
problem for a week or so, and haven't been able to reduce any known
NP-hard problem to it. However, the fastest procedure we have found
to solve the problem takes exponential time in the worst case.
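For concreteness, here is a minimal brute-force sketch of the clique-cover problem in Python (my own illustration, not the Maryland procedure); like the procedure mentioned above, it takes exponential time in the worst case:

```python
from itertools import combinations

def is_clique(adj, verts):
    """True if every pair in `verts` is joined by an edge in `adj`."""
    return all(v in adj[u] for u, v in combinations(verts, 2))

def min_clique_cover(adj):
    """Smallest partition of the vertices of `adj` (a dict mapping each
    vertex to its set of neighbors) into cliques. Exhaustive search with
    pruning: exponential time, usable only on tiny graphs."""
    verts = sorted(adj)
    best = [[v] for v in verts]          # trivial cover: all singletons

    def search(remaining, cover):
        nonlocal best
        if len(cover) >= len(best):      # prune: cannot improve on best
            return
        if not remaining:
            best = [list(c) for c in cover]
            return
        v, rest = remaining[0], remaining[1:]
        # The clique containing v is v plus some subset of the rest.
        for r in range(len(rest), -1, -1):
            for extra in combinations(rest, r):
                group = (v,) + extra
                if is_clique(adj, group):
                    left = [u for u in rest if u not in extra]
                    search(left, cover + [group])

    search(verts, [])
    return best
```

On a triangle this returns a single clique; on a path a-b-c it returns two (for example {a,b} and {c}). Note the code does not exploit the no-4-cycles restriction, which is exactly where a better algorithm might gain.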

Does anyone know anything about the computational complexity of this
problem, or about possible procedures for solving it?

------------------------------

Date: 3 Aug 83 20:50:46 EDT (Wed)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: Textnet

[Adapted from Human-Nets. The organization and indexing
of knowledge are topics that should be of interest to the AI
community. -- KIL]

Regarding the recent worldnet discussion, I thought I'd briefly
describe my research and suggest how it might apply: My thesis work
has been in the area of advanced text handlers for the online
scientific community. My system is called "Textnet" and shares much
with both NLS/Augment and Hypertext. It combines a hierarchical
component (like NLS, though we allow and encourage multiple
hierarchies for the same text) with the arbitrary linked network
strategy of Hypertext. The Textnet data structure resembles a
semantic network in that links are typed and are valid manipulable
objects themselves, as are "chunks" (nodes with associated text) and
"tocs" (nodes capturing hierarchical info).

I believe that a Textnet approach is the most flexible for a national
network. In a distributed version of Textnet (distributing
Hypertext/Xanadu has also been proposed), users create not only new
papers and critiques of existing ones, but also link together existing
text (i.e., reindexing information), and build alternate
organizations.

There can be no mad dictator in such an information network. Each
user organizes the world of scientific knowledge as he/she desires.
Of course, the system can offer helpful suggestions, notifying a user
about new information needing to be integrated, etc. But in this
approach, the user plays the active role. Rather than passively
accepting information in whatever guise worldnet decides to promote,
each must take an active hand in monitoring that part of the network
of interest, and designing personalized search strategies for the
rest. (For example, I might decree that any information stemming from
a set of journals I deem absurd, shall be ignored.) After all, any
truly democratic system should and does require a little work from
each member.

------------------------------

Date: 3 Aug 1983 0727-PDT
From: FC01@USC-ECL
Subject: Re: Fifth Generation

Several good points were made about the Japanese capabilities and
plans for 5th generation computers. I certainly didn't intend to say
that they weren't capable of building such machines, only that the
U.S. could easily beat them to it if the effort were deemed
worthwhile. I have to agree that the nature of systolic arrays is
quite different from the necessary architecture for inference engines,
but nevertheless for vision and speech applications, these arrays are
quite clearly superior. I know of no other nation with a data flow
machine in operation (although the Japanese are most certainly working
on it). Virtually every theorem proving system in existence was
written in the U.S. All of this information was freely (and rightly in
my opinion) disseminated to the rest of the world. If we continue to
do the research and seek immediate profits at the expense of long term
development, there is no doubt in my mind that the Japanese will beat
us there. If on the other hand, we use our extreme expertise to make
our development programs the best they can be, and don't make the same
mistake we made with robotics in the 70s, I feel we can build better
machines sooner.

Lisp translators from Interlisp to other Lisps seem very
interesting to me. Perhaps someone could send me a pointer to an
ARPA-net mailing address of the creator/maintainer of these programs.
To my knowledge, none operates w/out human assistance, but I could be
wrong. [Check with Hanson@SRI-IU for Rodney Brooks' Maclisp-to-Franz
macro package. It does not cover all features in Maclisp. -- KIL]

As to natural language translation using computers, it has
been tried for technical translation and has been quite successful as a
dictionary. As of 5 years ago, there were no real translators beyond
this for natural language. Perhaps this has changed drastically. It
is my guess that without a system capable of learning, true
translation will never be done. It is simply too much to expect that a
human expert would be able to embody all of the knowledge of a
language into a program. Perhaps 90% translation could be achieved in
a few years, and 99% could probably be here w/in 10 years (between
similar languages).

Speech recognition can be quite effective for relatively small
vocabularies by a given speaker in a particular language.
Understanding speech is a considerably slower process, but has the
advantage of trying to make sense of the sounds. It is probably not
realistic to expect that general purpose speech understanding systems in
multiple languages with multiple speakers using large vocabularies
will be operational at real time performance in the next 10 years.

Vision systems have been researched considerably for limited
robotics applications. Context boundedness seems to have a great
effect on the sort of IO that humans do. It is certainly not clear
that real time vision systems capable of understanding large varieties
of environments will be operational w/in the next 10 years.

These problems are not simply solved by having very large
quantities of processing power! If they were, 5th generation computers
would not be such a risk. Even if the goals are not met, the advances
due to a large R+D program such as ICOT's will certainly have many
technological spinoffs with a widespread effect on the world
marketplace. It has been a longstanding problem with AI research that
people who demonstrate its results and people who report on these
demonstrations both stress the possibilities for the future rather
than the realities of today. In many cases, the misconceptions spread
through the scientific community as well as the general public. Even
many computer science 'experts' that I've met have vast misconceptions
about what the current systems can in fact do, have in fact done, and
can be easily expanded to do. In many cases, NP-complete problems have
been approached through heuristic means. This certainly works in many
cases, but as the sizes of problems increase, it is not clear that
these heuristics will apply as handily. NP completeness cannot be
gotten around in general by building bigger or faster computers.
Computer learning has only been approached by a few researchers, and
few people would be considered 'intelligent' if they couldn't learn
from their mistakes.

It doesn't bother me to see Kirk destroy computers with his
illogical ways. I've personally blown away many operating systems
accidentally with my illogical ways, and don't expect that anyone will
ever be able to build a 'perfect' machine. It does bother me when
people look at that as more than fantasy and claim it as scientific
evidence. Similarly, the 'robots' that are run by remote control (rather
like a radio-controlled airplane) sometimes upset me when they fool
people into thinking they are autonomous intellects.

Yet another flaming controversy
starter by
Fred

------------------------------

Date: 3 August 1983 15:04 EDT
From: Kent M. Pitman <KMP @ MIT-MC>
Subject: MIT-LCS TR-295: The Revised Maclisp Manual

They said it would never happen, but look for yourself...

The Revised Maclisp Manual
by Kent Pitman

Abstract

Maclisp is a dialect of Lisp developed at M.I.T.'s Project MAC (now
the MIT Laboratory for Computer Science) and the MIT Artificial
Intelligence Laboratory for use in artificial intelligence research
and related fields. Maclisp is descended from Lisp 1.5, and many
recent important dialects (for example Lisp Machine Lisp and NIL) have
evolved from Maclisp.

David Moon's original document on Maclisp, The Maclisp Reference
Manual (alias the Moonual) provided in-depth coverage of a number of
areas of the Maclisp world. Some parts of that document, however, were
never completed (most notably a description of Maclisp's I/O system);
other parts are no longer accurate due to changes that have occurred
in the language over time.

This manual includes some introductory information about Lisp, but is
not intended as tutorial. It is intended primarily as a reference
manual; particularly, it comes in response to users' pleas for more
up-to-date documentation. Much text has been borrowed directly from
the Moonual, but there has been a shift in emphasis. While the Moonual
went into greater depth on some issues, this manual attempts to offer
more in the way of examples and style notes. Also, since Moon had
worked on the Multics implementation, the Moonual offered more detail
about compatibility between ITS and Multics Maclisp. While it is hoped
that Multics users will still find the information contained herein to
be useful, this manual focuses more on the ITS and TOPS-20
implementations since those were the implementations most familiar to
the author.

The PitMANUAL, draft #14 May 21, 1983
Saturday Evening Edition

Keywords: Artificial Intelligence, Lisp, List Structure, Maclisp,
Programming Language, Symbol Manipulation

Ordering Information:

The Revised Maclisp Manual
MIT-LCS TR-295, $13.10

Publications
MIT Laboratory for Computer Science
545 Technology Square
Cambridge, MA 02139

About 300 copies were made. I don't know how long they'll last.
--kmp

------------------------------

Date: 1 August 1983 1747-EDT
From: Jeff Shrager at CMU-CS-A
Subject: IQLisp for the IBM-PC


A review of IQLisp (by Integral Quality, 1983).

Compiled by Jeff Shrager
CMU Psychology
7/27/83

The following comments refer to IQLisp running on an IBM-PC XT/256K
(you tell IQLisp the host machine's memory size at startup). I spent
two two-hour (approximately) sessions with IQLisp just going through
the manual and hacking various features. Then I tried to implement a
small production system interpreter (which took another three hours).

I. Things that make IQLisp more attractive than other micro Lisp
systems that I have seen.

A. The general workspace size is much larger than most due to the
IBM-PC XT's expanded capacity. IQLisp can take advantage of the
increased space and the manual explains in detail how memory
can be rearranged to take advantage of different programming
requirements. (But, see II.G.) (See also, summary.)
B. The Manual is complete and locally legible. (But see II.D.)
The internal specifications manual is surprisingly clear and
complete.
C. There is a window package. (But the windows aren't implemented
to scroll under one another so the feature is more-or-less
useless.)
D. There is a macro facility. This feature is important to both
speed and eventual implementation of a compiler. (But see II.B.)
Note that the manual teaches the "correct" way to write
fexprs -- i.e., with macros.
E. It uses the 8087 FP coprocessor if one exists. (But see II.A.)
F. Integer bignums are supported.
G. Arrays are supported for various data types.
H. It has good "simple" I/O facilities.
1. Function key support.
2. Single keystroke input.
3. Read macros. (No print macros?)
4. A (marginal) window facility.
5. Multiple streams.
I. The development package is a useful programming tool.
1. Error recovery tools are well designed.
2. A complete structure editor is provided. (But, see II.I.)
3. Many useful macros are included (e.g., backquote).
J. It seems to be reasonably fast. (See summary.)
K. Stack frame hacking functions are provided which permit error
control and evaluations in different contexts. (But, see II.H.)
L. There is a clean interface to DOS. (The "DIR" function is
especially useful and cleverly implemented.)


II. Negative aspects of IQLisp. (* Things marked with a "*" indicate
important deficiencies.)

**A. There is no compiler!
*B. Floating point is not supported without the 8087. One would
expect at least some sort of slow software FP to be provided.
*C. Casing is completely backwards. Uppercase is demanded by IQLisp
which forces one to put on shift lock (in a bad place on the IBM
PC). If any case dependency is implemented it should be the
opposite (i.e., demand lower case) but case sensitivity should
be switch controllable -- and default OFF!
*D. The manual is poorly organized. It is very difficult to find
a particular topic since there are no complete indexes and the
topics are split over several different sections.
E. Error recovery is sometimes poor. I have had three or four
occasions to reboot the PC because IQLisp had gone to lunch.
Once this was because the 8087 was not present and I had told
the system that it was. I don't know what caused the other
problems.
F. The file system supports only sequential files.
G. The stack is fixed at 64K maximum which isn't very much and
permits only about 700 levels of binding-free recursion.
H. No new features of larger Lisp systems are provided. For
example: closures, flavors, etc. This is really not a
reasonable complaint since we're talking 256K here.
I. There is no screen editor for functions.


III. Summary.

I was disappointed by IQLisp but perhaps this is because I am still
dreaming of having a Lisp machine for under $5,000. IQ has obviously
put a very large amount of effort into the system and its
documentation (the latter being at least as important as the former).

Although one does not have all the functionality of a Lisp machine in
IQLisp (or even nearly so) I think that they have done an admirable
job within the constraints of the IBM-PC. Some of the features are
overkill (e.g., the window system, which is pretty worthless as
provided and in a non-graphics environment).

My production system was not the model of efficient PS hacking. It
was not meant to be. I wanted to see how IQLisp compared with our
Vax VMS Franz system. I didn't use a RETE net or efficient memory
organization. IQ didn't do very well against even a heavily loaded
Vax (also interpreted lisp code). The main problem was space, not
speed. This is to be expected on a machine without virtual memory.
Since there are no indexed file capabilities in IQLisp, the user is
strictly limited by the available core memory. I think that it's
going to be some time before we can do interesting AI with a micro.
However, (1) I think that I could have rewritten my production system
to be much more efficient in both space and time. It may have run
acceptably with some careful tuning (what do you want for three
hours!?). And (2) we are going to try to use the system in the near
future for some human-computer interaction experiments -- as a
single-subject workstation for learning Lisp. I see no reason that
it should not perform acceptably in domains which are less
information intensive than AI.

The starred (*) items in section II above are major stumbling blocks
to using IQLisp in general. Of these, it is the lack of a Lisp
compiler which stops me from recommending it to everyone. I expect
that this will be corrected in the near future because they have all
the required underpinnings (macros, assembly interface, etc). Why
don't people just write a simple little lisp system and a whizzy
compiler?

------------------------------

End of AIList Digest
********************
