AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 41 

Today's Topics:
Expert Systems - Rog-O-Matic,
Programming Languages - LOGLisp & NETL & Functional Programming,
Computational Complexity - Time/Space Tradeoff & Humor
----------------------------------------------------------------------

Date: Tuesday, 16 August 1983 21:20:38 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Rog-O-Matic paper request


People of NetLand, the long awaited day of liberation has arrived!
Throw off the shackles of ignorance and see what secrets of
technology have been laid bare through the agency of a free
university press, unbridled by the harsh realities of economic
competition!

The Rog-O-Paper is here!

For a copy of CMU Technical Report CMU-CS-83-144, entitled
"Rog-O-Matic: A Belligerent Expert System", please send your physical
address to

Mauldin@CMU-CS-A

and include the phrase "paper request" in the subject line.


For those who have a copy of the draft, the final version contains
two more figures, expanded descriptions of some algorithms, and an
updated discussion of Rog-O-Matic's performance, including
improvements made since February. The history of the program's
development is also chronicled.

The source is still available by FTP, or it can be mailed in
several pieces. It is about a third of a megabyte of characters,
and is mailed in pieces either 70K or 40K characters long.

Michael Mauldin (Fuzzy)
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213


CMU-CS-83-144 Abstract

Rog-O-Matic is an unusual combination of algorithmic and
production systems programming techniques which cooperate
to explore a hostile environment. This environment is the
computer game Rogue, which offers several advantages for
studying exploration tasks. This paper presents the
major features of the Rog-O-Matic system and the types of
knowledge sources and rules used to control the
exploration, and compares the performance of the system
with that of human Rogue players.

------------------------------

Date: Tue 16 Aug 83 22:56:27-CDT
From: Donald Blais <CC.BLAIS@UTEXAS-20.ARPA>
Subject: LOGLisp language query

In the July 1983 issue of DATAMATION, Larry R. Harris states that the
logic programming language LOGLisp has recently been developed by
Robinson. What sources can I go to for additional information on this
language?

-- Donald

------------------------------

Date: Wed, 17 Aug 83 04:25 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Scott Fahlman's NETL

I've read a book by Scott Fahlman about a system called NETL for
representing knowledge in terms of a particular tree-like structure.
I found it a fascinating idea. It was published in 1979. When I last
heard about it, there were plans to develop some hardware to implement
the concept. Does anyone know what's been happening on this front?
Alan Glasser (glasser@lll-mfe)

------------------------------

Date: 15 Aug 83 22:44:27-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: uiucdcs.2574

Having also worked with both FP and AI systems, I basically agree
with your perceptions of their respective goals and functions, but I
think that we can have both, since they operate at different levels:
think of a powerful, functional language that underlies the majority
of the work in AI data and procedural representations, and imagine
what the world would be like if it were pure (but still powerful).
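
[A present-day sketch of what purity would buy, in Python rather than
an FP language: because pure functions have no side effects, a system
may fuse two passes over a list into one without changing the result.
The function names are invented for illustration.]

    # Two passes over the data: double everything, then add one.
    def two_pass(xs):
        doubled = [x * 2 for x in xs]
        return [d + 1 for d in doubled]

    # The fused, one-pass version. The rewrite is legal only because
    # both steps are pure, so merging them cannot change behavior.
    def one_pass(xs):
        return [(x * 2) + 1 for x in xs]

    assert two_pass(range(5)) == one_pass(range(5))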

Besides the "garbage collector" running now and then, there could,
given the mathematical foundations of FP systems, also be an
"efficiency expert" hanging around to tighten up your sloppy code.

Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

P.S. There is a recent paper by Lenat from Rand called "Cognitive
Economy" which discusses some possible advances in computing
environment maintenance; I don't recall it being linked to FP
systems, however.

------------------------------

Date: 16 Aug 83 20:33:29 EDT (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: maximum speed

This *maximum* time business needs further ground rules if we are to
discuss it here (which we probably shouldn't). For instance, the
argument that communication and multiplication paths don't matter in an
nxn matrix multiply, but that the limiting step is the summation of n
numbers, seems to allow too much power in specifying components. I am
allowed unboundedly many processors and communication paths, but only
a tree of adders? I can build you a circuit that will add n numbers
simultaneously, so that means the *maximum* speed of an nxn matrix
multiply is constant. But it just ain't so. As n grows larger and
larger and larger the communication paths and the addition circuitry
will also either grow and grow and grow, or the algorithm will slow
down. Good old time-space tradeoff.
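
[The adder-tree point can be made concrete: summing n numbers with
two-input adders takes at least ceil(log2 n) levels of circuitry, so
the delay grows with n rather than staying constant. A small Python
check:]

    import math

    def adder_tree_depth(n):
        """Levels of two-input adders needed to sum n numbers."""
        return 0 if n <= 1 else math.ceil(math.log2(n))

    for n in (8, 1024, 10**6, 10**9):
        print(n, adder_tree_depth(n))   # 3, 10, 20, 30 -- not constant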

(Another time-space tradeoff for matrix multiply on digital
computers: just remember all the answers and look them up in ROM.
Result: constant time matrix multiply for bounded n.)
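
[A toy version of the ROM trick, in Python: precompute every product
of 2x2 matrices with entries in {0, 1}, then each multiply is a single
table lookup. Even at this tiny bound the table holds 256 entries, and
it explodes as n and the entry range grow.]

    from itertools import product

    def matmul2(a, b):
        # ordinary 2x2 matrix product, used once to fill the "ROM"
        return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                           for j in range(2)) for i in range(2))

    mats = [((w, x), (y, z)) for w, x, y, z in product((0, 1), repeat=4)]
    ROM = {(a, b): matmul2(a, b) for a in mats for b in mats}

    a, b = ((1, 0), (1, 1)), ((0, 1), (1, 0))
    assert ROM[(a, b)] == matmul2(a, b)   # constant-time "multiply"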

------------------------------

Date: 16 Aug 1983 2016-MDT
From: William Galway <Galway@UTAH-20>
Subject: NP-completeness and parallelism, humor

Perhaps AI-digest readers will be amused by the following
article. I believe it's by Danny Cohen, and appears in the
proceedings of the CMU Conference on VLSI Systems and
Computations, pages 124-125, but this copy was dug out of local
archives.

..................................................................

The VLSI Approach to
Computational Complexity

Professor J. Finnegan
University of Oceanview, Kansas
(Formerly with the DP department of the
First National Bank of Oceanview)

The rapid advance of VLSI and the trend toward the decrease of
the geometrical feature size, through the submicron and the
subnano to the subpico, and beyond, have dramatically reduced the
cost of VLSI circuitry. As a result, many traditionally
unsolvable problems can now (or will in the near future) be
easily implemented using VLSI technology.

For example, consider the traveling salesman problem, where the
optimal sequence of N nodes ("cities") has to be found. Instead
of applying sophisticated mathematical tools that require
investment in human thinking, which because of the rising cost of
labor is economically unattractive, VLSI technology can be
applied to construct a simple machine that will solve the
problem!

The traveling salesman problem is considered difficult because of
the requirement of finding the best route out of N! possible
ones. A conventional single processor would require O(N!) time,
but with clever use of VLSI technology this problem can easily be
solved in polynomial time!!

The solution is obtained with a simple VLSI array having only N!
processors. Each processor is dedicated to a single possible
route that corresponds to a certain permutation of the set
[1,2,3,...,N]. The time to load the distance matrix and to select
the shortest route(s) is only polynomial in N. Since the
evaluation of each route is linear in N, the entire system
solves the problem in just polynomial time! Q.E.D.
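
[A serial Python simulation of the N!-processor machine: each
permutation stands in for one dedicated processor scoring its own
route. The 4-city distance matrix is invented for illustration.]

    from itertools import permutations

    dist = [[0, 2, 9, 10],
            [1, 0, 6, 4],
            [15, 7, 0, 8],
            [6, 3, 12, 0]]

    def route_length(route):
        # evaluating a single route is linear in N, as claimed
        return sum(dist[a][b] for a, b in zip(route, route[1:]))

    # one "processor" per permutation, then select the shortest route
    best = min(permutations(range(len(dist))), key=route_length)
    print(best, route_length(best))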

Readers familiar only with conventional computer architecture may
wrongly suspect that the communication between all of these
processors is too expensive (in area). However, with the use of
wireless communication this problem is easily solved without the
traditional, conventional area penalty. If the system fails to
obtain from the FCC the required permit to operate in a
reasonable domain of the frequency spectrum, it is always
possible to use microlasers and picolasers for communicating
either through a light-conducting substrate (e.g. sapphire) or
through a convex light-reflecting surface mounted parallel to the
device. The CSMA/CD (Carrier Sense Multiple Access, with
Collision Detection) communication technology, developed in the
early seventies, may be found to be most helpful for these
applications.

If it is necessary to solve a problem with a larger N than the
one for which the system was initially designed, one can simply
design another system for that particular value of N, or even a
larger one, in anticipation of future requirements. The
advancement of VLSI technology makes this iterative process
feasible and attractive.

This approach is not new. In the early eighties many researchers
discovered the possibility of accelerating the solution of many
NP-complete problems by a simple application of systems with an
exponential number of processors.

Even earlier, in the late seventies many scientists discovered
that problems with polynomial complexity could also be solved in
lower time (than the complexity) by using a number of processors
which is also a polynomial function of the problem size,
typically of a lower degree. NxN matrix multiplication by
systems with N^2 processors used to be a very popular topic for
conversations and conference papers, even though less popular
among system builders. The requirement of dealing with the
variable N was (we believe) handled by the simple P/O technique,
namely, buying a new system for any other value of N, whenever needed.
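
[The N^2-processor scheme, simulated serially in Python: "processor
(i, j)" computes the single entry C[i][j] by an N-step dot product, so
with all N^2 of them running at once the multiply would take O(N)
time.]

    def matmul(A, B):
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):            # each (i, j) pair plays one processor
            for j in range(n):
                C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
        return C

    print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]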

According to the most popular model of those days, the cost of
VLSI processors decreases exponentially. Hence the application
of an exponential number of processors does not cause any cost
increase, and the application of only a polynomial number of
processors results in a substantial cost saving!! The fact that
the former exponential decrease refers to calendar time and the
latter to problem size probably has no bearing on this discussion
and should be ignored.

The famous Moore model of exponential cost decrease was based on
plotting the time trend (as has been observed in the past) on
semilogarithmic scale. For that reason this model failed to
predict the present as seen today. Had the same observations
been plotted on a simple linear scale, it would be obvious that
the cost of VLSI processors is already (or about to be) negative.
This must be the case, or else there is no way to explain why so
many researchers design systems with an exponential number of
processors and compete for solving the same problem with more
processors.

CONCLUSIONS

- With the rapid advances of VLSI technology anything is
possible.

- The more VLSI processors in a system, the better the paper.

------------------------------

End of AIList Digest
********************
