AIList Digest            Friday, 16 Nov 1984      Volume 2 : Issue 156 

Today's Topics:
Programming Languages - Horror Stories,
Algorithms - Interlisp-D "Malgorithm",
AI Tools - DEC Software Agreements & Japanese Lisp Machines,
Seminars - Logo for Teaching Language &
Knowledge Representation and Temporal Representation
----------------------------------------------------------------------

Date: 14 Nov 84 20:45:32 EST
From: Edward.Smith@CMU-CS-SPICE
Subject: Programming Language Horror Stories

[Forwarded from the CMU bboard by Laws@SRI-AI.]

In a couple of weeks I'm going to give my last lecture in Comparative
Programming, and as a way of motivating these undergraduates to
consider language design in their future work I was going to present
some examples of particularly BAD (ugly or dangerous or however you
wish to define that) design. So, if you have a favorite horror story
about some programming language you'd like to contribute, I would
appreciate it - I will put all the good ones up in a file somewhere.
These stories should be SHORT and to the point, and could be things
like these classics:
- the property of FORTRAN to compile and execute (generally without
complaining) a DO loop construct without a comma
- just about ANY one-line APL program (my favorite being the one-line game
of LIFE attributed to Marc Donner)
- the use of the space character in SNOBOL as (a) the separator between
labels and statements, (b) concatenation operator, (c) pattern matching
operator, or (d) separator for the pattern match assignment operator "."
- (FORTRAN's full of them) the property of early FORTRANs to change the
value of "constants" like 5 to, say, 3 through an interesting
parameter-passing mechanism

Please send to ets@cmu-cs-spice. Thanks in advance.

------------------------------

Date: 15 Nov 84 12:39 PST
From: JonL.pa@XEROX.ARPA
Subject: Interlisp-D "malgorithm"?

Regarding the "malgorithm" and "Logical method" proposed by
Todd.pasa@XEROX.ARPA: using the NTH function repeatedly on a list of
elements (as opposed to an array or "vector" of elements) has got to be
a classic "malgorithm". The access time for selecting the n'th element
of a list is proportional to n, whereas the similar time for arrays or
vectors should be essentially constant. The repeated selection evident
in Todd's example converts a linear algorithm into a quadratic one.

Just for the record, let me propose what I (and, I suspect, many other
long-time "Lisp lovers") would have considered to be the "logical"
algorithm:

(for ITEM in X as I to M collect ITEM)

A little analysis would show this to be asymptotically better than the
proposed "malgorithm" by a factor somewhere between 1 and 2 -- 2 because
the "malgorithm" traverses the M-prefix of the list X twice, and 1
because the CONS time cost may be made arbitrarily high, thereby
occluding any other effect.
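
The cost difference JonL describes can be sketched in modern terms (a
Python model of cons cells, not Interlisp; the function names are
invented for illustration): calling an NTH-style accessor once per
element re-walks the list prefix each time and is quadratic, while a
single collecting traversal, the shape of the `for ... collect` loop
above, is linear.

```python
# Toy cons cells standing in for Lisp lists.
class Cons:
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr

def from_list(items):
    head = None
    for x in reversed(items):
        head = Cons(x, head)
    return head

def nth(lst, n):
    """NTH-style access: walk n links, so cost is proportional to n."""
    for _ in range(n):
        lst = lst.cdr
    return lst.car

def first_m_malgorithm(lst, m):
    """Quadratic: re-traverses the M-prefix for every element."""
    return [nth(lst, i) for i in range(m)]

def first_m_logical(lst, m):
    """Linear: one pass, collecting as it goes -- the shape of
    (for ITEM in X as I to M collect ITEM)."""
    out = []
    while m > 0 and lst is not None:
        out.append(lst.car)
        lst = lst.cdr
        m -= 1
    return out
```

For M elements the first version walks 0 + 1 + ... + (M-1) links, the
second walks M links once.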

-- JonL White --

------------------------------

Date: 15 Nov 84 0524 EST
From: Dave.Touretzky@CMU-CS-A.ARPA
Subject: DEC news release

The following two paragraphs come from a medium-length article in DEC's
Large System News.


Digital Signs Marketing Agreements With Six AI Firms

Digital has signed agreements with a number of leading independent producers of
Artificial Intelligence (AI) software to market cooperatively their products
on VAX computers and personal computer systems.

Independent AI software producers include the Carnegie Group, Inc.; Digital
Research, Inc. (DRI); Gold Hill Computers; Inference Corp.; Prologia; and
USC Information Sciences Institute (ISI). AI software packages developed to
run on Digital's computers include Inference's ART, Gold Hill's GCLISP,
ISI's Interlisp, Prologia's PROLOG II, and the Carnegie Group's SRL+ and PLUME.


In two other articles in the same newsletter, Digital announced availability
of VAX Lisp 1.0 as a fully-supported Common Lisp product, and availability
of OPS5 as a supported product. Digital's OPS5 is written in Bliss-32 for
performance reasons; it includes both an interpreter and a compiler.

------------------------------

Date: 14 Nov 84 09:31:35 PST (Wed)
From: Jed Marti <marti@randgr>
Subject: Japanese Lisp Machines.

I just saw the request for information about the Fujitsu Alpha. I
recently spent a week in Japan as a guest of the RIKEN institute which
provided a tour of some of the local Tokyo efforts in this direction.
Perhaps it would be of interest to the AIList readers.

Jed Marti.



Japanese Lisp Machines

The RSYMSAC conference held at the Riken institute in Saitama, Japan on
August 21-22, provided an opportunity for a close view of Japanese
efforts to construct very fast machines for running large scale
symbolic algebra and AI programs. Four of us toured two computer
centers and three Lisp machine construction projects, talking to
implementors and trying our favorite test programs. This short report
describes the state of their systems and the Japanese symbolic algebra
environment.

FLATS at Riken

The Riken institute conducts research in the physical sciences and
operates a Fujitsu M380H (an IBM 370 look-alike) providing both time
sharing and batch services. During the day, computer algebra system
users access Cambridge Lisp running REDUCE 3.1. The symbolic
computation group operates a VAX 11/750 running VMS, a host of 16 bit
micro-computers, and the FLATS machine.

The symbolic computation group officially unveiled FLATS (Formula Lisp
Association Tuple Set) at the conference. The Mitsui Ship Building
Company constructed the hardware based on designs of the Riken group.
Built from SSI ECL components, the CPU executes a micro-instruction
every 50 nanoseconds and a Lisp instruction every 100 nanoseconds from
a 300 bit by 256 word micro store and 8 megabytes of 450 nanosecond
main memory. Over 70,000 wires connect the back plane making
conventional hardware debugging impossible. The engineers exercise
modules on a special test jig or through the attached support
processor.

The hash code generation hardware sets FLATS apart from conventional
Lisp machines. It computes a hash code in the basic machine cycle time
for extremely fast property list access and CONS cell generation.
Improvements in execution speed and program clarity more than offset
the loss of the RPLACA and RPLACD functions on hashed CONSes.
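
The trade-off can be sketched in software (a Python model, with names
invented for illustration; FLATS performs the lookup in hardware): a
hash-consing CONS consults a table keyed on the car and cdr, returning
an existing cell when one matches, so structurally equal lists become
pointer-identical. Because cells are shared, they must be immutable,
which is exactly why RPLACA and RPLACD are lost.

```python
# Toy hash-consing table. Keys use object identity, a simplification
# that is fine for this sketch but not a production design.
_table = {}

def hcons(car, cdr):
    """CONS with hashing: equal (car, cdr) pairs share one cell."""
    key = (id(car), id(cdr))
    cell = _table.get(key)
    if cell is None:
        cell = (car, cdr)   # an immutable tuple: no RPLACA/RPLACD
        _table[key] = cell
    return cell
```

Building the "same" list twice then yields the very same cell, so
structural equality checks reduce to a pointer comparison.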

The designers increased speed with a number of special features:

- 3 separate cache memories for instructions, data, and stack
- special micro-coded instructions for the garbage collector and big
numbers
- CALL, JUMP, and RETURN executed in parallel with other instructions
- hardware type checking in parallel with data operations
- 3-address instruction codes
- hardware support for paging
- data path width of 64 bits

The FLATS machine, without hash CONS and property lists, runs REDUCE
3.0 at about IBM 3033 speeds. Several papers presented at RSYMSAC
described the status of FLATS and the design of the next FLATS machine
that the group hopes to construct from Josephson Junction circuits
[2-3].

University of Tokyo Computer Center

We visited the University of Tokyo Computer Center to find out more
about UTILISP (University of Tokyo Interactive LISP), implemented by the
Department of Mathematical Engineering and Instrumentation Physics
[1]. Probably one of the largest academic installations in the world,
the center operates two Hitachi M280 dual processors (roughly
equivalent to an IBM 3081) each with 32 megabytes of main storage and a
Hitachi M200H with 16 megabytes of main storage. A Hitachi S810/2
vector processor with 64 megabytes of main memory and a VAX 11/780 with
4 megabytes complement the general purpose machines. On-line storage
consists of 48 gigabytes on disk, and 37 gigabytes in data cells. The
center emphasizes user convenience. Users mount their own tapes, take
output off printers, read their own card decks (we didn't actually see
anyone do this, but the machine was there), tear off plots and so on.
The lightly loaded machines run an average of only 4,000 jobs per day.
Users need not wait for terminals and other equipment, an enviable
situation indeed.

UTILISP resembles MacLisp. An effort to transport MACSYMA to UTILISP
suffers only from the lack of built-in big number arithmetic.

Fujitsu ALPHA

A long train and subway ride brought us to the third tour stop, the
Fujitsu Laboratories in Kawasaki, home of the Lisp machine ALPHA [4-5].
The ALPHA offloads time sharing symbolic processing jobs from IBM style
mainframes. More than one ALPHA can be connected to a single mainframe,
which supplies I/O devices, the filing system, editing, and operating
system support.

The ALPHA has 8 megabytes of real memory with a 16 megabyte virtual
address space. Memory and data buses are 32 bits wide with Lisp items
composed of an 8 bit tag and 24 bit value. The ALPHA processor has a
high speed hardware stack of 8k words with special hardware for
swapping segments to and from slower memory. The division of the stack
into blocks permits high speed switching between different processes.
To support tagged data items, a micro-instruction jump based on the 8
bit tag is implemented. The ALPHA machine performs data calculations by
masking off the tag bits in hardware rather than software. The machine
contains over 7,700 STTL chips, 64k-bit RAMs, and 4k high-speed RAMs.

Micro Instructions - 48 bits wide, 160 ns, 16k words.
Main Memory - Virtual 16 M words, Real 8 M words,
Page size 4 K bytes.
Stack - Logical stack 64 K words, Hardware stack 8 K words,
Swapping block size 2 K bytes.
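
The item layout above can be sketched in software (a Python model with
invented tag codes and function names; on the ALPHA the masking and the
tag dispatch happen in hardware, in parallel with the data operation):

```python
# A 32-bit Lisp item: 8-bit tag in the high byte, 24-bit value below.
TAG_SHIFT = 24
VALUE_MASK = (1 << 24) - 1

TAG_FIXNUM = 0x01   # tag codes invented for illustration
TAG_CONS   = 0x02

def make_item(tag, value):
    return (tag << TAG_SHIFT) | (value & VALUE_MASK)

def tag_of(item):
    return item >> TAG_SHIFT

def value_of(item):
    return item & VALUE_MASK

def add_fixnums(a, b):
    # Hardware performs the type check and the masking in parallel;
    # here they are sequential software steps.
    assert tag_of(a) == TAG_FIXNUM and tag_of(b) == TAG_FIXNUM
    return make_item(TAG_FIXNUM, value_of(a) + value_of(b))
```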

The ALPHA runs UTILISP and has an interpreter, compiler, and copying
garbage collector. Fujitsu claims the ALPHA runs three times faster
than the Symbolics 3600 and 5 times faster than DEC 2060 MACLISP.
Fujitsu uses the ALPHA for CAD, machine translation, and natural
language understanding.


ELIS - Nippon Telegraph and Telephone Public Corporation

Nippon Telegraph and Telephone Public Corporation demonstrated the ELIS
machine and the TAO language, a "harmonic" mixture of Lisp, Smalltalk,
and Prolog, to quote the authors [6]. A PDP 11/60 provides file and
operating system support while the ELIS hardware performs the list
processing functions. The ELIS hardware features 32 bit items with 8
bit tags providing for 16 million items (128 megabytes). The basic
microcycle time is 180 ns in 32k of micro-instructions 64 bits wide.
Main memory is 4 megabytes with an access time of 420 ns and a special
system stack of 32k 32-bit items. Deep binding is used, and multiple
processes are supported by stack groups; the CPU switches between
contexts very quickly (2 microseconds unless some stack swapping is
needed). For identical tasks programmed in the three different
paradigms, the procedural version is fastest, with the object-oriented
version about 1.1 times slower and the logic version about twice as
slow.

Acknowledgement: I would like to thank Dr. N. Inada of Riken for
organizing both RSYMSAC and the tour.

List of References
1. Chikayama, Takashi, `UTILISP Manual', Technical Report METR 81-6
(September 1981), Department of Mathematical Engineering and
Instrumentation Physics, University of Tokyo, Bunkyo-Ku, Tokyo,
Japan.
2. Goto, E., Shimizu, K., `Architecture of a Josephson Computer
(FLATS-2)', RSYMSAC, Wako-shi, Saitama, 351-01 Japan, 1984.
3. Goto, E., Soma, T., Inada, N., et al., `FLATS: A Machine for
Symbolic and Algebraic Manipulation', RSYMSAC, Riken, Wako-shi,
Saitama, 351-01 Japan, 1984.
4. Hayashi, H., Hattori, A., Akimoto, H., `ALPHA: A High-Performance
LISP Machine with a New Stack Structure and Garbage Collection
System', Proceedings of the 10th Annual International Symposium
on Computer Architecture, pages 342-347.
5. Hayashi, H., Hattori, A., Akimoto, H., `LISP Machine "ALPHA"',
Fujitsu Scientific and Technical Journal, Vol. 20, No. 2,
pages 219-234.
6. Okuno, H. G., Takeuchi, I., Osato, N., Hibino, Y., Watanabe, K.,
`TAO: A Fast Interpreter-Centered System on Lisp Machine ELIS',
Proceedings of the 1984 Conference on LISP and Functional
Programming.


Jed Marti MARTI@RAND-UNIX

------------------------------

Date: 9 November 1984 1359-EST
From: Jeff Shrager@CMU-CS-A
Subject: Seminar - Logo for Teaching Language

[Forwarded from the CMU bboard by Laws@SRI-AI.]


Subject: Guest speaker on teaching language with LOGO
Source: charney (davida charney @ cmu-psy-a)

English Department -- Guest Speaker

NAME: Wallace Feurzeig (Bolt, Beranek and Newman)
DATE: Friday, November 16
TIME: 9 - 10:30 am (There will be coffee and doughnuts.)
PLACE: Adamson Wing in Baker Hall

TITLE: Exploring Language with Logo

The talk gives examples of materials from our forthcoming book "Exploring
Language with Logo" to be published by Harper and Row o/a first quarter,
1985, co-authored by Paul Goldenberg and Wallace Feurzeig. The book
attempts to develop a qualitatively different approach to the teaching of
language arts in schools. Our approach is based on two major intellectual
developments -- the theory of generative grammar in formal linguistics and
the invention of programming languages. The important new idea from
linguistics is that a grammar can be used as a constructive instrument to
generate sentences, in contrast to the conventional school experience of
grammar as an analytic device, a set of tools for parsing sentences to
determine whether or not they are instances of "good" English. This shift
has enormous psychological and pedagogical benefits: it switches the
learner's focus and viewpoint from rule learner to language creator. At the
same time, it provides a distinctly different, more accessible and
acceptable way of introducing the formal structures of language and the
regularities and rules describing these structures.

The other major intellectual development, programming languages, provides
the most distinctive and radical departure from the present language arts
course. Our approach depends fundamentally upon programming ideas and
activities. In our presentation, the key and central language concepts are
introduced and developed as Logo programs. Teachers and students are engaged
in programming projects throughout. The use of a programming language in the
English language classroom makes the idea of generative grammars concrete
in tasks readily accessible to schoolchildren. Moreover, in the environment
of programming, grammar models are transformed from highly abstract
formalisms into runnable objects in semantic situations that are meaningful
and interesting to students. For example, students can create Logo programs
that simulate the grammar of gossip, puns, jokes, love letters, baby talk,
proverbs, quizzes, conversational discourse, poems of various forms and
expressive styles, and many other kinds of texts. Examples will illustrate
the approach and materials at three levels: the structure of sentences,
structures within a word, and larger structures.
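
The generative idea above can be sketched as a short program (in
Python rather than Logo, with a tiny grammar invented for
illustration): the grammar is not a parser that judges sentences but a
procedure that produces them, which is the shift the talk describes.

```python
import random

# A toy generative grammar: each nonterminal maps to a list of
# possible expansions; anything not in the table is a terminal word.
GRAMMAR = {
    "SENTENCE": [["NOUN-PHRASE", "VERB-PHRASE"]],
    "NOUN-PHRASE": [["the", "NOUN"]],
    "VERB-PHRASE": [["VERB", "NOUN-PHRASE"]],
    "NOUN": [["student"], ["turtle"], ["poem"]],
    "VERB": [["writes"], ["draws"]],
}

def generate(symbol):
    """Expand a symbol: terminals pass through, nonterminals recurse
    on a randomly chosen expansion."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words
```

Calling `" ".join(generate("SENTENCE"))` yields sentences such as "the
turtle draws the poem"; a student extends the grammar and immediately
sees new sentences, the grammar acting as a runnable object rather than
an analytic formalism.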

------------------------------

Date: 13 November 1984 11:12-EST
From: Rosemary B. Hegg <ROSIE @ MIT-MC>
Subject: Seminar - Knowledge Representation and Temporal Representation

[Forwarded from the MIT bboard by Laws@SRI-AI.]

COOPERATIVE COMPUTATION, KNOWLEDGE ORGANIZATION AND TIME

John K. Tsotsos

Department of Computer Science
University of Toronto
Toronto, Ontario, Canada


DATE: Friday, November 16, 1984
TIME: 1.45 pm Refreshments
2.00 pm Lecture
PLACE: NE43-7th Floor Playroom


A cooperative processing scheme is presented that deals with
time-varying information. It operates over a network of temporal
concepts, organized along common representational axes:
generalization, aggregation, similarity, and temporal precedence.
Units in this network are organized into computation layers, and
these layers are conceptualized as "recognizing" concepts that can be
organized along a generalization / specialization dimension. Thus
elements of both "localist" and "distributed" views of concept
representations are present. Static and dynamic data are treated in
the same way - as samples over time - and thus sampling issues are
directly addressed. This process is a time-varying non-linear
optimization task; it differs from past cooperative computation
schemes in three respects: a) our information is not uniform, but
rather different concepts are represented at different levels of the
hierarchies; b) there are multiple interacting networks, each
organized according to different semantics; c) the data is
time-varying and, more importantly, the structure over which
relaxation is performed is time-varying. The cooperative process to
be described has the qualitative properties we believe are desirable
for temporal interpretation, and its performance will be described
empirically, and in a qualitative fashion, through the use of several
examples.

HOST: Prof. Peter Szolovits

------------------------------

End of AIList Digest
********************
