AIList Digest            Friday, 16 Sep 1983       Volume 1 : Issue 57 

Today's Topics:
Artificial Intelligence - Public Recognition,
Programming Languages - Multiple Inheritance & Micro LISPs,
Query Systems - Talk by Michael Hess,
AI Architectures & Prolog - Talk by Peter Borgwardt,
AI Architectures - Human-Nets Reprints
----------------------------------------------------------------------

Date: 10 Sep 1983 21:44:16-PDT
From: Richard Tong <fuzzy1@aids-unix>
Subject: "some guy named Alvey"

John Alvey is Senior Director, Technology, at British Telecom. The
committee that he headed reported to the British Minister for
Information Technology in September 1982 ("A Program for Advanced
Information Technology", HMSO 1982).

The committee was formed, at the behest of the British Information
Technology Industry, in response to the announcement of the Japanese
5th Generation Project.

The major recommendations were for increased collaboration within
industry, and between industry and academia, in the areas of Software
Engineering, VLSI, Man-Machine Interfaces and Intelligent
Knowledge-Based Systems. The recommended funding levels were
approximately $100M, $145M, $66M and $40M respectively.

The British Government's response was entirely positive and resulted
in the setting up of a small Directorate within the Department of
Industry. This is staffed by people from industry and supported by
the Government.

The most obvious results so far have been the creation of several
Information Technology posts in various universities. Whether the
research money will appear as quickly remains to be seen.

Richard.

------------------------------

Date: Mon 12 Sep 83 22:35:21-PDT
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM>
Subject: The world turns; would you believe...

[Reprinted from the SU-SCORE bboard.]

1. A thing called the Wall Street Computer Review, advertising
a conference on computers for Wall Street professionals, with
keynote speech by Isaac Asimov entitled "Artificial Intelligence
on Wall Street".

2. In the employment advertising section of last Sunday's NY Times,
Bell Labs (of all places!) showing Expert Systems prominently
as one of their areas of work and need, and advertising for people
to do Expert Systems development using methods of Artificial
Intelligence research. Now I'm looking for a big IBM ad in
Scientific American...

3. In 2 September SCIENCE, an ad from New Mexico State's Computing
Research Laboratory. It says:

"To enhance further the technological capabilities of New Mexico, the
state has funded five centers of technical excellence including
Computing Research Laboratory (CRL) at New Mexico State University.
...The CRL is dedicated to interdisciplinary research on knowledge-
based systems"

------------------------------

Date: 15 Sep 1983 15:28-EST
From: David.Anderson@CMU-CS-G.ARPA
Subject: Re: Multiple Inheritance query

For a discussion of multiple inheritance see "Multiple Inheritance in
Smalltalk-80" by Alan Borning and Dan Ingalls in the AAAI-82
proceedings. The Lisp Machine Lisp manual also has some justification
for multiple inheritance schemes in the chapter on Flavors.

--david

[See also any discussion of the LOOPS language, e.g., in the
Fall issue of AI Magazine. -- KIL]
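
[A minimal sketch of the issue those papers address, in Python with
invented class names rather than Flavors or Smalltalk-80: a class
combining two parents inherits methods from both, and a linearization
rule decides which definition wins when the parents conflict.]

class Printable:
    def describe(self):
        return "printable " + self.name

class Persistent:
    def save(self):
        return "saving " + self.name

class Document(Printable, Persistent):
    # Inherits describe() from Printable and save() from Persistent.
    # If both parents defined the same method, the linearized method
    # resolution order (left-to-right here) decides which one wins --
    # exactly the conflict a multiple inheritance scheme must resolve.
    def __init__(self, name):
        self.name = name

doc = Document("report")
print(doc.describe())                  # -> printable report
print(doc.save())                      # -> saving report
print([c.__name__ for c in Document.__mro__])
# -> ['Document', 'Printable', 'Persistent', 'object']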

------------------------------

Date: Wed 14 Sep 83 19:16:41-EDT
From: Ted Markowitz <TJM@COLUMBIA-20.ARPA>
Subject: Info on micro LISP dialects

Has anyone evaluated versions of LISP that run on micros? I'd like to
find out what's already out there and people's impressions of them.
The hardware would be something along the lines of an IBM PC or a DEC
Rainbow.

--ted

------------------------------

Date: 12 Sep 1983 1415-PDT
From: Ichiki
Subject: Talk by Michael Hess

[This talk will be given at the SRI AI Center. Visitors
should come to E building on Ravenswood Avenue in Menlo
Park and call Joani Ichiki, x4403.]


Text Based Question Answering Systems
-------------------------------------

Michael Hess
University of Texas, Austin

Friday, 16 September, 10:30, EK242

Question Answering Systems typically operate on Data Bases consisting
of object level facts and rules. This, however, limits their
usefulness quite substantially. Most scientific information is
represented as Natural Language texts. These texts provide relatively
few basic facts but do give detailed explanations of how they can be
interpreted, i.e. how the facts can be linked with the general laws
which either explain them, or which can be inferred from them. This
type of information, however, does not lend itself to an immediate
representation on the object level.

Since there are no known proof procedures for higher order logics, we
have to find makeshift solutions for a suitable text representation
with appropriate interpretation procedures. One way is to use the
subset of First Order Predicate Calculus as defined by Prolog as a
representation language, and a General Purpose Planner (implemented in
Prolog) as an interpreter. Answering a question over a textual data
base can then be reduced to proving the answer in a model of the world
as described in the text, i.e. to planning a sequence of actions
leading from the state of affairs given in the text to the state of
affairs given in the question. The meta-level information contained in
the text is used as control information during the proof, i.e. during
the execution of the simulation in the model. Moreover, the format of
the data as defined by the planner makes explicit some kinds of
information particularly often addressed in questions.

The simulation of an experiment in the Blocks World, using the kind of
meta-level information important in real scientific experiments, can
be used to generate data which, when generalised, could be used
directly as a DB for question answering about the experiment.
Simultaneously, it serves as a pattern for the representation of
possible texts describing the experiment. The question of how to
translate NL questions and NL texts into this kind of format,
however, has yet to be solved.
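
[A rough sketch of the reduction described above, in Python rather
than Prolog, with an invented micro Blocks World: facts stated in a
text give the initial state, the question supplies a goal fact, and
answering the question amounts to finding a plan that reaches a state
containing that fact.]

from collections import deque

# Facts extracted from a hypothetical text: block B is on block A,
# block A is on the table, and nothing is on top of B.
initial = frozenset({("on", "b", "a"), ("on", "a", "table"),
                     ("clear", "b")})

# The question "can B end up on the table?" becomes a goal fact.
goal = ("on", "b", "table")

def moves(state):
    # Generate (action, new-state) pairs: any clear block sitting on
    # another block may be moved down onto the table.
    for fact in state:
        if fact[0] != "on":
            continue
        _, block, under = fact
        if under == "table" or ("clear", block) not in state:
            continue
        new = set(state)
        new.discard(fact)
        new.add(("on", block, "table"))
        new.add(("clear", under))
        yield ("move %s to the table" % block, frozenset(new))

def answer(initial, goal):
    # Breadth-first search for a plan whose final state contains the
    # goal fact; the plan itself explains how the answer comes about.
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal in state:
            return plan
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                        # no plan: the answer is "no"

print(answer(initial, goal))           # -> ['move b to the table']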

------------------------------

Date: 12 Sep 1983 1730-PDT
From: Ichiki
Subject: Talk by Peter Borgwardt

[This talk will be given at the SRI AI Center. Visitors
should come to E building on Ravenswood Avenue in Menlo
Park and call Joani Ichiki, x4403.]

There will be a talk given by Peter Borgwardt on Monday, 9/19 at
10:30am in Conference Room EJ222. Abstract follows:

Parallel Prolog Using Stack Segments
on Shared-memory Multiprocessors

Peter Borgwardt
Computer Science Department
University of Minnesota
Minneapolis, MN 55455

Abstract

A method of parallel evaluation for Prolog is presented for
shared-memory multiprocessors that is a natural extension of the
current methods of compiling Prolog for sequential execution. In
particular, the method exploits stack-based evaluation with stack
segments spread across several processors to greatly reduce the need
for garbage collection in the distributed computation. AND
parallelism and stream parallelism are the most important sources of
concurrent execution in this method; these are implemented using local
process lists; idle processors may scan these and execute any process
as soon as its consumed (input) variables have been defined by the
goals that produce them. OR parallelism is considered less important
but the method does implement it with process numbers and variable
binding lists when it is requested in the source program.
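
[A rough, sequential sketch in Python, with invented goal names, of
the scheduling rule described above: each process records which
variables it consumes and produces, and an idle worker scanning the
process list may run any process whose consumed variables have
already been bound by their producers. Stack segments and real
shared-memory execution are not modeled.]

bindings = {}        # shared variable bindings: name -> value

def process(name, inputs, outputs, body):
    return {"name": name, "in": inputs, "out": outputs,
            "body": body, "done": False}

# Three goals: p1 and p2 produce X and Y, p3 consumes both.
processes = [
    process("p1", [], ["X"], lambda b: {"X": 3}),
    process("p3", ["X", "Y"], ["Z"], lambda b: {"Z": b["X"] + b["Y"]}),
    process("p2", [], ["Y"], lambda b: {"Y": 4}),
]

def step():
    # One scan by an idle worker: run the first process whose consumed
    # variables have all been bound by their producers.
    for p in processes:
        if not p["done"] and all(v in bindings for v in p["in"]):
            bindings.update(p["body"](bindings))
            p["done"] = True
            return p["name"]
    return None          # nothing runnable right now

while not all(p["done"] for p in processes):
    if step() is None:
        break            # would be a deadlock in a real program
print(bindings)          # -> {'X': 3, 'Y': 4, 'Z': 7}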

------------------------------

Date: Wed, 14 Sep 83 07:31 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: human-nets discussion on AI and architecture

Ken,

I see you have revived the Human-nets discussion about AI and
computer architecture. I initiated that discussion and saved all
the replies. I thought you might be interested. I'm sending them
to you rather than AILIST so you can use your judgment about what,
if anything, you might like to forward to AILIST.
Alan

[The following is the original message. The remainder of this
digest consists of the collected replies. I am not sure which,
if any, appeared in Human-Nets. -- KIL]


---------------------------------------------------------------------

Date: 4 Oct 1982 (Monday) 0537-EDT
From: GLASSER at LLL-MFE
Subject: artificial intelligence and computer architecture

I am a new member of the HUMAN-NETS interest group. I am also
newly interested in Artificial Intelligence, partly as a result of
reading "Goedel,Escher,Bach" and similar recent books and articles
on AI. While this interest group isn't really about AI, there isn't
any other group which is, and since this one covers any computer
topics not covered by others, this will do as a forum.
From what I've read, it seems that most or all AI work now
being done involves using von Neumann computer programs to model
aspects of intelligent behavior. Meanwhile, others like Backus
(IEEE Spectrum, August 1982, p.22) are challenging the dominance of
von Neumann computers and exploring alternative programming styles
and computer architectures. I believe there's a crucial missing link
in understanding intelligent behavior. I think it's likely to
involve the nature of associative memory, and I think the key to it
is likely to involve novel concepts in computer architecture.
Discovery of the structure of associative memory could have an
effect on AI similar to that of the discovery of the structure of
DNA on genetics. Does anyone out there have similar ideas? Does
anyone know of any research and/or publications on this sort of
thing?

---------------------------------------------------------------------

Date: 15 Oct 1982 1406-PDT
From: Paul Martin <PMARTIN at SRI-AI>
Subject: Re: HUMAN-NETS Digest V5 #96

Concerning the NON-VON project at Columbia: David Shaw, formerly of
the Stanford A.I. Lab, is developing non-von Neumann hardware designs
that allow an interesting class of database access operations to run
in less than exponential time in the size of the database. He
wouldn't call his project AI, but rather an approach to "breaking the
von Neumann bottleneck" as it applies to a number of well-understood
but poorly solved problems in computing.

---------------------------------------------------------------------

Date: 28 Oct 1982 1515-EDT
From: David F. Bacon
Subject: Parallelism and AI
Reply-to: Columbia at CMU-20C

Parallel Architectures for Artificial Intelligence at Columbia

While the NON-VON supercomputer is expected to provide significant
performance improvements in other areas as well, one of the
principal goals of the project is the provision of highly efficient
support for large-scale artificial intelligence applications. As
Dr. Martin indicated in his recent message, NON-VON is particularly
well suited to the execution of relational algebraic operations. We
believe, however, that such functions, or operations very much like
them, are central to a wide range of artificial intelligence
applications.

In particular, we are currently developing a parallel version of the
PROLOG language for NON-VON (in addition to parallel versions of
Pascal, LISP and APL). David Shaw, who is directing the NON-VON
project, wrote his Ph.D. thesis at the Stanford A.I. Lab on a
subject related to large-scale parallel AI operations. Many of the
ideas from his dissertation are being exploited in our current work.

The NON-VON machine will be constructed using custom VLSI chips,
connected according to a binary tree-structured topology. NON-VON
will have a very "fine granularity" (that is, a large number of very
small processors). A full-scale NON-VON machine might embody on the
order of 1 million processing elements. A prototype version
incorporating 1000 PE's should be running by next August.
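
[A toy sketch in Python, with invented records, of how such a binary
tree of fine-grained processing elements can serve an associative
query: records live in the leaf PEs, the query goes to every leaf,
and each internal node combines its children's partial answers, so
the combining phase takes a number of steps proportional to the
tree's depth rather than to the number of records.]

class PE:
    # One processing element; leaves hold a record, internal nodes
    # only combine answers from their two children.
    def __init__(self, left=None, right=None, value=None):
        self.left, self.right, self.value = left, right, value

    def count_matching(self, predicate):
        if self.left is None:                     # leaf PE
            return 1 if predicate(self.value) else 0
        # In hardware the two subtrees would work simultaneously;
        # here they are simply evaluated one after the other.
        return (self.left.count_matching(predicate) +
                self.right.count_matching(predicate))

def build(records):
    # Build a balanced binary tree of PEs over a list of records.
    if len(records) == 1:
        return PE(value=records[0])
    mid = len(records) // 2
    return PE(left=build(records[:mid]), right=build(records[mid:]))

tree = build([("smith", 34), ("jones", 29), ("lee", 41), ("wong", 29)])
print(tree.count_matching(lambda rec: rec[1] == 29))   # -> 2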

In addition to NON-VON, another machine called DADO is being
developed specifically for AI applications (for example, an optimal
running time algorithm for Production System programs has already
been implemented on a DADO simulator). Professor Sal Stolfo is
principal architect of the DADO machine, and is working in close
collaboration with Professor Shaw. The DADO machine will contain a
smaller number of more powerful processing elements than NON-VON,
and will thus have a "coarser" granularity. DADO is being
constructed with off-the-shelf Intel 8751 chips; each processor will
have 4K of EPROM and 8K of RAM.

Like NON-VON, the DADO machine will be configured as a binary tree.
Since it is being constructed using "off-the-shelf" components, a
working DADO prototype should be operational at an earlier date than
the first NON-VON machine (a sixteen node prototype should be
operational in three weeks!). While DADO will be of interest in its
own right, it will also be used to simulate the NON-VON machine,
providing a powerful testbed for the investigation of massive
parallelism.

As some people have legitimately pointed out, parallelism doesn't
magically solve all your problems ("we've got 2 million processors,
so who cares about efficiency?"). On the other hand, a lot of AI
problems simply haven't been practical on conventional machines, and
parallel machines should help in this area. Existing problems are
also sped up substantially [ O(N) sort, O(1) search, O(n^2) matrix
multiply ]. As someone already mentioned, vision algorithms seem
particularly well suited to parallelism -- this is being
investigated here at Columbia.

New architectures won't solve all of our problems -- it's painfully
obvious on our current machines that even fast expensive hardware
isn't worth a damn if you haven't got good software to run on it,
but even the best of software is limited by the hardware. Parallel
machines will overcome one of the major limitations of computers.

David Bacon
NON-VON/DADO Research Group
Columbia University

------------------------------

Date: 7 Nov 82 13:43:44 EST (Sun)
From: Mark Weiser <mark.umcp-cs@UDel-Relay>
Subject: Re: Parallelism and AI

Just to mention another project, the CS department at the University
of Maryland has a parallel computing project called Zmob. A Zmob
consists of 256 Z-80 processors called moblets, each with 64k
memory, connected by a 48 bit wide high speed shift register ring
network (100ns/shift, 25.6us/revolution) called the "conveyer
belt". The conveyer belt acts almost like a 256x256 cross-bar since
it rotates faster than a Z-80 can do significant I/O, and it also
provides for broadcast messages and messages sent and received by
pattern match. Each Z-80 has serial and parallel ports, and the
whole thing is served by a Vax which provides cross-compiling and
file access.
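
[A toy simulation in Python of the conveyer-belt idea; slot count and
messages are invented, and timing, word widths and the real Zmob
protocols are ignored. Messages ride around a ring of slots, and
because every slot passes every moblet once per revolution, a moblet
can pick off messages addressed to it or sent as a broadcast.]

N = 8                               # moblets (256 in the real Zmob)
belt = [None] * N                   # one belt slot per position
inboxes = [[] for _ in range(N)]    # messages each moblet has taken

def send(src, dest, payload):
    # A moblet puts a message into the belt slot at its own position.
    belt[src] = {"dest": dest, "payload": payload}

def revolve():
    # One full revolution: the belt shifts N times, so every slot
    # passes every moblet once.  A moblet copies off messages that
    # are addressed to it or broadcast; point-to-point messages are
    # then removed from the belt.
    global belt
    for _ in range(N):
        belt = belt[-1:] + belt[:-1]          # one shift of the ring
        for pos, slot in enumerate(belt):
            if slot is None:
                continue
            if slot["dest"] == pos or slot["dest"] == "broadcast":
                inboxes[pos].append(slot["payload"])
                if slot["dest"] == pos:
                    belt[pos] = None          # consume it

send(0, 5, "matrix row 17")
send(3, "broadcast", "start phase 2")
revolve()
print(inboxes[5])    # -> ['start phase 2', 'matrix row 17']
print(inboxes[2])    # -> ['start phase 2']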

There are four projects funded and working on Zmob (other than the
basic hardware construction), sponsored by the Air Force. One is
parallel numerical analysis, matrix calculations, and the like (the
Z-80's have hardware floating point). The second is parallel image
processing and vision. The third is distributed problem solving
using Prolog. The fourth (mine) is operating systems and software,
developing remote-procedure-call and a distributed version of Unix
called Mobix.

A two-moblet prototype was working a year and a half ago, and we hope
to bring up a 128 processor version in the next few months. (The
boards are all PC'ed and stuffed but timing problems on the bus are
temporarily holding things back).

------------------------------

End of AIList Digest
********************
