AIList Digest Wednesday, 6 Jun 1984 Volume 2 : Issue 70
Today's Topics:
Games - Computer War Games Request,
AI Tools - Stanford Computer Plans,
Scientific Method - Hardware Prototyping,
Seminar - Expert System for Maintenance
----------------------------------------------------------------------
Date: 1 Jun 84 13:22:15-PDT (Fri)
From: hplabs!intelca!cem @ Ucb-Vax.arpa
Subject: Computer War Games
Article-I.D.: intelca.287
This may be a rather simple problem, but at least it has no philosophical
ramifications.
I am developing a game that plays very similarly to the standard combat
situation type games that Avalon Hill is famous for. Basically, it has
various pieces of hardware, such as battleships, aircraft carriers,
destroyers, transports, tanks, armies, various aircraft, etc. and the
purpose is to build a fighting force using captured cities and defeat
the opposing force. It is fairly simple to make the computer a "game
board" however I would also like it to be at least one of the opponents
also. So I need some pointers on how to make the program smart enough
to play a decent game. I suspect there will be some similarities to
chess, since it too is essentially a war game. The abilities I hope to
endow my computer with are those of building a defense, initiating an
offense, and a certain amount of learnability. Ok world, what text
or tome describes techniques to do this? I suspect the book I have on
"gaming theory" is nearly useless; I'd like one that is a little more
practical and less "and this is the proof ...", with the next sentence
beginning ten pages later. Maybe something like Newman and Sproull's
graphics text, but for AI.
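[For what it's worth, the chess analogy suggests one concrete starting
point: minimax search with alpha-beta pruning over a position evaluator.
Below is a minimal Python sketch; the ToyGame tree and its scores are
invented purely for illustration, and a real war game would supply its
own move generator and a heuristic scoring material, terrain, and
captured cities.]

```python
def minimax(state, depth, alpha, beta, maximizing, game):
    """Best achievable score for the side to move, with alpha-beta pruning."""
    moves = game.moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state)          # static heuristic score
    best = float("-inf") if maximizing else float("inf")
    for m in moves:
        score = minimax(game.apply(state, m), depth - 1,
                        alpha, beta, not maximizing, game)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:                    # opponent will avoid this line
            break
    return best

class ToyGame:
    """A hand-built two-ply game tree standing in for a real move generator."""
    TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
    def moves(self, state):
        return self.TREE.get(state, [])
    def apply(self, state, move):
        return move                          # a move simply names the child node
    def evaluate(self, state):
        return self.SCORES.get(state, 0)

best = minimax("root", 2, float("-inf"), float("inf"), True, ToyGame())
```

[The maximizer picks branch "a" here: the opponent would hold branch "b"
to its minimum leaf, so the guaranteed score is 3. Depth-limited search
plus a decent evaluator is the usual substitute for searching a war
game's enormous full tree.]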
--Chuck McManis
ihnp4! Disclaimer : All opinions expressed herein are my
\ own and not those of my employer, my
dual! proper! friends, or my avocado plant.
/ \ /
fortune! \ /
X--------> intelca!cem
ucbvax! / \
\ / \
hplabs! rocks34! ARPAnet : "hplabs!intelca!cem"@Berkeley
/
hao!
------------------------------
Date: Fri 1 Jun 84 15:17:06-PDT
From: Mark Crispin <MRC@SU-SCORE.ARPA>
Subject: Stanford University News Service press release
[Forwarded from the Stanford bboard by CC.Clive@UTEXAS-20.]
[Forwarded from the UTexas-20 bboard by CMP.Werner@UTEXAS-20.]
STANFORD UNIVERSITY NEWS SERVICE
STANFORD, CALIFORNIA 94305
(415) 497-2558
FOR INFORMATION CONTACT: Joel Shurkin
FOR IMMEDIATE RELEASE
STANFORD COMMISSIONS COMPUTER TO REPLACE LARGE DEC-20'S.
STANFORD--
Stanford University is negotiating with a small Silicon
Valley company to build large computers to replace the ubiquitous
DECSYSTEM-20s now ``orphaned'' by their manufacturer, Digital
Equipment Corp. (DEC).
The proposed contract, which would total around $1.4
million, would commission two machines from Foonly Inc. of
Mountain View for delivery early in 1986. Foonly is owned by
former Stanford student David Poole.
According to Len Bosack, director of the Computer Science
Department's Computer Facilities, the Foonly F1B computer system
is about four times faster than the DEC model 2060 and 10 times
faster when doing floating-point computations (where the decimal
point need not be in the same place in each of the numbers
calculated) that are characteristic of large-scale engineering
and scientific problems.
Ralph Gorin, director of Stanford's Low Overhead Time
Sharing (LOTS) Facility -- the academic computer center -- said
the Foonly F1B system, which is totally compatible with the
DEC-20, is an outgrowth of design work done by Poole and others
while at the Stanford Artificial Intelligence Laboratory.
Since 1977, Foonly has built one large system, the F1,
and several dozen smaller systems. The Foonly F1B is a
descendant of the original F1, with changes reflecting advances
in integrated circuit technology and the architectural
refinements (internal design) of the latest DEC-20s.
A spokesman for DEC said the company announced last year
it had discontinued work on a successor to the DEC-20, code named
``Jupiter,'' and would continue to sell enhanced versions of the
large mainframe. Service on the machines was promised for the
next ten years.
However, said Sandra Lerner, director of the Computing
Facilities at the Graduate School of Business, the
discontinuation of DEC-20 development left approximately 1,000
customers world-wide without a practicable ``growth path.''
Ten DECSYSTEM-20 computers on campus make that machine the
most numerous large system at Stanford.
The Graduate School of Business uses its two DEC-20s for
administration, coursework, and research. The Computer Science
Department uses two systems for research and administration.
LOTS, the academic computer facility, supports instruction and
unsponsored research on three systems and hopes to add one more
before the F1B is available.
Other DEC-20s are at the Department of Electrical
Engineering, the artificial intelligence project at the Medical
Center (SUMEX), and the recently formed Center for the Study of
Language and Information (CSLI).
The Stanford University Network (SUNet), the main
university computer communications network, links together the
10 DEC-20s, approximately 30 mid-size computers, about 100
high-performance workstations, and nearly 400 terminals and
personal computers.
The DEC-20 has been a cornerstone of research in artificial
intelligence (AI). Most of the large AI systems evolved on the
DEC-20 and its predecessors. For this reason, Stanford and other
computer science centers depend on these systems for their
on-going research.
Lerner said the alternative to the new systems would
entail prohibitive expense to change all programs accumulated
over nearly twenty years at Stanford and to retrain several
thousand student, faculty, and staff users of these systems. The
acquisition of the Foonly systems would be a deliberate effort to
preserve these university investments.
6-1-84 -30- JNS3A
EDITORS: Lerner may be reached at (415) 497-9717, Gorin at
497-3236, and Bosack at 497-0445.
------------------------------
Date: Mon 4 Jun 84 22:22:51-EDT
From: David Shaw <DAVID@COLUMBIA-20.ARPA>
Subject: Correcting Stone's Mosaic comments
Reluctant as I am to engage in a computer-mediated professional spat, it is
clear that I can no longer let the inaccuracies suggested by Harold Stone's
Mosaic quote go uncorrected. During the past two weeks, I've been
inundated with computer mail asking me to clarify the issues he raised. In
my last message, I tried to characterize what I saw as the basic
philosophical differences underlying Harold's attacks on our research.
Upon reading John Nagle's last message, however, it has become clear to me
that it is more important to first straighten out the surface facts.
First, I should emphasize that I do not in any way hold John Nagle
responsible for propagating these inaccuracies. Nagle interpreted Stone's
remarks in Mosaic exactly as I would have, and was careful to add an
"according to the writer quoted" clause in just the right place. I also
agree with Nagle that Stone's observations would have been of interest to
the AI community, had they been true, and thus can not object to his
decision to circulate them over the ARPANET. As it happens, though, the
obvious interpretation of Stone's published remarks, as both Nagle and I
interpreted them, was, quite simply, counterfactual.
Nagle interpreted Stone's remarks, as I did, to imply that (in Nagle's
words) "NON-VON's 1 to 3 are either unfinished or were never started."
(Stone's exact words were "Why is there a third revision when the first
machine wasn't finished?") In fact, a minimal (3 processing element)
NON-VON 1 has already been completed and thoroughly tested. The custom IC
on which it is based has been extensively tested, and has proved to be 100%
functional. Construction of a non-trivial (though, at 128 PE's, still
quite small) NON-VON 1 machine awaits only the receipt from ARPA's MOSIS
system of enough chips to build a working prototype. If MOSIS is in fact
able to deliver these parts according to the estimated timetable they have
given us, we should be able to demonstrate operation of the 128-node
prototype before our original milestone date of 12/84.
In fact, we have proceeded with all implementation efforts for which we
have received funding, have developed and tested working chips in an
unusually short period of time, and have met each and every one of our
project milestones without a single schedule overrun. When the editors of
Mosaic sent me a draft copy of the text of their article for my review, I
called Stone, and left a message on his answering device suggesting that
(even if he was not aware of, did not understand, or had some principled
objection to our phased development strategy) he might want to change the
words "wasn't finished" to "hasn't yet been finished" in the interest of
factual accuracy. He never returned my call, and apparently never
contacted Mosaic to correct these inaccuracies.
For the record, let me try to explain why NON-VON has so many numbers
attached to its name. NON-VON 2 was a (successful) "paper-and-pencil"
exercise intended to explore the conceptual boundaries of SIMD vs. MIMD
execution in massively parallel machines. As we have emphasized both in
publications and in public talks, this architecture was never slated for
physical implementation. To be fair to Stone, he never explicitly said
that it was. Still, I (along with Nagle and others who have since
communicated with me) felt that Stone's remarks SUGGESTED that NON-VON 2
provided further evidence that we were continually changing our mind about
what we wanted to build, and abandoning our efforts in midstream. This is
not true.
NON-VON 3, on the other hand, was in fact proposed for actual
implementation. Although we have not yet received funding to build a
working prototype, and will probably not "freeze" its detailed design for
some months, considerable progress has been made in a tentative design and
layout for a NON-VON 3 chip containing eight 8-bit PE's. The NON-VON 3 PE
is based on the same general architectural principles as the working
NON-VON 1 PE, but incorporates a number of improvements derived from
detailed area, timing, and electrical measurements we have obtained from
the NON-VON 1 chip. In addition, we are incorporating a few features that
were considered for implementation in NON-VON 1, but were deemed too
complex for inclusion in the first custom chip to be produced at Columbia.
While we still expect to learn a great deal from the construction of a
128-node NON-VON 1 prototype, the results we have obtained in constructing
the NON-VON 1 chip have already paid impressive dividends in guiding our
design for NON-VON 3, and in increasing the probability of obtaining a
working, high-performance, 65,000-transistor chip within the foreseeable
future. Based on his comments, I can only assume that, in my position,
Stone would have attempted to jump directly from an "armchair design" to a
working, highly optimized 65,000-transistor nMOS chip without wasting any
silicon on interim experimentation. This strategy has two major drawbacks:
1. It tends to result in architectures that micro-optimize (in both the
area and time dimensions) things that ultimately don't turn out to make
much difference, at the expense of things that do.
2. It often seems to result in chips that never work. Even when they do,
the total expenditure for development, measured in either calendar months,
designer-months, or fabrication costs, is typically far larger than is the
case with a phased strategy employing carefully selected elements of
"bottom-up" experimentation.
Finally, let me again state my view that one of the essential characteristics
of the emerging paradigm for experimental research in the field of
nonstandard architectures is the development of "non-optimal" machines that
nonetheless clearly explicate and test new architectural ideas. Even in
NON-VON 3, we have not attempted to embody all of (or even all of the most
important) architectural features that we believe will ultimately prove
important in massively parallel machines.
By way of illustration, we have thus far limited the scope of our
experimental work to very fine grain SIMD machines supporting only a single
physical PE interconnection scheme. This is not because we believe that
the future of computation lies in the construction of such machines. On
the contrary, I am personally convinced that, if massively parallel
machines ever do find extensive use in practical applications (and, in my
view, it is too early to predict whether they will), they are almost
certain to exhibit heterogeneity in all three dimensions (granularity,
synchrony and topology).
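[To make the SIMD model concrete, here is a minimal sketch in Python of
single-instruction-stream execution; it is a generic illustration, not
NON-VON's actual instruction set. One operation is broadcast to every
processing element in lockstep, and conditionals are handled by masking
PEs off rather than by letting them branch independently as in MIMD.]

```python
class PE:
    """One fine-grain processing element with its own local state."""
    def __init__(self, value):
        self.acc = value        # local accumulator
        self.enabled = True     # participation mask bit

def broadcast(pes, op):
    """Apply one instruction, in lockstep, to every enabled PE."""
    for pe in pes:
        if pe.enabled:
            op(pe)

pes = [PE(v) for v in (1, 2, 3, 4)]

# "if acc is odd": set the enable mask instead of branching
broadcast(pes, lambda pe: setattr(pe, "enabled", pe.acc % 2 == 1))
# every enabled PE executes the same instruction: double the accumulator
broadcast(pes, lambda pe: setattr(pe, "acc", pe.acc * 2))
for pe in pes:                  # re-enable everyone for the next phase
    pe.enabled = True
```

[After the two broadcasts the accumulators hold 2, 2, 6, 4: only the
odd-valued PEs were doubled. The cost of masking idle PEs through both
arms of a conditional is exactly the kind of SIMD-vs-MIMD trade-off a
paper-and-pencil exercise like NON-VON 2 can explore.]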
Ultimately, we hope to broaden the scope of the NON-VON project to consider
the opportunities and problems associated with more than one class of
processing element, multiple-SIMD, as opposed to strictly SIMD, execution
schemes, and the inclusion of additional communication links. In the
context of a research (as opposed to a development) effort, however, it
often seems to be more productive to explore a few mechanisms in some
detail than to incorporate within the first architectural experiment
all the features that seem like they might ultimately come in handy.
The NON-VON 1 prototype, along with our proposed NON-VON 3 machine,
exemplify this approach to experimental research in computer architecture.
Until we lose interest in the problems of massively parallel computation,
or run out of either unresolved questions or the funding to answer them, we
are likely to stick to our current research strategy, which is based in
part on the implementation of experimental hardware in multiple, partially
overlapped stages. Although I know this will upset Harold, there may thus
someday be a NON-VON 4, a NON-VON 5, and possibly even a NON-VON 6. Some
of these later successors may never get past the stage of educated doodles,
while others may yield only concrete evidence of the shortcomings of some
of our favorite architectural ideas.
I believe it to be characteristic of the paradigm shift to which I referred
in my last message that the very strategy to which we attribute much of our
success is casually dismissed by Stone as evidence of indecision and
failure. As decreasing IC device dimensions and the availability of
rapid-turnaround VLSI facilities combine to significantly expand the
possibilities for experimental research on computer architectures, it may
be useful to take a fresh look at our criteria for evaluating research methods
and research results in this area.
David
P.S. For those who may be interested, a more detailed explanation of the
rationale behind our plan for the phased development of NON-VON prototypes
is outlined in a paper presented at COMPCON '84. This paper was not,
however, available to Stone at the time his remarks were quoted in Mosaic;
in general, our failure to promptly publish papers describing our work is
probably the source of much legitimate criticism of the NON-VON project.
------------------------------
Date: 29 May 1984 16:59-EDT
From: DISRAEL at BBNG.ARPA
Subject: Seminar - Expert System for Maintenance
[Forwarded from the MIT bboard by SASW@MIT-MC.]
There will be a seminar on Thursday, June 7th at 10:30 in the 2nd floor
large conference room. The speaker will be Gregg Vesonder of Bell
Labs.
ACE: An Expert System for Telephone Cable Maintenance
Gregg T. Vesonder
Bell Laboratories
Whippany, NJ
As more of the record keeping and monitoring functions of the local
telephone network are automated, there is an increasing burden on the
network staff to analyze the information generated by these systems.
An expert system called ACE (Automated Cable Expertise) was developed
to help the staff manage this information. ACE analyzes the
information by using the same rules and procedures that a human analyst
uses. Standard knowledge engineering techniques were used to acquire
the expert knowledge and to incorporate that knowledge into ACE's
knowledge base. The most significant departure from "standard" expert
system architecture was ACE's use of a conventional data base
management system as its primary source of information. Our experience
with building and deploying ACE has shown that the technology of expert
systems can be useful in a variety of business data processing
environments.
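[The architecture described in the abstract can be sketched minimally as
follows; the trouble-report schema, table name, and threshold below are
hypothetical, but the shape matches the description: analyst-style rules
are evaluated against records pulled from a conventional database
(sqlite3 here) rather than from a hand-built fact base.]

```python
import sqlite3

# Conventional DBMS as the expert system's primary information source
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE troubles (cable TEXT, kind TEXT)")
conn.executemany("INSERT INTO troubles VALUES (?, ?)",
                 [("CA-17", "wet"), ("CA-17", "wet"), ("CA-17", "open"),
                  ("CA-02", "open")])

def repeated_trouble_rule(db, threshold=3):
    """Analyst rule: flag any cable with >= threshold trouble reports."""
    rows = db.execute("SELECT cable, COUNT(*) FROM troubles "
                      "GROUP BY cable").fetchall()
    return [f"{cable}: schedule preventive maintenance"
            for cable, n in rows if n >= threshold]

recommendations = repeated_trouble_rule(conn)
```

[Here only cable CA-17 crosses the threshold and is flagged. The point
of the departure from "standard" expert system architecture is exactly
this: the rules read the operational database the staff already keeps,
instead of a separately maintained knowledge representation of it.]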
------------------------------
End of AIList Digest
********************