AIList Digest            Friday, 18 Oct 1985      Volume 3 : Issue 148 

Today's Topics:
Queries - Go & VMS vs. UNIX for AI & KR Languages for Semantic Nets &
Common Lisp Compiler/Interpreter for VAX/750(ULTRIX),
Literature - Foreign Language Abstracting,
Applications - ALV Demo,
AI Tools - YAPS & AI Machines

----------------------------------------------------------------------

Date: 15-Oct-85 22:02-EDT
From: David Nicol <cscboic%BOSTONU.bitnet@WISCVM.ARPA>
Subject: the ancient oriental game of Go

I am in the process of writing a program to mediate a Go game, and I hope
to write an algorithm or two for playing as well.
If anyone has done any thinking about algorithms for playing Go, or perhaps
already has one, I would very much appreciate hearing from you.

Cscboic@bostonu
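
[A minimal, illustrative Common Lisp sketch of the group/liberty computation
that most Go-playing programs are built on. This is not from the poster; the
board representation and all names are hypothetical.]

;;; Board: a 19x19 array holding :BLACK, :WHITE, or NIL, e.g.
;;;   (make-array '(19 19) :initial-element nil)

(defparameter *board-size* 19)

(defun neighbors (row col)
  "Return the on-board orthogonal neighbors of (ROW COL)."
  (loop for (dr dc) in '((1 0) (-1 0) (0 1) (0 -1))
        for r = (+ row dr)
        for c = (+ col dc)
        when (and (< -1 r *board-size*) (< -1 c *board-size*))
          collect (list r c)))

(defun group-and-liberties (board row col)
  "Flood-fill the group containing the stone at (ROW COL).
Returns the list of points in the group and its number of liberties."
  (let ((color (aref board row col))
        (seen  (make-hash-table :test #'equal))
        (libs  (make-hash-table :test #'equal))
        (group '())
        (stack (list (list row col))))
    (loop while stack
          do (destructuring-bind (r c) (pop stack)
               (unless (gethash (list r c) seen)
                 (setf (gethash (list r c) seen) t)
                 (push (list r c) group)
                 (dolist (n (neighbors r c))
                   (let ((occupant (aref board (first n) (second n))))
                     (cond ((null occupant)     (setf (gethash n libs) t))
                           ((eq occupant color) (push n stack))))))))
    (values group (hash-table-count libs))))

;;; A group with zero liberties is captured; a legal-move checker and the
;;; simplest playing heuristics can be layered on top of this primitive.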

------------------------------

Date: Wed, 16 Oct 85 11:50 EST
From: "Christopher A. Welty" <weltyc%rpicie.csnet@CSNET-RELAY.ARPA>
Subject: VMS vs UNIX for AI development

I know this may set people at each other's throats, but it is a
legitimate concern of mine, so here goes:

What experience has there been out there with AI (mainly ES)
development on VMS? We use both UNIX and VMS here at RPI, and I have
found in my experience that VMS makes it more difficult to do work,
and UNIX makes it easier. But there seem to be a number of people
(who don't even work for DEC) who swear by VMS. There must be some rational
reason for this. I don't really want to see a discussion of the Operating
Systems themselves (as that is another Newslist), just what support
they offer for development of Expert Systems (mainly LISP, but feel free
to add other languages). I know what UNIX offers; let me hear (see) what
VMS offers.

-Christopher Welty
RPI / CIE Systems Mgr.

------------------------------

Date: 16 Sep 85 1615 WEZ
From: U02F%CBEBDA3T.BITNET@WISCVM.ARPA (Franklin A. Davis)
Subject: Query: Languages for knowledge rep using semantic nets?

We are interested in knowledge representation using semantic nets
and frames, and we would like to know who has experience with
special languages for this purpose. Furthermore, are there
distributors for such software packages? Thanks in advance.

Regards, Franklin Davis <U02F@CBEBDA3T.BITNET>
Institut fuer Informatik und angewandte Mathematik
Universitaet Bern
Laenggassstrasse 51
CH-3012 Bern
Switzerland

------------------------------

Date: Thu, 17 Oct 85 9:24:26 EDT
From: "Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
Subject: Common Lisp Compiler/Interpreter for VAX/750(ULTRIX)

Dear Readers,
Can somebody give me details about a Common Lisp compiler
and interpreter for a VAX/750 running ULTRIX? I have heard that
KEE and Knowledge Craft (KC) work only under VMS, and I want to port
it to the above environment. Any ideas or leads are welcome.
Please message me directly at the following net addresses:

Mailnet: srini@NJIT-EIES.MAILNET
Arpanet: srini%NJIT-EIES.MAILNET@MIT-MULTICS.ARPA
USMAIL: S Krishnamurthy
COMSAT LABS, NTD.
22300 Comsat Drive.
Clarksburg, MD-20871
(301)428-4531

Thanks in advance.
Srini.

------------------------------

Date: 17 Oct 1985 0744-PDT (Thursday)
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Last call for assistance: helping with foreign language abstracting


I would like to thank all of the people who responded to my first
call for people to help in the translation/abstraction of foreign
language documents. I have been travelling quite a bit during the
past five weeks, so next week, I will have a chance to lay the
groundwork for determining what journals to monitor and where to
post information.

For those of you who missed the earlier posting: I am seeking people
interested in monitoring foreign-language technical documents with
an eye to posting significant new articles to various bulletin boards.
This would happen prior to translation, and would hopefully speed translation
of potentially significant papers in AI, graphics, and so forth.
The most critical languages are the East Asian ones, Japanese and
Chinese, along with perhaps French and other Western European languages. We have
a few people for each, but it would help to spread the load out.

If you are interested, or want to hear more, send me mail via one of the
UUCP/ARPAnet addresses listed below.

--eugene miya
NASA Ames Research Center [Rock of Ages Home for ...]
eugene@ames-nas.ARPA
UUCP: {ihnp4,hao,hplabs,nsc,cray,research,decwrl}!ames!amelia!eugene

------------------------------

Date: 8 Oct 1985 1205-PDT
From: LAWS at SRI-AI.ARPA
Subject: Ogling Overseas Technology

From the EE's Tools & Toys column,
IEEE Spectrum, Volume 22, No. 10, 10/85, p. 85

The latest research results being published outside the United States
may not be as difficult to monitor as one might think. The U.S. Dept.
of Commerce publishes a weekly newsletter called Foreign Technology
that abstracts new reports and papers from outside the U.S. that are
available through the National Technical Information Service. Each
abstract lists the report's title, author(s), date, and NTIS publication
number, along with a brief synopsis.

Topics that the Commerce Department tracks in the newsletter are:
biomedical technology; civil, construction, structural, and building
engineering; communications; computer technology; electro and optical
technology; energy; manufacturing and industrial engineering; materials
sciences; physical sciences; transportation technology; and mining and
minerals technology.

An annual subscription to the weekly newsletter, which is indexed
every January, costs $90 in North America. To subscribe or to request
subscription prices for other areas, write to U.S. Dept. of Commerce,
National Technical Information Service, 5285 Port Royal Rd., Springfield,
VA 22161; telephone (703) 487-4630.

------------------------------

Date: 17 Oct 1985 20:51:51 EDT
From: Spacely's Space Sprockets <MMDA@USC-ISI.ARPA>
Subject: ALV DEMO


In response to the recent post regarding the Autonomous Vehicle demo:

The Autonomous Land Vehicle project, sponsored by DARPA through the
Army ETL, is part of DARPA's Strategic Computing Program, sort of
the US's answer to Japan's Fifth Generation effort. Martin Marietta
Denver Aerospace (the Advanced Automation Technology Section) is
the prime contractor of the project -- we are actually developing the
system and performing the demos. The project started in late '84
(actually early '85 for most of us), and the first demo was in May '85.
We have another demonstration set for next month, and will have about
one per year for the next four years, I believe, each demo being more
ambitious.

The May demo was a preliminary road-following demonstration, with
the main point being that we actually got the vehicle going, hardware mounted,
some communications figured out, and made it follow a road autonomously.
It traversed a 1 km stretch of road at a speed of about 3 km/h (yep, pretty
slow). The vision system (based on a Vicom image processing computer)
produced scene models about every 3 seconds for the navigator/pilot
to interpret in order to control the vehicle. The scene model is basically a
set of 3D road centerpoints.
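
[Purely as an illustration of the data flow described above -- this is not
the actual ALV code or data layout -- the scene model and the pilot's use of
it amount to something like the following Common Lisp sketch:]

;;; One sampled road-centerline point, in the vehicle's coordinate frame
;;; (x forward, y left, z up), in meters.
(defstruct centerpoint x y z)

(defun steering-command (scene-model lookahead)
  "Pick the first centerpoint at least LOOKAHEAD meters ahead and return the
heading error (radians) that the pilot would try to steer to zero."
  (let ((target (find-if (lambda (p) (>= (centerpoint-x p) lookahead))
                         scene-model)))
    (when target
      (atan (centerpoint-y target) (centerpoint-x target)))))

;;; Every few seconds the vision system hands the pilot a fresh list of
;;; CENTERPOINT structures; between updates the pilot keeps steering toward
;;; the last target while the vehicle creeps along.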

In November, the vehicle will travel about 5 times as far at speeds up to
10 km/h and handle things such as shadows on the road, intersecting roads,
and sharp curves. We will be using an ERIM laser range scanner as well
as the camera we used in May to provide road images. In later demos we will
avoid obstacles, traverse cross-country terrain, and perform other neato tricks.

Martin Marietta is officially the integrator of this project, and other
universities and companies also have research contracts -- University of
Maryland, Carnegie-Mellon University, SRI, AI&DS, Hughes, Honeywell,
and maybe some others that I'm not aware of. So far, most of the work
contributing to the demos has been done here at Martin Marietta. These other
folks will be contributing a lot in the future.

A paper describing the project and the May configuration will come out soon
in the proceedings of the SPIE Conference on Intelligent Robotics and Computer
Vision (Sept. '85) [1].

Matthew Turk
MMDA@USC-ISI.arpa

[1] Lowrie, Thomas, Gremban, Turk
The Autonomous Land Vehicle Preliminary Road-following Demonstration

------------------------------

Date: 18 Oct 1985 09:29-EDT
From: Hans.Tallis@ML.RI.CMU.EDU
Subject: YAPS

Srinivasan,
I'm working at Mitre for the summer (tallis@mitre) and we have a
version of YAPS which is source-code runnable under Franz,
Lambda Zetalisp and Symbolics Zetalisp. Since YAPS is
practically public domain, Liz Allen at Maryland probably
wouldn't mind our giving you a copy. Send mail if you're
interested.
--Hans

------------------------------

Date: 14 Oct 1985 23:48:39-BST
From: Aaron Sloman <aarons%svga@ucl-cs.arpa>
Subject: Do you really need an AI machine?

Mon Oct 14 23:47:48 BST 1985

To John and Srinivasan,
At Sussex University we have been involved in AI teaching and
research for many years. Being British we have a relatively
small budget, and for this reason we have resisted going for
machines like Symbolics, since a wonderful tool is not much
use if you have to spend most of your time queueing up to use
it. Instead we have mostly been using VAXen for a range of AI
projects.

But we did not like the AI software available, so we developed
our own: POPLOG. We've found that with suitable software 10
to 14 AI MSc students can be kept happy most of the time on a
4 Mbyte VAX 750 running Berkeley Unix or VMS. For more
advanced researchers the number drops, as it does if you get
someone doing image or speech processing. We can support this
number because most of the time most people are editing, not
running their programs. Of course, we are then stuck with a
terrible human-machine interface: a 24 by 80 VDU. So we are
now trying to shift as much as possible of our research onto
SUN workstations - cheaper than Symbolics. At least SUNs run
Unix, unlike most purpose-built AI workstations, and for us
that's a big advantage. We use POPLOG, but there are also
Quintus Prolog, Common Lisp, and other AI tools available on
the SUN. Of course it will be a little while before these
machines have the mature interfaces available on Lisp
machines.
Aaron Sloman, Cognitive Studies Programme,
University of Sussex, Brighton, England.

------------------------------

Date: Tue, 15 Oct 85 21:22:55 PDT
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: AI Hype & Big AI Machines.

I think Martin (#146) missed the point of Wyland's (#145) salient
observation that AI researchers focus on *problems* while disciples of
other forms of CS focus on *solutions*. For those of you near good
college libraries, let me suggest that you look up the "Collected Works
of John von Neumann" and read what he has to say about computers. In
short, he advocates (see Vol. 5) that computers be used to obtain
insights into problems, which are then presumably solved in closed
form.
It makes sense, then, to use AI to develop an understanding
of problems which are really too difficult to deal with without AI
techniques, then close in on a CS *solution*, and finally, with the
insights so obtained, on a closed-form, mathematically verifiable
true solution.
In my area of interest we have to deal with lots of nasty
solutions to differential equations. I have finally sold my
bosses on getting Macsyma to help us beat these monsters into a form
which we can implement in reasonable time on our mainframes.
[Macsyma is a super-duper symbolic algebra package which costs
$5K for a Symbolics 3670 from Symbolics -- it was developed over
the last 20 years at MIT -- and yes, the price just dropped.]
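
[As a hypothetical illustration of the workflow being described -- not the
poster's actual equations -- suppose the symbolic algebra work reduces a
messy ODE solution to the closed form x(t) = x0 * exp(-k*t); the mainframe
implementation then becomes a trivial, fast routine:]

(defun damped-value (x0 k tm)
  "Evaluate the closed-form solution x(t) = x0 * exp(-k t), obtained
symbolically, instead of integrating the original ODE numerically."
  (* x0 (exp (- (* k tm)))))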

With regard to messages from Peck (#144) suggesting small
Xerox AI machines and Connolly (#145) and Welty (#144) singing the
praises of large AI machines -- there is no doubt that if you have
experienced AI investigators and a network of solid general-purpose
processing, the large AI machines (and the small ones) are worth their
cost.
The original question was, however, where does one start?
Cugini was correct in pointing out the risks of jumping into
the AI culture too quickly, for the reasons he stated and two others:
1) *solutions* are often dependent upon CS techniques; 2) if
you don't know that AI is part of your solution (e.g., you are
not part of an AI research group), why commit yourself prematurely?
Coupled with the availability of good learning systems and
adequate (but perhaps not the best) development systems on the VAX and
PC-AT, there is really little need to invest initially in a
dedicated AI machine of any size (although Xerox's 6085 sure looks
nice). Slow as Gold Hill's Common Lisp is, it is the Lisp system
of choice for beginning Lispers at our organization (we also have
a Symbolics 3670, as I implied above). In fact, rolling out of
bed in the morning does not qualify one to appreciate the Symbolics
development environment. {If it did, it probably would not be
worth using.}

[flame off]

Richard Jennings
Arpa: jennings@aerospace

I don't work for them; I just use their ARPAnet port.

------------------------------

Date: Wed, 16 Oct 85 10:21 CDT
From: Joseph_Tatem <tatem%ti-eg.csnet@CSNET-RELAY.ARPA>
Subject: AI machines

I have been reading with interest the discussion about AI (LISP)
machines and their usefulness. Since I have been thinking about this
myself, I will take this opportunity to throw my two cents in.

From what I hear, the average time that it takes to come up to speed
on one of these LISP machines is about 3 months. This corresponds
roughly to my own experience. Of course, some vendors provide online
services that can aid you to various degrees. However, I have found
that John Cugini's sentiments are common and not altogether
ill-founded. Have you ever done any work with the Window System on
one of these beasts?? If you have, you know that it is a mess and is
not well documented. You find flavors like STREAM-MIXIN-WITH-HACKS.
When you look these up in the manual, you will as likely as not read
something like, "This function does not work reliably; don't use it."

On the other hand, once you have learned your way around a little
bit, you find that you are using a very powerful machine with a nice
development environment. I can get at lots of information in the
debugger and I can incrementally develop my systems, etc, etc. I
find that I have become spoiled. Things that I once did fine without
(or with) now seem essential (unnecessary): I don't know how I lived
without (with) them before.

I believe that the problem is a design issue. Most of these machines
are based on software that was developed at (and licensed by) MIT.
It seems to me that the system was never really designed (at least
the user-interface), but that it was a concatenation of Master's
Theses (and other grad. student type work). This is not to say that
it is not a good system. There has been a lot of good work put into
these machines. It is just that there needs to be some consistency.

I see a need to redesign at two levels. First, I would like to see
a consistent set of functions (and flavors, etc). Secondly, I would
like to see a well-designed user interface. A mouse and a few
windows do not make a good interface just by being there. By now, we
should know the kinds of things that make computers easy to use. There
certainly is no dearth of ideas in the literature. At the very least,
I would like a brand-new user to be able to sit down at one of
these beasts and figure out which mouse button to
click or which function to enter to get started.

So whaddya think?? Joe Tatem
tatem%ti-eg@csnet-relay

Note: The opinions expressed herein are strictly my own and in no
way reflect those of my employer.

------------------------------

End of AIList Digest
********************
