AIList Digest            Friday, 21 Feb 1986       Volume 4 : Issue 32 

Today's Topics:
Queries - AI Teaching/Tutoring Package &
Expert Systems and Software Engineering &
Micro-PROLOG & 68k Unix LISP & NL Dialogue,
Literature - LISP Texts & Logo & MRS,
Methodology - Taxonomizing

----------------------------------------------------------------------

Date: Wed, 12 Feb 86 15:37:28 EST
From: munnari!goanna.oz!ysy@seismo.CSS.GOV (yoke shim YAP)
Subject: AI Teaching/Tutoring Package


Recently I heard that a teaching/tutoring package has been written
using Artificial Intelligence techniques; apparently this was reported
in an article. Has anyone read or heard of this article? I would like
to get hold of it and, if possible, contact the author of the package.

Y. S. YAP
Dept. of Computing
Faculty of Applied Science
RMIT ...or munnari!goanna.oz!ysy@SEISMO.ARPA
GPO BOX 2476V
Melbourne VIC. 3001
AUSTRALIA

------------------------------

Date: Mon, 17 Feb 86 14:18:11 -0100
From: Jorun Eggen <j_eggen%vax.runit.unit.uninett@nta-vax.arpa>
Subject: Expert Systems and Software Engineering

Hello out there!

Can anyone give me references to work carried out in order to see what
theory, methodologies and tools from Software Engineering can do to assist
the process of building expert systems? Or, to put it another way: is Knowledge
Acquisition today at the same level as Software Engineering was 20 years ago?
If the answer is yes, what can we learn from Software Engineering to help us
avoid reinventing the wheel and instead concentrate on the new, unsolved
problems?

References to articles, reports, people, books etc. are welcome.
Thanks a lot, and be aware that my net-address "uninett" is spelled with
double t.

Jorun Eggen
RUNIT/SINTEF
N-7034 Trondheim-NTH
NORWAY

------------------------------

Date: 13 Feb 86 16:50:56 GMT
From: amdcad!lll-crg!gymble!umcp-cs!deba@ucbvax.berkeley.edu (Deba Patnaik)
Subject: micro-PROLOG info wanted

I am thinking of purchasing "micro-PROLOG". I would like to know its
price, who distributes it, and any comments on it.
Are there any other PROLOGs (interpreters or compilers)?
deba@maryland.arpa
deba%umdc.bitnet@wiscvm.arpa

------------------------------

Date: Mon, 17 Feb 86 16:00:16 -0500
From: johnson <johnson@dewey.udel.EDU>
Subject: seeking lisp for 68k unix world

Computer Logic, Inc., is seeking a license for an efficient lisp
running on 680XX-based Unix systems.

We are looking for an implementation of lisp that meets the
following criteria:

. source code is available
. runs on 68k-based UNIX machines
. allows loading of modules written in C, or other system-level language
. small and fast (even at the expense of advanced features)
. one-time license available, or nominal run-time environment royalties
. floating-point and integer arithmetic (arbitrary precision is NOT required)
. lisp "impurities" such as setq, rplaca, and rplacd (see the short illustration below)
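
For readers who have not met these "impurities", a brief illustration of the
destructive operations meant here (standard Lisp behavior, shown only for
context; the variable name X is just an example):

(setq x (list 'a 'b 'c))   ; X is now the fresh list (A B C)
(rplaca x 'z)              ; destructively replace its car: X is now (Z B C)
(rplacd x '(q))            ; destructively replace its cdr: X is now (Z Q)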

If you know of any lisp that meets these criteria, please pass us a pointer
to its author.


If YOU own an implementation of lisp, and would like to SELL it to us,
please send us:

. a description of your lisp, including:
. a list of the primitive functions
. the hardware/software requirements for a run-time system
. the hardware/software requirements for building your system from
source code
. some indication of the hard and soft limits of your system
(w/r/t maximum number of objects, number of symbols,
number of numbers, etc.)
. a brief description of any special features that you feel
would expedite software development in your lisp,
{editors, compilers, structured-objects, environment-dumps}

. how many times can you perform (T1 2000) without garbage collection on
a machine with 1048576 bytes of available memory?
(please extrapolate or interpolate from tests run on whatever
machine is available to you; be sure to tell us how you arrived
at your figure)
When the garbage collection does occur, how long does it take?

. how long does (T2 20) take?

. if your lisp has an iterative construct (do, loop, or prog with goto)
how long does it take to perform (T3 5000)?


Feel free to modify these functions syntactically to allow them to run
in your version of lisp, but please include the modified versions along
with your results. (ps: these functions will run unmodified
in muLISP-85)

Most unix systems provide a means to measure the elapsed time allocated
to a given process (try "man time" on your system). Please give your
times in terms of this quantity. If no such facility is available, be
sure to indicate the conditions under which you ran the benchmark.

(DEFUN T1
  (LAMBDA (N)
    (COND ((> N 1) (LIST N (T1 (- N 1))))
          (T (LIST 1)))))

(DEFUN T2
  (LAMBDA (N)
    (COND ((< N 2) 1)
          (T (+ (T2 (- N 1)) (T2 (- N 2)))))))

(DEFUN T3
  (LAMBDA (N)
    (LOOP (IF (= N 0) (RETURN)) (SETQ N (- N 1)))))
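
As one example of the kind of syntactic modification invited above, here is
a hedged sketch of how the same benchmarks might look in plain Common Lisp
style (the muLISP-85 forms above remain the reference versions; the TIME
macro shown at the end is standard Common Lisp but may not exist in your
dialect):

(defun t1 (n)                       ; list-consing benchmark
  (cond ((> n 1) (list n (t1 (- n 1))))
        (t (list 1))))

(defun t2 (n)                       ; naive doubly-recursive Fibonacci
  (cond ((< n 2) 1)
        (t (+ (t2 (- n 1)) (t2 (- n 2))))))

(defun t3 (n)                       ; simple countdown loop
  (loop (if (= n 0) (return)) (setq n (- n 1))))

;; Where a TIME macro is available, timing runs might look like:
;;   (time (t2 20))
;;   (time (t3 5000))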

Please send all description responses to:

Apperson H. Johnson
Computer Logic Inc.
2700 Philadelphia Pike, P.O. Box 9640
Wilmington, De. 19809

{johnson@udel will read any pointers}

------------------------------

Date: 13 Feb 86 11:34:13 GMT
From: mcvax!ukc!cstvax!hwcs!aimmi!george@SEISMO (George Weir)
Subject: Dialogue help please needed ?

I am currently working on a Dialogue Management System with Natural
Language Understanding in it. Despite weeks of effort (including
Saturdays), I find my system is still unable to handle several
forms of natural expression.
Please help to cure my depression: if you have a working system which
manages dialogue in (of course) natural language (complete with an efficient
interpreter/compiler), and it is able to cope with all known syntactic forms,
as well as most semantics, please send me a copy, or post it to this news
group.
I prefer a system which works in English, but Norwegian would do.
Thank you,
Ingy
P.S. It doesn't matter if your documentation isn't up to IEEE standards, as
long as it is close.

------------------------------

Date: Thu, 13 Feb 86 11:12:03 pst
From: sdcsvax!uw-beaver!ssc-vax!bcsaic!pamp@ucbvax.berkeley.edu
Subject: Re: request for LISP source code

In article <8602031844.AA28255@ucbvax.berkeley.edu> you write:

> I am teaching an AI course for the continuing education program at
>St. Mary's College in Southern Maryland. This is my first time teaching
>LISP and I would appreciate access to the source code for "project-
>sized" LISP programs or any other teaching aids or material. We are
>using the 2nd edition of both Winston's AI and Winston & Horn's LISP.
>I hate to ask for help, but we are pretty far from mainstream AI
>down here and my students and I all have full time jobs so any help we
>can get from the professional AI community would be greatly
>appreciated by all of us.
>
> Bob Woodruff
> Veda@paxrv-nes.arpa


I'd like to make a recommendation on additional texts. We have found
Winston & Horn to be a bit irritating to work with, especially since
the problems and answers are either too vaguely stated or filled with
bugs. Two other books that we have found to handle LISP more adequately
are:

Touretzky, David S., 1984. LISP: A Gentle Introduction to
Symbolic Computation. Harper & Row, New York, 384 p.
     -- A good introductory text for those who have no
     experience in symbolic processing (generally, most
     conventional programmers). Gives good coverage of
     the basic principles behind LISP.

Wilensky, Robert, 1984. LISPcraft. W. W. Norton & Company, New York,
385 p.
     -- Covers programming techniques and LISP philosophy
     across different dialects quite well.


One thing that has helped with the training around the AI center
here is to take the time to give a little of the history of
LISP, where and why the different dialects developed, and
a little of the history of the hardware currently in use. A short time
spent on the relation to PROLOG couldn't hurt. (A good short article
on LISP and PROLOG history is:

Tello, Ernie, April 16, 1985. The Languages of AI Research.
PC Magazine, v. 4, no. 8, p. 173-189.)

Hope this helps.

P.M.Pincha-Wagener

------------------------------

Date: 14 Feb 86 22:11:11 GMT
From: decvax!cwruecmp!leon@ucbvax.berkeley.edu (Leon Sterling)
Subject: Re: Pointers to Logo?

The AI department at the University of Edinburgh used to teach its
undergraduate courses in AI using Logo several years ago.
The lecture notes appear as a book called Artificial Intelligence,
published (I think) by Edinburgh University Press; the editor is
Alan Bundy.

------------------------------

Date: Wed, 19 Feb 86 07:57:07 PST
From: Curtis L. Goodhart <goodhart%cod@nosc.ARPA>
Subject: MRS

There was a recent question about what MRS stands for. According to
"The Compleat Guide to MRS" by Stuart Russell Esq., Stanford University
Knowledge Systems Laboratory Report No. KSL-85-12, page 2, "MRS stands for
Meta-level Representation System". In the preface on page i, MRS is
described briefly as "a logic programming system with extensive meta-level
facilities. As such it can be used to implement virtually all kinds of
artificial intelligence applications in a wide variety of architectures."


Curt Goodhart (goodhart@nosc ... on the arpanet)

------------------------------

Date: Thu, 13 Feb 86 10:13 EST
From: Seth Steinberg <sas@BBN-VAX.ARPA>
Subject: Re: Taxonomizing in AI and Dumplings

Building a taxonomy is a means of predicting what will be found. Anyone
who has read any of Steve Gould's columns in Natural History will be
quite familiar with this problem. When Linnaeus devised the modern
biological taxonomy of the plant kingdom he was criticized for his
heavy emphasis on the sex lives of the flowers. He was considered
crude and salacious. He worked in a hurry to preempt any competitive
scheme and avoid a split in the field but his choice was prophetic and
his emphasis on sex was vindicated by Darwin's later work which argued
that sex was both essential to selection (no sex, no children) AND to
the origin and maintenance of the species.

Of course for every "good" taxonomy there are dozens of losers. Take
the old earth, air, fire and water taxonomy with its metaphoric power.
It still works; look in the Science Fiction and Fantasy section of your
local bookstore. Of course chemists and physicists use Mendeleev's
taxonomy of the elements which has much better predictive power. There
is nothing wrong with building these structures as long as they can be
used to predict or explain something. Breaking up LISP programs into
families based on the number of parentheses has only limited predictive
power.

Building a taxonomy is no more or less than constructing a theory and
building a theory is useful because it gives people an idea of what to
look for. A sterile taxonomy is not particularly useful. That is the
positive side. A theory also tells people what to ignore and biology
is full of overlooked clues, all carefully noted and explained, waiting
to be illuminated by a new theory.

I think the debate going on now is typical in any young field. If we
had a theory we could use it to march rapidly along its path, much like
an Interstate highway. Even if we find it doesn't get us where we want
to go, we will at least have had a smooth, pleasant ride. Witness classical
electrodynamics, its collapse and the advent of quantum theory. The
justifiable fear is that we will race past our exit and exclude or
ignore crucial signs which indicate the correct path.

Personally I think that it is time to set up a few theories of AI so
that we can have the fun of knocking them down. As one might expect
most theories at this stage are either useless and lack predictive
power (except possibly for predicting tenure) or are so weak and full
of holes that you can drive a truck full of LISP machines through them.
When people start developing theories with real predictive power that
are really hard to knock down then we can relax a bit.

Seth Steinberg

P.S. This month's Scientific American had an article on quantum effects
in biological reactions at low temperatures and the author argues that
conformational resonances (which determine reactivities) are driven by
quantum tunneling! Maybe there ARE carcinogenic vibrations!

------------------------------

Date: 12 Feb 86 17:02:35 GMT
From: hplabs!utah-cs!shebs@ucbvax.berkeley.edu (Stanley Shebs)
Subject: Re: taxonomizing in AI: useless, harmful

In article <3600038@iuvax.UUCP> marek@iuvax.UUCP writes:
>... Taxonomizing is a debatable art of empirical
>science, usually justified when a scientist finds themselves overwhelmed with
>gobs and gobs of identifiable specimens, e.g. entomology. But AI is not
>overwhelmed by gobs and gobs of tangible singulars; it is a constructive
>endeavor that puts up putative mechanisms to be replaced by others. The
>kinds of learning Michalski so effortlessly plucks out of thin air are not
>as incontrovertibly real and graspable as instances of dead bugs.

Now I'm confused! Were you criticizing Michalski et al.'s taxonomy of
learning techniques in pp. 7-13 of "Machine Learning", or the "conceptual
clustering" work that he has done? I think both are valid - the first
is basically a reader's guide to help sort out the strengths and limitations
of dozens of different lines of research. I certainly doubt that anyone
takes that sort of thing as gospel (and hope no one does).

For those folks not familiar with conceptual clustering, I can characterize
it as an outgrowth of statistical clustering methods, but which uses a
sort of Occam's razor heuristic to decide what the valid clusters are.
That is, conceptual "simplicity" dictates where the clusters lie. As an
example, consider a collection of data points which lie on several
intersecting lines. If the data points you have come in bunches at
certain places along the lines, statistical analysis will fail dramatically;
it will see the bunches and miss the lines. Conceptual clustering will
find the lines, because they are a better explanation conceptually than are
random bunches. (In reality, clustering happens on logical terms in
a form of truth table; I don't think they've tried to supplant statisticians
yet!)
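
To make the line-versus-bunch intuition concrete, here is a minimal toy
sketch (my own illustration in Common Lisp, not Michalski's algorithm; the
function name FITS-LINE-P is invented for the example): it merely tests
whether a set of 2-D points admits a single straight-line description, the
kind of conceptually simple cover that a purely proximity-based clusterer
staring at the bunches would never propose.

(defun fits-line-p (points &optional (tolerance 1e-6))
  ;; POINTS is a list of (x y) pairs.  Returns T when every point lies
  ;; (within TOLERANCE) on the line through the first two points, i.e.
  ;; when one simple linear description covers all of the data.
  (if (< (length points) 3)
      t
      (destructuring-bind ((x1 y1) (x2 y2) &rest rest) points
        (every (lambda (p)
                 (destructuring-bind (x y) p
                   ;; collinearity test via the cross product, no division
                   (< (abs (- (* (- x2 x1) (- y y1))
                              (* (- y2 y1) (- x x1))))
                      tolerance)))
               rest))))

;; Two widely separated "bunches" that nevertheless share one line:
;;   (fits-line-p '((0 0) (1 1) (2 2) (10 10) (11 11)))   =>  T

A real conceptual clustering system scores competing descriptions rather
than checking a single one, but the bias is the same: prefer the simpler
conceptual cover over whatever the raw bunching suggests.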


>Please consider whether taxonomizing kinds of learning from the AI perspective
>in 1981 is at all analogous to chemists' and biologists' "right to study the
>objects whose behavior is ultimately described in terms of physics." If so,
>when is the last time you saw a biology/chemistry text titled "Cellular
>Resonance" in which 3 authors offered an exhaustive table of carcinogenic
>vibrations, offered as a collection of current papers in oncology?...

Hmmm, this does sound like a veiled reference to "Machine Learning"!
Personally, I prefer a collection of different viewpoints over someone's
densely written tome on the ultimate answer to all the problems of AI...


>More constructively, I am in the process of developing an abstract machine.
>I think that developing abstract machines is more in the line of my work as
>an AI worker than postulating arbitrary taxonomies where there's neither need
>for them nor raw material.
>
> -- Marek Lugowski

I detect a hint of a suggestion that "abstract machines" are Very Important
Work in AI. I am perhaps defensive about taxonomies because part of my
own work involves taxonomies of programming languages and implementations,
not as an end in itself, but as a route to understanding. And of course
it's also Very Important Work... :-)
stan shebs

------------------------------

End of AIList Digest
********************
