AIList Digest Tuesday, 7 Jun 1988 Volume 7 : Issue 20
Today's Topics:
Queries:
MACIE
Response to: Inductive rule-learning tools - BEAGLE
Object Oriented Programming in Canada
Resignation from "AI in Engineering"
Philosophy:
definition of AI?
Informatics vs Computer Science + AI in Space
Re: CogSci and AI
----------------------------------------------------------------------
Date: 2 Jun 88 13:22:48 GMT
From: uh2@psuvm.bitnet (Lee Sailer)
Subject: MACIE
Is it possible to obtain MACIE, the neural-net Expert System described
in the Feb. issue of CACM?
Can someone offer me a pointer to the author, Stephen Gallant, at
Northeastern U?
thanks.
------------------------------
Date: 3-JUN-1988 21:26:35 GMT
From: POPX%VAX.OXFORD.AC.UK@MITVMA.MIT.EDU
Subject: Response to: Inductive rule-learning tools - BEAGLE
Inductive rule-learning tools: BEAGLE
From: Jocelyn Paine,
Experimental Psychology,
South Parks Road,
Oxford.
Janet Address: POPX @ UK.AC.OX.VAX
In reply to M.E. van Steenbergen's request for details of other
inductive tools in AILIST V7/16:
There is a program called BEAGLE (named after the ship on which Darwin
sailed to the Galapagos) which "breeds" classificatory rules by "natural
selection". BEAGLE's input is a training set of records, all having the
same structure as one another. Each record contains a number of
variables, some of which may depend on others. BEAGLE's output is one or
more rules describing the conditions for this dependence, in terms of
ANDs, ORs, NOTs, arithmetic comparisons, and (I think) plus, minus,
times, and divide.
An example given by Richard Forsyth, BEAGLE's author: assume a training
set describing observations of iris plants, where there are three
varieties of iris. Each record in the set contains:
The name of its variety - essentially an enumerated type,
which can take one of three values;
The length of its stamens;
The date at which it starts flowering;
The length of its petals.
BEAGLE's task is to induce rules which predict, from the stamen length,
flowering date, and petal length of a new iris, which variety it is.
BEAGLE begins by generating some syntactically correct rules at random,
paying no attention to their meaning. It then tests them against the
training set, selecting for the fittest (those which classify best), and
discarding the worst rules. It then subjects their "genetic material" to
crossover re-combination, as well as random mutation, so making new
rules. It continues for several cycles, until some criterion (I'm not
sure what) is satisfied by the remaining rules.
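To make that cycle concrete, here is a rough Python sketch of this kind of
rule breeding for the iris example. It is only an illustration of the general
generate/score/select/crossover/mutate loop, not BEAGLE itself: the field
names, fitness measure and parameters are all invented, and the flowering
date is assumed to be encoded as a number.

import random

# Invented sketch of BEAGLE-style rule breeding: a rule is an AND of
# threshold tests on numeric fields, scored by how well it separates one
# iris variety from the others, then bred by crossover and mutation.

FIELDS = ["stamen_length", "flowering_date", "petal_length"]   # invented names

def field_ranges(records):
    """Observed (min, max) of each field, used to draw random thresholds."""
    return {f: (min(r[f] for r in records), max(r[f] for r in records))
            for f in FIELDS}

def random_condition(ranges):
    """One test of the form  field < threshold  or  field > threshold."""
    field = random.choice(FIELDS)
    lo, hi = ranges[field]
    return (field, random.choice(["<", ">"]), random.uniform(lo, hi))

def random_rule(ranges, max_conditions=3):
    return [random_condition(ranges)
            for _ in range(random.randint(1, max_conditions))]

def matches(rule, record):
    return all((record[f] < t) if op == "<" else (record[f] > t)
               for f, op, t in rule)

def fitness(rule, records, target):
    """Fraction of records where 'rule fires' agrees with 'record is target'."""
    hits = sum(matches(rule, r) == (r["variety"] == target) for r in records)
    return hits / len(records)

def crossover(a, b):
    """Swap genetic material: a prefix of one parent plus a suffix of the other."""
    child = a[:random.randint(0, len(a))] + b[random.randint(0, len(b)):]
    return child or [random.choice(a + b)]

def mutate(rule, ranges, rate=0.2):
    return [random_condition(ranges) if random.random() < rate else c
            for c in rule]

def breed(records, target, ranges, pop_size=50, generations=40):
    population = [random_rule(ranges) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda r: fitness(r, records, target), reverse=True)
        survivors = population[: pop_size // 2]          # keep the fittest half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)), ranges)
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=lambda r: fitness(r, records, target))

Calling breed(records, variety, field_ranges(records)) for each variety in
turn gives one rule per class. The real program evolves richer expressions
(ORs, NOTs, arithmetic) and presumably uses a more careful fitness measure
and stopping criterion.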
Richard says that BEAGLE has been used by the health insurance company
BUPA to induce useful relations concerning heart disease amongst BUPA's
subscribers. He sells it for (at least) PC-clones (for about £200?) and
VAXes (for about £1500?), from his company Warm Boot, Nottingham. I
don't know the rest of his address, or the exact prices of his software:
I'm recalling a talk he gave us recently. However, perhaps what I've
said will help.
------------------------------
Date: Sun, 5 Jun 88 20:08 EDT
From: "Nahum (N.) Goldmann" <ACOUST%BNR.BITNET@MITVMA.MIT.EDU>
Subject: Object Oriented Programming in Canada
Everybody in Canada who is interested in object-oriented programming
(OOP) and/or in behavioral design research related to the
development of human-machine interfaces (however remotely connected to
these subjects), please reply to my e-mail address. The long-term
objective is to organize a corresponding Canadian bulletin board.
Greetings and thanks.
Nahum Goldmann
(613)763-2329
e-mail (BITNET): <ACOUST@BNR.CA>
------------------------------
Date: Mon, 06 Jun 88 10:29:37 EDT
From: <sriram@ATHENA.MIT.EDU>
Subject: Resignation from the AI in Engineering Journal
Due to some personal problems with the publishing director of the
International Journal for AI in Engineering, I have decided to
resign as the co-editor of the Journal. Please note that as
of April 1988, I am no longer associated with Computational Mech.
Publications in any capacity.
If you plan to write a book in the near future, the following books
are worth reading before you sign a contract.
The Business of Being a Writer
S. Golding and K. Sky
Carroll and Graf Publishers, Inc.
How to Understand and Negotiate a Book Contract
R. Balkin
Writer's Digest Books
Sriram
1-253, Intelligent Engineering Systems Lab.
M.I.T., Cambridge, MA 02139
ARPAnet: sriram@athena.mit.edu
tel.no:(617)253-6981
------------------------------
Date: 5 Jun 88 16:13:25 GMT
From: ruffwork@cs.orst.edu (Ritchey Ruff)
Subject: definition of AI? (was: who else isn't a science?)
I've always liked my thesis advisor's (Tom Dietterich) definition
of AI. It works so much better than any other I've seen.
AI is the study of ill-defined computational methods.
This explains why, once we understand something, it is no
longer AI; it also explains why AI does not spend a lot of
time trying to verify that "this is the way OUR brain does it".
I think AI should be as defined above, and leave the strong/weak
"AI" to cognitive science...
--ritchey ruff ruffwork@cs.orst.edu -or- ...!tektronix!orstcs!ruffwork
------------------------------
Date: Thu, 2 Jun 88 23:22:12 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Informatics vs Computer Science + AI in Space
The term "computer science" has done much to limit the field in ways I don't
like. It focuses attention on computers, which seems to me to make as
little sense as calling musicology "music-instrument studies."
To me the proper study is information: what it is, where it comes from, what
its deep structure is, how it's processed, etc. Whether it's processed by a
carbon-based or silicon-based life-form, whether the processor naturally
evolved or was built, is of less interest to me than the processing: the
algorithms or heuristics that transform and use information.
Further, the use of "science" in conjunction with "computer" confuses
people, who complain that a science of artifacts makes sense only as
anthropology, and that the proper companion words are "computer engineering." (We
went through this argument on AIList a few weeks ago.)
I'd prefer "information science." Unfortunately, that phrase has already
been picked to mean the automated part of library science (another misnomer).
The French "informatique" seems to capture the connotation I'd prefer.
I doubt if the English-speaking world would take well to it, however,
so my second suggestion is "informatics," the English variant of Swedish
and Russian words.
Informatique/informatics suggests automated information processing, but
leaves open the possibility of a data-processing system composed of people,
or one in which people are an integral part. That last is usually the case,
but because of the emphasis on computers the people parts are often slighted.
Documentation and user interfaces, for instance, are often poor.
Even worse, attempts are made to do away with the human components
altogether even where people can do a job more flexibly and efficiently (and
possibly more humanely). Perhaps one of the beneficial effects of AI will
be more respect for the "simple" things people do. Walking, picking up and
inspecting an object, even standing still without falling, are very complex,
as roboticists have found out.
JPL (which is part of CalTech and does most of its work for NASA) is
responsible for all space exploration beyond the Moon. We are very
interested in robotics, due to the danger and cost of sending people into
space--and money is very tight for space exploration. But experience is
showing that robotics has much higher costs than was anticipated, and that
human skills and judgement are needed in even mundane areas.
I've come to believe that the most cost-effective solution is AI-assisted
teleoperators. It should be possible, for instance, to construct space
stations with almost all the construction personnel on the ground, using
color stereoscopic cameras and remote manipulators in orbit. In low-Earth
orbit, and even at geosynchronous orbit, round-trip transmission delays are
short enough to make direct manipulation possible.
Some complete automation is practical, of course, especially since it can be
fairly dumb; remote operators can watch a robot work and intervene when the
unexpected occurs or the schedule calls for tasks that require human skill
or judgement. In lunar orbit and on the lunar surface, some AI assistance and more
automation would be needed; round-trip transmission delays are about three
seconds. The remote operator would primarily act as a supervisor, though
the low gravity of the Moon would make it possible to catch a dropped
object. However, the remote operator might have to be on Quaaludes to keep
from going nuts!
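For what it's worth, the delay figures quoted above are easy to check from
light-time alone. The short Python calculation below is only a
back-of-the-envelope check: it ignores relay hops and processing overhead,
and the distances are round textbook numbers, not JPL figures.

# Rough check of the round-trip signal delays quoted above.
LIGHT_SPEED_KM_S = 299_792.458      # speed of light in vacuum

def round_trip_seconds(distance_km):
    """Two-way light-time for a one-way distance, ignoring processing delays."""
    return 2 * distance_km / LIGHT_SPEED_KM_S

print(round_trip_seconds(35_786))     # geosynchronous altitude: about 0.24 s
print(round_trip_seconds(384_400))    # mean Earth-Moon distance: about 2.6 s

The lunar figure is consistent with the "about three seconds" above once
switching and processing time is added.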
Larry @ jpl-vlsi
------------------------------
Date: 6 Jun 88 14:04 PDT
From: hayes.pa@Xerox.COM
Subject: Re: CogSci and AI
Gilbert Cockton says "The proper object of the study of humanity is humans, not
machines" . What he fails to understand is that people ARE machines; and
moreover, to say that they are is not to dehumanise them or reduce them or
disparage them in any way. Thinking of ourselves as machines - which is really
no more than saying that we belong in the natural world - if anything gives one
MORE respect for people. Anyone who has thought seriously about navigating
robots will look with awe at a congenital idiot lumbering clumsily around the
streets.
By the way, the difference between Cognitive Science and AI is that the former
is a ( rather loosely defined ) interdisciplinary area where AI, cognitive and
perceptual psychology, psycholinguistics, neuroscience and bits of philosophy
all cohabit with more or less mutual discomfort. ( Like all such
interdisciplinary areas, it is a bit like a singles bar. ) The key sociological
point to bear in mind is that these different people have interests which
overlap, but are trained to use methods and to respect criteria of success and
honesty which often contradict one another, or at best have no mutual relevance.
This makes mutual communication difficult, and only fairly recently have
computer science and psychology come to see what it is that the other considers
as vital to respectability. ( Respectively: models specified in sufficient
detail to be implemented; theories specified in a way which admits of clean
empirical test. ) Unfortunately these requirements are often at odds, a source
of continual difficulty in communication. As for connectionism ( of which the
much cited PDP work is one variety), nobody doubts its importance, but it
sometimes comes along with a philosophical ( or perhaps methodological ) claim
which is much more controversial: that it spells the end of the idea of
mental representations. This has caused it to become the political
rallying-point for a curious mixture of people ( including Dreyfus, for
example ) who have long been opponents of AI, and so it has come to have the
odor of being somehow on the opposite side of a fence from AI, and we get such
oddities as Gilbert's question, "Will the recent interdisciplinary shift,
typified by the recent PDP work, be the end of AI as we knew it?". No, it won't.
What's in a name? Nothing, provided we all agree on what it means; but since
communication is notoriously difficult, it's wise to try to be sensitive to what
names are intended to mean.
Pat Hayes
( After typing this, I read Richard O'Keefe's suggested definitions of cogsci
and AI, and largely agree with them. So there's one, Richard. )
------------------------------
End of AIList Digest
********************