AIList Digest Saturday, 3 Dec 1983 Volume 1 : Issue 108
Today's Topics:
Editorial Policy,
AI Jargon,
AI - Challenge Responses,
Expert Systems & Knowledge Representation & Learning
----------------------------------------------------------------------
Date: Fri 2 Dec 83 16:08:01-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Editorial Policy
It has been suggested that the volume on this list is too high and the
technical content is too low. Two people have recently written to me
suggesting that the digest be converted to a magazine format with
perhaps a dozen edited departments that would constitute alternating
special issues.
I appreciate their offers to serve as editors, but have no desire to
change the AIList format. The volume has been high, but that is
typical of new lists. I encourage technical contributions, but I do
not wish to discourage general-interest discussions. AIList provides
a forum for material not appropriate to journals and conferences --
"dumb" questions, requests for information, abstracts of work in
progress, opinions and half-baked ideas, etc. I do not find these a
waste of time, and attempts to screen any class of "uninteresting"
messages will only deprive those who are interested in them. A major
strength of AIList is that it helps us develop a common vocabulary for
those topics that have not yet reached the textbook stage.
If people would like to split off their own sublists, I will be glad
to help. That might reduce the number of uninteresting messages
each reader is exposed to, although the total volume of material would
probably be higher. Narrow lists do tend to die out as their boom and
bust cycles gradually lengthen, but AIList could serve as the channel
by which members could regroup and recruit new members. The chief
disadvantage of separate lists is that we would lose valuable
cross-fertilization between disciplines.
For the present, I simply ask that members be considerate when
composing messages. Be concise, preferably stating your main points
in list form for easy reference. Remember that electronic messages
tend to seem pugnacious, so that even slight sarcasm may arouse
numerous rebuttals and criticisms. It is unnecessary to marshal
massive support for every claim, since you will have the opportunity to
reply to critics. Also, please keep in mind that AIList (under my
moderatorship) is primarily concerned with AI and pattern recognition,
not psychology, metaphysics, philosophy of science, or any other topic
that has its own major following. We welcome any material that
advances the progress of intelligent machines, but the hard-core
discussions from other disciplines should be directed elsewhere.
-- Ken Laws
------------------------------
Date: Tue 29 Nov 83 21:09:12-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Re: Dyer's flame
In the life of this list a number of topics, among them intelligence,
parallelism and AI, defense of AI, and rational psychology, have been
maligned as "pointless" or worse. Without getting involved in a debate
on "philosophy" vs. "real research", a quick scan of these topics shows
them to be far from pointless. I regret that Dyer's students have
stopped reading this list; perhaps they should follow his advice and
submit the right type of article to it.
As a side note, I am VERY interested in having people outside of mainstream
AI participate in this list; while one sometimes wades through muddled
articles of little value, this is more than repaid by the fresh viewpoints
and the occasional gem that would otherwise never have been found.
Ken Laws has done an excellent job grouping the articles by interest and
topic; readers can then skip an entire group if its theme does not appeal
to them. A greater number of submissions can only improve this process;
the burden is on those unsatisfied with the content of this board to
submit better articles. I would welcome submissions of the kind suggested
by Dr. Dyer, and hope that others will follow his advice and try to lead the
board to whatever avenue they think is the most interesting. There's room
here for all of us...
David Rogers
DRogers@SUMEX-AIM.ARPA
------------------------------
Date: Tue 29 Nov 83 22:24:14-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Tools
I agree with Michael Dyer's comments on the lack of substantive
material in this list and on the importance of dealing with
new "real" tasks rather than using old solutions of old problems
to show off one's latest tool. However, I feel like adding two
comments:
1. Some people (myself included) have a limited supply of "writing
energy" for serious technical stuff: papers, proposals, and the like.
Raving about generalities, however, consumes much less of that energy
per line than the serious stuff. The people who are busily writing
substantive papers have no energy left to summarize them on the net.
2. Very special tools can, in particularly fortunate situations
("epiphanies"?!), bring a new and better level of understanding of a
problem, just by virtue of what can be said with the new tool, and
how. Going the other direction, we all know that we need to change our
tools to suit our problems. The paradigmatic relation between subject
and tool is for me the one between classical physics and mathematical
analysis, where tool and subject are intimately connected yet
distinct. Nothing of the kind has yet happened in AI (which shouldn't
surprise us, seeing how long it took to develop that other
relationship...).
Note: Knowing of my involvement with Prolog/logic programming, some
readers might be tempted to think "Aha! What he is really driving at
is that logic/Horn clauses/Prolog [choose one] is that kind of tool
for AI. Let me nip that presumption in the bud; these tool addicts
are dangerous!" Gentle reader, save your flame! Only time will
show whether anything of the kind is the case, and my private view on
the subject is sufficiently complicated (confused?) that if I could
disentangle it and write about it clearly I would have a paper rather
than a net message...
Fernando Pereira
------------------------------
Date: Wed 30 Nov 83 11:58:56-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: jargon
I understand Dyer's comments on what he calls the tool/content distinction.
But it seems to me that the content distinctions he rightly thinks are
important can often be expressed in terms of tools, and that it would be
clearer to do so. He talked about handling one's last trip to a restaurant
differently from the last time one was in love. I agree that this is an
important distinction to make. I would like to see the difference expressed
in "tools", e.g., "when handling a restaurant trip (or some similar class of
events) our system does a chronological search down its list of events, but
when looking for love, it does a best first search on its list of personal
relationships." This is clearer and communicates more than saying the system
has a "love-MOP" and a "restaurant-script". This is only a made up example
-- I am not saying Mr. Dyer used the above words or that he does not explain
things well. I am just trying to construct a non-personal example of the
kind of thing to which I object, but that occurs often in the literature.
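To make the made-up example concrete, here is a minimal sketch of the two
retrieval strategies it contrasts (illustrative Python; every name here is
hypothetical, and no actual system is being described):

    import heapq

    def chronological_search(events, matches):
        # Scan stored events most-recent-first and return the first
        # match, e.g. for "my last trip to a restaurant".
        for event in reversed(events):    # events are stored oldest-first
            if matches(event):
                return event
        return None

    def best_first_search(start, successors, promise, is_goal):
        # Expand candidates in order of a heuristic "promise" score,
        # e.g. when searching a network of personal relationships.
        counter = 0                       # tie-breaker so the heap never
        frontier = [(-promise(start), counter, start)]   # compares nodes
        seen = {start}
        while frontier:
            _, _, node = heapq.heappop(frontier)   # most promising first
            if is_goal(node):
                return node
            for nxt in successors(node):
                if nxt not in seen:
                    seen.add(nxt)
                    counter += 1
                    heapq.heappush(frontier, (-promise(nxt), counter, nxt))
        return None

The point is that "chronological search" and "best-first search" say how the
retrieval works, where "restaurant-script" and "love-MOP" by themselves do not.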
------------------------------
Date: Wed, 30 Nov 83 13:47 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: McCarthy and 'mental' states
In the December Psychology Today John McCarthy has a short article that
raises a fairly contentious point.
In his article he argues that it is not necessarily a bad thing that
people attribute "human" or what he calls 'mental' attributes to complex
systems. Thus someone who anthropomorphizes the actions of his or her
car, boat, or terminal is engaging in a legitimate form of description
of a complex process.
Indeed, he argues further that while most current computer programs
can still be understood in terms of their underlying mechanistic
properties, complex expert systems will eventually be describable
only by attributing 'mental' states to them.
----
I think this is an instance of the proliferation of jargon and verbiage
that Ralph Johnson noted in a large segment of AI work. What has happened
is not a discovery or emulation of cognitive processes, but a breakdown
of certain weak programmers' abilities to describe the mechanical
characteristics of their programs. They then resort to arcane languages
and to attributing 'mental' characteristics to what are basically fuzzy
algorithms applied to poorly formalized or poorly characterized problems.
Once the problems are better understood and given a more precise formal
characterization, one no longer needs "AI" techniques.
- Steven Gutfreund
------------------------------
Date: 28 Nov 83 23:04:58-PST (Mon)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Re: Clarifying my 'AI Challenge' - (nf)
Article-I.D.: uiucdcs.4190
re: The Great Promises of AI
Beware the promises of used car salesmen. The press has stories to
sell, and so do the more extravagant people within AI. Remember that
many of these people had to work hard to convince grantmakers that AI
was worth their money, back in the days before practical applications
of expert systems began to pay off.
It is important to distinguish the promises of AI from the great
fantasies spun by the media (and some AI researchers) in fits of
science fiction. AI applications will certainly be diverse and
widespread (thanks in no small part to the VLSI people). However, I
hope that none of us really believes that machines will possess
general human intelligence any time soon. We bandy such stuff about
hoping that when ideas fly, at least some of them will be good ones.
The reality is that nobody sees a clear and brightly lit
path from here to super-intelligent robots. Rather we see hundreds of
problems to be solved. Each solution should bring our knowledge and
the capabilities of our programs incrementally forward. But let's not
kid ourselves about the complexity of the problems. As has already
been pointed out, AI is tackling the hard problems -- the ones for
which nobody knows any algorithms.
------------------------------
Date: Wed, 30 Nov 83 10:29 PST
From: Tong.PA@PARC-MAXC.ARPA
Subject: Re: AI Challenge
Tom Dietterich:
Your view of "knowledge representations" as being identical with data
structures reveals a fundamental misunderstanding of the knowledge vs.
algorithms point. . .Why, I'll bet there's not a single AI program that
uses leftist-trees or binomial queues!
Sanjai Narain:
We at Rand have ROSS. . .One implementation of ROSS uses leftist trees for
maintaining event queues. Since these queues are in the innermost loop
of ROSS's operation, it was only sensible to make them as efficient as
possible. We think we are doing AI.
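(For concreteness, a minimal sketch of such a leftist-tree event queue --
illustrative Python following the usual textbook definition, not ROSS code:

    class Node:
        def __init__(self, time, event):
            self.time = time          # priority: when the event fires
            self.event = event        # the event payload
            self.left = None
            self.right = None
            self.rank = 1             # length of shortest path to a null

    def merge(a, b):
        # Merge two leftist heaps in O(log n); insertion and deletion
        # both reduce to this single operation.
        if a is None: return b
        if b is None: return a
        if b.time < a.time:
            a, b = b, a               # keep the earlier event on top
        a.right = merge(a.right, b)
        # Restore the leftist property: left rank >= right rank.
        left_rank = a.left.rank if a.left else 0
        if left_rank < a.right.rank:
            a.left, a.right = a.right, a.left
        a.rank = (a.right.rank if a.right else 0) + 1
        return a

    class EventQueue:
        def __init__(self):
            self.root = None
        def schedule(self, time, event):
            self.root = merge(self.root, Node(time, event))
        def next_event(self):
            # Pop the earliest scheduled event; assumes a non-empty queue.
            top = self.root
            self.root = merge(top.left, top.right)
            return top.time, top.event

Since scheduling and dequeuing both reduce to merge, the structure is well
suited to an innermost event loop.)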
Sanjai, you take the letter but not the spirit of Tom's reflection. I
don't think any AI researcher would object to improving the efficiency
of her program, or using traditional computer science knowledge to help.
But - look at your own description of ROSS development! Clearly you
first conceptualized ROSS ("queues are the innermost loop") and THEN
worried about efficiency in implementing your conceptualization ("it was
only sensible to make them as efficient as possible"). Traditional
computer science can shed much light on implementation issues, but has
in practice been of little direct help in the conceptualization phase
(except occasionally by analogy and generalization). All branches of
computer science share basic interests such as how to represent and use
knowledge, but AI differs in the GRAIN SIZE of the knowledge it
considers. It would be very desirable to have a unified theory of
computer science that provides ideas and tools along the continuum of
knowledge grain size; but we are not quite there, yet. Until that time,
perceiving the different branches of computer science as contributing
useful knowledge to different levels of implementation (e.g. knowledge
level, data level, register transfer level, hardware level) is probably
the best integration our short term memories can handle.
Chris Tong
------------------------------
Date: 28 Nov 83 22:25:35-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: RJ vs AI: Science vs Engineering? - (nf)
Article-I.D.: uiucdcs.4187
In response to Johnson vs AI, and Tom Dietterich's defense:
The emergence of the knowledge-based perspective is only the beginning of
what AI has achieved and is working on. Obvious corollaries: knowledge
acquisition and extraction, representation, inference engines.
Some rather impressive results have been obtained here. One with which I
am most familiar is work being done at Edinburgh by the Machine Intelligence
Research Unit on knowledge extraction via induction from user-supplied
examples (the induction program is commercially available). A paper by
Shapiro (Alen) & Niblett in Computer Chess 3 describes the beginnings of the
work at MIRU. Shapiro has only this month finished his PhD, which
effectively demonstrates that human experts, with the aid of such
induction programs, can produce knowledge bases that surpass any
unaided expert in completeness and consistency. Shapiro synthesized a
totally correct knowledge base for part of the King-and-Pawn against
King-and-Rook chess endgame; even that relatively small endgame is so
complex that, though it was treated in the chess literature, the
descriptions provided by human experts were riddled with gaps.
Impressively, three chess novices managed (again with the induction
program) to achieve 99% correctness on this normally difficult problem.
The issue: even novices are better at articulating knowledge
by means of examples than experts are at articulating the actual
rules involved, *provided* that the induction program can represent
its induced rules in a form intelligible to humans.
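For readers who have not seen such a program: the induction is, in flavor,
of the ID3 family -- grow a classification tree from attribute-value
examples, then render it as rules a human can read. A minimal illustrative
sketch (hypothetical Python, not the MIRU program):

    from collections import Counter
    import math

    def entropy(examples):
        # examples: list of (attribute-dict, label) pairs.
        counts = Counter(label for _, label in examples)
        n = len(examples)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def induce(examples, attributes):
        labels = {label for _, label in examples}
        if len(labels) == 1:
            return labels.pop()               # pure leaf: one class left
        if not attributes:
            return Counter(l for _, l in examples).most_common(1)[0][0]

        def split_entropy(attr):
            # Weighted entropy of the partition induced by attr.
            groups = {}
            for feats, label in examples:
                groups.setdefault(feats[attr], []).append((feats, label))
            n = len(examples)
            return sum(len(g) / n * entropy(g) for g in groups.values())

        best = min(attributes, key=split_entropy)   # most informative
        rest = [a for a in attributes if a != best]
        groups = {}
        for feats, label in examples:
            groups.setdefault(feats[best], []).append((feats, label))
        return (best, {v: induce(g, rest) for v, g in groups.items()})

    def show(tree, indent=0):
        # Render the induced tree as indented if-then rules.
        if not isinstance(tree, tuple):
            print(" " * indent + "=> " + str(tree))
            return
        attr, branches = tree
        for value, subtree in branches.items():
            print(" " * indent + f"if {attr} = {value}:")
            show(subtree, indent + 2)

Note that show matters as much as induce here: the claim above is precisely
that induced rules are useful only if they come out in a form intelligible
to humans.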
The long-term goal and motivation for this work is the humanization of
technology, namely the construction of systems that not only possess expert
competence, but are capable of communicating their reasoning to humans.
And we had better get this right, lest we get stuck with machines that run
our nuclear plants in ways that are perhaps super-smart but incomprehensible
... until a crisis happens and the humans suddenly need to understand what
the machine has been doing all along.
The problem: lack of understanding of human cognitive psychology. More
specifically, how are human concepts (even for these relatively easy
classification tasks) organized? What are the boundaries of 'intelligibility'?
Though we are able to build systems that function, in some ways, like a human
expert, we do not know much about what distinguishes brain-computable processes
from general algorithms.
But we are learning. In fact, I am tempted to define this as one criterion
distinguishing knowledge-based AI from other computing: the absolute necessity
of having our programs explain their own processing. This is close to demanding
that they also process in brain-compatible terms. In any case we will need to
know what the limits of our brain-machine are, and in what forms knowledge
is most easily apprehensible to it. This brings our end of AI very close to
cognitive psychology, and threatens to turn knowledge representation into a
hard science -- not just
    What does a system need to be able to do X?
but
    How does a human brain produce behavior/inference X, and how do we
    implement that so as to preserve maximal man-machine compatibility?
Hence the significance of the work by Shapiro, mentioned above: the
intelligibility of his representations is crucial to the success of his
knowledge-acquisition method, and the whole approach provides some clues on
how a humane knowledge representation might be scientifically determined.
A computer is merely a necessary weapon in this research. If AI has made
little obvious progress, it may be because we are too busy trying to produce
useful systems before we know how they should work. In my opinion there is
too little
hard science in AI, but that's understandable given its roots in an engineering
discipline (the applications of computers). Artificial intelligence is perhaps
the only "application" of computers in which hard science (discovering how to
describe the world) is possible.
We might do a favor both to ourselves and to psychology if knowledge-based AI
adopted this idea. Of course, that would cut down drastically on the number
of papers published, because we would have some very hard criteria for what
constitutes a tangible contribution. Even working programs would not be
inherently interesting, no matter what they achieved or how they achieved it,
unless they contributed to our understanding of knowledge, its organization
and its interpretation. Conversely, working programs would be necessary only
to demonstrate the adequacy of the idea being argued, and it would be possible
to make very solid contributions without a program (as opposed to the flood of
"we are about to write this program" papers in AI).
So what are we: science or engineering? If both, let's at least recognize the
distinction as being valuable, and let's know what yet another expert system
proves beyond its mere existence.
Marcel Schoppers
U of Illinois @ Urbana-Champaign
------------------------------
End of AIList Digest
********************