AIList Digest Monday, 1 Jul 1985 Volume 3 : Issue 84
Today's Topics:
Queries - Expert System Validation & LISP Productivity,
Psychology - Predation/Cooperation & Common Sense,
Business - TI and Sperry Join Forces,
Games - Chess Programs and Cheating,
Seminars - Learning in Expert Systems (Rutgers) &
How to Clear a Block (SRI)
----------------------------------------------------------------------
Date: Sat, 29 Jun 85 01:58:21 edt
From: Walter Maner <maner%bgsu.csnet@csnet-relay.arpa>
Subject: Expert System Validation
I would appreciate pointers to research addressing questions about
expert-system bugs, e.g.,
How can expert-system advice be validated?
Are there failure modes specific to expert systems?
What classes of error can be prevented by consistency enforcers?
I am primarily interested in how these answers would apply to very large
rule-based systems which have evolved under multiple authorship.
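As a rough illustration of one class of error a "consistency enforcer" might prevent, here is a minimal sketch; the rule format and the sample rule base are invented for illustration, not taken from any particular system. It flags pairs of rules whose conditions match but whose conclusions differ, the kind of contradiction that creeps into large rule bases under multiple authorship.

```python
# Hypothetical sketch of one check a "consistency enforcer" might run:
# flag pairs of rules whose conditions are the same set of facts but
# whose conclusions conflict.  Rule format and rules are illustrative.

def find_conflicts(rules):
    """Return conclusion pairs from rules with identical conditions."""
    conflicts = []
    for i, (cond_a, concl_a) in enumerate(rules):
        for cond_b, concl_b in rules[i + 1:]:
            if set(cond_a) == set(cond_b) and concl_a != concl_b:
                conflicts.append((concl_a, concl_b))
    return conflicts

rules = [
    (("fever", "rash"), "measles"),
    (("rash", "fever"), "allergy"),   # same conditions, different conclusion
    (("cough",), "cold"),
]

print(find_conflicts(rules))  # [('measles', 'allergy')]
```

Real enforcers would also have to handle subsumed conditions and chains of rules, which this pairwise check does not attempt.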
Walter Maner
CSNet maner@bgsuvax
UseNet ...cbosgd!osu-eddie!bgsuvax!maner
SnailMail Department of Computer Science
Bowling Green State University
Bowling Green, OH 43403
------------------------------
Date: Wed, 26 Jun 85 13:02 CDT
From: Patrick_Duff <pduff%ti-eg.csnet@csnet-relay.arpa>
Subject: requested: papers concerning LISP programmer man-hours
I am trying to locate articles which discuss the differences between LISP
and non-AI languages in terms of the time and effort required to create
prototype systems, to make additions or revisions to a design after much of
the programming is completed, total programming time from start to finish,
etc. My opinion is that, in general, it takes fewer man-hours to create a
LISP program than to create a program to do the same task using languages
such as Ada, Pascal, FORTRAN, or an assembly language. Note that I am
*not* claiming that the program will also be "better", more efficient, or
faster--just that most relatively large programs will take less time to
write in LISP. I have been asked to come up with justification for using
LISP based upon the total man-hours required. Does anyone know of a paper
which would support or undercut my opinion? Has there been a convincing
demonstration or test of the power of LISP (and its powerful programming
environment) versus more traditional languages?
regards, Patrick
Patrick S. Duff, ***CR 5621*** pduff.ti-eg@csnet-relay
5049 Walker Dr. #91103 214/480-1659 (work)
The Colony, TX 75056-1120 214/370-5363 (home)
(a suburb of Dallas, TX)
------------------------------
Date: Saturday, 29 Jun 1985 22:22-EST
From: munnari!psych.uq.oz!ross@seismo
Subject: Predation/Cooperation (AIL v3 #78)
David West (AIL v3 #82) mentioned the work of Robert Axelrod on the
evolution of cooperation. Another good summary of Axelrod's work can
be found in Douglas Hofstadter's Metamagical Themas column in Scientific
American, May 1983, v248 #5, pp 14-20.
UUCP: {decvax,vax135,eagle,pesnta}!mulga!psych.uq.oz!ross
ARPA: ross%psych.uq.oz@seismo.arpa
CSNET: ross@psych.uq.oz
ACSnet: ross@psych.uq.oz
Mail: Ross Gayler Phone: +61 7 224 7060
Division of Research & Planning
Queensland Department of Health
GPO Box 48
Brisbane 4001
AUSTRALIA
------------------------------
Date: Thu, 27 Jun 85 14:08:05 pdt
From: Evan C. Evans <evans%cod@Nosc>
Subject: Common Sense
Common sense = conclusions reached thru the processes of
natural reasoning (or behaviors resulting from such). I borrow
heavily from Julian Jaynes, The Origin of Consciousness in the
Breakdown of the Bicameral Mind. Natural reasoning is neither
conscious nor rigorous in the sense of formal logic. For
instance, upon observing a piece of wood floating on a given pond
one will conclude directly that ANOTHER piece of wood will float
on ANOTHER pond. This is sometimes called reasoning from
particulars. More simply, it's expectation based on subliminal
generalization. A baby quickly concludes that objects will fall
without being AWARE of that conclusion. We're constantly
exercising natural reasoning to reach conclusions about others'
feelings or motives based on their expressions or actions. Such
reasoning was early recognized as unconscious & called automatic
inference or COMMON SENSE; see John Stuart Mill or James Sully.
Pu's elaboration on Pratt stands, but it is well to
remember that natural reasoning is usually unconscious & does not
necessarily proceed by logical means. In fact, automatic
inference sometimes achieves correct conclusions by demonstrably
illogical means.
evans@nosc-cc
------------------------------
Date: Thu 27 Jun 85 15:54:15-CDT
From: Werner Uhrig <CMP.WERNER@UTEXAS-20.ARPA>
Subject: news: TI and SPERRY join forces to sell AI
[ from Austin American Statesman - June 26, 1985 ]
TI captures computer deal with Sperry
=====================================
(Kirk Ladendorf - Statesman staff) - TI has landed what it calls its biggest
ever sales contract in the still-infant artificial intelligence industry - a
three-year, $42 million deal to supply computers and related equipment to
Sperry Corp.
Sperry, one of the largest computer makers with $5.7 billion in sales last
year, plans a large-scale campaign to develop specialized, salable uses for the
TI machine, which Sperry will call the Knowledge Workstation.
For TI the contract gives credibility that its well-regarded artificial
intelligence system, called Explorer, is more than just an esoteric product
with limited sales potential.
.....
Sperry will combine the TI machine with a software system called the Knowledge
Engineering Environment software developed by Intellicorp of Menlo Park, Calif.
Intellicorp software is regarded as a very sophisticated tool for building
specialized AI-programs.
The new system can be used to create so-called "expert systems" which ...
Such programs have been used on a demonstration basis to perform such tasks as
the running of an electrical power-plant and experimental weather forecasting.
Sperry's 26 specialized applications programs will be aimed at areas that have
been difficult to serve with traditional computers. Those areas include
software development; testing and debugging; navigation; communications signal
processing; CAD/CAM; and scheduling and resource allocation.
Sperry chose TI over two principal competitors in the field, Symbolics and Lisp
Machine Inc, because TI "has the best AI hardware available," a Sperry
spokesman said.
.....
Sales of AI Lisp-machines totaled only about $85 million last year, but Sperry
projects the AI market will mushroom to more than $4 billion by 1990.
.....
TI has announced no major additions to its 3,000 person Austin staff because of
the new contract, but ... it has already begun to build the staff it needs to
support the Explorer and the Business Pro.
TI is already at work developing new features for the Explorer. They include
developing computer communications links so that the AI machine can interact
with Sperry and other IBM-compatible mainframes.
....
------------------------------
Date: Fri 21 Jun 85 21:44:18-EDT
From: Andrew M. Liao <WESALUM.A-LIAO-85@KLA.WESLYN>
Reply-to: LIAO%Weslyn.Bitnet@WISCVM.ARPA
Subject: Chess, Programs And Cheating
A Consideration Of "Do Computers Cheat At Chess?"
I've been giving some thought to the question, "Do computers
actually cheat at chess?". To start, I'm going to assume that
what is at issue in the first objection is a chess program's use
of a game tree whose nodes are representations of
potential board/piece/move configurations. I think the
objection that computers cheat because they use "external
boards" (albeit represented internally) can be answered by
saying, "No - there is no cheating involved because humans 'look
ahead' in some way and since no physical external boards are
allowed, the only way to 'look ahead' is to represent an
'image' of potential board positions in one's mind [though in
a real limited way]. But isn't this just what a program does -
only better?" I think that, in some sense, the argument that
programs cheat at chess by virtue of having "internally
represented 'external boards'" is just wrong. What a program
tries to do, in one respect, is to simulate what goes on
inside a person's mind and, in a limited sense, this is
actually achieved (albeit by brute-force game trees).
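The "look ahead" at issue here is essentially minimax search over a game
tree. A minimal sketch of the idea follows; the game interface (moves,
apply_move, evaluate) is an invented placeholder, not any real chess
program's API.

```python
# Minimal minimax lookahead over a game tree.  The caller supplies the
# game: a move generator, a move applier, and a position evaluator.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Score `state` by searching `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate)
              for m in legal]
    return max(scores) if maximizing else min(scores)

# Toy usage: the "board" is a number, each move adds 1 or 2, and higher
# is better for the maximizer.  Two plies of lookahead from 0: the
# maximizer plays +2, the minimizer then plays +1, yielding 3.
score = minimax(0, 2, True,
                moves=lambda s: [1, 2],
                apply_move=lambda s, m: s + m,
                evaluate=lambda s: s)
print(score)  # 3
```

A chess program does exactly this shape of search, just with board
positions for states and a far more elaborate evaluator.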
The second objection concerns the problems of "moves-made-
by-reference". The objection, if I understand it correctly,
is that (1) one cannot refer to moves that have been pre-
recorded for the player's use during a match and that (2) such
moves are encoded into a program (we disallow an external
database file of moves since it is, in some way, a set of moves
that have been pre-recorded for future use), and without these
encoded moves, a program does not know what opening
move(s)/strategy(ies) is optimal. Presumably, the reason for this
rule is to force a player to rely on his experience and no one
else's (i.e. no outside help) and at the same time, prevent any
player being put at an unfair disadvantage. But I think it
cannot be denied that encoding any move into a chess program is
tantamount to making the program dependent upon the author's
experience and not its own - a clear violation of the spirit of
the rule. The question remains - Is it cheating? I am of the
opinion that such a program is cheating on the basis that the
program cannot decide during the opening of the game what
strategy is optimal for it and hence must rely on outside help,
in the form of stored data, given to it by its author.
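The "stored data" in question is an opening book: a lookup table of
prepared replies consulted before any search is done. A minimal sketch,
with moves and book entries that are illustrative only:

```python
# Sketch of an opening book: a table mapping a position (here, the move
# history) to the author's canned reply, tried before any search.

OPENING_BOOK = {
    (): "e2e4",                    # author's preferred first move
    ("e2e4", "c7c5"): "g1f3",      # a prepared reply in one Sicilian line
}

def choose_move(history, search):
    """Play from the book when possible, otherwise fall back to search."""
    book_move = OPENING_BOOK.get(tuple(history))
    return book_move if book_move is not None else search(history)

print(choose_move([], search=lambda h: "computed"))        # e2e4
print(choose_move(["d2d4"], search=lambda h: "computed"))  # computed
```

The objection above turns on the first branch: those entries encode the
author's experience, not anything the program worked out for itself.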
Although I feel the first objection is easily answered, I am
still not happy with my reply to the second, although my
intuition tells me that my reply to the second objection is, at
least in spirit, on the right track. The motivation for my second
reply is due (in great part) to J.R. Searle's conception of the
Background which directly relates to the problem of "experience"
and the like.
------------------------------
Date: Wed 26 Jun 85 09:58:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: Chess, Programs And Cheating
Reply to Andrew Liao:
When I open with a king pawn, I am relying on the experience and
knowledge of others -- that doesn't seem to be cheating. I prefer
an interpretation of the rules as "you run what you brung" -- namely
that you cannot access external help >>during the match<<. I do
admit that stored book openings seem questionable (although chess
masters certainly memorize such material), but to say that a
computer's superior memory gives it an advantage is no more damning
than to say that its superior speed gives it an advantage. In just
a few years it will be obvious that computers are inherently better
"chess machines" than people are, and people will stop quibbling
about handicapping the computer in one way or another to make
the contest "fair".
-- Ken Laws
------------------------------
Date: 28 Jun 85 11:07:37 EDT
From: PRASAD@RUTGERS.ARPA
Subject: Seminar - Learning in Expert Systems (Rutgers)
LEARNING IN SECOND GENERATION EXPERT SYSTEMS
Walter Van De Velde
AI Laboratory, Vrije Universiteit Brussel
This talk discusses a learning mechanism for second generation expert
systems: rule-learning by progressive refinement. Second generation expert
systems not only use heuristic rules, but also have a model of the domain of
expertise so that deeper reasoning is possible whenever the rules are
deficient. A learning component is described that abstracts new rules out of
the results of deep reasoning. Gradually, the rule set is refined and
restructured so that the expert system can solve more problems in a more
efficient way. The approach is illustrated with concrete implemented
examples.
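The mechanism described might be caricatured as follows; this sketch is
not Van De Velde's system, and the problem encoding is invented. When
the heuristic rules fail, the solver falls back on a deeper model, then
stores the result as a new rule so the next identical problem is solved
cheaply.

```python
# Caricature of rule-learning by refinement: deep reasoning fills the
# gaps in the rule set, and each deep result is abstracted into a rule.

class RefiningSolver:
    def __init__(self, deep_model):
        self.rules = {}              # learned problem -> answer rules
        self.deep_model = deep_model

    def solve(self, problem):
        if problem in self.rules:          # fast heuristic path
            return self.rules[problem]
        answer = self.deep_model(problem)  # slow "deep reasoning" path
        self.rules[problem] = answer       # abstract a rule from the result
        return answer

solver = RefiningSolver(deep_model=lambda p: sum(p))
print(solver.solve((1, 2, 3)))  # 6, via the deep model
print(solver.solve((1, 2, 3)))  # 6, now via a learned rule
```

The interesting part of the real work, which this sketch omits, is
generalizing each result into a rule covering more than one problem.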
Date: Friday, June 28, 1985
Time: 11 AM
Place: Hill Center, Room 423
------------------------------
Date: Thu 27 Jun 85 12:21:03-PDT
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - How to Clear a Block (SRI)
"HOW TO CLEAR A BLOCK"
or
Unsolved Problems in the Blocks World #17
Richard Waldinger -- SRI AI Center
11:00 am, WEDNESDAY, July 3
Room EJ232, SRI International
ABSTRACT:
Apparently simple problems in the blocks world get more complicated
when we look at them closely. Take the problem of clearing a block.
In general, it requires forming conditionals and loops and even
strengthening the specifications; no planner has solved it.
We consider how such problems might be approached by bending a
theorem prover a little bit.
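For readers unfamiliar with the problem: in its simplest form, ignoring
the conditionals, loops, and strengthened specifications the talk is
about, clearing a block is a recursive unstacking. A sketch with an
invented state encoding (`top[x]` names the block sitting directly on
x, or None if x is clear):

```python
# Recursively clear a block: whatever sits on it may itself be covered,
# which is why a straight-line plan cannot work in general.

def clear(block, top, moves):
    """Unstack everything above `block`, recording each move made."""
    above = top.get(block)
    if above is not None:
        clear(above, top, moves)        # what's on top may itself be covered
        moves.append((above, "table"))  # move the now-clear block to the table
        top[block] = None
    return moves

# Stack: C on B on A.  Clearing A takes two moves.
top = {"A": "B", "B": "C", "C": None}
print(clear("A", top, []))  # [('C', 'table'), ('B', 'table')]
```

The abstract's point is that once the state is only partially known,
even this little routine demands conditional and iterative plans that
no planner of the day could synthesize.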
------------------------------
End of AIList Digest
********************