AIList Digest Thursday, 24 Oct 1985 Volume 3 : Issue 155
Today's Topics:
Query - AI and Responsibility Panel,
Literature - AI Book by Jackson,
Philosophy - MetaPhilosophers Mailing List,
News - New Jersey Regional AI Colloquium Series,
Logic - Modus Ponens,
AI Tools - AI Workstations,
Opinion - SDI Software and AI Hype &
Problems with Current Knowledge-Based Systems
----------------------------------------------------------------------
Date: Tue, 22 Oct 85 22:02:00 PDT
From: Richard K. Jennings <jennings@AEROSPACE.ARPA>
Subject: AI & Responsibility
After reading comments on this net concerning the
responsibility of AI systems, I finally got around to
looking into the IJCAI proceedings. There was evidently
a pretty lively panel discussion between people (one
lawyer) who think that computers are the next group
to be enfranchised as people (following blacks and
women) and others (AI researchers) who tended to
argue that computers are unreliable and bear close
watching.
Anybody out there attend the real thing and
care to comment on how the oral discussion went? Any
other comments on the Proceedings text? (pp. 1260+ in
Vol. II, IJCAI '85).
Rich.
------------------------------
Date: Wed, 23 Oct 1985 00:36 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: AI Book by Jackson
I am reading Jackson's AI book. It's very good, particularly with
respect to the early decades of AI. I have seen no better way to get
a picture of all the ideas of the 1960s, which many students don't
know and do not always re-invent either.
------------------------------
Date: Tue 22 Oct 85 22:33-EDT
From: Glen Daniels <MLY.G.DANIELS%MIT-OZ@MIT-MC.ARPA>
Subject: New mailing list
MetaPhilosophers%MIT-OZ@MIT-MC.ARPA
Discussion of personal philosophies, cosmologies, and metaphysical things.
The place to air your ideas (or see others) on life, why we're here, what
Mind is (as opposed to Brain), where our "selves" come from, what the
universe is, what God is, and anything else in a metaphysical/philosophical
vein.
Send mail to MetaPhilosophers-Request%MIT-OZ@MIT-MC for more
information or to be added.
Everyone is welcome!
--Gub (The MetaModerator)
------------------------------
Date: 23 Oct 85 15:59:04 EDT
From: DRASTAL@RED.RUTGERS.EDU
Subject: New Jersey regional AI colloquium series
Dear Colleague,
During the last IJCAI, it became clear to me that keeping in touch
with other members of the AI community is only getting harder. Networks
are not the right communication medium for reporting new work in progress,
and the major conferences have grown too large for lively exchange. Yet,
there are quite enough of us in the central New Jersey area who have
something to say about our work in AI.
This is why Dr. Yousry and I have decided to parent an informal
colloquium series for researchers in this geographic area, and we invite
your participation. Reports are welcome in areas ranging from theoretical
foundations to implementation techniques. Anyone wishing to present or
host a colloquium should send an abstract to one of us at the address below.
We will coordinate the date, location, and distribution of announcements.
Since this letter creates the series, it is most important that we
hear from you now so that a distribution list can be compiled. Speakers
will be recruited once we develop a critical mass of interested people.
I know that we can look forward to some stimulating chain reactions among
the participants.
George A. Drastal
RCA
Artificial Intelligence Laboratory
Route 38 ATL Building
Moorestown Corporate Center
Moorestown, NJ 08057
609-866-6653
DRASTAL@RUTGERS.ARPA

Mona A. Yousry
AT&T
Engineering Research Center
P.O. Box 900
Princeton, NJ 08540
609-639-2405
ihnp4!erc780!may
------------------------------
Date: 23 Oct 85 14:43:06 GMT
From: Bob Stine <stine@edn-vax.arpa>
Subject: re: modus ponens
Mike Dante writes:
> Suppose a class consists of three people, a 6 ft boy (Tom), a 5 ft girl
>(Jane), and a 4 ft boy (John). Do you believe the following statements?
>
> (1) If the tallest person in the class is a boy, then if the tallest is
> not Tom, then the tallest will be John.
> (2) A boy is the tallest person in the class.
> (3) If the tallest person in the class is not Tom then the tallest
> person in the class will be John.
>
> How many readers believe (1) and (2) imply the truth of (3)?
In answer to the last question - gosh, I sure do. The way the
question is framed, however, blurs the distinctions between several
separate issues.
It would do to review what it means for a statement (or statements)
to imply another, which is just that statement A implies statement
B if and only if statement A and the negation of statement B are
contradictory. If several statements imply B, then their conjunction
is inconsistent with the negation of B.
In the above example, whether (1) and (2) are true, false,
or silly, (1) and (2) imply (3). What we believe about the truth
or falsity of an argument's premises is quite another issue from
the validity of the argument.
What clouds the issue, I think, is that you have introduced a
counterfactual hypothesis in (3) (i.e., "assume that Tom is not
tallest"). If we assumed that Tom were not tallest, then to preserve
consistency, some or all of the other atomic suppositions
(Jane is five feet, etc) would have to go. This would terminate
the support for the argument's premises, and get us off the hook
for asserting its conclusion.
One final point. Note that (1) is equivalent to
(1') A boy is not tallest or Tom is tallest or John is tallest.
From Mike's supposition - Tom is 6, Jane is 5, and John is 4 feet tall -
we can deduce that Tom is tallest. It would be unusual to ask whether
we believe the weaker statement (1') once we have established that Tom
is indeed tallest. This points to another area where questions of
logic part company from questions of belief - logic holds, even where
questions of belief are inappropriate.
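The entailment can also be checked mechanically. Here is a minimal sketch (the proposition names b, t, j are my own encoding, not from the discussion): enumerate every truth assignment and verify that no assignment makes (1) and (2) true while (3) is false.

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Hypothetical encoding of the three statements:
#   b = "the tallest person in the class is a boy"
#   t = "the tallest person is Tom"
#   j = "the tallest person is John"
# (1) is b -> (~t -> j); (2) is b; (3) is ~t -> j.
entailment_holds = all(
    implies(implies(b, implies(not t, j)) and b,  # (1) and (2) ...
            implies(not t, j))                    # ... imply (3)
    for b, t, j in product([True, False], repeat=3)
)
print(entailment_holds)  # True: no assignment makes (1) and (2) true and (3) false
```

This is just modus ponens spelled out over all eight cases; as Stine says, it holds regardless of what we believe about the premises.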
- Bob Stine
------------------------------
Date: 23 Oct 85 08:57:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Son of DO you really need an AI machine?
Since my original eruption provoked some responses (gratifyingly enough),
I thought I'd indulge myself in a few comments on the comments.
> I think, therefore, that one can learn how to teach people
> LISP-machineology if one studies the way people learn games.
> Mostly, they learn from other people. In an informal study I did
> at M.I.T., I discovered that people learn to play rogue by watching
> other people play rogue and by asking more experienced players about
> what they should do whenever a difficult situation came up. People who
> play at home or who never ask don't play as well. Profound discovery.
Yes, I agree completely - we did not have a local Symbolics wizard,
and no doubt this made my life more difficult. The situation is
reflected in the fact that I had to *develop* (as I said earlier)
about 10-12 pages of densely-packed cheat sheets, rather than
*inheriting* them, and then customizing.
> I agree that LISP machines are darned hard to learn; I also agree
> that they're worth the effort. My interests are twofold: why is it
> that intelligent, capable people like Cugini aren't willing to make the
> effort? How can the learning be made easier, or at least, more
> attractive?
The heart of the issue here is that I *did* make the effort and did get to
the point of feeling reasonably comfortable (though I certainly
did not attain wizardom) with the beast - I even knew by heart
how to get out of the inspector! - and even with that I never
felt I was quite getting my money's worth. I believe there are two
factors:
1) My own style of programming leans away from spontaneity - perhaps
I "over-design", but usually for me the coding is "merely" (hah!)
a realization of an existing design. All the features of an
AI-machine are focused on *coding and testing* - but by then
in some sense the real work is done. Debugging aids are
always helpful, of course, but I never really felt the
need for all the exotic editor features. Perhaps also
a lot of these features really come into their own only
with truly large systems (> 5,000 lines).
2) the issue is always, not: is this AI-machine good?, but: is this
AI-machine better than the alternatives? If the alternative is
writing an expert system in BASIC with a line-oriented editor,
then I too would kill to get on a Symbolics. But in my case
(not wholly atypical, I think) the alternative was the use
of VAX/VMS Common Lisp. My previous message discussed the costs
of moving from a familiar, fully functional and maintained
(by someone else, I'm pleased to say - who wants to do tape
backup, anyway?) system to a new standalone machine. I should
re-emphasize a point made in passing last time: the VAX
implementation is very well done - it has a slightly intelligent
editor (even blinks matching parens for you!), a good debugger,
prettyprinter, etc etc.
Now in one sense, the AI-machine advocates can crow: "well, the
only reason you like the VAX is that they stole, er, borrowed
some of the nifty techniques originally developed on AI machines."
True enough, but I'm not giving out prizes for creativity; if
I can get "most" of the advantages of an AI machine, together
with those of a plain old VAX (FORTRAN, Pascal, SNOBOL4, mail
to other people including ailist, laser printer, TEX, a single
set of files, my very own terminal, free (to me) maintenance,
etc..), isn't this the best deal?
John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
Bldg 225 Room A-265
Gaithersburg, MD 20899
phone: (301) 921-2431
------------------------------
Date: Wed, 23 Oct 1985 00:28 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: SDI Software and AI hype
I agree generally with Cowan's analysis of that SDI debate: that I did
not consider the "political software" problem. I don't know about the
split-second decision problem, because you can complain that we can't
program such things, but I'm not so confident about what the President
would do in 30 seconds, either.
In any case, I repeat that I didn't mean to suggest that my opinion on
SDI has any value because I haven't studied it. I was only reacting to
what I thought were political reasons for dragging weak computer
science arguments into the debate. As for SDI itself, my only
considered opinion is based on meeting some of its principal
supporters, and on the whole, they don't seem to be a suitably
thoughtful crowd to deserve the influence they've acquired.
------------------------------
Date: Wed 23 Oct 85 15:18:31-EDT
From: MCCOWN@RADC-TOPS20.ARPA
Subject: Fundamental problems with current knowledge based systems
The following are my views of some of the fundamental problems with
current engineering of knowledge based systems. Most of this is not
new, but perhaps needs restating. These ideas have been stated by
others in other forms before, but I would like to make sure this
captures what has been said.
Primarily, the systems are inflexible. If new information is input to
the system which is not explicitly represented in the knowledge base,
similar though it may be to previous inputs or existing
representations, the system cannot deal with it unless explicitly told
how. This lack of generalization and analogy capability causes a
great bottleneck in the maintenance of the system, requiring experts
and knowledge engineers to continuously update the knowledge base to
reflect the current possibilities of input. In the rapidly changing
real world this is unacceptable.
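A minimal sketch of the inflexibility described above (all names and facts are hypothetical): a lookup-style knowledge base answers only queries whose facts were coded verbatim, so an input similar to existing entries still fails until a knowledge engineer adds it by hand.

```python
# Hypothetical toy knowledge base: every fact must be stated explicitly.
knowledge_base = {
    ("canary", "isa"): "bird",
    ("bird", "can"): "fly",
}

def query(subject, relation):
    """Answer only what is literally represented; no generalization or analogy."""
    return knowledge_base.get((subject, relation), "UNKNOWN")

print(query("canary", "isa"))   # 'bird'
print(query("robin", "isa"))    # 'UNKNOWN' -- similar to canary, but never coded
# The only remedy is manual maintenance by a knowledge engineer:
knowledge_base[("robin", "isa")] = "bird"
print(query("robin", "isa"))    # now 'bird'
```

The last two lines are the maintenance bottleneck in miniature: each new possibility of input requires another explicit entry.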
The lack of ability to generalize and analogize is closely related
to the ever present learning problem, and this not only affects the
knowledge base maintenance problem, but the problem of knowledge
acquisition as well. Currently, an inordinate number of hours of an
expert's time are required in the iterative process of knowledge
acquisition. In addition, the capability of the knowledge engineer to
understand the domain and to program such knowledge directly affects
the quality of the system. A poor knowledge engineer makes for a poor
system, regardless of the quality of the expert.
While learning is a very general term, the type of learning referred to
here is the ability to recognize new and relevant information and its
relation to information already known, and the ability to store that
information and its relationships. While much work has been done in
the representation of knowledge (related work being semantic net
variations, frames, scripts, and MOPs for taxonomic and time ordered
information, as well as production rules for procedural information,
and predicate logic), no effective work has been done in getting
information from a source into these representations, except for the
method currently used - have a human (knowledge engineer) do it.
Automated techniques to implement representations from examples (such
as RuleMaster) are heavily domain dependent and are nothing more than
complex weighted decision tables which work only for certain types of
information. Other generalization work (such as RESEARCHER, IPP) is
also heavily domain dependent, and succeeds in capturing well
only taxonomic information (A is a B, B works for C, etc.) and simple
time-ordered information (A happened before B). Ways to recognize new
related information, and (more important) new relevant related
information are still lacking, as are ways of converting input
information to consistent internal formats (consistent with the
previous existing related knowledge).
Indeed, even in the area of knowledge representation itself, the
representations are often difficult to relate to other representations
in a general way. Such relationships again depend upon the domain and
are rigidly coded, creating difficulty in generalization and analogy.
Time dependence, location dependence, non-monotonic reasoning, and
uncertainty all require the programmer to jump through hoops to find
ways to represent and relate information and procedures, forcing
domain-dependent representations as well.
Many of the problems in distributed and cooperating expert systems
also stem from this apparent requisite to code knowledge in a domain-
dependent fashion. Obviously if there are no generic techniques for
coding knowledge, then a communication scheme must be developed to
transfer information from one knowledge base to another (and as with
any communication, often something is lost in the translation).
It seems apparent that the learning problem is probably the
most critical missing element in current knowledge based system
technology, and that the knowledge representation issue may be the
most critical element in the learning problem. This observation is
not new, and this line of thought has been pursued many times in the
past. What I would like to add to the discussion is my belief that
the current methods of knowledge representation are fundamentally
incapable of solving the learning problem due to their discreteness.
While discrete, cleanly delineated representations are (relatively)
easy to work with, to program, and to implement using the discrete
primitives that binary von Neumann machines offer, these
representations, due to this very same discreteness, do not and
cannot represent reality in any generic and flexible way. I have an
argument as to why, but I would like to hear some criticism of the
assertions set forth here first. No sense arguing from a shaky
foundation. Solutions along the lines (or at least in the spirit of)
coarse coding, distributed representation, etc., seem to be a possible
solution to some of these problems.
Obviously this type of discussion is related to that of "shallow
structure" vs. "deep structure" in natural language processing. We
can represent the shallow structure using these current representation
and relational techniques. However, I feel that these techniques
cannot effectively represent the deep structure, owing to their
inherent discreteness. All relationships must be explicitly
represented in these techniques, and are not implicit in the
representation. Some means of content-addressable representation is
required for implicit relationships between information.
This is not to say that these representations are useless. On the
contrary, they are very useful programming techniques for some types
of analysis problems. They offer insights into problem areas in AI
(such as learning), and they're representative of some real
psychologically functional products of the human mind, and are useful
in representing these products. However, it's time to ask "products
of what", and to approach the "what" (learning) rather than the
product (knowledge).
Thanks for taking the time.
Mike McCown
mccown@RADC-TOPS20.ARPA
RADC/IRDT
Griffiss AFB, NY 13441-5700
------------------------------
End of AIList Digest
********************