AIList Digest           Wednesday, 5 Nov 1986     Volume 4 : Issue 245 

Today's Topics:
Queries - Electronics Troubleshooting & Consciousness &
English-Hangul Machine Translation & Belle,
AI Tools - Object-Oriented Programming,
Ethics - Moral Responsibility,
Logic - Nonmonotonic Reasoning,
Linguistics - Nonsense Tests and the Ignorance Principle,
Philosophy - Artificial Humans & Mathematics and Humanity

----------------------------------------------------------------------

Date: 3 Nov 86 07:23 PST
From: kendall.pasa@Xerox.COM
Subject: Electronics Troubleshooting

For AI Digest 11/3/86:

I am looking for public-domain software pertaining to electronics
troubleshooting. I am interested in any special graphics editors for
electronics schematics and any shells for troubleshooting the circuits.

Thank you.

------------------------------

Date: Tue, 4 Nov 86 00:41 EDT
From: JUDD%cs.umass.edu@CSNET-RELAY.ARPA
Subject: a query for AIList

I would like references to books or articles that argue the
following position:
"The enigma of consciousness is profound but the basis for it
is very mundane. Consciousness reduces to sensation;
sensation reduces to measurement of nervous events;
nervous events are physical events. Since physical events can
obviously occur in `inanimate matter', so can consciousness."

References to very recent articles are my main objective here,
but old works (i.e., pre-computer age) would also be helpful.
sj

------------------------------

Date: Mon, 3 Nov 86 20:53:05 EST
From: "Maj. Ken Rose" (USATRADOC | mort) <krose@BRL.ARPA>
Subject: English-Hangul Machine Translation

I would like to connect with anyone who has had some experience with machine
translation between English and Hangul. Please contact me at krose@brl.arpa
or phone (804) 727-2347. May-oo kam s'ham nee da.

------------------------------

Date: 4 Nov 86 15:11 EST
From: Vu.wbst@Xerox.COM
Reply-to: Vu.wbst@Xerox.COM
Subject: Belle

Does anyone have any information about Belle, the strongest chess-playing
program today? It has an Elo rating of over 2000. It uses tree-pruning
and cutoff heuristics.
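
("Tree-pruning and cutoff heuristics" in chess programs generally means
alpha-beta search. A textbook sketch in Python follows, as an illustration
only -- it is not Belle's actual code, which ran in special-purpose
hardware. The 'children' and 'value' callbacks are placeholders to be
supplied by the caller:)

    # Fail-hard alpha-beta: branches that cannot change the final minimax
    # value are cut off without being searched.
    def alphabeta(pos, depth, alpha, beta, maximizing, children, value):
        kids = children(pos)
        if depth == 0 or not kids:
            return value(pos)
        if maximizing:
            for child in kids:
                alpha = max(alpha, alphabeta(child, depth - 1, alpha, beta,
                                             False, children, value))
                if alpha >= beta:
                    break        # cutoff: the minimizer will avoid this line
            return alpha
        else:
            for child in kids:
                beta = min(beta, alphabeta(child, depth - 1, alpha, beta,
                                           True, children, value))
                if alpha >= beta:
                    break        # cutoff: the maximizer will avoid this line
            return beta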

Is the code available to the public in Interlisp-D? Any pointer would
be very helpful. By the way, what is the complete path for
net.games.chess that was mentioned in V4 #243? Thank you.

Dinh Vu

------------------------------

Date: Mon, 3 Nov 86 12:25:40 -0100
From: Hakon Styri <styri%vax.runit.unit.uninett@nta-vax.arpa.ARPA>
Subject: Re: Is there OOP in AI?

In response to the item in AIList issue No. 231.

Yes, if you have a look in the October issue of SIGPLAN NOTICES,
i.e. the special issue on the Object-Oriented Programming Workshop
at IBM Yorktown Heights, June 1986. At least two papers will be
of interest...

Going back a few years, you may also find some ICOT papers about
OOP in Prolog. Try New Generation Computing, the ICOT/Springer-Verlag
Journal. There are a few papers in Vol. 1, No. 1 (1983).

In the Proceedings of the International Conference on FGCS 1984 there
is another paper, "Unique Features of ESP"; ESP is a logic programming
language with features for OOP.

H. Styri -- Yu No Hoo :-)

------------------------------

Date: Mon 3 Nov 86 10:59:37-PST
From: cas <PHayes@SRI-KL.ARPA>
Subject: moral responsibility

The idea of banning vision research (or any other, for that matter) is
even sillier and more dangerous than Bill Trost points out. The analogy
is not to ban automatic transmissions, but THINKING about automatic
transmissions. And banning thinking about anything is about as dangerous
as any course of action can be, no matter how high-minded or sincerely
morally concerned those who call for it may be.
To be fair to Weizenbaum, he does have a certain weird consistency. He tells
me, for example, that in his view helicopters are intrinsically evil (as
the Vietnam war has shown). One can see how the logic works: if an artifact
is (or can be) used to do more bad than good, then it is evil, and research
on evil things is immoral.
While this is probably not the place to start a debate in theoretical ethics,
I do think that this view, while superficially attractive, simply doesn't stand
up to a little thought, and can be used to label as wicked anything which one
dislikes for any reason at all. Weizenbaum has made a successful career by
systematically attacking AI research on the grounds that it is somehow
immoral, and finding a large and willing audience. He doesn't make me squirm.
Pat Hayes

------------------------------

Date: Mon, 3 Nov 86 11:09 ???
From: GODDEN%gmr.com@CSNET-RELAY.ARPA
Subject: banning machine vision

As regards the banning of research on machine vision (or any technical field)
because of the possible end uses of such technology in "mass murder machines",
I should like to make one relevant distinction. The immediate purpose of
the research as indicated in the grant proposal or as implied by the source of
funding is of the utmost importance. If I were to do research on vision for
the auto industry and published my results which some military type then
schlepped for use in a satellite to zap civilians, I would not feel ANY
responsibility if the satellite ever got used. On the other hand, if I
worked for a defense contractor doing the same research, I certainly would
bear some responsibility for its end use.
-Kurt Godden
godden@gmr.com

------------------------------

Date: Sat, 1 Nov 86 16:58:21 pst
From: John B. Nagle <jbn@glacier.stanford.edu>
Reply-to: jbn@glacier.UUCP (John B. Nagle)
Subject: Re: Non-monotonic Reasoning

Proper mathematical logic is very "brittle", in that two axioms
that contradict each other make it possible to prove TRUE=FALSE, from
which one can then prove anything. Thus, AI systems that use
traditional logic should contain mechanisms to prevent the introduction
of new axioms that contradict ones already present; this is referred
to as "truth maintenance". Systems that lack such mechanisms are prone
to serious errors, even when reasoning about things which are not
even vaguely related to the contradictory axioms; one contradiction
in the axioms generally destroys the system's ability to get useful
results.
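
The explosion is easy to demonstrate mechanically. The toy propositional
resolution prover below (a minimal sketch in Python, with invented literals,
not drawn from any deployed system) proves any goal at all, however
unrelated, the moment a single contradiction enters the axiom set:

    # Toy propositional resolution prover.  A clause is a frozenset of
    # literals; a literal is a string, with '~' marking negation.
    def negate(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    def resolvents(c1, c2):
        # Every clause obtainable by cancelling a complementary pair.
        return [(c1 - {lit}) | (c2 - {negate(lit)})
                for lit in c1 if negate(lit) in c2]

    def provable(axioms, goal):
        # Refutation: add the negated goal and search for the empty clause.
        clauses = {frozenset(c) for c in axioms} | {frozenset({negate(goal)})}
        while True:
            new = set()
            for a in clauses:
                for b in clauses:
                    for r in resolvents(a, b):
                        if not r:
                            return True        # empty clause: goal proved
                        new.add(frozenset(r))
            if new <= clauses:
                return False                   # fixed point: not provable
            clauses |= new

    print(provable([{'p'}], 'q'))              # False: q does not follow
    print(provable([{'p'}, {'~p'}], 'q'))      # True: anything now follows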
Non-monotonic reasoning is an attempt to make reasoning systems
less brittle, by containing the damage that can be caused by
contradiction in the axioms. The rules of inference of non-monotonic
reasoning systems are weaker than those of traditional logic. There
is not full agreement on what the rules of inference should be in
such systems. There are those who regard non-monotonic reasoning as
hacking at the mathematical logic level. Non-monotonic reasoning
lies in a grey area between the worlds of logic and heuristics.

John Nagle

------------------------------

Date: Tue, 4 Nov 86 09:30:29 CST
From: mklein@aisung.cs.uiuc.edu (Mark Klein)
Subject: Nonmonotonic Reasoning

My understanding of nonmonotonic reasoning (NMR) is different from what you
described in a recent AIList posting. As I understand it, NMR differs
from monotonic reasoning in that the size of the set of true theorems
can DECREASE when you add new axioms - thus the set size does not
increase monotonically as axioms are added. It is often implemented
using truth maintenance systems (TMS) that allow something to be justified
by something else NOT being believed. Monotonic reasoning, by contrast,
could be implemented by a TMS that only allows something to be justified by
something else being believed. Default reasoning is an instance of
nonmonotonic reasoning. Nonmonotonic reasoning is thus not synonymous with
truth maintenance.
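
Concretely, a single default rule is enough to show the effect. The sketch
below (Python, using the standard Tweety example invented for illustration,
not code from any actual TMS) derives fewer theorems after an axiom is added:

    # One default rule: bird(x) together with the ABSENCE of ab(x)
    # justifies flies(x) -- firing on a non-belief is the nonmonotonic step.
    def theorems(axioms):
        derived = set(axioms)
        for fact in axioms:
            if fact.startswith('bird('):
                x = fact[len('bird('):-1]
                if 'ab(%s)' % x not in axioms:
                    derived.add('flies(%s)' % x)
        return derived

    t1 = theorems({'bird(tweety)'})
    t2 = theorems({'bird(tweety)', 'ab(tweety)'})
    print('flies(tweety)' in t1)    # True
    print('flies(tweety)' in t2)    # False: adding an axiom lost a theorem
    print(t1 <= t2)                 # False: the theorem set did not grow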

Mark Klein

------------------------------

Date: Thu, 30 Oct 86 17:26:02 est
From: Graeme Hirst <gh%ai.toronto.edu@CSNET-RELAY.ARPA>
Subject: Re: Nonsense tests and the Ignorance Principle

>A couple of years ago, on either this list or Human-Nets, there appeared
>a short multiple-choice test which was written so that one could deduce
>"best" answers based on just the form, not the content, of the questions ...

The Ignorance Principle ("Forget the content, just use the form", or "Why
clutter up the system with a lot of knowledge?") has a long and distinguished
history in AI. In 1975, Gord McCalla and Michael Kuttner published an article
on a program that could successfully pass the written section of the British
Columbia driver's license exam. For example:

QUESTION:
Must every trailer which, owing to size or construction tends to prevent
a driving signal given by the driver of the towing vehicle from being seen
by the driver of the overtaking vehicle be equipped with an approved
mechanical or electrical signalling device controlled by the driver of the
towing vehicle?
ANSWER:
Yes.

In fact, the program was able to answer more than half the questions just by
looking for the words "must", "should", "may", "necessary", "permissible",
"distance", and "required".

The system was not without its flaws. For example:

QUESTION:
To what must the coupling device between a motor-vehicle and trailer
be affixed?
ANSWER:
Yes.

This is wrong; the correct answer is "frame". (This was an early instance of
the frame problem.)
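
A loose reconstruction of the heuristic (a Python sketch based only on the
description above, not McCalla and Kuttner's program) reproduces both the
success and the failure:

    # Answer yes/no exam questions from surface cue words alone, ignoring
    # content entirely.  (The real program also keyed on words like
    # "distance" to select other answer types; those cases are omitted.)
    CUES = {'must', 'should', 'may', 'necessary', 'permissible', 'required'}

    def answer(question):
        words = {w.strip('?.,').lower() for w in question.split()}
        return 'Yes' if words & CUES else 'Unknown'

    print(answer('Must every trailer ... be equipped with an approved '
                 'mechanical or electrical signalling device ...?'))   # Yes
    print(answer('To what must the coupling device be affixed?'))  # Yes (wrong)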

The authors subsequently attempted a similar program for defending PhD theses,
but the results were never published.

REFERENCE

McCalla, G. and Kuttner, M. "An extensible feature-based procedural question
answering system to handle the written section of the British Columbia
driver's examination". _CSCSI/SCEIO Newsletter_ [now published as _Canadian
Artificial Intelligence_], 1(2), February 1975, 59-67.

\\\\ Graeme Hirst University of Toronto Computer Science Department
//// utcsri!utai!gh / gh@ai.toronto.edu / 416-978-8747

------------------------------

Date: 27 Oct 86 09:38 PST
From: Ghenis.pasa@Xerox.COM
Subject: Why we waste time training machines

>why are we wasting time training machines when we could be training
>humans instead. The only reasons that I can see are that intelligent
>systems can be made small enough and light enough to sit on bombs. Are
>there any other reasons?

Why do we record music instead of teaching everyone how to sing? To
preserve what we consider top performance and make it easily available
for others to enjoy, even if the performer himself cannot be present and
others are not inclined to or capable of duplicating his work, but
simply wish to benefit from it.

In the case of applied AI there is the added advantage that the
"recordings" are not static but extendable, so the above question may be
viewed as a variation of "to stand on the shoulders of giants" vs. "to
reinvent the wheel".

This is just ONE of the reasons we "waste our time" the way we do.

-- Pablo Ghenis, speaking for myself (TO myself most of the time)

------------------------------

Date: Mon, 27 Oct 86 17:29 CDT
From: stair surfing - an exercise in oblivion
<"NGSTL1::EVANS%ti-eg.csnet"@CSNET-RELAY.ARPA>
Subject: reasons for making "artificial humans"

>In the last AI digest (V4 #226), Daniel Simon writes:
>
>>One question you haven't addressed is the relationship between
>>intelligence and "human performance". Are the two synonymous? If so,
>>why bother to make artificial humans when making natural ones is so
>>much easier (not to mention more fun)?
>
>This is a question that has been bothering me for a while. When it is
>so much cheaper (and possible now, while true machine intelligence may
>be just a dream) why are we wasting time training machines when we
>could be training humans instead. The only reasons that I can see are
>that intelligent systems can be made small enough and light enough to
>sit on bombs. Are there any other reasons?
>
>Daniel Paul
>
>danny%ngstl1%ti-eg@csnet-relay

First of all, I'd just like to comment that making natural humans may be easier
(and more fun) for men, but it's not necessarily so for women. It also seems
that once we get the procedure for "making artificial humans" down pat, it
would take less time and effort than making "natural" ones, a process which
currently requires at least twenty years (sometimes more or less).

Now to my real point - I can't see how training machines could be considered
a waste of time. There are thousands of useful but meaningless (and generally
menial) jobs which machines could do, freeing humans for more interesting
pursuits (making more humans, perhaps). Of more immediate concern, there are
many jobs of high risk - mining, construction work, deep-sea exploration and so
forth - in which machines, particularly intelligent machines, could assist.
Putting intelligent systems on bombs is a minor use, of immediate concern only
for its funding potential. Debating the ethics of such use is a legitimate
topic, I suppose, but condemning all AI research on that basis is not.


Eleanor Evans
evans%ngstl1%ti-eg@csnet-relay

------------------------------

Date: 28 Oct 86 17:18:00 EST
From: walter roberson <WADLISP7%CARLETON.BITNET@WISCVM.WISC.EDU>
Subject: Mathematics, Humanity


Gilbert Cockton <mcvax!ukc!its63b!hwcs!aimmi!gilbert@seismo.css.gov>
recently wrote:

>This is contentious and smacks of modelling all learning procedures
>in terms of a single subject, i.e. mathematics. I can't think of a
>more horrible subject to model human understanding on, given the
>inhumanity of most mathematics!

The inhumanity of *most* mathematics? I would think that from the rest of
your message, what you would really claim is the inhumanity of *all*
mathematics -- for *all* of mathematics is entirely devoid of the questions
of what is morally right or morally wrong, entirely missing all matters of
human relationships. Mathematical theorems start by listing the assumptions,
and then indicating how those assumptions imply a result. Many humans seem
to devote their entire lives to forcibly changing other people's assumptions
(and not always for the better!); most people don't seem to care about
this process. Mathematics, then, could be said to be the study of single
points, where "real life" requires that humans be able to adapt to a line
or perhaps something even higher-order. And yet that does not render
mathematics "inhumane", for we humans must always react to the single point
that is "now", and we *do* employ mathematics to guide us in that reaction.
Thus, mathematics is not inhumane at all -- at worst, it is a subclass of
"humanity". If you prefer to think of it in such terms, this might be
expressed as "Humanity encompasses something Universal!"

Perhaps, though, there should be a category of study devoted to modelling
the transformation of knowledge as the very assumptions change. A difficult
question, of course, is whether such a study should attempt to, in any
way, model the "morality" of changing assumptions. I would venture that
it should not, but that a formal method of measuring the effects of such
changes would not be out of order.
-----
Gilbert, as far as I can tell, you have not presented anything new in your
article. Unless I misunderstand you completely, your entire argument is based
upon the premise that there is something special about life that negates the
possibility of life being modelled by any formal system, no matter how
complex. As I personally consider that it might be possible to do such a
modelling (note that I don't say that it *is* possible to do such a
modelling), I disregard the entire body of your arguments. The false premise
implies all conclusions.
-----
>Nearer to home, find me
>one computer programmer whose understanding is based 100% on formal procedures.
>Even the most formal programmers will be lucky to be in program-proving mode
>more than 60% of the time. So I take it that they don't `understand' what
>they're doing the other 40% of the time?

I'm not quite sure what you mean to imply by "program-proving mode". The
common use of the word "prove" would imply "a process of logically
demonstrating that an already-written program is correct". The older use of
"prove" would imply "a process of attempting to demonstrate that an already-
written program is incorrect." In either case, the most formal of programmers
spend relatively little time in "program-proving mode", as those programmers
employ formal systems to write programs which are correct in the first place.
It is only those that either do not understand programming, or do not
understand all the implications of the assumptions they have programmed, that
require 60% of their time to "prove" their programs. 60% of their time proving
to others the validity of the approach, perhaps...

walter roberson <WADLISP7@CARLETON.BITNET>

walter

------------------------------

End of AIList Digest
********************
