AIList Digest           Thursday, 25 Sep 1986     Volume 4 : Issue 196 

Today's Topics:
Linguistics - NL Generation,
Logic - TMS, DDB and Infinite Loops,
AI Tools - Turbo Prolog & Xerox vs Symbolics,
Philosophy - Associations & Intelligent Machines

----------------------------------------------------------------------

Date: Mon, 22 Sep 86 10:31:23 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Reply-to: rapaport@sunybcs.UUCP (William J. Rapaport)
Subject: followup on NL generation

In article <MS.lb0q.0.hatfield.284.1@andrew.cmu.edu>
lb0q@ANDREW.CMU.EDU (Leslie Burkholder) writes:
>Has work been done on the problem of generating relatively idiomatic English
>from sentences written in a language for first-order predicate logic?
>Any pointers would be appreciated.
>
>Leslie Burkholder
>lb0q@andrew.cmu.edu


We do some work on NL generation from SNePS, whose semantic networks can
easily be translated into predicate logic. See:

Shapiro, Stuart C. (1982), ``Generalized Augmented Transition Network
Grammars For Generation From Semantic Networks,'' American Journal of
Computational Linguistics 8: 12-25.
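
As a concrete (if crude) contrast with the GATN approach above, here is a
minimal sketch in Common Lisp of naive template-based generation from
predicate-logic terms; the predicates and templates are invented for
illustration and are not from Shapiro's system:

    ;; Naive template generation: map each predicate to a format string.
    ;; Hypothetical illustration only -- real generators do far more.
    (defparameter *templates*
      '((likes "~a likes ~a")
        (gives "~a gives ~a to ~a")))

    (defun generate (wff)
      "Render a predicate-logic term, e.g. (likes \"John\" \"Mary\")."
      (let ((template (second (assoc (first wff) *templates*))))
        (if template
            (apply #'format nil template (rest wff))
            (format nil "~a" wff))))

    ;; (generate '(likes "John" "Mary")) => "John likes Mary"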


William J. Rapaport
Assistant Professor

Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260

(716) 636-3193, 3180

uucp: ..!{allegra,decvax,watmath,rocksanne}!sunybcs!rapaport
csnet: rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet

------------------------------

Date: 20 Sep 86 15:41:26 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: TMS, DDB and infinite loops


    Date: Mon, 08 Sep 86 16:48:15 -0800
    From: Don Rose <drose@CIP.UCI.EDU>

    Does anyone know whether the standard algorithms for belief revision
    (e.g. dependency-directed backtracking in TMS-like systems) are
    guaranteed to halt? That is, is it possible for certain belief
    networks to be arranged such that no set of mutually consistent
    beliefs can be found (without outside influence)?

I think these are two different questions. The answer to the
second question depends less on the algorithm than on whether
the underlying logic is two-valued or three-valued. The answer
to the first question is that halting is only a problem when the
logic is two-valued and the space of beliefs isn't fixed during
belief revision [Satisfiability in the propositional calculus
is decidable (though NP-complete)]. Doyle's TMS goes into
infinite loops. McAllester's won't. De Kleer's ATMS won't loop
either, but that's because it finds all the consistent
labelings, and there just might not be any. Etc, etc; depends
on what you consider ``standard.''
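
To make the decidability point concrete, here is a minimal Common Lisp
sketch (the constraint representation is hypothetical, not any particular
TMS's): with a fixed, finite set of beliefs, brute-force enumeration of
IN/OUT labelings always halts, and may halt having found no consistent
labeling at all.

    ;; Enumerate all 2^n IN/OUT labelings of NODES and keep those
    ;; satisfying every constraint (a predicate over the assignment).
    (defun consistent-labelings (nodes constraints)
      (let ((result '()))
        (labels ((try (todo assignment)
                   (if (null todo)
                       (when (every (lambda (c) (funcall c assignment))
                                    constraints)
                         (push assignment result))
                       (dolist (in '(t nil))
                         (try (cdr todo)
                              (acons (car todo) in assignment))))))
          (try nodes '()))
        result))

    ;; Example: A must be IN, but A and B may not both be IN.
    ;; (consistent-labelings '(a b)
    ;;   (list (lambda (s) (cdr (assoc 'a s)))
    ;;         (lambda (s) (not (and (cdr (assoc 'a s))
    ;;                               (cdr (assoc 'b s)))))))
    ;; => (((B . NIL) (A . T)))    ; the only labeling: A in, B out
    ;; Make the constraints conflict and it returns NIL -- but still halts.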

Walter Hamscher

------------------------------

Date: Sat, 20 Sep 86 15:02 PDT
From: dekleer.pa@Xerox.COM
Subject: TMS, DDB and infinite loops question.

    Does anyone know whether the standard algorithms for belief revision
    (e.g. dependency-directed backtracking in TMS-like systems) are
    guaranteed to halt?

It depends on what you consider the standard algorithms and what you
consider a guarantee. Typically a Doyle-style TMS (NMTMS) comes in two
versions: (1) guaranteed to halt, and (2) guaranteed to halt if there
are no "odd loops". Version (2) is always more efficient and is
commonly used. The McAllester-style (LTMS) or my style (ATMS) always
halt. I don't know if anyone has actually proved these results.

    That is, is it possible for certain belief networks to be arranged
    such that no set of mutually consistent beliefs can be found
    (without outside influence)?

Sure, it's called a contradiction. However, the issue of what to do
about odd loops remains somewhat unresolved. By an odd loop I mean a
node which depends on its own disbelief an odd number of times; the most
trivial example is to give A a non-monotonic justification with an empty
inlist and an outlist of (A).
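
A minimal Common Lisp sketch of that trivial odd loop (the function names
here are invented): the empty inlist is always satisfied, so A's
justification is valid exactly when A is OUT, and a naive relabeling loop
oscillates with no fixed point.

    ;; Justification for A: inlist (), outlist (A).
    (defun justification-valid-p (a-in)
      (not a-in))              ; valid exactly when A is OUT

    (defun relabel (a-in steps)
      "Recompute A's label repeatedly; cap at STEPS to exhibit the loop."
      (dotimes (i steps)
        (let ((new (justification-valid-p a-in)))
          (format t "A: ~:[OUT~;IN~] -> ~:[OUT~;IN~]~%" a-in new)
          (when (eq new a-in)  ; a fixed point would stop us --
            (return))          ; but an odd loop has none
          (setf a-in new))))

    ;; (relabel nil 4) prints OUT -> IN, IN -> OUT, OUT -> IN, IN -> OUT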

------------------------------

Date: Tue 23 Sep 86 14:39:47-CDT
From: Larry Van Sickle <cs.vansickle@r20.utexas.edu>
Reply-to: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: Money back on Turbo Prolog

Borland will refund the purchase price of Turbo Prolog
up to sixty days after purchase. The person I talked to
at Borland was courteous, did not argue, just said to
send the receipt and software.

Larry Van Sickle
U of Texas at Austin
cs.vansickle@r20.utexas.edu 512-471-9589

------------------------------

Date: Tue 23 Sep 86 13:54:29-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Turbo Prolog

For another review of Turbo Prolog see the premier issue of AI Expert.
Darryl Rubin discusses several weaknesses relative to Clocksin-and-Mellish
prologs, but is enthusiastic about the package for users who have no
experience with (i.e., preconceptions from) other prologs. The Turbo
version is very fast, quite compact, well documented, comes with a
lengthy library of example programs, and interfaces to a sophisticated
window system and other special tools. It could be an excellent system
for database retrieval and other straightforward tasks. His chief
reservation was about the "subroutine call" syntax that requires
all legal arities and argument types to be predeclared and does not
permit use of comma as a reduction operator.

-- Ken Laws

------------------------------

Date: 19 Sep 86 14:27:15 GMT
From: sdcrdcf!darrelj@hplabs.hp.com (Darrel VanBuer)
Subject: Re: Dandelion vs Symbolics

A slight echo on the Interlisp file package (partly a response to an
earlier note on problems using MAKEFILE and losing a bunch of
user-entered properties).

Rule 1. Users never call MAKEFILE (in 9 years of Interlisp hacking, I've
probably called it half a dozen times).

So how do you make files? I mainly use two functions:

CLEANUP() or CLEANUP(file1 file2 ...)
    The former does all files containing modifications, the latter only
    the named files. The first thing CLEANUP does is call UPDATEFILES.

FILES?()
    Reports the files which need action to have up-to-date source,
    compiled code, and hardcopies. It also calls UPDATEFILES, which
    will engage you in a dialog asking the location of every "new"
    object.

Most of the ways to define or modify objects are "noticed" by the file
package (e.g. the structure editor [DF, EF, DV ...], SETQ, PUTPROP, etc.,
which YOU type at top level). When an object is noticed as modified, either the
file(s) containing it are marked as needing a remake, or it gets noted as
something to ask you about later. You can write functions which play the
game by calling MARKASCHANGED as appropriate.

Two global variables interact with details of the process:

RECOMPILEDEFAULT
    Usually EXPRS or CHANGES. I prefer the former, but CHANGES has been
    the default in Interlisp-D because EXPRS didn't work before
    Intermezzo.

CLEANUPOPTIONS
    My setting is usually (RC STF LIST), which means: as part of
    cleanup, recompile with compiler flags STF (F means forget the
    source from core; filepkg will automagically retrieve it if you
    edit, etc.), and make a new hardcopy LISTing.

For real fun with filepkg and integration with other tools, try
MASTERSCOPE(ANALYZE ALL ON file1)
MASTERSCOPE(EDIT WHERE ANY CALLS FOO)
CLEANUP()

--
Darrel J. Van Buer, PhD
System Development Corp.
2525 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
!sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA

------------------------------

Date: Sat, 20 Sep 86 10:23:18 PDT
From: larus@kim.Berkeley.EDU (James Larus)
Subject: Symbolics v. Xerox

OK, here are my comments on the Great Symbolics-Xerox debate. [As
background, I was an experienced Lisp programmer and emacs user before
trying a Symbolics.] I think that the user interface on the Symbolics
is one of the poorest pieces of software that I have ever had the
misfortune of using. Despite having a bit-mapped display, Symbolics
forces you into a one-window-on-the-screen-at-a-time paradigm. Not
only are the default windows too large, but some of them (e.g. the
document examiner) take over the whole screen (didn't anyone at
Symbolics think that someone might want to make use of the
documentation without taking notes on paper?). Resizing the windows
(a painful process involving a half-dozen mouse-clicks) results in
unreadable messages and lost information since the windows don't
scroll (to be fixed in Genera 7). I cannot understand how this
interface was designed (was it?) or why people swear by it (instead of
at it).

The rest of the system is better. Their Common Lisp is pretty solid
and avoids some subtle bugs in other implementations. Their debugger
is pretty weak. I can't understand why a debugger that shows the
machine's bytecodes (which aren't even documented for the 3600
series!) is considered acceptable in a Lisp environment. Even C has
symbolic debuggers these days! Their machine coexists pretty well
with other types of systems on an internet. Their local filesystem is
impressively slow.

The documentation is pretty bad, but is getting better. It reminds me
of the earlier days of Unix, where most of the important stuff wasn't
written down. If you had an office next to a Unix guru, you probably
thought Unix was great. If you just got a tape from Bell, then you
probably thought Unix sucked. There appears to be a large amount of
information about the Symbolics that is not written down and is common
knowledge at places like MIT that successfully use the machines.
(Perhaps Symbolics should ship an MIT graduate with their machines.)
We have had a lot of difficulty setting up our machines. Symbolics
has not been very helpful at all.

/Jim

------------------------------

Date: Tue Sep 23 12:31:35 GMT+1:00 1986
From: mcvax!lasso!ralph@seismo.CSS.GOV (Ralph P. Sobek)
Subject: Re: Xerox 11xx vs. Symbolics 36xx vs. ...

I enjoyed all the discussion on the pluses and minuses of these and other
lisp machines. I, myself, am an Interlisp user. Those who know a
particular system well will prefer it over another. All these lisp systems
are quite complex and require a long time, a year or so, before one achieves
proficiency. And as with any language, human or otherwise, one's perception of
one's environment depends upon the tools/semantics that the language provides.

I have always found Interlisp much more homogeneous than Zetalisp. The
packages are structured so as to interface easily. I find the written
documentation also much more structured, and smaller, than the number of
volumes that come with a Symbolics. Maybe Symbolics users only use the
online documentation and thus avoid the pain of trying to find something
in the written documentation. The last time I tried to find something in
the Symbolics manuals -- I gave up, frustrated! :-)

The future generation of lisp machines, after Common Lisp, will be
interesting.

Ralph P. Sobek

UUCP: mcvax!inria!lasso!ralph@SEISMO.CSS.GOV
ARPA: sobek@ucbernie.Berkeley.EDU (automatic forwarding)
BITNET: SOBEK@FRMOP11

------------------------------

Date: 22 Sep 86 12:28:00 MST
From: fritts@afotec
Reply-to: <fritts@afotec>
Subject: Associations -- Comment on AIList Digest V4 #186

The remark has been made on AIList, I think, and elsewhere that computers
do not "think" at all like people do. Problems are formally stated and
stepped through sequentially to reach a solution. Humans find this very
difficult to do. Instead, we seem to think in a series of observations
and associations. Our observations are provided by our senses, but how
these are associated with stored memory of other observations is seemingly
the key to how humans "think". I think that this process of sensory
observation and association runs more or less continuously and we are not
consciously aware of much of it. What I'd like to know is how the decision
is made to associate one observation with another: what rules of association
are used, and are they highly individualized, or is there a more general
pattern? How is it that we acquire large bodies of apparently diverse
observations under simple labels and then make complex decisions using
these simple labels rather than stepping laboriously through a logical
sequence to achieve the same end? There must be some logic to our
associative process or we could not be discussing this subject at all.
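
To make the question concrete, here is one candidate rule of association,
written as a speculative Common Lisp sketch (the features, labels, and
stored memory are invented for illustration): a new observation is
associated with whichever stored label shares the most features with it,
so that later decisions can use the simple label rather than the raw
observations.

    ;; Observations stored under simple labels.
    (defparameter *memory*
      '((dog  . (furry barks four-legs))
        (bird . (feathered sings flies))))

    (defun associate (observation)
      "Return the label whose stored features overlap OBSERVATION most."
      (let ((best nil) (best-score -1))
        (dolist (entry *memory* best)
          (let ((score (length (intersection observation (cdr entry)))))
            (when (> score best-score)
              (setf best (car entry) best-score score))))))

    ;; (associate '(furry four-legs wags-tail)) => DOG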

Steve Fritts
FRITTS@AFOTEC

------------------------------

Date: 22 Sep 86 09:01:50 PDT (Monday)
From: "charles_kalish.EdServices"@Xerox.COM
Subject: Re: intelligent machines

In his message, Peter Pirron sets out what he believes to be necessary
attributes of a machine that would deserve to be called intelligent.
From my experience, I think that his intuitions about what it would take
for a machine to be intelligent are, by and large, pretty widely
shared and as far as I'm concerned, pretty accurate. Where we differ,
though, is in how these intuitions apply to designing and demonstrating
machine intelligence.

Pirron writes: "There is the phenomenon of intentionality and motivation
in man that finds no direct correspondent phenomenon in the computer."
I think it's true that we wouldn't call anything intelligent we didn't believe
had intentions (after all, intelligent is an intentional ascription).
But I think that Dennett (see "Brainstorms") is right in that intentions
are something we ascribe to systems and not something that is built in
or a part of that system. The problem then becomes justifying the use
of intentional descriptions for a machine; i.e. how can I justify my
claim that "the computer wants to take the opponent's queen" when the
skeptic responds that all that is happening is that the X procedure has
returned a value which causes the Y procedure to move piece A to board
position Q?

I think the crucial issue in this question is how much (or whether) the
computer understands. The problem with systems now is that it is too
easy to say that the computer doesn't understand anything, it's just
manipulating markers. That is that any understanding is just
conventional -- we pretend that variable A means the Red Queen, but it
only means that to us (observers) not to the computer. How then could
we ever get something to mean anything to a computer? Some people (I'm
thinking of Searle) would say you can't, computers can't have semantics
for the symbols they process. I found this issue in Pirron's message
where he says:
"Real "understanding" of natural language however needs not only
linguistic competence but also sensory processing and recognition
abilities (visual, acoustical). Language normally refers to objects
which we first experience by sensory input and then name it."
The idea is that you want to ground the computer's use of symbols in some
non-symbolic experience.

Unfortunately, the solution proposed by Pirron:
"The constructivistic theory of human learning of language by Paul
Lorenzen and O. Schwemmer (Erlanger Schule) assumes a "demonstration
act" (Zeigehandlung) constituting a fundamental element of man (child)
learning language. Without this empirical fundament of language you
will never leave the hermeneutic circle, which drove former philosophers
into despair."
(having not read these people, I presume they mean something
like pointing at a rabbit and saying "rabbit") has been demonstrated by
Quine (see "Word and Object") to keep you well within the circle. But
these arguments are about people, not computers, and we do feel (at
least) that the symbols we use and communicate with are rooted in
something non-symbolic. I can see two directions from this.

One is looking for pre-symbolic, biological constraints; something like
Rosch's theory of basic levels of conceptualization. Biologically
relevant, innate concepts, like mother, food, emotions, etc., would
provide the grounding for complex concepts. Unfortunately for a
computer, it doesn't have an evolutionary history which would generate
innate concepts-- everything it's got is symbolic. We'd have to say
that no matter how good a computer got it wouldn't really understand.

The other point is that maybe we do have to stay within this symbolic
"prison-house" after all; even the biological concepts are still
represented, not actual (no food in the brain, just neuron firings). The
thing here is that, even though you could look into a person's brain
and, say, pick out the neural representation of a horse, to the person
with the open skull that's not a representation, it constitutes a horse,
it is a horse (from the point of view of the neural system). And that's
what's different about people and computers. We credit people with a
point of view and from that point of view, the symbols used in
processing are not symbolic at all, but real. Why do people have a
point of view and not computers? Computers can make reports of their
internal states probably better than we can. I think that Nagel has hit it
on the head (in "What Is It Like to Be a Bat?", which I saw in "The
Mind's I") with his notion of "it is (or is not) like something to be
that thing." So it is like something to be a person and presumably is
not like something to be a computer. For a machine to be intelligent
and truly understand it must be like something to be that machine. Only
then can we credit that machine with a point of view and stop looking at
the symbols it uses as "mere" symbols. Those symbols will have content
from the machine's point of view. Now, how does it get to be like
something to be a machine? I don't know but I know it has a lot more to
do with the Turing test than what kind of memory organization or search
algorithms the machine uses.

Sorry if this is incoherent, but it's not a paper so I'm not going to
proof it. I'd also like to comment on the claim that:
"I would claim that the conviction mentioned above {that machines
can't equal humans}, however philosophical or sophisticated its
justification may be, is only the RATIONALIZATION of understandable
but irrational and normally unconscious existential fears and needs
of human beings"
but this message is too long anyway. Suffice it to say that
one can find a nasty Freudian interpretation of any point.

I'd appreciate hearing any comments on the above ramblings.

-Chuck

ARPA: chuck.edservices@Xerox.COM

------------------------------

End of AIList Digest
********************
