AIList Digest Sunday, 19 Oct 1986 Volume 4 : Issue 226
Today's Topics:
Logic Programming - Proof of TMS Termination,
Philosophy - Review of Nagel's Book &
Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: Thu, 16 Oct 86 09:20 EDT
From: David A. McAllester <DAM@OZ.AI.MIT.EDU>
Subject: TMS Query Response
I saw a recent message concerning the termination of
belief revision in a Doyle-style TMS. Some time ago I
proved that determining the existence of a fixed point
for a set of Doyle justifications is NP-complete. It
is possible to give a procedure that always terminates, but
(unless P = NP) all such procedures will have exponential
worst-case behaviour. The proof is given below:
***********************************************************
DEFINITIONS:
An NM-JUSTIFICATION is an "implication" of the form:
(IN-DEPENDENCIES, OUT-DEPENDENCIES) => N
where IN-DEPENDENCIES and OUT-DEPENDENCIES are sets of nodes and N is
the justified node.
A labeling L marks every node as either "in" or "out". An
nm-justification is said to be "active" under a labeling L if every
out-dependency in the justification is labeled "out" and every
in-dependency of the justification is labeled "in".
Let J be a set of nm-justifications and L be a labeling. We say that a
node n is JUSTIFIED under J and L if there is some justification for n
which is active under the labeling L.
A set J of nm-justifications will be called Doyle Satisfiable if there
is a labeling L such that every justified node is "in" and every node
which is not justified is "out".
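Here is a rough sketch, in Python, of how these definitions might be
checked by brute force. The representation (a justification as a
triple of in-dependencies, out-dependencies and justified node; a
labeling as the set of nodes marked "in") and all of the names are
illustrative choices, not part of the formulation above.

    from itertools import combinations

    def active(justification, in_nodes):
        # Active iff every in-dependency is "in" and every
        # out-dependency is "out".
        ins, outs, _node = justification
        return ins <= in_nodes and not (outs & in_nodes)

    def justified(node, justifications, in_nodes):
        # Justified iff some justification for the node is active.
        return any(active(j, in_nodes)
                   for j in justifications if j[2] == node)

    def doyle_satisfies(justifications, nodes, in_nodes):
        # Doyle's condition: the "in" nodes are exactly the
        # justified nodes.
        return all((n in in_nodes) == justified(n, justifications, in_nodes)
                   for n in nodes)

    def doyle_satisfiable(justifications, nodes):
        # Brute force over all 2^|nodes| labelings -- exponential,
        # as the theorem below says any complete procedure must be
        # (unless P = NP).
        for r in range(len(nodes) + 1):
            for subset in combinations(sorted(nodes), r):
                labeling = set(subset)
                if doyle_satisfies(justifications, nodes, labeling):
                    return labeling
        return None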
*******************
THEOREM: The problem of determining the Doyle satisfiability
of a set J of nm-justifications is NP-complete.
*******************
PROOF: Membership in NP is immediate, since a proposed labeling can
be checked against J in polynomial time. For hardness, PSAT can be
reduced to Doyle satisfiability as follows:
Let C be any set of propositional clauses (i.e. a problem in PSAT).
For each atomic proposition symbol P appearing in C let P and
nP be two nodes and construct the following justifications:
({}, {nP}) => P (i.e. if nP is "out" then P is justified)
({}, {P}) => nP (i.e. if P is "out" then nP is justified)
We introduce an additional node F (for "false") and for each clause
(L1 or L2 or ... or Ln) in C we construct the justification:
({nL1, nL2, ..., nLn}, {F}) => F
where the node nLj is the node nP if Lj is the symbol P and nLj is the
node P if Lj is the literal (NOT P).
The set J of nm-justifications constructed in this way is Doyle
satisfiable iff the original set C is propositionally satisfiable.
To verify this claim, first note that if L is a labeling which
satisfies J then exactly one of P and nP is "in": if P is "out" then
the justification ({}, {P}) => nP is active, so nP is justified and
must be "in"; conversely, if P is "in" then P must be justified,
which requires nP to be "out". The same holds with P and nP
interchanged.
Next note that if L is a labeling satisfying J then F must be "out":
every justification for F has F among its out-dependencies, so if F
were "in" then no justification for F would be active, and F would
be "in" without being justified, contradicting the requirement that
every node which is not justified is "out".
Finally note that such a labeling L satisfies J just in case none of
the justifications for F are active, i.e. just in case the
corresponding truth assignment (P true iff the node P is "in")
satisfies every clause of C.
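The construction itself is mechanical. Continuing the Python sketch
above (a clause as a set of (symbol, sign) pairs, with sign false for
a negated literal -- again an illustrative encoding, not canonical):

    def reduce_psat(clauses):
        # Build the set J of nm-justifications whose Doyle
        # satisfiability coincides with satisfiability of the clauses.
        symbols = {p for clause in clauses for (p, _sign) in clause}
        J = []
        for p in sorted(symbols):
            np = "n" + p
            J.append((frozenset(), frozenset({np}), p))   # ({}, {nP}) => P
            J.append((frozenset(), frozenset({p}), np))   # ({}, {P}) => nP
        for clause in clauses:
            # nLj is nP if Lj is the symbol P, and P if Lj is (NOT P).
            ins = frozenset(("n" + p) if sign else p
                            for (p, sign) in clause)
            J.append((ins, frozenset({"F"}), "F"))  # ({nL1,...,nLn}, {F}) => F
        nodes = {"F"} | symbols | {"n" + p for p in symbols}
        return J, nodes

    # E.g. for C = {(P or (NOT Q))}:
    J, nodes = reduce_psat([{("P", True), ("Q", False)}])
    # doyle_satisfiable(J, nodes) then returns a labeling with F "out"
    # (e.g. {"P", "Q"}), while an unsatisfiable C such as
    # {(P), ((NOT P))} yields None.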
**************
David McAllester
------------------------------
Date: 16 Oct 86 07:21:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Book alert
This week's New Republic has a review of Thomas Nagel's (of
What-is-it-like-to-be-a-bat fame) new book, "The View from
Nowhere". For those interested in the philosophical issues
associated with the objective/subjective distinction, it
sounds like it's worth reading.
John Cugini <Cugini@nbs-vms>
------------------------------
Date: 15 Oct 86 23:17:57 GMT
From: mnetor!utzoo!utcsri!utai!me@seismo.css.gov (Daniel Simon)
Subject: Re: Searle, Turing, Symbols, Categories
In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>In response to my article <160@mind.UUCP>, Daniel R. Simon asks:
>
>> 1) To what extent is our discernment of intelligent behaviour
>> context-dependent?...Might not the robot version [of the
>> turing test] lead to the...problem of testers being
>> insufficiently skeptical of a machine with human appearance?
>> ...Is it ever possible to trust the results of any
>> instance of the test...?
>
>My reply to these questions is quite explicit in the papers in
>question:
>The turing test has two components, (i) a formal, empirical one,
>and (ii) an informal, intuitive one. The formal empirical component (i)
>is the requirement that the system being tested be able to generate human
>performance (be it robotic or linguistic). That's the nontrivial
>burden that will occupy theorists for at least decades to come, as we
>converge on (what I've called) the "total" turing test -- a model that
>exhibits all of our robotic and linguistic capacities.
By "nontrivial burden", do you mean the task of defining objective criteria
by which to characterize "human performance"? If so, you are after the same
thing as I am, but I fail to see what this has to do with the Turing test as
originally conceived, which involved measuring AI systems against observers'
impressions, rather than against objective standards. Apparently, you're not
really defending the Turing test at all, but rather something quite different.
Moreover, you haven't said anything concrete about what this test might look
like. On what foundation could such a set of defining characteristics for
"human performance" be based? Would it define those attributes common to all
human beings? Most human beings? At least one human being? How would we
decide by what criteria to include observable attributes in our set of "human"
ones? How could such attributes be described? Is such a set of descriptions
even feasible? If not, doesn't it call into question the validity of seeking
to model what cannot be objectively characterized? And if such a set of
describable attributes is feasible, isn't it an indispensable prerequisite for
the building of a working Turing-test-passing model?
Please forgive my impertinent questions, but I haven't read your articles, and
I'm not exactly clear about what this "total" Turing test entails.
>The informal,
>intuitive component (ii) is that the system in question must perform in a
>way that is indistinguishable from the performance of a person, as
>judged by a person.
>
>Now the only reply I have for the sceptic about (ii) is
>that he should remember that he has nothing MORE than that to go on in
>the case of any other mind than his own. In other words, there is no
>rational reason for being more sceptical about robots' minds (if we
>can't tell their performance apart from that of people) than about
>(other) people's minds. The turing test is ALREADY the informal way we
>contend with the "other-minds" problem [i.e., how can you be sure
>anyone else but you has a mind, rather than merely acting AS IF it had
>a mind?], so why should we demand more in the case of robots? ...
>
I'm afraid I must disagree. I believe that people in general dodge the "other
minds" problem simply by accepting as a convention that human beings are by
definition intelligent. For example, we use terms such as "autistic",
"catatonic", and even "sleeping" to describe people whose behaviour would in
most cases almost certainly be described as unintelligent if exhibited by a
robot. Such people are never described as "unintelligent" in the sense of the
word that we would use to describe a robot that showed exactly the same behaviour
patterns. Rather, we imply by using these terms that the people being
described are human, and therefore *would* be behaving intelligently, but for
(insert neurophysiological/psychological explanation here). This implicit
axiomatic attribution of intelligence to humans helps us to avoid not only
the "other minds" problem, but also the problem of assessing intelligence
despite the effect of what I previously referred to loosely as the "context" of
our observations. In short, we do not really use the Turing test on each
other, because we are all well acquainted with how easily we can be fooled by
contextual traps. Instead, we automatically associate intelligence with human
beings, thereby making our intuitive judgment even less useful to the AI
researcher working with computers or robots.
>As to "context," as I argue in the paper, the only one that is
>ultimately defensible is the "total" turing test, since there is no
>evidence at all that either capacities or contexts are modular. The
>degrees of freedom of a successful total-turing model are then reduced
>to the usual underdetermination of scientific theory by data. (It's always
>possible to carp at a physicist that his theoretic model of the
>universe "is turing-indistinguishable from the real one, but how can
>you be sure it's `really true' of the world?")
>
Wait a minute--You're back to component (i). What you seem to be saying is
that the informal component (component (ii)) has no validity at all apart from
the "context" of having passed component (i). The obvious conclusion is that
component (ii) is superfluous; any system that passes the "total Turing test"
exhibits "human behaviour", and hence must by definition be indistinguishable
from a human to another human.
>> 2) Assuming that some "neutral" context can be found...
>> what does passing (or failing) the Turing test really mean?
>
>It means you've successfully modelled the objective observables under
>investigation. No empirical science can offer more. And the only
>"neutral" context is the total turing test (which, like all inductive
>contexts, always has an open end, namely, the everpresent possibility
>that things could turn out differently tomorrow -- philosophers call
>this "inductive risk," and all empirical inquiry is vulnerable to it).
>
Again, you have all but admitted that the "total" Turing test you have
described has nothing to do with the Turing test at all--it is a set of
"objective observables" which can be verified through scientific examination.
The thoughtful examiner and "comparison human" have been replaced with
controlled scientific experiments and quantifiable results. What kinds of
experiments? What kinds of results? WHAT DOES THE "TOTAL TURING TEST"
LOOK LIKE?
>> 3) ...are there more appropriate means by which we
>> could evaluate the human-like or intelligent properties of an AI
>> system? ...is it possible to formulate the qualities that
>> constitute intelligence in a manner which is more intuitively
>> satisfying than the standard AI stuff about reasoning, but still
>> more rigorous than the Turing test?
>
>I don't think there's anything more rigorous than the total turing
>test since, when formulated in the suitably generalized way I
>describe, it can be seen to be identical to the empirical criterion for
>all of the objective sciences...
>
>Stevan Harnad
>princeton!mind!harnad
One question you haven't addressed is the relationship between intelligence and
"human performance". Are the two synonymous? If so, why bother to make
artificial humans when making natural ones is so much easier (not to mention
more fun)? And if not, how does your "total Turing test" relate to the
discernment of intelligence, as opposed to human-like behaviour?
I know, I know. I ask a lot of questions. Call me nosy.
Daniel R. Simon
"We gotta install database systems
Custom software delivery
We gotta move them accounting programs
We gotta port them all to PC's...."
------------------------------
Date: 14 Oct 86 16:01:44 GMT
From: ssc-vax!bcsaic!michaelm@beaver.cs.washington.edu
Subject: Re: Searle, Turing, Symbols, Categories
In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>...since there is no
>evidence at all that either capacities or contexts are modular.
Maybe I'm reading this out of context (not having read your books or papers),
but could you explain this statement? I know of lots of evidence for the
modularity of various aspects of linguistic behavior. In fact, we have a
parser + grammar of English here that captures a large portion of English
syntax, but has absolutely no semantics (yet). That is, it could parse
Jabberwocky or your article (well, I can't quite claim that it would parse
*all* of either one!) without having the least idea that your article is
meaningful whereas Jabberwocky isn't (apart from an explanation by Humpty
Dumpty). On the other hand, it wouldn't parse something like "book the table
on see I", despite the fact that we might make sense of the latter (because
of our world knowledge). Likewise, human aphasics often show similar deficits
in one or another area of their speech or language understanding. If this
isn't modular, what is? But as I say, maybe I don't understand what you
mean by modular...
--
Mike Maxwell
Boeing Advanced Technology Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 16 Oct 86 06:17:51 GMT
From: rutgers!princeton!mind!harnad@spam.ISTC.SRI.COM (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In reply to a prior iteration, D. Simon writes:
> I fail to see what [your "Total Turing Test"] has to do with
> the Turing test as originally conceived, which involved measuring
> up AI systems against observers' impressions, rather than against
> objective standards... Moreover, you haven't said anything concrete
> about what this test might look like.
How about this for a first approximation: We already know, roughly
speaking, what human beings are able to "do" -- their total cognitive
performance capacity: They can recognize, manipulate, sort, identify and
describe the objects in their environment and they can respond and reply
appropriately to descriptions. Get a robot to do that. When you think
he can do, formally, everything you know people can do, see whether
people can informally tell him apart from people.
> I believe that people in general dodge the "other minds" problem
> simply by accepting as a convention that human beings are by
> definition intelligent.
That's an artful dodge indeed. And do you think animals also accept such
conventions about one another? Philosophers, at least, seem to
have noticed that there's a bit of a problem there. Looking human
certainly gives us the prima facie benefit of the doubt in many cases,
but so far nature has spared us having to contend with any really
artful imposters. Wait till the robots begin giving our lax informal
turing-testing a run for its money.
> What you seem to be saying is that [what you call]
> the informal component [(ii) of the turing test --
> i.e., indistinguishability from a person, as judged by a
> person] has no validity at all apart from the "context" of
> having passed [your] component (i) [i.e., the generation of
> our total cognitive performance capacity]. The obvious
> conclusion is that component (ii) is superfluous.
It's no more superfluous than, say, the equivalent component in the
design of an artificial music composer. First you get it to perform in
accordance with what you believe to be the formal rules of (diatonic)
composition. Then, when it successfully performs according to the
rules, see whether people like its stuff. People's judgments, after
all, were not only the source of those rules in the first place, but
without the informal aesthetic sense that guided them, the rules would
amount to just that -- meaningless acoustic syntax.
Perhaps another way of putting it is that I doubt that what guides our
informal judgments (and underlies our capacities) can be completely
formalized in advance. The road to Total-Turing Utopia will probably
be a long series of feedback cycles between the formal and informal
components of the test before we ever achieve our final passing grade.
> One question you haven't addressed is the relationship between
> intelligence and "human performance". Are the two synonymous?
> If so, why bother to make artificial humans... And if not, how
> does your "total Turing test" relate to the discernment of
> intelligence, as opposed to human-like behaviour?
Intelligence is what generates human performance. We make artificial
humans to implement and test our theories about the substrate of human
performance capacity. And there's no objective difference between
human and (turing-indistinguishably) human-like.
> WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?... Please
> forgive my impertinent questions, but I haven't read your
> articles, and I'm not exactly clear about what this "total"
> Turing test entails.
Try reading the articles.
******
I will close with an afterthought on "blind" vs. "nonblind" turing
testing that I had after the last iteration:
In the informal component of the total turing test it may be
arguable that a sceptic would give a robot a better run for its money
if he were pre-alerted to the possibility that it was a robot (i.e., if the
test were conducted "nonblind" rather than "blind"). That way the robot
wouldn't be inheriting so much of the a priori benefit of the doubt that
had accrued from our lifetime of successful turing-testing of biological
persons of similar appearance (in our everyday informal solutions to
the "other-minds" problem). The blind/nonblind issue does not seem critical
though, since obviously the turing test is an open-ended one (and
probably also, like all other empirical conjectures, confirmable only
as a matter of degree); so we probably wouldn't want to make up our minds
too hastily in any case. I would say that several years of having lived
amongst us, as in the sci-fi movies, without arousing any suspicions -- and
eliciting only shocked incredulity from its close friends once the truth about
its roots was revealed -- would count as a pretty good outcome on a "blind"
total turing test.
Stevan Harnad
princeton!mind!harnad
------------------------------
End of AIList Digest
********************