AIList Digest           Thursday, 10 Oct 1985     Volume 3 : Issue 141 

Today's Topics:
Psychology & Logic - Counterexample to Modus Ponens,
AI Tools - DO you really need an AI machine?

----------------------------------------------------------------------

Date: Mon, 7 Oct 85 14:07 EDT
From: Stephen G. Rowley <SGR@SCRC-PEGASUS.ARPA>
Subject: Formal Logic

    Date: Wed, 2 Oct 85 14:11:34 edt
    From: John McLean <mclean@nrl-css.ARPA>
    Subject: Re: A Counterexample to Modus Ponens

    (1) If a Republican wins the election then if the winner is not Ronald
    Reagan, then the winner will be John Anderson.

    (1a) If a Republican wins the election and the winner is not Ronald
    Reagan, then the winner will be John Anderson.

    I would like to see some further discussion of this since I'm afraid that
    I don't see the difference between (1) and (1a) either. Certainly there
    is no difference with respect to inferential power as far as classical
    logic is concerned.


There's no difference. (At least, not logically. Interpreting people's
judgements about probabilities from natural language statements is an
extremely subtle art, as I believe Chris said in a reply.)

[Notation: -> means "implies", ~ means "not", & means "and", | means
"inclusive or".]

Let R = "a Republican wins the election"
RR = "Ronald Reagan wins the election"
JA = "John Anderson" wins the election"

[1] R -> [ ~RR -> JA ]

[1a] [ R & ~RR ] -> JA

Of course, p -> q is the same as ~p | q. Then [1] and [1a] both
transform to

~R | RR | JA
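
Here is a small brute-force check of that equivalence, sketched in Python
(the helper name "implies" and the variable names are only illustrative):
enumerate every truth assignment and confirm that [1], [1a], and
~R | RR | JA always agree.

    # Enumerate all 8 truth assignments and compare the three forms.
    from itertools import product

    def implies(p, q):
        # material conditional: p -> q is the same as ~p | q
        return (not p) or q

    for R, RR, JA in product([False, True], repeat=3):
        f1  = implies(R, implies(not RR, JA))   # [1]  R -> (~RR -> JA)
        f1a = implies(R and (not RR), JA)       # [1a] (R & ~RR) -> JA
        dnf = (not R) or RR or JA               # ~R | RR | JA
        assert f1 == f1a == dnf

    print("all 8 assignments agree")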

------------------------------

Date: Mon, 7 Oct 85 17:39:03 edt
From: pugh@GVAX.CS.CORNELL.EDU (William Pugh)
Subject: Re: A Counterexample to Modus Ponens


>> In a recent issue of AIList, John McLean cited an article about
>> inconsistencies in public opinion that apparently said:
>>
>> Before the 1980 presidential election, many held the two beliefs
>> below:
>>
>> (1) If a Republican wins the election then if the winner is not
>> Ronald Reagan, then the winner will be John Anderson.
>>
>> (2) A Republican will win the election.
>>
>> Yet few if any of these people believed the conclusion:
>>
>> (3) If the winner is not Reagan then the winner will be Anderson.
>>
Just to throw my two bits in:

First, from probability, some background:

P(x) is the probability of x

and P(x|y) is the probability of x given that y is true.

And by Bayes' theorem,

P(x|y) = P(x) P(y|x) / P(y)

The beliefs stated above can be rephrased as follows:

Let RW stand for "
A Republican wins"
Let RR stand for "
Ronald Reagan wins"
Let JA stand for "
John Anderson wins"

(1) P( ~RR => JA | RW) = 1
(2) P(RW) = 0.96 (for the sake of argument - high at any rate)

and we wish to find

(3) P(JA|~RR)

well, from Bayes' theorem,

P(JA|~RR) = P(JA) P(~RR|JA) / P(~RR) = P(JA) / P(~RR)

For example, if P(JA) = 0.01 and P(RR) = 0.95,
then P(JA|~RR) = 0.2
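
The same arithmetic as a short Python sketch (the 0.01 and 0.95 figures
are only assumed for the sake of argument, as noted above):

    # Anderson winning is taken to exclude Reagan winning, so
    # P(~RR|JA) = 1 and P(JA & ~RR) = P(JA).
    p_JA = 0.01                          # assumed prior: Anderson wins
    p_RR = 0.95                          # assumed prior: Reagan wins

    p_JA_given_not_RR = p_JA / (1.0 - p_RR)
    print(round(p_JA_given_not_RR, 2))   # 0.2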

Note that we have not used (1) at all; we have only made the (obvious)
assumption, similar to (1), that if Anderson wins, Reagan cannot win.

Now, to try to figure out what went wrong in the original example,
consider:

Given P(B|A) and P(A),
P(A&B) = P(B|A)P(A)

HOWEVER, P(A&B) <= P(B)
since P(B) = P(A&B)/P(A|B)

THEREFORE, if
A=>B with high probability
and A with high probability
THEN, B with high probability
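
A quick numerical illustration of that bound (the figures here are made
up purely for the example):

    # If P(B|A) and P(A) are both high, then
    # P(B) >= P(A & B) = P(B|A) * P(A) forces P(B) to be high as well.
    p_A         = 0.96   # e.g. "a Republican will win"
    p_B_given_A = 0.99   # e.g. "if a Republican wins, then ..."

    lower_bound_on_p_B = p_B_given_A * p_A
    print(round(lower_bound_on_p_B, 4))   # 0.9504 -- P(B) can be no smaller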

Now, going back to our example, we see that the original
conclusion:

>> (3) If the winner is not Reagan then the winner will be Anderson.

is almost certainly true. Although non-obvious, this is because
(3) is true if Reagan wins.

The problem, therefore, is that people do not use the "standard"
definition of implication. By "if A then B" people tend to think
"given that A is true, B is true" - if A is false, the validity of
the statement is not verified one way or the other.
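
The gap between the two readings can be made concrete with the same
illustrative figures as above. The "standard" (material) reading of
"if the winner is not Reagan then it is Anderson" is true whenever Reagan
wins, so its probability is P(RR or JA); the reading people actually use
is P(JA given ~RR). A short sketch:

    # Two readings of "if not RR then JA", with RR and JA treated as
    # mutually exclusive outcomes (illustrative figures only).
    p_RR = 0.95
    p_JA = 0.01

    material    = p_RR + p_JA                   # P(RR or JA), ~0.96
    conditional = round(p_JA / (1 - p_RR), 2)   # P(JA given ~RR), 0.2
    print(material, conditional)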


You can find more on Bayesian Inference in "Introduction to
Artificial Intelligence" by Charniak and McDermott, or in many
other sources.

I have not yet figured out how to make Bayesian Inference work
with this style of implication, but it is obvious that it
requires some form of special treatment. I'll let you know
if I figure anything out.


Bill Pugh
Cornell University
..{uw-beaver|vax135}!cornell!pugh
607-256-4934,ext5

------------------------------

Date: Tue, 8 Oct 85 13:03 EDT
From: Carole D Hafner <HAFNER%northeastern.csnet@CSNET-RELAY.ARPA>
Subject: More on Modus Ponens

Several ideas have been proposed to explain the fact that many people
in 1980 would agree to the following:

1. If a Republican wins the election then if it is not RR then it will be JA.
2. A Republican will win the election.

But they would not agree to the apparent logical consequence:

3. If RR does not win the election then JA will win.

Does this mean that modus ponens is not a rule of common sense reasoning? NO.

The problem is due to the fact that "
a republican" in the
first sentence has an "
intensional" meaning, while in the second it
had (to most people) an extensional meaning.

In other words, people believed:

(there-exists X) [win(X,election) & party(X,republican)]

and not

(forall X) [win(X,election) --> party(X,republican)]

It is the second interpretation of the indefinite noun phrase that
gives rise to the conclusion in (3).

Carole Hafner
hafner@northeastern

------------------------------

Date: Tue 8 Oct 85 10:26:00-PDT
From: EDWARDS@SRI-AI.ARPA
Subject: Equivocation (?) in "failure" of modus ponens


As far as I have been able to determine, in conversation with Todd
Davies and Marcel Schoppers, the apparent failure of modus ponens rests
on a subtle point about the understanding of "if-then".

If all conditionals are taken as truth-functional, then most people
would have believed the premises *and* the conclusion:

(1) If a Republican wins then if Reagan doesn't win then Anderson will
win.

(2) A Republican will win.

Therefore,

(3) if Reagan doesn't win then Anderson will win.


For (3), when read truth-functionally, is equivalent to:

(4) Either Reagan will win or Anderson will win

which in turn follows truth-functionally from:

(5) Reagan will win

which most people believed.
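
That chain can be checked mechanically. A minimal truth-table sketch
(Python; the variable names are only illustrative):

    # Check that (3) and (4) are truth-functionally equivalent, and
    # that (4) follows from (5).
    from itertools import product

    for RR, JA in product([False, True], repeat=2):
        s3 = (not (not RR)) or JA   # (3) "if not RR then JA", material reading
        s4 = RR or JA               # (4) "either RR or JA"
        assert s3 == s4             # (3) and (4) always agree
        if RR:                      # whenever (5) "RR" holds ...
            assert s4               # ... (4) holds as well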

The problem is due to the fact that (3) is not normally understood
truth-functionally; it is understood as a counterfactual, setting up a
possible situation (or "
mental space"--Fauconnier, *Mental Spaces*) in
which Reagan doesn't win and asking what assumption will make that
situation most like the actual one. The assumption most people would
make, given such a situation, is that the Democrats will win; so they
believed (6), not (3):

(6) If Reagan doesn't win, then the Democrats will win.

The really interesting question here is whether (1) and (2) are
understood truth-functionally. If they were, then the alleged failure
of modus ponens would rest on a simple equivocation. But I don't
think they are. (1) is a counterfactual just like (3) and (6). The
problem is that (3) is understood quite differently when it appears as
the consequent of (1) than when it appears alone. The antecedent of
(1) sets up a mental space in which a Republican wins, by hypothesis.
This affects the understanding of (3) in a way in which a mere factual
belief that a Republican will win (such as is expressed in (2)) does
not. It rules out the consideration of a Democratic victory even in a
counterfactual situation. So when the antecedent of (3) sets up
another mental space inside the first--where it is presupposed that
Reagan doesn't win--the consequent of (6) is ruled out. Inside the
second mental space, *by hypothesis*, a Republican wins but Reagan
doesn't win. Thus, Anderson's victory is the only conclusion
available.

Note that a factual belief with 100% certainty, that a Republican
will win, would have much the same effect as the antecedent of a
counterfactual. If (2) were believed, not merely as very likely but
with absolutely unshakable confidence, then (3) should follow, even if
(1) is only believed with moderate confidence. Thus those who
attributed the problem to difficulties about probability were in a way
right, though this misses the point about understanding of the
conditionals.

This does pose a problem for classical treatments such as David
Lewis's *Counterfactuals*. According to Lewis, modus ponens applied
to a counterfactual conditional is valid. The argument attributed to
McGee seems to refute this.

P.S.: I write the above without having read Vann McGee's article, on
the basis of conversations and reading AIList. I intend to get to
McGee's article in the near future.

------------------------------

Date: 9 Oct 85 10:55:00 EDT
From: "
CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "
CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: DO you really need an AI machine?


> Date: Sun, 06 Oct 85 15:20:34 EDT
> From: "
Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
> Subject: Do I really need an AI Machine?
>
> Dear readers,
> I work at COMSAT LABS, Maryland. We are getting into AI in a big way
> and would like comments, suggestions and answers to the
> following questions, to justify a big investment in an AI machine.
>
> * What are the specific gains of using an AI Machine (like Symbolics)
> over developing AI products/packages on general purpose
> machines like - VAX-11/750-UNIX(4.2BSD), 68000/UNIX etc.
>
... [more questions follow]

Well, I'm not qualified to give detailed answers, but let me
rave mildly over some personal experience and indulge in a
little heresy in hopes of provoking some discussion:

So we're gonna do an expert systems project. So we got a Symbolics 3600,
so that me and one other guy can develop an expert system.

problem no. 1 - Geez, what a learning curve, when coming off
VAX/VMS - I mean, sure, ya gotta learn expert system techniques,
ya gotta learn Common Lisp, but I also gotta learn another file
system and operating system (which is somehow never quite
referred to as such), I gotta learn an editor with more features
than I could use in a million years, I gotta learn window
navigation -- after 3 months, I have 12 dense pages of cheat
sheets, and am just getting to the point of adequacy (operationally
defined as: not getting caught inside the inspector and being unable
to figure out how to get out short of a boot-up).

problem no. 2 - We've got one screen and one keyboard - even with
only two people, it's surprising how often our schedules don't mesh.
Cugini's tentative hypothesis of hardware-sharing: the number of
people wishing to use a shared single-user system at any time is even.

problem no. 3 - So now we've got an AI lab, and whenever I wanna do
something I actually have to STAND UP AND WALK to the machine
(which may or may not be occupied anyway) - so I find myself
conversing with VMS, whose terminal sits conveniently to my right -
and login takes maybe 20 seconds, as opposed to a 3-minute boot-up.

problem no. 4 - and of course we don't have the Symbolics netted
to the VAX yet (or ever?), and so you can kiss data-sharing good-by.
Where should the (English) documentation for our expert system reside?
On the VAX, where I've got editors, formatters (eg runoff), and laser
printers already, or on the Symbolics, where the code lives, but which
at this point has no hard-copy output? What if I get some good Lisp
code over our beloved ailist? Do I key it in at the Symbolics?

Well, you get the idea - there's more to doing expert systems than
telling the forklift where to place your AI machine. Let me forestall
some rebuttals by saying, sure I know some things we're doing wrong,
and yeah, we should net the Symbolics to the VAX and yeah, we should
buy a 3640 or whatever for additional users. But point 1 is there's
a larger-than-I(-and-maybe-you)-suspected investment in
hardware, software, and time to be able to exploit these "power
tools" of AI.

Point 2 is it's not so clear when you need all that performance
that an AI workstation is giving you. If PC's are seen as an
adequate delivery vehicle for an expert system, the assumption
would seem to be that you need performance during development -
but that doesn't seem right either - how fast does an editor
have to be? When you're doing logical testing (as opposed to
performance testing), wouldn't you be dealing with *smaller*
amounts of data, than in operational mode? Why would it not make
more sense to develop code on a relatively small system and
then use the performance of a 3600 for large-scale logic-crunching,
just as you might develop FORTRAN code on a PC, and then run it
on a CYBER? I'm willing to believe I'm wrong about this, but I
don't understand why.

Point 3 is that the reason a lot of CompScis deride PL/I and
embrace, say, Pascal, is that PL/I is big, complicated, clumsy,
gives you everything you ever wanted and several that you didn't,
etc etc, whereas Pascal is small, elegant, well-designed, etc etc.
Any analogies here?

I should say that the Symbolics itself *is* fast, reliable,
and well-documented (but complicated) so I'm not complaining
about Symbolics per se. This is a generic complaint.

So now I'm using VAX LISP (which - shame! - does not yet have
complex numbers, but is otherwise pretty good), and I find that
the less-sophisticated editor is powerful enough for me, there
are reasonable tools for tracing, debugging, pretty-printing,
and I haven't yet been slowed down by performance problems.

What am I missing here? Why am I happier on (sneer) a VAX than
on a glamorous Symbolics? (Replies implying coarse sensitivity
on the part of the writer will, of course, be given the most
serious consideration and then dismissed).

Needless to say, these are my own utterly idiosyncratic views,
and in no way reflect the policy, de jure or de facto, of the
National Bureau of Standards, the Department of Commerce, or the
entire Federal Government.

John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
Bldg 225 Room A-265
Gaithersburg, MD 20899
phone: (301) 921-2431

------------------------------

End of AIList Digest
********************
