AIList Digest            Friday, 24 Oct 1986      Volume 4 : Issue 234 

Today's Topics:
Philosophy - Intelligence, Understanding

----------------------------------------------------------------------

Date: Wed, 22 Oct 86 09:49 CDT
From: From the desk of Daniel Paul
<"NGSTL1::DANNY%ti-eg.csnet"@CSNET-RELAY.ARPA>
Subject: AI vs. RI

In the last AI digest (V4 #226), Daniel Simon writes:

>One question you haven't addressed is the relationship between intelligence and
>"human performance". Are the two synonymous? If so, why bother to make
>artificial humans when making natural ones is so much easier (not to mention
>more fun)?

This is a question that has been bothering me for a while. When it is so much
cheaper (and possible now, while true machine intelligence may be just a dream),
why are we wasting time training machines when we could be training humans
instead? The only reason that I can see is that intelligent systems can be made
small enough and light enough to sit on bombs. Are there any other reasons?

Daniel Paul

danny%ngstl1%ti-eg@csnet-relay

------------------------------

Date: 21 Oct 86 14:43:22 GMT
From: ritcv!rocksvax!rocksanne!sunybcs!colonel@rochester.arpa (Col. G. L. Sicherman)
Subject: Re: extended Turing test

> It is not always clear which of the two components a sceptic is
> worrying about. It's usually (ii), because who can quarrel with the
> principle that a veridical model should have all of our performance
> capacities?

Did somebody call me? Anyway, it's misleading to propose that a
veridical model of _our_ behavior ought to have our "performance
capacities." Function and performance are relative to the user; in a
human context they have no meaning, except to the extent that we can
be said to "use" one another. This context is political rather than
philosophical.

I do not (yet) quarrel with the principle that the model ought to
have our abilities. But to speak of "performance capacities" is
to subtly distort the fundamental problem. We are not performers!


POZZO: He used to dance the farandole, the fling, the brawl, the jig,
the fandango and even the hornpipe. He capered. For joy. Now
that's the best he can do. Do you know what he calls it?
ESTRAGON: The Scapegoat's Agony.
VLADIMIR: The Hard Stool.
POZZO: The Net. He thinks he's entangled in a net.

--S. Beckett, _Waiting for Godot_
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@sunyabvc

------------------------------

Date: 21 Oct 86 14:57:12 GMT
From: ritcv!rocksvax!rocksanne!sunybcs!colonel@rochester.arpa (Col. G. L. Sicherman)
Subject: Re: Searle & ducks

> I. What is "understanding", or "ducking" the issue...
>
> If it looks like a duck, swims like a duck, and
> quacks like a duck, then it is *called* a duck. If you cut it open and
> find that the organs are something other than a duck's, *then*
> maybe it shouldn't be called a duck. What it should be called becomes
> open to discussion (maybe dinner).
>
> The same principle applies to "understanding".

No, this principle applies only to "facts"--things that anybody can
observe, in more or less the same way. If you say, "Look! A duck!"
and everybody else says "I don't see anything," what are you to believe?
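
The quoted principle will be familiar to programmers as "duck typing":
judge a thing by its observable behaviour, not by its internal
construction. A minimal Python sketch of the idea (all names here are
hypothetical, invented for illustration, not from the original posts):

    class Duck:
        def quack(self):
            return "Quack!"

    class MechanicalDuck:
        # Different "organs" inside, same observable behaviour.
        def quack(self):
            return "Quack!"

    def looks_like_a_duck(thing):
        # Judge only by what anybody can observe: does it quack?
        return getattr(thing, "quack", lambda: None)() == "Quack!"

    print(looks_like_a_duck(Duck()))            # True
    print(looks_like_a_duck(MechanicalDuck()))  # True: *called* a duck
    print(looks_like_a_duck(42))                # False: no observable quack

The test inspects only what anybody can observe; cutting the thing
open is a different test entirely.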

If it feels like a bellyache, don't conclude that it's a bellyache.
There may be an inner meaning to deal with! Appendicitis, gallstones,
trichinosis, you've been poisoned, Cthulhu is due any minute ...

This kind of argument always arises when technology develops new
capabilities. Bell: "Listen! My machine can talk!" Epiktistes: "No,
it can only reproduce the speech of somebody else." It's something
new--we must argue over what to call it. Any name we give it will be
metaphorical, invoking an analogy with human behavior, or something
else. The bottom line is that the thing is not a man; no amount of
simulation and dissimulation will change that.


When people talk of Ghosts I don't mention the Apparition by which I
am haunted, the Phantom that shadows me about the streets, the image
or spectre, so familiar, so like myself, which lurks in the plate-
glass of shop-windows, or leaps out of mirrors to waylay me.

--L. P. Smith
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@sunyabvc

------------------------------

Date: 21 Oct 86 16:47:53 GMT
From: ssc-vax!bcsaic!michaelm@BEAVER.CS.WASHINGTON.EDU
Subject: Re: Searle, Turing, Symbols, Categories

>Stevan Harnad writes:
> ...The objective of the turing test is to judge whether the candidate
> has a mind, not whether it is human or drinks motor oil.

In a related vein, if I recall my history correctly, the Turing test has
been applied several times in the past. One occasion was the encounter
between the
New World and the Old. I believe there was considerable speculation on the
part of certain European groups (fueled, one imagines, by economic motives) as
to whether the American Indians had souls. The (Catholic) church ruled that
they did, effectively putting an end to the controversy. The question of
whether they had souls was the historical equivalent to the question of
whether they had mind and/or intelligence, I suppose.

I believe the Turing test was also applied to orangutans, although I don't
recall the details (except that the orangutans flunked).

As an interesting thought experiment, suppose a Turing test were done with a
robot made to look like a human, and a human being who didn't speak English--
both over a CCTV, say, so you couldn't touch them to see which one was soft,
etc. What would the robot have to do in order to pass itself off as human?
--
Mike Maxwell
Boeing Advanced Technology Center
...uw-beaver!uw-june!bcsaic!michaelm

------------------------------

Date: 21 Oct 86 13:29:09 GMT
From: mcvax!ukc!its63b!hwcs!aimmi!gilbert@seismo.css.gov (Gilbert Cockton)
Subject: Re: Searle, AI, NLP, understanding, ducks

In article <1919@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>
>Most so-called "understanding" is the result of training and
>education. We are taught "procedures" to follow to
>arrive at a desired result/conclusion. Education is primarily a
>matter of teaching "procedures", whether it be mathematics, chemistry
>or creative writing. The *better* understood the field, the more "formal"
>the procedures. Mathematics is very well understood, and
>consists almost entirely of "formal procedures".

This is contentious and smacks of modelling all learning procedures
in terms of a single subject, i.e. mathematics. I can't think of a
more horrible subject to model human understanding on, given the
inhumanity of most mathematics!

Someone with as little as a week of curriculum studies could flatten
this assertion instantly. NO respectable curriculum theory holds that
there is a single form of knowledge to which all bodies of human
experience conform with decreasing measures of formal success. In the
UK, it is official curriculum policy to initiate children into
several `forms' of knowledge (mathematics, physical science,
technology, humanities, aesthetics, religion and the other one).
The degree to which "understanding" is accepted as procedural rote
learning varies from discipline to discipline. Your unsupported
equivalence between understanding and formality ("The *better*
understood the field, the more 'formal' the procedures") would not
last long in the hands of social and religious studies, history,
literature, craft/design and technology, or art teachers. Despite
advances in LISP and connection machines, no-one has yet formally
modelled any of these areas to the satisfaction of their skilled
practitioners. I find it strange that AI workers who would struggle
to write a history/literature/design essay to the satisfaction of a
recognised authority are naive enough to believe that they could
program a machine to write one.

Many educational psychologists and experienced teachers would completely
reject your assertions on the grounds that unpersonalised, cookbook-style,
passively-internalised formalisms, far from being a sign of understanding,
actually constitute the exact opposite of understanding. For me, the term
`understanding' cannot be applied to anything that someone has learnt until
they can act on this knowledge in the REAL world (no textbook problems or
ineffective design rituals), justify their action in terms of this
knowledge, and finally demonstrate integration of the new knowledge with
their existing view of the world (put it in their own words).

Finally, your passive view of understanding cannot explain creative
thought. Granted, you say `Most so-called "understanding"', but I
would challenge any view that creative thought is exceptional -
the mark of great and noble scientists who cannot yet be modelled by
LISP programs. On the contrary, much of our daily lives has to be
highly creative because our poor understanding of the world forces us to
creatively fill in the gaps left by our inadequate formal education.
Show me one engineer who has ever designed something from start to
finish 100% according to the book. Even where design codes exist, as
in bridge-building, much is left to the imagination. No formal prescription
of behaviour will ever fully constrain the way a human will act.
In situations where it is meant to, such as the military, folk spend a
lot of time pretending either to have done exactly what they were told
or to have said exactly what they wanted to be done. Nearer to home, find me
one computer programmer whose understanding is based 100% on formal procedures.
Even the most formal programmers will be lucky to be in program-proving mode
more than 60% of the time. So I take it that they don't `understand' what
they're doing the other 40% of the time? Maybe, but if this is the case, then
all we've revealed are differences in our dictionaries. Who gave you the
formal procedure for ascribing meaning to the word "understanding"?

>This leads to the obvious conclusion that humans do not
>*understand* natural language very well.
>The lack of understanding of natural languages is also empirically
>demonstrable. Confusion about the meaning
>of a person's words, intentions etc. can be seen in every interaction
>... over the net!

Words MEAN something, and what they mean is relative to the speakers and
the situation. The lack of formal procedures has NOTHING to do with
breakdowns in inter-subjective understanding. Those are wholly due to an
inability to view and describe the world in terms other than one's own.
--
Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
JANET: gilbert@uk.ac.hw.aimmi ARPA: gilbert%aimmi.hw.ac.uk@cs.ucl.ac.uk
UUCP: ..!{backbone}!aimmi.hw.ac.uk!gilbert

------------------------------

End of AIList Digest
********************
