AIList Digest           Thursday, 24 Jul 1986     Volume 4 : Issue 173 

Today's Topics:
Philosophy - Perception & Understanding,
Humor - Expert Systems Parable

----------------------------------------------------------------------

Date: Mon, 21 Jul 86 14:01:37 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: followup on "understanding yellow"

The original version of the "understanding yellow" problem may be found
in:

Jackson, Frank, "Epiphenomenal Qualia," _Philosophical
Quarterly_ 32(1982)127-136.

with replies in:

Churchland, Paul M., "Reduction, Qualia, and the Direct
Introspection of Brain States," _Journal of Philosophy_
82(1985)8-28.

Jackson, Frank, "What Mary Didn't Know," _Journal of
Philosophy_ 83(1986)291-95.

(One of the reasons I stopped reading net.philosophy was that its
correspondents seemed not to know what was going on in philosophy
journals!)

William J. Rapaport
Assistant Professor

Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260

(716) 636-3193, 3180

uucp: ..!{allegra,decvax,watmath,rocksanne}!sunybcs!rapaport
csnet: rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet

------------------------------

Date: Fri 18 Jul 86 14:57:17-PDT
From: Stephen Barnard <BARNARD@SRI-STRIPE.ARPA>
Subject: internal representations vs. direct perception

Eyal Mozes thinks that direct perception is right on, and that
internal representations either don't exist or aren't important.
I think direct perception is a vague and suspiciously mystical
doctrine that has no logical or physical justification.

Barnard:
>>Consider what happens when we look at a realistic
>>painting. We can, at one level, see it as a painting, or we can see
>>it as a scene with no objective existence whatsoever. How could this
>>perception possibly be interpreted as anything but an internal
>>representation?

Mozes:
>Sorry, I can't follow your argument. Of course, a realistic painting is
>a representation; but it is not an INTERNAL representation. Gibson's
>books do contain long discussions of paintings; but he specifically
>distinguishes between looking at a painting (in which case you are
>perceiving a representation of the object) and directly perceiving the
>object itself.

Barnard's reply:
Look, the painting is a representation, but we don't
perceive it AS a representation --- we perceive it as a scene. The
scene has NO OBJECTIVE EXISTENCE; therefore, we cannot perceive it
DIRECTLY. It exists only in our imaginations, presumably as internal
representations. (How else?) If the painter was skillful, the
representations in our imagination match his intention. To counter
this argument, you must tell me how one can "directly" perceive
something that doesn't exist. Good luck. On the other hand, it is
quite possible to merely represent something that doesn't exist.

Barnard:
>>Gibson emphasized the richness of the visual stimulus,
>>arguing that much more information was available from it than was
>>generally realized. But to go from this observation to the conclusion
>>that the stimulus is in all cases sufficient for perception is clearly
>>not justified.

Mozes:
>Gibson did not deny that there are SOME cases (for example, many
>situations created in laboratories) in which the stimulus is
>impoverished. His point was that these cases are the exception, rather
>than the rule. Even if we agree that in those exceptional cases there
>is some inference from background knowledge, this doesn't justify
>concluding that in the normal cases, where the stimuli do uniquely
>specify the external object, inference also goes on.

To the contrary, ambiguous visual stimuli are not rare exceptions ---
the visual stimulus is ambiguous in virtually EVERY CASE. Gibson was
fond of stereo and optic flow as modes of perception that can
disambiguate static, monocular stimuli (which are clearly ambiguous).
But he simply did not realize that such modalities are themselves
ambiguous. For example, I am not aware of Gibson discussing the
aperture problem, which describes ambiguity in optic flow. Similarly,
depth from stereo is unique once the image-to-image correspondence is
achieved, but, as we know from years of research on computational
stereo, solving the correspondence problem is not easy, primarily due
to the problem of resolving ambiguous matches. Similar problems
occur for every mode of visual perception.
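The matching ambiguity is easy to exhibit concretely. Below is a minimal
sketch, assuming a simple sum-of-absolute-differences matcher (the function
and data are purely illustrative, not any particular stereo system): a
periodic intensity pattern along one epipolar scanline yields several
equally good candidate offsets, so the disparity, and hence the depth, is
not fixed by the stimulus alone.

```python
# Illustrative: ambiguous stereo correspondence on a repeating pattern.

def match_costs(left_patch, right_line, patch_size):
    """Sum-of-absolute-differences cost of left_patch at each offset
    along the right image's epipolar scanline."""
    costs = []
    for offset in range(len(right_line) - patch_size + 1):
        window = right_line[offset:offset + patch_size]
        costs.append(sum(abs(a - b) for a, b in zip(left_patch, window)))
    return costs

# A periodic intensity pattern (think of a picket fence) on one scanline.
right_line = [10, 200, 10, 200, 10, 200, 10, 200]
left_patch = [10, 200]          # the feature we are trying to locate

costs = match_costs(left_patch, right_line, 2)
best = min(costs)
candidates = [i for i, c in enumerate(costs) if c == best]
print(candidates)   # several offsets match equally well: [0, 2, 4, 6]
```

Practical stereo algorithms break such ties with extra assumptions, such as
smoothness or ordering constraints, i.e., with information beyond the
stimulus itself.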

Gibson's hypothesis that the information for perception exists
completely in the stimulus is false, and the entire theory of direct
perception falls apart as a consequence.

------------------------------

Date: Sun, 20 Jul 86 09:25:43 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.ARPA>
Subject: Re: Searle and understanding

>I think the example shows that there are two related meanings
>of "understanding". Certainly, in a formal, scientific sense,
>ETS knows (understands-1) as much about yellow as anyone - all
>the associated wavelengths, retinal reactions, brain-states,
>etc. He can use this concept in formal systems, manipulate it,
>etc. But *something* is missing - ETS doesn't know
>(understand-2) "what it's like to see yellow", to borrow/bend
>Nagel's phrase.
>
> It's this "what it's like to be a subject experiencing X" that
> eludes capture (I suppose) by AI systems. And I think the
> point of the Chinese room example is the same - the system as
> a whole *does* understand-1 Chinese, but doesn't understand-2
> Chinese.

No, I think you're missing Searle's point.

What you call "understanding-2" is applicable only to a very small
class of concepts - to concepts of sensory qualities, which can't be
conveyed verbally. For the concept of a color, you don't even have to
stipulate ETS; any color-blind person with a fair knowledge of physical
optics (and I happen to be such a person) has "understanding-1", but
not "understanding-2", of the concept; I know the conditions which
cause other people to see that color, I can reason about it, but I
don't know what it feels like to see it. But for concepts which don't
directly involve sensory qualities (for example, for understanding a
language) there can be only "understanding-1".

Now, Searle's point is that this "understanding-1" (such as a native
Chinese speaker's understanding of the Chinese language, or my
understanding of colors) involves intentionality; it does not consist
of manipulating uninterpreted symbols by formal rules. That is why he
denies that a computer program can have it.

Those who think Searle sees something "magical" in human understanding
also miss his point. Quite the contrary: he regards understanding as
a completely natural phenomenon, which, like all natural phenomena,
depends on specific material causes. To quote from his paper "Minds,
Brains, and Programs": "Whatever else intentionality is, it is a
biological phenomenon, and it is as likely to be as causally dependent
on the specific biochemistry of its origins as lactation,
photosynthesis, or any other biological phenomena. No one would suppose
that we could produce milk and sugar by running a computer simulation
of the formal sequences in lactation and photosynthesis, but where the
mind is concerned many people are willing to believe in such a miracle
because of a deep and abiding dualism: the mind they suppose is a
matter of formal processes and is independent of quite specific
material causes in the way that milk and sugar are not".

Eyal Mozes

BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ..!ucbvax!eyal%wisdom.bitnet

------------------------------

Date: Sun, 20 Jul 86 17:34:03 PDT
From: kube%cogsci@BERKELEY.EDU (Paul Kube)
Subject: comment on Hayes, V4 #169

Pat Hayes <PHayes@SRI-KL> in AIList V4 #169:
>re: Searle's chinese room
>There has been by now an ENORMOUS amount of discussion of this argument, far
>more than it deserves.

Pat is right, for two reasons: the argument says nothing one way or
the other about the possibility of constructing systems which exhibit
any kind of behavior you like; and the point of the Chinese Room
argument proper--that computation is insufficient for intentionality--
had already been made to most everyone's satisfaction by Block, Fodor,
Rey, and others, by the time Searle went to press. (The questions of
the sufficiency of computation plus causation, or of the sufficiency of
neurobiology, are further issues which have probably not been
discussed more than they deserve.)

>... ultimately
>whether or not he is right will have to be decided empirically, I
>believe.

Searle thinks this too, but it's not obvious what the empirical decision
would be based on. Since behavior and internal structure (by
hypothesis), and material (to avoid begging the question), are no
guide, it would seem that the only way to tell if a silicon system has
intentional states is by being one. The crucial empirical test looks
disturbingly elusive, so far as the brain-based scientific community
is concerned.

> When the robots get to be more convincing, let's
>come back and ask him again ( or send one of them to do it ).

Searle, of course, has committed himself to not being convinced by a
robot, no matter how convincing. But some elaboration of this
scenario is, I think, the right picture of how the question will be
answered (and not `empirically'): as increasingly perfected robots
proliferate, socio-political mechanisms for the establishment of
person-based rights will act in response to the set of considerations
present at the time; eventually lines will be drawn that most folks
can live with, and the practice of literal attribution of
psychological predicates will follow these lines. If this process is
(at least for practical purposes) unpredictable, then only time will
tell if Searle's paper will come to be regarded as a pathetically
primitive racist tract, or as an enlightened contribution to the
theory of the new order.

Paul Kube
kube@berkeley.edu
...ucbvax!kube

------------------------------

Date: Tue 22 Jul 86 13:30:10-PDT
From: Glenn Silverstein <SILVERSTEIN@Sushi.Stanford.EDU>
Subject: A Parable (about AI in large organizations)


Once upon a time, in a kingdom nothing like our own, gold was very
scarce, forcing jewelers to try and sell little tiny gold rings and
bracelets. Then one day a PROSPECTOR came into the capital sporting a
large gold nugget he had found in a hill to the west. As the word went
out that there was "gold in them thar hills", the king decided to take
an active management role. He appointed a "gold task force" which one
year later told the king "you must spend lots of money to find gold,
lest your enemies get richer than you."
So a "Gold Center" was formed, staffed with many spiffy-looking
Ph.D. types who had recently published papers on gold (remarkably similar
to their earlier papers on silver). Experienced prospectors had been
interviewed, but they smelled and did not have a good grasp of gold
theory.
The Center bought a large number of state of the art bulldozers and
took them to a large field they had found that was both easy to drive on
and freeway accessible. After a week of sore rumps, getting dirty, and
not finding anything, they decided they could best help the gold cause
by researching better tools.
So they set up some demo sand hills in clear view of the king's
castle and stuffed them with nicely polished gold bars. Then they split
into various research projects, such as "bigger diggers", for handling
gold boulders if they found any, and "timber-gold alloys", for making
houses from the stuff when gold eventually became plentiful.
After a while the town barons complained loud enough and also got
some gold research money. The lion's share was allocated to the most
politically powerful barons, who assigned it to looking for gold in
places where it would be very convenient to find it, such as in rich
jewelers' backyards. A few bulldozers, bought from smiling bulldozer
salespeople wearing "Gold is the Future" buttons, were time-shared
across the land. Searchers who, in their allotted three days per month
of bulldozer time, could just not find anything in the backyards of
"gold-committed" jewelers were admonished to search harder next month.
The smart money understood that bulldozers were the best digging
tool, even though they were expensive and hard to use. Some backward
prospector types, however, persisted in panning for gold in secluded
streams. Though they did have some success, gold theorists knew that
this was due to dumb luck and the incorporation of advanced bulldozer
research ideas in later pan designs.
After many years of little success, the king decided the whole
pursuit was a waste and cut off all funding. The Center people quickly
unearthed their papers which had said so all along.
The end.

P.S. There really was gold in them thar hills. Still is.

by Robin Hanson (using silverstein@sushi)
[credit to M. Franklin for story ideas]

------------------------------

End of AIList Digest
********************
