AIList Digest Saturday, 4 Jun 1988 Volume 7 : Issue 17
Today's Topics:
Philosophy -
Even Yet More Free Will ...
Who else isn't a science?
grounding of thought in emotion
Souls (in the machine and at large)
Objective standards for agreement on theories
punishment metaphor
----------------------------------------------------------------------
Date: 1 Jun 88 15:36:13 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Constructive Question
What's the difference between Cognitive Science and AI? Will the
recent interdisciplinary shift, typified by the recent PDP work, be
the end of AI as we knew it?
What IS in a name?
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: 1 Jun 88 14:30:38 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Me and Karl Kluge (no flames, no insults, no abuse)
In article <1792@pt.cs.cmu.edu> kck@g.gp.cs.cmu.edu (Karl Kluge) writes:
>> Because so little of our effective knowledge is formalised, we learn
>> in social contexts, not from books. I presume AI is full of relative
>> loners who have learnt more of what they publicly interact with from
>> books rather than from people.
>
>You presume an awful lot. Comments like that show the intellectual level
>of your critique of AI.
I also presume that comparative attitudes to book and social knowledge
are a measurable, and probably fairly stable, feature of someone's
make-up. It would be intriguing to test the hypothesis that AI
researchers place more faith in the ability of text (including
programs) to capture social reality than other academic groups. Now,
does this still have no intellectual respectability?
>
>Well, what kind of AI research are you looking to judge? If you're looking
>at something like SOAR or ACT*, which claim to be computational models of
>human intelligence, then comparisons of the performance of the architecture
>with data on human performance in given task domains can be (and are) made.
You obviously have missed my comments about John Anderson's work and
other psychological research. If AI were all conducted this way,
there would be less to object to.
>If you are looking at research which attempts to perform tasks we usually
>think of as requiring "intelligence", such as image understanding, without
>claiming to be a model of human performance of the task, then one can ask
>to what extent does the work capture the underlying structure of the task?
>how does the approach scale? how robust is it? and any of a number of other
>questions.
OK then. Point to an AI textbook that covers Task Analysis? Point to
work other than SOAR and ACT* where the Task Domain has been formally
studied before the computer implementation? My objection to much work
in AI is that there has been NO proper study of the tasks which the
program attempts to simulate. Vision research generally has very good
psychophysical underpinnings, and I accept that my criticisms do not
apply to this area either. To supply one example, note how the
research on how experts explain came AFTER the dismal failure of rule
traces in expert systems to be accepted as explanation. See Alison
Kidd's work on the unwarranted assumptions behind much (early?) expert
systems work. One reason I did not pursue a PhD in AI was that one
potential supervisor told me that I didn't have to do any empirical
work before designing a system; indeed, I was strongly encouraged NOT
to do any empirical studies first. I couldn't believe my ears. How
the hell can you model what you've never studied? Fiction.
>Mr. Cockton, it is more than a little arrogant to assume that anyone who
>disagrees with you is some sort of unread, unwashed social misfit
When did I mention hygiene? On "unread", this is a trivial charge to
prove: just read through the references in AAAI and IJCAI. AI
researchers are not reading what educational researchers are reading,
something which I can't understand, as they are both studying the same
thing. Finally, anyone who is programming a lot of the time cannot be
studying people as much as someone who never programs.
I never said anything about being a misfit. Modern societies are too
diverse for the word to be used without qualification. Being part of a
subculture, like science or academia, is only a problem when it prevents
comfortable interaction with people from different subcultures. Part
of the subculture of AI is that the intellectual tools of maths and
physics transfer to the study of humans. Part of the subculture of
human disciplines is that they do not. I would be a misfit in AI, AI
types could be misfits in a human discipline. I've certainly seen a
lot of misanthropy and "we're building a better humanity" in recent
postings. Along with last year's debate over "flawed" minds, it's
clear that many posters to this group believe they can do a better job
than whatever made us. But what is it exactly that an AI is going to be
better than? No image of man, no superior AI. I wonder if that's why some
AI people have to run humanity down. It improves the chance of ELIZA
being better than us.
The point I have been making repeatedly is that you cannot study human
intelligence without studying humans. John Anderson, his paradigm
partners, and vision research apart, there is a lot of AI research which has
never been near a human being. Once again, what the hell can a computer
program tell us about ourselves? Secondly, what can it tell us that we
couldn't find out by studying people instead?
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: 1 Jun 88 19:34:02 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Human-human communication
Tom Holroyd brings us around to the inevitable ineffable dilemma.
How can we talk about that which cannot be encoded in language?
I suppose we will have to invent language to do the job.
I presently possess some procedural knowledge which I am unable
to transmit symbolically. I can transmit the <name> of the knowledge,
but I cannot transmit the knowledge itself.
I know how to walk, how to balance on a bicycle, and how to reduce
my pulse. But I can't readily transmit that knowledge in English.
In fact, I don't even know how I know these things.
I suspect I could teach them to others, but not by verbal lecture.
The dog has similar kinds of knowledge. Maybe our friends in robotics
can assure us that there is a code for doing backward somersaults,
but such language is not commonly exchanged over E-mail channels.
--Barry Kort
------------------------------
Date: 3 Jun 88 06:58:13 GMT
From: pasteur!agate!garnet!weemba@ames.arpa (Obnoxious Math Grad Student)
Subject: Who else isn't a science?
In article <3c671fbe.44e6@apollo.uucp>, nelson_p@apollo writes:
>>fail to see how this sort of intellectual background can ever be
>>regarded as adequate for the study of human reasoning. On what
>>grounds does AI ignore so many intellectual traditions?
> Because AI would like to make some progress (for a change!). I
> originally majored in psychology. With the exception of some areas
> in physiological psychology, the field is not a science. Its
> models and definitions are simply not rigorous enough to be useful.
Your description of psychology reminds many people of AI, except
for the fact that AI's models end up being useful for many things
having nothing to do with the motivating application.
Gerald Edelman, for example, has compared AI with Aristotelian
dentistry: lots of theorizing, but no attempt to actually compare
models with the real world. AI grabs onto the neural net paradigm,
say, and then never bothers to check if what is done with neural
nets has anything to do with actual brains.
ucbvax!garnet!weemba Matthew P Wiener/Brahms Gang/Berkeley CA 94720
------------------------------
Date: 3 Jun 88 08:17:00 EDT
From: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Reply-to: "CUGINI, JOHN" <cugini@icst-ecf.arpa>
Subject: Free will - a few quick questions for McDermott:
Drew McDermott writes:
> Freedom gets its toehold from the fact that it is impossible for an
> agent to think of itself in terms of causality.
I think the strength of "impossible" here is that the agent would get
itself into an infinite regress if it tried, or some such.
In any event, isn't the question: Is the agent's "decision" whether
or not to make the futile attempt to model itself in terms of causality
caused or not? I assume McDermott believes it is caused, no?
(unless it is made to depend on some QM process...).
If this decision not to get pointlessly self-referential is caused,
then why call any of this "free will"? I take McDermott's earlier point
that science always involves a technical re-definition of terms which
originated in a less austere context, eg "force", "energy", "light",
etc.
But it does seem to me that the thrust of the term "free will" has always
been along the lines of an *uncoerced* decision. We've learned more
throughout the years about the subtle forms such coercion might take,
and some are willing to allow some kinds of coercion (perhaps "internal")
and not others. (I think coercion from any source means unfreedom).
But McDermott's partially self-referential robot is clearly determined
right from the start (or could be). What possible reason is there
for attributing to it, even metaphorically, ***free*** will?
Why should (pragmatically or logically ?) necessary partial ignorance
of one's own place in the causal nexus be called **freedom**?
Why not call it "Impossibility of Total Self-Prediction" ?
John Cugini <Cugini@ecf.icst.nbs.gov>
------------------------------
Date: Fri, 3 Jun 88 11:37:41 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: grounding of thought in emotion
DS> AIList 7.12
DS> From: dan@ads.com (Dan Shapiro)
DS> Subject: Re: [DanPrice@HIS-PHOENIX-MULTICS.ARPA: Sociology vs Science
DS> Debate]
DS> I'd like to take the suggestion that "bigger decisions are made less
DS> rationally" one step further... I propose that
DS> irrationality/bias/emotion (pick your term) are *necessary*
DS> corollaries of intelligence . . .
There have been indications in recent years that feelings are the
organizers of the mind and personality, that thoughts and memories are
coded in and arise from subtle feeling-tones. Feelings are the vehicle,
thoughts are passengers. Physiologically, this has to do with links
between the limbic system and the cortical system. References: Gray &
LaViolette in _Man-Environment Systems_ 9.1:3-14, 15-47; _Brain/Mind
Bulletin_ 7.6, 7.7 (1982); S. Sommers (of UMass Boston) in _Journal of
Personality and Social Psychology_ 41.3:553-561; perhaps Manfred Clynes'
stuff on "sentics".
Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>
------------------------------
Date: 3 Jun 88 06:53:00 GMT
From: quintus!ok@sun.com (Richard A. O'Keefe)
Subject: Re: Constructive Question
In article <1313@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> What's the difference between Cognitive Science and AI? Will the
> recent interdisciplinary shift, typified by the recent PDP work, be
> the end of AI as we knew it?
>
> What IS in a name?
To answer the second question first, what's in a name is "history".
I do not expect much in the way of agreement with this, but for me
- Cognitive Science is the discipline which attempts to understand
and (ideally) model actual human individual & small-group behaviour.
People who work in this area maintain strong links with education,
psychology, philosophy, and even AI. Someone who works in this
area is likely to conduct psychological experiments with ordinary
human subjects. It is a science.
- AI is the discipline which attempts to make "intelligent" artefacts,
such as robots, theorem provers, IKBSs, & so on. The primary goal
is to find *any* way of doing things; whether that's the way humans
do it or not is not particularly interesting. Machine learning is a
part of AI: a particular technique may be interesting even if humans
*couldn't* use it. (And logic continues to be interesting even though
humans normally don't follow it.) Someone trying to produce an IKBS may
obtain and study protocols from human experts; in part it is a matter
of how well the domain is already formalised.
AI is a creative art, like Mathematics.
- The "neural nets" idea can be paraphrased as "it doesn't matter if you
don't know how your program works, so long as it's parallel."
If I may offer a constructive question of my own: how does socialisation
differ from other sorts of learning? What is so terrible about learning
cultural knowledge from a book? (Books are, after all, the only contact
we have with the dead.)
------------------------------
Date: Fri, 03 Jun 88 13:13:15 EDT
From: Thanasis Kehagias <ST401843%BROWNVM.BITNET@MITVMA.MIT.EDU>
Subject: Souls (in the machine and at large)
Manuel Alfonseca, in reply to my Death posting, points out that
creation of AI will not settle the Free Will/Existence of Soul argument,
because it is a debate about axioms, and finally says:
>but an axiom. It is much more difficult to convince people to
>change their axioms than to accept a fact.
true... in fact, people often retain their axioms even in the face of
seemingly contradictory facts... but what i wanted to suggest is that
many people object to the idea of AI because they feel threatened by the
possibility that life is something completely under human control.
having to accommodate the obvious fact that humans can destroy human
life, they postulate (:axiom) a soul, an afterlife for the soul, and that
this belongs to a spiritual realm and cannot be destroyed by humans.
this is a negative approach, if you ask me... i am much more attracted to
the idea that there is a soul in Natural Intelligence, there will be a
soul in the AI (if and when it is created) and it (the soul) will be
created by the humans.
Manuel is absolutely right in pointing out that even if AI is created
the controversy will go on. the same phenomenon has occurred in many
other situations where science infringed on religious/metaphysical
dogma. to name a few instances: the Geocentric-Heliocentric theories,
Darwin's theory of Evolution (the debate actually goes back to Lamarck,
Cuvier et al.) and the Inorganic/Organic Chemistry debate. notice that
their chronological order more or less agrees with the shift from the
material to the mental (dare we say spiritual?). anyway, IF (and it is a
big IF) AI is ever created, certainly nothing will be resolved about the
Human Condition. but, i think, it is useful to put this AI debate in
historical perspective, and recognize it as just another phase in the
process of the growth of science.
OF COURSE this is just an interpretation
OF COURSE this is not Science
OF COURSE Science is just an interpretation
Thanasis Kehagias
------------------------------
Date: 2 Jun 88 16:10:13 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: Language-related capabilities (was Re: Human-human
communication)
In his rejoinder to Tom Holroyd's posting, K. Watkins writes:
>Question: Is the difficulty of accurate linguistic expression of emotion at
>all related to the idea that emotional beings and computers/computer programs
>are mutually exclusive categories?
>
>If so, why does the possibility of sensory input to computers make so much
>more sense to the AI community than the possibility of emotional output? Or
>does that community see little value in such output? In any case, I don't see
>much evidence that anyone is trying to make it more possible. Why not?
These are interesting questions, and I hope we can mine some gold along
this vein.
I don't think that it is an accident that emotional states are difficult
to capture in conventional language. My emotions run high when I find
myself in a situation where words fail me. If I can name my emotional
state, I can avoid the necessity of acting it out nonverbally. Trouble
is, I don't know the names of all possible emotional states, least of
all the ones I have not visited before.
Nevertheless, I think it is useful for computer programs to express
emotions. A diagnostic message is a form of emotional expression.
The computer is saying, "Something's wrong. I'm stuck and I don't
know what to do." And sure enough, the computer doesn't do what
you had in mind. (By the way, my favorite diagnostic message is
the one that says, "Your program bombed and I'm not telling you
why. It's your problem, not mine.")
So, as I see it, there is a possibility of emotional output. It is
the behavior exhibited under abnormal circumstances. It is what the
computer does when it doesn't know what to do or how to do what you asked.
--Barry Kort
------------------------------
Date: Fri 3 Jun 88 13:06:50-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Objective standards for agreement on theories
Those concerned about whether theories are given by reality or are social
constructions might look at Clark Glymour's book ``Theory and Evidence''.
It is the basis for his later book, already mentioned on the AIList,
that gives software that determines the ``best theory'' for a set of data.
Glymour states that his intention has been to balance some of the completely
relativist positions that arose with Quine (who said: ``to be is to be
denoted''). His theoretical work has many case studies from Copernicus
to Freud that apply his algorithm to show that it picks the ``winning''
theory.
Conrad Bock
------------------------------
Date: Fri, 3 Jun 88 16:15:25 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: denial presupposes free will
DH> AIList Digest 7.13
DH> From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov (David
DH> Harvey)
DH> Subject: Re: More Free Will
DH> While it is rather disturbing (to me at least) that we may not be
DH> responsible for our choices, it is even more disturbing that by our
DH> choices we are destroying the world. For heaven's sake, Reagan and
DH> friends for years banned a Canadian film on Acid Rain because it was
DH> political propaganda. Never mind the fact that we are denuding forests
DH> at an alarming rate.
You ought to read Gregory Bateson on the inherently adverse effects of
human purposive behavior. He develops the theme in several of the
papers and lectures reprinted in _Steps to an Ecology of Mind_,
especially in the last section on social and ecological issues.
DH> . . . if we with our free will (you said it,
DH> not me) aren't doing such a great job it is time to consider other
DH> courses of action. By considering them, we are NOT adopting them as
DH> some religious dogma, but intelligently using them to see what will
DH> happen.
Awfully hard to deny the existence of free will without using language
that presupposes its existence. Consider your use of "consider,"
"adopt," "intelligently using," and "see what will happen." This sounds
like purposive behavior, aka free will. If you can find a way to make
these claims without presupposing what you're denying, you'll be on
better footing.
Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>
------------------------------
Date: Fri, 3 Jun 88 16:17:14 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: punishment metaphor
In article <1209@cadre.dsl.PITTSBURGH.EDU> Gordon E. Banks writes:
>>Are there any serious examples of re-programming systems, i.e. a system
>>that redesigns itself in response to punishment.
>
>Certainly! The back-propagation connectionist systems are "punished"
>for giving an incorrect response to their input by having the weight
>strengths leading to the wrong answer decreased.
This is an error of logical typing. It may be that punishment results
in something analogous to reduction of weightings in a living organism.
Supposing that hypothesis to have been proven to everyone's
satisfaction, direct manipulation of such analogs to connectionist
weightings (could they be found and manipulated) would not in itself
be punishment.
An analog to punishment would be if the machine reduces its own
weightings (or reduces some and increases others) in response to being
kicked, or to being demeaned, or to being caught making an error
("Bad hypercube! BAD! No user interactions for you for the rest of the
morning!").
Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>
------------------------------
Date: Fri, 3 Jun 88 16:18:57 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: McDermott model of free will
DM> Date: 30 May 88 16:40:28 GMT
DM> From: dvm@yale-zoo.arpa (Drew Mcdermott)
DM> Subject: Free will
DM> More on the self-modeling theory of free will:
DM> . . .
DM> What's pointless is trying to simulate the present period
DM> of time. Is an argument needed here? Draw a mental picture: The robot
DM> starts to simulate, and finds itself simulating ... the start of a
DM> simulation. What on earth could it mean for a system to figure out
DM> what it's doing by simulating itself?
Introspect about the process of riding a bicycle and you shortly fall
over. Model for yourself the process of speaking and you are shortly
tongue-tied. It is possible to simulate what one just was doing, but
only by leaving off the doing for the simulation, resuming the doing,
resuming the simulation, and so on.
What might be proposed is a parallel ("shadow mode") simulation, but
it's always going to be a jog out of step, not much help in real time.
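A toy sketch of both points, assuming nothing but a placeholder agent
(every name here is illustrative):

    def decide_by_self_simulation(state, depth=0):
        # McDermott's regress: to know what it is doing now, the agent
        # simulates itself deciding what it is doing now, which begins
        # with ... the start of another simulation. It never bottoms out.
        return decide_by_self_simulation(state, depth + 1)

    def act(state):
        return "pedal"                   # stand-in for whatever is actually done

    def run_with_shadow(state, steps=5):
        """Shadow mode: act first, then model the action just taken.
        The model is usable, but always one jog behind real time."""
        model = None
        for _ in range(steps):
            action = act(state)          # do the thing
            model = ("just did", action) # then simulate what was *just* done
        return model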
What might be proposed is an ongoing modelling of what is >supposed< to
be going on at present. Behavior then is governed by the model unless
and until interaction in the environment that contradicts the model
exceeds some threshold (another layer of modelling), whereupon an
alternative model is substituted, or the best-fit model is modified
(more work), or the agent deals with the environment directly (a lot of
work indeed).
A great deal of human culture (social construction of reality) may have
the function of introducing and supporting sufficient redundancy to
enable this. Social conventions have their uses. We support one
another in a set of simplifications that we can agree upon and that the
world lets us get away with. (Often there are damaging ecological
consequences.) We make our environment more routine, more like that of
the robot in Drew McDermott's last paragraph ("Free will is not due to
ignorance.")
It's as if free will must be budgeted: if everything is a matter for
decision, nothing can happen. The bumbler is, I suppose, the pathological
extension in that direction, the introspective bicyclist in all things.
For the opposite pathology (the appeal of totalitarianism), see Erich
Fromm, _Escape from Freedom_.
Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>
------------------------------
Date: 3 Jun 88 16:18 PDT
From: hayes.pa@Xerox.COM
Subject: Re: Free Will
Drew McDermott has written a lucidly convincing account of an AI approach to
what could be meant by `free will'. Now, can we please move the rest of this
stuff - in particular, anything which brings in such topics as: a decent world
to live in, Hitler and Stalin, spanking, an omniscient god [sic], ethics,
Hoyle's "Black Cloud", sin, and laws, and purpose, and the rest of what
Vedanta/Buddhists would call the "Illusion", Dualism, or the soul - to somewhere
else; maybe talk.philosophy, but anywhere except here.
Pat Hayes
------------------------------
End of AIList Digest
********************