AIList Digest Thursday, 2 Jun 1988 Volume 7 : Issue 13
Today's Topics:
More Free Will
----------------------------------------------------------------------
Date: 16 May 88 23:06:34 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov (David
Harvey)
Subject: Re: More Free Will
In article <3200017@uiucdcsm>, channic@uiucdcsm.cs.uiuc.edu writes:
> ...
> Since it can't be denied, let's go one step further. Free will has created
> civilization as we know it. People, using their individual free wills,
> chose to make the world the way it is. Minsky chose to write his book,
> I chose to disagree, someone chose to design computers, plumbing, buildings,
> automobiles, symphonies and everything that makes life enjoyable.
> Our reality is our choice, the product of our free will operating on our
> value system.
> ...
If free will has created civilization as we know it, then it must be
accepted with mixed emotions. This means that Hitler, Stalin, some of
the Catholic Popes during the middle ages and others have created a
great deal of havoc that was not good. One of the prime reasons for AI
is to perhaps develop systems that prevent things like this from
happening. If we with our free will (you said it, not me) can't seem to
create a decent world to live in, perhaps a machine without free will
operating within prescribed boundaries may do a better job. We sure
haven't done too well.
>
> ...................... I believe all people, especially leading
> scientific minds, have the wisdom to use their undeniable free will
^^^^^^^^^^^^
> to making choices in values which will promote world harmony. ...
> ...
> Free will explained as an additive blend of determinism and chance
> directly attacks the concept of individual responsibility. Can any
> machine, based on this theory of free will, possibly benefit society
> enough to counteract the detrimental effect of a philosophy which
> implies that we aren't accountable for our choices?
> ...
>
>
> Tom Channic
> University of Illinois
> channic@uiucdcs.cs.uiuc.edu
> {decvax|ihnp4}!pur-ee!uiucdcs!channic
You choose to believe that free will is undeniable. The very fact that
many people do deny it is sufficient to prove that it is deniable. It
is like the existence of God: impossible to prove, and either accepted
or rejected by each individual.
While it is rather disturbing (to me at least) that we may not be
responsible for our choices, it is even more disturbing that by our
choices we are destroying the world. For heaven's sake, Reagan and
friends for years banned a Canadian film on acid rain because it was
"political propaganda". Never mind the fact that we are denuding forests
at an alarming rate. To repeat, if we with our free will (you said it,
not me) aren't doing such a great job it is time to consider other
courses of action. By considering them, we are NOT adopting them as
some religious dogma, but intelligently using them to see what will
happen.
David A Harvey
Utah Institute of Technology (Weber State College)
dharvey@wsccs
------------------------------
Date: 26 May 88 08:12:29 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: Free Will & Self-Awareness
In article <5569@venera.isi.edu> Stephen Smoliar writes:
>>I cite 4 years reading of comp.ai.digest seminar abstracts as evidence.
>>
>Now that Gilbert Cockton has revealed the source of his knowledge of artificial
>intelligence
OK, OK, then I call every AI text I've ever read as well. Let's see:
Nilsson, Charniak and the other one, Rich, Schank and Abelson,
Semantic Information Processing (old, but ...), etc. (I use AI
programming concepts quite often; I just don't fall into the delusion
that they have any bearing on mind).
The test is easy: look at the references. Do the same for AAAI and
IJCAI papers. The subject area seems pretty introspective to me.
If you looked at the proceedings of an Education conference, attended by
people who deal with human intelligence day in, day out (rather than hack
LISP), you would find a wide range of references, not just specialist
Education references.
You will find a broad understanding of humanity, whereas in AI one can
often find none, just logical and mathematical references. I still
fail to see how this sort of intellectual background can ever be
regarded as adequate for the study of human reasoning. On what
grounds does AI ignore so many intellectual traditions?
As for scientific method, the conclusions you drew from a single
statement confirm my beliefs about the role of imagination in AI.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: 26 May 88 09:46:01 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: AIList Digest V7 #6 Reply to McCarthy from a minor menace
>There are three ways of improving the world.
>(1) to kill somebody
>(2) to forbid something
>(3) to invent something new.
This is a strange combination indeed. We could at least add
(4) to understand ourselves.
Not something I would ever put in the same list as murder, but I'm
sure there's a cast iron irrefutable cold logical reason behind it.
But perhaps this is what AI is doing, trying to improve our understanding
of ourselves. But it may not do this, because of
(2) it forbids something,
that is, any approach, any insight, which does not have a computable expression.
This, for me, is anathema to academic liberal traditions and thus
>as long as Mrs. Thatcher is around; I wouldn't be surprised if
>Cockton could persuade Tony Benn.
is completely inaccurate. Knowledge of contemporary politics needs to
be added to the AI ignorance list. Mrs. Thatcher has just cut a lot of IT
research in the UK, and AI is one area which is going to suffer. Tony Benn,
on the other hand, was a member of the government which backed the Transputer
initiative. The Edinburgh Labour council has used the AI department's sign in
some promotional literature for industries considering locating in Edinburgh.
They see the department's expertise as a strength, which it is.
Conservatives such as Thatcher look for immediate value for money in research.
Socialists look for jobs. Academic liberals look for quality. I may only have
myself to blame if this has not been realised, but I have never advocated an
end to all research which goes under the heading of AI. I use some of it in my
own research, and would miss it. I have only sought to attack the arrogance of
the computational paradigm, the "pure" AI tradition where tinkerers play at the
study of humanity. Logicians, unlike statisticians, seem to lack the humility
required to serve other disciplines, rather than try to replace them. There is
a very valuable role for discrete mathematical modelling in human activities,
but like statistics, this modelling is a tool for domain specialists and not
an end in itself. Logic and pure maths, like statistics, are good servants
but appalling masters.
>respond to precise criteria of what should be suppressed
Mindless application of the computational paradigm to
a) problems which have not yielded to stronger methods
b) problems which no other paradigm has yet provided any understanding of.
For b), recall my comment on statistics. If no domain specialism has
any empirical corpus of knowledge, AI has nothing to test itself
against. It is unfalsifiable, and thus likely to invent nothing.
On a), no one in AI should be ignorant of the difficulties in relating
formal logic to ordinary language, never mind non-verbal behaviour and
kinaesthetic reasoning. AI has to make a case for itself based on a
proper knowledge of existing alternative approaches and their problems.
It usually assumes it will succeed spectacularly where other very bright
and dedicated people have failed (see the intro. to Winograd and Flores).
> how they are regarded as applying to AI
"Pure" AI is the application of the computational paradigm to the study of
human behaviour. It is not the same as computational modelling in
psychology, as here empirical research cannot be ignored. AI, by isolating
itself from forms of criticism and insight, cannot share in the
development of an understanding of humanity, because its raison d'etre,
the adherence to a single paradigm, without question, without
self-criticism, without a humble relationship to non-computational
paradigms, prevents it from ever disappearing in the face of its impotence.
>and what forms of suppression he considers legitimate.
It may be partly my fault if anyone has thought otherwise, but you should
realise that I respect your freedom of association, speech and publication.
If anyone has associated my arguments with ideologies which would sanction
repression of these freedoms, they are (perhaps understandably) mistaken.
There are three legitimate forms of "suppression":
a) freely willed diversion of funding to more appropriate disciplines
b) run down of AI departments with distribution of groups across
established human disciplines, with service research in maths. This
is how a true discipline works. It leads to proper humility,
scholarship and eclecticism.
c) proper attention to methodological issues (cf the Sussex
tradition), which will put an end to the sillier ideas.
AI needs to be more self-critical, like a real discipline.
Social activities such as (a) and (b) will only occur if the arguments
with which I agree (they are hardly original) get the better of "pure"
AI's defence that it has something to offer (in which case answer that guy's
request for three big breaks in AI research, you're not doing very well on
this one). It is not so much suppression as withdrawal of encouragement.
>similar to Cockton's inhabit a very bad and ignorant book called "The Question
> of Artificial Intelligence" edited by Stephen Bloomfield, which I
>will review for "Annals of the History of Computing".
Could we have publishing information on both the book and the review please?
And why is it that AI attracts so many bad and ignorant books against it? If
you dive into AI topics, don't expect an easy time. Pure AI is attempting a
science of humanity and it deserves everything it gets. Sociobiology and
behaviourism attracted far more attention. Perhaps it's AI's turn. Every
generation has its narrow-minded theories which need broadening out.
AI is forming an image of humanity. It is a political act. Expect opposition.
Skinner got it; so will AI.
>The referee should prune the list of issues and references to a size that
> the discussants are willing to deal with.
And, of course, encoded in KRL! Let's not have anything which takes effort to
read; otherwise we might as well just go and study instead of program.
>The proposed topic is "AI and free will".
Then AI and knowledge-representation, then AI and Hermeneutics (anyone
read Winograd and Flores properly yet?), then AI and epistemology, then ..
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: 28 May 88 08:13:06 GMT
From: quintus!ok@sun.com (Richard A. O'Keefe)
Subject: AI and Sociology
I believe it was Gilbert Cockton who raised the question "why does AI
ignore Sociology?" Two kinds of answers have been given so far:
(1) AI is appallingly arrogant trash.
(2) Sociology is appallingly arrogant trash.
I want to suggest that there is a straightforward reason for the mutual
indifference (some Sociologists have taken AI research as _subject_
material, but AI ideas are not commonly adopted by Sociologists) which
is creditable to both disciplines.
Lakatos's view of a science is that it is not so much a set of theories as
a research programme: a way of deciding what questions to pursue, what
counts as an explanation, and a way of dealing with puzzles. For example,
he points out that Newton's theory of gravity is in principle unfalsifiable,
and that the content of the theory may be seen in the kinds of explanations
people try to come up with to show that apparent exceptions to the theory
are not real exceptions.
The key step here is deciding what to study.
Both in its application to Robotics and its application to Cognitive
Science, AI is about the mental processes of individuals.
As a methodological basis, Sociology looks for explanations in terms of
social conditions and "forces", and rejects "mentalistic" explanations.
Let me provide a concrete example. One topic of interest in AI is how
a program could make "scientific discoveries". AM and Eurisko are
famous. A friend with whom I have lost contact was working on a program
to try to predict the kinetics of gas phase reactions. Pat Langley's
"BACON" programs are well known.
Scientific discovery is also of interest to Sociology. One book on this
topic (the only one on my shelves at the moment) is
The social basis of scientific discoveries
Augustine Brannigan
Cambridge University Press, 1981
0 521 28163 6
I take this as an *example*. I do not claim that this is all there is
to Sociology, or that all Sociologists would agree with it, or that all
Sociological study is like this. All I can really claim is that I am
interested in scientific discovery from an AI point of view, and when I
went looking for Sociological background this is the kind of thing I found.
Brannigan spends chapter 2 attacking some specific "mentalistic" accounts
of scientific discovery, and in chapter 3 rubbishes the mentalistic
approach completely. If I understand him, his major complaint is that
accounts such as Koestler's "bisociation" fail to be accounts of
*scientific* *discovery*. Indeed, a section of chapter 3 is headed
"Mentalistic models confuse learning with discovery."
It turns out that he is not concerned with the question "how do scientific
discoveries happen", but with the question "what gets CALLED a scientific
discovery, and why?" Which is a very interesting question, but ignores
everything about scientific discovery which is of interest to AI people.
The very reason that AI people are interested in scientific discovery
(apart from immediately practical motives) is that it is a form of learning
in semi-formalised domains. If one of Pat Langley's programs discovers
something that happens not to be true (such as coming up with Phlogiston
instead of Oxygen) he is quite happy as long as human scientists might have
made the same mistake. As I read Brannigan's critical comments on the
"mentalistic" theories he was rubbishing, I started to get excited, seeing
how some of the suggestions might be programmable.
Page 35 of Brannigan:
"... in the social or behavioural sciences we tend to obfuscate the
social significance of familiar phenomena by explaining them in terms
of 'underlying' causes. Though this is not always the case, it is
true with discovery and learning."
This is to reject in principle attempts to explain discovery and learning
in terms of underlying causes.
"... the equivalence of learning and discovery is a _confusion_.
From a social perspective, 'to _learn_' means something quite
different from 'to _discover_'."
Emphasis his. He would classify a rediscovery as mere learning,
which at the outset rejects as uninteresting precisely the aspects that
AI is interested in.
Something which is rather shocking from an AI perspective is found on
page 64:
"... the hallmark of this understanding is the ascription of learning
to some innate abilities of the individual. Common sensically,
learning is measured by the degree of success that one experiences
in performing certain novel tasks and recalling certain past events.
Mackay's ethnographic work suggests, on the contrary, that learning
consists in the institutional ascription of success whereby certain
[performances are] ordered and identified as learning achievements to
the exclusion of other meaningful performances."
Page 66:
"Although as folk members of society we automatically interpret
individual discovery or learning as the outcome of a motivated
course of inference, sociologically we must consider the cognitive
and empirical grounds in terms of which such an achievement is
figured. From this position, cleverness in school is understood,
not as a function of innate mental powers, but as a function of
the context in which the achievements associated with cleverness
are made accountable and remarkable."
To put it bluntly, if we take statements made by some AI people or some
Sociologists at face value, they cast serious doubts on the sanity of
the speakers. But taken as announcements of a research programme to
be followed within the discipline, they make sense.
AI says "minds can be modelled by machines", which is, on the face of it,
crazy. But if we read this as "we propose to study the aspects of mind
which can be modelled by machines, and as a working assumption will suppose
that all of them can", it makes sense, and is not anti-human.
Note that any AI practitioner's claim that the mechanisability of mind is
a discovery of AI is false: that is an *assumption* of AI. You can't
prove something by assuming it!
Sociology says "humans are almost indefinitely plastic and are controlled
by social context rather than psychological or genetic factors", which is,
on the face of it, crazy. But if we read this as "we propose to study the
influence of the social context on human behaviour, and as a working
assumption will suppose that all human behaviour can be explained this way",
it makes sense, and is not as anti-human as it at first appears.
Note that any Sociologist's claim that determination by social forces is
a discovery of Sociology is false: that is an *assumption* of Sociology.
Both research programmes make sense and both are interesting.
However, they make incompatible decisions about what counts as interesting
and what counts as an explanation. So for AI to ignore the results of
Sociology is no more surprising and no more culpable than for carpenters
to ignore Musicology (both have some sort of relevance to violins, but
they are interested in different aspects).
What triggered this message at this particular date rather than next week
was an article by Gilbert Cockton in comp.ai.digest, in which he said
"But perhaps this is what AI is doing, trying to improve our
understanding of ourselves. But it may not do this because of
(2) it forbids something
that is, any approach, any insight, which does not have a computable
expression. This, for me, is anathema to academic liberal traditions ..."
But of course AI does no such thing. It merely announces that
computational approaches to the understanding are part of _its_ territory,
and that non-computational approaches are not. AI doesn't say that a
Sociologist can't explain learning (away) as a function of the social
context, only that when he does so he isn't doing AI.
A while back I sent a message in which I cited "Plans and Situated Actions"
as an example of some overlap between AI and Sociology. Another example
can be found in chapter 7 of
Induction -- Processes of Inference, Learning, and Discovery
Holland, Holyoak, Nisbett, and Thagard
MIT Press, 1986
0-262-08160-1
Perhaps we could have some other specific examples to show why AI should
or should not pay attention to Sociology?
------------------------------
Date: 28 May 88 20:58:32 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!rargyle@tis.llnl.gov (Bob
Argyle)
Subject: Re: Free Will & Self Awareness
In article <5323@xanth.cs.odu.edu>, Warren E. Taylor writes:
[stuff deleted]
> Adults understand what a child needs. A child, on his own, would quickly kill
> himself.
...
> Flame away.
> Warren.
so stop interfering with that child's free will! [W.C.Fields] :-)
We are genetically programmed to protect that child (it may be a
relative...); not so programmed, however, for protecting any
computers running an AI program. AI seems the perfect place to test the
free-will doctrine without the observer interfering with the 'experiment.'
At least one contributor to the discussion has called for an end to AI
because of the effects on impressionable undergraduates being told that
there isn't any free will.
Send Columbus out and if he falls off the edge, so much the better.
IF we get some data on what 'free will' actually is out of AI, then let
us discuss what it means. It seems we either have free will or we
don't; finding out seems indicated after is it 3000 years of talk.vague.
So is the sun orbitting around the earth? this impressionable
undergraduate wants to see some hard data.
Bob @ WSCCS
------------------------------
Date: 28 May 88 21:05:27 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: DVM's request for definitions
In article <894@maize.engin.umich.edu> Brian Holtz writes:
>If you accept definition (1) (as I do), then the only alternative to
>determinism is dualism, which I don't see too many people defending.
Dualism wouldn't necessarily give free will: it would just transfer
the question to the spiritual. Perhaps that is just as deterministic
as the material.
------------------------------
Date: 29 May 88 10:26:38 GMT
From: g.gp.cs.cmu.edu!kck@pt.cs.cmu.edu (Karl Kluge)
Subject: Gilbert Cockton and AI
In response to various posts...
> From: gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
> AI depends on being able to use written language (physical symbol
> hypothesis) to represent the whole human and physical universe. AI
> and any degree of literate-ignorance are incompatible. Humans, by
> contrast, may be ignorant in a literate sense, but knowledgeable in
> their activities. AI fails as this unformalised knowledge is
> violated in formalisation, just as the Mona Lisa is indescribable.
> Philosophically, this is a brand of scepticism. I'm not arguing that
> nothing is knowable, just that public, formal knowledge accounts for
> a small part of our effective everyday knowledge (see Heider).
This shows the extent of your misunderstanding of the premises
underlying AI. In particular, you appear to have a gross
misunderstanding of the "physical symbol hypothesis" (sic).
First, few AI researchers (if any) would deny that there are certain
motor functions or low level perceptual processes which are not
symbolic in nature.
Second, the implicit (and unproven) assumption in the above quote is
that knowledge which is not public is also not formal, and that the
inability to access the contents of an arbitrary symbol structure in the
mind implies the absence of such symbol structures. Nowhere does the
Physical Symbol System Hypothesis imply that all symbol structures are
accessible by the conscious mind, or that all symbols in the symbol
structures will match concepts that map onto words in language.
> Because so little of our effective knowledge is formalised, we learn
> in social contexts, not from books. I presume AI is full of relative
> loners who have learnt more of what they publicly interact with from
> books rather than from people. Well I didn't, and I prefer
> interaction to reading.
You presume an awful lot. Comments like that show the intellectual level
of your critique of AI.
> The question is, do most people WANT a computational model of human
> behaviour? In these days of near 100% public funding of research,
> this is no longer a question that can be ducked in the name of
> academic freedom.
Mr. Cockton, Nature does not give a damn whether or not people WANT a
computational model of human behavior any more than it gave a damn
whether or not people wanted a heliocentric Solar System.
> I am always suspicious of any academic activity which has to request
> that it becomes a philosophical no-go area. I know of no other area of
> activity which is so dependent on such a wide range of unwarranted
> assumptions.
AI is founded on only one basic assumption, that there are no models of
computation more powerful (in a well defined sense) than the Turing
machine, and in particular that the brain is no more powerful a
computational mechanism than the Turing machine. If you have some
scientific evidence that this is a false assumption, please put it on
the table.
> My point was that MENTAL determinism and MORAL responsibility are
> incompatible. I cite the whole ethos of Western (and Muslim? and??)
> justice as evidence.
Western justice *presupposes* moral responsibility, but that in no way
serves *as evidence for* moral responsibility. Even if there is no moral
responsibility, there will still always be responsibility as a causal
agent. If a faulty electric blanket starts a fire, the blanket is not
morally responsible. That wouldn't stop anyone from unplugging it to
prevent future fires.
> If AI research has to assume something which undermines fundamental
> values, it better have a good answer beyond academic freedom, which
> would also justify unrestricted embryo research, forced separation of
> twins into controlled upbringings, unrestricted use of pain in learning
> research, ...
What sort of howling non sequitur is this supposed to be? Many people
feel that Darwinian evolution "has to assume something which undermines
fundamental values", but that isn't an excuse to hide one's head in the
sand and ignore evolution, or to cut funding for research into
evolutionary biology.
> > I regard artificial intelligence as an excellent scientific approach
> > to the pursuit of this ideal . . . one which enables me to test
> > flights of my imagination with concrete experimentation.
> I don't think a Physicist or an experimental psychologist would agree
> with you. AI is DUBIOUS, because so many DOUBT that anyone in AI has an
> elaborated view of truth and falsehood in AI research. So tell me, as
> a scientist, how we should judge AI research? In established
> sciences, the grounds are clear. Certainly, nothing in AI to date
> counts as a controlled experiment, using a representative population,
> with all irrelevant variables under control. Given the way AI programs
> are written, there is no way of even knowing what the independent
> variable is, and how it is being driven. I don't think you know what
> experimental method is, or what a clearly formulated hypothesis is
> either. You lose your science badge.
Well, what kind of AI research are you looking to judge? If you're
looking at something like SOAR or ACT*, which claim to be computational
models of human intelligence, then comparisons of the performance of the
architecture with data on human performance in given task domains can be
(and are) made.
If you are looking at research which attempts to perform tasks we
usually think of as requiring "intelligence", such as image
understanding, without claiming to be a model of human performance of
the task, then one can ask to what extent does the work capture the
underlying structure of the task? how does the approach scale? how
robust is it? and any of a number of other questions.
> Don't you appreciate that free will (some degree of choice) is
> essential to humanist ideals? Read about the Renaissance which spawned
> the Science on whose shirt tails AI rides. Perhaps then you will
> understand your intellectual heritage.
Mr. Cockton, it is more than a little arrogant to assume that anyone who
disagrees with you is some sort of unread, unwashed social misfit, as
you do in the above quote and the earlier quote about the level of
social interaction of AI researchers. If you want your concerns about AI
taken seriously, then come down off your high horse.
Karl Kluge (kck@g.cs.cmu.edu)
People have opinions, not organizations. Ergo, the opinions expressed above
must be mine, and not those of CMU.
------------------------------
Date: 30 May 88 08:34:58 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Bad AI: A Clarification
In article <1242@crete.cs.glasgow.ac.uk> Gilbert Cockton blurts:
>Mindless application of the computational paradigm to
> a) problems which have not yielded to stronger methods
> b) problems which no other paradigm has yet provided any understanding of.
This is poorly expressed and misleading. Between "problems" and "which" insert
"concerning human existence". As it stands, it looks like I want to withdraw
encouragement from ALL computer research. Apologies to anyone who's taken this
seriously enough to follow up, or was just annoyed (but you shouldn't be anyway).
Bad AI is research into human behaviour and reasoning, usually conducted by
mathematicians or computer scientists who are as well-qualified for the study
of humanity as is an archaeologist with a luminous watch for the study of
radiation (of course I understand radiation, I've got a luminous watch,
haven't I? ;-))
AI research seems to fall into two groups:
a) machine intelligence;
b) simulation of human behaviour.
No problem with a), apart from the use of the now vacuous term "intelligence",
which psychometricians have failed miserably to pin down. No problem with b)
if the researcher has a command of the study of humanity, hence the
respectability of computational modelling in psychology. Also, mathematicians
and computer scientists have no handicaps, and many advantages when the human
behaviour in b) is equation solving, symbolic mathematics, theorem proving and
configuring VAXES. They are domain experts here. Problems only arise when they
confuse their excellent and most ingenious programs with human reasoning.
1) because maths and logic have little to do with normal everyday reasoning
(i.e. most reasoning is not consciously mathematical, symbolic,
denotational, driven by inference rules). Maths procedures are not
equivalent to any human reasoning. There is an overlap, but it's small.
2) because they have no training in the difficulties involved in studying
human behaviour, unlike professional psychologists, sociologists,
political scientists and economists. At best, they are informed amateurs,
and it is sad that their research is funded when research in established
disciplines is not. Explaining this political phenomenon requires a simple
appeal to the hype of "pure" AI and the gullibility of its sponsors, as
well as to the honesty of established disciplines who know that coming to
understand ourselves is difficult, fraught with methodological problems.
Hence the appeal of the boy scout enthusiasm of the LISP hacker.
So, the reason for not encouraging AI is twofold. Firstly, any research which
does not address human reasoning directly is either pure computer science, or
a domain application of computing. There is no need for a separate body of
research called AI (or cybernetics for that matter). There are just
computational techniques. Full stop. It would be nice if they followed
good software engineering practices and structured development methods as
well. Secondly, where research does address human reasoning directly, it
should be under the watchful eye of competent disciplines. Neither mathematics
nor computer science is a competent discipline. Supporting "pure" AI research
by logic or LISP hackers makes as much sense as putting a group of historians,
anthropologists and linguists in charge of a fusion experiment. The word is
"skill". Research requires skill. Research into humanity requires special
skills. Computer scientists and mathematicians are not taught these skills.
When hardware was expensive, it made sense to concentrate research using
computational approaches to our behaviour. The result was AI journals,
AI conferences, and a cosy AI community insulated from the intellectual
demands of the real human disciplines. I hope, with MacLisp and all
the other cheap AI environments, that control of the computational
paradigm is lost by the technical experts and passes to those who
understand what it is to study ourselves. AI will disappear, but the
work won't. Indeed it will get better, and having to submit to an AI
conference rather than a psychology or related conference (for research
into ourselves), or a computing or application area conference (for
machine 'intelligence') will be a true reflection of the quality of the work.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: 30 May 88 08:42:35 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: Free Will & Self Awareness
In article <1209@cadre.dsl.PITTSBURGH.EDU> Gordon E. Banks writes:
>>Are there any serious examples of re-programming systems, i.e. a system
>>that redesigns itself in response to punishment.
>
>Certainly! The back-propagation connectionist systems are "punished"
>for giving an incorrect response to their input by having the weight
>strengths leading to the wrong answer decreased.
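To make the quoted mechanism concrete, here is a minimal sketch of such an
error-driven weight update for a single unit (the delta rule, the one-layer
case of back-propagation; my own illustrative Python, not anyone's actual
network code):

    # Delta-rule "punishment": weights that pushed the output toward a
    # wrong answer are weakened; those that pushed the right way grow.
    def train_step(weights, inputs, target, lr=0.1):
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output          # the "punishment" signal
        return [w + lr * error * x for w, x in zip(weights, inputs)]

    weights = [0.5, -0.3]
    for _ in range(50):                  # repeated correction
        weights = train_step(weights, [1.0, 1.0], target=1.0)
    print(weights)   # the unit now outputs roughly 1.0 on input [1, 1]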
I was expecting this one. What a marvellous way with words has mens
technica. To restore at least one of the many subtle connotations of
the word "punishment", I would ask
Are there any serious examples of resistant re-programming systems, i.e. a
system that redesigns itself only in response to JUST punishment, but
resists torture and other attempts to coerce it into falsehood?
I suspect the next AI pastime will be lying to the Connectionist
machine (Hey, I've got this one to blow raspberries when it sees the
word "Reagan", and that one now adds 2 and 2 to get 5!).
Who gets the key to these machines? I'm already having visions of
Luddite workers corrupting their connectionist robots :-) Will the
first machine simulation be of some fundamentalist religious fanatic?
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: 30 May 88 09:29:38 GMT
From: TAURUS.BITNET!shani@ucbvax.berkeley.edu
Subject: Re: More Free Will
In article <532@wsccs.UUCP>, dharvey@wsccs.BITNET writes:
> If free will has created civilization as we know it, then it must be
> accepted with mixed emotions. This means that Hitler, Stalin, some of
> the Catholic Popes during the middle ages and others have created a
> great deal of havoc that was not good. One of the prime reasons for AI
> is to perhaps develop systems that prevent things like this from
> happening. If we with our free will (you said it, not me) can't seem to
> create a decent world to live in, perhaps a machine without free will
> operating within prescribed boundaries may do a better job. We sure
> haven't done too well.
Oh no! We're back where we started!
Gee! I think that the problem with whether this world is decent or not lies in
your misconception, not with the system. The whole point (which I said more
than once, I think) is that THERE IS NO SUCH THING AS OBJECTIVE GOOD OR
OBJECTIVE EVIL OR OBJECTIVE ANY VALUE AT ALL!!!
because anything that is objective is completely indifferent to our values
and other meanings that we human beings give to things! Therefore, HOW ON
EARTH can AI define things that do not exist?!
Hmm... on second thought, you may use AI to detect self-contradicting ideas
(a toy sketch of such a check follows below), but really, it isn't worth the
money, as you can do that with a bit of common sense... Your whole idea seems
to mean that people will have to accept things only because the machine has
said so... How do you know that the bad guys will not use false machines to
mislead the innocent and poor again?
How about investing in making PEOPLE think, instead?
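For what it is worth, here is the promised toy sketch of that "detect
self-contradicting ideas" capability: brute-force propositional consistency
checking (my own illustrative Python; the beliefs and their encoding are
invented for the example):

    from itertools import product

    def consistent(formulas, variables):
        # True if some truth assignment satisfies every formula at once.
        for values in product([False, True], repeat=len(variables)):
            env = dict(zip(variables, values))
            if all(f(env) for f in formulas):
                return True
        return False

    # "Whatever is a man is accountable", "nothing is accountable",
    # "Socrates is a man": jointly a self-contradicting set of ideas.
    beliefs = [
        lambda e: (not e["man"]) or e["accountable"],
        lambda e: not e["accountable"],
        lambda e: e["man"],
    ]
    print(consistent(beliefs, ["man", "accountable"]))   # -> False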
O.S.
------------------------------
Date: 30 May 88 09:33:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: AI and Sociology
Firstly, thanks very much to Richard O'Keefe for taking time to put
together his posting. It is a very valuable contribution. One
objection though:
In article <1033@cresswell.quintus.UUCP> Richard A. O'Keefe writes:
>But of course AI does no such thing. It merely announces that
>computational approaches to the understanding are part of _its_ territory,
>and that non-computational approaches are not.
This may be OK for Lakatos, but not for me. Potty as some of the
ideas I've presented may seem, they are all well rehearsed elsewhere.
It is quite improper to cut out a territory which deliberately ignores
others. In this sense, psychology and sociology are guilty like AI,
but not nearly so much, as they have territories rather than a
territory. Still, the separation of sociology from psychology is
regrettable, but areas like social psychology and cognitive sociology
do bridge the two, as do applied areas such as education and management.
Where are the bridges to "pure" AI? Answer that if you can.
The place such arguments appear most is in curriculum theory (and
also some political theory, especially Illich/"Tools for Conviviality"
and democrats concerned about technical imperialism). The argument
for an integrated approach to the humanities stems from the knowledge
that academic disciplines will always adopt a narrow perspective, and
that only a range of disciplines can properly address an issue. AI can be
multidisciplinary, but it is, for me, unique in its insistence on a single
paradigm which MUST distort the researcher's view of humanity, as well as the
research consumer's view on a bad day. Indefensible.
Some sociologists have been no better, and research here has also lost support
as a result. I do not subscribe to the view that everything has nothing but a
social explanation. Certainly the reason the soles stay on my shoes has
nothing much to do with my social context. Many societies can control the
quality of their shoe production, but vary on nearly everything else.
Undoubtedly my mood states and my reasoning performance have a physiological
basis as amenable to causal doctrines as my motor car. But I am part of a
social context, and you cannot fully explain my behaviour without appeal to it.
Again, I challenge AI's rejection of social criticisms of its paradigm. We
become what we are through socialisation, not programming (although some
teaching IS close to programming, especially in mathematics). Thus a machine
can never become what we are, because it cannot experience socialisation in the
same way as a human being. Thus a machine can never reason like us, as it can
never absorb its model of reality in a proper social context. Again, there are
well documented examples of the effect of social neglect on children. Machines
will not suffer in the same way, as they benefit only from programming, and
not from all the forms of human company. Anyone who thinks that programming is social
interaction is really missing out on something (probably social interaction :-))
RECOMMENDED READING
Jerome Bruner on MACOS (Man: A Course of Study), for the reasoning
behind interdisciplinary education.
Skinner's "Beyond Freedom and Dignity" and the collected essays in
response to it, for an understanding of where behaviourism takes you
("pure" AI is neo-behaviourist, it's about little s-r modelling).
P. Berger and T. Luckmann's "The Social Construction of Reality" and
E. Goffman's "The Presentation of Self in Everyday Life" for the social
aspects of reality.
Feigenbaum and McCorduck's "Fifth Generation" for why AI gets such a bad name
(Apparently the US invented computers single-handed, presumably while John
Wayne was taking the Normandy beaches in the film :-))
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: 30 May 88 09:51:15 GMT
From: shani%TAURUS.BITNET@jade.berkeley.edu
Reply-to: <shani%TAURUS.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Re: AIList Digest V7 #4 [bwk@mitre-bedford.arpa: Re: Free
Will & Self
[Barry writes]
> Assume that I possess a value system which permits me to rank my
> personal preferences regarding the likely outcome of the courses
> of action open to me. Suppose, also, that I have a (possibly crude)
> estimate of your value system. If I were myopic (or maybe just stupid)
> I would choose my course of action to maximize my payoff without regard
> to you. But my knowledge of your value system creates an interesting
> opportunity for me. I can use my imagination to conceive a course
> of action which increases both of our utility functions. Free will
> empowers me to choose a Win-Win alternative. Without free will, I am
> predestined to engage in acts that hurt others. Since I disvalue hurting
> others, I thank God that I am endowed with free will.
>
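(Taken at face value, the procedure Barry describes is a small computation.
A toy rendering, in my own Python with invented action names and payoff
numbers: the myopic rule maximizes my payoff alone, while the win-win rule
restricts choice to actions that leave neither party worse off.)

    # Each action maps to (my_utility, your_estimated_utility);
    # "status quo" is the baseline against which "worse off" is judged.
    actions = {
        "status quo":  (0.0, 0.0),
        "exploit you": (5.0, -4.0),
        "cooperate":   (3.0, 2.0),
    }

    # Myopic choice: maximize my payoff without regard to you.
    myopic = max(actions, key=lambda a: actions[a][0])

    # Win-win choice: only actions that hurt neither of us, then the
    # best joint outcome among them.
    acceptable = [a for a, (mine, yours) in actions.items()
                  if mine >= 0.0 and yours >= 0.0]
    win_win = max(acceptable, key=lambda a: sum(actions[a]))

    print(myopic)    # exploit you
    print(win_win)   # cooperate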
Like always... the perfect Lawful Good claim (did Mr. Gygax think of you
when he wrote the section about LG? :-) )
Oh Barry boy! Can't you stop mixing the objective facts with your OWN PERSONAL
values? Why do you insist on imposing your own point of view on everything?
Now... I suggest that you sit down and make a list of all the things you
disvalue and that people still sometimes do. Out of the despair, maybe you will
finally realise that your point of view is but one of more than 4 billion points
of view. There is nothing wrong with that! Your point is perfect and does not
contradict common sense or anything, but please! It is your point of view,
according to your values! Others who do not disvalue hurting as you do are
still as free-willing as you are. Besides, suppose you had a value system
which you were absolutely sure was good for everyone; don't you think that
by giving it to people you would hurt them? All in all, the name of the
game is 'find your own meaning'....
O.S.
------------------------------
Date: 30 May 88 10:41:52 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: What Next!
>specifically, Marxist political science - is the key to making progress.
>And I expect Gilbert will be happier.
Pat, this is so ignorant. I doubt that you have much command of political
thought at all. Marxism regards itself as a science. Following Engels, it
became deterministic. Cybernetics is strong in Russia, largely because it
fits in so well with Anti-Duhring style philosophy of science. I do not.
The only intellectual connections between the ideas I have repeated on
social aspects of reality and Marx are:
i) Early dialectical materialism in the "German Ideology",
certainly not beloved by traditional Marxist-Leninists.
ii) Marx, with Durkheim and Weber, was one of the founding
fathers of sociology. Thus any sociology has some connection
with him, just as logic can't escape Principia Mathematica.
For a proud defender of logic, this was worse than no response at all.
Logic is anything but a bourgeois illusion. It is an academic artefact
which I find hard to link to ownership of the mode of production.
How many factory owners are logicians? And why not?
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
Date: Mon, 30 May 88 12:13:20 BST
From: Gilbert Cockton <gilbert%cs.glasgow.ac.uk@NSS.Cs.Ucl.AC.UK>
Subject: Immortality, Home-made BDI states and Systems Theory (3
snippets)
In article <8805250631.AA09406@BLOOM-BEACON.MIT.EDU>
>Step (4) gives us virtual immortality, since whenever our current
>intelligence-carrying hardware (human body? computer? etc.) is about to
>give up (because of a disease, old age ...) we can transfer the
>intelligence to another piece of hardware. there are some more delicate
>problems here, but you get the idea.
Historically, I think there's going to be something in this. There is
no doubt that we can embody in a computer program something that we
would not sensibly embody in a book. In this sense, computers are
going to alter what we can pass on from one generation to another.
But there are also similarities with books. Books get out of date, so
will computer programs. As I've said before, we've got one hell of a
maintenance problem with large knowledge-based programs. Is it really
going to be more economical than people? See the latest figures on
automation in the car industry, where training costs are going through
the roof as robots move from single functions to programmable tasks.
GM's most heavily automated plant (Hamtramck, Michigan) is less productive
than a much less automated one at Fremont, CA (Economist, 21/5/88, pp. 103-104).
In art. <8805250631.AA09382@BLOOM-BEACON.MIT.EDU>
>
>Let my current beliefs, desires, and intentions be called my BDI state.
These change. Have you no responsibility for the way they change? Do
you just wake up one morning a different person, or are you consciously
involved in major changes of perspective? Or do you never change?
In article <19880527050240.9.NICK@MACH.AI.MIT.EDU>
>Gilbert Cockton: Even one reference to a critique of Systems Theory would be
>helpful if it includes a bibliography.
I recommended the work of Anthony Giddens (Kings College, Cambridge).
There are sections on systems theory in his "Studies in Social and Political
Theory" (either Hutchinson or Macmillan or Polity Press, can't remember which).
A book which didn't impress me a long time ago was Apple's "Ideology
and Education" or something like that. He's an American Marxist, but
I remember references to critiques of systems theory in between his
polemic. I'll try to find a full reference.
Systems theory is valuable compared to classical science. To me,
systems theory and simulation as a scientific method go hand in hand.
It falls down in its overuse of biological concepts (which, with
mathematics, represent the two scientific influences on many post-war
approaches to humanity; sociobiological game theory, ugh!).
Another useful book is David Harvey's "Science, Ideology and Human
Geography" (Longman?), which followed his systems theory/postivist
"Explanation in Geography". You'll see both sides of systems theory
in his work.
Finally, I am surprised at the response to my original comments on
free will and AI. The point is still being missed that our
current society needs free will, whether or not it can be established
philosophically that free will exists. But I have changed my
mind about "AI's" concern about the issue, both in the orderliness
of John McCarthy's representation of a 1969 paper (missed it due
to starting secondary school :-)), and in Drew McDermott's awareness
of the importance of the issue and its relation to modern science
and dualism, plus all the other traffic on the issue. I only wish
AI theorists could get to grips with the socialisation question as well,
and understand more sympathetically why dualism persists (by law in the case
of the UK school curriculum).
Hope you're enjoying all this as much as I am :-)
Gilbert.
------------------------------
Date: 30 May 88 12:01:44 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: The Social Construction of Reality
In article <218@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
>If you have any questions about the game the consensus reality
>freaks are playing, you can send me e-mail.
Caught out at last! Mail this CIA address for details of the Marxist attempt
to undermine the All-American AI net. Digest no relativist epistemologies
without clearance from this mailbox. All is revealed by Nancy Reagan's
astrologer, a secondhand version of BACON (St. Francis to the scientists).
Come on, T. Willy, you just don't like social constructs, that's all. :-)
Of course there's a resistance in the physical world to any old social
construct. But blow me, aren't people different? Take Kant, a German
while alive but a horse when dead. Guilty under the Laws of Physics. Why,
even digestibility seems to be a social construct, just look at all that
anti-Americanism over the good old US hamburger. And as for language, well,
that's just got no respect at all for the laws of physics. No wonder those
damned commies get to twist the true All-American meaning out of words.
Remember. Don't go out at night without positivism in your pocket.
Report all subcultural tendencies to the appropriate authority.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert
The proper object of the study of humanity is humans, not machines
------------------------------
End of AIList Digest
********************