AIList Digest             Monday, 2 May 1988       Volume 6 : Issue 88 

Today's Topics:
Opinion - AI Goals & Free Will & Sociology

----------------------------------------------------------------------

Date: 26 Apr 88 15:06:48 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen Smoliar)
Subject: Re: Expert Systems in the Railroad Industry.

In article <73@edai.ed.ac.uk> ceb@edai (Colin Bridgewater) writes:
> Just to get my two penn'orth in, whatever happened to dynamic programming
>for scheduling, cargo-space optimisation and inventory control etc ? This
>well-worn technique is quite adequate for the majority of purposes envisaged
>by EL. I mention this to raise a wider issue which was possibly not in the
>mind of the original sender, namely that of the desire to throw ever more
>complex solution procedures at the simplest of problems....
>
> Why should we want to implement an expert system, when adequate techniques
>exist already ? That is, is the application of expert system technology
>appropriate to the magnitude and complexity of the problem ? Should we be
>advocating the application of such 'high-tech' solutions to all and sundry ?
>I have no doubt that such systems could be made to work, don't get me wrong
>on that, I just question whether the level of technology required in order to
>do so is justified. Surely it is better to apply the simplest solutions when-
>ever possible.

There is one issue of "appropriate technology" which appears to have been
overlooked in Colin's argument; and that is the matter of computational
tractability. In many practical domains, while it is certainly possible
to build mathematical models which may then be processed by dynamic
programming, those models are too unwieldy to yield much useful information
in any reasonable period of time. Often what makes an expert an expert is
the ability to recognize that a complex general-purpose model may be
considerably simplified through abstraction without significantly sacrificing
fidelity. The mathematical nature of the model, in and of itself, cannot
provide us with information about how to perform such abstractions. That is
often why we need experts; and, in such cases, if that expertise can be
properly modeled by an expert system, a computationally intractable approach
can be turned into a practical one.
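
[A minimal sketch of the tractability point, assuming a toy two-machine
makespan problem: an exact formulation enumerates all 2^n assignments of
jobs to machines, while an abstraction that treats identical jobs as
interchangeable - the kind of simplification an expert supplies - reduces
the search to the product of the class counts without changing the optimum.
The problem, function names, and figures below are illustrative only, not
part of the original discussion.]

# Toy illustration: exact vs. abstracted formulations of a two-machine
# makespan-minimisation problem. All names here are illustrative.

from collections import Counter
from itertools import product

def makespan_exhaustive(jobs):
    """Exact model: try every assignment, i.e. 2 ** len(jobs) schedules."""
    total = sum(jobs)
    best = total
    for assignment in product((0, 1), repeat=len(jobs)):
        load_a = sum(d for d, m in zip(jobs, assignment) if m == 0)
        best = min(best, max(load_a, total - load_a))
    return best

def makespan_abstracted(jobs):
    """Abstracted model: jobs are grouped into (duration, count) classes,
    so only the number taken from each class matters."""
    classes = sorted(Counter(jobs).items())          # e.g. [(3, 20), (5, 20)]
    total = sum(jobs)
    best = total
    for picks in product(*[range(count + 1) for _, count in classes]):
        load_a = sum(duration * k for (duration, _), k in zip(classes, picks))
        best = min(best, max(load_a, total - load_a))
    return best

if __name__ == "__main__":
    jobs = [3] * 20 + [5] * 20        # 40 jobs, but only two duration classes
    # The exhaustive model would have to examine 2 ** 40 schedules; the
    # abstracted one examines only 21 * 21 combinations and yields the same
    # optimum (a makespan of 80 for this data).
    print(makespan_abstracted(jobs))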

------------------------------

Date: 28 Apr 88 09:15:47 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net (Simon Brooke)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
(flaming against an article submitted by Gilbert Cockton)

> "Rigorous sociology/contemporary anthropology"? Ha ha ha ha
>ha ha ha ha, &c.

What do the third and subsequent iterations of the symbol 'ha' add to the
meaning of this statement? Are we to assume the author doubts the rigour
of Sociology, or the contemporary nature of anthropology?

>And some of the most interesting investigations of topics once dominated
>by the humanities, such as theory of mind, are taking place in AI labs.

This is, of course, true - some of it is. Just as some of the most
interesting advances in Artificial Intelligence take place in Philosophy
and Linguistics departments. This is what one would expect, after all; for
what is AI but an experimental branch of Philosophy?

>sociologists produce a great deal of nonsense, and indeed the social
>"sciences" in toto are afflicted by conceptual confusion at every
>level. Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,

Gosh! Isn't it nice, now and again, to read the words of someone whose
knowledge of a field is so deep and thorough that they can some it up in
one short paragraph!

It is, of course, true that some embarassingly poor work is published in
Sociology, just as in any other discipline; perhaps indeed there is more
poor sociology, simply because sociology is more difficult to do well than
any other type of study - most of the phenomena of sociology occurs in the
interaction between individuals, and this interaction cannot readily be
accessed by an observer who is not party to the interaction. Yet if you
are part of the interaction, it will not proceed as it would with someone
else...

Again, sociological investigation, because it looks at us in a
rigorous way which we are not used to, often leads to conclusions which
seem counter-intuitive - they cut through our self-deceits and hypocrisies.
So we prefer to abuse the messenger rather than listen to the message.

For the rest:

He who knows not, and knows not that he knows not......

A dictum which I will conveniently forget next time I feel like shooting
my mouth off.

** Simon Brooke *********************************************************
* e-mail : simon@uk.ac.lancs.comp *
* surface: Dept of Computing, University of Lancaster, LA 1 4 YW, UK. *
* *
* Thought for today: Most prologs chew everything very slowly anyway, *
***just being polite I guess*********************************************

------------------------------

Date: 28 Apr 88 10:53:57 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>By comparison, sociologists produce a great deal of nonsense, and indeed the
>social "sciences" in toto are afflicted by conceptual confusion at every
>level. Ideologues, special interest groups, purveyors of outworn
>dogma (Marxists, Freudians, et alia) continue to plague the social
>sciences in a way that would be almost unimaginable in the sciences,
>even in a field as slippery, ill-defined, and protean as AI.
There are more of them :-) But if you looked at the work of U.K. sociologists
like Townsend and Halsey on Age, Poverty, Health and social mobility, you might
find something less concerned with theory and more with rigorous investigation.

I find the conflict in the humanities and behavioural "sciences" far more healthy
than the uncritical following of fashions of paradigms in science. Whilst the
former areas encourage an understanding of methodology and epistemology, the
sciences assume their core methods are correct and get on with it. A lot boils
down to personality (Liam Hudson, Contrary Imaginations). The reason that
ideology and methodological pluralism would be unimaginable in the sciences may
have something to do with the nature (and please, not the LACK) of the
scientific imagination compared to the humanist imagination. Note that
materialism, determinism, statistical inference and positivism are no less
outworn dogmas and ideologies than are Marxism, Freudianism, etc. My
experience is that someone from a humanist critical tradition will have a better
understanding of the assumptions behind methodologies than will scientists and
even more so, engineers. Out of such understandings came the rejection of first
Medieval Catholicism, then Seventeenth Century materialism, Twentieth Century
Behaviourism and Systems Theory, and now the "pure" AI position. Assumptions
behind AI are similar to many which have been around since the warm humility of
Renaissance Humanism cooled into the mechanical fascination of the Baroque.

>So talk about "philistine technical vacuums" if you wish, but
>remember that by and large people know which emperor has no clothes.
So who is it who is deciding strategy for most Western social programmes?
Clothes or no clothes, social administrators have an empire which extends
beyond academia and many of them draw on sociological concepts and results in
their work. It is in their complete ignorance of socialisation that AI workers
fall down in their study of machine learning. Most human learning takes
place in a social context, with only the private interests of marginal
adolescents and adults taking place in isolation - but even here they draw on
problem-solving capabilities which were nurtured in a social context. The
starkest examples of the nature and role of primary socialisation come from
those few unfortunate children who have been isolated from birth. They are
savage animals.
If parents had to interact with their children in FOPC or connectionist inputs,
the same would be true, until the children were taken into care.

>Also, if you want to say "one dead end after another," you might adduce actual
>dead ends pursued by AI research and contrast them with non-dead ends.

DEAD ENDS
Computational Linguistics, continuous speech understanding, intelligent vision,
reliable expert systems which do not require endless maintenance, human
problem solving, the physical symbol system hypothesis, knowledge representation
formalisms using computable models. Largely areas where some other paradigm
within another discipline can make progress as the lead weight of computability
is not suffocating research. Generally due to knowledge representation problems
- even the Novel has problems here :-) If you can't write it in a text-book
(e.g. clinical diagnosis, teaching techniques, advocacy), you'll never get it
on a machine - impossible in superset (NL) => impossible in subset (FOPC,
computationally denotable/constructable). A problem in AI is trying to solve
other people's problems, where those other people know more about the problem
than you ever will - they live it day in day out.

NON-DEAD ENDS
Much work done under the name of AI is good - low-to-medium level vision,
restricted natural language, knowledge-based programming formalisms,
theorem-proving and highly-constrained technical planning problems. Indeed,
most technical knowledge, being artificial and symbolic from the outset, is an
obvious candidate for AI modelling and there is nothing in the humanist
tradition which would doubt the viability of this work. Here knowledge
representation is easy, because the domain will generally be so boring (but
economically/environmentally/security critical) that no-one wants to argue
about it. Much technical expertise executed by humans is best suited to
machines. In HCI research, sensible work on intelligent (=supportive) user
interfaces is getting somewhere, but then coming up with a computer model of a
computer system is hardly a major challenge in knowledge representation
techniques. Coming up with a computer model of a user is also possible, as long
as we don't try to model anything controversial, but stick to observable
behaviour and user-negotiated input.

The main objection to AI is when it claims to approach our humanity.

It cannot.

------------------------------

Date: Sat, 30 Apr 88 23:39:49 EDT
From: Marvin Minsky <MINSKY@AI.AI.MIT.EDU>
Subject: AIList V6 #86 - Philosophy

Yamauchi, Cockton, and others on AILIST have been discussing freedom
of will as though no AI researchers have discussed it seriously. May
I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind. I
claim to have a good explanation of the free-will phenomenon. I agree
with Gilbert Cockton that it is not the lack of answers that should be
criticised, but the contemporary ignorance of the subject. (As for
why my own answer evaded philosophers for millennia, my hypothesis is
that philosophers have not been very insightful about actual
psychological phenomena - which is why it had to wait for Freud - or,
perhaps, Poincare - to produce convincing discussions about the
importance of unconscious thinking.)

Cockton also sagely points out that
a rule-based or other mechanical account of cognition and decision
making is at odds with the doctrine of free will which underpins most
Western morality. ... Scientists who seek moral, ethical,
epistemological or methodological vacuums are only marginalising
themselves into positions where social forces will rightly constrain
their work.

I only disagree with Cockton's insertion of "rightly". Like
E.O.Wilson, I prefer to follow ideas even where they lead to potentially
unpopular conclusions. Indeed, I feel it is only proper for those
social forces to try to constrain my work. When researchers feel
constrained to censor their own work, everyone may end up the poorer.

I'm not even sure this is a disagreement. A closer look might show that
this is what Cockton is actually saying, too.

------------------------------

Date: 1 May 88 06:50:41 GMT
From: TAURUS.BITNET!shani@ucbvax.Berkeley.EDU
Subject: Re: Free Will & Self-Awareness

In article <912@cresswell.quintus.UUCP>, ok@quintus.BITNET writes:

> compatible with strong determinism. (The ones I've seen are riddled with
> logical errors, but most philosophical arguments I've seen are.)

That is correct, but there are a few which aren't, and that is mainly because
they managed to avoid self-contradictions and mixing of concepts...

O.S.

------------------------------

Date: 30 Apr 88 16:37:20 GMT
From: uflorida!novavax!maddoxt@gatech.edu (Thomas Maddox)
Subject: Social science gibber [Was Re: Various Future of AI

Summary: Here we have a prime specimen of the species
Keywords: AI, Sociology, manners.

In article <502@dcl-csvax.comp.lancs.ac.uk> simon@comp.lancs.ac.uk
(Simon Brooke) writes:
>In article <445@novavax.UUCP> maddoxt@novavax.UUCP (Thomas Maddox) writes:
>
>> "Rigorous sociology/contemporary anthropology"? Ha ha ha ha
>>ha ha ha ha, &c.
>
>What do the third and subsequent iterations of the symbol 'ha' add to the
>meaning of this statement? Are we to assume the author doubts the rigour
>of Sociology, or the contemporary nature of anthropology?

Yeah, I think you could assume both, pal. Repeated "ha"s
added for emphasis, in case some lamebrain (sociologist? if the shoe
fits . . . ) wandered through and needed help.

>>And some of the most interesting investigations of topics once dominated
>>by the humanities, such as theory of mind, are taking place in AI labs.
>
>This is, of course, true - some of it is. Just as some of the most
>interesting advances in Artificial Intelligence take place in Philosophy
>and Linguistics departments. This is what one would expect, after all; for
>what is AI but an experimental branch of Philosophy?

"AI but an experimental branch of Philosophy," eh? Let's see,
now: according to that view, I believe *every* branch of what we
usually call science could be construed in this way . . . or not. In
short, the statement is almost perfectly empty. Or maybe the secret
is in the use of the word "Philosophy." That must be a special variant
of common or run-of-the-mill "philosophy," capitalized for occult reasons
known only to its initiates.
Also, I have no quarrel with these "most interesting advances"
that are coming out of philosophy and linguistics departments.
Philosophy and linguistics, you might notice, *not* sociology.

Let's read on. He's quoting me now:

>>sociologists produce a great deal of nonsense, and indeed the social
>>"sciences" in toto are afflicted by conceptual confusion at every
>>level. Ideologues, special interest groups, purveyors of outworn
>>dogma (Marxists, Freudians, et alia) continue to plague the social
>>sciences in a way that would be almost unimaginable in the sciences,

Then he returns to his own lovely prose:

>Gosh! Isn't it nice, now and again, to read the words of someone whose
>knowledge of a field is so deep and thorough that they can some it up in
>one short paragraph!

"Some it up in one short paragraph"? No, really, I can't
"some" it up; don't even know what doing so means. However, if you
are trying in your inept fashion to say, "sum it up," thanks. I
thought it was a pretty good paragraph myself.

>It is, of course, true that some embarassingly poor work is published in
>Sociology, just as in any other discipline; perhaps indeed there is more
>poor sociology, simply because sociology is more difficult to do well than
>any other type of study - most of the phenomena of sociology occurs in the
>interaction between individuals, and this interaction cannot readily be
>accessed by an observer who is not party to the interaction. Yet if you
>are part of the interaction, it will not proceed as it would with someone
>else...

We're told "most of the phenomena . . . occurs" [subject-verb
agreement], further that "this interaction cannot readily be accessed
by an observer" [unnecessary jargon borrowed from another field and
used for the appearance of scientific rigor]. I guarantee it, this
guy *must* be a social scientist, sociologist or not.

>Again, sociological investigation, because it looks at us in a
>rigorous way which we are not used to, often leads to conclusions which
>seem counter-intuitive - they cut through our self-deceits and hypocrisies.
>So we prefer to abuse the messenger rather than listen to the message.

"Sociological investigation . . . looks at us in a rigorous way
which we are not used to," the man says. On his evidence, it's
through a glass darkly, which, alas, we are all quite used to. The
notion of sociology as a bringer of ugly truths is particularly
amusing, though, and I thank him for it.
I should add that I felt some remorse for my slap at
sociology, because the essential plight of the social sciences is
quite desperate. However, when I read the message quoted above, my
remorse evaporated. I would simply add that many sociologists,
whatever the ultimate value of their work, *can* read, write, and
think.
Also, present polemics aside, my original diatribe came as a
response to a particularly self-satisfied posting from (apparently) a
sociologist attacking AI research as uninformed, puerile, &c. It
seemed (and seems) to me that anyone in such an inherently weak field
should be rather careful in his criticism: he's in the position of a
man throwing bricks at passers-by through his own front window.
So let me reiterate: AI research produces valuable and
interesting work; sociology produces much, much less.

------------------------------

End of AIList Digest
********************
