AIList Digest Volume 7 Issue 025
AIList Digest            Friday, 10 Jun 1988       Volume 7 : Issue 25 

Today's Topics:

Philosophy:
The Social Construction of Reality
Bad AI: A Clarification
Emotion (was Re: Language-related capabilities)
Me and Karl Kluge (no flames, no insults, no abuse)
Constructive Question (Texts and social context)
who else isn't a science
Consensus and Reality
human-human communication
The Social Construction of Reality
construal of induction
Hypostatization

----------------------------------------------------------------------

Date: 7 Jun 88 10:52:20 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Re: The Social Construction of Reality

In article <450001@hplsdar.HP.COM> jdg@hplsdar.HP.COM (Jason Goldman) writes:
>Ad Hominem
Et alia sunt?

When people adopt a controversial position for which there is no convincing
proof, the only scientific explanation is the individual's ideology. The
dislike of ad hominem arguments among scientists is a sign of their self-imposed
dualism: personality and the environment stop outside the cranium of
scientists, but penetrate the crania of everyone else.

Odi profanum vulgum, et arceo ...
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of humanity is humans, not machines

------------------------------

Date: 7 Jun 88 22:58:28 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Bad AI: A Clarification

In article <1299@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:

>AI research seems to fall into two groups:
> a) machine intelligence;
> b) simulation of human behaviour.
>No problem with a), apart from the use of the now vacuous term "intelligence",

But later you say:

>So, the reason for not encouraging AI is twofold. Firstly, any research which
>does not address human reasoning directly is either pure computer science, or
>a domain application of computing.

Vision? Robotics? Everything that uses computers can be called pure or
applied CS. So what?

>There is no need for a separate body of
>research called AI (or cybernetics for that matter). There are just
>computational techniques. Full stop.

What happened to "machine intelligence"? It *is* a separate (but
not totally separate) body of research. What is the point of arguing
about which research areas deserve names of their own?

BTW, there's no *need* for many things we nonetheless think good.

>It would be nice if they followed good software engineering practices and
>structured development methods as well.

Are you trying to see how many insults can fit into one paragraph?

Are you really trying to oppose "bad AI" or are you opportunistically
using it to attack AI as a whole? Why not criticise specific work you
think is flawed instead of making largely unsupported allegations in
an attempt to discredit the entire field?

------------------------------

Date: 8 Jun 88 03:11:26 GMT
From: mcvax!ukc!its63b!aipna!rjc@uunet.uu.net (Richard Caley)
Subject: Emotion (was Re: Language-related capabilities)


In article <700@amethyst.ma.arizona.edu>, K Watkins writes:
> If so, why does the possibility of sensory input to computers make so much
> more sense to the AI community than the possibility of emotional output? Or
> does that community see little value in such output? In any case, I don't see
> much evidence that anyone is trying to make it more possible. Why not?

Eduard Hovy's thesis "Generating Natural Language Under Pragmatic
Constraints" (Yale, 1987) describes his attempt to produce a system
which uses certain "emotional" factors to select the contents and
form of a text.

The factors considered are the speaker's attitude towards the information
to be conveyed, the speaker's relationship with the audience, and the
audience's views.

------------------------------

Date: 8 Jun 88 03:45:21 GMT
From: mcvax!ukc!its63b!aipna!rjc@uunet.uu.net (Richard Caley)
Subject: Re: Me and Karl Kluge (no flames, no insults, no abuse)


In <1312@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes
> OK then. Point to an AI text book that covers Task Analysis? Point to
> work other than SOAR and ACT* where the Task Domain has been formally
> studied before the computer implementation?

Natural language processing. Much (by no means all) builds on the work
of some school of linguistics.

> How the hell can you model what you've never studied? Fiction.

One stands on the shoulders of giants. Nobody has time to research
their subject from the ground up.

> Wonder that's why some
> AI people have to run humanity down. It improves the chance of ELIZA
> being better than us.

Straw man. I've never heard anyone claim ELIZA was better than
_anything_. Nor do I see 'AI people' running humanity down. It may be
that you consider your image of human beings 'better' than the one you
see AI as putting forward; I'm sure that many 'AI people' would not
agree.

> Once again, what the hell can a computer
> program tell us about ourselves?

According to your earlier postings, if (strong) AI were successful it
would tell us that we have no free will, or at least that we cannot assume
we have it. I don't agree with this, but it is _your_ argument, and something
which a computer program could tell us.

> Secondly, what can it tell us that we
> couldn't find out by studying people instead?

What do the theories of physics tell us that we couldn't find out by
studying objects?

> The proper object of the study of humanity is humans, not machines

Well, there go all the physical sciences, botany, music, mathematics . . .

------------------------------

Date: 8 Jun 88 09:38:16 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Re: Constructive Question (Texts and social context)

In article <1053@cresswell.quintus.UUCP> Richard A. O'Keefe writes:
>If I may offer a constructive question of my own: how does socialisation
>differ from other sorts of learning? What is so terrible about learning
>cultural knowledge from a book? (Books are, after all, the only contact
>we have with the dead.)

Books are not the only contact with the dead. Texts are only one
type of archaeological artefact. There's nothing `terrible' about book
learning, although note that the interpretation of religious texts is
often controlled within a social context. Texts are artefacts which
were created within a social context; this is why they date. When you
lose the social context, even if the underlying "knowledge" has not
changed (whatever that could mean), you lose much of the meaning and
become out of step with current meanings. See Winograd and Flores on
the hermeneutic tradition, or any sociological approach to literature
(e.g. Walter Benjamin and Terry Eagleton, both flavours of Marxist if
anyone wants to be warned).

Every generation has to rewrite, not only its history, but also its science.
Text production in science is not all driven by accumulation of new knowledge.

There are many differences between socialisation and book knowledge,
although the relationship is ALMOST a set/subset one. Books are part
of our social context, but private readings can create new contexts
and new readings (hence the resistance to vernacular Bibles by the
Medieval Catholic church). Universities and colleges provide another
social context for the reading of texts. The legal profession provides
another (lucrative) context for "opinions" on the meanings of texts
relative to some situation. This, of course, is one of the many
social injustices from which AI will free us. Lawyers will not
interpret the law for profit, machines sold by AI firms will.

IKBS programs are essentially private readings which freeze, despite
the animation of knowledge via their inference mechanisms (just a
fancy index really :-)). They are only sensitive to manual reprogramming,
a controlled intervention. They are unable to reshape their knowledge
to fit the current interaction AS HUMANS DO. They are insensitive,
intolerant, arrogant, overbearing, single-minded and ruthless. Oh,
and they usually don't work either :-) :-)
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of humanity is humans, not machines

------------------------------

Date: 8 Jun 88 09:58:23 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Re: Bad AI: A Clarification

In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>Are you really trying to oppose "bad AI" or are you opportunistically
>using it to attack AI as a whole? Why not criticise specific work you
>think is flawed instead of making largely unsupported allegations in
>an attempt to discredit the entire field?

No, I've made it clear that I only object to the comfortable,
protected privilege which AI gives to computational models of
Humanity. Any research into computer applications is OK by me, as long
as it's getting somewhere. If it's not, it's because of a lack of basic
research. I contend that there is no such thing as basic research in AI.
Anything which could be called basic research is the province of other
disciplines, who make more progress with less funding per investigation (no
expensive workstations etc.).

I do not think there is a field of AI. There is a strange combination
of topic areas covered at IJCAI etc. It's a historical accident, not
an epistemic imperative.

My concern is with the study of Humanity and the images of Humanity
created by AI in order to exist. Picking on specific areas of work is
irrelevant. Until PDP, there was a logical determinism, far more
mechanical than anything in the physical world, behind every AI model
of human behaviour. Believe what you want to get robots and vision to
work. It doesn't matter, because what counts is the fact that your
robotic and vision systems do the well-defined and documented tasks
which they are constructively and provably designed to do. There is
no need to waste energy here keeping in step with human behaviour.
But when I read misanthropic views of Humanity in AI, I will reply.
What's the problem?
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of humanity is humans, not machines

------------------------------

Date: 8 Jun 88 11:52:11 GMT
From: mcvax!ukc!strath-cs!glasgow!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Re: Bad AI: A Clarification

In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>What is the point of arguing about which research areas deserve names
>of their own?
A lot. Categories imply boundaries. Don't try to tell me that the
division of research into disciplines has no relevance. A lot comes
with those names.
>
>>It would be nice if they followed good software engineering practices and
>>structured development methods as well.
>
>Are you trying to see how many insults can fit into one paragraph?
No. This applies to my research area of HCI too. No-one in UK HCI
research, as far as I know, objects to the criticism that research
methodologies are useless until they are integrated with existing
system development approaches. HCI researchers accept this as a
problem. On software engineering too, HCI will have to deliver its
goods according to established practices. To achieve this, some HCI
research must be done in Computer Science departments in collaboration
with industry. There is no other way of finishing off the research
properly.

You've either missed or forgotten a series of postings over the last
two years about this problem in AI. Project managers want to manage
IKBS projects like existing projects. Organisations have a large
investment in their development infrastructures. The problem with
AI techniques is that they don't fit in with existing practices, nor
are there any mature IKBS structured development techniques (and I
only know of one ESPRIT project where they are being developed).
You must also not be talking to the same UK software houses as I am, as
(parts of) UK industry feel that big IKBS projects are a recipe for
burnt fingers, unless they can be managed like any other software project.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

The proper object of the study of humanity is humans, not machines

------------------------------

Date: 8 Jun 88 19:53:41 GMT
From: gandalf!seth@locus.ucla.edu (Seth R. Goldman)
Subject: Re: who else isn't a science

In article <3c84f2a9.224b@apollo.uucp> Peter Nelson writes:
>
> I don't see why everyone gets hung up on mimicking natural
> intelligence. The point is to solve real-world problems. Make
> machines understand continuous speech, translate technical articles,
> put together mechanical devices from parts that can be visually
> recognized, pick out high priority targets in self-guiding missiles,
> etc. To the extent that we understand natural systems and can use
> that knowledge, great! Otherwise, improvise!

It depends what your goals are. Since AI is a part of computer science
there is naturally a large group of people concerned with finding
solutions to real problems using a more engineering type of approach.
This is the practical end of the field. The rest of us are interested
in learning something about human behavior/intelligence and use the
computer as a tool to build models and explore various theories. Some
are interested in modelling the hardware of the brain (good luck) and
some are interested in modelling the behavior (more luck). It is these
research efforts which eventually produce technology that can be applied
to practical problems. You need both sides to have a productive field.

------------------------------

Date: 9 Jun 88 04:13:21 GMT
From: bungia!datapg!sewilco@umn-cs.arpa (Scot E. Wilcoxon)
Subject: Re: who else isn't a science

In article <3c84f2a9.224b@apollo.uucp> Peter Nelson writes:
...
> I don't see why everyone gets hung up on mimicking natural
> intelligence. The point is to solve real-world problems. Make
...
> etc. To the extent that we understand natural systems and can use
> that knowledge, great! Otherwise, improvise!

The discussion has been zigzagging between this viewpoint and another.
This is the "thought engineering" side, while others have been trying to
define the "thought science" side. The "thought science" definition is
concerned with how carbon-based creatures actually think. The "thought
engineering" definition is concerned with techniques which produce
desired results.

There are many cases where engineering has produced solutions which are
different than the definition provided by an existing technique.
Duplicating the motions of a flying bird or dish-washing human does not
directly lead to our present standards of fixed-wing airplanes and
mechanical dishwashers.

-- Scot E. Wilcoxon  sewilco@DataPg.MN.ORG  {amdahl|hpda}!bungia!datapg!sewilco
Data Progress  UNIX masts & rigging  +1 612-825-2607  uunet!datapg!sewilco

------------------------------

Date: Thu, 9 Jun 88 09:31:41 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Consensus and Reality

In AIList Digest 7.24, Pat Hayes <hayes.pa@Xerox.COM> writes:
PH> A question: if one doubts the existence of the physical world in which
PH> we live, what gives one such confidence in the existence of the other

I can't speak for Simon Brooke, but personally I don't think anyone
seriously doubts the existence of the physical world in which we live.
Something is going on here. The question is, what.

One reason for our present difficulty in reaching consensus in this forum
about what "Reality" is, is that we are using the term in two senses.
The anti-consensus view is that there is an absolute Reality, and that is
what we relate to and interact with. The consensus view is that what we
"know" about whatever it is that is going on here is limited and constrained
in many ways, yet we relate to our categorizations of the world expressing
that "knowledge" as though they were in fact the Reality itself. When a
consensual realist expresses doubt about the existence of something
generally taken to be real, I believe it is doubt about the status of a
mental/social construct, rather than doubt about the very existence of
anything to which the construct might more or less correspond. From one
very valid perspective there is no CRT screen in front of you, only an
ensemble of molecules. Not a very useful perspective for present
purposes. The point is that neither perspective denies the reality of
that to which the other refers as real, and neither is itself that
reality.

What is being overlooked by those who react with such allergic violence
to the notion of consensual reality is that there is a good relationship
between the two senses or understandings of the word "real": namely,
precisely that which makes science an evolving thing. John McCarthy
<JMC@SAIL.Stanford.EDU> has expressed it very well:

JM> Indeed science is a social activity and all information comes in
JM> through the senses. A cautious view of what we can learn would like to
JM> keep science close to observation and would pay attention to the consensus
JM> aspects of what we believe. However, our world is not constructed in
JM> a way that co-operates with such desires. Its basic aspects are far
JM> from observation, the truth about it is often hard to formulate in
JM> our languages, and some aspects of the truth may even be impossible to
JM> formulate. The consensus is often muddled or wrong.

The control on consensus is that our agreements about what is going on
must be such that the world lets us get away with them. But given our
propensity for ignoring (that is, agreeing to ignore) what doesn't fit,
that gives us lots of wiggle room. Cross-cultural and psychological
data abound. For a current example in science, consider all the
phenomena that are now respectable science and that previously were
ignored because they could not be described with linear functions.

But nature too is evolving, quite plausibly in ways not limited to the
biological and human spheres. The universe appears to be less like a
deterministic machine than a creative, unpredictable enterprise. I am
thinking now of Ilya Prigogine's _Order Out of Chaos_. "We must give up
the myth of complete knowledge that has haunted Western science for
three centuries. Both in the hard sciences and the so-called soft
sciences, we have only a window knowledge of the world we want to
describe."
The very laws of nature continue to reconfigure at higher
levels of complexity. "Nature has no bottom line." (Prigogine, as
quoted in Brain/Mind Bulletin 11.15, 9/8/86. I don't have the book at
hand.)

Now perhaps I am misconstruing McCarthy's words, since he starts out
saying:

JM> The trouble with a consensual or any other subjective concept of reality
JM> is that it is scientifically implausible.

Since everything else in that message is consistent with the view
presented here, I believe he is overlooking the relationship between the
two aspects of what is real: the absolute Ding an Sich, and those
agreements that we hold about reality so long as we can get away with
it. In this relationship, consensual reality is not scientifically
implausible; it is, at its most refined, science itself.

JM> It will be even worse if we try to program to regard reality as
JM> consensual, since such a view is worse than false; it's incoherent.

I suggest looking at the following for a system that by all accounts
works pretty well:

Pask, Gordon. 1986. Conversational Systems. A chapter in _Human
Productivity Enhancement_, vol. 1, ed. J. Zeidner. Praeger, NY.

For the coherent philosophy, a start and references may be found in
another chapter in the same book:

Gregory, Dik. 1986. Philosophy and Practice in Knowledge
Representation. (In book cited above).

Winograd & Flores _Understanding Computers and Cognition_ arrive at a
very similar understanding by a different route. (Pask by way of
McCulloch, von Foerster, and his own development of Conversation Theory;
Winograd & Flores by way of Maturana & Varela (students of McCulloch)
and hermeneutics.)

JM> To deal with this matter I advocate a new branch of philosophy I call
JM> metaepistemology. It studies abstractly the relation between the
JM> structure of a world and what an intelligent system within the world
JM> can learn about it. This will depend on how the system is connected
JM> to the rest of the world and what the system regards as meaningful
JM> propositions about the world and what it accepts as evidence for these
JM> propositions.

Sounds close to Pask's conversation theory. There is also a new field
being advocated by Paul McLean (brain researcher), called epistemics.
It is said to concern how we can know our "knowing organs," the brain
and mind. "While epistemology examines knowing from the outside in,
epistemics looks at it from the inside out." (William Gray, quoted in
Brain/Mind Bulletin 7.6, 3/8/82.)

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: Thu, 9 Jun 88 09:35:25 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: Re: human-human communication

In AIList Digest 7.24, Hunter Barr <bbn.com!pineapple.bbn.com!barr@bbn.com
or maybe hbarr@pineapple.bbn.com> says regarding Human-human communication:

HB> I will now express in language:
HB> "How to recognize the color red on sight (or any other color)..":

HB> Find a person who knows the meaning of the word "red." Ask her to
HB> point out objects which are red, and objects which are not,
HB> distinguishing between them as she goes along. If you are physically
HB> able to distinguish colors, you will soon get the hang of it.

This evades the question by stating in language how to find out for
yourself what I can't tell you in language.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 9 Jun 88 15:28:45 GMT
From: bbn.com!pineapple.bbn.com!barr@bbn.com (Hunter Barr)
Subject: Re: The Social Construction of Reality

In article <1332@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>Odi profanum vulgum, et arceo ...


Favete lingua!


I couldn't resist :-).
______
HUNTER

------------------------------

Date: Thu, 9 Jun 88 14:15:33 EDT
From: Raul.Valdes-Perez@B.GP.CS.CMU.EDU
Subject: construal of induction

Alen Shapiro states:

>There are basically 2 types of inductive systems
>
>a) those that build an internal model by example (and classify future
> examples against that model) and
>b) those that generate some kind of rule which, when run, will classify
> future examples
...
>I do not include those systems that are not able to generalise in either
>a or b since strictly they are not inductive!!

The concept of induction has various construals, it seems. The one I am
comfortable with is that induction refers to any form of ampliative
reasoning, i.e. reasoning that draws conclusions which could be false
despite the premises being true. This construal is advanced by Wesley
Salmon in the little book Foundations of Scientific Inference. Accordingly,
any inference is, by definition, inductive xor deductive.

I realize that this distinction is not universal. For example, some would
distinguish categories of induction. I would appreciate reading comments
on this topic in AILIST.

Raul Valdes-Perez
CMU CS Dept.

------------------------------

Date: Thu 9 Jun 88 13:51:46-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: Hypostatization


I agree with Pat Hayes that the problem of the existence of the world is
not as important as it used to be, but I think the more general question
about the relation of mind and world is still worthwhile. As Hayes
pointed out, such questions are entirely worthless if we stay close to
observation and never forget, as McCarthy suggests, that we are
postulating theoretical entities and their properties from our input
output data. Such an observational attitude is always aware that there
are no ``labels'' on our inputs and outputs that tell us where they come
from or where they go.

Sadly, such keen powers of observation are constantly endangered by the
mind's own activity. After inventing a theoretical entity, the mind
begins to treat it as raw observation, that is, the entities become part
of the world as far as the mind is concerned. The mind, in a sense,
becomes divided from its own creations. If the mind is lucky, new
observations will push it out of its complacency, but it is precisely
the mind's attachment to its creations that dulls the ability to
observe.

Hayes is correct that some forms of Western religion are particularly
prone to this process (called ``hypostatization''), but some eastern
religions are very careful about it. Kant devastated traditional
metaphysics by drawing attention to it. Freud and Marx were directly
concerned with hypostatization, though only Marx had the philosophical
training to know that that was what he was doing.

I'd be interested to know from someone familiar with the learning
literature whether hypostatization is a problem there. It would take
the form of assuming the structure of the world that is being learned
about before it is learned.


Conrad Bock

------------------------------

End of AIList Digest
********************
