AIList Digest            Friday, 27 May 1988        Volume 7 : Issue 8 

Today's Topics:

Philosophy and Social science
Re: Arguments against AI are arguments against human formalisms
Re: Re: Re: Exciting work in AI -- (stats vs AI learning)
non-AI theories about symbols
the mind of society
Alternative to Probability (was Re: this is philosophy ??!!?)
Assumptions in dialog (was Re: Acting irrationally)

----------------------------------------------------------------------

Date: 4 May 88 14:28:25 GMT
From: attcan!houdi!marty1@uunet.uu.net (M.BRILLIANT)
Subject: Re: this is philosophy ??!!?

In article <1588@pt.cs.cmu.edu>, Anurag Acharya writes:
> Gilbert Cockton writes:
> ...
> > Your system should prevaricate, stall, duck the
> >issue, deny there's a problem, pray, write to an agony aunt, ask its
> >mum, wait a while, get its friends to ring it up and ask it out ...
>
> Whatever does all that stuff have to do with intelligence per se?
> ....

Pardon me for abstracting out of context. Also for daring to comment
when I am not an AI researcher, only an engineer waiting for a useful
result.

But I see that as an illuminating bit of dialogue. Cockton wants to
emulate the real human decision maker, and I cannot say with certainty
that he's wrong. Acharya wants to avoid the pitfalls of human
fallibility, and I cannot say with certainty that he's wrong either.

I wish we could see these arguments as a conflict between researchers
who want to model the human mind, and researchers who want to make more
useful computer programs. Then we could acknowledge that both schools
belong in AI, and stop arguing over which should drive out the other.

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

Disclaimer: Opinions stated herein are mine unless and until my employer
explicitly claims them; then I lose all rights to them.

------------------------------

Date: 9 May 88 14:12:39 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.MIT.EDU
(Stephen Smoliar)
Subject: Re: this is philosophy ??!!?

In article <May.6.18.48.07.1988.29690@cars.rutgers.edu> byerly@cars.rutgers.edu
(Boyce Byerly ) writes:
>
>Perhaps the logical deduction of western philosophy needs to take a
>back seat for a bit and let less sensitive, more probalistic
>rationalities drive for a while.
>
I have a favorite paper which I always like to recommend when folks like Boyce
propose putting probabilistic reasoning "in the driver's seat":

Alvan R. Feinstein, "Clinical biostatistics XXXIX. The haze of Bayes,
the aerial palaces of decision analysis, and the computerized Ouija
board," CLINICAL PHARMACOLOGY AND THERAPEUTICS, Vol. 21, No. 4,
pp. 482-496.

This is an excellent (as well as entertaining) exposition of many of the
pitfalls of such reasoning written by a Professor of Medicine and Epidemiology
at the Yale University School of Medicine. I do not wish this endorsement to
be interpreted as a wholesale condemnation of the use of probabilities . . .
just a warning that they can lead to just as much trouble as an attempt to
reduce the entire world to first-order predicate calculus. We DEFINITELY
need abstractions better than such logical constructs to deal with issues
such as uncertainty and belief, but it is most unclear that probability
theory is going to provide those abstractions. More likely, we should
be investigating the shortcomings of natural deduction as a set of rules
which represent the control of reasoning and considering, instead, possibilities
of alternative rules, as well as the possibility that there is no one rule
set which is used universally but that different sets of rules are engaged
under different circumstances.

------------------------------

Date: 9 May 88 14:56:25 GMT
From: attcan!lsuc!spectrix!yunexus!geac!geacrd!cbs@uunet.uu.net
(Chris Syed)
Subject: Re: Social science gibber [Was Re: Various Future of AI]

This is a comment upon parts of two recent submissions, one by
Simon Brooke and another from Jeff Dalton.

Brooke writes:

> AI has two major concerns: the nature of knowledge, and the nature of
> mind. These have been the central subject matter of philosophy since
> Aristotle, at any rate. The methods used by AI workers to address these
> problems include logic - again drawn from Philosophy. So to summarise:
> AI addresses philosophical problems using (among other things)
> philosophers' tools. Or to put it differently, Philosophy plus hardware -
> plus a little computer science - equals what we now know as AI. The fact
> that some workers in the field don't know this is a shameful indictment on
> the standards of teaching in AI departments.

If anyone doubts these claims, s/he might try reading something on Horn
clause logic. And, as Brooke says, a dose of Thomas Kuhn seems called
for. It is no accident that languages such as Prolog seem to appeal to
philosophers. In fact, poking one's head into a Phil common room these
days is much like trotting down to the Comp Sci dept. All them
philosophers is talking like programmers these days. And no wonder - at
last they can simulate minds. Meanwhile, try Minsky's _The Society of
Mind_ for a peek at the crossover from the other direction. By the by,
it's relatively hard to find a Phil student, even at the graduate level,
who can claim much knowledge of Aristotle these days (quod absit)!
Nevertheless, doesn't some AI research have more mundane concerns than
the study of mind? Like how do we zap all those incoming warheads whilst
avoiding wasting time on the drones?
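
Since the pointer above is to Horn clause logic, a tiny sketch may help
make it concrete (not part of Syed's original post; the facts and rules
are invented, and Python stands in for Prolog):

    # Forward chaining over propositional Horn clauses: each rule is a
    # (body, head) pair -- if every atom in the body is known, the head
    # becomes known. This is the inference pattern Prolog builds on.
    rules = [({"human"}, "mortal"),
             ({"philosopher"}, "human"),
             ({"mortal", "philosopher"}, "quotable")]
    facts = {"philosopher"}

    changed = True
    while changed:                 # iterate to a fixed point
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True

    print(sorted(facts))   # ['human', 'mortal', 'philosopher', 'quotable']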

Jeff Dalton writes:

> Speaking of outworn dogmas, AI seems to be plagued by behaviorists,
> or at least people who seem to think that having the right behavior
> is all that is of interest: hence the popularity of the Turing Test.

I'm not sure that the Turing Test is quite in fashion these days, though
there is a notion of a 'Total Turing Test' (Daniel C. Dennett, I think?).
Behaviourism, I must admit, gives me an itch (positively reinforcing, I'm
sure). But I wonder just what 'the right behaviour' _is_, anyway? It
seems to me that children (from a Lockean 'tabula rasa' point of view)
learn & react differently from adults (with all that emotional baggage
they carry around). One aspect of _adult_ behaviour I'm not sure
AI should try to mimic is our nasty propensity to fear admitting we're
wrong. AI research offers Philosophy a way to strip out all the
social and cultural surrounds and explore reasoning in a vacuum...
to experiment upon artificial children. But adult humans cannot observe,
judge, or act without all that claptrap. As an Irishman from MIT once
observed, "a unique excellence is always a tragic flaw". Maybe it
depends on what you're after?

{uunet!mnetor,yunexus,utgpu}!geac!geacrd!cbs (Chris Syed)
GEM: CHRIS:66
"There can be no virtue in obeying the law of gravity." - J.E.McTaggart.

------------------------------

Date: 9 May 88 21:31:36 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Arguments against AI are arguments against human
formalisms

In article <1103@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
> BTW, Robots aren't AI. Robots are robots.

I'm reminded of the Lighthill report that caused a significant loss of
AI funding in the UK about 10 years ago. The technique is to divide
up AI, attack some parts as useless or worse, and then say the others
are "not AI". The claim then is that "AI", properly used, turns out
to encompass only things best not done at all. All of the so-called
AI that's worth supporting (and funding) belongs to other disciplines
and so should be done there.

Another example of this approach can be found earlier in the message:

> Note how scholars like John Anderson restrict themselves to proper
> psychological data. I regard Anderson as a psychologist, not as an AI
> worker.

A problem with this attack is that it is not at all clear that
AI should be defined so narrowly as to exclude, for example, *all*
robotics. That robots are robots does not preclude some of them
being programmed using AI techniques. Nor would an artificial
intelligence embodied in a robot automatically fail to be AI.

The attack seeks to set the terms of debate so that the defenders
cannot win. Any respectable result cited will turn out to be "not
AI". Any argument that AI is possible will be sat on by something
like the following (from <1069@crete.cs.glasgow.ac.uk>):

   Before the 5th Generation scare, AI in the UK had been sat on for
   dodging too many methodological issues. Whilst, like the AI pioneers,
   they "could see no reasons WHY NOT [add list of major controversial
   positions]", Lighthill could see no reasons WHY in their work.

In short, the burden of proof would be such that it could not be met.
The researcher who wanted to pursue AI would have to show the research
would succeed before undertaking it.

Fortunately, there is no good reason to accept the narrow definition
of AI, and anyone seeking to reject the normal use of the term should
accept the burden of proof. AI is not confined to attempts at human-
level intelligence, passing the Turing test, or other similar things
now far beyond its reach.

Moreover, the actual argument against human-level AI, once we strip
away all the misdirection, makes claims that are at least questionable.

> The argument against AI depends on being able to use written language
> (physical symbol hypothesis) to represent the whole human and physical
> universe. AI and any degree of literate-ignorance are incompatible.
> Humans, by contrast, may be ignorant in a literate sense, but
> knowledgeable in their activities. AI fails as this unformalised
> knowledge is violated in formalisation, just as the Mona Lisa is
> indescribable.

The claim that AI requires zero "literate-ignorance", for example, is
far from proven, as is the implication that humans can call on
abilities of a kind completely inaccessible to machines. For some
reasons to suppose that humans and machines are not on opposite sides
of some uncrossable line, see (again) Dennett's Elbow Room.

------------------------------

Date: 9 May 88 21:46:31 GMT
From: mcvax!ukc!its63b!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Sorry, no philosophy allowed here.

In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
(Gilbert Cockton) says:
> If you can't write it down, you cannot possibly program it.

Not so. I can write programs that I could not write down on paper
because I can use other programs to do some of the work. So I might
write programs that are too long, or too complex, to write on paper.

------------------------------

Date: Mon, 9 May 88 20:11:02 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Philosophy: Informatique & Marxism

--The following bounced when I tried two different ways to send it directly.

Gilbert Cockton: Even one reference to a critique of Systems Theory would be
helpful if it includes a bibliography. If you can't find one without too much
trouble, please send at least a few sentences explaining the flaw(s) you see.
I would dearly love to be able to advance beyond it, but don't yet see an
alternative.

--The difficulty of formalising knowledge.

Cockton makes a good point here. The situation is even worse than he
indicates. Many, perhaps most or all, decisions seem to be made
subconsciously, apparently by an analog vector-sum operation rather than
a logical, step-by-step process. We then rationalize our decisions,
usually so quickly & easily that we are never aware of the real reasons.
This makes
knowledge and procedure capture so difficult that I suspect most AI
researchers will try (ultimately unsuccessfully) to ignore it.
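
As a minimal sketch of that vector-sum picture (the weights and options
here are invented for illustration, not taken from any model in the
message):

    # Each consideration pushes on the decision with a signed weight;
    # the sign of the (analog) sum picks the action -- no step-by-step
    # chain of explicit rules is ever consulted.
    considerations = {"salary": +0.6, "commute": -0.3,
                      "fear of change": -0.4, "interesting work": +0.5}
    score = sum(considerations.values())
    decision = "take the job" if score > 0 else "stay put"
    print(score, "->", decision)   # 0.4 -> take the job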

--Marxism.

Economics is interesting in that it produced a cybernetic explanation of
production, value, & exchange (at least as early as the 17th century) long
before there was cybernetics. Marx (and Engels) spent a lot of time studying
& explaining this process, and much of their work is still useful. Other
parts have not been supported by advances in history & knowledge.

The workers (at least in America) are predominantly opposed to revolution and
support capitalism, despite much discontent about the way bosses treat them.
Part of this may be because our society holds out the hope that many of us can
become bosses ourselves. Another reason is that many workers, either directly
or through retirement plans, have become owners. Technology has also
differentiated the workforce, from mostly physical laborers to skilled
workers of many different types, who sympathize with their own subclass
rather than with workers in general.

Further, once workers feel they have reached an acceptable subsistence,
oftentimes they develop other motivations for work having nothing to do with
material concerns. People from a middle-class or higher background often
stereotype the "lowest proletariat" as beer-drinking slobs whose only interest
is food, football, and sex. Coming from a working class background (farmers
and factory laborers), I know that "doing a good job" is a powerful motivator
for many workers. The "higher proletariat" (who are further from the
desperate concern for survival) show this characteristic even more strongly.
Most engineers I know work for reasons having nothing to do with money; the
same is true of many academics and artists. (This is NOT to say money is
unimportant to them.)

Just as the practice of economics has deviated further & further from the
classical Marxist viewpoint, so has theory. Materialism, for instance, has
changed drastically in a world where energy is at least as important as
matter, which has itself become increasingly strange. Too, the science of
"substance" has been joined by a young, confused, but increasingly vigorous,
fertile and rigorous science of "form," variously called (in whole or in part)
computer science, cybernetics, communications theory, information science,
informatique, etc. This has interesting implications for theories of monetary
value and the definition of capital, implications that Marx did not see (&
probably could not, trapped in his time as he was).

Informatique has practical implications of which most of us on this list are
well aware. One of the most interesting economically is the future of
guardian-angel programs that help us work: potentially putting us out of a
job, elevating our job beyond old limits, or (as any powerful tool can)
harming us. And in one of the greatest ironies of all, AI researchers working
in natural language and robotics have come to realize the enormous
sophistication of "common labor" and the difficulties and expense of
duplicating it mechanically.
Larry @ jpl-vlsi

------------------------------

Date: Tue, 10 May 88 09:44 EDT
From: Stephen Robbins <Stever@WAIKATO.S4CC.Symbolics.COM>
Subject: AIList V6 #97 - Philosophy

Date: 6 May 88 22:48:09 GMT
From: paul.rutgers.edu!cars.rutgers.edu!byerly@rutgers.edu (Boyce Byerly )
Subject: Re: this is philosophy ??!!?

2) In representing human knowledge and discourse, it fails because it
does not recognize or deal with contradiction. In a rigorously
logical system, if

    P ==> Q
    ~Q
    P

then we can derive the contradiction Q and ~Q.

If you don't believe human beings can have the above derivably
contradictory structures in their logical environments, I suggest you
spend a few hours listening to some of our great political leaders :-)
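
The three premises really do admit no consistent assignment, which a
brute-force truth-table check makes concrete (a sketch added here, not
part of Byerly's post):

    # Enumerate all truth assignments for P and Q; the premise set
    # {P ==> Q, ~Q, P} is satisfied by none of them, so any system
    # holding all three contains a contradiction.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    unsatisfiable = True
    for P, Q in product([False, True], repeat=2):
        if implies(P, Q) and (not Q) and P:
            unsatisfiable = False   # a model of all three premises
    print("premises are unsatisfiable:", unsatisfiable)   # -> True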

There's also an issue of classification, with people. How do they even \know/
something's a Q or ~Q?

One of the most fascinating (to me) moments in the programming class I teach is
when I hand out a sheet of word problems for people to solve in LISP. If I
call them "mini program specs," the class grabs them and gets right to work.

If I call them "word problems," I start to get grown men and women telling me
that "they can't \do/ word problems." Despite their belief that they \can/
program!

It seems to be a classification issue.

------------------------------

Date: 6 May 88 23:05:54 GMT
From: oliveb!tymix!calvin!baba@sun.com (Duane Hentrich)
Subject: Re: Free Will & Self Awareness

In article <31024@linus.UUCP> bwk@mbunix (Barry Kort) writes:
>Why then, when a human engages in undesirable behavior, do we resort
>to such unenlightened corrective measures as yelling, hitting, or
>deprivation of life-affirming resources?

Probably some vague ideas that negative reinforcement works well. Or
role-modeling parents who did the same thing.

For the same reason that the Enter/Carriage Return key on many keyboards
is hit repeatedly and with great force, i.e. frustration with an
inefficient/ineffective interface which doesn't produce the desired results.

Yeah. I've noticed this: if something doesn't work, people do it longer,
harder, and faster. "Force it!" seems to be the underlying attitude. But in my
experience, slowing down and trying carefully works much better. "Force it!"
hasn't even been a particularly good heuristic for me.

Actually, I wonder if it's people-in-general, or primarily a Western
phenomenon.

-- Stephen

------------------------------

Date: 10 May 88 17:31:38 GMT
From: dogie!mish@speedy.wisc.edu
Subject: Re: Sorry, no philosophy allowed here.

In article <414@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes...

>In article <1069@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk
>(Gilbert Cockton) says:
>> If you can't write it down, you cannot possibly program it.
>
>Not so. I can write programs that I could not write down on paper
>because I can use other programs to do some of the work. So I might
>write programs that are too long, or too complex, to write on paper.

YACC Lives! I've written many a program that included code 'written' by
some other program (namely YACC).
The point is that the computer allows us to extend what we know. I may
not have actually written the code, but I knew how to tell the computer to
write the code. In doing so, I created a program that I never (well, almost
never) could have written myself even though I knew how.
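
The same point in miniature (any code generator stands in for YACC here;
the file and function names are invented):

    # One program writes another program that would be tedious to write
    # by hand: a classifier with a thousand explicit branches.
    src_lines = ["def classify(n):"]
    for i in range(1000):
        src_lines.append(f"    if n == {i}: return {i * i}")
    src_lines.append("    return None")

    with open("generated.py", "w") as f:
        f.write("\n".join(src_lines))

    import generated                 # use the machine-written code
    print(generated.classify(12))    # -> 144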
Tom Mish Jr.                         Univ. of Wis. Madison, MACC
Bit:   mish@wiscmacc
Arpa:  mish@vms.macc.wisc.edu
Phone: (608) 262-8525

------------------------------

Date: 10 May 1988 14:10 (Tuesday)
From: munnari!nswitgould.oz.au!wray@uunet.UU.NET (Wray Buntine)
Subject: Re: Re: Re: Exciting work in AI -- (stats vs AI learning)

It is worth my clarifying Stuart Crawford's
(Advanced Decision Systems, stuart@ads.com, Re: Exciting work in AI)
recent comments on my own discussion of Quinlan's work, because they bring
out a distinction between purely statistical approaches and
approaches from the AI area for learning prediction rules from noisy data.

This prediction problem is OF COURSE an applied statistics one.
(My original comment never presumed otherwise---the reason I
posted the comment to AIList in the first place was to point this out.)

But it is NOT ALWAYS a purely applied statistics problem (hence my
comments about Quinlan's "improvements").

1. In knowledge acquisition, we usually don't have a purely statistical
problem; we often have a small amount of data and a knowledgeable but
only moderately articulate expert. To apply a purely statistical
approach to the data alone is clearly to ignore a very good source of
information: the "expert". To expect the expert to spout forth relevant
information is naive. We have to produce a curious mix of applied
statistics and cognitive psychology to get good results. With the
comprehensibility of statistical results, prevalent in learning work
labelled as AI, we can draw the expert into giving feedback on
statistical results (potential rules). This is a devious but
demonstrably successful means of capturing some of his additional
information. There are other advantages of comprehensibility in the
"knowledge acquisition" context that again arise for non-statistical
reasons.

2. Suffice it to say, trees may sometimes be more comprehensible than
rules (when they're small they certainly give a better picture of the
overall result), but when they're large they aren't always.
Transforming trees to rules is not simply a process of picking a branch
and calling it a rule. A set of disjunctive rules can be logically
equivalent to a minimal-size tree that is LARGER BY AN ORDER OF
MAGNITUDE. In a recent application (reported in CAIA-88) the expert
flatly refused to go over trees, but on being shown rules he found
errors in the data preparation and in the problem formulation, and
provided substantial extra information (the rules jogged his memory),
merely because he could easily comprehend what he was looking at. Need
I say, subsequent results were far superior.
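
To see the branch-per-rule starting point Buntine says is insufficient,
here is a sketch with an invented toy tree (not Quinlan's code or
Buntine's): each root-to-leaf path is read off as one IF-THEN rule.
Real transformations must then simplify and prune these rules, which is
where the order-of-magnitude size differences arise.

    # A decision node is (attribute, {value: subtree}); a leaf is a label.
    tree = ("outlook",
            {"sunny":    ("humidity", {"high": "don't play",
                                       "normal": "play"}),
             "overcast": "play",
             "rainy":    ("windy", {"true": "don't play",
                                    "false": "play"})})

    def paths_to_rules(node, conditions=()):
        if isinstance(node, str):            # leaf: emit one rule
            yield (conditions, node)
        else:
            attr, branches = node
            for value, child in branches.items():
                yield from paths_to_rules(child,
                                          conditions + ((attr, value),))

    for conds, label in paths_to_rules(tree):
        ifs = " AND ".join(f"{a} = {v}" for a, v in conds)
        print(f"IF {ifs} THEN {label}")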

In summary, when learning prediction rules from noisy data, AI approaches
complement straight statistical ones in knowledge acquisition contexts, for
reasons outside the domain of statistics. In our experience, and the
experience of many others, this can be necessary to produce results.


Wray Buntine
wray@nswitgould.oz
School of Computing Science
University of Technology, Sydney
PO Box 123, Broadway
Australia, 2007

------------------------------

Date: 10 May 88 11:56 PDT
From: hayes.pa@Xerox.COM
Subject: Re: AIList V6 #98 - Philosophy

Nancyk has made a valuable contribution to the debate about logic and AI,
raising the discussion to a new level. Of course, once one sees that to believe
that (for example) if P and Q are both true, then P&Q is true, is merely an
artifact of concealed organismal Anthropology, a relic of bourgeois ideology, the
whole matter becomes much clearer. We who have thought that psychology might be
relevant to AI have missed the point: of course, political science -
specifically, Marxist political science - is the key to making progress. Let us
all properly understand the difference between the Organismus-Umwelt
Zusammenhang and the Mensch-Umwelt Zusammenhang, and let our science take
sides with the working people, and we will be in a wholly new area. And I
expect Gilbert will be happier. Thanks, Nancy.

Pat Hayes

------------------------------

Date: 11 May 88 04:39:31 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: Arguments against AI are arguments against human
formalisms

In article <1103@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
> BTW, Robots aren't AI. Robots are robots.

Rod Brooks has written "Robotics is a superset of AI". Robots have
all the problems of stationary artificial intelligences, plus many more.
Several of the big names in AI did work in robotics back in the early days of
AI. McCarthy, Minsky, Winograd, and Shannon all did robotics work at one
time. But they did it in a day when the difficulty of the problems to be
faced was not recognized. There was great optimism in the early days,
but even seemingly simple problems such as grasping turned out to be
very hard. Non-trivial problems such as general automatic assembly or
automatic driving under any but the most benign conditions turned out to
be totally out of reach with the techniques available.

Progress has been made, but by inches. Nevertheless, I suspect that
over the next few years, robotics will start to make a contribution to
the more classic AI problems, as the techniques being developed for geometric
reasoning and sensor fusion start to become the basis for new approaches
to artificial intelligence.

I consider robotics a very promising field at this point in time.
But I must offer a caution. Working in robotics is risky. Failure is
so obvious. This can be bad for your career.

John Nagle

------------------------------

Date: Wed 11 May 88 14:18:27-PDT
From: Conrad Bock <BOCK@INTELLICORP.ARPA>
Subject: non-AI theories about symbols


For those interested in non-AI theories about symbols, the following is
a very quick summary of Freud and Marx.

In Freud, the prohibition on incest forces children to express their
sexuality through symbolic means. Sexual desire is repressed in the
unconscious, leaving the symbols to be the center of people's attention.
People begin to be concerned with things for which there seems no
justification.

Marx observed that the act of exchanging objects in an economy forces us
to abstract from our individual labors to a symbol of labor in general
(money). The abstraction becomes embodied in capital, which people
constantly try to accumulate, forgetting about the value of the products
themselves.

Both Marx and Freud use the term `fetish' to refer to the process in which
symbols (of sex and labor) begin to form systems that operate
autonomously. In Freud's fetishism, someone may be obsessed with feet
instead of actual love; in Marx, people are interested in money instead
of actual work. In both cases, we lose control of something of our own
creation (the symbol) and it dominates us.

Conrad Bock

------------------------------

Date: 12 May 88 19:18:17 GMT
From: centro.soar.cs.cmu.edu!acha@pt.cs.cmu.edu (Anurag Acharya)
Subject: Re: this is philosophy ??!!?

In article <86@edai.ed.ac.uk> rjc@edai.ed.ac.uk (Richard Caley) writes:
>> Imagine Mr. Cockton, you are standing on the 36th floor of a building
>> and you and your mates decide that you are Superman and can jump out
>> without getting hurt.
>Then there is something going wrong in the negotiations within the
>group!!

Oh, yes! There definitely is! But it still is a "negotiation" and it
is "social"! Since 'reality' and 'truth' are being defined as
"negotiated outcomes of social processes", there are no constraints on
what these outcomes may be. I can see no reason why a group couldn't
conclude just that (esp. since physical-world constraints are not
necessarily a part of these "negotiations").

>Saying that Y is the result of process X does not imply that any result
>from X is a valid Y. In particular 'reality is the outcome
>of social negotiation' does not imply that "real world" (whatever that is)
>constraints do not have an effect.

Do we have "valid" and "invalid" realities around?

>If we decided that I was Superman then presumably there is good evidence
>for that assumption, since it is pretty hard to swallow. _In_such_a_case_
>I might jump. Being a careful soul I would probably try some smaller drops
>first!

Why would it be pretty hard to swallow? And why do you need "good"
evidence? For that matter, what IS good evidence - that ten guys
(possibly deranged or malicious) say so? Have you thought about why you
would consider getting some real hard data by trying out smaller drops?
It is because the physical world just won't go away, and the only real
evidence that even you would accept is actual outcomes of physical
events. The physical world is the final arbiter of "reality" and
"truth" no matter what process you use to decide on your course of
action.


>To say you would not jump would be to say that you would not accept that
>you were Superman no matter _how_ good the evidence.

If you accept consensus of a group of people as "evidence", does the
degree of goodness depend on the number of people, or what ?

> Unless you say that the
>concept of you being Superman is impossible ( say logically inconsistent with
>your basic assumptions about the world ), which is ruled out by the
>presuppositions of the example ( since if this was so you would never come
>to the consensus that you were him ), then you _must_ accept that sufficient
>evidence would cause you to believe and hence be prepared to jump.

Ah, well.. if you reject logical consistency as a valid basis for
argument then you could come to any conclusion/consensus in the world
you please - you could conclude that you (simultaneously) were and were
not Superman! Then, do you jump out or not? (or maybe teeter at the
edge :-)) On the other hand, if you accept logical consistency as a
valid basis for argument - you have no need for a crowd to back you up.

Come on, does anyone really believe that if he and his pals reach a
consensus on some aspect of the world, the world will change to suit
them? That is the conclusion I keep getting out of all this nebulous
and hazy stuff about 'reality' being a function of 'social processes'.
--
Anurag Acharya Arpanet: acharya@centro.soar.cs.cmu.edu

"There's no sense in being precise when you don't even know what you're
talking about"
-- John von Neumann

------------------------------

Date: Fri, 13 May 88 06:00:14 EDT
From: "Bruce E. Nevin" <bnevin@cch.bbn.com>
Subject: the mind of society

A confession first about _The Society of Mind_: I have not yet read all
of this remarkable linearized hypertext document. I do not think this
matters for what I am saying here, but I certainly could be mistaken and
would welcome being corrected.

SOM gives a useful and even illuminating account of what might be going
on in mental process. I think this account misses the mark in the
following way:

While SOM demonstrates in an intellectually convincing way that the
experience of being a unitary self or an ego is illusory, it still
assumes that individual persons are separate. It even appears to
identify minds with brains in a one-one correspondence. But the logic
of interacting intelligent agents applies in obvious ways to interacting
persons. What then, for example, of the mind of society?

(I bring to this training and experience in family therapy. Much of the
subconscious mental processes and communications of family members serve
to constitute and maintain the family as a homeostatic system--analogous
to the traffic of SOM agents maintaining the ego, see below.
Individuals--especially those out of communication with family members--
recreate relationships from their families in their relationships
outside the family; and vice versa, as any parent knows. I concur with
Gilbert Cockton's remarks about social context. If in alienation we
pretend social context doesn't matter, we just extend the scope of that
which we ignore, i.e. that which we relegate to subconsciousness.)

Consider the cybernetic/constructivist view that mind is not
transcendent but rather is immanent (an emergent property) in the
cybernetic loop structure of what is going on. I believe SOM is
consistent with this view as far as it goes, but that the SOM account
could and should go farther--cf e.g. writings of Gregory Bateson, Paul
Watzlawick, Maturana, Varela, and others; and in the AI world, some
reaching in this direction in Winograd & Flores _Understanding
Computers_.

Minsky cites Freud ("or possibly Poincaré") as introducing serious
consideration of subconscious thought. However, the Buddhists, for
example, are pretty astute students of the mind, and have been
investigating these matters quite systematically for a long time.

What often happens when walking, jogging, meditating, laughing, just
taking a deep sighing breath, etc is that there are temporarily no
decisions to be made, no distinctions to be discriminated, and the
reactive mind, the jostling crowd of interacting agents, quiets down a
bit. It then becomes possible to observe the mental process more
"objectively". The activity doesn't stop ("impermanence" or ceaseless
change is the byword here). It may become apparent that this activity
is and always was out of control. What brings an activity of an agent
(using SOM terms) above the threshold between subconscious and conscious
mental activity? What lets that continuing activity slip out of
awareness? Not only are the activities out of control--most of them
being most of the time below the surface of the ocean, so to speak, out
of awareness--but even the constant process of ongoing mental and
emotional states, images, and processes coming to the surface and
disappearing again below the surface turns out also to be out of
control.

This temporary abeyance in the need to make decisions has an obvious
relation to Minsky's speculation about "how we stop deciding" (in
AIList V6 #98):

MM> I claim that we feel free when we decide to not try further to
MM> understand how we make the decisions: the sense of freedom comes from a
MM> particular act - in which one part of the mind STOPs deciding, and
MM> accepts what another part has done. I think the "mystery" of free will
MM> is clarified only when we realize that it is not a form of decision
MM> making at all - but another kind of action or attitude entirely, namely,
MM> of how we stop deciding.

The report here is that if you stop deciding voluntarily--hold the
discrimination process in abeyance for the duration of sitting in
meditation, for example, not an easy task--there is more to be
discovered than the subjective feeling of freedom. Indeed, it can seem
the antithesis of freedom and free will, at times!

So the agents of SOM are continually churning away, and if they're
predetermined it's not in any way that amounts to prediction and control
as far as personal awareness is concerned. And material from this
ongoing chatter continually rises into awareness and passes away out of
awareness, utterly out of personal control. (If you don't believe me,
look for yourself. It is a humbling experience for one wedded to
intellectual rigor and all the rest, I can tell you.)

Evidently, this fact of impermanence is due to there being no ego there
to do any controlling. Thus, one comes experientially to the same
conclusion reached by the intellectual argumentation in SOM: that there
is no self or ego to control the mind. Unless perhaps it be an emergent
property of the loop structures among the various agents of SOM.
Cybernetics has to do with control, after all. (Are emergent properties
illusory? Ilya Prigogine probably says no. But the Buddhists say the
whole ball of wax is illusory, all mental process from top to bottom.)

Here, the relation between "free will" and creativity becomes more
accessible. Try substituting "creativity" for "free will" in all the
discussion thus far on this topic and see what it sounds like. It may
not be so easy to sustain the claim that "there is no creativity because
everything is either determined or random." And although there is a
profound relation between creativity and "reaching into the random" (cf
Bateson's discussions of evolution and learning wrt double-bind theory),
that relation may say more about randomness than it does about
creativity.

If the elementary unit of information and of mind is a difference that
makes a difference (Bateson), then we characterize as random that in
which we can find no differences that make a difference. Randomness is
dependent on perspective. Changes in perspective and access to new
perspectives can instantly convert the random to the non-random or
structured. As we have seen in recent years, "chaos" is not random,
since we can discern in its nonlinearity differences that make a
difference. (Indeed, a cardinal feature of nonlinearity as I understand
it is that small differences of input can make very large differences of
output, the so-called "butterfly effect".) From the point of view of
personal creativity, "reaching into the random" often means reaching
into an irrelevant realm for analogy or metaphor, which has its own
structure unrelated in any obvious or known way to the problem domain.
(Cf. De Bono.) "Man's reach shall e'er exceed his grasp,/Else what's a
meta for?" (Bateson, paraphrasing Browning.)
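
The butterfly effect mentioned above is easy to exhibit numerically; the
following is a standard logistic-map demonstration (added here, not from
Nevin's post):

    # Two inputs differing by one part in a million diverge to an
    # order-1 difference within a few dozen iterations of a nonlinear map.
    r = 4.0                  # fully chaotic regime of the logistic map
    x, y = 0.300000, 0.300001
    for step in range(25):
        x, y = r * x * (1 - x), r * y * (1 - y)
    print(abs(x - y))        # the tiny difference is now of order 1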

It is interesting that the Buddhist experience of the Void--no ego, no
self, no distinctions to be found (only those made by mind)--is logically
equivalent to the view associated with Vedanta, Qabala, neoplatonism,
and other traditions, that there is but one Self, and that the personal
ego is an illusory reflection of That, i.e. of God. ("No distinctions
because there is no self vs other" is logically equivalent to "no
distinctions because there is but one Self and no other", the
singularity of zero.) SOM votes with the Buddhists, if it matters, once
you drop the presumed one-one correspondence of minds with brains.

On the neoplatonist view, there is one Will, and it is absolutely free.
It has many centers of expression. You are a center of expression for
that Will, as am I. Fully expressing your particular share of (or
perspective on) that Will is the most rewarding and fulfilling thing you
can do for yourself; it is in fact your heart's desire, that which you
want to be more than anything else. It is also the most rewarding and
beneficial thing you can possibly do for others; this follows directly
from the premise that there is but one Will. (This is thus a
perspective that is high in synergy, using Ruth Benedict's 1948 sense of
that much buzzed term.) You as a person are of course free not to
discover and not to do that which is your heart's desire, so the
artifactual, illusory ego has free will too. It's "desire" is to
continue to exist, that is, to convince you and everyone else that it is
real and not an illusion.

Whether you buy this or not, you can still appreciate and use the
important distinction between cleverness (self acting to achieve desired
arrangement of objectified other) and wisdom (acting out of the
recognition of self and other as one whole). I would add my voice to
others asking that we develop not just artificial cleverness, but
artificial wisdom. Winograd & Flores again point in this direction.

Bruce Nevin
bn@cch.bbn.com
<usual_disclaimer>

------------------------------

Date: 13 May 88 14:19:00 GMT
From: vu0112@bingvaxu.cc.binghamton.edu (Cliff Joslyn)
Subject: Alternative to Probability (was Re: this is philosophy ??!!?)

In article <5459@venera.isi.edu> Stephen Smoliar writes:

>We DEFINITELY need abstractions better than such logical constructs to
>deal with issues such as uncertainty and belief, but it is most unclear
>that probability theory is going to provide those abstractions. More
>likely, we should be investigating the shortcomings of natural deduction
>as a set of rules which represent the control of reasoning and considering,
>instead, possibilities of alternative rules, as well as the possibility
>that there is no one rule set which is used universally but that
>different sets of rules are engaged under different circumstances.


Absolutely right.

Furthermore, such a theory exists: Fuzzy Systems Theory. Over the past
fifteen years, through the work of Zadeh, Prade, Dubois, Shafer, Gaines,
Baldwin, Klir, and many others, we now understand that probability
measures in particular are a very special case of Fuzzy Measures in
general. Belief, Plausibility, Possibility, Necessity, and Basic
Probability Measures all provide alternative, and very powerful,
formalisms for representing uncertainty and indeterminism.

The traditional concept of 'information' itself is also recognized as a
special case. Other formalisms include measures of Fuzziness,
Uncertainty, Dissonance, Confusion, and Nonspecificity.

These methods are having a very wide impact in AI, especially with
regard to the representation of uncertainty in artificial reasoning.
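
As one small worked example of these formalisms (the numbers are
invented for illustration): possibility and necessity measures computed
from a possibility distribution, which, unlike a probability
distribution, is normalized by its maximum rather than its sum.

    # Pos(A) = max of the possibility distribution over A;
    # Nec(A) = 1 - Pos(complement of A). Note that Pos(A) + Pos(not A)
    # may exceed 1, which no probability measure allows.
    pi = {"flu": 1.0, "cold": 0.7, "allergy": 0.3}

    def possibility(event):
        return max(pi[x] for x in event)

    def necessity(event):
        complement = set(pi) - set(event)
        return 1.0 - (possibility(complement) if complement else 0.0)

    A = {"flu", "cold"}
    print(possibility(A), necessity(A))   # -> 1.0 0.7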

Primary references:
Klir, George, _Fuzzy Sets, Uncertainty, and Information_,
Prentice Hall, 1988.

Dubois, D. and Prade, H., _Fuzzy Sets and Systems: Theory and
Applications_, Academic, 1980.

Shafer, G., _A Mathematical Theory of Evidence_, Princeton,
1976.

Zadeh, L.A., "The Role of Fuzzy Logic in the Management of
Uncertainty in Expert Systems," in Gupta, M.M., et al.,
_Approximate Reasoning in Expert Systems_, U. Cal. Berkeley, 1985.


Journals:
_Fuzzy Sets and Systems_

_International J. of Approximate Reasoning_

_International J. of Man-Machine Studies_

_Information Sciences_

_International J. of General Systems_
--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, vu0112@bingvaxu.cc.binghamton.edu
V All the world is biscuit shaped. . .

------------------------------

Date: 16 May 88 10:29 PDT
From: hayes.pa@Xerox.COM
Subject: Re: AIList Digest V7 #1

I was fascinated by the correspondence between Gabe Nault and Mott Given in
vol7#1, concerning "an artificial intelligence language ... something more
than lisp or xlisp." Can anyone suggest a list of features which a
programming language must have which would qualify it as an "artificial
intelligence language"?

Pat Hayes

------------------------------

Date: 16 May 88 18:26:11 GMT
From: bwk@mitre-bedford.arpa (Barry W. Kort)
Subject: Re: AIList V6 #86 - Philosophy

David Sher has injected some new grist into the discussion of
"responsibility" for machines and intelligent systems.

I tend to delegate responsibility to machines known as "feedback
control systems". I entrust them to maintain the temperature of
my house, oven, and hot water. I entrust them to maintain my
highway speed (cruise control). When these systems malfunction,
things can go awry in a big way. I think we would have no trouble
saying that such feedback control systems "fail", and their failure
is the cause of undesirable consequences.
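
A minimal sketch of the kind of controller meant here (the setpoint and
threshold are invented): a bang-bang thermostat that acts only on the
error between desired and measured temperature, with nothing to blame
when it misbehaves.

    def thermostat_step(setpoint, measured, hysteresis=0.5):
        """Return an action from the current error; no memory, no blame."""
        error = setpoint - measured
        if error > hysteresis:
            return "heat on"
        if error < -hysteresis:
            return "heat off"
        return "hold"

    for temp in (18.0, 20.4, 21.8):
        print(temp, "->", thermostat_step(setpoint=20.5, measured=temp))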

The only interesting issue is our reaction. I say fix them (or
improve their reliability) and get on with it. Blame and punishment
are pointless. If a system is unable to respond, doesn't it make
more sense to restore its ability than to merely label it "irresponsible"?

--Barry Kort

------------------------------

Date: Mon, 16 May 88 19:00 MST
From: DanPrice@HIS-PHOENIX-MULTICS.ARPA
Subject: Sociology vs Science Debate

In regard to the Sociology vs Science debate, it seems to me that in
business & politics, the bigger & more important a decision, the less
rational is the process used to arrive at it. Or to put it another way,
emotion overrides logic every time! Question: What is the relationship
between emotion and intelligence, and how do we program emotion into our
logical AI machines?? Do we have an AI system that can tell whether a
respondent is behaving in an emotional or a logical way??

------------------------------

Date: 26 May 88 16:27:03 GMT
From: krulwich-bruce@yale-zoo.arpa (Bruce Krulwich)
Subject: Assumptions in dialog (was Re: Acting irrationally)

In article <180@proxftl.UUCP> tomh@proxftl.UUCP (Tom Holroyd) writes:
>True communication can only occur when both parties understand what all
>the symbols used to communicate mean. This doesn't mean you have to
>explicitly define what you mean by "tree" every time you use the word
>tree, but it's a good idea to define it once, especially if it's something
>more complex than "tree" (with due respect to all sentient hardwood).

This can't be true. True communication occurs whenever the two parties'
understandings of the words used overlap in the areas in which they are
in fact being used. Every word that a speaker uses will have a large
set of information associated with it in the speaker's mind, but only a
small subset of that information will actually be needed to understand
what the speaker is saying. The trick is for the listener to (1) have
the necessary information as a subset of the information that he has
about the word (which is what you are considering above), and (2)
correctly choose that subset from the information he has (which is a
form of the indexing problem).
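
A toy sketch of those two steps (the lexicon and context cues are
invented, and this is not a proposal from Krulwich's post):

    # Step 1: the listener holds a large fact set per word; step 2: the
    # context indexes into it, selecting the relevant subset.
    lexicon = {"tree": {"plant", "has bark", "grows",
                        "node structure", "has root pointer"}}

    def interpret(word, context_cues):
        facts = lexicon[word]
        return {f for f in facts if any(cue in f for cue in context_cues)}

    print(interpret("tree", {"bark", "grows"}))   # botany reading
    print(interpret("tree", {"node", "root"}))    # data-structure reading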

>>The problem comes in deciding WHAT needs to be explicitly articulated
>>and what can be left in the "implicit background." That is a problem
>>which we, as humans, seem to deal with rather poorly, which is why
>>there is so much yelling and hitting in the world.

Au contraire, humans do this quite well. True, there are problems,
but most everyday communication is quite successful. Computers at this
point can't succeed at this (the indexing problem) anywhere near as well
as people do (yet).

>Here's a simple rule: explicitly articulate everything, at least once.
>The problem, as I see it, is that there are a lot of people who, for
>one reason or another, keep some information secret (perhaps the
>information isn't known).

You are vastly underestimating the amount of knowledge you (and everybody)
have for every word, entity, and concept you know about. More likely is
the idea that people have a pretty good idea of what other people know (see
Wilks' recent work, for example). Again, this breaks down seemingly often,
but 99% of the time people seem to be correct. Just because it doesn't
take any effort to understand simple sentences like "John hit Mary" doesn't
mean that there isn't a lot of information selection and assumption making
going on.


Bruce Krulwich

------------------------------

End of AIList Digest
********************
