AIList Digest Monday, 26 Oct 1987 Volume 5 : Issue 249
Today's Topics:
Comments - The Success of AI
----------------------------------------------------------------------
Date: 23 Oct 87 13:23:05 GMT
From: cbmvax!snark!eric@rutgers.edu (Eric S. Raymond)
Subject: Re: The Success of AI
In article <1342@tulum.swatsun.UUCP>, scott@swatsun (Jay Scott) writes:
>[quoting me:]
>> In *each case*, these problem areas were defined out of the AI field as soon
>> as they spawned halfway-usable technologies and acquired their own research
>> communities.
>
> Here's one speculation: People see intelligence as mysterious, intrinsically
> non-understandable. So anything understood can't be part of intelligence,
> and can't be part of AI. I assume this was what Eric had in mind in a
> previous article, when he mentioned "hidden vitalist premises".
Yes, that is precisely what I intended.
> Any other good ideas?
Maybe :-). A friend once told me that she'd read that human institutions reach
a critical size at 250 people; that that is the largest social unit for which
a single member can keep a reasonable grasp on the capabilities and style of
everyone else in the group. This is said to explain the remarkably
consistent size of pre-industrial villages in areas where enough settlement
land is available that people can move elsewhere when they want to.
There is supposedly one well-known company that has found that the productivity
gains from holding their work units down to this size more than justify the
diseconomies of scale from small plants.
This idea gets some confirmation from my experience of SF fandom, a totally
voluntarist subculture that has, historically, thrown off sub-communities
like yeast buds (SCA, Trek fandom, the Darkovans, the Dr. Who people, etc.
etc.). We even have a name for these 'buds'; they're called "fringe fandoms"
and the people in them "fringefen" (the correct plural of "SF fan" is, by
ancient tradition, "SF fen").
In this context, the theory needs a little generalizing; what seems
to count for that magic 250 is not the number of self-described "Xites", but
rather the smaller number of *organizers* and *regulars*: the people who
maintain the subculture's communications networks and set its style.
Now: let's assume a parallel division in science between "stars" (the people
who do, or are seen to be doing, the important work) and "spear carriers"
(the people who fill in the corners, tie down the details, go after the
last decimal places, and get most of the grants ;-)). We then have:
RAYMOND'S HYPOTHESIS:
A scientific field with more than 250 "stars" will tend to fragment
into subspecialties more and more strongly as the size increases.
It would be interesting to look at other classes of voluntarist subcultures
(like, say, fringe political parties) to see if a similar pattern holds.
--
Eric S. Raymond
UUCP: {{seismo,ihnp4,rutgers}!cbmvax,sdcrdcf!burdvax,vu-vlsi}!snark!eric
Post: 22 South Warren Avenue, Malvern, PA 19355 Phone: (215)-296-5718
------------------------------
Date: 23 Oct 87 21:23:31 GMT
From: ihnp4!chinet!nucsrl!coray@ucbvax.Berkeley.EDU (Elizabeth)
Subject: Re: The success of AI (misunderstandings)
In response to: spe@SPICE.CS.CMU.EDU (Sean Engelson) / 9:21 am Oct 22, 1987 /
> This is reasonable because the human body is finite in extent,
> and thus there is a finite amount of information to discover,
> thus it can be discovered in finite (although possibly very large) time.
I am planning on gracefully failing my qualifiers in just two weeks, and
one of the questions I plan to fail will have to do with decidability.
Because now I know that I will blithely point out that language is finite in
extent and thus there is only a finite amount of information which it
can convey, so why worry about unprovable true theorems? We'll just
prove all the true ones (in possibly very large finite time?) and then
see if the theorem of interest is in this finite set.
Grade -2.
------------------------------
Date: Saturday, 24 October 1987, 18:41-EDT
From: nick@MC.LCS.MIT.EDU
Subject: The success of AI (misunderstandings)
In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>Given a sufficiently powerful computer, I could, in theory, simulate
>the human body and brain to any desired degree of accuracy.
You are in good company. Laplace thought much the same thing
about the entire physical universe.
However, some results in chaos theory appear to imply that
complex real systems may not be predictable even in principle. In a
dynamic system with sufficiently 'sensitive dependence on initial
conditions' arbitrarily large separations can appear (in the state
space) between points that were initially arbitrarily close. No
conceivable system of measurement can get around the fact that the
behavior of the system itself 'systematically' erodes our information
about its state.
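A minimal sketch of that point, using the logistic map as a stand-in
for a "complex real system" (the map, the parameter r = 4.0, and the
1e-10 measurement error are illustrative assumptions, not anything
from the original post):

    # Logistic map x' = r*x*(1-x) with r = 4.0, a standard chaotic regime.
    # Two states that agree to ten decimal places diverge within ~40 steps.
    r = 4.0
    x = 0.2            # the "true" initial state
    y = 0.2 + 1e-10    # a measurement of that state, off by 1e-10
    for step in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: separation = {abs(x - y):.3e}")
    # The separation grows roughly exponentially until it saturates at the
    # size of the attractor, so finer measurement only delays, and never
    # prevents, the loss of predictive information.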
For a good intro to chaos theory, see the article by Farmer,
Packard, et al. in Scientific American, December 1986.
------------------------------
Date: 24 Oct 87 20:41:29 GMT
From: ihnp4!homxb!whuts!mtune!codas!usfvax2!pdn!alan@ucbvax.Berkeley.EDU (Alan Lovejoy)
Subject: Re: The Success of AI
In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
/Given a sufficiently powerful computer, I could, in theory, simulate
/the human body and brain to any desired degree of accuracy...
/...if I can simulate the body in a computer, the
/computer is a sufficiently powerful model of computation to model the
/human mind...
The ultimate in "machine emulation"!!!!
Why does this remind me of Chomsky's concept of 'weak' and 'strong'
equivalence between grammars? Hmmm...
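For readers who don't recall the distinction: two grammars are weakly
equivalent if they generate the same strings, strongly equivalent if
they also assign the same structures. A hypothetical sketch (the
grammars are invented purely for illustration):

    # Two weakly (but not strongly) equivalent grammars for { a^n b^n }:
    #   G1: S -> a S b | a b           (nests from the outside in)
    #   G2: S -> a T ;  T -> S b | b   (peels one symbol at a time)
    # Same strings, different parse trees: like a simulation that
    # reproduces behavior without reproducing internal structure.
    def g1(n):
        return "ab" if n == 1 else "a" + g1(n - 1) + "b"
    def g2(n):  # derives via T, giving a differently shaped tree
        return "a" + ("b" if n == 1 else g2(n - 1) + "b")
    assert all(g1(n) == g2(n) == "a" * n + "b" * n for n in range(1, 10))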
--alan@pdn
------------------------------
Date: 24 Oct 87 20:52:32 GMT
From: ihnp4!homxb!whuts!mtune!codas!usfvax2!pdn!alan@ucbvax.Berkeley.EDU (Alan Lovejoy)
Subject: Re: The Success of AI
In article <224@bernina.UUCP> srp@bernina.UUCP (Scott Presnell) writes:
/In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
/>Given a sufficiently powerful computer, I could, in theory, simulate
/>the human body and brain to any desired degree of accuracy.
/
/Horse shit. The problem is you don't even know exactly what you are
/simulating! ...
/For instance, dreams, are they logical?, do they fall in a pattern?, a computer
/has got to have them to be a real simulation of a body/mind, but you cannot
/simulate what you cannot accurately describe.
Simulated horse shit! I can write a simulator for the IBM-PC to run on
a Macintosh-II, without knowing or understanding all the IBM-PC programs
that will ever run on it. The same is in principle possible when the
machine being emulated is a human body.
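The force of the analogy is that an emulator implements the machine's
instruction set, not any particular program. A toy sketch of the idea;
the three-instruction machine here is invented purely for illustration:

    def emulate(program, x=0):
        # We implement the *machine*; we need know nothing about the
        # programs that will later be written for it.
        pc = 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "ADD":        # x := x + arg
                x += arg
            elif op == "MUL":      # x := x * arg
                x *= arg
            elif op == "JGZ":      # jump to instruction arg if x > 0
                if x > 0:
                    pc = arg
                    continue
            pc += 1
        return x

    # A program the emulator's author never saw still runs correctly:
    print(emulate([("ADD", 2), ("MUL", 10), ("ADD", 1)]))   # prints 21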
/Let's get down to a specific case:
/I propose that given any amount of computing power, you could not presently,
/and probably will never be able to simulate me: Scott R. Presnell.
/My wife can be the judge.
Which wife? The one being simulated by the computer as part of the
simulated environment in which you are being simulated? How would you
or she know which "world" you belonged to?
--alan@pdn
------------------------------
Date: 24 Oct 87 21:08:27 GMT
From: ihnp4!homxb!whuts!mtune!codas!usfvax2!pdn!alan@ucbvax.Berkeley.EDU (Alan Lovejoy)
Subject: Re: The Success of AI
In article <1993@gryphon.CTS.COM> tsmith@gryphon.CTS.COM (Tim Smith) writes:
/In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
/+=====
/| Given a sufficiently powerful computer, I could, in theory, simulate
/| the human body and brain to any desired degree of accuracy. This
/You might, for example, claim that with a
/very large number of computers, all just at the edge of the
/speed boundaries dictated by the laws of physics in the most
/advanced materials imaginable, you could simulate a human body
/and mind--but not in real time. But the simulation would have to
/be in real time, because humans live in real time, doing things
/that are critically time dependent (perceiving speech, for
/example).
You make the invalid assumption that "simulation" means that those of
us in the real universe cannot distinguish the simulated object or
process from the real thing. It is just as valid to deal with
simulations that enable one to make accurate predictions about what
would happen in the real world in some well-specified scenario, even
if the simulation doesn't look anything like what it simulates in the
physical sense. What matters is the logical equivalence or similarity
in an abstract reality.
/Similarly, humans think the way they do partially because of
/their size, because of the environment they live in, because of
/the speed at which they move, live, and think.
If the environment of an object is simulated in addition to the object
itself, one need merely synchronize the object with the simulated
environment as to speed, size, etc.
--alan@pdn
------------------------------
Date: 23 Oct 87 16:22:45 GMT
From: mcvax!ukc!its63b!hwcs!hci!gilbert@uunet.uu.net (Gilbert Cockton)
Subject: Re: The Success of AI
In article <193@PT.CS.CMU.EDU> spe@spice.cs.cmu.edu (Sean Engelson) writes:
>Given a sufficiently powerful computer, I could, in theory, simulate
>the human body and brain to any desired degree of accuracy. This
>gedanken-experiment
keinen Gedanken, mein Herr! (no thought at all, my good sir!)
In **WHICH THEORY**? Cut out this use of "theoretical" to mean "given
arbitrary fantasies". Theories have real substance, and you are
obliged to elaborate on the theory before alluding to it.
Given a sufficiently powerful computer, could I, in theory, get
everyone on the net to like my postings? Rhetorical of course, so spare
me any abusive replies :-). The point, again, is that I would have to
elaborate the theory and test it out to be sure. Furthermore, I could
not expect everyone to be convinced that, in the event of the highly
unlikely (impossible, I believe) universal acceptance of my postings,
my theory really was the explanation. In short, even if one dropped
fantasy for science, people in general are not going to be convinced.
> if I can simulate the body in a computer, the computer is a
> sufficiently powerful model of computation to model the mind.
Of course. Now simulate it. And of course, you won't be slowed down by reading
up on all the unanswered objections to the **belief** that computable formalisms
can model mind. In short, this is no contribution to the argument.
>we must also accept that a computer can have a mind, if only by the
>inefficient expedient of simulating a body containing a mind.
Ahem. Socialisation.
AI people rarely have a handle on this at all. I take it that your
computer simulation of the body is going to go down to the park with
you to see the ducks, go down to playgroup, start primary school and
work through to a degree, mixing all the time with a wide range of
people, reading books, watching TV and visiting interesting places?
Look, people are people because they interact as people with people.
Now, who's going to want to interact with your computer as if it were
a person?
Need I go on?
--
Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
JANET: gilbert@uk.ac.hw.hci ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
UUCP: ..{backbone}!mcvax!ukc!hwcs!hci!gilbert
------------------------------
Date: 23 Oct 87 13:13:59 GMT
From: mcvax!ukc!its63b!hwcs!hci!gilbert@uunet.uu.net (Gilbert
Cockton)
Subject: Re: The Success of AI
In article <1922@gryphon.CTS.COM> tsmith@gryphon.CTS.COM (Tim Smith) writes:
(the best posting on this issue I've seen)
>It wasn't until computers came along that there was a
>metaphor for the brain powerful enough to be taken seriously.
Hence the circularity in much of AI's appeal to cognitive psychology.
As the latter is now riddled with information processing concepts, the
impulsive observer will be quick to conclude from cog. psy. research
that cognition works like a computer. Wrong conclusion - many cognitive
psychologists talk about mind *as if it were* a computer. Likeness,
especially presumed likeness, is not the same as essence, assuming
noumenal objects exist, of course.
>There is no reason, in principle, that a very powerful
>digital computer cannot imitate a mind
Apologies for picking up on this, given the writer's (deleted)
qualification and probable sarcasm about arguments of this form. This
may appear perverse, but what on earth are these arguments of the form
"nothing in principle prevents"? They are used much by the "pure" AI
misanthropes, but I can never find any substance in such arguments.
Which principles? How can we argue from these principles to
possibility or impossibility? After all, is there anything of any genuine
interest to non-logicians which is logically impossible, rather than
semantically contradictory (a married bachelor, for example)?
Again, I pick this up because AI zealots reach for this argument all
the time, and it isn't an argument at all.
(PS - no flames on "misanthrope" or "zealot", one can be studying an
AI topic without losing one's humanism or one's sense of moderation.
I am only characterising those who are misanthropic zealots, a specialisation
and not a generalisation.)
>The success rate in AI research (as well as most of cognitive
>science) in the past 20 years is not very encouraging.
Despite all that taxpayers' money :-)
> A better concept of "mind" is what is needed now.
Well said. "Better" concepts related to mind than those found in cog. sci.
already exist. The starting point is the elaboration of the observable human
phenomena which we are attempting to unify within a study of mind. These
phenomena have been studied since the dawn of time. There are many
monumental works of scholarship which unify the phenomena grouped into
well-defined subfields. The only problem for AI workers surveying all
these masterpieces is that none of the authors are committed to
computational models. Indeed, they would no doubt laugh at anyone who
suggested that their work could be reduced to a Turing Machine compatible
notation.
> This is not to say that AI research should halt
But AI research could at least be disciplined to study the existing work
on the phenomena they seek to study. Exploratory, anarchic,
uninformed, self-indulgent research at public expense could be stopped.
(And not just in AI, although I've never seen such a lack of
discipline and scholarship anywhere else outside of popular history
and futurology, neither of which attract public funds.)
> or that computers are not useful in studying human
> intelligence. (They are indispensable.)
Yes (no). They have proved useful in many areas of study. They have
never been used at all in others, because they have not been able to
offer anything worthy of attention.
> For one example of this new way of thinking, see the recent book by the
> linguist George Lakoff, entitled "Women, Fire, and Dangerous Things."
Does he use computers?
>I believe the great success of AI has been in showing that
>the old dualistic separation of mind and body is totally
>inadequate to serve as a basis for an understanding of human intelligence.
How can you attribute the end of dualism to AI research? This is a
historical claim which should be backed up by references to
specific pieces of work in AI. I doubt that anything emerging from AI
(rather than from the disciplines of Cognitive Science) can claim that credit.
--
Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
JANET: gilbert@uk.ac.hw.hci ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk
UUCP: ..{backbone}!mcvax!ukc!hwcs!hci!gilbert
------------------------------
End of AIList Digest
********************