AIList Digest           Saturday, 24 Oct 1987     Volume 5 : Issue 244 

Today's Topics:
Business - Expert Systems Company Financing,
Neuromorphic Systems - Terminology & Textbook,
Philosophy - Flawed Human Minds

----------------------------------------------------------------------

Date: 18 Oct 87 17:52:45 GMT
From: trwrb!aero!venera.isi.edu!smoliar@ucbvax.Berkeley.EDU (Stephen Smoliar)
Subject: Re: Expert Systems Company Financing...


In the early eighteenth century a man of intense religious fervour named
Johann Ernst Elias Bessler claimed that God had revealed to him the secret
of the perpetual motion machine. He would tour villages in the costume of
a magician and offer demonstrations of his devices. Ultimately, he
attracted the attention of Count Karl von Hessen-Cassel, who undertook
to serve as a sponsor. At Hessen-Cassel's expense, Bessler built one
of these machines based on a wheel which was twelve feet in diameter.
Hessen-Cassel then invited many of the leading scientific minds of his
time to evaluate the project. In the course of this evaluation, the
machine apparently ran without stopping for 54 days. Ultimately, Bessler
was exposed as a fraud; and several scientific reputations were destroyed
as a consequence.

While the historical record of this affair is fragmented, there are several
rather interesting points which I would claim are at least remotely related
to the current discussion about similar sponsorship of artificial intelligence.

1. The evaluating scientists were not allowed to inspect the
inner workings of Bessler's machine. Bessler claimed they would
be blinded by the divine revelation (or words to that effect).
Hessen-Cassel apparently did see the inner workings and was
not blinded. Nevertheless, the evaluating committee agreed
to accept this constraint.

2. For all the time that Hessen-Cassel possessed this machine,
he never tried to do anything practical with it. Bessler's
previous demonstrations with smaller-scale machines always
climaxed with the machine being used to lift some impressive
weight. While Hessen-Cassel was in possession of a potentially
significant labor-saving device, he seemed content to keep it
locked in a room of his castle.

3. Bessler was never exposed on the grounds of any scientific
argument. Willem Jacob 's Gravesande published a "proof" of why
the machine worked, and the flaw in this proof was
subsequently published by Jacques de Crousaz. However,
Bessler was undone when a servant girl confessed that
she was powering the machine from an adjoining room.
This was later discovered to be a false testimony, but
Bessler was distraught by the affair. Before anyone had
a chance to inspect its interior, he destroyed the machine.

I do not intend to imply that artificial intelligence is like perpetual
motion, at least to the extent that it is a theoretical impossibility.
However, I am struck by certain behavioral parallels between past and
present. My personal opinion is that Bessler was probably an extremely
skilled "hacker" (in mechanics) for his time, with his personal confidence
reinforced by his religious convictions. He probably pulled off a pretty
good piece of work even if his mind was entirely "in the bits" (so to
speak) and largely ignorant of prevailing theory. What is pathetic,
however, is that those who were asked to evaluate him were willing to
play the game by his own rules. Indeed, there is some indication that
their opinions may have been slanted by the promise of sharing in the
monetary gain which Bessler's invention might yield. Also, there is
this depressing observation that the evaluation never involved putting
the machine to work; they were content to just let it run on in a
locked chamber.

Current "success stories" about artificial intelligence are not quite
as contrived as that of Bessler's machine running in a locked room for
54 days; but they come closer than I would feel is comfortable. To a
great extent, the "field testing" of "applied" expert systems often takes
place in rather constrained circumstances. A less polite way of putting
this might be to say that the definition of "success" is in danger of
being modified POST HOC to accommodate the capabilities of the system
being evaluated. Thus, I feel that all reports of such stories should
be viewed with appropriate scientific scepticism.

On the other hand, there is a positive side of this historical retrospective.
Had Hessen-Cassel actually put Bessler's machine to work, it might have
been of considerable benefit to him . . . even if it did not run forever.
In other words, a machine capable of dissipating its energy slowly enough
to run for a very long time, while not being a true perpetual motion
machine, would still be a useful tool. By concentrating on a theoretical
goal, rather than a practical one, Hessen-Cassel lost an opportunity to
exploit a potentially valuable resource. Similarly, sponsorship of
artificial intelligence should probably pay more heed to advancement
along specific pragmatic fronts and less to whether or not machines
which exhibit that behavior deserve to be called "intelligent." If
we recognize what we have for what it is, we may get more out of it
than we might think.

ACKNOWLEDGEMENT: I would like to thank Jim Engelhardt for the extensive
research he has performed regarding the story of Bessler. He is in the
process of incorporating his research into a play which he is calling
THE PERPETUAL MOTION MAN. His research has been quite thorough, and
his insights are noteworthy.

------------------------------

Date: Mon, 19 Oct 87 09:29 EST
From: "William E. Hamilton, Jr."
Subject: RE: AIList V5 #239 - Neuromorphic Terminology, AI Successes,

Of course the human mind is flawed. The proof is quite straightforward.

1. The Bible asserts that the mind of man is wicked (or evil, depending on
which translation you use).

2. Now, assuming you believe the Bible is true and that an evil mind is a
flawed mind (if you don't agree that an evil mind is flawed, you can
still find a collection of assertions about human behavior in the Bible
that, taken together, would indicate that the minds responsible for
such behavior are flawed), the assertion is proven.

3. But suppose you do not regard the Bible as true. Then the Bible is
flawed. However, the Bible has been on the world's best-seller
list for many years, and those who buy a flawed book must have flawed
minds. Therefore, there are millions of flawed minds out there.
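
For the record, the argument's form is valid whatever one thinks of its
premises: it is an ordinary proof by cases, (B -> F) and (not-B -> F),
hence F. Here is a tiny Python sketch (my own illustration, not part of
the original argument; the function and its branches are invented to
mirror the joke) checking that both branches deliver the conclusion:

    # The argument has the form (B -> F) and (not B -> F), hence F.
    # Enumerate both truth values of B and check F holds either way.
    def flawed_minds_exist(bible_true):
        if bible_true:
            return True   # step 2: the mind of man is wicked, hence flawed
        else:
            return True   # step 3: millions buy a flawed book, hence flawed minds

    for b in (True, False):
        assert flawed_minds_exist(b)
    print("F follows in every case; the (tongue-in-cheek) proof goes through.")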


Bill Hamilton
GM Research Labs

------------------------------

Date: 19 Oct 87 07:27:00 GMT
From: uxc.cso.uiuc.edu!osiris.cso.uiuc.edu!goldfain@a.cs.uiuc.edu
Subject: Re: Neural Networks - Pointers to good texts?


I agree with the respondent whose user-name was listed as "smoliar" that this
haggling about earliest references is "silly". In fact, I don't understand
the need for any territorial fight over terminology here.

Do physiologists actually use the two-word term "neural network" in their
literature? "Neuron" and "neural tissue", surely, but do they actually use
"neural network"? If not, then there is no ambiguity. Sure, there is some
danger of confusion, but no more than I think is usual in cases of "learned
borrowing". The term "neural network" as used by "connectionist/AI"
researchers caught on precisely because this model of computation is based on
the gross behavior of real, mammalian-brain neurons. It can be viewed in some
ways as a study of the human brain itself. Thus it is no greater an abuse of
terminology than, for example, "pipeline computers".
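
To show just how gross that borrowed behavior is, here is a minimal sketch
(my own illustration; the weights and threshold are invented, not taken
from any paper cited here) of the kind of threshold unit these models of
computation rest on, in the spirit of McCulloch and Pitts:

    # A minimal threshold unit: the "gross behavior" borrowed from real
    # neurons is just "sum the weighted inputs, fire if the total clears
    # a threshold". The weights and threshold below are illustrative.
    def fires(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # A unit wired to compute logical AND of two binary inputs:
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, fires([a, b], [1.0, 1.0], 2.0))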

On the other hand, whatever became of the term "cybernetics" that Norbert
Wiener coined long ago? I thought its definition was quite suitable for
denoting this research. I doubt that "connectionist" is much help, in view of
the fact that the "connection machine" is more a project in pure parallelism
than intended as a neural model.

If I am wrong about any of this, please enlighten me.

------------------------------

Date: 21 Oct 87 15:49:02 GMT
From: ssc-vax!dickey@beaver.cs.washington.edu (Frederick J Dickey)
Subject: Re: Neural Networks - Pointers to good texts?


From postnews Wed Oct 21 07:49:49 1987
> >interpretation, let me make the following observation. In 1943, McCulloch
> >and Pitts published a paper entitled "A logical calculus of the ideas
> >immanent in neural nets". Minsky and Papert (Perceptrons) state that this
>
> Well . . . this is all rather silly. The PUBLISHED title of the classic
> paper by McCulloch and Pitts is "A Logical Calculus of the Ideas Immanent
> in Nervous Activity." They NEVER use "neural net" as a technical term

Well, this is very interesting. When I read Calvin's original
posting I was struck by the claim that neural nets had been studied for
25 years. This surely seemed too small a figure to me. To check this
out without initiating a major research project, I grabbed a copy of
Minsky and Papert's "Perceptrons", which happened to be on my desk at the
time, and opened to the bibliography. M&P give the title of the McCulloch and
Pitts paper as "A logical calculus of the ideas immanent in neural nets".
I'm looking at it right now and that's what it says. Apparently, the citation
is wrong. Well, I stand corrected.

I might comment by the way that regardless of the merits of Calvin's claim
that artificial neural nets ought to be named something else, I think the
effort is doomed to failure. The reason is that we seem to have become
an excessively marketing-oriented society. Labels are attached to things to
stimulate desired responses in "consumers," not to clarify critical
distinctions. The practical problem one faces in much of the industrial world
is attempting to gain support for one's "research." To do this, one presents
one's proposed program to a manager, i.e., one markets it. The noise level
in this process is so high that nothing less than hype makes it through.
My experience with managers leads me to believe that they may have heard
of neural nets. If I tried to start a "neuroid" project, they would say
"Isn't that the same thing as a neural net?" I can guarantee you that they
aren't interested in the distinctions between artificial and biological nets.
How can an aerospace company make a profit from biological nets? In other
words, to start an artificial neural net project, I have to call it a neural
net, show how it applies to some product, how it adds value to the product
(neural nets will make the product more powerful than a locomotive, faster
than a speeding bullet, and able to leap over tall buildings at a single
bound), and how all this can be done by next year at a half man-year level
of effort.

If I lived in an ivory tower (a not unpleasant domicile), I'd say that
Calvin is right on. Out here in the cinder block towers, he's out to lunch.
To summarize, I'm sympathetic to his viewpoint, but my sympathy isn't going
to make much difference.

------------------------------

Date: 23 Oct 87 06:50:34 GMT
From: well!wcalvin@LLL-LCC.ARPA (William Calvin)
Subject: Why "neural nets" is a bad name


I admit that "nerve nets" and the variant "neural networks" are catchy
titles; we neurobiologists have used the terms quite a lot, though mostly
informally as in the annual meeting called the "Western Nerve Net". Each real
neural network tends to become its own subject name, as in "stomatogastric"
and "retina", with papers on properties that transcend particular anatomies
incorporated into sessions called "theoretical neurobiology" or some such (I'm
on the editorial board of the JOURNAL OF THEORETICAL NEUROBIOLOGY, often
concerned with networks).
A quarter-century ago was the era of the Perceptron, the first of the
network learning models. Various people were simulating network properties
using neuron-like "cells" and known anatomy; when I was a physics undergrad in
1959, I did an honors thesis on simulating the mammalian retina (using anatomy
based only on light microscopy, using physiology of neurons borrowed from cat
spinal motor neurons, using sensory principles borrowed from the horseshoe crab! A
far cry from the CRAY-1 simulations these days using modern retinal
neurobiology). And if you think that your simulations run slow: I did
overnight runs on an IBM 650, which had to fetch each instruction from a
rotating drum because it lacked core memory.
Now this was also the era when journalists called any digital computer a
"brain" -- and I've pointed out that calling any pseudo-neural network a
"neural network" is just as flaky as that 60s journalistic hype. Now brain
researchers were not seriously inconvenienced by the journalistic hype -- but
I think that blurring the lines is a bad idea now. Why?
Real neural networks will soon be a small part of a burgeoning field
which will have real applications, even consumer products. To identify those
with real brain research may seem innocuous to you now because of the frequent
overlap at present between pseudo-neural networks and simulations of real
neural circuitry. But these distributed networks of pseudo-neurons are going
to quickly develop a life of their own with an even more tenuous connection to
neuroscience. They need their own name, because borrowing is getting a bad
name. Let me briefly digress.
We are already seeing a lot of hype based on a truly nonexistent
connection to real neuroscience, such as those idiot "Half Brain or Whole
Brain" ads in the Wall Street Journal and New York Times, where "John-David,
Ph.D." describes himself as one of the "world's most recognized
neuroscientists" recently "recognized as a Fellow by the International
Institute of Neuroscience" (Nope, I've never heard of it either, and I was
a founding member of the Society for Neuroscience back in 1970). See James
Gorman's treatment in DISCOVER 11/87 p38. Is this just feel-good flotation-
tank pseudo-psychology dressed up to look like hard science, another scheme to
part half-brained fools from their money?
Scientists are going to start to get touchy about consumer products
borrowing an inferred plug from real science, just as the FDA has gotten
touchy about white coats in Carter's Little Liver Pills advertisements
attempting to convey medical approval. And you can bet that, if pseudo-neural
nets become as successful as I think they will, some advertising genius will
try to pass off a nonfunctional product as a neural network "resonating with
your brain", try to get some of that aura of real science and technology to
rub off on the sham. Do you really want your field trapped in the middle of
an FDA/FTC battle with the sham exploiters because it initiated the borrowing?
Borrowing a name for a technology from a basic science is not traditional:
civil engineers do not call themselves "physicists".
We neurobiologists are always having to distinguish the theoretical
possibilities, such as retrograde transport setting synaptic strengths, from
reality. Those theoretical possibilities may, of course, be excellent
shortcuts that Darwinian evolution never discovered. And so we'll see
distinctions having to be drawn: "backpropagation works in pseudo-neural
nets, but hasn't been seen so far in real neural nets." If you call the
technology by the same name as the basic science, you start confusing
students, journalists, and even experienced scientists trying to break into
the field -- just try reading that last quote with "pseudo" and "real" left
out.

William H. Calvin
University of Washington NJ-15
Seattle WA 98195
wcalvin@well.uucp wcalvin@uwalocke.bitnet
206/328-1192 206/543-1648

------------------------------

Date: 18 Oct 87 23:37:35 GMT
From: ctnews!pyramid!prls!philabs!gcm!dc@unix.sri.com (Dave Caswell)
Subject: Re: Flawed human minds

>> Factually, we know the mind is flawed because we observe that it does
>> not do what we expect of it.
>
Factually, the mind knows the mind is flawed because the mind observes the
mind not doing what the mind expects the mind to do.

------------------------------

Date: 20 Oct 87 17:43:26 GMT
From: ucsdhub!hp-sdd!ncr-sd!ncrlnk!ncrday!seradg!bryan@sdcsvax.ucsd.edu (Bryan Klopfenstein)
Subject: Re: Flawed human minds

In article <359@white.gcm> dc@white.UUCP (Dave Caswell) writes:
>>> Factually, we know the mind is flawed because we observe that it does
>>> not do what we expect of it.
>>
>Factually, the mind knows the mind is flawed because the mind observes the
>mind not doing what the mind expects the mind to do.

So, is the mind flawed because it expects the wrong thing, or is the mind
flawed because it observes incorrectly, or is the mind flawed because it does
not live up to its expectations? Or is this a ridiculous question and a flawed
mind does not have the capability to evaluate itself, thus making it unable to
determine whether or not it really is flawed?

--
Bryan Klopfenstein CSNET bryan@seradg.Dayton.NCR.COM
NCR Corporation ARPA bryan%seradg.Dayton.NCR.COM@relay.cs.net
VOICE (513) 865-8080
-- Standard Disclaimer Applies --

------------------------------

Date: 20 Oct 87 15:43:54 GMT
From: ihnp4!homxb!genesis!odyssey!gls@ucbvax.Berkeley.EDU (g.l.sicherman)
Subject: Re: The Job Hunt

> Do we need a definition of anger? Anger, as I understand it, is an
> emotion that catalyzes physical actions but interferes with reason.
> I agree that Mr. X may rationalize his action, but I don't believe
> it was his best choice. ...
>
> ... I thought what we all needed was a little humility. If
> Col. G. L. Sicherman thinks either that he is perfect, or that I am
> perfect, I disagree. Tentatively.

If you go telling people what you think they all need, we may decide
that you're not very humble!

Arguing over whether people are "perfect" or "flawed" is like arguing
whether Eugene the Jeep is a rodent or a marsupial. Perfect for *what?*

And I agree that we need a definition of anger. "Catalyzes physical
actions?" The anger *produces* the actions. If you had no emotions,
you would never act.

> ... I believe that at least some emotional responses are
> maladaptive and would not exist in a perfect intelligence, while he
> apparently believes the human mind is perfect and cannot be improved
> upon.

Again, perfect for what? It sounds as if you regard emotions as a
part of intelligence. We don't agree on the basics yet.


"This rock, for instance, has an I.Q. of zero. Ouch!"
"What's the matter, Professor?"
"It bit me!"
--
Col. G. L. Sicherman
...!ihnp4!odyssey!gls

------------------------------

Date: Tue, 20 Oct 87 16:21:42 PDT
From: larry@VLSI.JPL.NASA.GOV
Subject: Flawed/Flawless

FLAWED/FLAWLESS: I can argue on both sides.

FLAWLESS: Quality can only be judged against some standard. Every person
has (perhaps only slightly) a different value system, as may even the same
person at different times. So what is a flaw to one may be a "feature" to
another. (Examples: a follower of Kali may consider torture, death, and
corruption holy; a worshipper of the Earth Mother may consider monogamy a
sin and infertility a crime.) The only objective standard is survival of
the largest number of one's species for the longest time, and even this
"standard" is hopelessly flawed(!) by human subjectivity.

FLAWED: Nevertheless, humans DO have standards that are essential to our
survival and happiness, and not only for made objects like automobiles.
We want lovers who have compatible timing (social, sexual), sensitivity (at
least enough not to hog the blankets or too obviously eye the competition),
enough intelligence (so we can laugh at the same jokes) but not too much
(winning an occasional argument is necessary to our self-esteem), etc.
Notice that TOO MUCH intelligence may be considered as bad a flaw as too
little.

And more FLAWLESS: From an evolutionary standpoint, what is a "virtue" in
one milieu may become deadly when the environment changes. Performing some
mental activity reliably may be of little use when chaos sweeps through our
lifeways. THEN divergent thinking--or even simple error--may be more likely
to solve problems. A perfect memory (popularly thought to accompany great
intelligence) can be a liability, holding one rigidly to standards or
knowledge no longer valid. It is also the enemy of abstract/general
thought, which depends on forgetting (or ignoring) inessentials. (Indeed,
differential forgetting may be one of those great ignored areas of fruitful
research.)

AI: What does all this have to do with artificial intelligence? Maybe
nothing, but I'll invent something. Say ... the relationship of emotions to
intelligence. First, sensations of pain and pleasure drive thought, in the
sense that they establish values for thinking and striving to achieve or
avoid some event or condition. Sensation triggers emotion, which in turn
triggers sensations that may act as second-level motivators. Emotions also may
trigger subsystems for readiness (or inhibition) of action. (Example:
hunger depletes blood sugar, triggering anger when a certain level of stress
is reached, which releases adrenalin which energizes the body. Anger also
may cause partly or fully random action, which is statistically better than
apathy for killing or harvesting something to eat.)
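
To make that layering concrete, here is a toy sketch (entirely my own
invention; the names, numbers, and thresholds are made up for illustration
and claim nothing about real physiology) of the two-level scheme just
described, following the hunger example:

    # A toy two-level motivator: a depleted resource (sensation) builds
    # stress, stress past a threshold triggers an emotion (anger), and
    # the emotion arms an action subsystem. All values are invented.
    def anger_level(blood_sugar, stress_threshold=0.4):
        stress = max(0.0, 1.0 - blood_sugar)   # low sugar -> high stress
        return stress if stress >= stress_threshold else 0.0

    def readiness(anger):
        adrenalin = anger * 0.8                # arbitrary gain
        return "act" if adrenalin > 0.3 else "rest"

    for sugar in (0.9, 0.5, 0.1):
        anger = anger_level(sugar)
        print("blood sugar", sugar, "-> anger", anger, "->", readiness(anger))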

At least that's the more traditional outlook on emotions--though that
outlook may have changed in the ten years or so since I did much reading in
psychology. Even if true, however, the above outlook doesn't establish a
necessary link of emotion with artificial thought; humans can supply the
goals and values for a Mars Rover, and an activate command can trigger emergency
energy reserves. Some other more intimate association of emotion with
thought is needed.

Perhaps emotions affect thought in beneficial ways, say improving the
thinking mechanism itself. (The notion that emotion impedes correct thought is
too conventional and (worse) obvious to be interesting.) Or maybe emotion
IS thought in some way. It is, after all, only a conclusion based on
incomplete brain evidence that thought is electrical in nature. Suppose the
electrical action of the brain is secondary, that the biochemical action of
the brain is the primary mechanism of thought. This might square with the
observation that decision often happens in the subconscious, very rapidly,
and integrates several (or even dozens of) conflicting motives into a vector
sum. In other words, an analog computer may be a better model for human
thought than a digital one. (In the nature of things that answer is likely
too simple. Most likely, I'd guess, the brain is a hybrid of the two.)
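
As one concrete reading of that "vector sum" remark, here is a small sketch
(my own illustration; the motives, weights, and scores are all invented) in
which each motive scores the candidate actions and the decision is simply
the action with the largest weighted sum:

    # Decision as a vector sum of conflicting motives. Each motive has a
    # weight and a score for each candidate action; the "decision" is the
    # action whose summed score is largest. All values are invented.
    actions = ["eat", "work", "sleep"]
    motives = {
        "hunger":   (0.7, [0.9, 0.1, 0.0]),
        "ambition": (0.5, [0.0, 0.8, 0.1]),
        "fatigue":  (0.3, [0.1, 0.0, 0.9]),
    }

    totals = [0.0] * len(actions)
    for weight, scores in motives.values():
        for i, s in enumerate(scores):
            totals[i] += weight * s

    best = max(range(len(actions)), key=lambda i: totals[i])
    print(dict(zip(actions, totals)), "->", actions[best])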

Larry @ jpl-vlsi

------------------------------

End of AIList Digest
********************
