AIList Digest           Wednesday, 9 Nov 1983      Volume 1 : Issue 94 

Today's Topics:
Metaphysics - Functionalism vs Dualism,
Ethics - Implications of Consciousness,
Alert - Turing Biography,
Theory - Parallel vs. Sequential & Ultimate Speed,
Intelligence - Operational Definitions
----------------------------------------------------------------------

Date: Mon 7 Nov 83 18:30:07-PST
From: WYLAND@SRI-KL.ARPA
Subject: Functionalism vs Dualism in consciousness

The argument of functionalism versus dualism is
unresolvable because the models are based on different,
complementary paradigms:

* The functionalist model is based on the reductionist
approach, the approach of modern science, which explains
phenomena by logically relating them to controlled,
repeatable, publicly verifiable experiments. Explanations
of falling bodies and chemical reactions are
in this category.

* The dualist model is based on the miraculous approach,
which explains phenomena as singular events, which are by
definition not controlled, not repeatable, not verifiable,
and not public - i.e., the events are observed by a specific
individual or group. The existence of UFOs, parapsychology,
and the existence of externalized consciousness (i.e., the soul)
are in this category.

These two paradigms are the basis of the argument of
Science versus Religion, and are not resolvable EITHER WAY. The
reductionist model, based on the philosophy of Parmenides and
others, assumes a constant, unchanging universe which we discover
through observation. Such a universe is, by definition,
repeatable and totally predictable: the concept that we could
know the total future if we knew the position and velocity of all
particles derives from this. The success of Science at
predicting the future is used as an argument for this paradigm.

The miraculous model assumes the reality of change, as
put forth by Heraclitus and others. It allows reality to be
changed by outside forces, which may or may not be knowable
and/or predictable. Changes caused by outside forces are, by
definition, singular events not caused by the normal chains of
causality. Our personal consciousness and (by extension,
perhaps) the existence of life in the universe are singular
events (as far as we know), and the basic axioms of any
reductionist model of the universe are, by definition,
unexplainable because they must come from outside the system.

The argument of functionalism versus dualism is not
resolvable in a final sense, but there are some working rules we
can use after considering both paradigms. Any definition of
intelligence, consciousness (as opposed to Consciousness), etc.
has to be based on the reductionist model: it is the only way we
can explain things in such a manner that we can predict results
and prove theories. On the other hand, the concept that all
sources of consciousness are mechanical is a religious position: a
categorical assumption about reality. It was not that long ago
that science said that stones do not fall from the sky; all
it would take to make UFOs accepted as fact would be for one to
land and set up shop as a merchant dealing in rugs and spices
from Aldebaran and Vega.

------------------------------

Date: Tuesday, 8 November 1983 14:24:55 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Ethics and Definitions of Consciousness

Actually, I believe you'll find that slavery has existed both with
and without the belief that the slave had a soul. In many ancient
societies slaves were of exactly the same stock as their owners; they
had just run into serious economic difficulties. As I recall, slavery
of blacks in
the U.S. wasn't justified by their not having souls, but by claiming they
were better off (or similar drivel). The fact that denying other people had
souls was used at some time to justify it doesn't bother me, since all kinds
of other rationalizations have been used.

Now we are approaching the time when we will have intelligent
mechanical slaves. Are you advocating that it should be illegal to own
robots that can pass the Turing (or other similar) test? I think that a
very important thing to consider is that we can probably make a robot really
enjoy being a slave, by setting up the appropriate top-level goals. Should
this be illegal? I think not. Suppose we reach the point where we can
alter fetuses (see "Brave New World" by Aldous Huxley) to the point where
they *really* enjoy being slaves to whoever buys them. Should this be
illegal? I think so. What if we build fetuses from scratch? Harder to
say, but I suspect this should be illegal.

The most conservative (small "c") approach to the problem is to
grant human rights to anything that *might* qualify as intelligent. I think
this would be a mistake, unless you allow biological organisms a distinction
as outlined above. The next most conservative approach seems to me to leave
the situation where it is today: if it is physically an independent human
life, it has legal rights.

------------------------------

Date: 8 Nov 1983 09:26-EST
From: Jon.Webb@CMU-CS-IUS.ARPA
Subject: parallel vs. sequential

Parallel and sequential machines are not equivalent, even in abstract
models. For example, an abstract parallel machine can generate truly
random numbers by starting two processes at the same time, which are
identical except that one sends the main processor a "0" and the other
sends a "1". The main processor accepts the first number it receives.
A Turing machine can generate only pseudo-random numbers.
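
The race Webb describes can be sketched directly. Below is a minimal
illustration (in Python, purely for concreteness): the "main processor"
accepts whichever of two otherwise-identical workers reports first.
The winner is decided by the operating-system scheduler rather than by
any rule inside the program, which is the point of the thought
experiment:

```python
import threading
import queue

def race_bit():
    """Return the first bit delivered by two racing workers.

    Two identical workers are started "at the same time"; one sends
    the main processor a 0, the other a 1.  The main processor
    accepts whichever message arrives first, so the result depends
    on scheduling, not on any computable rule in the program.
    """
    channel = queue.Queue()
    workers = [
        threading.Thread(target=channel.put, args=(0,)),
        threading.Thread(target=channel.put, args=(1,)),
    ]
    for w in workers:
        w.start()
    bit = channel.get()      # first message wins the race
    for w in workers:        # let the loser finish cleanly
        w.join()
    return bit

print(race_bit())            # 0 or 1, decided by the scheduler
```

Whether the outcome is "truly" random on real hardware is exactly the
question at issue; the sketch only makes the race concrete.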

However, I do not believe a parallel machine is more powerful (in the
formal sense) than a Turing machine with a true random-number
generator. I don't know of a proof of this, but it sounds like
something on which work has been done.

Jon

------------------------------

Date: Tuesday, 8-Nov-83 18:33:07-GMT
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Ultimate limit on computing speed

--------
There was a short letter about this in CACM about 6 or 7 years ago.
I haven't got the reference, but the argument goes something like this.

1. In order to compute, you need a device with at least two states
that can change from one state to another.
2. Information theory (or quantum mechanics or something, I don't
remember which) shows that any state change must be accompanied
by a transfer of at least so much energy (a definite figure was
given).
3. Energy contributes to the stress-energy tensor just like mass and
momentum, so the device must be at least so big or it will undergo
gravitational collapse (again, a definite figure).
4. It takes light so long to cross the diameter of the device, and
this is the shortest possible delay before we can definitely say
that the device is in its new state.
5. Therefore any physically realisable device (assuming the validity
of general relativity, quantum mechanics, information theory ...)
cannot switch faster than (again a definite figure). I think the
final figure was 10^-43 seconds, but it's been a long time since
I read the letter.
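
For what it's worth, the figure quoted is on the order of the Planck
time, sqrt(hbar*G/c^5) ~ 5.4 x 10^-44 seconds, which combines exactly
the ingredients of the argument: the quantum of action, gravitation,
and the speed of light. A quick check of the arithmetic, using
present-day SI values for the constants:

```python
import math

# Physical constants in SI units (CODATA recommended values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newtonian gravitational constant, m^3/(kg*s^2)
c    = 2.99792458e8      # speed of light, m/s

# Planck time: the timescale obtained by combining quantum mechanics
# (hbar), gravitation (G), and relativity (c), as in steps 1-4 above.
t_planck = math.sqrt(hbar * G / c**5)

print(f"t_planck = {t_planck:.2e} s")   # ~5.4e-44 s, within a factor
                                        # of ten of the 10^-43 quoted
```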


I have found the discussion of "what is intelligence" boring,
confused, and unhelpful. If people feel unhappy working in AI because
we don't have an agreed definition of the I part (come to that, do we
*really* have an agreed definition of the A part either? if we come
across a planet inhabited by metallic creatures with CMOS brains that
were produced by natural processes, should their study belong to AI
or xenobiology, and does it matter?), why not just change the name of
the field, say to "Epistemics And Robotics"? I don't give a tinker's
curse whether AI ever produces "intelligent" machines; there are tasks
that I would like to see computers doing in the service of humanity
that require the representation and appropriate deployment of large
amounts of knowledge. I would be just as happy calling this AI, MI,
or EAR.

I think some of the contributors to this group are suffering from
physics envy, and don't realise what an operational definition is. It
is a definition which tells you how to MEASURE something. Thus length
is operationally defined by saying "do such and such. Now, length is
the thing that you just measured." Of course there are problems here:
no amount of operational definition will justify any connection between
"length-measured-by-this-foot-rule-six-years-ago" and "length-measured-
by-laser-interferometer-yesterday". The basic irrelevance is that
an operational definition of, say, light (what your light meter measures)
doesn't tell you one little thing about how to MAKE some light. If we
had an operational definition of intelligence (in fact we have quite a
few, and like all operational definitions, nothing to connect them) there
is no reason to expect that to help us MAKE something intelligent.

------------------------------

Date: 7 Nov 83 20:50:48 PST (Monday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: Turing biography

Finally, there is a major biography of Alan Turing!

Alan Turing: The Enigma
by Andrew Hodges
$22.50 Simon & Schuster
ISBN 0-671-49207-1

The timing is right: His war-time work on the Enigma has now been
de-classified. His rather open homosexuality can be discussed in other
than damning terms these days. His mother passed away in 1976. (She
maintained that his death in 1954 was not suicide, but an accident, and
she never mentioned his sexuality nor his 1952 arrest.) And, of course,
the popular press is full of stories on AI, and they always bring up the
Turing Test.

The book is 529 pages, plus photographs, some diagrams, an author's note
and extensive bibliographic footnotes.

Doug Hofstadter's review of the book will appear in the New York Times
Book Review on November 13.

--Rodney Hoffman

------------------------------

Date: Mon, 7 Nov 83 15:40:46 CST
From: Robert.S.Kelley <kelleyr.rice@Rand-Relay>
Subject: Operational definitions of intelligence

p.s. I can't imagine that psychology has no operational definition of
intelligence (in fact, what is it?). So, if worst comes to worst, AI
can just borrow psychology's definition and improve on it.

Probably the most generally accepted definition of intelligence in
psychology comes from Abraham Maslow's remark (here paraphrased) that
"Intelligence is that quality which best distinguishes such persons as
Albert Einstein and Marie Curie from the inhabitants of a home for the
mentally retarded." A poorer definition is that intelligence is what
IQ tests measure. In fact, psychologists have sought a more precise
definition of intelligence (or even of learning) for over 100 years,
without success.
Rusty Kelley
(kelleyr.rice@RAND-RELAY)

------------------------------

Date: 7 Nov 83 10:17:05-PST (Mon)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: Inscrutable Intelligence
Article-I.D.: ecsvax.1488

I sympathize with the longing for an "operational definition" of
'intelligence'--especially since you've got to write *something* on
grant applications to justify all those hardware costs. (That's not a
problem we philosophers have. Sigh!) But I don't see any reason to
suppose that you're ever going to *get* one, nor, in the end, that you
really *need* one.

You're probably not going to get one because "intelligence" is
one of those "open-textured", "clustery" kinds of notions. That is,
we know it when we see it (most of the time), but there are no necessary and
sufficient conditions that one can give in advance which instances of it
must satisfy. (This isn't an uncommon phenomenon. As my colleague Paul Ziff
once pointed out, when we say "A cheetah can outrun a man", we can recognize
that races between men and *lame* cheetahs, *hobbled* cheetahs, *three-legged*
cheetahs, cheetahs *running on ice*, etc. don't count as counterexamples to the
claim even if the man wins--when such cases are brought up. But we can't give
an exhaustive list of spurious counterexamples *in advance*.)

Why not rest content with saying that the object of the game is to get
computers to be able to do some of the things that *we* can do--e.g.,
recognize patterns, get a high score on the Miller Analogies Test,
carry on an interesting conversation? What one would like to say, I
know, is "do some of the things we do *the way we do them*"--but the
problem there is that we have no very good idea *how* we do them. Maybe
if we can get a computer to do some of them, we'll get some ideas about
us--although I'm skeptical about that, too.

--Jay Rosenberg (ecsvax!unbent)

------------------------------

Date: Tue, 8 Nov 83 09:37:00 EST
From: ihnp4!houxa!rem@UCLA-LOCUS


THE MUELLER MEASURE

If an AI could be built to answer all questions we ask it to assure us
that it is ideally human (the Turing Test), it ought to
be smart enough to figure out questions to ask itself
that would prove that it is indeed artificial. Put another
way: If an AI could make humans think it is smarter than
a human by answering all questions posed to it in a
Turing-like manner, it still is dumber than a human because
it could not ask questions of a human to make us answer
the questions so that it satisfies its desire for us to
make it think we are more artificial than it is. Again:
If we build an AI so smart it can fool other people
by answering all questions in the Turing fashion, can
we build a computer, anti-Turing-like, that could make
us answer questions to fool other machines
into believing we are artificial?

Robert E. Mueller, Bell Labs, Holmdel, New Jersey

houxa!rem

------------------------------

Date: 9 November 1983 03:41 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life

. . .
I know the entity at the other end of the line is not a computer
(because they recognize my voice -- someone correct me if this is not a
good test) but we might ask: how good would a computer program have to
be to fool someone into thinking that it is human, in this limited case?

[There is a system, in use, that can recognize affirmative and negative
replies to its questions.
. . . -- KIL]

No, I always test these callers by interrupting to ask them questions,
by restating what they said to me, and by avoiding "yes/no" responses.

It appears to me that the extremely limited domain, and the utter lack of
expertise which people expect from the caller, would make it very easy to
simulate a real person. Does the fact of a limited domain "disguise"
the intelligence of the caller, or does it imply that intelligence means
a lot less in a limited domain?

-- Steve

------------------------------

End of AIList Digest
********************
