AIList Digest            Saturday, 5 Nov 1983      Volume 1 : Issue 90 

Today's Topics:
Intelligence,
Looping Problem
----------------------------------------------------------------------

Date: Thu, 3 Nov 1983 23:46 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence


> One potential reason to make a more precise "definition" of
> intelligence is that such a definition might actually be useful
> in making a program intelligent. If we could say "do that" to a
> program while pointing to the definition, and if it "did that",
> we would have an intelligent program. But I am far too
> optimistic.

I think so. You keep repeating how good it would be to have a good
definition of intelligence and I keep saying it would be as useless as
the biologists' search for the definition of "life". Evidently
we're talking past each other so it's time to quit.

Last word: my reason for making the argument was that I have seen
absolutely no shred of good ideas in this forum, apparently because of
this definitional orientation. I admit the possibility that some
good mathematical insight could emerge from such discussions. But
I am personally sure it won't, in this particular area.

------------------------------

Date: Friday, 4 November 1983, 01:17-EST
From: jcma@MIT-MC
Subject: Inscrutable Intelligence

[Reply to Minsky.]


BOTTOM LINE: Have you heard of OPERATIONAL DEFINITIONS?

You are correct in pointing out that we need not have the ultimate definition
of intelligence. But it certainly seems useful, for the practical purpose of
investigating the phenomenon of intelligence (whether natural or artificial),
to have at least an initial approximation, an operational definition.

Some people (e.g., Winston) have proposed "people-like behavior" as their
operational definition of intelligence. Perhaps you can suggest an
incremental improvement over that rather vague definition.

If artificial intelligence can't come up with an operational definition of
intelligence, no matter how crude, it tends to undermine the credibility of the
discipline and encourage the view that AI researchers are flakey. Moreover,
it makes it very difficult to determine the degree to which a program exhibits
"intelligence."

If you were being asked to spend $millions on a field of inquiry, wouldn't you
find it strange (bordering on absurd) that the principal proponents couldn't
render an operational definition of the object of investigation?

p.s. I can't imagine that psychology has no operational definition of
intelligence (in fact, what is it?). So, if worst comes to worst, AI can just
borrow psychology's definition and improve on it.

------------------------------

Date: Fri, 4 Nov 1983 09:57 EST
From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>
Subject: Inscrutable Intelligence

There's a wonderful quote from Wittgenstein that goes something like:

One of the most fundamental sources of philosophical bewilderment is to have
a substantive but be unable to find the thing that corresponds to it.

Perhaps the conclusion from all this is that AI is an unfortunate name for the
enterprise, since no clear definitions for I are available. That shouldn't
make it seem any less flakey than, say, "operations research" or "management
science" or "industrial engineering," etc. People outside a research area
care little what it is called; what it has done and is likely to do is
paramount.

Trying to find the ultimate definition for field-naming terms is a wonderful,
stimulating philosophical enterprise. However, one can make an empirical
argument that this activity has little impact on technical progress.

------------------------------

Date: 4 Nov 1983 8:01-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest V1 #89

This discussion of intelligence is starting to get very boring.
I think if you want a theoretical basis, you are going to have to
forget about defining intelligence and work at a higher level. Perhaps
finding schemes to represent intelligence would be a more productive
line of pursuit. Such schemes exist. As far as I can tell, the people
in this discussion have either scorned them or have never seen them.
Perhaps you should go to the library for a while and look at what all
the great philosophers have said about the nature of intelligence,
rather than rehashing all of their arguments in a light and incomplete
manner.
Fred

------------------------------

Date: 3 Nov 83 0:46:16-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: hp-pcd.2284


No, no, no. I understood the point as meaning that the faster intelligence
is merely MORE intelligent than the slower intelligence. Who's to say that
an amoeba is not intelligent? It might be. But we certainly can agree that
most of us are more intelligent than an amoeba, probably because we are
"faster" and can react more quickly to our environment. And some super-fast
intelligent machine coming along does NOT make us UNintelligent; it just
makes the machine more intelligent than we are. (According to the previous
view that faster = more intelligent, which I don't necessarily subscribe to.)

Marion Hakanson {hp-pcd,teklabs}!orstcs!hakanson (Usenet)
hakanson@{oregon-state,orstcs} (CSnet)

------------------------------

Date: 31 Oct 83 13:18:58-PST (Mon)
From: decvax!duke!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: re: transcendental recursion [& reply]
Article-I.D.: ecsvax.1457

i'm also new on this net, but this item seemed like
a good one to get my feet wet with.
if we're going to pursue the topic of consciousness
vs intelligence, i think it's important not to get
confused about consciousness vs *self*-consciousness at
the beginning. there's a perfectly clear sense in which
any *sentient* being is "conscious"--i.e., conscious *of*
changes in its environment. but i have yet to see any
good reason for supposing that cats, rats, bats, etc.
are *self*-conscious, e.g., conscious of their own
states of consciousness. "introspective" or "self-monitoring"
capacity goes along with self-consciousness,
but i see no particular reason to suppose that it has
anything special to do with *consciousness* per se.
as long as i'm sticking my neck out, let me throw
in a cautionary note about confusing intelligence and
adaptability. cockroaches are as adaptable as all get
out, but not terribly intelligent; and we all know some
very intelligent folks who can't adapt to novelties at
all.
--jay rosenberg (ecsvax!unbent)

[I can't go along with the cockroach claim. They are a
successful species, but probably haven't changed much in
millions of years. Individual cockroaches are elusive,
but can they solve mazes or learn tricks? As for the
"intelligent folks": I previously stated my preference
for power tests over timed aptitude tests -- I happen to
be rather slow to change channels myself. If these people
are unable to adapt even given time, on what basis can we
say that they are intelligent? If they excel in particular
areas (e.g. idiot savants), we can qualify them as intelligent
within those specialties, just as we reduce our expectations
for symbolic algebra programs. If they reached states of
high competence through early learning, then lost the ability
to learn or adapt further, I will only grant that they >>were<<
intelligent. -- KIL]

------------------------------

Date: 3 Nov 83 0:46:00-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Disc [& Comment]


A couple of weeks ago, I heard Marvin Minsky speak in Seattle. Among other
things, he discussed this kind of "loop detection" in an AI program. He
mentioned that he has a paper just being published, which he calls his
"Joke Paper," which discusses the applications of humor to AI. According
to Minsky, humor will be a necessary part of any intelligent system.

If I understood correctly, he believes that there is (will be) a kind
of a "censor" which recognizes "bad situations" that the intelligent
entity has gotten itself into. This censor can then learn to recognize
the precursors of this bad situation if it starts to occur again, and
can intervene. This then is the reason why a joke isn't funny if you've
heard it before. And it is funny the first time because it's "absurd,"
the laughter being a kind of alarm mechanism.
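[The censor mechanism described above can be sketched in code. The
following is purely my own hypothetical illustration, not Minsky's
design; the class and method names (Censor, observe, mark_bad) and the
fixed-size precursor window are all assumptions made for the sketch. -- KIL]

```python
# Hypothetical sketch of a "censor": it memorizes the states that preceded
# a bad situation and raises an alarm when that precursor pattern recurs.

class Censor:
    """Learns precursors of past bad situations and flags their recurrence."""

    def __init__(self, window=2):
        self.window = window           # how many preceding states form a precursor
        self.known_precursors = set()  # precursor patterns that led to trouble
        self.history = []              # states observed so far

    def observe(self, state):
        """Record a state; return True if a known bad precursor is recurring."""
        self.history.append(state)
        recent = tuple(self.history[-self.window:])
        return recent in self.known_precursors

    def mark_bad(self):
        """Called on noticing a bad situation: memorize what led up to it."""
        precursor = tuple(self.history[-self.window - 1:-1])
        if len(precursor) == self.window:
            self.known_precursors.add(precursor)

censor = Censor(window=2)
for state in ["setup", "punchline", "laugh"]:  # first hearing: the "absurd" alarm
    censor.observe(state)
censor.mark_bad()                              # the censor learns the precursor

# Second hearing: the learned precursor is recognized mid-sequence and the
# censor intervenes -- which is why the joke isn't funny the second time.
alarms = [censor.observe(s) for s in ["setup", "punchline", "laugh"]]
```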

Naturally, this doesn't really help with a particular implementation,
but I believe that I agree with the intuitions presented. It seems to
agree with the way I believe *I* think, anyway.

I hope I haven't misrepresented Minsky's ideas, and to be sure, you should
look for his paper. I don't recall him mentioning a title or publisher,
but he did say that the only reference he could find on humor was a book
by Freud, called "Jokes and Their Relation to the Unconscious."

(Gee, I hope his talk wasn't all a joke....)

Marion Hakanson {hp-pcd,teklabs}!orstcs!hakanson (Usenet)
hakanson@{oregon-state,orstcs} (CSnet)


[Minsky has previously mentioned this paper in AIList. You can get
a copy by writing to Minsky%MIT-OZ@MIT-MC. -- KIL]

------------------------------

Date: 31 Oct 83 7:52:43-PST (Mon)
From: hplabs!hao!seismo!ut-sally!ut-ngp!utastro!nather @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: utastro.766

A common characteristic of humans that is not shared by the machines
we build and the programs we write is called "boredom." All of us get
bored running around the same loop again and again, especially if nothing
is seen to change in the process. We get bored and quit.

*---> WARNING!!! <---*

If we teach our programs to get bored, we will have solved the
infinite-looping problem, but we will lose our electronic slaves who now
work, uncomplainingly, on the same tedious jobs day in and day out. I'm
not sure it's worth the price.
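[Boredom as loop detection can be sketched very simply: quit when a
state repeats with nothing changed. The sketch below is my own
illustration -- the function name and the repeated-state criterion are
assumptions, not anything proposed in the message above. -- KIL]

```python
# Minimal sketch of "boredom": iterate a step function, but give up
# as soon as a previously seen state recurs (nothing is changing).

def run_until_bored(step, state, max_steps=1000):
    """Iterate `step` from `state`; stop early ("get bored") on a repeat."""
    seen = {state}
    for _ in range(max_steps):
        state = step(state)
        if state in seen:          # same situation again, nothing has changed
            return state, "bored"  # a human would quit here; so does this loop
        seen.add(state)
    return state, "exhausted"

# A process that cycles 0 -> 1 -> 2 -> 0 ...: boredom triggers on the revisit,
# which is exactly the infinite-loop case a bored program would escape.
final, reason = run_until_bored(lambda s: (s + 1) % 3, 0)
```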

Ed Nather
ihnp4!{kpno, ut-sally}!utastro!nather

------------------------------

Date: 31 Oct 83 20:03:21-PST (Mon)
From: harpo!eagle!hou5h!hou5g!hou5f!hou5e!hou5d!mat @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: hou5d.725

> If we teach our programs to get bored, we will have solved the
> infinite-looping problem, but we will lose our electronic slaves who now
> work, uncomplainingly, on the same tedious jobs day in and day out. I'm
> not sure it's worth the price.

Hmm. I don't usually try to play in this league, but it seems to me that there
is a place for everything and every talent. Build one machine that gets bored
(in a controlled way, please) to work on Fermat's Last Theorem. Build another
that doesn't to check tolerances on camshafts or weld hulls. This [solving
the looping problem] isn't like destroying one's virginity, you know.

Mark Terribile
Duke Of deNet

------------------------------

End of AIList Digest
********************
