AIList Digest            Friday, 28 Oct 1983       Volume 1 : Issue 84 

Today's Topics:
Metaphysics - Split Consciousness,
Halting Problem - Discussion,
Intelligence - Recursion & Parallelism & Consciousness
----------------------------------------------------------------------

Date: 24 Oct 83 20:45:29-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: consciousness and the teleporter - (nf)
Article-I.D.: uiucdcs.3417


See also the 17th and final essay in Daniel Dennett's book Brainstorms
[Bradford Books, 1978]. The essay is called "Where Am I?" and investigates
exactly this question of "split consciousness."

------------------------------

Date: Thu 27 Oct 83 23:04:47-MDT
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Semi-Summary of Halting Problem Discussion

Now that the discussion of the Halting Problem and related issues has
died down, I'd like to restate the original question, which seems to
have been misunderstood.

The question is this: consider a learning program, or any program
that is self-modifying in some way. What must I do to prevent it
from getting caught in an infinite loop, or a stack overflow, or
other unpleasantnesses? For an ordinary program, it's no problem
(heh-heh): the programmer just has to be careful, or prove his
program correct, or specify its operations axiomatically, or <insert
favorite software methodology here>. But what about a program
that is changing as it runs? How can *it* know when it's stuck
in a losing situation?

The best answers I saw were along the lines of an operating system
design, where a stuck process can be killed, or pushed to the bottom
of an agenda, or whatever. Workable, but unsatisfactory. In the case
of an infinite loop (that nastiest of possible errors), the program
can only guess that it has created a situation where infinite loops
can happen.
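
A minimal sketch of that operating-system-style answer, in Python (the
step budget, the demotion rule, and all the names here are hypothetical
illustrations, not anything proposed in the discussion):

    import heapq
    import itertools

    def run_agenda(processes, budget=100, max_demotions=3):
        # processes: list of (priority, generator) pairs; lower priority
        # runs first.  A process that exhausts its step budget is pushed
        # toward the bottom of the agenda; after max_demotions it is killed.
        tiebreak = itertools.count()
        agenda = [(prio, next(tiebreak), 0, proc) for prio, proc in processes]
        heapq.heapify(agenda)
        while agenda:
            prio, _, demoted, proc = heapq.heappop(agenda)
            try:
                for _ in range(budget):
                    next(proc)              # run the process one step
            except StopIteration:
                continue                    # finished normally; drop it
            if demoted + 1 >= max_demotions:
                proc.close()                # give up: kill the stuck process
                continue
            heapq.heappush(agenda,
                           (prio + 1, next(tiebreak), demoted + 1, proc))

Note that this only ever guesses: a process killed after its last
demotion might have been one step away from finishing.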

The most obvious alternative is to say that the program needs an "infinite
loop detector". Ted Jardine of Boeing tells a story where, once upon
a time, some company actually tried to do this - write a program that
would detect infinite loops in any other program. Of course, this is
ludicrous; it's a version of the Halting Problem. For programs under a
given length, detection is possible in principle; for arbitrary
programs, it is not. So our self-modifying program can manage only a
partial solution, but that's OK, because it only has to be able to
analyze itself and its subprograms.

The question now becomes: can a program of length n detect infinite
loops in any program of length <= n? I don't know; you can't just
have it simulate itself and watch for duplicated states showing up,
because the extra storage for the in-between states would cause the
program to grow, violating the initial conditions of the question.
Some sort of static analysis could detect special cases (like the
Life blinkers mentioned by somebody), but I doubt that all cases
could be handled this way. Any theory types out there with the
answer?
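
For what it's worth, the duplicated-state check can at least be run in
constant extra storage: Floyd's cycle-finding algorithm keeps only two
saved states at a time, at the cost of assuming a deterministic
transition function and of needing an imposed step budget to ever give
up. A minimal sketch (all names hypothetical):

    def loops_forever(step, start, max_steps=10**6):
        # Floyd's tortoise-and-hare: if the state sequence ever repeats,
        # the fast pointer laps the slow one and the two collide.
        slow, fast = step(start), step(step(start))
        for _ in range(max_steps):
            if slow == fast:
                return True                 # some state recurred: a loop
            slow = step(slow)
            fast = step(step(fast))
        return False                        # no repeat within the budget

    # A two-state "blinker" in the spirit of the Life example: 0 <-> 1
    print(loops_forever(lambda s: 1 - s, 0))    # True

This doesn't touch the self-reference problem, of course: a program
simulating itself through such a step function is still a program
analyzing a program of its own size.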

Anyway, I *don't* think these are vacuous problems; I encountered them
when working on a learning capability for my parser, and "solved" them
by being very careful about rules that expanded the sentence, rather
than reducing it (really just context-sensitive vs. context-free).
I'm facing the problem once again in my new project (a KR language
derived from RLL), and this time there's no way to sidestep! Any new
ideas would be greatly appreciated.
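
A hypothetical sketch of that kind of termination discipline: admit
only rewrite rules whose right-hand side is strictly shorter than
their left-hand side, so every application shrinks the sentence and no
derivation can run forever. (The rule representation below is invented
for illustration.)

    def accept_rule(lhs, rhs):
        # Admit a rule (a pair of token tuples) only if it strictly
        # shortens whatever it is applied to.
        return len(rhs) < len(lhs)

    def reduce_sentence(tokens, rules):
        # Apply reducing rules until none fires.  This must terminate,
        # because every application shortens the token list.
        tokens = list(tokens)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in rules:
                n = len(lhs)
                for i in range(len(tokens) - n + 1):
                    if tuple(tokens[i:i + n]) == lhs:
                        tokens[i:i + n] = rhs
                        changed = True
                        break
                if changed:
                    break
        return tokens

    # One reducing rule: "the big dog" -> NP
    print(reduce_sentence(["the", "big", "dog"],
                          [(("the", "big", "dog"), ("NP",))]))   # ['NP']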

Stan Shebs

------------------------------

Date: Wed, 26 Oct 1983 16:30 EDT
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Transcendental Recursion


I've just joined this mailing list and I'm wondering about the recent
discussion of "consciousness." While it's an interesting issue, I
wonder how much relevance it has for AI. Thomas Nagel's article "What
Is It Like to Be a Bat?" argues that consciousness might never be the
proper subject of scientific inquiry because it is, by its nature,
subjective (to the max, as it were), and science can deal only with
objective (or at least public) things.

Whatever the merits of this argument, it seems that a more profitable
object of our immediate quest might be intelligence. Now it may be the
case that the two are the same thing -- or it may be that consciousness
is just "what it is like" to be an intelligent system. On the other
hand, much of our "unconscious" or "subconscious" reasoning is very
intelligent. Consider the number of moves that a chess master doesn't
even consider -- they are rejected even before being brought to
consciousness. Yet the action of rejecting them is a very intelligent
thing to do. Certainly someone who didn't reject those moves would have
to waste time considering them and would be a worse (less intelligent?)
chess player. Conversely, it seems reasonable to suppose that one cannot
be conscious unless intelligent.

"Intelligent" like "strong" is a dispositional term, which is to say it
indicates what an agent thus described might do or tend to do or be able
to do in certain situations. Whereas it is difficult to give a sharp
boundary between the intelligent and the non-intelligent, it is often
possible to say which of two possible actions would be the more
intelligent.

In most cases, it is possible to argue WHY the action is the more
intelligent. The argument will typically mention the goals of the
agent, its abilities, and its knowledge about the world. So it seems
that there is a fairly simple and common understanding of how the term
is applied: An action is intelligent just in case it well satisfies
some goals of the agent, given what the agent knows about the world. An
agent is intelligent just in case it performs actions that are
intelligent for it to perform.

A potential problem with this is that the proposed account requires that
the agent often be able to figure out some very difficult things on the
way to generating an intelligent action: Which goal should I satisfy?
What is the case in the world? Should I try to figure out a better
solution? Each of these subproblems, constitutive of intelligence,
seems to require intelligence.

But there is a way out, and it might bring us back to the issue of
consciousness. If the intelligent system is a program, there is no
problem with its applying itself recursively to its subproblems. So the
subproblems can also be solved intelligently. For this to work, though,
the program must understand itself and understand when and how to apply
itself to its subproblems. So at least some introspective ability seems
like it would be important for intelligence, and the better the system
was at introspective activities, the more intelligent it would be. The
recent theses of Doyle and Smith seem to indicate that a system could be
COMPLETELY introspective in the sense that all aspects of its operation
could be accessible and modifiable by the program itself.
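
As a toy illustration of that recursive self-application (the problem
domain and all names are invented, and the depth cutoff is only a
caricature of real introspection):

    def solve(problem, depth=0, max_depth=50):
        # The same procedure is applied to the subproblems it generates.
        # Here a "problem" is just a nested arithmetic expression.
        if isinstance(problem, (int, float)):
            return problem                  # primitive: no subproblems
        if depth >= max_depth:
            raise RuntimeError("solver judges itself stuck")
        op, left, right = problem           # decompose into subproblems
        a = solve(left, depth + 1, max_depth)   # the solver, applied to itself
        b = solve(right, depth + 1, max_depth)
        return a + b if op == "+" else a * b    # combine the sub-solutions

    print(solve(("+", 2, ("*", 3, 4))))     # 2 + (3 * 4) = 14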

But I don't know if it would be conscious or not.

------------------------------

Date: 26 Oct 1983 1537-PDT
From: Jay <JAY@USC-ECLC>
Subject: Re: Parallelism and Consciousness

Anything that can be done in parallel can be done sequentially.
Parallel computations can be faster, and can be easier to
understand/write. So if consciousness can be programmed, and if it is
as complex as it seems, then perhaps parallelism should be exploited.
No algorithm is inherently parallel.
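
A minimal sketch of the first claim, with invented task bodies; each
yield marks one step of a "parallel" task being interleaved on a
single sequential processor:

    from collections import deque

    def run_round_robin(tasks):
        # Simulate parallel tasks sequentially: advance each task one
        # step at a time, round-robin.
        queue = deque(tasks)
        while queue:
            task = queue.popleft()
            try:
                next(task)              # advance the task by one step
                queue.append(task)      # more work remains: requeue it
            except StopIteration:
                pass                    # task finished; drop it

    def count(name, n):
        for i in range(n):
            print(name, i)
            yield                       # hand control back after each step

    run_round_robin([count("A", 2), count("B", 3)])
    # prints: A 0, B 0, A 1, B 1, B 2 -- interleaved, one processor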

j'

------------------------------

Date: Thu 27 Oct 83 14:01:59-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


From: BUCKLEY@MIT-OZ
Subject: Parallelism and Consciousness

-- of what relevance is the issue of time-behavior of an algorithm to
the phenomenon of intelligence, i.e., can there be in principle such a
beast as a slow, super-intelligent program?

gracious, isn't this a bit chauvinistic? suppose that ai is eventually
successful in creating machine intelligence, consciousness, etc. on
nano-second speed machines of the future: we poor humans, operating
only at rates measured in seconds and above, will seem incredibly slow
to them. will they engage in debate about the relevance of our time-
behavior to our intelligence? if there cannot in principle be such a
thing as a slow, super-intelligent program, how can they avoid concluding
that we are not intelligent?
-=*=- rick

------------------------------


Mail-From: DUGHOF created at 27-Oct-83 14:14:27
Date: Thu 27 Oct 83 14:14:27-EDT
From: DUGHOF@MIT-OZ
Subject: Re: Parallelism & Consciousness
To: RICKL@MIT-OZ
In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT

About slow intelligence -- there is one and only one reason to have
intelligence, and that is to survive. That is where intelligence
came from, and that is what it is for. It will do no good to have
a "slow, super-intelligent program", for that is a contradiction in
terms. Intelligence has to be fast enough to keep up with the
world in real time. If the superintelligent AI program is kept in
some sort of shielded place so that its real-time environment is
essentially benevolent, then it will develop a different kind of
intelligence from one that has to operate under higher pressures,
in a faster-changing world. Everybody has had the experience of
wishing they'd made some clever retort to someone, but thinking of
it too late. Well, if you always thought of those clever remarks
on the spot, you'd be smarter than you are. If things that take
time (chess moves, writing good articles, developing good ideas)
took less time, then I'd be smarter. Intelligence and the passage
of time are not unrelated. You can't slow your processor down and
then claim that your program's intelligence is unaffected, even
if it's running the same program. The world is marching ahead at
the same speed, and "pure, isolated intelligence" doesn't exist.

------------------------------

Date: Thu 27 Oct 83 14:57:18-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness


From: DUGHOF@MIT-OZ
Subject: Re: Parallelism & Consciousness
In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT

About slow intelligence -- there is one and only one reason to have
intelligence, and that is to survive.... It will do no good to have
a "slow, super-intelligent program", for that is a contradiction in
terms. Intelligence has to be fast enough to keep up with the
world in real time.

are you claiming that if we someday develop super-fast super-intelligent
machines, then we will no longer be intelligent? this seems implicit in
your argument, and seems itself to be a contradiction in terms: we *were*
intelligent until something faster came along, and then after that we
weren't.

or if this isn't strong enough for you -- you seem to want
intelligence to depend critically on survival -- imagine that the super-fast
super-intelligent computers have a robot interface, are malevolent,
and hunt us humans to extinction in virtue of their superior speed
& reflexes. does the fact that we do not survive mean that we are not
intelligent? or does it mean that we are intelligent now, but could
suddenly become un-intelligent without ourselves changing (in virtue
of the world around us changing)?

doubtless survival is important to the evolution of intelligence, & that
point is not really under debate. however, to say that whether something
is or is not intelligent depends on the relative speed of the creatures
sharing your world seems to make us un-intelligent as machines and
programs get better, and amoebas intelligent for as long as they were the
fastest surviving thing around.

-=*=- rick

------------------------------

Date: Thu, 27 Oct 1983 15:26 EDT
From: STRAZ%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


Hofstadter:
About slow intelligence -- there is one and only one [...]

Lathrop:
doubtless survival is important to the evolution of intelligence, &
that point is not really under debate.

Me:
No, survival is not the point. It is for the organic forms that
evolved with little help from outside intelligences, but a computer
that exhibits a "slow, super-intelligence" in the protective
custody of humans can solve problems that humans might never
be able to solve (due to short attention span, lack of short-term
memory, tedium, etc.)

For example, a problem like where to best put another bridge/tunnel
in Boston is a painfully difficult thing to think about, but if
a computer comes up with a good answer (with explanatory justifications)
after thinking for a month, it would have fulfilled anyone's
definition of slow, superior intelligence.

------------------------------

Date: Thu, 27 Oct 1983 23:35 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


That's what you get for trying to define things too much.

------------------------------

End of AIList Digest
********************
