AIList Digest            Tuesday, 1 Nov 1983       Volume 1 : Issue 87 

Today's Topics:
Rational Psychology - Definition,
Parallel Systems,
Consciousness & Intelligence,
Halting Problem,
Molecular Computers
----------------------------------------------------------------------

Date: 29 Oct 83 23:57:36-PDT (Sat)
From: hplabs!hao!csu-cs!denelcor!neal @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: denelcor.182

I see what you are saying, and I beg to disagree. I don't see the
distinction between rational and irrational psychology (it's probably
not that simple) as depending on whether or not the scientist is being
rational, but on whether or not the subject is (or rather, which aspect
of his behavior--or mentation, if you accept the existence of that--is
under consideration). It is more like the distinction between organic
and inorganic chemistry.

------------------------------

Date: Mon, 31 Oct 83 10:16:00 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Sequential vs. parallel

It was claimed that "parallel computation can always
be done sequentially." I had thought that this naive concept had passed
away into never-never land, but I suppose not. I do not deny that MANY
parallel computations can be accomplished sequentially, yet not ALL
parallel computations can be made sequential. The class of parallel
computations that cannot be accomplished sequentially comprises those
that involve the state of all variables at a single instant. This class
of parallelism often arises in sensor applications. It would not be
valid, for example, to raster-scan (a sequential computation) a sensing
field if the processing of that field relied upon the quantization of
its elements at a single instant.
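
[A minimal sketch of the sampling-skew issue, in Python; the field()
function is a hypothetical stand-in for a time-varying sensor, not
anything from the posting above:

    # Raster-scanning a time-varying field samples each element at a
    # different instant; a latched snapshot captures one instant.
    def field(x, t):
        # Value of sensor element x at time t (toy moving pattern).
        return 1 if (x + t) % 10 == 0 else 0

    N = 10

    # Raster scan: element x is read at time t = x, so the "image"
    # mixes values from N different instants.
    raster = [field(x, t=x) for x in range(N)]

    # Snapshot: all elements latched at the same instant t = 0, which
    # is what the parallel computation described above assumes.
    snapshot = [field(x, t=0) for x in range(N)]

    print(raster)    # pattern smeared by scan-time skew
    print(snapshot)  # true state of the field at one instant
]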

I don't want to belabor this point, but it should be recognized
that the common assertion that all parallel computation can be done
sequentially is NOT ALWAYS VALID. In my own experience, I have found
that artificial intelligence (and real biological intelligence, for
that matter) relies heavily upon comparisons of various elements at a
single time instant. As such, the assumption that parallel algorithms
can be made sequential is often invalid. Something to think about.

------------------------------

Date: Saturday, 29 Oct 1983 21:05-PST
From: sdcrdcf!trw-unix!scgvaxd!qsi03!achut@rand-relay
Subject: Consciousness, Halting Problem, Intelligence


I am new to this mailing list and I see there is some lively
discussion going on. I am eager to contribute to it.

Consciousness:
I treat the words self-awareness, consciousness, and soul as
synonyms in the context of these discussions. They are all epiphenomena
of the phenomenon of intelligence, along with emotions, desires, etc.
To say that machines can never be truly intelligent because they cannot
have a "soul" is to be excessively naive and anthropocentric. Self-
awareness is not a necessary prerequisite for intelligence; it arises
naturally *because* of intelligence. All intelligent beings possess some
degree of self-awareness; to perceive and interact with the world, there
must be an internal model, and this invariably involves taking into
account the "self". A very, very low intelligence, like that of a plant,
will possess a very, very low self-awareness.

Parallelism:
The human brain resembles a parallel machine more than it does a
purely sequential one. Parallel machines can do many things much more
quickly than their sequential counterparts. Parallel hardware may well
make the difference between attaining AI in the near future and not
attaining it for several decades. But I cannot understand those who claim
that there is something *fundamentally* different between the two types of
architectures. I am always amazed at the extremes to which some people will
go to find the "magic spark" which separates intelligence from non-
intelligence. Two of these are "continuousness vs. discreteness" and
"non-determinism vs. determinism".
Continuous? Nothing in the universe is continuous. (Except maybe
arguments to the contrary :-)) Mass, energy, space and even time, at least
according to current physical knowledge, are all quantized. Non-determinism?
Many people feel that "randomness" is a necessary ingredient to intelligence.
But why isn't this possible with a sequential architecture? I can
construct a "discrete" random number generator for my sequential machine
so that it behaves in a similar manner to your "non-deterministic" parallel
machine, although perhaps slower. (See "Slow intelligence" below)
Perhaps the "magic sparkers" should consider that the difference they
are searching for is merely one of complexity. (I really hate to use the
word "merely", since I appreciate the vast scope of the complexity, but
it seems appropriate here.) There is no evidence, currently, to justify
thinking otherwise.
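
[A minimal sketch of the "discrete random number generator" point, in
Python; the LCG constants are standard textbook values, and the
choose() interface is purely illustrative:

    # A deterministic (sequential) pseudo-random generator standing in
    # for a "non-deterministic" choice.
    class LCG:
        def __init__(self, seed=42):
            self.state = seed

        def next(self):
            # Classic linear congruential step: x' = (a*x + c) mod m
            self.state = (1664525 * self.state + 1013904223) % 2**32
            return self.state

        def choose(self, options):
            # "Non-deterministic" choice, simulated deterministically.
            return options[self.next() % len(options)]

    rng = LCG()
    print([rng.choose(['left', 'right', 'wait']) for _ in range(5)])
]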

The Halting(?) Problem:
What Stan referred to as the "Halting Problem" is really
the "looping problem", hence the subsequent confusion. The Halting Problem
is not really relevant to AI, but the looping problem *is* relevant. The
question is not even "why don't humans get caught in loops", since, as
Mr. Frederking aptly points out, "beings which aren't careful about this
fail to breed, and are weeded out by evolution". (For an interesting
story of what could happen if this were not the case, see "The Riddle
of the Universe and Its Solution" by Christopher Cherniak in "The
Mind's I".) But rather, the more interesting question is "by what
mechanisms do humans avoid them?", and then, "are these the best
mechanisms to use in AI programs?".
It is not clear that this would not be a problem when AI is attempted
on a machine whose internal states could conceivably recur. Now, I am
not saying that this is an insurmountable problem by any means; I am
merely saying that it might be a worthy topic of discussion.
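
[A sketch of one such mechanism, in Python: remember (hashes of)
internal states and flag any recurrence, which on a deterministic
machine implies a loop. The step() and state() functions are
hypothetical stand-ins, not part of the posting above:

    def runs_forever(step, state, start, limit=10**6):
        seen = set()
        current = start
        for _ in range(limit):
            key = state(current)      # hashable snapshot of the state
            if key in seen:
                return True           # state recurred: a deterministic
                                      # machine must be looping
            seen.add(key)
            current = step(current)
            if current is None:       # computation halted normally
                return False
        return False                  # gave up within the step budget
]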

Slow intelligence:
Intelligence is dependent on time? This would require a curious
definition of intelligence. Suppose you played chess at strength 2000 given
5 seconds per move, 2010 given 5 minutes, and 2050 given as much time as you
desired. Suppose the corresponding numbers for me were 1500, 2000, and 2500.
Who is the better (more intelligent) player? True, I need 5 minutes per
move just to play as well as you do at only 5 seconds. But shouldn't the
"high end" be compared instead? There are many bases on which to decide
the "greater" of two intelligences. One is (conceivably, but not
exclusively) speed. Another is the number and power of inferences one
can make in a given situation. Another is memory, and the ability to
correlate current situations with previous ones. STRAZ@MIT-OZ has the
right idea. Incidentally, I'm
surprised that no one pointed out an example of an intelligence staring
us in the face which is slower but smarter than us all, individually.
Namely, this net!

------------------------------

Date: 25 Oct 83 13:34:02-PDT (Tue)
From: harpo!eagle!mhuxl!ulysses!cbosgd!cbscd5!pmd @ Ucb-Vax
Subject: Artificial Consciousness? [and Reply]

I'm interested in getting some feedback on some philosophical
questions that have been haunting me:

1) Is there any reason why developments in artificial intelligence
and computer technology could not someday produce a machine with
human consciousness (i.e. an I-story)?

2) If the answer to the above question is no, and such a machine were
produced, what would distinguish it from humans as far as "human"
rights were concerned? Would it be murder for us to destroy such a
machine? What about letting it die of natural (?) causes if we
have the ability to repair it indefinitely?
(Note: Just having a unique, human genetic code does not legally make
one human, as per the 1973 *Roe v. Wade* Supreme Court decision on
abortion.)

Thanks in advance.

Paul Dubuc

[For an excellent discussion of the rights and legal status of AI
systems, see Marshal Willick's "Artificial Intelligence: Some Legal
Approaches and Implications" in the Summer '83 issue (Vol. 4, No. 2) of
AI Magazine. The resolution of this issue will of course be up to the
courts. -- KIL]

------------------------------

Date: 28 Oct 1983 21:01-PDT
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Halting in learning programs

If you restrict the class of things that can be learned by your
program to those which don't cause infinite recursion or circularity,
you will have a good solution to the halting problem you state.
Although generalized learning might be nice, until we know more about
learning, it might be more appropriate to select specific classes of
adaptation which lend themselves to analysis and the development of
new theories.
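
[A minimal sketch of such a restriction, in Python: a new rule is
accepted only if it keeps the dependency graph of learned rules
acyclic. This illustrates the general idea only, not any particular
learning system:

    def reaches(graph, src, dst):
        # True if dst is reachable from src in the rule graph.
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, ()))
        return False

    def try_learn(graph, a, b):
        # Add rule a -> b only if it would not create a cycle.
        if a == b or reaches(graph, b, a):
            return False              # circular; refuse to learn it
        graph.setdefault(a, set()).add(b)
        return True

    rules = {}
    print(try_learn(rules, 'wet', 'rain'))   # True
    print(try_learn(rules, 'rain', 'wet'))   # False: would be circular
]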

As a simple example of a learning automaton free of the halting
problem, the PURR-PUSS system developed by John Andreae (of New
Zealand) does an excellent job of learning without any such difficulty.
Other such systems exist as well; all you have to do is look for them.
I guess the point is that rather than pursue the impossible, one should
find something possible that may lead to the solution of a bigger
problem, and pursue it with the passion and rigor worthy of the
problem. An old saying:
'Problems worthy of attack prove their worth by fighting back'

Fred

------------------------------

Date: Sat, 29 Oct 83 13:23:33 CDT
From: Bob.Warfield <warbob.rice@Rand-Relay>
Subject: Halting Problem Discussion

It turns out that any computer program running on a real piece of
hardware may be simulated by a deterministic finite automaton, since
the hardware has only a finite (but very large) number of possible
states. This is usually not a productive observation to make, but it
does present one solution to the halting problem for real (i.e.,
finite) computing hardware. Simulate the program in question as a DFA
and look for loops. From this, one should be able to tell what inputs
to the DFA would produce an infinite loop, and recognition of those
inputs could be done by a smaller DFA (the old one sans loops) that
gets incorporated into the learning program. It would run in parallel
with the original (or one step ahead?) and take action if a dangerous
situation appeared.
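
[A sketch of the loop-finding principle on an explicit DFA, in Python.
Real programs have astronomically many states, as noted above, so the
transition table here is a toy, not a simulated real machine:

    def loops(delta, start, accepting, inputs):
        # Run the DFA on a cyclically repeated input string. A repeated
        # (state, input-position) configuration means it never halts.
        seen = set()
        state, i = start, 0
        while True:
            if state in accepting:
                return 'halts'
            config = (state, i % len(inputs))
            if config in seen:
                return 'loops'        # configuration repeated
            seen.add(config)
            state = delta[state][inputs[i % len(inputs)]]
            i += 1

    delta = {'A': {'x': 'B'}, 'B': {'x': 'A'}}  # A and B feed each other
    print(loops(delta, 'A', accepting=set(), inputs='x'))   # 'loops'
]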

Bob Warfield
warbob@rice

------------------------------

Date: Mon 31 Oct 83 15:45:12-PST
From: Calton Pu <CALTON@WASHINGTON.ARPA>
Subject: Halting Problem: Resource Use

From Shebs@Utah-20:

    The question is this: consider a learning program, or any
    program that is self-modifying in some way. What must I do
    to prevent it from getting caught in an infinite loop, or a
    stack overflow, or other unpleasantnesses? ...
    How can *it* know when it's stuck in a losing situation?

Trying to come up with a loop detector program seemed to find few enthusiasts.
The limited loop detector suggests another approach to the "halting problem".
The question above does not require the solution of the halting problem,
although that could help. The question posed is one of resource allocation
and use. Self-awareness is necessary only insofar as the program must
watch itself and know whether it is making progress relative to its
resource consumption.
Consequently it is not surprising that:

    The best answers I saw were along the lines of an operating
    system design, where a stuck process can be killed, or
    pushed to the bottom of an agenda, or whatever.

However, Stan wants more:

    Workable, but unsatisfactory. In the case of an infinite
    loop (that nastiest of possible errors), the program can
    only guess that it has created a situation where infinite
    loops can happen.

The real issue here is not whether the program is in a loop, but
whether the program will be able to find a solution in feasible time.
Suppose a program will take a thousand years to find a solution; will
you let it run that long? In other words, the problem is one of
measuring gained progress against spent resources. It may turn out
that a program is not in a loop, but you choose to write another
program instead of letting the first run to completion.
Looping is just one of the losing situations.
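
[A sketch of this progress-versus-resources view, in Python; the
progress() measure is application-specific and assumed here, as is
the step() unit of work:

    def supervise(step, progress, budget, min_rate, window=100):
        # Run step() until solved, out of budget, or progress stalls.
        spent = 0
        checkpoint = progress()
        while spent < budget:
            if step():                # one unit of work; True if solved
                return 'solved'
            spent += 1
            if spent % window == 0:
                gained = progress() - checkpoint
                checkpoint = progress()
                if gained / window < min_rate:
                    return 'abandoned'    # too little progress per
                                          # resource spent
        return 'out of budget'
]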

Summarizing, the learning program should be allowed to recognize a
losing situation because it is infeasible, whether a solution is
possible or not. From this view, there are two aspects to the decision:
measuring the progress made by the program, and monitoring its resource
consumption. It is the second aspect that involves some "operating
systems design". I would be interested to know whether your parser
knows it is making progress.


-Calton-

Usenet: ...decvax!microsoft!uw-beaver!calton

------------------------------

Date: 31 Oct 83 2030 EST
From: Dave.Touretzky@CMU-CS-A
Subject: forwarded article


- - - - Begin forwarded message - - - -
Date: 31 Oct 1983 18:41 EST (Mon)
From: Daniel S. Weld <WELD%MIT-OZ@MIT-MC.ARPA>
To: macmol%MIT-OZ@MIT-MC.ARPA
Subject: Molecular Computers

Below is a forwarded message:
From: David Rogers <DRogers at SUMEX-AIM.ARPA>

I have always been confused by the people who work on
"molecular computers"; it seems so stupid. It seems much
more reasonable to consider the reverse application: using
computers to make better molecules.

Is anyone out there excited by this stuff?

MOLECULAR COMPUTERS by Lee Dembart, LA Times
(reprinted from the San Jose Mercury News 31 Oct 83)

SANTA MONICA - Scientists have dreamed for the past few years of
building a radically different kind of computer, one based on
molecular reactions rather than on silicon.

With such a machine, they could pack circuits much more tightly than
they can inside today's computers. More important, a molecular
computer might not be bound by the rigid binary logic of conventional
computers.

Biological functions - the movement of information within a cell or
between cells - are the models for molecular computers. If that basic
process could be reproduced in a machine, it would be a very powerful
machine.

But such a machine is many, many years away. Some say the idea is
science fiction. At the moment, it exists only in the minds of
several dozen computer scientists, biologists, chemists and engineers,
many of whom met here last week under the aegis of the Crump Institute
for Medical Engineering at the University of California at Los
Angeles.

"There are a number of ideas in place, a number of technologies in
place, but no concrete results," said Michael Conrad, a biologist and
computer scientist at Wayne State University in Detroit and a
co-organizer of the conference.

For all their strengths, today's digital computers have no ability to
judge. They cannot recognize patterns. They cannot, for example,
distinguish one face from another, as even babies can.

A great deal of information can be packed on a computer chip, but it
pales by comparison to the contents of the brain of an ant, which can
protect itself against its environment.

If scientists had a computer with more flexible logic and circuitry,
they think they might be able to develop "a different style of
computing", one less rigid than current computers, one that works more
like a brain and less like a machine. The "mood" of such a device
might affect the way scientists solve problems, just as people's moods
affect their work.

The computing molecules would be manufactured by genetically
engineered bacteria, which has given rise to the name "biochip" to
describe a network of them.

"This is really the new gene technology", Conrad said.

The conference was a meeting on the frontiers - some would say fringes
- of knowledge, and several times participants scoffed, saying that
the discussion was meandering into philosophy.

The meeting touched on some of the most fundamental questions of brain
and computer research, revealing how little is known of the mind's
mechanisms.

The goal of artificial intelligence work is to write programs that
simulate thought on digital computers. The meeting's goal was to think
about different kinds of computers that might do that better.

Among the questions posed at the conference:

- How do you get a computer to chuckle at a joke?

- What is the memory capacity of the brain? Is there a limit to that
capacity?

- Are there styles of problem solving that are not digitally
computable?

- Can computer science shed any light on the mechanisms of biological
science? Can computer science problems be addressed by biological
science mechanisms?

Proponents of molecular computers argue that it is possible to make
such a machine because biological systems perform those processes all
the time. Proponents of artificial intelligence have argued for years
that the existence of the brain is proof that it is possible to make a
small machine that thinks like a brain.

It is a powerful argument. Biological systems already exist that
compute information in a better way than digital computers do. "There
has got to be inspiration growing out of biology", said F. Eugene
Yates, the Crump Institute's director.

Bacteria use sophisticated chemical processes to transfer information.
Can that process be copied?

Enzymes work by stereospecifically matching their molecules with other
molecules, a decision-making process that occurs thousands of times a
second. It would take a binary computer weeks to make even one match.

"It's that failure to do a thing that an enzyme does 10,000 times a
second that makes us think there must be a better way," Yates said.

In the history of science, theoretical progress and technological
progress are intertwined. One makes the other possible. It is not
surprising, therefore, that thinking about molecular computers has
been spurred recently by advances in chemistry and biotechnology that
seem to provide both the materials needed and a means of producing
them on a commercial scale.

"If you could design such a reaction, you could probably get a
bacteria to make it," Yates said.

Conrad thinks that a functioning machine is 50 years away, and he
described it as a "futuristic" development.
- - - - End forwarded message - - - -

------------------------------

End of AIList Digest
********************
