AIList Digest Monday, 26 Sep 1983 Volume 1 : Issue 63
Today's Topics:
Robotics - Physical Strength,
Parallelism & Physiology,
Intelligence - Turing Test,
Learning & Knowledge Representation,
Rational Psychology
----------------------------------------------------------------------
Date: 21 Sep 83 11:50:31-PDT (Wed)
From: ihnp4!mtplx1!washu!eric @ Ucb-Vax
Subject: Re: Strong, agile robot
Article-I.D.: washu.132
I just glanced at that article for a moment, noting the leg mechanism
detail drawing. It did not seem to me that the beastie could move
very fast. Very strong IS nice, tho... Anyway, the local supplier of
that mag sold them all. Anyone remember if it said how fast it could
move, and with what payload?
eric ..!ihnp4!washu!eric
------------------------------
Date: 23 Sep 1983 0043-PDT
From: FC01@USC-ECL
Subject: Parallelism
I thought I might point out that virtually no machine built in the
last 20 years is actually lacking in parallelism. In reality, just as
the brain has many neurons firing at any given time, computers have
many transistors switching at any given time. Just as the cerebellum
is able to maintain balance without the higher brain functions in the
cerebrum explicitly controlling the IO, most current computers have IO
controllers capable of handling IO while the CPU does other things.
Just as people have faster short term memory than long term memory but
less of it, computers have faster short term memory than long term
memory and use less of it. These are all results of cost/benefit
tradeoffs for each implementation, just as I presume our brains and
bodies are. Don't be so fast to think that real computer designers are
ignorant of physiology. The trend towards parallelism now is more like
the human social system of having a company work on a problem. Many
brains, each talking to each other when they have questions or
results, each working on different aspects of a problem. Some people
have breakdowns, but the organization keeps going. Eventually it comes
up with a product; it may not really solve the problem posed at the
beginning, but it may have solved a related problem or found a better
problem to solve.
Another copyrighted excerpt from my not yet finished book on
computer engineering modified for the network bboards, I am ever
yours,
Fred
------------------------------
Date: 14 Sep 83 22:46:10-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: in defense of Turing - (nf)
Article-I.D.: uiucdcs.2822
There are two points where Martin Taylor's response reveals that I was
not emphatic enough [you see, it is possible to underflame, and thus be
misunderstood!] in my comments on the Turing test.
1. One of Dennett's main points (which I did not mention, since David
Rogers had already posted it in the original note of this string) is
that the unrestricted Turing-like test of which he spoke is a
SUFFICIENT, but not a NECESSARY test for intelligence comparable to
that possessed and displayed by most humans in good working order. [I
myself would add that it tests as much for mastery of human
communication skills (which are indeed highly dependent on particular
cultures) as it does for intelligence.] That is to say, if a program
passes such a rigorous test, then the practitioners of AI may
congratulate themselves for having built such a clever beast.
However, a program which fails such a test need not be considered
unintelligent. Indeed, a human who fails such a test need not be
considered unintelligent -- although one would probably consider
him/her to be of substandard intelligence, or of impaired
intelligence, or dyslexic, or incoherent, or unconscious, or amnesic,
or aphasic, or drunk (i.e. disabled in some fashion).
2. I did not post "a set of criteria which an AI system should pass to
be accepted as human-like at a variety of levels." I posted a set of
tests by which to gauge progress in the field of AI. I don't imagine
that these tests have anything to do with human-ness. I also don't
imagine that many people who discuss and discourse upon "intelligence"
have any coherent definition for what it might be.
Other comments that seem relevant (but might not be)
----- -------- ---- ---- -------- ---- ----- --- ---
Neither Dennett's test, nor my tests are intended to discern whether
or not the entity in question possesses a human brain.
In addition to flagrant use of hindsight, my tests also reveal my bias
that science is an endeavor which requires intelligence on the part of
its human practitioners. I don't mean to imply that it is the only
such domain. Other domains which require that the people who live in
them have "smarts" are puzzle solving, language using, language
learning (both first and second), etc. Other tasks not large enough
to qualify as domains that require intelligence (of a degree) from
people who do them include: figuring out how to use a paper clip or a
stapler (without being told or shown), figuring out that someone was
showing you how to use a stapler (without being told that such
instruction was being given), improvising a new tool or method for a
routine task that one is accustomed to doing with an old tool or
method, realizing that an old method needs improvement, etc.
The interdependence of intelligence and culture is much more important
than we usually give it credit for. Margaret Mead must have been
quite a curiosity to the peoples she studied. Imagine that a person
of such a different and strange (to us) culture could be made to
understand enough about machines and the Turing test so that he/she
could be convinced to serve as an interlocutor... On second thought,
that opens up such a can of worms that I'd rather deny having proposed
it in the first place.
------------------------------
Date: 19 Sep 83 17:43:53-PDT (Mon)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: utah-cs.1913
I just read Jon Doyle's article about Rational Psychology in the
latest AI Magazine (Fall '83), and am also very interested in the
ideas therein. The notion of trying to find out what is *possible*
for intelligences is very intriguing, not to mention the idea of
developing some really sound theories for a change.
Perhaps I could mention something I worked on a while back that
appears to be related. Empirical work in machine learning suggests
that there are different levels of learning - learning by being
programmed, learning by being told, learning by example, and so forth,
with the levels being ordered by their "power" or "complexity",
whatever that means. My question: is there something fundamental
about this classification? Are there other levels? Is there a "most
powerful" form of learning, and if so, what is it?
I took the approach of defining "learning" as "behavior modification",
even though that includes forgetting (!), since I wasn't really
concerned with whether the learning resulted in an "improvement" in
behavior or not. The model of behavior was somewhat interesting.
It's kind of a dualistic thing, consisting of two entities: the
organism and the environment. The environment is everything outside,
including the organism's own physical body, while the organism is
more or less equivalent to a mind. Each of these has a state, and
behavior can be defined as functions mapping the set of all states to
itself. Both the environment and the organism have behaviors that can
be treated in the same way (that is, they are like mirror images of
each other). The whole development is too elaborate for an ASCII
terminal, but it boiled down to this: since learning is a part of
behavior, but it also *modifies* behavior, there is a part of the
behavior function that is self-modifying. One can then define
"1st order learning" as that which modifies ordinary behavior. 2nd
order learning would be "learning how to learn", 3rd order would be
"learning how to learn how to learn" (whatever *that* means!). The
definitions are more precise than my Anglicization here, and seem to
indicate a whole infinite hierarchy of learning types, each supposedly
more powerful than the last. It doesn't do much for my
original questions, because the usual types of learning are all 1st
order - although they don't have to be. Lenat's work on learning
heuristics might be considered 2nd order, and if you look at it in the
right way, it may be that EURISKO actually implements all orders of
learning at the same time, in which case the above discussion is
garbage (sigh).
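For concreteness, here is a rough sketch of the dualistic model in
program form; the names, the novelty threshold, and the particular
modification rule are mine, invented purely for illustration, not part
of the original development.

# Both the organism (the "mind") and the environment (everything
# else, including the body) are functions from the set of all states
# to itself; 1st-order learning is the part of the organism's
# behavior that rewrites the behavior function itself.

def environment(state):
    # the world's behavior: it changes state whatever the mind does
    return {**state, "time": state.get("time", 0) + 1}

def make_behavior(threshold):
    # ordinary (0th-order) behavior: pick an action from the state
    def behave(state):
        action = "explore" if state.get("novelty", 1.0) > threshold else "rest"
        return {**state, "action": action, "threshold": threshold}
    return behave

def learn_1st_order(behave, state):
    # 1st-order learning: map one behavior function to another,
    # here by lowering the exploration threshold after an experience
    return make_behavior(state.get("threshold", 0.5) * 0.9)

behave = make_behavior(0.5)
state = {"novelty": 0.8}
state = environment(behave(state))        # act, then the world responds
behave = learn_1st_order(behave, state)   # behavior itself has changed
# 2nd-order learning would take learn_1st_order as input and return a
# modified learning function, and so on up the hierarchy.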
Another question that has concerned me greatly (particularly since
building my parser) is the relation of the Halting Problem to AI. My
program was basically a production system, and had an annoying
tendency to get caught in infinite loops of various sorts. More
misfeatures than bugs, though, since the theory did not expressly
forbid such loops! To take a more general example, why don't circular
definitions cause humans to go catatonic? What is the mechanism that
seems to cut off looping? Do humans really beat the Halting Problem?
One possible mechanism is that repetition is boring, and so all loops
are cut off at some point or else pushed so far down on the agenda of
activities that they are effectively terminated. What kind of theory
could explain this?
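One way to mechanize the "repetition is boring" idea is to have the
production-system driver remember which states it has already visited
and cut a derivation off when one recurs. A minimal sketch follows;
the rule encoding is invented for illustration and is not how my
parser worked.

# A driver that cuts off loops by treating repeated states as boring.
def run(rules, state, max_steps=1000):
    seen = set()                  # states already visited
    for _ in range(max_steps):
        key = frozenset(state)    # a state is a set of facts here
        if key in seen:
            break                 # repetition is boring: stop looping
        seen.add(key)
        for condition, action in rules:
            if condition(state):
                state = action(state)
                break
        else:
            break                 # no rule fired: quiescence
    return state

# Two rules that would ping-pong forever without the cutoff.
rules = [
    (lambda s: "a" in s, lambda s: (s - {"a"}) | {"b"}),
    (lambda s: "b" in s, lambda s: (s - {"b"}) | {"a"}),
]
print(run(rules, {"a"}))          # terminates instead of looping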
Yet another (last one folks!) question is one that I raised a while
back, about all representations reducing down to attribute-value
pairs. Yes, AV-pairs used to be fashionable and are now out of style,
but I'm talking about a very deep underlying representation, in the same
way that the syntax of s-expressions underlies Lisp. Counterexamples
to my conjecture about AV-pairs being universal were algebraic
expressions (which can be turned into s-expressions, which can be
turned into AV-pairs) and continuous values, but they must have *some*
closed-form representation, which can then be reduced to AV-pairs. So
I remain unconvinced that the notion of objects with AV-pairs
attached is *not* universal (of course, for some things, the
representation is so primitive as to be as bad as Fortran, but then
this is an issue of possibility, not of goodness or efficiency).
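To show what I mean by the reduction, here is one mechanical sketch
that flattens an s-expression into a table of objects carrying nothing
but AV-pairs; the attribute names "op", "argN", and "value" are
arbitrary choices for illustration.

# Flatten a nested expression such as (+ (* 2 x) 1) into objects
# described only by attribute-value pairs.
def to_av_pairs(expr, table):
    obj = "obj%d" % len(table)
    table[obj] = {}                      # reserve the name before recursing
    if isinstance(expr, (list, tuple)):  # compound expression
        avs = {"op": expr[0]}
        for i, arg in enumerate(expr[1:]):
            avs["arg%d" % i] = to_av_pairs(arg, table)
        table[obj] = avs
    else:                                # atom: number, symbol, etc.
        table[obj] = {"value": expr}
    return obj

table = {}
root = to_av_pairs(["+", ["*", 2, "x"], 1], table)
for name, avs in table.items():
    print(name, avs)
# Every node, compound or atomic, ends up as an object with AV-pairs.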
Looking forward to comments on all of these questions...
stan the l.h.
utah-cs!shebs
------------------------------
Date: 22 Sep 83 11:26:47-PDT (Thu)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: drufl.663
To me personally, Rational Psychology is a misnomer.
"Rational" negates what "Psychology" wants to understand.
Flames to /dev/null.
Interesting discussions welcome.
Samir Shah
drufl!samir
AT&T Information Systems, Denver.
------------------------------
Date: 22 Sep 83 17:12:11-PDT (Thu)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.456
Samir's view: "To me personally, Rational Psychology
is a misnomer. "Rational" negates
what "Psychology" wants to understand."
How so?
Can you support your claim? What does psychology want to understand
that Rationality negates? Psychology is the Logos of the Psyche or
the logic of the psyche. How does one understand without logic? How
does one understand without rationality? What is it to understand? Isn't
language itself dependent upon the rational faculty, or more
specifically, upon the ability to form concepts, as opposed to
percepts? Can you understand without language? To be totally without
rationality (lacking the functional capacity for rationality
- the CONCEPTUAL faculty) would leave you without language, and
therefore without understanding. In what TERMS is something said to
be understood? How can terms have meaning without rationality?
Or perhaps you might claim that because men are not always rational,
man does not possess a rational faculty, or that it is defective or
inadequate? How about telling us WHY you think Rational negates
Psychology?
These issues are important to AI, psychology and philosophy
students... The day may not be far off when AI research yields
methods of feature abstraction and integration that approximate
percept-formation in humans. The next step, concept formation, will
be much harder. How does an epistemology come about? What are the
sequential steps necessary to form an epistemology of any kind? By
what method does the mind (what's that?) integrate percepts into
concepts, make identifications on a conceptual level ("It is an X"),
justify its identifications ("and I know it is an X because..."), and
then decide (what's that?) what to do about it ("...so therefore I
should do Y")?
Do you seriously think that understanding these things won't take
Rationality?
Norm Andrews, AT&T Information Systems, Holmdel, N.J. ariel!norm
------------------------------
Date: 22 Sep 83 12:02:28-PDT (Thu)
From: decvax!genrad!mit-eddie!mit-vax!eagle!mhuxi!mhuxj!mhuxl!achilles
!ulysses!princeton!leei@Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: princeto.77
I really think that the ability that we humans have that allows us to
avoid looping is the simple ability to recognize a loop in our logic
when it happens. This comes as a direct result of our tendency for
constant self-inspection and self-evaluation. A machine with this
ability, and the ability to inspect its own self-inspections . . .,
would probably also be able to "solve" the halting problem.
Of course, if the loop is too subtle or deep, then even we cannot see
it. This may explain the continued presence of various belief systems
that rely on inherently circular logic to get past their fundamental
problems.
-Lee Iverson
..!princeton!leei
------------------------------
End of AIList Digest
********************