AIList Digest           Saturday, 10 Sep 1983      Volume 1 : Issue 55 

Today's Topics:
Intelligence - Turing Test & Definitions,
AI Environments - Computing Power & Social Systems
----------------------------------------------------------------------

Date: Saturday, 3 Sep 1983 13:57-PDT
From: bankes@rand-unix
Subject: Turing Tests and Definitions of Intelligence


As much as I dislike adding one more opinion to an overworked topic, I
feel compelled to make a comment on the ongoing discussion of the
Turing test. It seems to me quite clear that the Turing test serves
as a tool for philosophical argument and not as a defining criterion.
It serves the purpose of enlightening those who would assert the
impossibility of any machine ever being intelligent. The point is, if
a machine which would pass the test could be produced, then a person
would have either to admit it to be intelligent or else accept that
his definition of intelligence is something which cannot be perceived
or tested.

However, when the Turing test is used as a tool with which to think
about "What is intelligence?" it leads primarily to insights into the
psychology and politics of what people will accept as intelligent.
(This is a consequence of the democratic definition: it's intelligent
if everybody agrees it is.) Hence, we get all sorts of distractions:
must an intelligent machine make mistakes, should an intelligent
machine have emotions, and, most recently, would an intelligent machine
be prejudiced? All of this deals with a sociological viewpoint on
what is intelligent, and gets us no closer to a fundamental
understanding of the phenomenon.

Intelligence is an old word, like virtue and honor. It may well be
that the progress of our understanding will make it obsolete, the word
may come to suggest the illusions of an earlier time. Certainly, it
is much more complex than our language patterns allow. The Turing
test suggests it is a boolean: you've got it or you don't. We
commonly use "smart" as a relation: you're smarter than I am, but
we're both smarter than Rover. This suggests intelligence is a
scalar, hence IQ tests. But recent experience with IQ testing across
cultures, together with the data from comparative psychology, suggests that
intelligence is at least multi-dimensional. Burrowing animals on the
whole do better at mazes than others. Animals whose primary defense
is flight respond differently to aversive conditioning than do more
aggressive species.

We may have seen a recapitulation of this in the last twenty years'
experience with AI. We have moved from looking for the philosopher's
stone, the single thing needed to make something intelligent, to
knowledge based systems. No one would reasonably discuss (I think)
whether my program is smarter than yours. But we might be able to say
that mine knows more about medicine than yours or that mine has more
capacity for discovering new relations of a specified type.

Thus I would suggest that the word intelligence (noun that it is,
suggesting a thing which might somehow be gotten ahold of) should be
used with caution. And that the Turing test, as influential as it has
been, may have outlived its usefulness, at least for discussions among
the faithful.


-Steve Bankes
RAND

------------------------------

Date: Sat, 3 Sep 83 17:07:33 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: Learning Complexity

There was recently a query on AIList about how to characterize
learning complexity (and saying that may be the crucial issue in
intelligence). Actually, I have been thinking about this recently, so
I thought I would comment. One way to characterize the learning
complexity of procedural skills is in terms of what kind of production
system is needed to perform the skill. For example, the kind of
things a slug or crayfish (currently popular species in biopsychology)
can do seem characterizable by production systems with minimal
internal memory, conditions that are simple external states of the
world, and actions that are direct physical actions (this is
stimulus-response psychology in a nutshell). However, human skills
(programming computers, doing geometry, etc.) need much more complex
production systems with complex networks as internal memories,
conditions that include variables, and actions that are mental in
addition to direct physical actions. Of course, what form productions
would have to take to exhibit human-level intelligence (if, indeed,
they can) is an open question and a very active field of research.
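
A rough sketch in Python, purely illustrative (the rule and memory
names here are hypothetical), of the two classes of production system
contrasted above: the stimulus-response rule tests only the external
world and acts on it directly, while the second rule binds a variable
and performs a mental action on an internal memory.

def run(rules, memory, world, max_cycles=10):
    """Fire the first rule whose condition matches, until quiescence."""
    for _ in range(max_cycles):
        for condition, action in rules:
            bindings = condition(memory, world)
            if bindings is not None:
                action(memory, world, bindings)
                break
        else:
            break              # no rule fired; the system is quiescent
    return memory, world

# Stimulus-response rule: no internal memory, no variables.
sr_rule = (
    lambda mem, world: ({} if world.get("light") == "on"
                        and not world.get("moving") else None),
    lambda mem, world, b: world.update(moving=True))

# Rule with a variable binding and a mental (memory-updating) action.
var_rule = (
    lambda mem, world: ({"x": world["object"]}
                        if "object" in world
                        and world["object"] not in mem["seen"] else None),
    lambda mem, world, b: mem["seen"].add(b["x"]))

memory, world = run([var_rule, sr_rule],
                    {"seen": set()},
                    {"light": "on", "object": "maze-exit"})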

------------------------------

Date: 5 Sep 83 09:42:44 PDT (Mon)
From: woodson%UCBERNIE@Berkeley (Chas Woodson)
Subject: AI and computing power

Can you direct me to some wise comments on the following question?
Is the progress of AI being held up by lack of computing power?


[Reply follows. -- KIL]

There was a discussion of this on Human-Nets a year ago.
I am reprinting some of the discussion below.

My own feeling is that we are not being held back. If we had
infinite compute power tomorrow, we would not know how to use it.
Others take the opposite view: that intelligence may be brute force
search, massive theorem proving, or large rule bases and that we are
shying away from the true solutions because we want a quick finesse.
There is also a view that some problems (e.g. vision) may require
parallel solutions, as opposed to parallel speedup of iterative
solutions.

The AI principal investigators seem to feel (see the Fall AI Magazine)
that it would be enough if each AI investigator had a Lisp Machine
or equivalent funding. I would extend that a little further. I think
that the biggest bottleneck right now is the lack of support staff --
systems wizards, apprentice programmers, program librarians, software
editors (i.e., people who edit other people's code), evaluators,
integrators, documenters, etc. Could Lucas have made Star Wars
without a team of subordinate experts? We need to free our AI
gurus from the day-to-day trivia of coding and system building just
as we use secretaries and office machines to free our management
personnel from administrative trivia. We need to move AI from the
lone inventor stage to the industrial laboratory stage. This is a
matter of social systems rather than hardware.

-- Ken Laws

------------------------------

Date: Tuesday, 12 October 1982 13:50-EDT
From: AGRE at MIT-MC
Subject: artificial intelligence and computer architecture

[Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

A couple of observations on the theory that AI is being held back by
the sorry state of computer architecture.

First, there are three projects that I know of in this country that
are explicitly trying to deal with the problem. They are Danny
Hillis' Connection Machine project at MIT, Scott Fahlman's NETL
machine at CMU, and the NON-VON project at Columbia (I can't
remember who's doing that one right offhand).

Second, the associative memory fad came and went very many years
ago. The problem, simply put, is that human memory is a more
complicated place than even the hairiest associative memory chip.
The projects I have just mentioned were all first meant as much more
sophisticated approaches to "memory architectures", though they have
become more than that since.

Third, it is quite important to distinguish between computer
architectures and computational concepts. The former will always
lag ten years behind the latter. In fact, although our computer
architectures are just now beginning to pull convincingly out of the
von Neumann trap, the virtual machines that our computer languages
run on haven't been in the von Neumann style for a long time. Think
of object-oriented programming or semantic network models or
constraint languages or "streams" or "actors" or "simulation" ideas
as old as Simula and VDL. True, these are implemented on serial
machines, but they evoke conceptions of computation closer to
our ideas about how the physical world works, with notions of causal
locality and data flow and asynchronous communication quite
analogous to those of physics; one uses these languages properly not
by thinking of serial computers but by thinking in these more
general terms. These are the stuff of everyday programming, at
least among the avant garde in the AI labs.

None of this is to say that AI's salvation isn't in computer
architecture. But it is to say that the process of freeing
ourselves from the technology of the 40's is well under way.
(Yes, I know, hubris.) - phiL

------------------------------

Date: 13 Oct 1982 08:34 PDT
From: DMRussell at PARC-MAXC
Subject: AI and alternative architectures

[Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

There is a whole subfield of AI growing up around parallel
processing models of computation. It is characterized by the use of
massive compute engines (or models thereof) and a corresponding
disregard for efficiency concerns. (Why not, when you've got n^n
processors?)

"Parallel AI" is a result of a crossing of interests from neural
modelling, parallel systems theory, and straightforward AI.
Currently, the most interesting work has been done in vision --
where the transformation from pixel data to more abstract
representations (e.g. edges, surfaces or 2.5-D data) via parallel
processing is pretty easy. There has been rather less success in
other, not-so-obviously parallel, fields.

Some work that is being done:

Jerry Feldman & Dana Ballard (University of Rochester)
-- neural modelling, vision
Steve Small, Gary Cottrell, Lokendra Shastri (University of Rochester)
-- parallel word sense and sentence parsing
Scott Fahlman (CMU) -- knowledge rep in a parallel world
??? (CMU) -- distributed sensor net people
Geoff Hinton (UC San Diego?) -- vision
Daniel Sabbah (IBM) -- vision
Rumelhart (UC San Diego) -- motor control
Carl Hewitt, Bill Kornfeld (MIT) -- problem solving

(not a complete list -- just a hint)

The major concern of these people has been controlling the parallel
beasts they've created. Basically, each of the systems accepts data
at one end, and then munges the data and various hypotheses about
the data until the entire system settles down to a single
interpretation. It is all very messy, and incredibly difficult to
prove anything. (e.g. Under what conditions will this system
converge?)
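
A rough sketch in Python, purely illustrative and not taken from any
of the systems named above, of the kind of settling loop described:
each unit carries weighted hypotheses (labels), neighbouring units
reinforce compatible hypotheses, and iteration continues until the
weights stop changing and a single interpretation can be read off.

def relax(weights, neighbours, compat, iterations=100, rate=0.1):
    """weights[u][l] : belief that unit u carries label l (normalized)
    neighbours[u]    : units adjacent to u
    compat[(l1, l2)] : support that label l2 at a neighbour lends l1"""
    for _ in range(iterations):
        new = {}
        for u, labels in weights.items():
            support = {l: sum(compat.get((l, l2), 0.0) * w2
                              for v in neighbours[u]
                              for l2, w2 in weights[v].items())
                       for l in labels}
            raw = {l: max(0.0, w * (1.0 + rate * support[l]))
                   for l, w in labels.items()}
            total = sum(raw.values()) or 1.0
            new[u] = {l: w / total for l, w in raw.items()}
        settled = all(abs(new[u][l] - weights[u][l]) < 1e-6
                      for u in weights for l in weights[u])
        weights = new
        if settled:
            break
    # Read off the interpretation: the strongest label at each unit.
    return {u: max(ls, key=ls.get) for u, ls in weights.items()}

Whether and when such a loop converges depends on the compatibility
weights, which is exactly the difficulty noted above.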

The obvious question is this: What does all of this alternative
architecture business buy you? So far, I think it's an open
question. Suggestions?

-- DMR --

------------------------------

Date: 13 Oct 1982 1120-PDT
From: LAWS at SRI-AI
Subject: [LAWS at SRI-AI: AI Architecture]


[Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

In response to Glasser @LLL-MFE:

I doubt that new classes of computer architecture will be the
solution to building artificial intelligence. Certainly we could
use more powerful CPUs, and the new generation of LISP machines makes
practical approaches that were merely feasibility demonstrations
before. The fact remains that if we don't have the algorithms for
doing something with current hardware, we still won't be able to do
it with faster or more powerful hardware.

Associative memories have been built in both hardware and software.
See, for example, the LEAP language that was incorporated into the
SAIL language. (MAINSAIL, an impressive offspring of SAIL, has
abandoned this approach in favor of subroutines for hash table
maintenance.) Hardware is also being built for data flow languages,
applicative languages, parallel processing, etc. To some extent
these efforts change our way of thinking about problems, but for the
most part they only speed up what we knew how to do already.
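
A rough illustration in Python, entirely hypothetical in its names
and not LEAP's actual syntax: the associative facility referred to
here stores triples and retrieves them with any field left
unspecified, and, much as the MAINSAIL remark suggests, it can be
approximated in software with nothing more than hash tables.

from collections import defaultdict

class TripleStore:
    """Store (attribute, object, value) triples; None is a wildcard."""
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, attribute, obj, value):
        triple = (attribute, obj, value)
        # Index every pattern obtained by masking fields with None, so
        # that each wildcard lookup is a single hash-table probe.
        for mask in range(8):
            key = tuple(None if (mask >> i) & 1 else field
                        for i, field in enumerate(triple))
            self.index[key].add(triple)

    def lookup(self, attribute=None, obj=None, value=None):
        return self.index.get((attribute, obj, value), set())

store = TripleStore()
store.add("father", "jacob", "isaac")
store.add("father", "isaac", "abraham")
store.lookup(attribute="father", obj="jacob")  # one matching triple
store.lookup(attribute="father")               # both triples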

For further speculation about what we would do with "massively
parallel architectures" if we ever got them, I suggest the recent
papers by Dana Ballard and Geoffrey Hinton, e.g. in the Aug. ['82]
AAAI conference proceedings [...]. My own belief is that the "missing
link" to AI is a lot of deep thought and hard work, followed by VLSI
implementation of algorithms that have (probably) been tested using
conventional software running on conventional architectures. To be
more specific, we would have to choose a particular domain, since
different areas of AI require different solutions.

Much recent work has focused on the representation of knowledge in
various domains: representation is a prerequisite to acquisition and
manipulation. Dr. Lenat has done some very interesting work on a
program that modifies its own representations as it analyzes its own
behavior. There are other examples of programs that learn from
experience. If we can master knowledge representation and learning,
we can begin to get away from programming by full analysis of every
part of every algorithm needed for every task in a domain. That
would speed up our progress more than new architectures.

[...]

-- Ken Laws

------------------------------

End of AIList Digest
********************
