AIList Digest            Wednesday, 18 Jan 1984      Volume 2 : Issue 8
Today's Topics:
Programming Languages - Lisp for IBM,
Intelligence - Subcognition,
Seminar - Knowledge-Based Design Environment
----------------------------------------------------------------------
Date: Thu 12 Jan 84 15:07:55-PST
From: Jeffrey Mogul <MOGUL@SU-SCORE.ARPA>
Subject: Re: lisp for IBM
[Reprinted from the SU-SCORE bboard.]
Does anyone know of LISP implementations for the IBM 370/3033/308x?
Reminds me of an old joke:
How many IBM machines does it take to run LISP?
Answer: two -- one to send the input to the PDP-10, one
to get the output back.
------------------------------
Date: Thursday, 12 Jan 1984 21:28-PST
From: Steven Tepper <greep@SU-DSN>
Subject: Re: lisp for IBM
[Reprinted from the SU-SCORE bboard.]
Well, I used Lisp on a 360 once, but I certainly wouldn't recommend
that version (I don't remember where it came from anyway -- the authors
were probably so embarrassed they wanted to remain anonymous). It
was, of course, a batch system, and its only output mode was "uglyprint" --
no matter what the input looked like, the output would just be printed
120 columns to a line.
------------------------------
Date: Fri 13 Jan 84 06:55:00-PST
From: Ethan Bradford <JLH.BRADFORD@SU-SIERRA.ARPA>
Subject: LISP (INTERLISP) for IBM
[Reprinted from the SU-SCORE bboard.]
Chris Ryland (CPR@MIT-XX) sent out a query on this before and he got
back many good responses (he gave me copies). The main thing most
people said is that a version was developed at Uppsala in Sweden in
the 70's. One person gave an address to write to, which I transcribe
here with no guarantee that it is still current:
Klaus Appel
UDAC
Box 2103
750 02 Uppsala
Sweden
Phone: 018-11 13 30
------------------------------
Date: 13 Jan 84 0922 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Lisp for IBM machines
[Reprinted from the SU-SCORE bboard.]
Standard Lisp runs quite well on the IBM machines.
The folks over at IMSSS on campus know all about it --
they have written several large theorem proving/CAI programs for
that environment.
------------------------------
Date: 11 January 1984 06:27 EST
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: intelligence and genius
I should have thought that if you can make a machine more or
less intelligent, and make another machine ABLE TO RECOGNIZE
GENIUS (it need not itself be able to "be" or "have" genius),
then the "genius machine" problem is probably solved: have the
somewhat intelligent one generate lots of ideas, with random
factors thrown in, and have the second "recognizing" machine
judge the products.
Obviously they could be combined into one machine.
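[A minimal Python sketch of this generate-and-test scheme; the seed
ideas, the recombination rule, and the scoring stub are all invented
for illustration, and the whole idea stands or falls on replacing
judge() with a real genius-recognizer:]

  import random

  def generate(seed_ideas, n=100):
      # The "somewhat intelligent" machine: produce n candidate ideas
      # by randomly recombining a pool of seed ideas.
      candidates = []
      for _ in range(n):
          a, b = random.sample(seed_ideas, 2)
          candidates.append(a + " + " + b)
      return candidates

  def judge(candidate):
      # The "recognizing" machine. A stub: it need not have genius
      # itself, but it must be able to recognize it.
      return random.random()

  def genius_machine(seed_ideas):
      # The two machines combined: generate lots of ideas with random
      # factors thrown in, and keep whatever the judge rates highest.
      return max(generate(seed_ideas), key=judge)

  print(genius_machine(["relativity", "jazz", "set theory"]))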
------------------------------
Date: Sunday, 15 January 1984, 00:18-EST
From: Marek W. Lugowski <MAREK%MIT-OZ@MIT-MC.ARPA>
Subject: Addressing DRogers' questions (at last) + on subcognition
DROGERS (c. November '83):
I have a few questions I would like to ask, some (perhaps most)
essentially unanswerable at this time.
Apologies in advance for rashly attempting to answer at this time.
- Should the initially constructed subcognitive systems be
"learning" systems, or should they be "knowledge-rich" systems? That
is, are the subcognitive structures implanted with their knowledge
of the domain by the programmer, or is the domain presented to the
system in some "pure" initial state? Is the approach to
subcognitive systems without learning advisable, or even possible?
I would go out on a limb and claim that attempting wholesale "learning"
first (whatever that means these days) is silly. I would think one
would first want to spike the system with a hell of a lot of knowledge
(e.g., Dughof's "Slipnet" of related concepts whose links are subject to
cumulative, partial activation which eventually makes the nodes so
connected highly relevant and therefore taken into consideration by the
system). To repeat Minsky (and probably most of the AI folk): one can
only learn if one already almost knows it.
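[A toy Python sketch of the cumulative, partial activation described
above; the nodes, link weights, and relevance threshold are invented
here, and the actual Slipnet is considerably richer:]

  # Nodes accumulate partial activation over weighted links.
  links = {
      "cat": [("animal", 0.8), ("pet", 0.6)],
      "dog": [("animal", 0.8), ("pet", 0.7)],
  }

  activation = {n: 0.0 for n in ("cat", "dog", "animal", "pet")}

  def activate(node, amount=1.0):
      # Pump activation into a node and spread a fraction of it along
      # each link; activation is cumulative across calls.
      activation[node] += amount
      for neighbor, weight in links.get(node, []):
          activation[neighbor] += amount * weight

  activate("cat")
  activate("dog")
  # "animal" (1.6) and "pet" (1.3) are now the most activated nodes:
  # highly relevant without ever having been mentioned directly.
  relevant = [n for n, a in activation.items() if a > 1.0]
  print(relevant)  # ['animal', 'pet']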
- Assuming human brains are embodiments of subcognitive systems,
then we know how they were constructed: a very specific DNA
blueprint controlling the paths of development possible at various
times, with large assumptions as to the state of the intellectual
environment. This grand process was created by trial-and-error
through the process of evolution, that is, essentially random
chance. How much (if any) of the subcognitive system must be created
essentially by random processes? If essentially all, then there are
strict limits as to how the problem should be approached.
This is an empirical question. If my current attempt at implementing
the Copycat Project (which uses the Slipnet described above)
[forthcoming MIT AIM #755 by Doug Hofstadter] converges nicely, with
trivial tweaking, I'll be inclined to hold that random processes can
indeed do most of the work. Such is my current, unfounded, belief. On
the other hand, a failure will not debunk my position--I could always
have messed up implementationally and made bad guesses which "threw"
the system out of its potential convergence.
- Which processes of the human brain are essentially subcognitive
in construction, and which use other techniques? Is this balance
optimal? Which structures in a computational intelligence would be
best approached subcognitively, and which by other methods?
Won't even touch the "optimal" question. I would guess any process
involving a great deal of fan-in would need to be subcognitive in
nature. This is argued from efficiency. For now, and for want of
better theories, I'd approach ALL brain functions using subcognitive
models. The alternative to this at present means von Neumannizing the
brain, an altogether quaint thing to do...
- How are we to judge the success of a subcognitive system? The
problems inherent in judging the "ability" of the so-called expert
systems will be many times worse in this area. Without specific goal
criteria, any results will be unsatisfying and potentially illusory
to the watching world.
Performance and plausibility (in that order) ought to be our criteria.
Judging performance accurately, however, will continue to be difficult
as long as we are forced to use current computer architectures.
Still, if a subcognitive system converges at all on a LispM, there's no
reason to damn its performance. Plausibility is easier to demonstrate;
one needs to keep in touch with the neurosciences to do that.
- Where will thinking systems REALLY be more useful than (much
refined) expert systems? I would guess that for many (most?)
applications, expertise might be preferable to intelligence. Any
suggestions about fields for which intelligent systems would have a
real edge over (much improved) expert systems?
It's too early (or, too late?!) to draw such clean lines. Perhaps REAL
thinking and expertise are much more intertwined than is currently
thought. Anyway, there is nothing to be gained by pursuing that line of
questioning before WE learn how to explicitly organize knowledge better.
Overall, I defend pursuing things subcognitively for these reasons:
-- Not expecting thinking to be a cleanly organized, top-down-driven
activity keeps one's expectations minimal. Compare thinking with such
systems as cellular automata (e.g., the Game of Life) or the Iterated
Pairwise Prisoner's Dilemma to convince yourself of the futility of
top-down modeling where local rules and their iterated interactions
are very successful at concisely describing the problem at hand (see
the Life sketch after this list). There is no reason to expect the
brain's top-level behavior to be any easier to explain away.
-- AI has been spending a lot of itself on forcing a von Neumannian
interpretation on the mind. At CMU they have it down to an art, with
Simon's "symbolic information processing" by now the proverbial Holy
Grail. With all due respect, I'd like to see more research devoted to
modeling various alleged brain activities with a high degree of
parallelism and probabilistic interaction, systems where "symbols" are
not givens but intricately involved intermediates of computation.
-- It has not been done carefully before and I want at least a thesis
out of it.
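[To make the first point concrete, here is a standard Python rendering
of Conway's Game of Life, whose purely local rules concisely generate
global behavior that resists top-down summary; nothing in it is
specific to Marek's own work:]

  from collections import Counter

  def life_step(live_cells):
      # One step of Life over a set of (x, y) cells. Each rule looks
      # only at a cell and its eight neighbors.
      counts = Counter(
          (x + dx, y + dy)
          for (x, y) in live_cells
          for dx in (-1, 0, 1)
          for dy in (-1, 0, 1)
          if (dx, dy) != (0, 0)
      )
      return {c for c, n in counts.items()
              if n == 3 or (n == 2 and c in live_cells)}

  # A glider: five cells that propagate themselves across the grid
  # forever, though no rule above mentions "glider" at all.
  cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
  for _ in range(4):
      cells = life_step(cells)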
-- Marek
------------------------------
Date: Mon, 16 Jan 1984 12:40 EST
From: GLD%MIT-OZ@MIT-MC.ARPA
Subject: minority report
From: MAREK
To repeat Minsky (and probably most of the AI folk): one can
only learn if one already almost knows it.
By "can only learn if..." do you mean "can't >soon< learn unless...", or
do you mean "can't >ever< learn unless..."?
If you mean "can't ever learn unless...", then the statement has the Platonic
implication that a person at infancy must "already almost know" everything she
is ever to learn. This can't be true for any reasonable sense of "almost
know".
If you mean "can't soon learn unless...", then by "almost knows X", do you
intend:
o a narrow interpretation, by which a person almost knows X only if she
already has knowledge which is a good approximation to understanding X--
e.g., she can already answer simpler questions about X, or can answer
questions about X, but with some confusion and error; or
o a broader interpretation, which, in addition to the above, counts as
"almost knowing X" a situation where a person might be completely in the
dark about X-- say, unable to answer any questions about X-- but is on the
verge of becoming an instant expert on X, say by discovering (or by being
told of) some easy-to-perform mapping which reduces X to some other,
already-well-understood domain.
If you intend the narrow interpretation, then the claim is false, since people
can (sometimes) soon learn X in the manner described in the broad-
interpretation example. But if you intend the broad interpretation, then the
statement expands to "one can't soon learn X unless one's current knowledge
state is quickly transformable to include X"-- which is just a tautology.
So, if this analysis is right, the statement is either false, or empty.
------------------------------
Date: Mon, 16 Jan 1984 20:09 EST
From: MAREK%MIT-OZ@MIT-MC.ARPA
Subject: minority report
From: MAREK
To repeat Minsky (and probably most of the AI folk): one can
only learn if one already almost knows it.
From: GLD
By "can only learn if..." do you mean..."can't >ever< learn unless..."?
If you mean "can't ever learn unless...", then the statement has
the Platonic implication that a person at infancy must "already almost
know" everything she is ever to learn. This can't be true for any
reasonable sense of "almost know".
I suppose I DO mean "can't ever learn unless". However, I disagree
with your analysis. The "Platonic implication" need not be what you
stated it to be if one cares to observe that some of the things an
entity can learn are...how to learn better and how to learn more. My
original statement presupposes the existence of a category system--a
capacity to pigeonhole, if you will. Surely you won't take issue with
the hypothesis that an infant's category system is poorer than that of
an adult. Yet, faced with the fact that many infants do become
adults, we have to explain how the category system manages to grow
up as well.
In order to do so, I propose to think of human learning as a
process where, say, in order to assimilate a chunk of information
one has to have a hundred-, nay, a thousand-fold store of SIMILAR
chunks. This is by direct analogy with physical growing up--it
happens very slowly, gradually, incrementally--and yet it happens.
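[One crude way to render this proposal computational in Python; the
shared-word similarity test and the threshold below are placeholders,
scaled far down from the hundred- or thousand-fold store proposed
above:]

  def similar(a, b):
      # Placeholder similarity test: chunks that share a word.
      return bool(set(a.split()) & set(b.split()))

  class ChunkStore:
      # Assimilate a new chunk only when the store already holds
      # enough SIMILAR chunks -- "one can only learn if one already
      # almost knows it", taken literally.
      THRESHOLD = 3

      def __init__(self, chunks=()):
          self.chunks = list(chunks)

      def assimilate(self, chunk):
          support = sum(similar(chunk, c) for c in self.chunks)
          if support >= self.THRESHOLD:
              self.chunks.append(chunk)
              return True
          return False  # not "almost known" yet; learning fails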
If you recall, my original statement was made against attempting
"wholesale learning" as opposed to "knowledge-rich" systems when
building subcognitive systems. Admittedly, the complexity of a human
being is many orders of magnitude beyond what AI will attempt
for decades to come, yet by observing the physical development of a
child we can arrive at some sobering tips for how to successfully
build complex systems. Abandoning the utopia of having complex
systems just "self-organize" and pop out of simple interactions of a
few even simpler pieces is one such tip.
-- Marek
------------------------------
Date: Tue 17 Jan 84 11:56:01-PST
From: Juanita Mullen <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - JANUARY 20, 1984
[Reprinted from the Stanford SIGLUNCH distribution.]
Friday, January 20, 1984 12:05
LOCATION: Chemistry Gazebo, between Physical & Organic Chemistry
SPEAKER: Harold Brown
Stanford University
TOPIC: Palladio: An Exploratory Environment for Circuit Design
Palladio is an environment for experimenting with design methodologies
and knowledge-based design aids. It provides the means for
constructing, testing and incrementally modifying design tools and
languages. Palladio is a testbed for investigating elements of
design including specification, simulation, refinement and use of
previous designs.
For the designer, Palladio supports the construction of new
specification languages particular to the design task at hand and
augmentation of the system's expert knowledge to reflect current
design goals and constraints. For the design environment builder,
Palladio provides several programming paradigms: rule-based,
object-oriented, data-oriented, and logical-reasoning-based. These
capabilities are largely provided by two of the programming systems in
which Palladio is implemented: LOOPS and MRS.
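[The abstract shows none of Palladio's actual LOOPS or MRS code; as a
flavor of the rule-based paradigm it mentions, here is a generic
Python sketch of a design-rule critic, with the rules and the circuit
representation invented for illustration:]

  RULES = [
      ("unconnected output",
       lambda c: c["outputs"] and not c["connected"]),
      ("fanout too high",
       lambda c: c["fanout"] > 8),
  ]

  def critique(circuit):
      # Apply every design rule to every component; report violations.
      return [(comp["name"], rule)
              for comp in circuit
              for rule, violated in RULES
              if violated(comp)]

  circuit = [
      {"name": "nand1", "outputs": ["y"], "connected": False,
       "fanout": 2},
  ]
  print(critique(circuit))  # [('nand1', 'unconnected output')]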
In this talk, we will describe the basic design concepts on which
Palladio is based, give examples of knowledge-based design aids
developed within the environment, and describe Palladio's
implementation.
------------------------------
End of AIList Digest
********************