AIList Digest Friday, 27 Jan 1984 Volume 2 : Issue 10
Today's Topics:
AI Culture - IJCAI Survey,
Cognition - Parallel Processing Query,
Programming Languages - Symbolics Support & PROLOG/ZOG Request,
AI Software - KEE Knowledge Representation System,
Review - Rivest Forsythe Lecture on Learning,
Seminars - Learning with Constraints & Semantics of PROLOG,
Courses - CMU Graduate Program in Human-Computer Interaction
----------------------------------------------------------------------
Date: 24 Jan 84 12:19:21 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: Report on "How AI People Think..."
I received a free copy because I attended IJCAI. I have an address
here, but I don't know if it is the appropriate one for ordering this
report:
Re: the report "How AI People Think - Cultural Premises of the AI community"
Commission of the European Communities
Rue de la Loi, 200
B-1049 Brussels, Belgium
(The report was compiled by Massimo Negrotti, Chair of Sociology of
Knowledge, University of Genoa, Italy)
Smadar (KEDAR-CABELLI@RUTGERS).
------------------------------
Date: Wed 18 Jan 84 11:05:26-PST
From: Rene Bach <BACH@SUMEX-AIM.ARPA>
Subject: brain, a parallel processor ?
What evidence is there that the brain is a parallel processor? My own
introspection seems to indicate that mine is doing time-sharing. That is,
I can follow only one idea at a time, but with a lot of switching
between reasoning paths (often more undirected than controlled
switching). Do different people have different processors? Or is the brain
able to function in more than one way (parallel, serial, time-sharing)?
Rene (bach@sumex)
------------------------------
Date: Wed, 25 Jan 84 15:37:39 CST
From: Mike Caplinger <mike@rice>
Subject: Symbolics support for non-Lisp languages
[This is neither an AI nor a graphics question per se, but I thought
these lists had the best chance of reaching Symbolics users...]
What kind of support do the Symbolics machines provide for languages
other than Lisp? Specifically, are there interactive debugging
facilities for Fortran, Pascal, etc.? It's my understanding that the
compilers generate Lisp output. Is this true, and if so, is the
interactive nature of Lisp exploited, or are the languages just
provided as batch compilers? Finally, does anyone have anything to say
about efficiency?
Answers to me, and I'll summarize if there's any interest. Thanks.
------------------------------
Date: Wed 25 Jan 84 09:38:25-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: KEE Representation System
The Jan. issue of IEEE Computer Graphics reports the following:
IntelliGenetics has introduced the Knowledge Engineering Environment
(KEE), an AI software development system for AI professionals, computer
scientists, and domain specialists. The database management program
development system is graphics oriented and interactive, permitting
use of a mouse, keyboard, command-option menus, display-screen
windows, and graphic symbols.
KEE is a frame-based representation system that provides support
for descriptive and procedural knowledge representation, and a
declarative, extendable formalism for controlling inheritance of
attributes and attribute values between related units of
knowledge. The system provides support for multiple inheritance
hierarchies; the use of user-extendable data types to promote
knowledge-base integrity; object-oriented programming; multiple-
inference engines/rule systems; and a modular system design through
multiple knowledge bases.
The first copy of KEE sells for $60,000; the second for $20,000.
Twenty copies cost $5000 each.
------------------------------
Date: 01/24/84 12:08:36
From: JAWS@MIT-MC
Subject: PROLOG and/or ZOG for TOPS-10
Does anyone out there know where I can get a version of PROLOG and/or
ZOG that will run on a DEC-10 (7.01)? The installation is owned by the
US government, albeit a benign part of it (DOT).
THANX JAWS@MC
------------------------------
Date: Tue 24 Jan 84 11:26:14-PST
From: Armar Archbold <ARCHBOLD@SRI-AI.ARPA>
Subject: Rivest Forsythe Lecture on Learning
[The following is a review of a Stanford talk, "Reflections on AI", by
Dr. Ron Rivest of MIT. I have edited the original slightly after getting
Armar's permission to pass it along. -- KIL]
Dr. Rivest's talk emphasized the interest of small-scale studies of
learning through experience (a "critter" with a few sensing and
effecting operations building up a world model of a blocks environment).
He stressed such familiar themes as
- "the evolutionary function and value of world models is predicting
the future, and consequently knowledge is composed principally of
expectations, possibilities, hypotheses - testable action-sensation
sequences, at the lowest level of sophistication",
- "the field of AI has focussed more on 'backdoor AI', where you
directly program in data structures representing high-level
knowledge, than on 'front-door' AI, which studies how knowledge is
built up from non-verbal experience, or 'side door AI', which studies
how knowledge might be gained through teaching and instruction using
language;
- such a study of simple learning systems in a simple environment -- in
which an agent with a given vocabulary but little or no initial
knowledge ("tabula rasa") investigates the world (either through
active experimentation or through changes imposed by perturbations
in the surroundings) and attempts to construct a useful body of
knowledge through recognition of identities, equivalences,
symmetries, homomorphisms, etc., and eventually metapatterns, in
action-sensation chains (represented perhaps in dynamic logic) -- is
of considerable interest.
Such concepts are not new. There have been many mathematical studies,
psychological simulations, and AI explorations along these lines since the
50s. At SRI, Stan Rosenschein was playing around with a simplified learning
critter about a year ago; Peter Cheeseman shares Rivest's interest in
Jaynes' use of entropy calculations to induce safe hypotheses in an
overwhelmingly profuse space of possibilities. Even so, these concerns
were worth having reactivated by a talk. The issues raised by some of the
questions from the audience were also interesting, albeit familiar:
- The critter which starts out with a tabula rasa will only make it
through the enormous space of possible patterns inducible from
experience if it initially "knows" an awful lot about how to learn,
at whatever level of procedural abstraction and/or "primitive"
feature selection (such as that done at the level of the eye itself).
- Do we call intelligence the procedures that permit one to gain useful
knowledge (rapidly), or the knowledge thus gained, or what mixture of
both?
- In addition, there is the question of what motivational structure
best furthers the critter's education. If the critter attaches value
to minimum surprise (various statistical/entropy measures thereof),
it can sit in a corner and do nothing, in which case it may one day
suddenly be very surprised and very dead. If it attaches tremendous
value to surprise, it could just flip a coin and always be somewhat
surprised. The mix between repetition (non-surprise/confirmatory
testing) and exploration which produces the best cognitive system is
a fundamental problem. And there is the notion of "best" - "best"
given the critter's values other than curiosity, or "best" in terms
of survivability, or "best" in a kind of Occam's razor sense
vis-a-vis truth (here it was remarked that one could rank Carnapian
world models built on simple primitive predicates using Kolmogorov
complexity measures, if only one could compute the latter...)
- The critter's success or failure in acquiring useful knowledge
depends very much on the particular world it is placed in. Certain
sequences of stimuli will produce learning and others won't, with a
reasonable, simple learning procedure. In simple artificial worlds,
it is possible to form some kind of measure of the complexity of the
environment by seeing what the minimum length action-sensation chains
are which are true regularities. Here there is another traditional
but fascinating question: what are the best worlds for learning with
respect to critters of a given type - if the world is very
stochastic, nothing can be learned in time; if the world is almost
unchanging, there is little motivation to learn and precious little
data about regular covariances to learn from.
Indeed, in psychological studies, there are certain sequences which
will bolster reliance on certain conclusions to such an extent that
those conclusions become (illegitimately) protected from
disconfirmation. Could one recreate this phenomenon with a simple
learning critter with a certain motivational structure in a certain
kind of world?
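The surprise-versus-repetition tradeoff discussed above can be made concrete.
Here is a minimal sketch (mine, not from the talk; all names are invented for
illustration) of a critter that scores each observation's surprise as the
negative log-probability of the sensation given the action, so that pure
repetition in a deterministic world drives surprise to zero:

```python
import math
from collections import defaultdict

class Critter:
    """Tracks how often each sensation has followed each action, and
    scores surprise as -log2 of the observed sensation's estimated
    probability under that action."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def surprise(self, action, sensation):
        total = sum(self.counts[action].values())
        seen = self.counts[action][sensation]
        if total == 0 or seen == 0:
            return float("inf")  # never seen before: maximally surprised
        return -math.log2(seen / total)

    def observe(self, action, sensation):
        # Score the surprise *before* updating the counts.
        s = self.surprise(action, sensation)
        self.counts[action][sensation] += 1
        return s
```

A critter that always repeats a confirmed action sees zero surprise forever
(the "sit in a corner" failure mode), while any novel sensation registers as
infinite surprise until it has been seen at least once.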
Although these issues seemed familiar, the talk certainly could stimulate
the general public.
Cheers - Armar
------------------------------
Date: Tue 24 Jan 84 15:45:06-PST
From: Juanita Mullen <MULLEN@SUMEX-AIM.ARPA>
Subject: SIGLUNCH ANNOUNCEMENT - FRIDAY, January 27, 1984
[Reprinted from the Stanford SIGLUNCH distribution.]
Friday, January 27, 1984
Chemistry Gazebo, between Physical & Organic Chemistry
12:05
SPEAKER: Tom Dietterich, HPP
Stanford University
TOPIC: Learning with Constraints
In attempting to construct a program that can learn the semantics of
UNIX commands, several shortcomings of existing AI learning techniques
have been uncovered. Virtually all existing learning systems are
unable to (a) perform data interpretation in a principled way, (b)
form theories about systems that contain substantial amounts of state
information, (c) learn from partial data, and (d) learn in a highly
incremental fashion. This talk will describe these shortcomings and
present techniques for overcoming them. The basic approach is to
employ a vocabulary of constraints to represent partial knowledge and
to apply constraint-propagation techniques to draw inferences from
this partial knowledge. These techniques are being implemented in a
system called EG, whose task is to learn the semantics of 13 UNIX
commands (ls, cp, mv, ln, rm, cd, pwd, chmod, umask, type, create,
mkdir, rmdir) by watching "over-the-shoulder" of a teacher.
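The abstract does not describe EG's internals, but the general idea it names --
representing partial knowledge as a set of still-viable hypotheses and letting
each observed trace act as a constraint that filters the set -- can be sketched
as follows. The command and hypothesis names are hypothetical, for
illustration only:

```python
# Candidate semantics for a hypothetical one-argument command: each
# hypothesis maps a file-set state and an argument to a predicted new state.
hypotheses = {
    "deletes": lambda files, arg: files - {arg},
    "creates": lambda files, arg: files | {arg},
    "no-op":   lambda files, arg: files,
}

def propagate(hyps, observations):
    """Keep only the hypotheses consistent with every observed
    (before-state, argument, after-state) trace."""
    for before, arg, after in observations:
        hyps = {name: h for name, h in hyps.items()
                if h(before, arg) == after}
    return hyps
```

Note how partial data leaves the knowledge partial: a trace in which the
argument is absent from the file set cannot distinguish "deletes" from
"no-op", and only a further, more informative trace narrows the set to one.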
------------------------------
Date: 01/25/84 17:07:14
From: AH
Subject: Theory of Computation Seminar
[Forwarded from MIT-MC by SASW.]
DATE: February 2nd, 1984
TIME: 3:45PM Refreshments
4:00PM Lecture
PLACE: NE43-512A
"OPERATIONAL AND DENOTATIONAL SEMANTICS FOR P R O L O G"
by
Neil D. Jones
Datalogisk Institut
Copenhagen University
Abstract
A PROLOG program can go into an infinite loop even when there exists a
refutation of its clauses by resolution theorem-proving methods. Consequently
one cannot identify resolution of Horn clauses in first-order logic with
PROLOG as it is actually used, namely, as a deterministic programming
language. In this talk two "computational" semantics of PROLOG will be given.
One is operational and is expressed as an SECD-style interpreter which is
suitable for computer implementation. The other is a Scott-Strachey style
denotational semantics. Both were developed from the SLD-refutation procedure
of Kowalski and of Apt and van Emden, and both handle "cut".
HOST: Professor Albert R. Meyer
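[The abstract's opening point can be illustrated with a propositional sketch
(mine, not the speaker's). For the program "p :- p.  p." a refutation of the
goal p exists, and a fair (breadth-first) SLD search finds it, while PROLOG's
actual depth-first, first-clause-first strategy loops forever on the
recursive clause:

```python
from collections import deque

# Propositional Horn program:  p :- p.   p.
# Each clause body is a list of subgoals; the recursive clause comes first,
# as it would in the PROLOG source.
clauses = {"p": [["p"], []]}

def bfs_refutable(goal, limit=100):
    """Fair (breadth-first) SLD search: finds the refutation despite the loop."""
    frontier = deque([[goal]])
    for _ in range(limit):
        if not frontier:
            return False
        goals = frontier.popleft()
        if not goals:
            return True  # empty goal list: refutation found
        for body in clauses.get(goals[0], []):
            frontier.append(body + goals[1:])
    return False

def dfs_refutable(goal, limit=100):
    """PROLOG's strategy: depth-first, first clause first. With p :- p
    listed first, it keeps expanding the recursive branch and never
    reaches the fact p within any step bound."""
    stack = [[goal]]
    for _ in range(limit):
        if not stack:
            return False
        goals = stack.pop()
        if not goals:
            return True
        # Push bodies in reverse so the first clause is expanded first.
        for body in reversed(clauses.get(goals[0], [])):
            stack.append(body + goals[1:])
    return False
```

This is exactly the gap the two semantics in the talk are meant to close:
they model the deterministic, depth-first language actually implemented,
not the nondeterministic refutation procedure. -- KIL]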
------------------------------
Date: Wednesday, 25 Jan 84 23:47:29 EST
From: reiser (brian reiser) @ cmu-psy-a
Reply-to: <Reiser%CMU-PSY-A@CMU-CS-PT>
Subject: Human-Computer Interaction Program at CMU
***** ANNOUNCEMENT *****
Graduate Program in Human-Computer Interaction
at Carnegie-Mellon University
The field of human-computer interaction brings to bear theories and
methodologies from cognitive psychology and computer science to the design
of computer systems, to instruction about computers, and to
computer-assisted instruction. The new Human-Computer Interaction program
at CMU is geared toward the development of cognitive models of the complex
interaction between learning, memory, and language mechanisms involved in
using computers. Students in the program apply their psychology and
computer science training to research in both academic and industry
settings.
Students in the Human-Computer Interaction program design their educational
curricula with the advice of three faculty members who serve as the
student's committee. The intent of the program is to guarantee that
students have the right combination of basic and applied research
experience and coursework so that they can do leading research in the
rapidly developing field of human-computer interaction. Students typically
take one psychology course and one computer science course each semester
for the first two years. In addition, students participate in a seminar on
human-computer interaction held during the summer of the first year in
which leading industry researchers are invited to describe their current
projects.
Students are also actively involved in research throughout their graduate
career. Research training begins with a collaborative and apprentice
relationship with a faculty member in laboratory research for the first one
or two years of the program. Such involvement allows the student several
repeated exposures to the whole sequence of research in cognitive
psychology and computer science, including conceptualization of a problem,
design and execution of experiments, analysis of data, design and
implementation of computer systems, and writing scientific reports.
In the second half of their graduate career, students participate in
seminars, teaching, and an extensive research project culminating in a
dissertation. In addition, an important component of students' training
involves an internship working on an applied project outside the academic
setting. Students and faculty in the Human-Computer Interaction program
are currently studying many different cognitive tasks involving computers,
including: construction of algorithms, design of instruction for computer
users, design of user-friendly systems, and the application of theories of
learning and problem solving to the design of systems for computer-assisted
instruction.
Carnegie-Mellon University is exceptionally well suited for a program in
human-computer interaction. It combines a strong computer science
department with a strong psychology department and has many lines of
communication between them. There are many shared seminars and research
projects. They also share in a computational community defined by a large
network of computers. In addition, CMU and IBM have committed to a major
effort to integrate personal computers into college education. By 1986,
every student on campus will have a powerful state-of-the-art personal
computer. It is anticipated that members of the Human-Computer Interaction
program will be involved in various aspects of this effort.
The following faculty from the CMU Psychology and Computer Science
departments are participating in the Human-Computer Interaction Program:
John R. Anderson, Jaime G. Carbonell, John R. Hayes, Elaine Kant, David
Klahr, Jill H. Larkin, Philip L. Miller, Allen Newell, Lynne M. Reder, and
Brian J. Reiser.
Our deadline for receiving applications, including letters of
recommendation, is March 1st. Further information about our program and
application materials may be obtained from:
John R. Anderson
Department of Psychology
Carnegie-Mellon University
Pittsburgh, PA 15213
------------------------------
End of AIList Digest
********************