AIList Digest Wednesday, 15 Jan 1986 Volume 4 : Issue 8
Today's Topics:
Queries - Macsyma & Symbolics Prolog & Speech Learning Machine,
Definition - Paradigm,
Intelligence - Computer IQ Tests,
AI Tools & Applications - Expert Systems and Computer Graphics &
Common Lisp for Xerox & Real-Time Process Control
----------------------------------------------------------------------
Date: 13 Jan 86 23:16 GMT
From: dkb-amos @ HAWAII-EMH.ARPA
Subject: Macsyma
I would appreciate any help that could be supplied in locating a
source for Macsyma.
I'm looking for a version that will run under Franzlisp Opus 38.91.
We do contract work for the Air Force, but I have no immediate
contract application for this package; I would just like to get
familiar with it and have it around for possible future applications.
Thanks.
-- Dennis Biringer
------------------------------
Date: Mon 13 Jan 86 16:07:10-PST
From: Luis Jenkins <lej@SRI-KL>
Subject: Symbolics Prolog
[Sorry if this topic has been beaten to death many times before ...]
Here at Schlumberger Palo Alto Research (SPAR) we have been working
for some time on large Prolog programs for Hardware Verification,
first in Dec-20 Prolog and then in Quintus Prolog for Suns.
Recently we have been interested in the possibility of using Symbolics
Prolog for further R&D work, as the lab has a bunch of LispMs.
Does anyone out there have first-hand (or n-hand, please specify)
experience with the Prolog that Symbolics offers? Specifically, we
want to hear praises/complaints about :-
o DEC-10/Quintus Compatibility
o Speed
o Bugs
o Extensions
o Interface with the LispM environment
o Mixing Prolog & Lisp code
o Random User Comments
Thanks,
Luis Jenkins
Schlumberger Palo Alto Research
lej@sri-kl
...decwrl!spar!lej
------------------------------
Date: 13 Jan 86 11:22:01 EST
From: kyle.wbst@Xerox.ARPA
Subject: Johns Hopkins Learning Machine
Does anyone have any more info on the following:
I caught the tail end of a news item on the NBC Today Show this morning
about someone at Johns Hopkins who has built a "Networked" computer
consisting of 300 "elements" that has a speech synthesizer attached to
it. The investigator claims that the thing learns to speak English the
same way a human baby does. They played a tape recording which
represented a condensation of several hours of "learning" by the device.
The investigator claims he does not know how the thing works. I
didn't catch his name.
Who is this person, and what is the system configuration of the machine
(which seemed to fit into one large rack of equipment)?
Earle Kyle
------------------------------
Date: Tue, 14 Jan 86 09:34:53 est
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
Subject: Today Show Segment
A friend of mine saw the Today Show this Monday morning,
and said there was a particularly breathless segment that
left the impression that somebody has solved `the AI
problem'. It seems to have been a rather vague story
about someone at Johns Hopkins who has built some sort of
massively parallel machine that learns language.
Sorry the details are so sketchy. Did anybody else
see this segment or know the story behind the story?
------------------------------
Date: 14 Jan 86 22:05:47 EST
From: Mike Tanner @ Ohio State <TANNER@RED.RUTGERS.EDU>
Subject: Paradigm
I've seen some discussion of paradigm in recent AILists and since I
just audited a grad course in philosophy of science where we read Kuhn
I thought I'd summarize what I remember of Kuhn's notion of paradigm.
(Auditing a course certainly does not make me an expert, but it does
mean that I've read Kuhn recently and carefully.)
Several people have pointed out that the dictionary definition (e.g.,
Webster's 3rd New International) of `paradigm' is `example',
`pattern', or `model'. But they further claim that this is not what
Kuhn meant. However, I think that the way `paradigm' is used by Kuhn
is (most of the time) perfectly compatible with the dictionary.
In _The_Structure_of_Scientific_Revolutions_ Kuhn normally uses
`paradigm' to mean `example of theory applied' or `example of how to
do science'. (Sometimes he uses it to mean `theory', which is
confusing and I think he later admits that it is just sloppiness on
his part.) Ron Laymon, our prof in the philosophy of science course,
suggested that it might be best to think of paradigm as `an
uninterpreted book'. Everybody working in some field points to a book
when asked what they do and says, "There, read that book and you'll
know." Of course, once the book is opened there's likely to be a lot
of disagreement about what it means.
Another important characteristic of paradigms is that they suggest a
lot of further research. If I were a cynical person I would say that
the success of a paradigm depends on people's perceptions of funding
prospects for research of the sort that it defines.
I'm not sure that AI is mature enough to rate any paradigms. But I
think that a case could be made for some things as "mini-paradigms",
such as GPS, MYCIN, Minsky's frame paper, etc. That is, they defined
some sub-discipline within AI where a lot of people did, and are
doing, fruitful work. (I don't mean "mini" to be pejorative. I just
think that a paradigm has to be a candidate for unifying research in
the field, or maybe even defining the field, and these probably don't
qualify. But then, I might be expecting too much of paradigms.)
-- mike
ARPA: tanner@Rutgers
CSNet: tanner@Ohio-State
Physically, I am at Ohio State but I have a virtual existence at
Rutgers and can receive mail either place.
------------------------------
Date: Wed 15 Jan 86 09:42:58-CST
From: David Throop <AI.THROOP@R20.UTEXAS.EDU>
Subject: Computers & IQ Tests
There have been recent inquiries about how well computer programs can do on
IQ tests.
An article in the journal _Telicom_ (1) mentions a computer program for
taking IQ tests. It seems to be aimed entirely at the kinds of math
puzzles that fill in missing numbers in series.
"The program (is) called HIQ-SOLVER 160 ... BASIC, less than 10 Kbytes...
the July/August issue of the Dutch computer magazine _Sinclair_Gebruiker_ has the
listing... The program has been tried on the numerical test in Hans
Eysenck's _Check_Your_Own_IQ_ and it solved 36 out of 50 problems,
corresponding with an IQ of about 160 (hence its name); as some items in
the Eysenck test were of a type that had not been implemented one might
argue that the program's raw score corresponds with an even higher IQ ..."
He goes on to give the algorithm.
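The algorithm itself isn't reproduced here, but the general idea behind such series-solvers -- take successive finite differences until the series becomes constant, then extrapolate the next term -- can be sketched as follows. (This is a purely illustrative modern reconstruction, not the program described above, which was written in BASIC; the function name is my own.)

```python
def next_term(series):
    """Predict the next number in a series via finite differences."""
    diffs = list(series)
    levels = []
    # Take differences until the remaining sequence is constant
    # (or too short to difference further).
    while len(diffs) > 1 and len(set(diffs)) > 1:
        levels.append(diffs)
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    # 'diffs' is now constant; rebuild upward from the last term
    # of each difference level.
    value = diffs[-1]
    for level in reversed(levels):
        value += level[-1]
    return value

# 1, 4, 9, 16, 25 are the squares; their second differences are constant.
print(next_term([1, 4, 9, 16, 25]))   # -> 36
```

Note how this illustrates the brittleness point below: the solver handles polynomial series perfectly but is helpless against any pattern outside its single trick.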
I think this example highlights the difficulty of applying
human IQ tests to machines - the program scores very high on certain IQ
tests because it does a very limited kind of pattern recognition very well.
But it is completely brittle - it's helpless to recognize patterns that are
only slightly off what it expects.
Human intelligence tests do not measure human intelligence directly.
They measure characteristics associated with intelligence. The underlying
assumption is that this association is good enough to predict how well
humans will do on tasks that cannot be given as standard tests but
that nonetheless evince intelligence.
This is a dubious proposition for humans, but it breaks down completely
on machines. Nonetheless, it shouldn't be too hard to CONS up some
programs that do terribly well on some not too terribly well designed IQ
tests.
(1) Feenstra, Marcel, "Numerical IQ - Tests and Intelligence," _Telicom_,
Aug 85, Box 141, San Francisco 94101.
------------------------------
Date: Sun 12 Jan 86 18:05:49-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems and Computer Graphics
IEEE Computer Graphics and Applications, December 1985, pp. 58-59,
has a review by Ware Myers of the 6th Eurographics conference.
The key theme was integrating expert systems and computer graphics.
Several of the papers discussed binding Prolog and the GKS
graphical kernel standard.
------------------------------
Date: Sun 12 Jan 86 17:28:34-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Common Lisp for Xerox
Expert Systems, Vol. 2, No. 4, October 1985, p. 252, reports that
Xerox will be implementing Common Lisp on its Lisp workstations.
The first copies may be available in the second quarter of 1986.
Xerox will continue to support Interlisp-D, and will be adding
extensions and compatible features to both languages. A package
for converting Interlisp-D programs to Common Lisp is being
developed.
Guy Steele said (Common Lisp, p. 3) that it is expected that user-
level packages such as InterLisp would be built on top of the Common
Lisp core. Perhaps that is now happening. Xerox is also offering
CommonLoops as a proposed standard for object-oriented programming.
------------------------------
Date: Sun 12 Jan 86 18:00:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Real-Time Process Control
IEEE Spectrum, January 1986, p. 64, reports the following:
The building of engineering expertise into single-loop controllers
is beginning to bear fruit in the form of a self-tuning process
controller. The Foxboro Co. in Foxboro, Mass., included self-tuning
features in its Model 760 single-loop controller as well as in
three other controller-based products. Common PID (proportional,
integral, and derivative) controllers made by Foxboro now have a
built-in microprocessor with some 200 production rules; the loop-tuning
rules have evolved over the last 40 years both at Foxboro and
elsewhere. The Foxboro self-tuning method is a pattern recognition
approach that allows the user to specify desirable temporal
response to disturbances in the controlled parameter or in the
controlled set point. The controller then observes the actual
shape of these disturbances and adjusts its PID values to restore
the desirable response.
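For readers who haven't met PID control, a minimal discrete sketch of the proportional-integral-derivative law mentioned above may help; the gains kp, ki, and kd are exactly the quantities a self-tuning controller adjusts after watching the loop's response. (This is an illustrative toy in modern Python, not Foxboro's implementation; the process model and gain values are invented.)

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                  # I: accumulated error
        derivative = (error - self.prev_error) / self.dt  # D: error slope
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a crude first-order process toward a set point of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
value = 0.0                              # controlled parameter
for _ in range(200):
    control = pid.update(1.0, value)     # set point is 1.0
    value += (control - value) * 0.1     # invented process model with lag
print(f"settled near {value:.3f}")
```

A self-tuning controller of the kind described would, in effect, watch the shape of the transient above and rewrite kp, ki, and kd until it matched the response the user asked for.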
Asea also makes a self-tuning controller, Novatune, but the current
version requires substantial knowledge of stochastic control theory
to install.
Lisp Machine Inc. has now installed PICON, its expert system for
real-time process control, at about a half-dozen sites. It has also
announced support for GM's MAP communication protocol for factory
automation.
------------------------------
End of AIList Digest
********************