AIList Digest Friday, 19 Oct 1984 Volume 2 : Issue 142
Today's Topics:
Applications - Biofeedback Instrument Link,
LISP - Common Lisp Flavors,
AI Tools - TI Expert System Development Tool & Benchmarks,
Linguistics - Languages and Cases,
Knowledge Representation - Universal Languages,
Administrivia - Sites Receiving AIList & Net Readership
----------------------------------------------------------------------
Date: 18 Oct 84 14:28:34 EDT
From: kyle.wbst@XEROX.ARPA
Subject: Biofeedback Instrument Link
The John F. Kennedy Institute for Handicapped Children (707 North
Broadway, Baltimore, Maryland 21205; phone 955-5000) has done work in
this area. Contact Lynn H. Parker or Dr. Michael F. Cataldo. They have
also published in journals such as the Journal of Behavioral Medicine.
Dr. D. Regan at the Department of Psychology, Dalhousie University,
Halifax, N.S. B3H 4J1, has also done a great deal of work in this area,
including real-time Fourier analysis in a feedback loop. You can read
about his work in the December 1979 issue of Scientific American
(Vol. 241, No. 6, around p. 144 as I recall).
There were some people at Carnegie-Mellon University with experience in
this area; you might try contacting A. Terry Bahill (Bioengineering) or
Mark B. Friedman (Psychology and EE). They may also be able to put you
in touch with Mata Loevner Jaffe, a person they worked with about four
years ago at the Pittsburgh Home for Crippled Children. I think she has
since left full-time status there and is now a professor at the
University of Pittsburgh.
If you want historical info, look in the literature for a system called
PIAPACS (I forget what the acronym stands for now) that was developed by
Lear Siegler Co. in Michigan for test pilots at Edwards Air Force Base
in California in the mid-1960s.
And finally, there is the historical work at the Air Force Cambridge
Research Labs in the early 1960s that put a man in a feedback loop,
using amplitude modulation of his brain waves (alphas) to send Morse
code via a PDP-8 (which cleaned up the signals and did some limited
pattern recognition) to a teletypewriter; the first message transmitted
was "CYBERNETICS". Shortly thereafter, Barbara Brown (I'm not sure of
the first name here) at the VA Hospital in Los Angeles used biofeedback
training (BFT) techniques to have subjects control lights and small
model railroad trains.
Earle.
P.S. The ultimate source of commercially available hardware and software
in this area would be the TRACE Center at the University of Wisconsin at
Madison.
------------------------------
Date: Thu, 18 Oct 1984 17:53 EDT
From: Steven <Handerson@CMU-CS-C.ARPA>
Subject: Common Lisp Flavors
I am working on Flavors as part of the Spice Lisp project at CMU. Although a
prototype system has been finished, we are currently in the process of
redesigning the thing from the ground up in an attempt to make it more modular
and portable [we've pretty much trashed the idea of a "white-pages"
(manual-level) object-oriented interface for now]. Could be another month.
-- Steve <Handerson at CMU-CS-C>
------------------------------
Date: Thu, 18 Oct 84 11:43:53 pdt
From: Stanley Lanning <lanning@lll-crg.ARPA>
Subject: Expert System Development Tool from TI
[From the October 1984 issue of Systems & Software magazine, page 50]
TI AI tool prompts users to develop application
With many companies now entering the artificial-intelligence business, the
question, "Are there enough AI experts to write the programs?" has been
raised. The answer is that Ph.D.s in AI are no longer needed to write expert
systems because several expert-system-development tools are available,
including one just introduced by Texas Instruments.
To ensure that AI tools can be used by nonexperts, Texas Instruments has
introduced a first-of-a-kind tool that prompts users for all information
needed to develop an expert system. The Personal Consultant is a menu-
and window-oriented system that develops rule-based, backward-chaining
expert systems on the TI Professional Computer under the MS-DOS
operating system...
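As a rough illustration of what "rule-based, backward-chaining" means in
practice, here is a minimal sketch in Common Lisp. It is only meant to show
the style such tools generate; the rule format, the *RULES* and *FACTS*
variables, and the PROVE function are invented for this sketch and have
nothing to do with TI's actual implementation.

    ;; Rules are (conclusion IF premise ...); facts are bare symbols.
    (defvar *rules*
      '((rich  if won-lottery)
        (happy if rich healthy)))

    (defvar *facts* '(won-lottery healthy))

    (defun prove (goal)
      ;; True if GOAL is a known fact, or if some rule concludes GOAL and
      ;; all of that rule's premises (everything after the IF) can in turn
      ;; be proved -- i.e., chaining backward from the goal.
      (or (member goal *facts*)
          (some #'(lambda (rule)
                    (and (eq (first rule) goal)
                         (every #'prove (cddr rule))))
                *rules*)))

    ;; (prove 'happy) => true: HAPPY needs RICH and HEALTHY; RICH needs
    ;; WON-LOTTERY, which is a fact; HEALTHY is also a fact.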
------------------------------
Date: 19 October 1984 12:07-EDT
From: George J. Carrette <GJC @ MIT-MC>
Subject: LMI, TI, and Lisp Benchmarks.
Glad to be of some help. The main problem I had with Pentland's note
was the explanatory comments, which were technically not as informative
as they could have been. Let me take a moment to review them:
(1) BITBLT. This result has more to do with the different amounts
    of microcode dedicated to such things and with micro-instruction
    execution speed. Both the TI and the 3600 have a simple and fast
    memory bus talking to similar dynamic RAM technology. (On the
    other hand, the LAMBDA has a cache and a block-read capability.)
(2) FLOATING POINT. Unless TI has extensively reworked the seldom-used
    small-floating-point-number code from what LMI sent them, it is the
    case that small floats are converted into longs inside the microcode
    and then converted back.
(2)(3) CONS & PAGING. ??? Would be more interesting to know how long
a full-gc of a given number of mega-conses takes. That bears more on the
real overall cost of consing and paging.
(4) MAKE-INSTANCE. Could indeed be improved on both the TI and the 3600.
    People who need to make instances fast, and know how, usually resort
    to writing their own %COPY-INSTANCE, since the overhead of the
    system-default MAKE-INSTANCE depends a lot on sending :INIT methods
    and other parsing and bookkeeping duties. (A sketch of the
    copy-a-template idea follows this list.)
(5)(6) SEND/FUNCALL. These are fully microcoded, although improvements
    are possible. There are some fundamental differences between the
    LMI/TI micro architecture and the 3600 when it comes to function
    calling, though. In an "only-doing-function-calls-but-no-work"
    kind of trivial benchmark there are good reasons why an
    LMI/TI architecture will never equal a 3600 architecture.
(7) 32-BIT FLOATING POINT. A similar comment applies as for small
    floats: there wasn't any 32-bit floating-point number representation
    in the code before; the floating-point numbers were longer than
    32 bits total.
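As an aside on item (4), the copy-a-template idea can be sketched in plain
Common Lisp using structures. This is only a stand-in for the real trick,
which uses Flavors internals such as a hand-written %COPY-INSTANCE whose
exact interface is not reproduced here; NODE and FAST-MAKE-NODE are names
invented for the sketch. The point is to pay initialization once and make
further instances by slot copying, with no init-plist parsing or :INIT sends.

    (defstruct node value left right)            ; stands in for a flavor

    (defvar *node-template*
      (make-node :value 0 :left nil :right nil)) ; pay initialization once

    (defun fast-make-node ()
      ;; COPY-NODE is the copier DEFSTRUCT defines automatically; it just
      ;; copies slots and runs no initialization protocol.
      (copy-node *node-template*))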
Then there was a reference to "it is already much more than an LMI,
CADR or LM2." First of all, the Explorer *is* an LMI product, and
secondly, the main product line based on the LMI-LAMBDA has some
fundamentally different features, including pageable microstore, a
Lisp-to-microcode compiler, plenty of room for user-loadable microstore,
an SMD disk interface, multiple-processor software support, and a
physical memory cache, which can very strongly and materially change the
performance of many applications of interest in both AI research and
practice.
If you need raw performance in simulation, vision research, or array
processing, the classic way to go is special microcode or special-purpose
hardware. The rule may be that simple operations (such as those found in
trivial benchmarks) done many times call for specialization. The LAMBDA
has better support for microcode development than any other Lisp machine
(more statistics counters, micro history, micro stack, and micro store;
the possibility of doing LAMBDA-to-LAMBDA debugging using a
multiple-processor LAMBDA configuration; and paging microcode, which is
good for patching during development). Of course, it does have a high
degree of microcode compatibility with the Explorer, which suggests some
possible ways to do things, probably of more interest for applying
technology than for pure get-it-up-the-first-time research.
-gjc
------------------------------
Date: Thu 18 Oct 84 15:44:36-MDT
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Languages and cases
When we discuss why cases have disappeared, we should also consider why
they have appeared. It is clear that they have appeared as "naturally" as
they have disappeared. Which of these represents a rise in "entropy"?
A reasonable explanation seems to be that cases appeared for the sake
of convenience and brevity. Before their proliferation, prepositions
and postpositions were probably used. Eventually, the cases became such
a burden that people moved away from their complexity. Don't we see the
same trend in programming languages?
Uday Reddy
------------------------------
Date: Fri 19 Oct 84 09:22:50-MDT
From: Stan Shebs <SHEBS@UTAH-20.ARPA>
Subject: Universal Languages (again!)
Before worrying about a universal language for man-machine communication,
we need a universal mechanism for knowledge representation! After all,
the external language cannot include concepts (words) for things that
are not internally expressible. And while there have been numerous
claimants for the status of a UKRL (Universal Knowledge Representation
Language) (one of my own projects included), there are none that can
really qualify, except perhaps on the basis of Turing-equivalence.
Perhaps the best overall candidate is some kind of logical formalism,
but as one makes the formalism more general, it seems to become
more content-free. Seems to me (from examination of the literature)
that the search for a UKRL was very active about 3-5 years ago, but
that now everybody has given it up as being the wrong thing to look
for (does anybody who was there disagree with this analysis?).
These days, I'm inclined to believe that one might establish conditions
for *sufficiency* in a KRL. There's the obvious condition that the
KRL should be Turing-equivalent. Less obviously perhaps, the KRL should
also have the means of automatically translating expressions written
using that KRL to ones in some other KRL. Also, the KRL should have
complete knowledge of itself (the second condition probably implies
this). There may be other reasonable conditions (such as some condition
stating that KRL expressions should have some explicit relation to things
in the "real world"), but I think the three above should be a minimum.
Notice that they also make the question of a *single* UKRL irrelevant.
Two sufficiently powerful KRLs can translate themselves back and forth
freely, so neither is more "universal" than the other. Notice also
that any given KRL must have knowledge of at least one other KRL, in
order to facilitate the translation process. When such KRLs are
available, then we can profitably think about standard ways of
communicating (to ease the poor humans' difficulty with handling
69 KRLs all at once!)
stan shebs
ps I haven't actually seen any research along these lines (although
Genesereth and Mackinlay made some suggestive remarks in their AAAI-84
paper). Is anybody out there looking at KRL translation, or maybe
something more specific, like OPS5 <-> Prolog?
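To make the kind of translation I have in mind concrete, here is a toy
sketch in Common Lisp that maps a much-simplified OPS5-style production
onto a Prolog-style clause string. Both representations are stand-ins
invented for the sketch; real OPS5 working-memory patterns and real Prolog
terms would need far more care.

    (defun production->clause (production)
      ;; PRODUCTION looks like (P <name> <conditions> <action>), e.g.
      ;; (p classify ((bird ?x) (cannot-fly ?x)) (penguin ?x)).
      (let ((conditions (third production))
            (action     (fourth production)))
        ;; Emit "action :- cond1, cond2, ...." in Prolog clause order.
        (format nil "~A :- ~{~A~^, ~}." action conditions)))

    ;; (production->clause
    ;;   '(p classify ((bird ?x) (cannot-fly ?x)) (penguin ?x)))
    ;; => "(PENGUIN ?X) :- (BIRD ?X), (CANNOT-FLY ?X)."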
------------------------------
Date: Thu 11 Oct 84 10:44:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Sites Receiving AIList
Readers at the following sites have responded to my Sep. 26 list of
AIList recipients (Volume 2, No. 125), or have since signed up for
the digest. (There are still many other sites, of course, particularly
on Usenet. I have also had contact with individuals who receive
the digest but cannot respond via the net.)
Army Ballistic Research Laboratory
Army Missile Command
Defense Communications Agency
DoD Computer Security Center
Edwards Air Force Base
Arthur D. Little, Inc.
Battelle Northwest (Pacific Northwest Laboratory)
Bell Communications Research
Interactive Systems Corporation
Lockheed
Microelectronics and Computer Corporation
Varian Associates
Case Western Reserve University
Dundee College of Technology, Scotland
Indiana University
Northeastern University
Southern Methodist University
Stockton State College
University of California at San Diego
University of Illinois at Urbana
University of Waterloo
Washington University in St Louis
My apologies to any sites I previously misspelled, including
Naval Personnel Research and Development Center
Naval Research Laboratory
Naval Surface Weapons Center
University of Rochester
-- Ken Laws
------------------------------
Date: Wed, 10 Oct 84 01:50:10 edt
From: bedford!bandy@mit-eddie
Subject: Net Readership
[Forwarded from the Human-Nets digest by Laws@SRI-AI.]
Date: Mon, 8 Oct 84 14:28 EDT
From: TMPLee@MIT-MULTICS.ARPA
Has anyone ever made an estimate (with error bounds) of how
many people have electronic mailboxes reachable via the
Internet (e.g., ARPANET, MILNET, CHAOSNET, DEC ENET, Xerox,
USENET, CSNET, BITNET, and any others gatewayed that I've
probably overlooked)? (Include in that, of course, group
mailboxes, even though they are a poor way of doing business.)
Gee, my big chance to make a bunch of order of magnitude
calculations.... [...]
USENET/DEC ENET: 10k machines, probably on the order of 40 regular
users for the unix machines and 20 for the "other" machines so that's
100k users right there.
[Rich Kulaweic (RSK@Purdue) notes 15k users on 40 Unix machines
at Purdue, with turnover of several thousand per year. -- KIL]
BITNET: something like 100 machines and they're university machines in
general, which implies that they're HEAVILY overloaded, 100-200
regular active users for each machine - 10k users.
[A news item in the latest CACM mentions 200 hosts at 60 sites,
soon to be expanded to 200 sites worldwide. A BITNET information
center is also being developed by a consortium of 500 U.S.
universities, so I expect they'll all get nodes soon. -- KIL]
Chaos: about 100-300 machines, 10 users per machine (yes, oz and ee
are heavily overloaded at times, but then there's all those unused
vaxen on the 9th floor of ne43). 1k users for chaosnet.
I think that we can ignore csnet here (they're all either on usenet or
directly on internet anyway...), so they count for zero.
ARPA/MILNET: Hmm... This one is a little tougher (I'm going to include
the 'real' internet as a whole here), but as I remember, there are
about 1k hosts. Now, some of the machines here are heavily used
(maryland is the first example that pops to mind) and some have
moderate loads (daytime - lots of free hardware at 5am!), let's say
about 40 regular users per machine -- another 10k users.
I dare not give a guesstimate for Xerox.
[Murray.PA@Xerox estimates 4000 on their Grapevine system. -- KIL]
So it's something on the order of 100k users for the community. [...]
Well, it could be 50k people, but these >are< order of magnitude
calculations...
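For what it's worth, adding up the per-network guesses above (using the
figures as given, plus the Xerox number from the editor's note) is just an
exercise in addition:

    ;; Sum of the order-of-magnitude guesses above (not measurements).
    (+ 100000   ; USENET / DEC ENET
        10000   ; BITNET
         1000   ; CHAOSNET
            0   ; CSNET (already counted under USENET / the Internet)
        10000   ; ARPANET / MILNET
         4000)  ; Xerox Grapevine, per the editor's note
    ;; => 125000, i.e. "something on the order of 100k"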
[Mark Crispin (MRC@Score) notes that there are 10k addressable
mailboxes at Stanford, but that the number of active users is
perhaps only a tenth of this. Andy's final estimate might be
inflated or deflated by such a factor. -- KIL]
Now that I've stuck my neck out giving these estimates, I'm waiting
for it to be chopped off.
andy beals
bandy@{mit-mc,lll-crg}
------------------------------
End of AIList Digest
********************