AIList Digest Monday, 29 Aug 1983 Volume 1 : Issue 49
Today's Topics:
Conferences - AAAI-83 Registration,
Bindings - Rog-O-Matic & Mike Mauldin,
Artificial Languages - Loglan,
Knowledge Representation & Self-Consciousness - Textnet,
AI Publication - Corporate Constraints,
Lisp Availability - PSL on 68000's,
Automatic Translation - Lisp-to-Lisp & Natural Language
----------------------------------------------------------------------
Date: 23 Aug 83 11:04:22-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!arnold@Ucb-Vax
Subject: Re: AAAI-83 Registration
Article-I.D.: umcp-cs.2093
If there will be over 7000 people attending AAAI-83, then there will
be almost as many people as attend the World Science Fiction
Convention.
I worked registration for AAAI-83 on Aug 22 (Monday).
There were about 700 spaces available, along with about
1700 people who pre-registered.
[...]
--- A Volunteer
------------------------------
Date: 26 Aug 83 2348 EDT
From: Rudy.Nedved@CMU-CS-A
Subject: Rog-O-Matic & Mike Mauldin
Apparently people want something related to Rog-O-Matic and are
sending requests to "Maudlin". If you look closely, that is not how
his name is spelled: people are transposing the "L" and the "D".
Hopefully this message will help the many people who are trying to
send Mike mail.
If you still can't get his mailing address right, try
"mlm@CMU-CS-CAD".
-Rudy
A CMU Postmaster
------------------------------
Date: 28 August 1983 06:36 EDT
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: Loglan
I've been interested in Loglan since Heinlein's GULF, which was in
part devoted to it. Alas, nothing seems to happen that I can use; is
the Institute about to publish new materials? Is there anything in
machine-readable form using Loglan? Information appreciated. JEP
------------------------------
Date: 25-Aug-83 10:03 PDT
From: Kirk Kelley <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet
A few issues back, Randy Trigg mentioned his "Textnet" thesis
project, which combines hypertext and NLS/Augment structures. He
makes a strong statement about a distributed Textnet on worldnet:

    There can be no mad dictator in such an information network.
I am interested in building a testing ground for statements such as
that. It would contain a model that would simulate the global effects
of technologies such as publishing on-line. Here is what may be of
interest to the AI community. The simulation would be a form of
"augmented global self-consciousness" in that it models its own
viability as a service published on-line via worldnet. If you have
heard of any similar project or might be interested in collaborating
on this one, let me know.
-- kirk
------------------------------
Date: 25 Aug 83 15:47:19-PDT (Thu)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.475
OK, you turned your flame-thrower on, now prepare for mine! You want
to know why things don't get published -- take a look at your address
and then at mine. You live (I hope I'm not talking to an AI Project)
in the academic community; believe it or not, there are those of us
who work in something euphemistically referred to as industry, where
the rule is not publish or perish; the rule is keep quiet and you are
less likely to get your backside seared! Come on out into the 'real'
world, where technical papers must be reviewed by managers who don't
know how to spell AI, let alone understand what language translation
is all about. Then watch as two of them get into a Möbius argument,
one saying that there is nothing classified in the paper but there is
proprietary information, while the other says there is nothing
proprietary but it definitely is classified! All the while this is
going on, the deadline for submission to three conferences passes by
like the perennial river flowing to the sea. I know reviews are not unheard
of in academia, and that professors do sometimes get into arguments,
but I've no doubt that they would be more generally favorable to
publication than managers who are worried about the next
stockholders' meeting.
It ain't all that bad, but you do seem to need a wider perspective.
Perhaps the results haven't been published; perhaps the claims appear
somewhat tentative; but the testing has been critical, and the only
thing left is primarily a matter of drudgery, not innovative
research. I am convinced that we may well find a new and challenging
problem awaiting us once that is done, but at least we are not
sitting around for years on end trying to paste together a grammar
for a context-sensitive language!!
Ted Jardine
TJ (with Amazing Grace) The Piper
ssc-vax!tjj
------------------------------
Date: 24 Aug 83 19:47:17-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: Lisps on 68000's - (nf)
Article-I.D.: uiucdcs.2626
I played with a version of PSL on an HP 9845 for several hours one
day. The environment was just like running FranzLisp under Emacs in
"electric-lisp" mode. (However, the editor is written in PSL itself,
so it is potentially much more powerful than the emacs on our VAX,
with its screwy C/Mock-Lisp implementation.) The language is in the
style of Maclisp (rather than INTERLISP) and uses standard dynamic
scoping (rather than the lexical scoping of T). The machine has 512
by 512 graphics and a 2.5-dimensional window system, but neither is
as fully integrated into the programming environment as on a Xerox
Dolphin. Although I have no detailed benchmarks, I did port a
context-free chart parser to it. The interpreter speed was not
impressive, but was comparable with interpreted Franz on a VAX.
However, the speed of compiled code was very impressive. The
compiler is incremental and built into the Lisp system (as in
INTERLISP), and it gave about a 10-20 times speedup over interpreted
code (my estimate is that the Franz and INTERLISP-D compilers net
only a 2-5 times speedup). As a result, the compiled parser ran much
faster
on the 68000 than the same compiled program on a Dolphin.
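For the curious, the shape of such a comparison is easy to
reproduce. A minimal sketch, in modern Common Lisp notation for
concreteness (PSL's own syntax differs); PARSE and *SENTENCE* are
hypothetical stand-ins for the chart parser and a test input:

    ;; Time NCALLS calls to FN on INPUT, in seconds.
    (defun time-calls (fn input ncalls)
      (let ((start (get-internal-run-time)))
        (dotimes (i ncalls)
          (funcall fn input))
        (/ (- (get-internal-run-time) start)
           (float internal-time-units-per-second))))

    (time-calls #'parse *sentence* 100)  ; interpreted speed
    (compile 'parse)                     ; incremental, in-core compile
    (time-calls #'parse *sentence* 100)  ; the 10-20x drop noted above

The point of an incremental, in-core compiler is exactly that the
COMPILE step is one form typed into a running session, not a separate
batch build.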
I think PSL is definitely a superior Lisp for the 68000, but I have
no idea whether it will be available for non-HP machines...
Jordan Pollack
University of Illinois
...pur-ee!uiucdcs!uicsl!pollack
------------------------------
Date: 24 Aug 83 16:20:12-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Lisp-to-Lisp translation
Article-I.D.: ssc-vax.468
These problems just go to show what AI people have known for years
(ever since the first great bust of machine translation) - ya can't
translate without understanding what yer translating. Optimizing
compilers are often impressive encodings of expert coders'
knowledge, and they work on very simple languages - nothing like
Interlisp or English.
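To make the point concrete with Lisp itself: even a token-for-token
translation between two Lisps can silently change what a program
means. A minimal sketch, in modern Common Lisp-style notation (the
names are invented):

    ;; X is a free variable in SHOW.
    (setq x 'global)
    (defun show () x)
    (defun caller ()
      (let ((x 'local))   ; rebind X around the call
        (show)))
    ;; Dynamically scoped Lisp (Maclisp, INTERLISP): (caller) => LOCAL
    ;; Lexically scoped Lisp (T, Scheme):            (caller) => GLOBAL

A translator that merely transcribes CALLER from one family to the
other preserves the text but not the behavior; it has to understand
how X is used before it can translate correctly.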
stan the lep hacker
ssc-vax!sts (soon utah-cs)
------------------------------
Date: 24 Aug 83 16:12:59-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.467
You have heard of my parser. It's a variant of Berkeley's PHRAN, but
it has been improved to handle arbitrarily ambiguous sentences. I
submitted a paper on it to AAAI-83, but it was rejected (well, I did
write it in about 3 days - it wasn't very good). A paper will be
appearing at the AIAA Computers in Aerospace conference in October.
The parser is only a *basic* solution - I suppose I should have made
that clearer. Since it is knowledge-based, it needs **lots** of
knowledge. Right now we're working on ways to acquire linguistic
knowledge automatically (Selfridge's work is very interesting). The
knowledge base is woefully small, but we don't anticipate any problems
expanding it (famous last words!).
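For those who haven't seen PHRAN, the flavor of a pattern-based,
knowledge-driven analyzer fits in a few lines. A minimal sketch in
modern Common Lisp notation, with an invented two-entry knowledge
base - an illustration of the general pattern-concept-pair idea, not
the parser described above:

    ;; Each entry pairs a surface pattern with the concept it keys.
    ;; Symbols beginning with "?" are pattern variables.
    (defparameter *pairs*
      '(((?x kicked the bucket) (died ?x))        ; idiomatic reading
        ((?x kicked the ?y)     (struck ?x ?y)))) ; literal reading

    (defun var-p (x)
      (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

    (defun match (pattern input &optional (bindings '((t . t))))
      ;; Return an alist of variable bindings, or NIL on failure.
      (cond ((null bindings) nil)
            ((var-p pattern) (cons (cons pattern input) bindings))
            ((and (consp pattern) (consp input))
             (match (cdr pattern) (cdr input)
                    (match (car pattern) (car input) bindings)))
            ((eql pattern input) bindings)
            (t nil)))

    (defun analyze (sentence)
      ;; Collect one reading per matching pair, so an ambiguous
      ;; sentence yields ALL of its meanings.
      (loop for (pattern concept) in *pairs*
            for bindings = (match pattern sentence)
            when bindings
              collect (sublis bindings concept)))

    ;; (analyze '(john kicked the bucket))
    ;;   => ((DIED JOHN) (STRUCK JOHN BUCKET))

With two entries for "kicked" the toy base already returns both the
idiomatic and the literal reading; the hard part, as noted above, is
acquiring enough such knowledge.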
The parser has just been released for use within Boeing ("just"
meaning two days ago), and it may be a while before it becomes
available elsewhere (sorry). I can mail details on it though.
As for language analysis being NP-complete, yes you're right. But are
you sure that humans don't brute-force the process, and that computers
won't have to do the same?
stan the lep hacker
ssc-vax!sts (soon utah-cs)
ps if IBM is using APL, that explains a lot (I'm a former MVS victim)
------------------------------
Date: 24 Aug 83 15:47:11-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: So the language analysis problem has been solved?!?
Article-I.D.: ssc-vax.466
Heh-heh. Thought that'd raise a few hackles. (My boss didn't approve
of the article; oh well, I tend to be a bit fiery around the edges.)
The claim is that we have "basically" solved the problem. Actually,
we're not the only ones - the APE-II parser, by Pazzani and others
from the Schank school, has done the same thing. Our parser can
handle arbitrarily ambiguous sentences, generating *all* the possible
meanings, limited only by the size of its knowledge base. We have the
capability to do any sort of idiom, and mix any number of natural
languages. Our problems are really concerned with the acquisition of
linguistic knowledge, either by having nonspecialists put it in by
hand (*everyone* is an expert on the native language) or by having the
machine acquire it automatically. We can mail out some details if
anyone is interested.
One advantage we had was starting from ground zero: we had very few
preconceptions about how language analysis ought to be done, and we
scanned the literature. It became apparent that since we were
required to handle free-form input, any kind of grammar would
eventually become less than useful and possibly a hindrance to
analysis. Dr. Pereira admits as much when he says that grammars only
reflect *some* aspects of language. Well, that's not good enough. Us
folks in applied research can't always afford the luxury of theorizing
about the most elegant methods. We need something that models human
cognition closely enough to make sense to knowledge engineers and to
users. So I'm sort of in the Schank camp (folks at SRI hate 'em)
although I try to keep my thinking as independent as possible (hard
when each camp is calling the other ones charlatans; I'll post
something on that pernicious behavior eventually).
Parallel production systems I'll save for another article...
stan the leprechaun hacker
ssc-vax!sts (soon utah-cs)
ps I *did* read an article of Dr. Pereira's - couldn't understand the
point. Sorry. (perhaps he would be so good as to explain?)
[Which article? -- KIL]
------------------------------
Date: 26 Aug 83 11:19-EST (Fri)
From: Steven Gutfreund <gutfreund%umass-cs@UDel-Relay>
Subject: Musings on AI and intelligence
Spafford's musings on intelligent communications reminded me of an
article I read several years ago by John Thomas (then at T.J. Watson,
now at White Plains, a promotion as IBM sees it).
In the paper he distinguishes between two distinct approaches (or
philosophies) to raising the level of man/machine communication.
[Natural language recognition is one example of this problem: here
the machine is trying to "decipher" the user's natural prose to
determine the desired action. Another application is intelligent
interfaces that attempt to decipher user "intentions".]
The Human Approach -
Humans view communication as inherently goal-based. When one
communicates with another human being, there is an explicit goal: to
induce a cognitive state in the OTHER. This cognitive state is
usually some function of the communicator's own cognitive state
(usually the identity function, since one wants the OTHER to
understand what one is thinking). In this approach the medium of
communication (words, art, gesticulations) is not itself the thing
being communicated; it is an abstraction meant to key certain
responses in the OTHER so as to arrive at the desired goal.
The Mechanistic Approach
According to Thomas this is the approach taken by natural language
recognition people. Communication is the application of a decryption
function to the prose the user employed. This approach is inherently
flawed, according to Thomas, since the actual words and prose do not
contain meaning in themselves but are tools for effecting cognitive
change. Thus, the text of one of Goebbels's propaganda speeches
cannot be examined in itself to determine what it means; one needs
an awareness of the cognitive models, metaphors, and prejudices of
the speaker and listeners. Capturing this sort of real-world
knowledge (biases, prejudices, intuitive feelings) is not a strong
point of AI systems. Yet the extent to which certain words move a
person may depend much more on, say, his Catholic upbringing than on
the words themselves.
If one doubts the above thesis, I encourage you to read Thomas
Kuhn's book "The Structure of Scientific Revolutions" and see how
culture can affect the interpretation of supposedly hard scientific
facts and observations.
Perhaps what brings this out best is an essay (I believe it was by
Smullyan) in "The Mind's I" (Dennett and Hofstadter). In this essay
a homunculus is set up with the basic tools of one of Schank's
language understanding systems (scripts, text, rules, etc.). It then
goes about translating the text from one language to another,
applying a set of mechanistic transformation rules. Given that the
homunculus knows nothing of either the source language or the target
language, can you say that it has any understanding of what the
script was about? How does this differ from today's NLU systems?
- Steven Gutfreund
Gutfreund.umass@udel-relay
------------------------------
End of AIList Digest
********************