AIList Digest Wednesday, 8 May 1985 Volume 3 : Issue 61
Today's Topics:
Machine Translation - Survey,
Games & Learning - NIM References,
Opinion - Research Literature,
Seminar - Mechanized Hypothesis Formation (SMU),
Description - CS at Linkopings University, Sweden
----------------------------------------------------------------------
Date: 6 MAY 85 14:48-N
From: PETITP%CGEUGE51.BITNET@WISCVM.ARPA
Subject: more on machine translation
Here is a summary of the information I sent to Barbara Stafford concerning
Machine Translation (cf. her request in AIList V3 #54 and my note in V3 #50).
To find out what is being done on MT in Provo, Utah, contact Alan K. Melby at
Brigham Young University. Mr. Melby works in the field of user interfaces for
MT and gave a paper at the ISSCO tutorial on MT: "Recipe for a translator's
workstation".
Here are ALPS and WEIDNER's address:
Alps Systems
190 West 800 North
Provo, Utah 84601
Tel: (801) 375-0090
Another commercial system I didn't mention was developed in Germany:
LOGOS (also quite primitive). Here is their address:
LOGOS Computer Systems Deutschland Gmbh
Lyoner Strasse 26
6000 Frankfurt am Main 71
Tel: (0611) 666 69 50
Telex 4 189808
Also in Germany, Siemens has just entered the market with
METAL (which I mentioned in AIList V3 #50). It was presented at the Hannover
fair this April.
From the ISSCO MT tutorial, the presentation by E. Ananiadou & S. Warwick ("An
overview of post-ALPAC developments") might interest you: that paper includes
a section on TITUS, a French MT system for abstracts written in a restricted
syntax. This might be close to your work. Here is their address:
Institut Textile de France
35 rue des Abondances
F-92100 Boulogne-sur-Seine (France)
Tel: 825.18.90
Telex: 250940
And some more references for introductory reading on Machine Translation:
Kilby K.J., Whitelock P.J.
"Linguistic and computational techniques in Machine Translation system design.
- Final report", December 83
CCL/UMIST report 84/2
with description of the systems SYSTRAN, TAUM METEO,
TAUM AVIATION, GETA ARIANE-78, LRC METAL, Wilks' PS.
Centre for Computational Linguistics
University of Manchester Institute of Science and Technology
PO BOX 88
Manchester, England
(Pete Whitelock's network address is pjw%minim%umpa@UCL-CS.ARPA)
Veronica Lawson (Editor)
Practical Experience of Machine Translation
North Holland, 1982
(Papers from a conference organized by Aslib)
------------------------------
Date: Sun, 5 May 85 13:53:17 edt
From: gross@dcn9.arpa (Phill Gross)
Subject: NIM references
With a partner, I developed a learning Nim game as part of a project
for an AI course at George Washington University. It was implemented
as a Collective Learning Automaton, a concept advanced by the instructor,
Peter Bock. The CLA concept is rather simple. It only works for small
state-space games where the current state representation contains all
information necessary to make the next move. It also works best with
games like Nim that have a perfect playing strategy. The automaton knows
the rules and begins by guessing moves. An independent process watches
the action and rewards or punishes the automaton based on the result. In
the next game, the automaton chooses moves based on what it 'learned' from
its previous experiences. After a series of games, it homes in on the
correct strategy.
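For concreteness, here is a minimal sketch (in Python, purely illustrative; the
weight table, the reward/punishment values, and the normal-play convention that
the last player to take an object wins are my assumptions here, not a description
of our actual program) of the kind of learning loop described above:

import random
from collections import defaultdict

# Weight table: weights[(state, move)] is the preference for choosing `move`
# in `state`.  A state is a sorted tuple of heap sizes; a move is
# (heap_index, count_removed).
weights = defaultdict(lambda: 1.0)

def legal_moves(state):
    return [(i, k) for i, heap in enumerate(state) for k in range(1, heap + 1)]

def choose_move(state):
    moves = legal_moves(state)
    prefs = [max(weights[(state, m)], 0.01) for m in moves]   # keep weights positive
    return random.choices(moves, weights=prefs)[0]

def apply_move(state, move):
    i, k = move
    heaps = list(state)
    heaps[i] -= k
    return tuple(sorted(heaps))

def play_and_learn(start, opponent, reward=1.0, punishment=0.5):
    # One game of normal-play Nim (last to take wins); the automaton moves first.
    state, history = start, []
    while True:
        move = choose_move(state)
        history.append((state, move))
        state = apply_move(state, move)
        if sum(state) == 0:          # the automaton took the last object and wins
            outcome = 1
            break
        state = apply_move(state, opponent(state))
        if sum(state) == 0:          # the opponent took the last object; the automaton loses
            outcome = -1
            break
    # The independent "critic": reward every move of a won game, punish a lost one.
    for s, m in history:
        weights[(s, m)] += reward if outcome > 0 else -punishment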
We matched our automaton against four levels of opponents: an expert,
another learner, a dummy (which made legal but totally random moves), and a
'novice' (smart enough not to make purely random moves, but one that hasn't
really gotten the hang of it yet). Our interest was learning speed under various
reward/punishment regimes.
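Only two of those opponents are easy to sketch here (continuing the Python
sketch above, and again as an illustration rather than our actual code): the
expert plays the well-known nim-sum strategy, i.e. it moves to a position whose
XOR of heap sizes is zero, and the dummy picks uniformly among legal moves.

def nim_sum(state):
    total = 0
    for heap in state:
        total ^= heap
    return total

def expert(state):
    # Perfect normal-play Nim: move to a position whose nim-sum is zero.
    for move in legal_moves(state):
        if nim_sum(apply_move(state, move)) == 0:
            return move
    return random.choice(legal_moves(state))   # already losing against perfect play

def dummy(state):
    # Legal but totally random.
    return random.choice(legal_moves(state))

# e.g., a series of games from the 3-4-5 position against the expert:
for _ in range(500):
    play_and_learn((3, 4, 5), expert, reward=1.0, punishment=0.25)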
Despite the simplicity of the CLA model, some interesting, 'intuitively
obvious' anthropomorphisms were 'verified'. For example, it always learned
quickly and perfectly against an expert. Against the dummy, it exhibited
very confused behavior, characterized by slow, imperfect learning, since
it may be alternately rewarded and punished for the same move.
(The moral, I suppose, is if you want to learn a game, don't play against
schlubs.) It seemed that heavier punishment was more important when playing
against the novice or the dummy (i.e., if a move loses against a dummy, it must
really be bad), whereas light punishment/heavier reward produced better
results against the expert (don't crucify the kid if he loses when he's
clearly out of his league). Interestingly enough, learning was slowed
against the expert without at least a small amount of punishment
(constructive criticism?).
AI is by no means my main interest but I feel compelled to open myself to
flames by adding a couple comments. The two AI courses I took (the other at
Penn State) were (charitably) the weakest courses I had in grad school.
While any good curriculum should include courses on topics like compilers
and operating systems, perhaps the state of AI today is such that it should
not be casually included just to broaden the course offerings. It seems
that the AI frontier is pushing forward on two fronts: a rather esoteric
'high end' exploring things like vision, natural language understanding
and cognitive processes; and a more down-to-earth 'low end' dealing with
knowledge based systems and heuristic programming. While the former mixes
a number of disciplines outside computer science and is best pursued in
thesis work, I feel the latter could be taught to a bright class of
sophomores or juniors. A manager of KBES projects noted to me that his
experience showed that if you give some expensive tools to a smart programmer,
send her to a two-week course given by one of the vendors and then let her
hack for a while, you've got yourself a 'knowledge engineer'. If only
results in true AI could be achieved as readily.
All of this is a major digression from my original intent, which was to
add a few references to the Nim list.
* 'Basic Computer Games', David Ahl, Workman Publishing Co., 1978.
* 'Games of the World', Frederick Gruenfeld, Ballantine Books, 1975.
* 'The World Book of Math Power', (Adjunct to the good old World Book
Encyclopedia), Vol 2, "NIM", pp 667-671, 1983.
Anyone interested in CLA's should contact Peter Bock at GWU, since I
can't recall him giving references for his articles. Regretfully (and
somewhat gratefully, since I really don't feel that we did much to push
forward the AI frontiers), neither the code nor our final report is available
online, nor do I have the time and resources to make it available by mail.
Phill Gross
[Phill's approach sounds like the "learning machine" discussed in
Martin Gardner's Scientific American column in March 1962.
I remember inferring that something was wrong with my own NIM
strategy after noticing that 1) my machine quickly learned to
beat me whenever I started, and 2) it usually also beat me when
it started. That column got me interested in learning algorithms
and, eventually, AI. I still have the paper machine I constructed,
along with the card-and-hole "logic machine" from the December
1960 column.
The NIM reference I gave earlier should have been W. Rouse Ball, not
Bell. I think I have his 13th edition packed away somewhere. It
also has interesting discussions of mathematical card tricks, string
figures, and the Cambridge educational system -- much of which was
omitted from later editions. -- KIL]
------------------------------
Date: Tue 7 May 85 18:25:31-EDT
From: SRIDHARAN@BBNG.ARPA
Subject: Research Literature
I read an article in New Scientist (18 April 85) that ...
".. there are 2750 different mathematics research journals. If you
conservatively estimate the annual contents of each to be, on average,
700 pages, this means that each year some 1 650 000 pages of mathematical
research are published. One estimate puts the number of professional
mathematicians in the world at 100 000."
Compared to this, AI has, I would guess, fewer than two dozen journals now
and about 1000 researchers (an over-estimate). In my mind, this
discipline, concerned with knowledge and reasoning, is just as
fundamental as mathematics, and I should like AI eventually (in the next
century) to become just as large.
One consequence of growth should be understood. When someone writes a
paper, others may not actually read it, especially not right away. I
think the time lapse between the appearance of a paper and its wide
appeal may grow QUADRATICALLY with the volume of publications. I know
that I now take about 2 years to catch up with papers of interest to me.
A decade ago, I used to finish reading relevant papers in about 6
months. For example, six months after the '73 and '75 IJCAIs, I had
managed to read or scan relevant papers from those proceedings. Now,
I am behind even with AAAI-83 and -84.
All this says we need to cultivate patience as the discipline grows!
The process of knowledge diffusion will take longer. Our respective
distractions, engendered by commercial/business hoop-la, only aggravate
this. To do anything state-of-the-art, and to push the frontiers, will
take more effort as time goes on.
People who write papers now have an increased responsibility to do
more thorough checks of existing literature. Given that we have the
use of very advanced computational tools, we ought to be able to do
this more thoroughly than most other disciplines.
------------------------------
Date: 7 May 1985 19:50-EST
From: leff%smu.csnet@csnet-relay.arpa
Subject: Seminar - Mechanized Hypothesis Formation (SMU)
Department of Computer Science and Engineering, Southern Methodist
University
TOPIC: Progress in Automated Research
SPEAKER: Dr. Fred N. Springsteel, Visiting Professor, University of Missouri,
Columbia, Missouri
WHERE, WHEN: Thursday, May 9, 1985, 1:30-2:30 PM
315 SIC, Southern Methodist University
Exploratory Data Analysis (EDA) is a lesser-known field that outgrew
the bounds of statistics. Originated by J. W. Tukey in 1962, EDA works
toward an open-ended meta-goal: to discover "all interesting"
(nontrivial, normal form, valid) hypotheses about a domain that is
represented by a large scientific sample of data, e.g., a suburban
census matrix. One very active brand of EDA is being pursued by a
20-year-old Czech research circle that I visited in 1976. Their EDA is
called Mechanized Hypothesis Formation (MHF); it can heuristically
generate-and-test many types of logical/statistical forms.
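As a rough illustration of what such heuristic generate-and-test can look like
(this is a Python sketch using simple support/confidence thresholds rather than
the statistical quantifiers the GUHA work actually employs; the attribute names
and data below are invented):

from itertools import combinations

def generate_and_test(matrix, attributes, min_support=0.1, min_confidence=0.9):
    # Enumerate simple implicational hypotheses "A & B -> C" over a 0/1 data
    # matrix and keep those that pass the support and confidence thresholds.
    n = len(matrix)
    found = []
    for antecedent in combinations(range(len(attributes)), 2):
        for consequent in range(len(attributes)):
            if consequent in antecedent:
                continue
            covered = [row for row in matrix if all(row[a] for a in antecedent)]
            if len(covered) < min_support * n:
                continue                      # too rare to be interesting
            confirmed = sum(1 for row in covered if row[consequent])
            if confirmed >= min_confidence * len(covered):
                found.append((tuple(attributes[a] for a in antecedent),
                              attributes[consequent],
                              confirmed / len(covered)))
    return found

# A tiny invented matrix: rows are respondents, columns are 0/1 attributes.
attrs = ["suburban", "commutes", "owns_car", "cycles"]
data = [[1, 1, 1, 0], [1, 1, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [1, 1, 1, 0]]
print(generate_and_test(data, attrs, min_support=0.2, min_confidence=0.8))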
MHF algorithmic decision problems have been shown, by me, to have
complexities that shift swiftly from P-time to NP-hard (TCS '79, IJMMS
'81). Users of such complex, multilevel software packages need expert
advice! Lately, consulting-system techniques have been applied to make a
Test Advisor for users, based on their special needs; it recommends
which statistic (of many) to run and how to parametrize the load
module. A much larger project (GUHA-80) is planned, which hopes to
apply the results of AUTOMATED EDA to the big bottleneck in building
expert systems: KNOWLEDGE ACQUISITION.
------------------------------
Date: 3 May 1985 2115
From: mcvax!enea!liuida!jbl@seismo.ARPA
Subject: Description - CS at Linkopings University, Sweden
[Edited by Laws@SRI-AI.]
The Department of Computer and Information Science at Linkopings University in
Sweden announces the availability of postdoctoral research and sabbatical
leave positions. The department provides a wide range of research and
educational activities as indicated in the areas of faculty specialization.
The university is located in the town of Linkoping, approximately 200
kilometers south of Stockholm. Linkoping has a population of 120,000 and is in
the heart of the rapidly expanding Ostergotland high-technology industrial
area. Linkopings University employs approximately 1600 people and has
faculties of engineering, science, liberal arts, medicine and education. The
department of Computer and Information Science has approximately 80 employees
(faculty, staff and graduate students) of whom 15 have attained the doctoral
degree. [...]
Faculty members (for academic year 1984-1985)
Par Emanuelson, functional languages, program verification, program analysis
and program manipulation, programming environments, software
engineering.
Peter Fritzson (on leave to Sun Microsystems during 1985), tool generation,
incremental tools, programming environments.
Anders Haraldsson, programming languages and systems, programming methodology,
program manipulation.
Roland Hjerppe, library science and systems, citation analysis and
bibliometrics, fact representation and information retrieval,
hypertext, human-computer interaction and personal computing.
Sture Hagglund, database technology, human-computer interaction, artificial
intelligence applications.
Harold W. Lawson, Jr. (Professor of Telecommunications and Computer Systems),
computer architecture, VLSI, computer-aided design, methodology of
computer-related education and training.
Bengt Lennartsson, programming environments, real-time applications,
distributed systems.
Andrzej Lingas, complexity theory, analysis of algorithms, geometric
complexity, graph algorithms, logic programming, VLSI theory.
Bryan Lyles (guest researcher), computer architecture, VLSI, user interfaces,
distributed systems.
Jan Maluszynski, logic programming, software specification methods.
Erik Sandewall (Professor of Computer Science), representation of knowledge
with logic, theory of information management systems, office
information systems, autonomous expert systems.
Bo Sundgren, database design, conceptual modelling, statistical information
systems.
Erik Tengvald, artificial intelligence, knowledge representation, planning and
problem solving, expert systems.
Associated Faculty Members
Jan-Olaf Bruer (Dept of Electrical Engineering), office automation systems,
especially security issues.
Ingemar Ingemarsson (Professor of Information Theory), information theory,
security and data encryption, error correction codes and data
compression.
Ove Wigertz (Professor of Medical Informatics), medical information systems,
expert systems.
During the next academic year (85/86) additional Ph.D. faculty will be joining
the department in the areas of computational complexity, computational
linguistics, software engineering and computer systems.
Department and University Computing Resources
The department has as research computers a DEC 2060, a DEC VAX11/780, several
SUNs, six Xerox 1108 InterLisp machines, and numerous smaller machines such as
PDP-11s and micro-VAXs. Department plans include significant near-term
expansion of research computing.
Undergraduate computing systems include two DEC 2065s, a DEC 2020, a DEC PDP
11/70 and PDP 11/73 running Unix, a large number of Apple Macintoshes and a
variety of small machines such as PDP 11s used for operating system labs. As
is the case with research computing, major expansions of undergraduate
computing capacity are planned in the near future. Since the total number of
undergraduates enrolled in computer related lines of study is less than at
some large U.S. universities, each student gets significant computer time.
Linkoping is part of the UUCP and SUNET networks. The campus is wired with
Ethernet and all major machines are connected via TCP/IP, DECNET or XNS
protocols.
For further information about Linkoping University and the Department of
Computer and Information Science contact:
Graduate Division
c/o Mrs. Lillemor Wallgren
Department of Computer and Information Science
Linkopings University
S-581 83 Linkoping
SWEDEN
Telephone (+46) 13-281480
Telex: 50067 LINBIBL S
UUCP: {decvax, seismo}!mcvax!enea!liuida!lew
ARPA: LEW%LIUIDA.UUCP@SEISMO.ARPA
------------------------------
End of AIList Digest
********************