AIList Digest Monday, 21 Nov 1983 Volume 1 : Issue 101
Today's Topics:
Pattern Recognition - Forced Matching,
Workstations - VAX,
Alert - Computer Vision,
Correction - AI Labs in IEEE Spectrum,
AI - Challenge,
Conferences - Announcements and Calls for Papers
----------------------------------------------------------------------
Date: Wed, 16 Nov 83 10:53 EST
From: Tim Finin <Tim.UPenn@Rand-Relay>
Subject: pattern matchers
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Pattern Matchers
... My next puzzle is about pattern matchers. Has anyone looked carefully
at the notion of a "non-failing" pattern matcher? By that I mean one that
never or almost never rejects things as non-matching. ...
There is a long history of matchers which can be asked to "force" a match.
In this mode, the matcher is given two objects and returns a description
of what things would have to be true for the two objects to match. Two such
matchers come immediately to my mind - see "How can MERLIN Understand?" by
Moore and Newell in Gregg (ed), Knowledge and Cognition, 1973, and also
"An Overview of KRL, A Knowledge Representation Language" by Bobrow and
Winograd (which appeared in the AI Journal, I believe, in 76 or 77).
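[The "forcing" idea above can be sketched in a few lines. The toy matcher below is purely illustrative -- it is not MERLIN or KRL code, and all names in it are hypothetical. Instead of failing on a mismatch, it returns a description of what would have to be true for the two objects to match. -- Ed.]

```python
# A toy "forcing" pattern matcher: rather than rejecting non-matching
# objects, it returns the conditions under which they WOULD match.
# Illustrative sketch only; not the MERLIN or KRL algorithm.

def force_match(a, b, conditions=None):
    """Match nested tuples/atoms; variables are strings starting with '?'.
    Returns a list of conditions (variable bindings or required
    equalities) that would make a and b match. Never 'fails'."""
    if conditions is None:
        conditions = []
    if isinstance(a, str) and a.startswith('?'):
        conditions.append((a, '=', b))                    # variable binding
    elif isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            force_match(x, y, conditions)                 # match componentwise
    elif a != b:
        conditions.append((a, 'would-have-to-equal', b))  # forced condition
    return conditions

# ('block', '?x', 'red') vs ('block', 'b1', 'blue'): binds ?x to b1,
# and records that 'red' would have to equal 'blue' for a full match.
print(force_match(('block', '?x', 'red'), ('block', 'b1', 'blue')))
```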
------------------------------
Date: Fri 18 Nov 83 09:31:38-CST
From: CS.DENNEY@UTEXAS-20.ARPA
Subject: VAX Workstations
I am looking for information on the merits (or lack thereof) of the
VAX Workstation 100 for AI development.
------------------------------
Date: Wed, 16 Nov 83 22:22:03 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Computer Vision.
There have been some recent articles in this list on computer
vision, some of them queries for information. Although I am
not in this field, I read with interest a review article in
Nature last week. Since Nature may be off the beaten track for
many people in AI (in fact articles impinging on computer science
are rare, and this one probably got in because it also falls
under neuroscience), I'm bringing the article to the attention of
this list. The review is entitled ``Parallel visual computation''
and appears in Vol 306, No 5938 (3-9 November), page 21. The
authors are Dana H Ballard, Geoffrey E Hinton and Terrence J
Sejnowski. There are 72 references into the literature.
Harry Weeks
g.weeks@Berkeley
------------------------------
Date: 17 Nov 83 20:25:30-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: IEEE Spectrum Alert - (nf)
Article-I.D.: uiucdcs.3909
For safety's sake, let me add a qualification about the table on sources of
funding: it's incorrect. The University of Illinois is represented as having
absolutely NO research in 5th-generation AI, not even under OTHER funding.
This is false, and will hopefully be rectified in the next issue of the
Spectrum. I believe a delegation of our Professors is flying to the coast to
have a chat with the Spectrum staff ...
If we can be so misrepresented, I wonder how the survey obtained its
information. None of our major AI researchers remember any attempts to survey
their work.
Marcel Schoppers
U of Illinois @ Urbana-Champaign
------------------------------
Date: 17 Nov 83 20:25:38-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: uiucdcs.3910
I agree [with a previous article].
I myself am becoming increasingly worried about a blithe attitude I
sometimes hear: if our technology eliminates some jobs, it will create others.
True, but not everyone will be capable of keeping up with the change.
Analogously, the Industrial Revolution is now seen as a Good Thing, and its
impacts were as profound as those promised by AI. And though it is said that
the growth of knowledge can only be advantageous in the long run (Logical
Positivist view?), many people became victims of the Revolution.
In this respect I very much appreciated an idea that was aired at IJCAI-83,
namely that we should be building expert systems in economics to help us plan
and control the effects of our research.
As for the localization of power, that seems almost inevitable. Does not the
US spend enough on cosmetics to cover the combined Gross National Products of
37 African countries? And are we not so concerned about our Almighty Pocket
that we simply CANNOT export our excess groceries to a needy country, though
the produce rot on our dock? Then we can also keep our technology to ourselves.
One very obvious, and in my opinion sorely needed, application of AI is to
automating legal, veterinary and medical expertise. Of course the legal system
and our own doctors will give us hell for this, but on the other hand what kind
of service profession is it that will not serve except at high cost? Those most
in need cannot afford the price. See for yourself what kind of person makes it
through Medical School: those who are most aggressive about beating their
fellow students, or those who have the money to buy their way in. It is little
wonder that so few of them will help the underprivileged -- from the start
the selection criteria weigh against such motivation. Let's send our machines
in where our "doctors" will not go!
Marcel Schoppers
U of Illinois @ Urbana-Champaign
------------------------------
Date: 19 Nov 83 09:22:42 EST (Sat)
From: rej@Cornell (Ralph Johnson)
Subject: The AI Challenge
The recent discussions on AIList have been boring, so I have another
idea for discussion. I see no evidence that AI is going to change
the world as much as data processing or information retrieval have.
While research in AI has produced many results in side areas
such as computer languages, computer architecture, and programming
environments, none of the past promises of AI (automatic language
translation, for example) have been fulfilled. Why should I expect
anything more in the future?
I am a soon-to-graduate PhD candidate at Cornell. Since Cornell puts
little emphasis on AI, I decided to learn a little on my own. Most AI
literature is hard to read, as very little concrete is said. The best
book that I read (best for someone like me, that is) was the three-volume
"The Handbook of Artificial Intelligence". One interesting observation was
that I already knew a large percentage of the algorithms. I did not
even think of most of them as being AI algorithms. The searching
algorithms (with the exception of alpha-beta pruning) are used in many
areas, and algorithms that do logical deduction are part of computational
mathematics (just my opinion, as I know some consider this hard core AI).
Algorithms in areas like computer vision were completely new, but I could
see no relationship between those algorithms and algorithms in programs
called "expert systems", another hot AI topic.
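[For readers who have not met it, the alpha-beta pruning mentioned above fits in a dozen lines. This sketch works over an explicit game tree given as nested lists, with integers as leaf evaluations; it is an illustration, not any particular program's search routine. -- Ed.]

```python
# Minimal alpha-beta pruning over an explicit game tree: nested lists
# are internal nodes, integers are leaf values. Illustrative only.

def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    if isinstance(node, int):              # leaf: static evaluation
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff: opponent avoids this line
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:              # cutoff: we avoid this line
                break
        return value

# Each subtree is a MIN node here: min values are 3, 2, 0; MAX picks 3.
print(alphabeta([[3, 5], [2, 9], [0, 1]]))
```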
[Agreed, but the gap is narrowing. There have been 1 or 2 dozen
good AI/vision dissertations, but the chief link has been that many
individuals and research departments interested in one area have
also been interested in the other. -- KIL]
As for expert systems, I could see no relationship between one expert system
and the next. An expert system seems to be a program that uses a lot of
problem-related hacks to usually come up with the right answer. Some of
the "knowledge representation" schemes (translated "data structures") are
nice, but everyone seems to use different ones. I have read several tech
reports describing recent expert systems, so I am not totally ignorant.
What is all the noise about? Why is so much money being waved around?
There seems to be nothing more to expert systems than to other complicated
programs.
[My own somewhat heretical view is that the "expert system" title
legitimizes something that every complicated program has been found
to need: hackery. A rule-based system is sufficiently modular that
it can be hacked hundreds of times before it is so cumbersome
that the basic structures must be rewritten. It is software designed
to grow, as opposed to the crystalline gems of the "optimal X" paradigm.
The best expert systems, of course, also contain explanatory capabilities,
hierarchical inference, constrained natural language interfaces, knowledge
base consistency checkers, and other useful features. -- KIL]
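[The modularity claimed above can be made concrete. The sketch below is a minimal forward-chaining rule engine -- no particular expert system's machinery, and all rule names are invented -- showing that "hacking in" a new special case is just appending one more independent rule. -- Ed.]

```python
# A minimal forward-chaining rule engine: each rule is an independent
# (conditions, conclusion) pair, so adding a special case never requires
# touching existing rules. Illustrative; not any particular system.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all present,
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (('has-fever', 'has-rash'), 'suspect-measles'),
    (('suspect-measles',), 'recommend-isolation'),
]
# Hacking in a new case is one appended rule; the old ones are untouched:
rules.append((('has-fever', 'recent-travel'), 'suspect-malaria'))

print(sorted(forward_chain({'has-fever', 'has-rash'}, rules)))
```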
I know that numerical analysis and compiler writing are well developed fields
because there is a standard way of thinking that is associated with each
area and because a non-expert can use tools provided by experts to perform
computation or write a parser without knowing how the tools work. In fact,
a good test of an area within computer science is whether there are tools
that a non-expert can use to do things that, ten years ago, only experts
could do. Is there anything like this in AI? Are there natural language
processors that will do what YACC does for parsing computer languages?
There seem to be a number of answers to me:
1) Because of my indoctrination at Cornell, I categorize much of the
important results of AI in other areas, thus discounting the achievements
of AI.
2) I am even more ignorant than I thought, and you will enlighten me.
3) Although what I have said describes many other areas of AI fairly well,
yours is an exception.
4) Although what I have said describes past results of AI, major achievements
are just around the corner.
5) I am correct.
You may be saying to yourself, "Is this guy serious?" Well, sort of. In
any case, this should generate more interesting and useful information
than trying to define intelligence, so please treat me seriously.
Ralph Johnson
------------------------------
Date: Thu 17 Nov 83 16:57:55-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Conference Announcements and Call for Papers
[Reprinted from the SU-SCORE bboard.]
Image Technology 1984 37th annual conference May 20-24, 1984
Boston, Mass. Jim Clark, papers chairman
British Robot Association 7th annual conference 14-17 May 1984
Cambridge, England Conference director-B.R.A. 7,
British Robot Association, 28-30 High Street, Kempston, Bedford
MK42 7AJ, England
First International Conference on Computers and Applications
Beijing, China, June 20-22, 1984 co-sponsored by CIE computer society
and IEEE computer society
CMG XIV conference on computer evaluation--preliminary agenda
December 6-9, 1983 Crystal City, Va.
International Symposium on Symbolic and Algebraic Computation
EUROSAM 84 Cambridge, England July 9-11, 1984 call for papers
M. Mignotte, Centre de Calcul, Universite Louis Pasteur, 7 rue
Rene Descartes, F-67084 Strasbourg, France
ACM Computer Science Conference The Future of Computing
February 14-16, 1984 Philadelphia, Penn. Aaron Beller, Program
Chair, Computer and Information Science Department, Temple University
Philadelphia, Penn. 19122
HL
------------------------------
Date: Fri 18 Nov 83 04:00:10-CST
From: Werner Uhrig <CMP.WERNER@UTEXAS-20.ARPA>
Subject: ***** Call for Papers: LISP and Functional Programming *****
please help spread the word by announcing it on your local machines. thanks
---------------
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
() CALL FOR PAPERS ()
() 1984 ACM SYMPOSIUM ON ()
() LISP AND FUNCTIONAL PROGRAMMING ()
() UNIVERSITY OF TEXAS AT AUSTIN, AUGUST 5-8, 1984 ()
() (Sponsored by the ASSOCIATION FOR COMPUTING MACHINERY) ()
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
This is the third in a series of biennial conferences on the LISP language and
issues related to applicative languages. Especially welcome are papers
addressing implementation problems and programming environments. Areas of
interest include (but are not restricted to) systems, large implementations,
programming environments and support tools, architectures, microcode and
hardware implementations, significant language extensions, unusual applications
of LISP, program transformations, compilers for applicative languages, lazy
evaluation, functional programming, logic programming, combinators, FP, APL,
PROLOG, and other languages of a related nature.
Please send eleven (11) copies of a detailed summary (not a complete paper) to
the program chairman:
Guy L. Steele Jr.
Tartan Laboratories Incorporated
477 Melwood Avenue
Pittsburgh, Pennsylvania 15213
Submissions will be considered by each member of the program committee:
Robert Cartwright, Rice William L. Scherlis, Carnegie-Mellon
Jerome Chailloux, INRIA Dana Scott, Carnegie-Mellon
Daniel P. Friedman, Indiana Guy L. Steele Jr., Tartan Laboratories
Richard P. Gabriel, Stanford David Warren, Silogic Incorporated
Martin L. Griss, Hewlett-Packard John Williams, IBM
Peter Henderson, Stirling
Summaries should explain what is new and interesting about the work and what
has actually been accomplished. It is important to include specific findings
or results and specific comparisons with relevant previous work. The committee
will consider the appropriateness, clarity, originality, practicality,
significance, and overall quality of each summary. Time does not permit
consideration of complete papers or long summaries; a length of eight to twelve
double-spaced typed pages is strongly suggested.
February 6, 1984 is the deadline for the submission of summaries. Authors will
be notified of acceptance or rejection by March 12, 1984. The accepted papers
must be typed on special forms and received by the program chairman at the
address above by May 14, 1984. Authors of accepted papers will be asked to
sign ACM copyright forms.
Proceedings will be distributed at the symposium and will later be available
from ACM.
Local Arrangements Chairman General Chairman
Edward A. Schneider Robert S. Boyer
Burroughs Corporation University of Texas at Austin
Austin Research Center Institute for Computing Science
12201 Technology Blvd. 2100 Main Building
Austin, Texas 78727 Austin, Texas 78712
(512) 258-2495 (512) 471-1901
CL.SCHNEIDER@UTEXAS-20.ARPA CL.BOYER@UTEXAS-20.ARPA
------------------------------
End of AIList Digest
********************