AIList Digest Wednesday, 12 Feb 1986 Volume 4 : Issue 25
Today's Topics:
Journals - New Journal on Applied AI & CACM Invitation to Authors,
Conference - NCAI Exhibit Program,
Theory - Technology Review Article & Taxonomizing in AI
----------------------------------------------------------------------
Date: Sat, 25 Jan 86 14:21:41 est
From: FOXEA%VTVAX3.BITNET@WISCVM.WISC.EDU
Subject: New Journal on Applied AI
[Forwarded from the IRList Digest.]
New Journal: Applied Artificial Intelligence, An International Journal
Publication Information: published quarterly starting March 86
Rates: $55/volume indiv ($88 institutional) plus $24 air mail postage
Contacts: order with check or money order to -
Hemisphere Publishing Corporation, Journals Dept., 79 Madison Ave.
New York, New York 10016
Information: Elizabeth D'Costa, Circulation Mgr. (212) 725-1999
Aims and Scope: Applied Artificial Intelligence is intended to help
AI researchers exchange information about advances and experiences in
the field. It will also help decision makers in industry and management
understand the accomplishments and limitations of the state of the art
in artificial intelligence.
Research to be presented will focus on methodology, time schedules,
problems, work-force strength, new tools, the transfer of theoretical
accomplishments to application problems, and the exchange of information
between concerned AI researchers and decision makers about the potential
impact of AI work on their decisions.
------------------------------
Date: Mon 10 Feb 86 22:49:05-PST
From: Peter Friedland <FRIEDLAND@SUMEX-AIM.ARPA>
Subject: Invitation to Authors
I have recently been named to the Editorial Panel of Communications
of the ACM (CACM) with responsibility for artificial intelligence. CACM
is by far the widest-read computing publication with a current circulation
of over 75,000. I would like to encourage submissions to CACM in one of
several forms: articles of general interest (surveys, tutorials, reviews),
research contributions (original, previously-unpublished reports on
significant research), and reports on conferences or committee meetings.
In particular, manuscripts which act to bridge the gap between artificial
intelligence research and traditional computing methodologies are welcome.
All contributions will be fully reviewed with authors normally notified of
acceptance or rejection within 3 months of receipt.
In addition, CACM intends to devote substantial amounts of space
to special collections of related, high-quality, "Scientific American-like"
articles. For examples, see the September 1985 issue on "Architectures for
Knowledge-Based Systems" or the November 1985 issue on "Frontiers of
Computing in Science and Engineering." These special sections are usually
composed of invited papers selected by a guest editor from the community.
Professional editors at ACM headquarters devote on the order of man-weeks
per article to developing graphics and helping make the articles readable
by a wide cross-section of the computing community. I welcome suggestions
(and volunteers) from anybody in the AI community for such special sections.
Articles and research contributions should be submitted directly
to: Janet Benton
Executive Editor, CACM
11 West 42nd St.
New York, NY 10036
Ideas for articles or special sections, and volunteers for helping
in the review process to ensure the highest quality of AI publication
in CACM, should be sent to me as FRIEDLAND@SUMEX (or call 415-497-3728).
Peter Friedland
------------------------------
Date: Mon 10 Feb 86 11:39:47-PST
From: AAAI <AAAI-OFFICE@SUMEX-AIM.ARPA>
Subject: Special Invitation
The AAAI would like to extend a special invitation to academic
institutions and non-profit research laboratories to participate
in this year's Exhibit Program at the National Conference on
Artificial Intelligence, August 11-15, 1986 in the Philadelphia
Civic Center. It's important to communicate what universities and
laboratories are doing in AI by demonstrating their different
research projects to our conference attendees.
The AAAI will provide one 10' x 10' booth free of charge, describe
your demonstration in the Exhibit Guide, and assist you with your
logistical arrangements. Although we cannot provide support
equipment (e.g., phone lines or computers), we can direct you to
different vendors who may be able to assist you with your equipment
needs.
If you and your department are interested in participating, please
call Ms. Lorraine Cooper at the AAAI (415) 328-3123.
------------------------------
Date: 3 Feb 86 19:46:53 GMT
From: ulysses!burl!clyde!watmath!utzoo!utcsri!utai!lamy@ucbvax.berkeley.edu
(Jean-Francois Lamy)
Subject: Re: Technology Review article
In article <7500002@ada-uts.UUCP> richw@ada-uts.UUCP writes:
>like: "In 25 years, AI has still not lived up to its promises and
>there's no reason to think it ever will"
Still thinking that fundamental breakthroughs in AI are achievable in such an
infinitesimal amount of time as 25 years is naive. I probably was not even
born when such claims could have been justified by sheer enthusiasm... Not
that we cannot get interesting and perhaps even useful developments in the
next 25 years.
>P.S. You might notice that about 10 pages into the issue, there's
> an ad for some AI system. I bet the advertisers were real
> pleased about the issue's contents...
Nowadays you don't ask for a grant or try to sell a product if the words "AI,
expert systems, knowledge engineering techniques, fifth generation and natural
language processing" are not included.
Advertising is about creating hype, and it really works -- for a while,
until the next "in" thing comes around.
Jean-Francois Lamy
Department of Computer Science, University of Toronto,
Departement d'informatique et de recherche operationnelle, U. de Montreal.
CSNet: lamy@toronto.csnet
UUCP: {utzoo,ihnp4,decwrl,uw-beaver}!utcsri!utai!lamy
CDN: lamy@iro.udem.cdn (lamy%iro.udem.cdn@ubc.csnet)
------------------------------
Date: Fri, 7 Feb 86 20:51:58 PST
From: larry@Jpl-VLSI.ARPA
Subject: Sparklers from the Tech Review
I haven't read the Tech Review article; perhaps I shall, just to see how
different my interpretation of it will be from the opinions heard here. The
discussion has made me want to offer some ideas of my own.
What we lump under AI is several different fields of research with often very
different if not contradictory approaches. As a dilettante in the AI field I
perceive the following:
COGNITIVE PSYCHOLOGY (a more restricted area than Cognitive Science) attempts
to understand biologically based thinking using behavioral and psychiatric
concepts and methods. This includes the effects emotional and social forces
exert on cognition. This group is increasingly borrowing from the following
groups.
COGNITIVE SCIENCE attempts to broaden the study to include machine-based
cognition. CS introduces heavy doses of metaphysics, logic, linguistics, and
information theory. My impression is that this area is too heavily invested
in symbol-processing research and could profitably spend more time on analog
computation and associative memories. These may better model humans' near-
instantaneous decision-making, which is more like doing a vector-sum than
doing massively parallel logical inferences.
PATTERN RECOGNITION, ROBOTICS, ETC. attempts to engineer cognition into
machines. Many workers in this field have a strong "hard-science" background
and a pragmatic approach; they often don't care whether they reproduce
biological cognition or merely mimic it.
EXPERT SYSTEMS, KNOWLEDGE ENGINEERING is more software engineering than
hardware engineering. Logic, computer science, and database theory are strong
here. Some of the simpler expert systems are eminently practical and have
been around for decades--though "programmed" into trouble-shooting books and
the like rather than a computer. (And while we're on this, most of what now
passes for rule-based programming could be done in BASIC or assembly language,
including self-modifying code, using fairly simple table-driven techniques.)
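The table-driven technique mentioned above can be sketched in a few lines.
The rules and facts here are invented trouble-shooting examples, and Python
stands in for the BASIC or assembly the author has in mind; the rules live
in a plain data table, separate from the small interpreter that fires them:

```python
# Hypothetical forward-chaining rule table: each entry pairs a set of
# conditions with a fact to assert when all conditions hold.
RULES = [
    ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
    ({"engine_cranks", "no_fuel"}, "check_fuel_pump"),
    ({"check_ignition_coil"}, "replace_coil_if_faulty"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are met, until nothing
    new can be derived. Returns the full set of derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"engine_cranks", "no_spark"})
print(sorted(derived))
```

Because the "knowledge" is just a table, the same loop could be driven by
data stored in a trouble-shooting book, which is the author's point.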
And perhaps several more groups could be distinguished. Of course, there are
plenty of exceptions to these categories, but humans do self-select into
groups and distill ideas and techniques into a rudimentary group persona.
If I were to characterize myself, I'd probably say that I'm less interested in
AI than IA--Intelligence Amplification. I'm interested in attempts to create
machine versions of human intelligence and I have little doubt that all the
vaunted "mystical" abilities of humans will eventually be reproduced,
including self-awareness.
Some of these abilities may be much easier to reproduce than we suppose:
intuition, for instance. I'm an artist in several media and use intuition
routinely. I've spent a lot of time introspecting about what happens when I
"solve" artistic problems, and I've learned how to "program" my undermind so
that I can promise solutions with considerable reliability. I believe I could
build an intuitive computer.
But what fascinates me is the idea of building systems which combine the best
capabilities of human and machine to overcome the limits of both. I think
it's much more economical, practical, and probably even humane to, say, make a
language-translation system that uses computers to do rapid, rough
translations of 99% of a text and uses human sensitivities and skills to
polish and
validate the translations. (Stated like that it sounds like two batch jobs
with a pipe between them. My concept is an interactive system with both human
and computer collaborating on the job, with the human doing continuous shaping
and scheduling of the entire process.)
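The contrast drawn above, two batch jobs with a pipe versus an interactive
collaboration, might be sketched as follows (the word-for-word "translator"
and all names are invented stand-ins for illustration only):

```python
# Toy rough-translation table standing in for the machine translator.
ROUGH = {"bonjour": "hello", "monde": "world"}

def machine_rough(text):
    """Batch step: fast, literal word-for-word translation; unknown
    words are flagged rather than guessed."""
    return " ".join(ROUGH.get(w, f"[{w}?]") for w in text.split())

def batch_pipeline(text, human_polish):
    """Two batch jobs with a pipe: the human only sees the finished
    rough draft, after the machine is done."""
    return human_polish(machine_rough(text))

def interactive_pipeline(text, human_review):
    """Collaboration: the human steers word by word, validating each
    machine guess and supplying what the machine cannot handle."""
    out = []
    for word in text.split():
        guess = ROUGH.get(word)
        out.append(human_review(word, guess))
    return " ".join(out)

# A trivially agreeable human reviewer: accept the guess, else keep the word.
polished = interactive_pipeline("bonjour monde", lambda w, g: g or w)
print(polished)
```

In the interactive version the human shapes the process as it runs, which
is the distinction being made: collaboration rather than a pipe.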
Now I'll go back to being an interested by-stander for another six months!
Larry @ JPL-VLSI.arpa
------------------------------
Date: 3 Feb 86 18:04:58 GMT
From: amdcad!lll-crg!seismo!rochester!lab@ucbvax.berkeley.edu (Lab Manager)
Subject: Re: Technology Review article
In article <7500002@ada-uts.UUCP> richw@ada-uts.UUCP writes:
>
>Has anyone read the article about AI in the February issue of
>"Technology Review"? You can't miss it -- the cover says something
>like: "In 25 years, AI has still not lived up to its promises and
>there's no reason to think it ever will" (not a direct quote; I don't
>have the copy with me). General comments?
They basically say that things like the blocks world don't scale up, and
that AI can't model intuition because 'real people' aren't thinking
machines. An appropriate rebuttal to these two self-styled
philosophers:
"In 3000 years, Philosophy has still not lived up to its promises and
there's no reason to think it ever will."
Brad Miller Arpa: lab@rochester.arpa UUCP: rochester!lab
(also miller@rochester for non-lab stuff)
Title: CS Lab Manager
Snail: University of Rochester Computer Science Dept.
617 Hylan Building Rochester NY 14627
------------------------------
Date: Sun, 9 Feb 86 16:38:38 est
From: "Marek W. Lugowski" <marek%indiana.csnet@CSNET-RELAY.ARPA>
Subject: Taxonomizing in AI: neither useful nor harmless
> [Stan Shebs:] In article <3600036@iuvax.UUCP> marek@iuvax.UUCP writes:
>
> Date: 4 Feb 86 19:55:00 GMT
> From: ihnp4!inuxc!iubugs!iuvax!marek@ucbvax.berkeley.edu
>
> ha ha ha! "taxonomy of the field" -- the latest gospel of AI? Let me be
> impudent enough to claim one of the most misguided AI efforts to date is
> taxonomizing a la Michalski et al: setting up categories along arbitrary
> lines dictated by somebody or other's intuition. If AI does not have
> the mechanism-cum-explanation to describe a phenomenon, what right does it
> have to a) taxonomize it and b) demand that its taxonomizing be recognized
> as an achievement?
> -- Marek Lugowski
>
> I assume you have something wonderful that we haven't heard about?
I assume that you are intentionally jesting, equating that which I criticize
with all that AI has to offer. Taxonomizing is a debatable art of
empirical science, usually justified when a scientist finds themselves
overwhelmed with gobs and gobs of identifiable specimens, as in
entomology. But AI is not overwhelmed by gobs and gobs of tangible
singulars; it is a constructive endeavor that puts up putative
mechanisms to be replaced by others. The kinds of learning Michalski so
effortlessly plucks out of thin air are not as incontrovertibly real and
graspable as instances of dead bugs.
One could argue, I suppose, that taxonomizing in the absence of multitudes of
real specimens is a harmless way of pursuing tenure, but I argue in
Indiana U. Computer Science Technical Report No. 176, "Why Artificial
Intelligence is Necessarily Ad Hoc: Your Thinking/Approach/Model/Solution
Rides on Your Metaphors", that it causes grave harm to the field. E-mail
nlg@iuvax.uucp for a copy, or write to Nancy Garrett at Computer Science
Department, Lindley Hall 101, Indiana University, Bloomington, Indiana
47406.
> Or do you believe that because there are unsolved problems in physics,
> chemists and biologists have no right to study objects whose behavior is
> ultimately described in terms of physics?
>
> stan shebs
> (shebs@utah-orion)
TR #176 also happens to touch on the issue of how ill-formed Stan Shebs's
rhetorical question is and how this sort of analogizing has gotten AI into
its current (sad) shape.
Please consider whether taxonomizing kinds of learning from the AI perspective
in 1981 is at all analogous to chemists' and biologists' "right to study the
objects whose behavior is ultimately described in terms of physics." If so,
when is the last time you saw a biology/chemistry text titled "Cellular
Resonance" in which three authors offer an exhaustive table of carcinogenic
vibrations, presented as a collection of current papers in oncology?...
More constructively, I am in the process of developing an abstract machine.
I think that developing abstract machines is more in line with my work as
an AI worker than postulating arbitrary taxonomies where there is neither
need for them nor raw material.
-- Marek Lugowski
an AI graduate student
Indiana U. CS
------------------------------
End of AIList Digest
********************