AIList Digest           Saturday, 10 Dec 1983     Volume 1 : Issue 111 

Today's Topics:
Call for Papers - Special Issue of AJCL,
Linguistics - Phrasal Analysis Paper,
Intelligence - Purpose of Definition,
Expert Systems - Complexity,
Environments - Need for Sharable Software,
Jargon - Mental States,
Administrivia - Spinoff Suggestion,
Knowledge Representation - Request for Discussion
----------------------------------------------------------------------

Date: Thu 8 Dec 83 08:55:34-PST
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: Special Issue of AJCL

American Journal of Computational Linguistics

The American Journal of Computational Linguistics is planning a
special issue devoted to the Mathematical Properties of Linguistic
Theories. Papers are hereby requested on the generative capacity of
various syntactic formalisms as well as the computational complexity
of their related recognition and parsing algorithms. Articles on the
significance (and the conditions for the significance) of such results
are also welcome. All papers will be subjected to the normal
refereeing process and must be accepted by the Editor-in-Chief, James
Allen. In order to allow for publication in Fall 1984, five copies of
each paper should be sent by March 31, 1984 to the special issue
editor,

C. Raymond Perrault        Arpanet: Rperrault@sri-ai
SRI International          Telephone: (415) 859-6470
EK268
Menlo Park, CA 94025.

Indication of intention to submit would also be appreciated.

------------------------------

Date: 8 Dec 1983 1347-PST
From: MEYERS.UCI-20A@Rand-Relay
Subject: phrasal analysis paper


Over a month ago, I announced that I'd be submitting
a paper on phrasal analysis to COLING. I apologize
to all those who asked for a copy for not getting it
to them yet. COLING acceptance date is April 2,
so this may be the earliest date at which I'll be releasing
papers. Please do not lose heart!

Some preview of the material might interest AILIST readers:

The paper is entitled "Conceptual Grammar", and discusses
a grammar that uses syntactic and 'semantic' nonterminals.
Very specific and very general information about language
can be represented in the grammar rules. The grammar is
organized into explicit levels of abstraction.
The emphasis of the work is pragmatic, but I believe it
represents a new and useful approach to Linguistics as
well.

Conceptual Grammar can be viewed as a systematization of the
knowledge base of systems such as PHRAN (Wilensky and Arens,
at UC Berkeley). Another motivation for a conceptual grammar is
the lack of progress in language understanding using syntax-based
approaches. A third motivation is the lack of intuitive appeal
of existing grammars -- existing grammars offer no help in manipulating
concepts the way humans might. Conceptual Grammar is
an 'open' grammar at all levels of abstraction. It is meant
to handle special cases, exceptions to general rules, idioms, etc.
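
As a purely hypothetical illustration (the post does not give the
actual rule format of Conceptual Grammar or VOX), rules mixing
ordinary syntactic nonterminals with concept-level 'semantic'
categories might look something like the following sketch; all
category names and vocabulary here are invented.

# Hypothetical sketch of a grammar mixing syntactic and 'semantic'
# nonterminals. NOT taken from the Conceptual Grammar or VOX papers;
# all rules, category names, and vocabulary are invented.

GRAMMAR = {
    # general, syntactic level
    "SENTENCE":    [["NOUN-PHRASE", "VERB-PHRASE"]],
    # specific, concept-level ('semantic') categories
    "NOUN-PHRASE": [["VESSEL"], ["PORT"]],
    "VERB-PHRASE": [["MOTION-ACT", "PORT"]],
    "VESSEL":      [["frigate"], ["carrier"]],
    "PORT":        [["norfolk"], ["yokosuka"]],
    "MOTION-ACT":  [["arrived", "at"], ["departed", "from"]],
}

def match(symbol, words, i):
    """Return the positions reachable after matching symbol at words[i]."""
    if symbol not in GRAMMAR:                       # terminal word
        return [i + 1] if i < len(words) and words[i] == symbol else []
    ends = []
    for alternative in GRAMMAR[symbol]:
        positions = [i]
        for sym in alternative:
            positions = [j for p in positions for j in match(sym, words, p)]
        ends.extend(positions)
    return ends

def recognize(sentence):
    words = sentence.lower().split()
    return len(words) in match("SENTENCE", words, 0)

print(recognize("frigate arrived at norfolk"))   # True
print(recognize("norfolk arrived at frigate"))   # False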

Papers on the implemented system, called VOX, will follow
in the near future. VOX analyzes messages in the Navy domain.
(However, the approach to English is completely general.)

If anyone is interested, I can elaborate, though it is
hard to discuss such work in this forum. Requests
for papers (and for abstracts of UCI AI Project papers)
can be sent by computer mail, or 'snail-mail' to:

Amnon Meyers
AI Project
Department of Computer Science
University of California
Irvine, CA 92717

PS: A paper has already been sent to CSCSI. The papers emphasize
different aspects of Conceptual Grammar. A paper on VOX as
an implementation of Conceptual Grammar is planned for AAAI.

------------------------------

Date: 2 Dec 83 7:57:46-PST (Fri)
From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
Subject: Re: Rational Psych (and science)
Article-I.D.: hou2g.121

It is true that psychology is not a "science" in the way a physicist
defines "science". Of course, a physicist would be likely to bend
his definition of "science" to exclude psychology.

The situation is very much the same as defining "intelligence".
Social "scientists" keep tightening their definition of intelligence
as required to exclude anything which isn't a human being. AI people
now argue over what intelligence is, but when an artificial system is
built with the mental ability of a mouse (the biological variety!),
all definitions of intelligence will quickly be bent to include it.

The real significance of a definition is that it clarifies the *direction*
in which things are headed. Defining "intelligence" in terms of
adaptability and self-consciousness is evidence of a healthy direction
for AI.

Jim

------------------------------

Date: Fri 9 Dec 83 16:08:53-PST
From: Peter Karp <KARP@SUMEX-AIM.ARPA>
Subject: Biologists, physicists, and report generating programs

I'd like to ask Mr. Fostel how biologists "do research in ways that seem
different than physicists". It would be pretty exciting to find that
one or both of these two groups do science in a way that is not part of
standard scientific method.

He also makes the following claim:

... the complexity of the commercial software I mentioned is
MUCH greater than the usual problem attacked by AI people...

With the example that:

... designing the reports and generating them for a large complex
system (and doing a good job) may take a large fraction of the total
time, yet such reporting is not usually done in the AI world.

This claim is rather absurd. While I will not claim that deciding on
the best way to present a large amount of data is a trivial task, the
point is that report generating programs have no knowledge about data
presentation strategies. People who do have such knowledge spend hours
and hours deciding on a good scheme and then HARD CODING such a scheme
into a program. Surely one would not claim that a program consisting
solely of a set of WRITELN (or insert your favorite output keyword)
statements has any complexity at all, much less intelligence or
knowledge? Just because a program takes a long time to write doesn't
mean it has any complexity, in terms of control structures or data
structures. And in fact this example illustrates the point perfectly.
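
To make the point concrete, here is a toy sketch (not from Karp's
message; the record fields are invented) of the kind of program he
describes: the presentation scheme was worked out by a person and then
frozen into straight-line output statements, so the program itself
contains no knowledge about presentation.

# Toy illustration of a hard-coded report: the layout decisions
# (column order, widths, headings) were made offline by a person and
# are merely replayed here. Field names are invented.

def print_report(records):
    print(f"{'Name':<16}{'Department':<14}{'Hours':>7}")
    print("-" * 37)
    for r in records:
        print(f"{r['name']:<16}{r['dept']:<14}{r['hours']:>7.1f}")

print_report([
    {"name": "A. Jones", "dept": "Accounting", "hours": 37.5},
    {"name": "B. Smith", "dept": "Shipping",   "hours": 41.0},
])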

------------------------------

Date: 2 Dec 83 15:27:43-PST (Fri)
From: sri-unix!hplabs!hpda!fortune!amd70!decwrl!decvax!duke!mcnc!shebs
@utah-cs.UUCP (Stanley Shebs)
Subject: Re: RE: Expert Systems
Article-I.D.: utah-cs.2279

A large data-processing application is not an expert system because
it cannot explain its actions, nor is the knowledge represented in an
adequate fashion. A "true" expert system would *not* consist of
algorithms as such. It would consist of facts and heuristics organized
in a fashion to permit some (relatively uninteresting) algorithmic
interpreter to generate interesting and useful behavior. Production
systems are a good example. The interpreter is fixed - it just selects
rules and fires them. The expert system itself is a collection of rules,
each of which represents a small piece of knowledge about the domain.
This is of course an idealization - many "expert systems" have a large
procedural component. Sometimes the existence of that component can
even be justified...
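
A minimal sketch of the architecture described above, with rules and
facts invented for illustration (this is not any particular expert
system): the interpreter below is fixed and uninteresting, while the
knowledge lives entirely in the rule set it runs over.

# Minimal production-system sketch: a fixed interpreter repeatedly
# selects an applicable rule and fires it; all domain knowledge lives
# in the rules. Rule names and facts are invented.

RULES = [
    # (name, conditions that must hold, facts asserted when it fires)
    ("leak-implies-pressure-drop", {"valve-leaking"}, {"pressure-dropping"}),
    ("pressure-drop-implies-alarm", {"pressure-dropping"}, {"raise-alarm"}),
]

def run(facts):
    """Fire rules to quiescence; the trace doubles as a crude explanation."""
    changed = True
    while changed:
        changed = False
        for name, conditions, additions in RULES:
            if conditions <= facts and not additions <= facts:
                facts |= additions
                print("fired:", name)
                changed = True
    return facts

print("raise-alarm" in run({"valve-leaking"}))   # True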

stan shebs
utah-cs!shebs

------------------------------

Date: Wed, 7 Dec 1983 05:39 EST
From: LEVITT%MIT-OZ@MIT-MC.ARPA
Subject: What makes AI crawl

From: Seth Goldman <seth@UCLA-CS>
Subject: Programming environments are fine, but...

What are all of you doing with your nifty, adequate, and/or brain-damaged
computing environments? Also, if we're going to discuss environments, it
would be more productive I think to give concrete examples...
[Sounds good to me. It would be interesting to know
whether progress in AI is currently held back by conceptual
problems or just by the programming effort of building
large and user-friendly systems. -- KIL]

It's clear to me that, despite a relative paucity of new "conceptual"
AI ideas, AI is being held back entirely by the latter "programming
effort" problem, AND by the failure of senior AI researchers to
recognize this and address it directly. The problem is regressive:
because programming problems are SO hard, the senior faculty typically
give up programming altogether and lose touch with the problems.

Nobody seems to realize how close we would be to practical AI, if just
a handful of the important systems of the past were maintained and
extended, and if the most powerful techniques were routinely applied
to new applications - if an engineered system with an ongoing,
expanding knowledge base were developed. Students looking for theses
and "turf" are reluctant to engineer anything familiar-looking. But
there's every indication that the proven techniques of the 60's/early
70's could become the core of a very smart system with lots of
overlapping knowledge in very different subjects, opening up much more
interesting research areas - IF the whole thing didn't have to be
(re)programmed from scratch. AI is easy now, showing clear signs of
diminishing returns; CS/software engineering is hard.

I have been developing systems for the kinds of analogy problems music
improvisors and listeners solve when they use "common sense"
descriptions of what they do/hear, and of learning by ear. I have
needed basic automatic constraint satisfaction systems
(Sutherland'63), extensions for dependency-directed backtracking
(Sussman'77), and example comparison/extension algorithms
(Winston'71), to name a few. I had to implement everything myself.
When I arrived at MIT AI there were at least 3 OTHER AI STUDENTS
working on similar constraint propagator/backtrackers, each sweating
out his version for a thesis critical path, resulting in a draft
system too poorly engineered and documented for any of the other
students to use. It was idiotic. In a sense we wasted most of our
programming time, and would have been better off ruminating about
unfamiliar theories like some of the faculty. Theories are easy (for
me, anyway). Software engineering is hard. If each of the 3 ancient
discoveries above were an available module, AI researchers could have
theories AND working programs, a fine show.
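
As a rough illustration of the sort of reusable module the post asks
for, here is a minimal constraint-satisfaction sketch. The toy problem
is invented, and the search is plain chronological backtracking, not
the dependency-directed kind Sussman describes.

# Minimal constraint-satisfaction sketch with chronological
# backtracking. The variables, domains, and constraints are invented.

def consistent(partial, constraints):
    for c in constraints:
        try:
            if not c(partial):
                return False
        except KeyError:        # constraint mentions an unbound variable
            continue
    return True

def solve(variables, domains, constraints, partial=None):
    partial = dict(partial or {})
    if len(partial) == len(variables):
        return partial
    var = next(v for v in variables if v not in partial)
    for value in domains[var]:
        partial[var] = value
        if consistent(partial, constraints):
            result = solve(variables, domains, constraints, partial)
            if result is not None:
                return result
    return None

# Toy problem: three values must be strictly increasing and end at 7.
variables = ["a", "b", "c"]
domains = {v: list(range(1, 8)) for v in variables}
constraints = [lambda s: s["a"] < s["b"],
               lambda s: s["b"] < s["c"],
               lambda s: s["c"] == 7]
print(solve(variables, domains, constraints))   # {'a': 1, 'b': 2, 'c': 7}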

------------------------------

Date: Thu, 8 Dec 83 11:56 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states of machines

I have no problem with using anthropomorphic (or "mental") descriptions of
systems as a heuristic for dealing with difficult problems. One such
trick I especially approve of is Seymour Papert's "body syntonicity"
technique. The basic idea is to get young children to understand the
interaction of mathematical concepts by getting them to enter into a
turtle world and become an active participant in it, and to use this
perspective for understanding the construction of geometric structures.
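
For concreteness, a minimal sketch of the kind of turtle-world
construction Papert describes, with Python's standard turtle module
standing in for Logo; the child "becomes" the turtle, walking and
turning to trace a square.

# Walk-and-turn construction of a square in a turtle world.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # walk 100 steps
    t.right(90)      # turn right a quarter turn

turtle.done()        # keep the window open until closed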

What I am objecting to is the sense that John McCarthy is implying
something more in his article: that human mental states are no different
from the very complex systems that we sometimes describe, as a shorthand,
in mental terms.

I would refer to Ilya Prigogine's 1977 Nobel Prize-winning work in
chemistry on "Dissipative Structures" to illustrate the foolishness of
McCarthy's claim.

Dissipative structures can be explained to some extent to non-chemists by means
of the termite analogy. Termites construct large, rich, and complex domiciles.
These structures are sometimes six feet tall and are filled with complex
arches and domed structures (it took human architects many thousands of
years to come up with these concepts). Yet if one watches termites at
the lowest "mechanistic" level (one termite at a time), all one sees
is a termite randomly placing drops of sticky wood pulp in random spots.

What Prigogine noted was that there are parallels in chemistry, where
random underlying processes spontaneously give rise to complex and
richly ordered structures at higher levels.

If I accept McCarthy's argument that complex systems based on finite-state
automata exhibit mental characteristics, then I must also hold that termite
colonies have mental characteristics, that Douglas Hofstadter's Aunt Hillary
has mental characteristics, and that certain colloidal suspensions and
amorphous crystals do as well.

- Steven Gutfreund
Gutfreund.umass@csnet-relay

[I, for one, have no difficulty with assigning mental "characteristics"
to inanimate systems. If a computer can be "intelligent", and thus
presumably have mental characteristics, why not other artificial
systems? I admit that this is Humpty-Dumpty semantics, but the
important point to me is the overall I/O behavior of the system.
If that behavior depends on a set of (discrete or continuous) internal
states, I am just as happy calling them "mental" states as calling
them anything else. To reserve the term mental for beings having
volition, or souls, or intelligence, or neurons, or any other
intuitive characteristic seems just as arbitrary to me. I presume
that "mental" is intended to contrast with "physical", but I side with
those seeing a physical basis to all mental phenomena. Philosophers
worry over the distinction, but all that matters to me is the
behavior of the system when I interface with it. -- KIL]

------------------------------

Date: 5 Dec 83 12:08:31-PST (Mon)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxnn!pyuxmm!cbdkc1!cbosgd!osu-dbs!lum @ Ucb-Vax
Subject: Re: defining AI, AI research methodology, jargon in AI
Article-I.D.: osu-dbs.426

Perhaps Dyer is right. Perhaps it would be a good thing to split net.ai/AIList
into two groups, net.ai and net.ai.d, a la net.jokes and net.jokes.d. In one
the AI researchers could discuss actual AI problems, and in the other,
philosophers could discuss the social ramifications of AI, etc. Take your pick.

Lum Johnson (cbosgd!osu-dbs!lum)

------------------------------

Date: 7 Dec 83 8:27:08-PST (Wed)
From: decvax!tektronix!tekcad!franka @ Ucb-Vax
Subject: New Topic (technical) - (nf)
Article-I.D.: tekcad.155

OK, some of you have expressed a dislike for "non-technical, philo-
sophical, etc." discussions on this newsgroup. So for those of you who are
tired of this, I pose a technical question for you to talk about:

What is your favorite method of representing knowledge in a KBS?
Do you depend on frames, atoms of data jumbled together randomly, or something
in between? Do you have any packages (for public consumption, which run on
machines that most of us have access to) that aid people in setting up
knowledge bases?
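
For concreteness, the "frames" option mentioned above can be sketched
in a few lines; this is illustrative only, with invented frame names
and slots, and is not any existing package.

# Tiny frame sketch: slots plus inheritance along an "is-a" link.
class Frame:
    def __init__(self, name, isa=None, **slots):
        self.name, self.isa, self.slots = name, isa, dict(slots)

    def get(self, slot):
        # Look locally first, then inherit up the is-a chain.
        if slot in self.slots:
            return self.slots[slot]
        return self.isa.get(slot) if self.isa else None

bird   = Frame("bird", locomotion="flies", covering="feathers")
canary = Frame("canary", isa=bird, color="yellow")

print(canary.get("color"))        # yellow  (local slot)
print(canary.get("locomotion"))   # flies   (inherited from bird)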

I think that this should keep this newsgroup talking at least partially
technically for a while. No need to thank me. I just view it as a public
service.

From the truly menacing,
/- -\ but usually underestimated,
<-> Frank Adrian
(tektronix!tekcad!franka)

------------------------------

End of AIList Digest
********************
