AIList Digest           Wednesday, 7 Dec 1983     Volume 1 : Issue 110 

Today's Topics:
AI and Manufacturing - Request,
Bindings - HPP,
Programming Languages - Environments & Productivity,
Vision - Cultural Influences on Perception,
AI Jargon - Mental States of Machines,
AI Challenge & Expert Systems,
Seminar - Universal Subgoaling
----------------------------------------------------------------------

Date: 5 Dec 83 15:14:26 EST (Mon)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: AI and Automated Manufacturing

Some colleagues and I at the University of Maryland are doing a literature
search on the use of AI techniques in Automated Manufacturing.
The results of the literature search will comprise a report to be
sent to the National Bureau of Standards as part of a research
contract. We'd appreciate any relevant information any of you may
have--especially copies of papers or technical reports. In
return, I can send you (on request) copies of some papers I have
published on that subject, as well as a copy of the literature
search when it is completed. My mailing address is

Dana S. Nau
Computer Science Dept.
University of Maryland
College Park, MD 20742

------------------------------

Date: Mon 5 Dec 83 08:27:28-PST
From: HPP Secretary <HPP-SECRETARY@SUMEX-AIM.ARPA>
Subject: New Address for HPP

[Reprinted from the SU-SCORE bboard.]

The HPP has moved. Our new address is:

Heuristic Programming Project
Computer Science Department
Stanford University
701 Welch Road, Bldg. C
Palo Alto, CA 94304

------------------------------

Date: Mon, 5 Dec 83 09:43:51 PST
From: Seth Goldman <seth@UCLA-CS>
Subject: Programming environments are fine, but...

What are all of you doing with your nifty, adequate, and/or brain-damaged
computing environments? Also, if we're going to discuss environments, it
would be more productive I think to give concrete examples of the form:

I was trying to do or solve X
Here is how my environment helped me OR
This is what I need and don't yet have

It would also be nice to see some issues of AIList dedicated to presenting
1 or 2 paragraph abstracts of current work being pursued by readers and
contributors to this list. How about it Ken?

[Sounds good to me. It would be interesting to know
whether progress in AI is currently held back by conceptual
problems or just by the programming effort of building
large and user-friendly systems. -- KIL]

Seth Goldman

------------------------------

Date: Monday, 5 December 1983 13:47:13 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: marcel on "lisp productivity question"

I just thought I should mention that production system languages
share all the desirable features of Prolog mentioned in the previous
message, particularly being "rule-based computing with a clean formalism".
The main differences between Prolog and the OPS family of languages are
that OPS uses primarily forward inference instead of backward inference,
and a slightly different matching mechanism. Preferring one over the other
depends, I
suspect, on whether you think in terms of proofs or derivations.
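
[The contrast is easy to sketch in a few lines of code. The toy Python
below is only an illustration: the two-rule base and the facts are
invented, and neither OPS nor Prolog actually operates on sets of
strings like this.]

    # Toy sketch (invented example): one rule base, run forward
    # (OPS-style) and backward (Prolog-style).
    RULES = [
        ({"rainy"}, "wet_ground"),
        ({"wet_ground"}, "slippery"),
    ]

    def forward_chain(facts):
        """OPS-style: fire any rule whose premises are all in working
        memory, add its conclusion, and repeat until nothing changes."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def backward_chain(goal, facts):
        """Prolog-style: to prove a goal, find a rule concluding it and
        recursively prove each of its premises."""
        if goal in facts:
            return True
        return any(conclusion == goal and
                   all(backward_chain(p, facts) for p in premises)
                   for premises, conclusion in RULES)

    print(forward_chain({"rainy"}))               # data drive the rules
    print(backward_chain("slippery", {"rainy"}))  # goals drive the rules

[The derived conclusions are the same either way; what differs is
whether the data drive the rules or the goals do.]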

------------------------------

Date: Mon, 5 Dec 83 10:23:17 pst
From: evans@Nosc (Evan C. Evans)
Subject: Vision & Such

Ken Laws in AIList Digest 1:99 states: "an adequate answer [to
the question of why computers can't see yet] requires a guess
at how it is that the human vision system can work in all cases."
I cannot answer Ken's question, but perhaps I can provide some
useful input.

language shapes culture (Sapir-Whorf hypothesis)
culture shapes vision (see following)
vision shapes language (a priori)

The influence of culture on perception (vision) takes many forms.
A statistical examination (unpublished) of the British newspaper
game "Where's the ball?" is worth consideration. This game has
been appearing for some time in British, Australian, New Zealand,
& Fijian papers. So far as I know, it has not yet made its
appearance in U.S. papers. The game is played thus:
a photograph of some common sport involving a ball is
published with the ball erased from the picture & the question,
where's the ball? Various members of the readership send in
their guesses & the one closest to the ball's actual position in
the unmodified photo wins. Some time back the responses to several
rounds of this game were subjected to statistical analysis. This
analysis showed that there were statistically valid differences
associated with the cultural background of the participants.
This finding was particularly striking in Fiji, with a resident
population comprising several very different cultural groups.
Ball placement by the different groups tended to cluster at
significantly different locations in the picture, even for a game
like soccer that was well known & played by all. It is
unfortunate that this work (not mine) has not been published. It
does suggest two things: a.) a cultural influence on vision &
perception, & b.) a powerful means of conducting experiments to
learn more about this influence. For instance, this same research
was elaborated into various TV displays designed to discover where
children of various age groups placed an unseen object to which
an arrow pointed. The children responded enthusiastically to
this new TV game, giving their answers by means of a light pen.
Yet statistically significant amounts of data were collected
efficiently & painlessly.

I've constructed the loop above to suggest that none of the three
(vision, language, & culture) should be studied out of context.
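
[The Fiji analysis is unpublished, so its actual method is unknown.
The Python sketch below only illustrates one standard way such a
clustering claim could be tested; the coordinates, the two groups, and
the permutation test are all assumptions made for the example.]

    # Invented data and method: do two groups place the unseen ball at
    # different mean positions?  A simple permutation test.
    import random

    group_a = [(120, 85), (130, 90), (115, 80), (125, 95)]   # (x, y) guesses
    group_b = [(180, 60), (175, 65), (190, 55), (185, 70)]

    def centroid(points):
        n = len(points)
        return (sum(x for x, _ in points) / n,
                sum(y for _, y in points) / n)

    def centroid_gap(a, b):
        (ax, ay), (bx, by) = centroid(a), centroid(b)
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

    observed = centroid_gap(group_a, group_b)

    # Shuffle the group labels and see how often a gap this large
    # arises by chance alone.
    pooled = group_a + group_b
    n_a, extreme, trials = len(group_a), 0, 10000
    for _ in range(trials):
        random.shuffle(pooled)
        if centroid_gap(pooled[:n_a], pooled[n_a:]) >= observed:
            extreme += 1
    print(f"observed gap {observed:.1f}, p ~ {extreme / trials:.4f}")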

E. C. Evans III

------------------------------

Date: Sat 3 Dec 83 00:42:50-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines

Steven Gutfreund's criticism of John McCarthy is unjustified. I
haven't read the article in "Psychology Today", but I am familiar with
the notion put forward by JMC and condemned by SG. The question can
be put in simple terms: is it useful to attribute mental states and
attitudes to machines? The answer is that our terms for mental states
and attitudes ("believe", "desire", "expect", etc...) represent a
classification of possible relationships between world states and the
internal (inaccessible) states of designated individuals. Now, for
simple individuals and worlds, for example small finite automata, it
is possible to classify the world-individual relationships with simple
and tractable predicates. For more complicated systems, however, the
language of mental states is likely to become essential, because the
classifications it provides may well be computationally tractable in
ways that other classifications are not. Remember that individuals of
any "intelligence" must have states that encode classifications of
their own states and those of other individuals. Computational
representations of the language of mental states seem to be the only
means we have to construct machines with such rich sets of states that
can operate in "rational" ways with respect to the world and other
individuals.
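
[One hedged way to make this concrete -- my construction, not
Pereira's: a "believes" predicate as a tractable classification over
an agent's otherwise inaccessible internal state.]

    # Invented sketch: "believes(P)" names a classification of internal
    # states, not an extra mental component inside the machine.
    class Agent:
        def __init__(self):
            self.sensor_history = []   # raw, practically inaccessible detail

        def sense(self, reading):
            self.sensor_history.append(reading)

        # Mental-state vocabulary: one tractable predicate classifying
        # the enormous space of possible internal histories.
        def believes_room_is_cold(self):
            recent = self.sensor_history[-3:]
            return bool(recent) and sum(recent) / len(recent) < 18.0

    a = Agent()
    for r in (17.0, 16.5, 17.5):
        a.sense(r)
    # Describing the agent by the predicate is far more compact than
    # enumerating which of the many possible histories it is in.
    print(a.believes_room_is_cold())   # True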

SG's comment is analogous to the following criticism of our use of the
terms like "execution", "wait" or "active" when talking about the
states of computers: "it is wrong to use such terms when we all know
that what is down there is just a finite state machine, which we
understand so well mathematically."

Fernando Pereira

------------------------------

Date: Mon 5 Dec 83 11:21:56-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: complexity of formal systems

    From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>

    They then resort to arcane languages and to attributing 'mental'
    characteristics to what are basically fuzzy algorithms that have
    been applied to poorly formalized or poorly characterized problems.
    Once the problems are better understood and are given a more precise
    formal characterization, one no longer needs "AI" techniques.

I think Professor McCarthy is thinking of systems (possibly not built yet)
whose complexity comes from size and not from imprecise formalization. A
huge AI program has lots of knowledge, all of which may be precisely
formalized in first-order logic or some other well-understood formalism;
this knowledge may be combined and used by well-understood and precise
inference algorithms, and yet, because of the (for practical purposes)
infinite number of inputs and possible combinations of the individual
knowledge formulas, the easiest (best? only?) way to describe the behavior
of the system is by attributing mental characteristics. Some AI systems
approaching this level of complexity already exist. This has nothing to do
with "fuzzy algorithms" or "poorly formalized problems"; it is just the
inherent complexity of the system. If you think you can usefully explain
the practical behavior of any well-formalized system without using mental
characteristics, I submit that you haven't tried it on a large enough
system (e.g. some systems today need a larger address space than that
available on a DEC 2060 -- combining that much knowledge can produce
quite complex behavior).
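
[A toy rendering of this point, not from the original message: even a
single precisely formalized inference rule over three facts produces a
set of derivable terms that explodes combinatorially within a few steps.]

    # Invented example: one exact rule -- "any two known terms may be
    # combined" -- applied to a tiny knowledge base.
    def closure_sizes(seed, depth):
        known = set(seed)
        sizes = [len(known)]
        for _ in range(depth):
            # The single precise inference rule: combine(x, y) for all
            # currently known terms x and y.
            known |= {f"combine({x},{y})" for x in known for y in known}
            sizes.append(len(known))
        return sizes

    # Three facts, one rule: 3 -> 12 -> 147 -> 21612 derivable terms.
    print(closure_sizes(["a", "b", "c"], 3))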

------------------------------

Date: 28 Nov 83 3:10:20-PST (Mon)
From: harpo!floyd!clyde!akgua!sb1!sb6!bpa!burdvax!sjuvax!rbanerji@Ucb-Vax
Subject: Re: Clarifying my "AI Challange"
Article-I.D.: sjuvax.157

[...]
I am reacting to Johnson, Helly and Dietterich. I really liked
[Ken Laws'] technical evaluation of knowledge-based programming. It is
basically similar to what Tom also said in defense of knowledge-based
programming, but KIL said it much more clearly.
On one aspect, I have to agree with Johnson about expert systems
and hackery, though. The only place there is any attempt on the part of
an author to explain the structure of the knowledge base(s) is in the
handbook. But I bet that as the structures are changed by later authors
for various justified and unjustified reasons, they will not be clearly
explained except in vague terms.
I do not accept Dietterich's explanation that AI papers are hard
to read because of terminology, or because what they are trying to do
is so hard. On the latter point, we do not expect that what they are
DOING be easy, just that HOW they are doing it be clearly explained:
and that the definition of clarity follow the lines set out in classical
scientific disciplines. I hope that the days are gone when AI was
considered some sort of superscience answerable to none. On the matter
of terminology, papers (for example) on algebraic topology have more
terminology than AI: terminology developed over a longer period of time.
But if one wants to and has the time, he can go back, back, back along
lines of reference and to textbooks and be assured he will have an answer.
In AI, about the only hope is to talk to the author and unravel his answers
carefully and patiently and hope that somewhere along the line one does not
get "well, there is a hack there..it is kind of long and hard to explain:
let me show you the overall effect"
In other sciences, hard things are explained on the basis of
previously explained things. These explanation trees are much deeper
than in AI; they are so strong and precise that climbing them may
be hard, but never hopeless.
I agree with Helly in that this lack is due to the fact that no
attempt has been made in AI to have workers start with a common basis in
science, or even in scientific methodology. It has suffered in the past
because of this. When existing methods of data representation and processing
in theorem proving were found inefficient, the AI culture developed the
self-image that its needs were ahead of logic, notwithstanding the fact
that the techniques they were using were representable in logic and that
the reason for their seeming success was in the fact that they were designed
to achieve efficiency at the cost (often high) of flexibility. Since
then, those words have been "eaten": but at considerable cost. The reason
may well be that the critics of logic did not know enough logic to see this.
In some cases, their professors did--but never cared to explain what the
real difficulty in logic was. Or maybe they believed their own propaganda.
This lack of uniformity of background came out clearly when Tom said
that because of AI work people now clearly understood the difference between
the subset of a set and the element of a set. This difference has been well
known at least since early this century if not earlier. If workers in AI
did not know it before, it is because of their reluctance to know the meaning
of a term before they use it. This has also often come from their belief
that precise definitions will rob their terms of their richness (not realising
that once they have interpreted their terms by a program, they have a precise
definition, only written in a much less comprehensible way: set theorists
never had any difficulty understanding the difference between subsets and
elements). If they were trained, they would know the techniques that are
used in Science for defining terms.
I disagree with Helly that Computer Science in general is unscientific.
There has always been a precise mathematical basis for theorem proving (AI,
actually) and for computation and complexity theory. It is true, however, that
the traditional techniques of experimental research have not been used in
AI at all: people have tried hard to use them in software, but seem to
be having difficulties.
Would Helly disagree with me if I say that Newell and Simon's work
in computer modelling of psychological processes has been carried out
with at least the amount of scientific discipline that psychologists use?
I have always seen that work as one of the success stories in AI. And
at least some psychologists seem to agree.

I agree with Tom that AI will have to keep going even if someone
proves that P=NP. The reason is that many AI problems are amenable to
N^2 methods already: except that N is too big. In this connection I have
a question, in case someone can tell me. I think Rabin has a theorem
that given any system of logic and any computable function, there is
a true statement which takes longer to prove than that function predicts.
What does this say about the relation between P and NP, if anything?
Too long already!

..allegra!astrovax!sjuvax!rbanerji

------------------------------

Date: 1 Dec 83 13:51:36-PST (Thu)
From: decvax!duke!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Expert Systems
Article-I.D.: ncsu.2420

Are expert systems new? Different? Well, how about an example. Time
was, to run a computer system, one needed at least one operator to care
for and feed the system. This is increasingly handled by sophisticated
operating systems. As such, is an operating system an "expert system"?

An OS is usually developed using a style of programming which is quite
different from that of wimpy, unskilled, unenlightened applications
programmers. It would be very hard to build an operating system in the
applications style. (I claim.) The people who developed the style and
practice it to build systems are not usually AI people, although I would
wager the personality profiles would be quite similar.

Now, that is I think a major point. Are there different types of people in
Physics as compared to Biology? I would say so, having seen some of each.
Further, biologists seem to do research (again, this is purely
idiosyncratic evidence) differently than physicists do. Is it that one
group knows how to do science better, or are the fields just so different,
or are the people attracted to each just different?

Now, suppose a team of people got together and built an expert system which
was fully capable of taking over the control of a very sophisticated
(previously manual, by highly trained people) inventory, billing and
ordering system. I claim that this is at least as complex as diagnosis
of and dosing of particular drugs (e.g. MYCIN). My expert system
was likely written in COBOL by people doing things in quite different ways
from AI or systems hackers.

One might want to argue that the productivity was much lower, that the
result was harder to change, and so on. I would prefer to see this in
figures, based on proper comparisons. I suspect that the complexity of the
commercial software I mentioned is MUCH greater than the usual problem
attacked by AI people, so that the "productivity" might be comparable,
with the extra time reflecting the complexity. For example, designing
the reports and generating them for a large complex system (and doing
a good job) may take a large fraction of the total time, yet such
reporting is not usually done in the AI world. Traces of decisions
and other discourse are not the same. The latter is easier I think, or
at least it takes less work.

What I'm getting at is that expert systems have been around for a long
time; it's only recently that AI people have gotten into the arena. There
are other techniques which have been applied to developing them, and
I am waiting to be convinced that the AI people have a priori superior
strategies. I would like to be so convinced and I expect someday to
be convinced, but then again, I probably also fit the AI personality
profile so I am rather biased.
----GaryFostel----

------------------------------

Date: 5 Dec 1983 11:11:52-EST
From: John.Laird at CMU-CS-ZOG
Subject: Thesis Defense

[Reprinted from the CMU-AI bboard.]

Come see my thesis defense: Wednesday, December 7 at 3:30pm in 5409 Wean Hall

UNIVERSAL SUBGOALING

ABSTRACT

A major aim of Artificial Intelligence (AI) is to create systems that
display general problem-solving ability. In problem solving, knowledge is
used to avoid uncertainty about what to do next, or to handle the
difficulties that arise when uncertainty cannot be avoided. Uncertainty
is handled in AI problem solvers through the use of methods and subgoals,
where a method specifies the behavior for avoiding uncertainty in pursuit
of a goal, and a subgoal allows the system to recover from a difficulty once
it arises. A general problem solver should be able to respond to every task
with appropriate methods to avoid uncertainty, and when difficulties do
arise, the problem solver should be able to recover by using an appropriate
subgoal. However, current AI problem solvers are limited in their generality
because they depend on sets of fixed methods and subgoals.

In previous work, we investigated the weak methods and proposed that a
problem solver does not explicitly select a method for a goal, with the
inherent risk of selecting an inappropriate method. Instead, the problem
solver is organized so that the appropriate weak method emerges during
problem solving from its knowledge of the task. We called this organization
a universal weak method and we demonstrated it within an architecture,
called SOAR. However, we were limited to subgoal-free weak methods.

The purpose of this thesis is to develop a problem solver where subgoals
arise whenever the problem solver encounters a difficulty in performing the
functions of problem solving. We call this capability universal subgoaling.
In this talk, I will describe and demonstrate an implementation of universal
subgoaling within SOAR2, a production system based on search in a problem
space. Since SOAR2 includes both universal subgoaling and a universal weak
method, it is not limited by a fixed set of subgoals or methods. We provide
two demonstrations of this: (1) SOAR2 creates subgoals whenever difficulties
arise during problem solving; and (2) SOAR2 extends the set of weak methods that
emerge from the structure of a task without explicit selection.
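
[The abstract includes no code. The Python toy below is only one
reader's sketch of the impasse-to-subgoal idea it describes; the
numeric task, the operators, and the one-step-lookahead "selection
subgoal" are all invented, and this is in no way SOAR2 itself.]

    # Invented toy: states are numbers, the goal is to reach a target,
    # and operators are arithmetic moves.  When task knowledge fails to
    # single out one operator (a "tie" impasse), the solver does not
    # fail: it sets up a selection subgoal -- here, a one-step
    # lookahead -- to resolve the tie.
    OPERATORS = {"+1": lambda s: s + 1,
                 "+2": lambda s: s + 2,
                 "*2": lambda s: s * 2}

    def prefer(state, target):
        """Task knowledge: keep operators that do not overshoot."""
        return [n for n, f in OPERATORS.items() if f(state) <= target]

    def select(state, target):
        candidates = prefer(state, target)
        if len(candidates) == 1:        # knowledge suffices: no impasse
            return candidates[0]
        # Tie impasse -> selection subgoal: evaluate each candidate by
        # how close it lands to the target and pick the best.
        return min(candidates, key=lambda n: target - OPERATORS[n](state))

    def solve(state, target, limit=20):
        path = []
        while state != target and limit > 0:
            op = select(state, target)
            state, limit = OPERATORS[op](state), limit - 1
            path.append(op)
        return path

    print(solve(1, 11))   # one run: ['+2', '*2', '+2', '+2', '+1']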

------------------------------

End of AIList Digest
********************
