AIList Digest            Tuesday, 29 Mar 1988      Volume 6 : Issue 60 

Today's Topics:
Opinion - The Future of AI,
Expert Systems - Student Versions of OPS5 & Certainty Factors,
Theory - Models of Uncertainty

----------------------------------------------------------------------

Date: 28 Mar 88 12:46:29 GMT
From: eagle!sbrunnoc@ucbvax.Berkeley.EDU (Sean Brunnock)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>What do people think of the PRACTICAL future of artificial intelligence?
>
>Is AI just too expensive and too complicated for practical use?
>
>Does AI have any advantage over conventional programming?

Bear with me while I put this into a sociological perspective. The first
great "age" in mankind's history was the agricultural age, followed by the
industrial age, and now we are heading into the information age. The author
of "Megatrends" points out the large rise in the number of clerks as
evidence of this.

The information age will revolutionize agriculture and industry just as
industry revolutionized agriculture one hundred years ago. Industry gave the
farmer the reaper, the cotton gin, and a myriad of other products that
made his job easier. Food production went up an order of magnitude, and by
the law of supply and demand, food became less valuable and farming became
less profitable.

The industrial age was characterized by machines that took a lot of
manual labor out of the hands of people. The information age will be
characterized by machines that take over mental tasks now performed
by people.

For example, give a machine access to knowledge of aerodynamics,
engines, materials, etc. Now tell this machine that you want it to
design a car that can go this fast, use this much fuel per mile, cost
this much to make, etc. The machine thinks about it and out pops a
design for a car that meets these specifications. It would be the
ultimate car with no room for improvement (unless some new scientific
discovery was made) because the machine looks at all of the possibilities.
These are the types of machines that I expect AI to make possible
in the future.

I know this is an amateurish analysis, but it convinces me to study
AI.

As for whether AI has any advantage over conventional programming: some
people wondered what use there was in opening a transcontinental railroad
when the pony express could deliver the same letter or package in just
seven days. AI may be impractical now, but we have to keep making an effort
at it.


Sean Brunnock
University of Lowell
sbrunnoc@eagle.cs.ulowell.edu

------------------------------

Date: 28 Mar 88 17:08:20 GMT
From: glacier!jbn@labrea.stanford.edu (John B. Nagle)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>industrial age, and now we are heading into the information age. The author
>of "Megatrends" points out the large rise in the number of clerks as
>evidence of this.

The number of office workers in the U.S. peaked in 1985-86 and has
declined somewhat since then. White collar employment by the Fortune 500
is down substantially over the last five years. The commercial real estate
industry has been slow to pick up on this, which is why there are so many
new but empty office buildings. The new trend is back toward manufacturing.
You can't export services, except in a very minor way. (Check the numbers
on this; they've been published in various of the business magazines and
can be obtained from the Department of Commerce.)

> For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a
>design for a car that meets these specifications. It would be the
>ultimate car with no room for improvement (unless some new scientific
>discovery was made) because the machine looks at all of the possibilities.

Wrong. Study some combinatorics. Exhaustive search on a problem like
that is hopeless. The protons would decay first.
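
As a back-of-the-envelope illustration (in Python; the figures below are
invented purely to show the scale of the problem, not taken from any real
design system):

# Illustrative only: why exhaustive search over a design space is hopeless.
DESIGN_PARAMETERS = 30      # assume 30 independent design choices
CHOICES_PER_PARAMETER = 10  # assume 10 discrete options for each choice
EVALS_PER_SECOND = 1e9      # assume a billion candidate evaluations per second

designs = CHOICES_PER_PARAMETER ** DESIGN_PARAMETERS    # 10**30 candidates
seconds = designs / EVALS_PER_SECOND                     # 10**21 seconds
years = seconds / (3600 * 24 * 365)

print(f"{designs:.0e} candidate designs")   # 1e+30
print(f"{years:.0e} years of search")       # about 3e+13 years, far longer
                                            # than the age of the universe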

John Nagle

------------------------------

Date: 25 Mar 88 16:23:17 GMT
From: trwrb!aero!srt@ucbvax.Berkeley.EDU (Scott R. Turner)
Subject: Re: Student versions of OPS5

In article <8803161533.AA03130@decwrl.dec.com> barabash@midevl.dec.com
(Digital has you now!) writes:
>
> In article <26695@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
>> I don't think the Vax version [of OPS5] uses Rete (at least, it allows
>> calculations in the LHS).
>
> In fact, VAX OPS5 uses the high-performance compiled Rete, first used
> by OPS83, wherein each node in the network is represented by machine
> instructions. This makes it easy for the compiler to support inline
> expression evaluation and external function calls in the LHS.

My understanding of the Rete algorithm (and admittedly, I don't have
the paper at hand) was that speed was obtained by building an
immediately executable tree of database checks. In essence, a
compiled D-Net based on the contents of WM. So the speed increase
isn't merely because you compile the LHS of all the rules (at some
level every language represents things as machine instructions, after
all), but because the Rete algorithm builds a discrimination tree that
efficiently orders the tests and guarantees that a minimum number of
tests will be taken to determine the applicable rules. Much of OPS5's
conflict resolution strategy falls directly out of Rete, I think, in
the order in which applicable rules are found.

So, can executable functions be included in the LHS without ruining
this scheme? I'd say no, with reservations. At the very least, two
rules with an identical function call in the LHS will result in the
function being executed twice (since the compiler can't guarantee that
the function doesn't have a side-effect which will change its result
between different invocations with identical arguments). So, in the
Rete scheme, if two rules have an identical WM check in the LHS, that
check is made only once each cycle of the OPS5 machine. If they have
an executable in the LHS, that gets executed twice. If you allow
functions which can change WM to exist in the LHS, things get much
worse.
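
A toy sketch of the sharing argument (in Python; the data structures are
invented for illustration and are not how VAX OPS5 or Rete represent
things): a pure, constant test on working memory can be evaluated once and
shared by every rule that mentions it, while an arbitrary function call
cannot be shared, because it might have side effects.

working_memory = [
    {"type": "goal", "status": "active"},
    {"type": "block", "color": "red"},
]

test_cache = {}     # results of pure attribute tests, shared within one cycle
test_evals = 0
func_evals = 0

def constant_test(attr, value):
    """Pure WM check; identical checks share a single evaluation per cycle."""
    global test_evals
    key = (attr, value)
    if key not in test_cache:
        test_evals += 1
        test_cache[key] = [w for w in working_memory if w.get(attr) == value]
    return test_cache[key]

def lhs_function_call():
    """Arbitrary user function; it may have side effects, so no sharing."""
    global func_evals
    func_evals += 1
    return True

# Two rules whose LHS contains the same WM check and the same function call.
rule1_matches = bool(constant_test("type", "goal")) and lhs_function_call()
rule2_matches = bool(constant_test("type", "goal")) and lhs_function_call()

print(test_evals)   # 1 -- the identical WM check was made only once
print(func_evals)   # 2 -- the identical function call was executed twice
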
-- Scott Turner

------------------------------

Date: 25 Mar 88 18:05:25 GMT
From: mcvax!unido!tub!tmpmbx!netmbx!morus@uunet.uu.net (Thomas M.)
Subject: Re: Student versions of OPS5

In article <27336@aero.ARPA> srt@aero.UUCP (Scott R. Turner) writes:
>In article <1501@netmbx.UUCP> muhrth@db0tui11.BITNET (Thomas Muhr) writes:
>>I have now available a few common-lisp
>>sources (each about 100KB big) which I will try to convert to a PC-runnable
>>version in the near future.
>
>It should be possible to write an OPS5-like language in a lot less than
>100K. The only difficult part of OPS5 to implement is the RETE algorithm.
>Throw that out, ignore some of the rules for determining which rule out
>of all the applicable rules to use (*), and you should be able to implement
>OPS5 in a couple of days. Of course, this version will be slow and GC
>every few minutes or so, but those problems will be present to some extent
>in any version written in Lisp.

Right, but after all the proposed deletions this code would hardly cover two
pages. Leaving the Rete match out does not just affect run-time (the drop in
performance is enormous!); it would also eliminate the features which
make OPS5 an interesting language - mainly the heuristics for selecting
rule instantiations intelligently.
>
>(*) My experience is that most OPS5 programmers (not that there are many)
Is this right ? ---^^^^^^^^^^^^^^^^^^^^^^
>ignore or actively counter the "pick the most specific/least recently used"
>rules anyway.
Well, it would be fine to have a little more influence on conflict-resolution
strategies - but the ones mentioned are very important:
default strategies via "specificity" and loop control via "recency" are very
powerful features.
Programmers who ignore these mechanisms have probably chosen the wrong paradigm.
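
For concreteness, here is a minimal Rete-less recognize-act loop of the kind
Scott describes, with crude "recency" and "specificity" conflict resolution
bolted on (a sketch in Python; the rule and working-memory formats are
invented for illustration and are nothing like real OPS5 syntax):

working_memory = []          # list of (time_tag, fact) pairs
time_tag = 0

def assert_fact(fact):
    """Add a fact to working memory with a recency time tag."""
    global time_tag
    time_tag += 1
    working_memory.append((time_tag, fact))

def matches(pattern, fact):
    """Every attribute mentioned in the pattern must match the fact."""
    return all(fact.get(k) == v for k, v in pattern.items())

def instantiations(rule):
    """Naive match: test every pattern against all of WM on every cycle."""
    inst = []
    for pattern in rule["lhs"]:
        hits = [(t, f) for (t, f) in working_memory if matches(pattern, f)]
        if not hits:
            return []
        inst.append(max(hits))           # keep the most recent matching fact
    return [inst]

def run(rules, limit=10):
    for _ in range(limit):
        conflict_set = [(r, i) for r in rules for i in instantiations(r)]
        if not conflict_set:
            break
        # Conflict resolution: prefer recency (newest matched fact),
        # then specificity (number of condition elements in the LHS).
        rule, inst = max(conflict_set,
                         key=lambda ri: (max(t for t, _ in ri[1]),
                                         len(ri[0]["lhs"])))
        rule["rhs"]()

# One rule that fires on an active goal and then deactivates it.
def satisfy_goal():
    print("goal satisfied")
    working_memory[:] = [(t, f) for (t, f) in working_memory
                         if f.get("status") != "active"]

rules = [{"lhs": [{"type": "goal", "status": "active"}], "rhs": satisfy_goal}]
assert_fact({"type": "goal", "status": "active"})
run(rules)                               # prints "goal satisfied" once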

-------- Thomas Muhr

Knowledge Based Systems Group
Technical University of Berlin
--
@(^o^)@ @(^x^)@ @(^.^)@ @(^_^)@ @(*o*)@ @('o`)@ @(`!')@ @(^o^)@
@ Thomas Muhr Tel.: (Germany) 030 - 87 41 62 (voice!) @
@ NET-ADRESS: muhrth@db0tui11.bitnet or morus@netmbx.UUCP @
@ BTX: 030874162 @

------------------------------

Date: Sun, 27 Mar 88 22:14:42 HNE
From: Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Certainty Factors

This is a reply to KIKO's message asking about the validity of using
certainty factors in an expert system.

You can use certainty factors as a way of coping with uncertainty, but
you run the risk of introducing substantial errors in the analysis
due to the simplifying assumptions that underlie CFs. In some domains,
such as certain medical areas, the same treatment will be used to
cover several different diseases, e.g., antibiotics to cover infections of
various sorts. In that case, the incoherence introduced by using CFs is
often covered up by the lack of need to be very precise. I certainly
would not like to try to defend the use of CFs in a liability suit brought
against a person who made a poor diagnosis based on an expert system. I
would recommend doing a lot of research on the various ways that
uncertainty is handled by people working in the field. An excellent starting point
is the paper, "Decision Theory in Expert Systems and Artificial Intelligence",
by Eric J. Horvitz, John S. Breese, and Max Henrion. It will be in the
Journal of Approximate Reasoning, Special Issue on Uncertain Reasoning,
July, 1988. Prepublication copies might be available from Horvitz,
Knowledge Systems Laboratory, Department of Computer Science, Stanford
University, Stanford, CA 94305. This paper will become a standard
reference for people interested in using Bayesian decision theory
in AI.
Putting uncertainty into an expert system using decision theory is not
as simple as using certainty factors; but then, getting it right is not
always easy.
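
For concreteness, the usual MYCIN-style rule for combining two certainty
factors looks roughly like this (a sketch in Python; the key point is that
combining evidence this way quietly treats the separate pieces of evidence
as independent and modular, which is one of the simplifying assumptions
mentioned above):

def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1], MYCIN style."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two findings, each supporting "infection" with CF 0.6, combine to 0.84
# whether or not the findings are really independent of one another.
print(combine_cf(0.6, 0.6))   # 0.84
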
I hope this will be of some help. Best of luck.
Spencer Star (star@lavalvm1.bitnet) or
arpa: (star@b.gp.cs.cmu.edu)

------------------------------

Date: Mon, 28 Mar 88 02:17:39 HNE
From: Spencer Star <STAR%LAVALVM1.BITNET@CORNELLC.CCS.CORNELL.EDU>
Subject: Re: AIList V6 #57 - Theorem Prover, Models of Uncertainty


I'll try to respond to Ruspini's comments about my reasons for
choosing the Bayesian approach to representing uncertainty.
[AIList March 22, 1988]

>If, for example, we say that the probability of "economic
>recession" is 80%, we are indicating that there is either a
>known...or believed...tendency or propensity of an economic
>system to evolve into a state called "recession".
>
>If, on the other hand, we say the system will move into a
>state that has a possibility of 80% of being a recession, we are
>saying that we are *certain* that the system will evolve into a
>state that resembles or is similar at least to a degree of 0.8
>to a state of recession (note the stress on certainty with
>imprecision about the nature of the state as opposed to a
>description of a believed or previously observed tendency).

I think this is supposed to show the difference between something that
probabilities can deal with and something that only a "fuzzy approach"
can handle. However, probabilities can deal with
both situations, even at the same time. The following
with both situations, even at the same time. The following
example demonstrates how this is done.

Suppose that there is a state Z such that from that state there
is a P(Z)=60% chance of a recession in one year. This
corresponds to the state in the second paragraph. Suppose that
there are two other states, X and Y, that have probabilities of
entering a recession within a year of P(X)=20% and P(Y)=40%. My
beliefs that the system will enter those states are Bel(X)=.25
Bel(Y)=.30 Bel(Z)=.45, where beliefs are my subjective prior
probabilities conditional on all the information available to me.
Then the probability of a recession is, according to my beliefs,
P(recession)=alpha*[Bel(X)*P(X) + Bel(Y)*P(Y) + Bel(Z)*P(Z)],

where alpha is a normalization factor making

P(recession) + P(no recession) = 1.

In this example, a 25% belief in state X occurring gives a 5%
chance of recession and a 20% chance of no recession. Summing
the little buggers up across states gives a 44% chance of
recession and a 56% chance of no recession with alpha=1. These
latter figures give my beliefs about there being or not being a
recession.
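
Spelled out in code (Python; apart from floating-point roundoff):

# The recession example above, spelled out numerically.
states = {                 # state: (Bel(state), P(recession given state))
    "X": (0.25, 0.20),
    "Y": (0.30, 0.40),
    "Z": (0.45, 0.60),
}

p_rec = sum(bel * p for bel, p in states.values())         # 0.44
p_no  = sum(bel * (1 - p) for bel, p in states.values())   # 0.56
alpha = 1 / (p_rec + p_no)                                  # 1.0 in this case

print(alpha * p_rec, alpha * p_no)                          # 0.44  0.56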

So far, I haven't found a need to use fuzzy logic to represent
possibilities. Peter Cheeseman, "Probabilistic versus Fuzzy
Reasoning", in Kanal and Lemmer, Uncertainty in AI, deals with
this in more detail and comes to the same conclusion. Bayesian
probabilities work fine for the examples I have been given. But
that doesn't mean that you shouldn't use fuzzy logic. If you
feel more comfortable with it, fine.

I proposed simplicity as a property that I value in choosing a
representation. Ruspini seems to believe that a relatively
simple representation of a complex system is "reason for
suspicion and concern." Perhaps. It's a question of taste and
judgment. However, my statement about the Bayesian approach
being based on a few simple axioms is more objective.
Probability theory is built on Kolmogorov's axioms, which for
the uninitiated say things like: probabilities are numbers between
zero and one, the probabilities of all possible outcomes sum to 1,
and we can add the probabilities of mutually exclusive events.
Nothing very complicated there. Decision theory adds utilities
to probabilities. For a utility function to exist, the agent
must be able to order his preferences, prefer more of a
beneficial good to less, and satisfy a few other simple conditions.

Ruspini mentions that "rational" people often engage in
inconsistent behavior when viewed from a Bayesian framework.
Here we are in absolute agreement. I use the Bayesian framework
for a normative approach to decision making. Of course, this
assumes the goal is to make good decisions. If your goal is to
model human decision making, you might very well do better than
to use the Bayesian model. Most of the research I am aware of
that shows inconsistencies in human reasoning has made
comparisons with the Bayesian model. I don't know if fuzzy sets
or D-S provides solutions for paradoxes in probability
judgments. Perhaps someone could educate me on this.


>Being able to connect directly with decision theory.
>(Dempster-Shafer can't)
Here, I think I was too hard on D-S. The people at SRI,
including Ruspini, have done some very interesting work using the
Dempster-Shafer approach in a variety of domains that require
decisions to be made. Ruspini is correct in pointing out that if
no information is available, the Bayesian approach is often to
use a maximum entropy estimate (everything is equally likely),
which could also be used as the basis for a decision in D-S. I
have been told by people whom I trust that there are times
when D-S provides a better or more natural representation of the
situation than a strict Bayesian approach. This is an area ripe
for cooperative research. We need to know much more about the
comparative advantages of each approach on practical problems.

>Having efficient algorithms for computation.
When I stated that the Bayesian approach has efficient algorithms
for computation, I did not mean to state that the others did not.
Shafer and Logan published an efficient algorithm for Dempster's
rule in the case of hierarchical evidence. Fuzzy sets are often
implemented efficiently. This statement was much more a defence
of probability theory against the claim that there are too many
probabilities to calculate. Kim and Pearl have provided us with
an elegant algorithm that can work on a parallel machine. Cycles
in reasoning still present problems, although several people have
proposed solutions. I don't know how non-Bayesian logics deal
with this problem. I'd be happy to be informed.

>Being well understood.
This statement is based on my observations of discussions
involving uncertainty. I have seen D-S advocates disagree
numerous times over whether or not the particular paper X, or
implementation Y is really doing Dempster-Shafer evidential
reasoning. I have seen very bright people present a result on
D-S only to have other bright people say that result occurs only
because they don't understand D-S at all. I asked Glen Shafer
about this and his answer was that the theory is still being
developed and is in a much more formative stage than Bayesian
theory. I find much less of this type of argument occurring
among Bayesians. However, there is also I.J. Good's article
detailing the 46,656 possible varieties of Bayesians, given the
different positions one can take on 11 fundamental questions.

The Bottom Line
There has been too much effort put into trying to show that
one approach is better or more general than the other and not
enough into some other important issues. This message is already
too long, so let me close with what I see as the major issue for
the community of experts on uncertainty to tackle.

The real battle is to get uncertainty introduced into basic AI
research. Uncertainty researchers are spending too much of their
time working with expert systems, which is already a relatively
mature technology. There are many subject areas such as machine
learning, non-monotonic reasoning, truth maintenance, planning,
etc. where uncertainty has been neglected or rejected. The world
is inherently uncertain, and AI must introduce methods for
managing uncertainty whenever it wants to leave toy micro-worlds
to deal with the real world. Ask not what you can do for the
theory of uncertainty; ask what the theory of uncertainty can do
for you.

Spencer Star
Bitnet: star@lavalvm1.bitnet
Arpanet: star@b.gp.cs.cmu.edu

------------------------------

End of AIList Digest
********************
