AIList Digest            Monday, 24 Aug 1987      Volume 5 : Issue 201 

Today's Topics:
Comments - Programming Paradigms & Qualitative Simulation/Analysis,
Logic - Mr. S and Mr. P,
Philosophy - AI, Science, and Pseudo-Science & Natural Kinds

----------------------------------------------------------------------

Date: Mon, 17 Aug 87 08:49:36 MDT
From: shebs@cs.utah.edu (Stanley Shebs)
Reply-to: cs.utah.edu!shebs@cs.utah.edu (Stanley Shebs)
Subject: Re: Object-Oriented Programming

In article <12326542058.16.LAWS@KL.SRI.Com> Laws@KL.SRI.COM (Ken Laws) writes:

>Those interested in programming methodology (including expert systems)
>will probably enjoy reading Russell Abbott's article on "Knowledge
>Abstraction"
in the August issue of Communications of the ACM. It
>clarifies the role of domain knowledge in programming and suggests
>that object-oriented programming may be the wave of the future.

Perhaps I missed the point, but I found this paper rather boring.
It didn't seem to say much new - is there really anybody who doesn't
believe programs are encrypted knowledge, and that making the knowledge
more explicit is a Good Thing? Ditto for OOP - at least in the language
community, it's started to move from fanaticism to realism. Perhaps the
AI community is just getting started on the slide to object fanaticism?

Also, some more explicit examples of what is and is not knowledge abstraction
would have been useful. In fact, the concept of "knowledge" itself is pretty
vague - is "barks(X) :- dog(X)" a piece of knowledge or not, and how crucial
is the surrounding context? Or, to put it more practically, why would a
Silogic Prolog program be considered a "knowledge program" and not a Fortran
program?
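
For concreteness, here is a minimal Python sketch (mine, not Abbott's, and
not tied to Silogic Prolog or Fortran) of the contrast in question: the same
dog/barking relationship written once as an explicit rule a trivial
interpreter can inspect, and once buried in procedural code.

# The rule "barks(X) :- dog(X)" as explicit, inspectable data.
facts = {("dog", "fido")}
rules = [("barks", "dog")]            # barks(X) holds whenever dog(X) holds

def holds(predicate, individual):
    if (predicate, individual) in facts:
        return True
    return any(head == predicate and holds(body, individual)
               for head, body in rules)

# The same relationship "encrypted" in a procedure: the program still
# answers correctly, but the dog/barking connection is no longer visible.
def barks(individual):
    return individual == "fido"

print(holds("barks", "fido"))         # True, derived from an inspectable rule
print(barks("fido"))                  # True, but the rationale is gone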

>A related, but somewhat different, "knowledge level" view is taken
>by B. Chandrasekaran in his Fall 1986 IEEE Expert paper: "Generic
>Tasks in Knowledge-Based Reasoning: High-Level Building Blocks for
>Expert System Design."
While not incompatible with object-oriented
>programming, his generic tasks are at a level between that of common
>shell languages (rules, frames, nets, etc.) and the full specifics
>of real-world domain knowledge.

This same idea may be found amidst all the glitzy results in Lenat's
Eurisko papers - heuristics are objects with their own sorts of hierarchy
and inheritance. The so-called weak methods tend to be near the root of
hierarchies, while more specialized and domain-specific heuristics are
at the leaves.
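
As a rough sketch of that organization (mine, not Lenat's actual Eurisko
code), picture a weak, domain-independent method at the root of a class
hierarchy, with progressively more specialized heuristics at the leaves:

class Heuristic:                        # root: a weak method
    def applicable(self, task):
        return True                     # weak methods apply almost anywhere
    def propose(self, task):
        return ["try each operator in turn"]          # generate-and-test

class MeansEndsAnalysis(Heuristic):     # still weak, slightly stronger
    def propose(self, task):
        return ["reduce the largest difference to the goal first"]

class IntegrationHeuristic(MeansEndsAnalysis):        # domain-specific leaf
    def applicable(self, task):
        return task.get("domain") == "integration"
    def propose(self, task):
        return ["try integration by parts on product terms"]

task = {"domain": "integration"}
for h in (Heuristic(), MeansEndsAnalysis(), IntegrationHeuristic()):
    if h.applicable(task):
        print(type(h).__name__, "->", h.propose(task))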

>I sense a new view of AI coalescing ...

Or at least a new view of AI tools. Some exciting and relevant papers
may be found in a book edited by Gary Lindstrom and Doug DeGroot, called
"Logic Programming: Functions, Relations, and Equations" and published by
Prentice-Hall last year (reviewed in the latest Computing Reviews). The papers
speak more to language types than to AI types, but there is much food for
thought...

stan shebs
shebs@cs.utah.edu

------------------------------

Date: Fri, 21 Aug 87 11:17:46 EDT
From: Paul Fishwick <fishwick%bikini.cis.ufl.edu@RELAY.CS.NET>
Subject: Qualitative Simulation/Analysis


In reference to the question on definitions for qualitative simulation
and analysis: the two terms should not be considered identical. If we
temporarily drop the adjective 'qualitative' then we have the terms
simulation and analysis --- simulation includes analysis of data but
also includes the primary area of modeling.

On a slightly different note, though, the term 'qualitative simulation'
is somewhat difficult to define since ultimately all simulations on
digital machinery will be quantitative. Qualitative modeling and
simulation in general seem to reflect the need to represent highly abstract
models using 'qualitative' terms. These terms are mapped onto the
real number space (for instance) and a quantitative simulation ensues.
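
A minimal sketch of that last step, assuming nothing more specific than a
quantity space of named intervals (the names, intervals, and the tank example
are mine, not any particular system's):

import random

QUANTITY_SPACE = {          # qualitative term -> numeric interval (litres/sec)
    "zero": (0.0, 0.0),
    "low":  (0.1, 1.0),
    "high": (1.0, 5.0),
}

def quantify(term):
    lo, hi = QUANTITY_SPACE[term]
    return random.uniform(lo, hi)

def simulate_tank(inflow_term, outflow_term, steps=10, level=0.0):
    # Map the qualitative terms onto real numbers, then simulate numerically.
    inflow, outflow = quantify(inflow_term), quantify(outflow_term)
    for _ in range(steps):
        level = max(0.0, level + inflow - outflow)    # simple Euler update
    return level

print(simulate_tank("high", "low"))   # a quantitative run of a qualitative model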

Paul Fishwick
University of Florida

------------------------------

Date: 19 Aug 87 23:27:00 GMT
From: pur-ee!uicsgva!luke@seismo.CSS.GOV
Subject: Re: mr. s & mr. p


I definitely saw this problem in the "Mathematical Games"
section of Scientific American some years ago. I am
not sure which issue it appeared in, but I am positive that
it came out between 1979 and 1982. I am 90% certain that it
can be found in the range of January 1980 to December 1981.
My first guess would be the October 1980 issue. The article
says that the problem made its debut at a party primarily
attended by mathematicians. I don't remember all the details
of the problem, but here is what I do remember:

Mr. P and Mr. S are experienced mathematicians. X and Y
are two different positive integers (For the benefit of the
reader, it has been disclosed that both X and Y are less
than or equal to 20. This constraint, however, is supposedly
unnecessary.) The sum of X and Y has been disclosed
to Mr. S and the product of X and Y has been disclosed to
Mr. P. Neither man knows the value of X or Y, nor are they
allowed to tell the other what their sum or product is.
They are allowed to talk to each other over the phone, and
do so after sufficient time to think about what the other
has said. The dialogue, as far as I remember, is as follows:

Mr P to Mr S: I can't tell from the product what X and Y are.
(later....)
Mr S to Mr P: I can't tell what they are either.
(later....)
Mr P to Mr S: I still can't tell what X and Y are.

At this point, my memory fails me. But this is the earliest point
that I could feel comfortable with the following dialogue. I'm
pretty sure that these guys start knowing something within two more
bounces.

Mr ?? to Mr ??: Now I know what X and Y are.
(later)
Mr ?? to Mr ??: In that case, I know what X and Y are too!

According to Scientific American, the answer is 4 and 13.
If anyone finds the article, I would also like to know the
reference.
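
The dialogue above is recalled imperfectly, but the standard formulation of
the puzzle (2 <= X < Y, X + Y <= 100, with the exchange "P: I can't tell /
S: I knew you couldn't / P: Now I can / S: Then so can I") can be brute-forced
directly, and it does yield the remembered answer of 4 and 13. A sketch in
Python, using the standard bounds and dialogue rather than necessarily those
of the original article:

from collections import defaultdict
from itertools import combinations

pairs = [(x, y) for x, y in combinations(range(2, 100), 2) if x + y <= 100]

by_product, by_sum = defaultdict(list), defaultdict(list)
for x, y in pairs:
    by_product[x * y].append((x, y))
    by_sum[x + y].append((x, y))

def p_unknown(pair):                  # "I can't tell from the product"
    x, y = pair
    return len(by_product[x * y]) > 1

def s_knew(pair):                     # "I knew you couldn't tell"
    x, y = pair
    return all(p_unknown(q) for q in by_sum[x + y])

def p_now_knows(pair):                # "Now I know"
    x, y = pair
    return sum(1 for q in by_product[x * y] if s_knew(q)) == 1

def s_now_knows(pair):                # "Then I know too"
    x, y = pair
    return sum(1 for q in by_sum[x + y] if p_now_knows(q)) == 1

print([p for p in pairs
       if p_unknown(p) and s_knew(p) and p_now_knows(p) and s_now_knows(p)])
# prints [(4, 13)]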

- Luke Young
Computer Systems Group
University of Illinois


+-------------------------------------------------------------+
| BITNET : LUKE@UIUCVMD CSNET: luke%haydn@uiuc.csnet |
| UUCP : {ihnp4,seismo,pur-ee,convex}!uiucuxc!uicsgva!luke |
| ARPANET : luke@haydn.csg.uiuc.edu or luke%haydn@uiucuxc |
| acoustic: office (217) 333-8164 home (217) 328-4570 |
| physical: 6-123 CSL, 1101 W Springfield, Urbana, IL 61801 |
+-------------------------------------------------------------+

------------------------------

Date: 15 Aug 87 12:29:41 GMT
From: munnari!trlamct.oz!andrew@uunet.UU.NET (Andrew Jennings)
Subject: Re: AI, science, and pseudo-science


In article <108@glenlivet.hci.hw.ac.uk>, gilbert@hci.hw.ac.UK
(Gilbert Cockton) writes:
>
> My criticism of AI is that most of the workers I meet are pretty
> ignorant of the CRITICAL TRADITIONS of ESTABLISHED disciplines which
> can say much about AI's supposed object of study. When AI folk do stop
> hacking (LISP, algebra or logic - it makes no difference, logic finger
> and algebra wrist are just as bad as the well known 'computer-bum'),
> they may do so only to raid a few concepts and 'facts' from some
> discipline, and then go and abuse them out of sight of the folk who
> originally developed them and understand their context and deductive
> limitations. What some of them do to English is even worse :-)
> --

I am afraid I cannot let this pass. It almost appears as if you view
programming as charlatanism in itself! Suffice it to say that if we
view AI as an empirical search, then we have a definite criterion: either the
program works or it does not.

Sure, I'm in favour of CRITICAL thought and CRITICAL appraisal of work in AI:
it's just that I don't want to get buried in a pile of useless lemmas (no doubt
generated by you and your accomplices).

Why can't you realise the simple truth: a discipline goes through STAGES of
development. First the empirical paradigm dominates, then the engineering
paradigm, and last of all the theoreticians, replete with armchairs.




--
UUCP: ...!{seismo, mcvax, ucb-vision, ukc}!munnari!trlamct.trl!andrew
ARPA: andrew%trlamct.trl.oz@seismo.css.gov
Andrew Jennings Telecom Australia Research Labs

------------------------------

Date: Tue, 18 Aug 87 07:51:03 PDT
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Subject: Re: Natural Kinds (Re: AIList Digest V5 #186)

In article <115@glenlivet.hci.hw.ac.uk> Gilbert Cockton
<mcvax!hci.hw.ac.uk!gilbert@seismo.CSS.GOV> writes:
>
>Whilst agreement on structure is possible by an appeal to sense-data
>mediated by a culture's categories, agreement on function is less
>likely. How do we know that an object has a function? Whilst the prime
>use of a chair, is indeed for sitting on, this does not preclude it's
>use for other functions - now don't these go back to structure? Or are
>they related to intention (i.e. when someone hits you on the head with
>a chair)?

There seems to be a bit of confusion between what the function of a perceived
object IS and what it CAN BE. There are very few concepts for which
structure and/or function are unique. The point is that both serve to
guide the classification of our perceptions. Thus, we may recognize a
chair by its structural appearance. Having done so, we can then identify
the surface upon which we should sit, how we should rest our back, where
we can tuck our legs, and so forth. On the other hand, if I walk into a
kitchen and see someone sitting on a step-stool, I recognize that he is
using that step-stool as a chair. Thus, I have made a functional recognition,
from which I conclude that he is using the top step as a seat, he is resting
his legs on a lower step, and he is managing without a back support. Thus,
one can proceed from structural recognition to functional recognition or
vice versa.

This may be what Cockton means by "intention;" and it is most likely highly
societal in nature. However, we must not confuse the issue. We do not
classify our perceptions merely for the sake of classifying them but for
the sake of interacting with them. Depending on my needs, I may choose
to classify the chair at my dining room table as a place to sit while I
eat, a place to stand while I change a light bulb, or a weapon with which
to threaten (or attack) an intruder.

------------------------------

Date: Fri, 21 Aug 87 13:30:39 EDT
From: mckee%corwin.ccs.northeastern.edu@RELAY.CS.NET
Subject: Should AI be scientific? If yes, how?


One reason for AI to be scientific is simple intellectual honesty. If AI researchers
call themselves Computer Scientists (as many of them do), they're implicitly
also claiming to be scientists. And to be perfectly blunt, any scientist
who doesn't use the scientific method is a charlatan. I'd prefer AI to be
serious science, but if you don't want to do science, I won't argue.
Misrepresentation is a different matter: if it's not science, don't call
it science.

Another, more technical reason involves the perennial question
"what is reality?", and how one verifies any answer that might be submitted.
The question is important to AI not only in its "what is intelligence, really?"
aspect, but also because any AI system that interacts with the real world
ought to have an accurate understanding of its environment. Scientific facts
are (almost by definition) the most accurate description of the universe
that we have, and scientific theories the best summaries. And the reason
this is so is that the scientific method is the best way we've yet
discovered for making sure that facts and explanations are accurate.

Besides science, the other significant field with aspirations toward
understanding reality is philosophy, which has even evolved a specialized
subfield, ontology, devoted to the question. Now I haven't studied ontology,
not because the question is unimportant, but because I think philosophical
methodology is fatally flawed, and incapable of convincing me of the substance
of any conclusions that it might obtain. I'm not interested in a discussion
of how philosophy has or has not lost its way since Kant wrote his "Prolegomena
to Any Future Metaphysics Which Will Be Able to Come Forth as Science", but
I think philosophers' methodology has kept them from being as productive of
useful understanding as they could have been.

The critical question in the choice of methodology concerns verifiability.
I'd hate to see AI researchers cast adrift in a sea of notions by thinking
that a solid intellectual structure can be built on "Philosophical Foundations",
so I'm going to attempt to concisely describe a schema of the different ways
a theory can be confirmed. I'm afraid I'll have to leave out a lot of details
and examples, but I hope you'll be able to fill in the rest of the picture
yourself. In this schema, philosophy turns out to use the weakest form of
confirmation, AI as it's currently practiced uses somewhat stronger methods,
and the natural sciences end up as strongest.


To see how this happens, think of the subject matter of a field
of study as a set of statements (observations, facts) connected by a network of
reasons. The reasons can be arbitrarily long (or short) chains of inferences.
What a researcher needs to do to "understand" the field is find a set of
axioms and inference rules that will show the explanatory relation between
any pair of observations. However, the problem is underdetermined -- there's
more than one consistent set of explanations for any set of facts. At the
very least, one can always say "Because!", and define a special rule for
each ill-behaved pair of facts. Doing this everywhere gives your theory
a very simple structure, and Occam's razor decrees that simplicity is important.

If there are always multiple theories that can explain all the observed
data, then one must turn to some confirmation methodology to distinguish
between them, and using anything but the most powerful techniques is a waste
of time and resources. They are all based on prediction -- applying
explanations to facts until one has covered all the facts, then generating
new "potential facts" from incompletely bound explanations. For philosophers,
all that can be done is to compare predictions, since the operations of
the human mind are not externally visible. Worse, the facts of experience
itself are inaccessible to more than one theorist, so that the data
can't be verified, only statements about it. And since Godel proved his
famous incompleteness theorem, we've known that no realistic model of the
world can be derived from a finite set of axioms, so there's no way of telling
if any discrepancy in predictions might be cured by the addition of "just one
more" axiom. [Beyond this my metamathematics doesn't go. It would be
interesting to know if there's any convergence at higher degrees of
metafication. I don't think so, though.]

In AI, one can trace the operation of a theory that's been instantiated
as a program, as long as there's sharing of source code and the hardware is
the same. This gives you operational confirmation as well as implicational
confirmation, since you can watch the computer's "mind" at work, pausing
to examine the data, or single-step the inference engine. The points of
divergence between multiple theories of the same phenomenon can thus be
precisely determined. But theories summarize data, and where does the
data come from? In academia, it's probably been typed in by a grad student;
in industry, I guess this is one of the jobs of the knowledge engineer.
In either case there's little or no standard way to tell if the data that
are used represent a reliable sample from the population of possible data
that could have been used. In other sciences the curriculum usually includes
at least one course in statistics to give researchers a feel for sampling
theory, among other topics. Statistical ignorance means that when an AI
program makes an unexpected statement, you have only blind intuition and
"common sense" to help decide whether the statement is an artifact of sampling
error or a substantial claim.
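
To make the worry concrete, a toy check (a sketch of my own, not from any AI
curriculum) is easy to write: if the hand-entered cases suggested, say, a 5%
disagreement rate with the expert, an exact binomial tail tells you whether an
unexpected run of disagreements on new cases is plausibly just sampling error.

from math import comb

def binomial_tail(m, k, p):
    """P(at least k disagreements in m trials) if the true rate is p."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

# Example: the fielded program disagrees with the expert on 6 of 40 new cases,
# while the hand-entered sample suggested a 5% disagreement rate.
p_value = binomial_tail(m=40, k=6, p=0.05)
print(f"chance of >= 6/40 disagreements under a 5% rate: {p_value:.3f}")
# A small probability says the surprise is probably not sampling error;
# a large one says it is consistent with the sample the rules came from.
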
In the natural sciences, in addition to implicational and operational
confirmation, you'll find external confirmation. Each relation in the theory
is tested by an experiment on the phenomenon itself, often in many ways in
many experiments. It's not easy to think of statements about the content
of AI (as opposed to its practice or techniques) that *could* be validated
this way, much less hypotheses that actually *have* been experimentally
validated. Hopefully, it's my ignorance of the field that leads me to
say this. The best I can think of at the moment is "all intelligent systems
that interact with the physical world maintain multiple representations
for much of their knowledge."


To verify a hypothesis like this, one of the strategies one can
use is to build synthetic intelligent systems and then look at their
structure and performance, remembering that the engineering used during
construction is not the scientific goal. And then, to understand the
structure one would use analytic techniques, and to understand the performance
one would use behaviorist techniques. (Behaviorist anti-theory can safely
be ignored, but don't forget that their methodology allowed them to discover
learning sets when their animals became skilled at finding solutions to
new *kinds* of problems.)

Another strategy is to look at the structure and behavior of the
intelligent systems one finds in nature. One would use the same methods
to validate the behavioral descriptions as in the synthetic case, but
to study natural systems' structure one must use indirect, non-invasive means
or non-human subjects, since ethical considerations forbid destructive
testing of humans except in very special circumstances. However, the problem
here is not lack of data but lack of understanding. If I believed that
more data was needed, I'd be back in the lab recording from multiple
microelectrodes, or standing in line for time on a magnetic resonance
imager (which can already give you sub-millimeter resolution in a 3-dimensional
brain image -- why wait for magnetoencephalography which won't tell you
what you want to know anyway?), instead of building and running abstract
models of neural tissue.


Oops, four times as many words as I had hoped for.
Oh well, thanks for your attention.
- George McKee
College of Computer Science [sic]
Northeastern University, Boston 02115

CSnet: mckee@Corwin.CCS.Northeastern.EDU
Phone: (617) 437-5204

Quote of the day: "It's not what you don't know that hurts you,
it's the things you know that ain't so."

- Mark Twain

------------------------------

End of AIList Digest
********************
