AIList Digest Thursday, 9 Jul 1987 Volume 5 : Issue 172
Today's Topics:
Query - Xlisp,
Programming - Software Reuse & Abstract Specifications,
Scientific Method - Is AI a Science?
----------------------------------------------------------------------
Date: Tue 7 Jul 87 09:18:25-PDT
From: BEASLEY@EDWARDS-2060.ARPA
Subject: xlisp
If anyone has any information or has heard any information about using
XLISP (eXperimental LISP) on the PC, please send me that information at
beasley@edwards-2060.ARPA. Thank you.
------------------------------
Date: 6 Jul 87 05:28:23 GMT
From: vrdxhq!verdix!ogcvax!dinucci@seismo.css.gov (David C. DiNucci)
Subject: Re: Software Reuse -- do we really know what it is ? (long)
In article <titan.668> ijd@camcon.co.uk (Ian Dickinson) writes:
>Hence a solution: we somehow encode _abstractions_ of the ideas and place
>these in the library - in a form which also supplies some knowledge about the
>way that they should be used. The corollary of this is that we need more
>sophisticated methods for using the specifications in the library.
>(Semi)-automated transformations seem to be the answer to me.
>
>Thus we start out with a correct (or so assumed) specification, apply
>correctness-preserving transformation operators, and so end up with a correct
>implementation in our native tongue (Ada, Prolog, etc., as you will). The
>transformations can be interactively guided to fit the precise circumstance.
>[Credit] I originally got this idea from my supervisor: Dr Colin Runciman
>@ University of York.
In his Ph.D. thesis defense here at Oregon Graduate Center, Dennis
Volpano presented his package that did basically this. Though certainly
not of production quality, the system was able to take an abstraction
of a stack and, as a separate module, a description of a language and
data types within the language (in this case integer array and file,
if I remember correctly), and produce code which was an instantiation
of the abstraction - a stack implemented as an array or as a file.
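For flavor only, here is a minimal sketch of the idea (in Haskell; my
own reconstruction, not Volpano's system or notation). The abstraction
is a signature, and each "instantiation" is a concrete representation
satisfying it:

    -- Sketch only; all names here are illustrative.
    import qualified Data.Map as Map

    class Stack s where
      empty :: s a
      push  :: a -> s a -> s a
      pop   :: s a -> Maybe (a, s a)

    -- Instantiation 1: the stack represented as a linked list.
    newtype ListStack a = ListStack [a]

    instance Stack ListStack where
      empty                  = ListStack []
      push x (ListStack xs)  = ListStack (x : xs)
      pop (ListStack [])     = Nothing
      pop (ListStack (x:xs)) = Just (x, ListStack xs)

    -- Instantiation 2: an index-keyed table plus a top pointer,
    -- roughly analogous to the integer-array target; a file-backed
    -- version would satisfy the same signature but live in IO.
    data MapStack a = MapStack Int (Map.Map Int a)

    instance Stack MapStack where
      empty = MapStack 0 Map.empty
      push x (MapStack n m) = MapStack (n + 1) (Map.insert n x m)
      pop (MapStack 0 _)    = Nothing
      pop (MapStack n m)    =
        case Map.lookup (n - 1) m of
          Nothing -> Nothing
          Just x  -> Just (x, MapStack (n - 1) (Map.delete (n - 1) m))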
I haven't actually read Dennis' thesis, so I don't know what the
limitations or constraints on his approach are. I believe he is
currently employed in Texas at MCC.
---
Dave DiNucci dinucci@Oregon-Grad
------------------------------
Date: 7 Jul 87 02:21:06 GMT
From: vrdxhq!verdix!ogcvax!pase@seismo.css.gov (Douglas M. Pase)
Subject: Re: Software Reuse (short title)
In article <glacier.17113> jbn@glacier.UUCP (John B. Nagle) writes:
>
> The trouble with this idea is that we have no good way to express
>algorithms "abstractly". [...]
Well, I'm not sure just where the limits are, but polymorphic types can go
a long way towards what you have been describing. It seems that a uniform
notation for operators + the ability to define additional operators +
polymorphically typed structures are about all you need. Several functional
languages already provide an adequate basis for these features. One such
language is called LML, or Lazy ML. Current language definitions tend to
concentrate on the novel features rather than attempt to make LML a full-blown
"production" language, and therefore may be missing some of your favorite
features. However, my point is that we may well be closer to your objective
than some of us realize.
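As a crude illustration of those three ingredients (in Haskell, a
close cousin of LML; the operator +> and the names below are mine, not
from any standard library):

    infixr 5 +>

    -- A user-defined operator: "push" an element onto a plain list.
    (+>) :: a -> [a] -> [a]
    x +> xs = x : xs

    -- One polymorphic definition, usable at every element type.
    size :: [a] -> Int
    size = foldr (\_ n -> n + 1) 0

    main :: IO ()
    main = do
      print (size (1 +> 2 +> [3 :: Int]))   -- works on integers
      print (size ('a' +> "bc"))            -- and on characters

The same definitions serve integers and characters without rewriting,
which is the kind of reuse being asked for.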
I apologize for the brevity of this article -- if I have been too vague,
send me e-mail and I will be more specific.
--
Doug Pase -- ...ucbvax!tektronix!ogcvax!pase or pase@Oregon-Grad.csnet
------------------------------
Date: 7 Jul 87 15:18:32 GMT
From: debray@arizona.edu (Saumya Debray)
Subject: Automatic implementation of abstract specifications
In article <1337@ogcvax.UUCP>, dinucci@ogcvax.UUCP (David C. DiNucci) writes:
> In his Ph.D. thesis defense here at Oregon Graduate Center, Dennis
> Volpano presented his package that did basically this. Though certainly
> not of production quality, the system was able to take an abstraction
> of a stack and, as a separate module, a description of a language and
> data types within the language (in this case integer array and file,
> if I remember correctly), and produce code which was an instantiation
> of the abstraction - a stack implemented as an array or as a file.
I believe there was quite a bit of work on this sort of stuff at MIT
earlier in the decade. E.g., there was a Ph.D. thesis [ca. 1983] by
M. K. Srivas titled "Automatic Implementation of Abstract Data Types"
(or something close to it). The idea, if I remember correctly, was to
take sets of equations specifying the "source" ADT (e.g. stack) and the
"target" ADT (e.g. array), and map the source into the target.
--
Saumya Debray CS Department, University of Arizona, Tucson
internet: debray@arizona.edu
uucp: {allegra, cmcl2, ihnp4} !arizona!debray
------------------------------
Date: Mon, 6 Jul 87 10:06:05 MDT
From: shebs%orion@cs.utah.edu (Stanley T. Shebs)
Subject: AI vs Scientific Method
I can understand Don Norman's unhappiness about the lack of scientific method
in AI - from a practical point of view, the lack of well-understood criteria
for validity means that refereeing of publications is unlikely to be very
objective... :-(
The scientific method is a two-edged sword, however. Not only does it define
what is interesting, but also what is uninteresting - if you can't devise a
controlled experiment varying just a single parameter, you can't say anything
about a phenomenon. A good scientist will perhaps be able to come up with
a different experiment, but if stymied enough times, he/she is likely to move
on to something else (at about the same time the grant money runs out :-) ).
Established sciences like chemistry have an advantage in that the parameters
most likely to be of interest are already known; for instance temperature,
percentages of compounds, types of catalysts, and so forth. What do we have
for studying intelligence? Hardly anything! Yes, I know psychologists have
plenty of experimental techniques, but the quality is pretty low compared to
the "hard sciences". A truly accurate psychology experiment would involve
raising cloned children in a computer-controlled environment for 18 years.
Even then, you're getting minute amounts of data about incredibly complex
systems, with no way to know if the parameters you're varying are even
relevant.
There's some consolation to be gained from the history of science/technology.
The established fields did not spring full-blown from some genius' head;
each started out as a confused mix of engineering, science, and speculation.
Most stayed that way until the late 19th or early 20th century. If you don't
believe me, look at an 18th or early 19th century scientific journal (most
libraries have a few). Quite amusing, in fact very similar to contemporary
AI work. For instance, an article on electric eels from about 1780 featured
the observations that a slave grabbing the eel got a stronger shock on the
second grab, and that the shock could be felt through a wooden container.
No tables or charts or voltmeter readings :-).
My suggestion is to not get too worked up about scientific methods in AI.
It's worth thinking about, but people in other fields have spent centuries
establishing their methods, and there's no reason to suppose it will take any
less for AI.
stan shebs
shebs@cs.utah.edu
------------------------------
Date: Mon, 6 Jul 1987 16:29 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest V5 #170
I would like to see that discussion of "symbol grounding" reduced to
much smaller proportions because I think it is not very relevant to
AI, CS, or psychology. To understand my reason, you'd have to read
"Society of Mind", which argues that this approach is obsolete because
it recapitulates the "single agent" concept of mind that dominates
traditional philosophy. For example, the idea of "categorizing"
perceptions is, I think, mainly an artifact of language; different
parts of the brain deal with inputs in different ways, in parallel.
In SOM I suggest many alternative ways to think about thinking and, in
several sections, I also suggest reasons why the single agent idea has
such a powerful grip on us. I realize that it might seem self-serving
for me to advocate discussing Society of Mind instead. I would have
presented my arguments in reply to Harnad, but they would have been
too long-winded and the book is readily available.
------------------------------
Date: Mon, 6 Jul 87 18:25:51 EDT
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: Re: AIList Digest V5 #171
While I have some quibbles with Don N.'s long statement on AI vis-a-vis
(or vs.) science, I think he gets close to what I have long felt is a
key point -- that the move towards formalism in AI, while important
in the change of AI from a pre-science (alchemy was Drew McDermott's
term) to a science, is not enough. For a field to make the transition,
an experimental methodology is needed. In AI we have the potential
to decide what counts as experimentation (with implementation being
an important consideration) but have not really made any serious
strides in that direction. When I publish work on planning and
claim ``my system makes better choices than <name of favorite
planning program>'' I cannot verify this other than by showing
some examples that my system handles that <other> can't. But of
course, there is no way of establishing that <other> couldn't do
examples mine can't, and so on. Instead we can end up forming camps of
beliefs (the standard proof methodology in AI) and arguing -- sometimes
for the better, sometimes for the worse.
While I have no solution for this, I think it is an important issue
for consideration, and I thank Don for provoking this discussion.
-Jim Hendler
------------------------------
Date: Tue, 7 Jul 1987 01:11 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest V5 #171
At the end of that long and angry flame, I think D. Norman unwittingly
hit upon what made him so mad:
> Gedanken experiments are not accepted methods in science: they are
> simply suggestive for a source of ideas, not evidence at the end.
And that's just what AI has provided these last thirty years - a
source of ideas that were missing from psychology in the century
before. Representation theories, planning procedures, heuristic
methods, hundreds of such. The history of previous psychology is rife
with "proved" hypotheses, few of which were worth a damn, and many of
which were refuted by Norman himself. Now "cognitive psychology" -
which I claim and Norman will predictably deny (see there: a testable
hypothesis!) is largely based on AI theories and experiments - is
taking over at last - as a result of those suggestions for ideas.
------------------------------
Date: Tue, 7 Jul 87 01:28 MST
From: "Paul B. Rauschelbach" <Rauschelbach@HIS-PHOENIX-MULTICS.ARPA>
Subject: What is science
I normally only observe this discussion, but Don Norman's pomposity
struck a nerve. The first objection I have is to his statement that
mathematics and philosophy are not sciences "in the normal
interpretation of the word." Webster's definition (a fairly normal
interpretation) is: "accumulated knowledge systematized and formulated
with reference to the discovery of general truths or the operation of
general laws." This certainly applies to both.
The next problem is his statement that AI people think they're
scientists. He seemed to believe that it was a science until Nils Nilsson
told him the obvious. AI, as its name implies, is a product, not a
phenomenon, not an occurrence of nature to be described. The problem is the
creation of a product, an engineering problem. The preservation of theory is
far from an engineer's mind. The engineer uses theory to describe possible
solutions. If an engineer comes across a possible solution that has not been
addressed by theory, s/he may get his or her hands a little dirty before the
"scientists" take control of it. It seems to me that much of the talk in this
discussion is of a hypothetical nature, one of the elements of THE SCIENTIFIC
METHOD he was defending. This is a good place for that portion of the
method, as well as statement of the problem. The experimentation is left to
the psychologists, neurologists, etc. I see no one but scientists claiming
to be scientists, and I hear AI people shouting, "Yeah, but how do you code
it?" or "What doohickey will do that?" Implementation of theory. I have also
read discussion of the testing of implementation. Come to think of it,
engineering also fits the definition of science.
Both things, implementation and theory, have been and should be discussed here.
If they intermingle, this can only be healthy, even if somewhat confusing. I
hope we can both get down off our respective high horses now.
Paul Rauschelbach
Honeywell Bull
P.O. Box 8000, M/S K55, Phoenix, AZ 85006
(602) 862-3650
pbr%pco@BCO-MULTICS.ARPA
Disclaimer: The opinions expressed above are mine, and not endorsed by
Honeywell Bull.
------------------------------
Date: 7 Jul 87 08:41:33 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Why AI is not a science
     Date: Fri, 3 Jul 87 07:29:41 pdt
     From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)
I started out writing a message that said this message was
97% true, but that there was an arguable 3%, namely:
     The problem is that most folks in AI think they are scientists * * *
I was going to pick a nit with the word "most".
Then, I remembered that the AAAI-86 Proceedings were
split into a "Science" track and an "Engineering" track,
the former being about half again as thick as the latter...
------------------------------
Date: 8 Jul 87 01:37:17 GMT
From: munnari!goanna.oz!jlc@uunet.UU.NET (J.L Cybulski)
Subject: Re: Why AI is not a science
Don Norman says that AI is not a Science!
Is Mathematics a science or is it not?
No experiments, no comparisons, thus they are not Sciences!
Perhaps both AI and Maths are Arts, i.e. creative disciplines.
Both adhere to their own rigour and methods.
Both talk about hypothetical worlds.
Both are used by researchers from other disciplines as tools:
Maths is used to formally describe natural phenomena,
AI is used to construct computable models of these phenomena.
So, where is the problem?
Hmmm, I think some of the AI researchers wander into the
areas of their incompetence and they impose their quasi-theories
on the specialists from other scientific domains. Some of those
quasi-theories are later reworked and adopted by the same specialists.
Is it, then, good or bad? It seems that a lack of scientific constraints
may be helpful in advancing knowledge about the principles of science;
indeed, the greatest breakthroughs in Science have come from those
who were regarded as unorthodox in their methods.
Maybe AI is such an unorthodox Science, or perhaps an Art.
Let us keep AI this way!
Jacob L. Cybulski
------------------------------
Date: 07-Jul-1987 0829
From: billmers%aiag.DEC@decwrl.dec.com (Meyer Billmers, AI Applications Group)
Subject: Re: AIList Digest V5 #171
Don Norman writes that "AI will contribute to the A, but will not
contribute to the I unless and until it becomes a science...".
Alas, since physics is a science and mathematics is not one, I guess the
latter cannot help contribute to the former unless and until mathematicians
develop an appreciation for the experimental methods of science. Ironic
that throughout history mathematics has been called the queen of sciences
(except, of course, by Prof. Norman).
Indeed, physics is a case in point. There are experimental physicists, but
there are also theoretical ones who formulate, postulate, and hypothesize
about things they cannot measure or observe. Are these men not scientists?
And there are those who observe and measure that which has no theoretical
foundation (astrologers hypothesize about people's fortunes; would any
amount of experimentation turn astrology into a science?). I believe the
mix between theoretical underpinnings and scientific method makes for
science. The line is not hard and fast.
By my definition, AI has the right attributes to make it a science. There
are theoretical underpinnings in several domains (cognitive science,
theory of computation, information theory, neurobiology...) and yes, even an
experimental nature. Researchers postulate theories (of representation, of
implementation) but virtually every Ph.D. thesis also builds a working
program to test the theory.
If AI researchers seem to be weak in the disciplines of the scientific
method, I submit it is because the phenomena they are trying to understand
are far more complex and elusive of definition than those of most sciences.
This is not a reason to deny AI the title of science, but rather a reason
to increase our efforts to understand the field. With this understanding
will come an increasingly visible scientific discipline.
------------------------------
Date: Mon, 6 Jul 87 17:19:38 PDT
From: cottrell%ics@sdcsvax.ucsd.edu (Gary Cottrell)
Subject: Re: thinking about thinking not being science
In article <8707030236.AA29872@flash.bellcore.com>
amsler@FLASH.BELLCORE.COM (Robert Amsler) writes:
>I think Don Norman's argument is true for cognitive psychologists,
>but may not be true for AI researchers. The reason is that the two
>groups seek different answers. [....] Speculating about flight might
>lead to building other types of aircraft (as certainly those now
>humorous old films of early aviation experiments show), but it would
>certainly be a bad procedure to follow to understand birds and how
>they fly.
In fact, the Wright Brothers spent quite a bit of time studying how
birds fly, and as a recent Scientific American notes, we may still have
a lot to learn from natural systems. A piece of Dennis Conner's boat was
based on a whale's tailfin.
I think Don's point was that many times AI researchers spend a lot of time
theorizing about how humans work, and then use that as justification for
their designs for AI systems, without ever consulting the facts.
It is certainly true that Cognitive Scientists and AI researchers are at
different ends of a spectrum (from NI (Natural Intelligence) to AI), but it
would be foolish for AI researchers not to take hints from the best example
of an intelligent being we have. On the other hand, it is not appropriate
for a medical expert system to make the same mistakes doctors do - though
making the same mistakes is sometimes a criterion for a "good" cognitive
model.
gary cottrell
Institute for Cognitive Science C-015
UCSD,
La Jolla, Ca. 92093
cottrell@nprdc.arpa (ARPA) (or perhaps cottrell%ics@cs.ucsd.edu)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!sdics!cottrell (USENET)
**********************************************************************
THE FUTURE'S SO BRIGHT I GOTTA WEAR SHADES - Timbuk 3
**********************************************************************
------------------------------
End of AIList Digest
********************