AIList Digest Tuesday, 12 Jul 1988 Volume 8 : Issue 5
Today's Topics:
Philosophy:
Re: Who else isn't a science?
Metaepistemology & Phil. of Science
Re: Bad AI: A Clarification
Re: The Social Construction of Reality
Generality in Artificial Intelligence
Theoretical vs. Computational Linguistics
----------------------------------------------------------------------
Date: 3 Jul 88 08:07:20 GMT
From: agate!garnet!weemba@ucbvax.berkeley.edu (Obnoxious Math Grad
Student)
Subject: Re: Who else isn't a science?
I'm responding very slowly nowadays. I think this will go over better
in s.phil.tech anyway, so I'm directing followups there.
In article <2663@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt writes:
>In article <11387@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>>These researchers start by looking at *real* brains, *real* EEGs, they
>>work with what is known about *real* biological systems, and derive very
>>intriguing connectionist-like models. To me, *this* is science.
>And working in the other direction is not SCIENCE? Oh please...
Indeed it isn't. In principle, it could be, but it hasn't been.
Physics, for example, does work backwards. All physical models are expected
to point to experiment. Non-successful models are called "failures" or, more
politely, "mathematics". They are not called "Artificial Reality" as a way
of hiding the failure.
(If it isn't clear, I do not consider mathematics to be a "science".
My saying AI has not been science, in particular, is not meant as a
pejorative.)
>>CAS & WJF have developed a rudimentary chaotic model based on the study
>>of olfactory bulb EEGs in rabbits. They hooked together actual ODEs with
>>actual parameters that describe actual rabbit brains, and get chaotic EEG
>>like results.
>There is still much that is not understood about how neurons work.
>Practically nothing is known about how structures of neurons work.
And theorizing forever won't tell us either. You have to get your hands
dirty.
> In
>50 years, maybe we will have a better idea. In the mean time,
>modelling incomplete and incorrect physical data is risky at best.
Incorrect??? What are you referring to?
Risky or not--it is "science". It provides constraints that theory must
keep in mind.
> In
>the mean time, synthesizing models is just as useful.
No. Synthesizing out of thin air is mostly useless. Synthesizing when
there is experiment to give theory feedback, and theory to give exper-
iment a direction to look, is what is useful. That is what Edelman,
Skarda and Freeman are doing.
>>We've also got, I think, a lot of people here who've never studied
>>the philosophy of science. Join the crowd.
>
>I took a course from Kuhn. Speak for yourself, chum.
Gee. And I know Kuhn's son from long ago. A whole course? Just enough
time to memorize the important words. I'm not impressed.
>>>May I also inform the above participants that a MAJORITY of AI
>>>research is centered around some of the following:
>>>[a list of topics]
>>Which sure sounded like programming/engineering to me.
>Oh excuse me. They're not SCIENCE. Oh my. Well, we can't go studying THAT.
What's the point? Who said you had to study "science" in order to be
respectable? I think philosophy is great stuff--but I don't call it
science. The same for AI.
>>If your simulations have only the slightest relevance to ethology, is your
>>advisor going to tell you to chuck everything and try again? I doubt it.
>So sorry to disappoint you. My coworkers and I are modelling real,
>observable behavior, drawn from fish and ants.
>Marvin Minsky, our advisor, warns that we should not get "stuck" in
>closely reproducing behavior,
That seems to be precisely what I said up above.
>The bottom line is that it is unimportant for us to argue whether or
>not this or that is Real Science (TM).
You do so anyway, I notice.
>What is important is for us to create new knowledge either
>analytically (which you endorse) OR SYNTHETICALLY (which is just as
>much SCIENCE as the other).
Huh?? Methinks you've got us backwards. Heinously so. And I strongly
disagree with this "just as much as the other".
>Just go ask Kuhn.
Frankly, I'm not all that impressed with Kuhn.
ucbvax!garnet!weemba Matthew P Wiener/Brahms Gang/Berkeley CA 94720
------------------------------
Date: Sun, 03 Jul 88 03:47:51 EST
From: Jeff Coggshall <KLEMOSG%YALEVM.BITNET@MITVMA.MIT.EDU>
Subject: Metaepistemology & Phil. of Science
>Date: Fri, 24 Jun 88 18:46 O
>From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
>It is obvious the agent only can have a representation of the Ding an
>Sich. In this sense the reality is unknowable. We only have
>descriptions of the actual world.
>It seems that for the most part evolution has been responsible for
>developing life forms which have good descriptions of the Ding an Sich
I don't think that it's even obvious that we have anything that is in any
sense a "representation" of the Ding an Sich. The argument from "well, hey, -
look, we're doing pretty well at surviving, so therefore what we think is
really out there must have some sort of _correspondence_ with what's really
out there" just doesn't hold up. It doesn't follow. Yes, we are adapted to
survive, but that doesn't mean that our thoughts about what's going on are
even incomplete "representations" of some Ding an Sich.
Some may say: "well - who cares if we've got a real _representation_ of
some 'really real reality', so long as our theoretical assumptions
continue to yield fruitful results - why worry?"
The problem is that our theoretical assumptions will bias our empirical
results. "Facts" are theory-laden and theories are value-laden. It all
comes down to ethics. Our human perceptual world is at least as real (and
in many ways more real) to us (and to whom else would it make sense to
talk about its being real?) as the world of theoretical physics.
Once we assume that there is no privileged source of knowledge about the
way things really are, then, it seems, we are left either with saying that
"anything goes" (astrology is just as valid as physics, and so is voodoo)
or with insisting that "reality", however it is construed, must constrain
cognitive activity and that one must cultivate an openness to this
constraint.
------------------------------------------------------------------------
>Date: 27 Jun 88 00:18:24 GMT
>From: bc@media-lab.media.mit.edu (bill coderre and his pets)
>Subject: Re: Who else isn't a science?
>The bottom line is that it is unimportant for us to argue whether or
>not this or that is Real Science (TM).
Well, yes and no. It doesn't make any sense to go around accusing other
people of not doing science when you haven't established any criteria for
considering some kind of discovery/inquiry-activity as a science. Is math a
science? There doesn't seem to be any empirical exploration going on... Is
math just a conceptual tool used to model empirical insights? Could AI be the
same?
I found a quote from Nelson Goodman that might interest you. Tell me what
you think, y'all:
"Standards of rightness in science do not rest on uniformity and constancy
of particular judgments. Inductive validity, fairness of sample, relevance of c
ategorization, all of them essential elements in judging the correctness of obs
ervations and theories, do depend upon conformity with practic - but upon a ten
uous conformity hard won by give-and-take adjustment involving extensive revisi
on of both observations and theories." (from _Mind and other Matters_, 1984, p.
12)
>What is important is for us to create new knowledge either
>analytically (which you endorse) OR SYNTHETICALLY (which is just as
>much SCIENCE as the other).
- Are you using Kant's analytic/synthetic distinction? Because if you are
(or want to be), then you should realize that all new knowledge is synthetic.
You might even be interested in an article by W. V. O. Quine in the book "From
a Logical Point of View". It is called "Two Dogmas of Empiricism" and therein
he convincingly trashes any possible analytic/synthetic distinction. I agree
with him. I think that the burden of proof lies on anyone who wants to claim
that there is any a priori knowledge. Believing this presupposes a God's eye
(or actually a no-eye) point of view on reality, which doesn't make sense.
Jeff Coggshall
(Jcoggshall@hampvms or Klemosg@yalevm)
------------------------------
Date: 5 Jul 88 18:09:24 GMT
From: bc@media-lab.media.mit.edu (bill coderre)
Subject: Re: Who else isn't a science?
I am going to wrap up this discussion here and now, since I am not
interested in semantic arguments or even philosophical ones. I'm sorry
to be rude. I have a thesis to finish as well, due in three
weeks.
First, the claim was made that there is little or no research in AI
which counts as Science, in a specific interpretation. This statement
is incorrect.
For example, the research that my immediate colleagues and I are doing
is "REAL" Science, since we model REAL animals, produce very REALISTIC
behavior, and have REAL ethologists as critics of our work.
Next, the claim was made that synthesis as an approach to AI has not
panned out as Science. Well, wrong again. There's plenty of such.
Then I am told that few AI people understand the Philosophy of
Science. Well, gee. Lots of my colleagues have taken courses in such.
Most are merely interested in the fundamentals, and have taken survey
courses, but some fraction adopt a philosophical approach to AI.
If I was a better AI hacker, I would just append a list of references
to document my claims. Unfortunately, my references are a mess, so let
me point you at The Encyclopedia of Artificial Intelligence (J. Wiley
and Sons), which is generally excellent. Although it lacks specific
articles on AI as a Science (I think; I didn't find any on a quick
glance), it has plenty of references concerning the philosophical issues
central to AI. Highly recommended. (Incidentally, there's
plenty of stuff in there on the basic approaches to and results from
AI research, so if you're a pragmatic engineer, you'll enjoy it too.)
Enough. No more followups from me.
------------------------------
Date: 6 Jul 88 15:00:57 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Bad AI: A Clarification
In article <1337@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
I don't have time to respond to all of your articles that respond
to mine, but will try to say something. I suggested that you give
specific criticism of specific research, but you have declined to do
so. That's unfortunate, because as it is most people are just going
to ignore you, having heard such unsupported attacks before.
>In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
>>>It would be nice if they followed good software engineering practices and
>>>structured development methods as well.
>>Are you trying to see how many insults can fit into one paragraph?
>No.
OK, I'll accept that. But if so, you failed to make your intention
clear. And of course it *would* be nice if they did, etc., but do you
know that "they" don't? My experience is that appropriate software
engineering practices are followed in many cases. That doesn't mean
they all use JSP (or equivalent), but then it's not always appropriate
to do so.
>No-one in UK HCI research, as far as I know, objects to the criticism
>that research methodologies are useless until they are integrated
>with existing system development approaches.
That no one objects is not a valid argument. They might all be wrong.
>On software engineering too, HCI will have to deliver its
>goods according to established practices. To achieve this, some HCI
>research must be done in Computer Science departments in collaboration
>with industry. There is no other way of finishing off the research
>properly.
There is a difference between research and delivering goods that can
be used by industry. It is not the case that all research must be
delivered in finished form to industry. Of course, the needs of
industry, including their desire to follow established practices, are
important when research will be so delivered, but in other cases such
needs are not so significant.
We must also consider that the results of research are not always
embodied in software.
>You've either missed or forgotten a series of postings over the last
>two years about this problem in AI.
Or perhaps I don't agree with those postings, or perhaps I don't agree
with your view of the actual state of affairs.
>Project managers want to manage IKBS projects like existing projects.
Of course they do: that's what they know. You've yet to give any
evidence that they're right and have nothing to learn.
>You must also not be talking to the same UK software houses as I am, as
>(parts of) UK industry feel that big IKBS projects are a recipe for
>burnt fingers, unless they can be managed like any other software project.
Big IKBS projects are risky regardless of how they're managed. Part
of the problem is that AI research hasn't advanced far enough: it's
not just a question of applying some software engineering; and so
the difficulties with big IKBS projects are not necessarily evidence
that they must be managed like any other software project.
But this is all beside the point -- industrial IKBS projects and
AI research are not the same thing.
------------------------------
Date: 6 Jul 88 15:16:08 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: Bad AI: A Clarification
In article <451@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton) writes:
1 Are you really trying to oppose "bad AI" or are you opportunistically
1 using it to attack AI as a whole? Why not criticise specific work you
1 think is flawed instead of making largely unsupported allegations in
1 an attempt to discredit the entire field?
In article <1336@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
2 No, I've made it clear that I only object to the comfortable,
2 protected privilege which AI gives to computational models of
2 Humanity.
If that is so, why don't you confine your remarks to that instead of
attacking AI's existence as a discipline?
2 Anything which could be called basic research is the province of other
2 disciplines, who make more progress with less funding per investigation (no
2 expensive workstations etc.).
Have you considered the costs of equipment in, say, Medicine or
Physics?
2 I do not think there is a field of AI. There is a strange combination
2 of topic areas covered at IJCAI etc. It's a historical accident, not
2 an epistemic imperative.
So are the boundaries of the UK. Does that mean it should not exist
as a country?
2 My concern is with the study of Humanity and the images of Humanity
2 created by AI in order to exist. Picking on specific areas of work is
2 irrelevant.
The question will then remain as to whether there is any work for
which your criticism is valid.
2 But when I read misanthropic views of Humanity in AI, I will reply.
2 What's the problem?
Perhaps you will have a better idea of the problem if you consider
that "responding to misanthropic views of Humanity in AI" is not an
accurate description of what you do.
-- Jeff
------------------------------
Date: 6 Jul 88 16:51:40 GMT
From: mcvax!ukc!etive!aiva!jeff@uunet.uu.net (Jeff Dalton)
Subject: Re: The Social Construction of Reality
In article <1332@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>The dislike of ad hominem arguments among scientists is a sign of their
>self-imposed dualism: personality and the environment stop outside the
>cranium of scientists, but penetrate the crania of everyone else.
No, it is a sign that they recognize that someone can be right despite
having qualities that might make their objectivity suspect. They also
can remember relativity being attacked as "Jewish science" and other
ad hominem arguments of historical note.
>When people adopt a controversial position for which there is no convincing
>proof, the only scientific explanation is the individual's ideology.
Perhaps the person is simply mistaken and thinks there is convincing
proof. (Suppose they misread a conclusion that said something was
"not insignificant", say.) Or perhaps they are don't really hold
the position in question but are simply using it because others
find it hard to refute.
-- Jeff
------------------------------
Date: Thu, 7 Jul 88 10:24 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: Generality in Artificial Intelligence
Distribution-File:
AILIST@AI.AI.MIT.EDU
JMC@SAIL.Stanford.EDU
This entry was inspired by John McCarthy's Turing Award lecture in
Communications of the ACM, December 1987, Generality in Artificial
Intelligence.
> "In my opinion, getting a language for expressing general
> commonsense knowledge for inclusion in a general database is the key
> problem of generality in AI."
What is commonsense knowledge?
Here follows an example where commonsense knowledge plays its part. A
human parses the sentence
"Christine put the candle onto the wooden table, lit a match and lit
it."
The difficulty which humans overcome with commonsense knowledge, but
which is hard for a program, is to determine whether the last word,
the pronoun "it", refers to the candle or to the table. After all,
you can burn a wooden table.
Probably a human would reason, in less than a second, like this.
"Assume Christine is sane. The event might have taken place at a
party or during her rendezvous with her boyfriend. People who do
things such as taking part in parties are most often sane.
People who are sane are more likely to burn candles than tables.
Therefore, Christine lit the candle, not the table."
It seems to me that the inferences themselves are not so demanding, but
the inferencer utilizes a large amount of background knowledge and a good
associative access mechanism.
Thus, it would seem that in order for us to see true commonsense
knowledge exhibited by a program we need:
* a vast amount of knowledge involving the world of a person
in virtual memory. The knowledge involves gardening,
Buddhism, the emotions of an ordinary person and so forth -
its amount might equal a good encyclopaedia.
* a good associative access mechanism. An example of such
an access mechanism is the hashing mechanism of the
Metalevel Reasoning System described in /1/.
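To make this concrete, here is a minimal sketch in Python. It is not
the MRS mechanism itself; the predicates and weights are invented for
illustration. Facts are indexed by predicate, so lookup is associative
rather than a linear scan, and a crude plausibility score picks the
referent of "it" in the candle sentence:

    # Toy illustration only: an associatively indexed fact store
    # plus an invented plausibility score for pronoun resolution.
    from collections import defaultdict

    facts = defaultdict(set)

    def tell(pred, *args):
        facts[pred].add(args)

    tell("flammable", "candle")
    tell("flammable", "table")        # you *can* burn a wooden table
    tell("sane-people-light", "candle")

    def plausibility(candidate):
        score = 0
        if (candidate,) in facts["flammable"]:
            score += 1                # physically possible
        if (candidate,) in facts["sane-people-light"]:
            score += 10               # commonsense preference
        return score

    print(max(["candle", "table"], key=plausibility))   # -> candle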
What kind of formalism should we use for expressing the commonsense
knowledge?
Modern theoretical philosophy knows of a number of logics with
different expressive power /2/. They form a natural scale for
evaluating different knowledge representation formalisms. For
example, it would be very interesting to know whether Sowa's
Conceptual Structures correspond to a previously known logical system.
I remember having seen a paper which complained that, to a certain
extent, KRL is just another syntax for first-order predicate logic.
In my opinion, it is possible that an attempt to express commonsense
knowledge with a formalism is analogous to an attempt to fit a whale
into a sardine tin. The knowledge of a person has so many nuances,
which are well reflected by the richness of the language used in
poetry and fiction (yes, a poem may contain nontrivial knowledge!).
Think of the Earthsea trilogy by Ursula K. Le Guin. The climax of the
trilogy is when Sparrowhawk the wizard saves the world from Cob's evil
deeds by drawing the rune Agnen across the spring of the Dry River:
"'Be thou made whole!' he said in a clear voice, and with his staff
he drew in lines of fire across the gate of rocks a figure: the rune
Agnen, the rune of Ending, which closes roads and is drawn on coffin
lids. And there was then no gap or void place among the boulders.
The door was shut."
Think of how difficult it would be to express that with a formalism,
preserving the emotions and the nuances.
I propose that the usage of *natural language* (augmented with
text-processing, database and NL understanding technology) for
expressing commonsense knowledge be studied.
> "Reasoning and problem-solving programs must eventually allow the
> full use of quantifiers and sets, and have strong enough control
> methods to use them without combinatorial explosion."
It would seem to me that one approach to this problem is the use of
heuristics, and a good way to learn to use heuristics well is to study
how the human brain does it.
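As a rough illustration of the point (nothing from McCarthy's lecture;
the toy problem and heuristic below are invented), a best-first search
lets a heuristic order the frontier, so that with a good heuristic only
a small corner of the combinatorial space is ever examined:

    # Toy best-first search: the heuristic h orders the frontier,
    # so a good h examines few states where blind enumeration
    # would explode combinatorially.
    import heapq

    def best_first(start, goal, successors, h):
        frontier = [(h(start), start)]
        seen = {start}
        while frontier:
            _, state = heapq.heappop(frontier)
            if state == goal:
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (h(nxt), nxt))
        return None

    # Example: reach 20 from 0 by +1/+3 steps; h = distance left.
    print(best_first(0, 20, lambda n: [n + 1, n + 3],
                     lambda n: abs(20 - n)))            # -> 20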
Here follows a reference which you may not know and which will certainly
prove useful when studying the heuristic methods the human brain uses.
In 1946, the doctoral dissertation of the Dutch psychologist Adriaan
D. de Groot was published. The name of the dissertation is Het
Denken van den Schaker, The Thinking of a Chess Player.
In the 30's, de Groot was a relatively well-known chess master.
The material of the book was created by giving chess positions to
grandmasters, international masters, national masters, first-class
players and so forth to study. Each player told aloud how he decided
which move he thought was best.
Good players immediately start studying the right alternatives.
Weaker players usually calculate just as much, but they tend to
follow the wrong ideas.
Later in his life, de Groot became the education manager of the
Philips Corporation and a professor of psychology at the University
of Amsterdam. His dissertation was translated into English in the
60s at the Stanford Institute as "Thought and Choice in Chess".
> "Whenever we write an axiom, a critic can say it is true only in a
> certain context. With a little ingenuity, the critic can usually
> devise a more general context in which the precise form of the axiom
> does not hold. Looking at human reasoning as reflected in language
> emphasizes this point."
I propose that the concept of a theory with a context be formalized.
A theory in logic has a set of true sentences (axioms) and a set of
inference rules which are used to derive theorems from axioms -
therefore, it can be described with a 2-tuple
<axioms, inference_rules>.
A theory with a context would be a 3-tuple
<axioms, inference_rules, context>
where "context" is a set of sentences.
Someone might make interesting theoretical philosophy or mathematical
logic research out of this.
References:
/1/ Stuart Russell: The Compleat Guide to MRS, Stanford University
/2/ Antti Hautamaeki, a philosopher friend of mine, personal
communication.
------------------------------
Date: 8 Jul 88 20:11:48 GMT
From: ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu (Rick Wojcik)
Subject: Theoretical vs. Computational Linguistics (was Me and Karl
Kluge...)
In article <1342@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>... but I don't know if a non-computational linguist
>working on semantics and pragmatics would call it advanced research work.
Implicit in this statement is the mistaken view that non-computational
linguists get to define 'advanced' research work. Computational
linguists often are fully qualified theoretical linguists, not just
computer scientists with a few courses in linguistics. But the concerns
of the computational linguist are not always compatible with those of 'pure'
theoretical linguists. Since many linguistic theories do not attempt to
model the processes by which we produce and comprehend language (i.e. they
concern themselves primarily with the validation of grammatical form), they
fail to address issues that computational linguists are forced to ponder.
For example, non-computational linguists have largely ignored the questions
of how one disambiguates language or perceives meaning in ill-formed
phrases. The question is not just how many possible meanings a form can
express, but how the listener arrives at the correct meaning in a given
context. Given that theoretical linguists seldom have to demonstrate
concrete effects of their research, it is difficult to get them to focus
on these issues. You should regard theoretical linguists as striving for
a partial theory of language, whereas computational linguists have to go
after the whole thing. A major limitation for computational linguists is
that they must confine themselves to operations that they can get a
machine to perform.
--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: uw-beaver!ssc-vax!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844
------------------------------
End of AIList Digest
********************