Machine Learning List: Vol. 1 No. 6
Saturday, August 12, 1989

Contents:
Experimentation (4)
Learning from Natural Language
Notes: ML-LIST will be delayed because of CogSci & IJCAI
ML-LIST back issues available by FTP
Replying to ML-LIST messages


The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
of Volume 1 may be FTP'd from /usr2/spool/ftp/pub/ml-list/V1/<N> where N
is the number of the issue; Host ics.uci.edu; Userid & password: anonymous

----------------------------------------------------------------------
Date: Sun, 6 Aug 89 15:43:57 PDT
From: Tom Dietterich <tgd@cs.orst.edu>
To: ml@ICS.UCI.EDU
Subject: Experimentation

RE: Pazzani's comments on experimentation

> 1. Currently, in ML, experimentation is done poorly. First, many
> experiments are not run to test a particular hypothesis. Instead
> they are exploratory post hoc data analyses.

Experiments need not test a particular hypothesis--there are
exploratory experiments, after all.

While there may be room for improvement, I have found most published
experiments to be well-designed and quite valuable. Significance
testing is of limited utility, since the assumptions underlying the
test are rarely satisfied exactly. More valuable, at least to me, is
some report of the standard deviation in addition to the variance.
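A minimal sketch of this kind of report, in Python (train_and_test is
a hypothetical stand-in for whichever learner and domain are under
study):

import random
import statistics

def evaluate(train_and_test, n_runs=20, seed=0):
    """Run a learning experiment n_runs times and report mean accuracy
    together with its standard deviation.  train_and_test(rng) is
    assumed to draw a fresh random training/test split, train the
    learner, and return test-set accuracy."""
    rng = random.Random(seed)
    accuracies = [train_and_test(rng) for _ in range(n_runs)]
    mean = statistics.mean(accuracies)
    sd = statistics.stdev(accuracies)   # sample standard deviation
    print("accuracy = %.3f +/- %.3f over %d runs" % (mean, sd, n_runs))
    return mean, sd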

> 2. Experimentation on "real-world" data has little scientific value.

I'm stunned! Experimentation on real world data is critical. Because
no inductive learning program can learn more than a small fraction of
all possible concepts, it is crucial to determine what kinds of
concepts are actually "out there" in the world. Suppose someone comes
up to you and says "I have a great algorithm for learning polyhedra
with an odd number of faces, it is immune to noise and it works great
on artificial datasets". What is your first response? Mine is "Do
you have any reason to believe that there are natural domains in which
the concepts to be learned can be represented as polyhedra with an odd
number of faces?" The field is in serious danger right now of
proliferating weird and strange algorithms without testing them to see
if their biases are at all useful in real-world domains.
The day will soon arrive when we should not publish a new inductive
learning algorithm unless it demonstrates performance superior to
existing algorithms in some real-world domain (and preferably in
several different domains). We aren't insisting on this right now,
but papers that include it have an easier time getting accepted.

The problem with artificial domains is that you are making lots of
assumptions about what is important when you construct them. E.g.,
experiments with artificial (uncorrelated) noise showed that ID3 (with a
chi-squared cutoff) works well, but subsequent experimentation in real-world
domains with noise suggests otherwise (e.g., Shavlik et al. at ML-89).
So maybe there isn't much noise of this type in the real world.
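The usual way such artificial noise is injected is by flipping class
labels independently at random; a small Python sketch makes the
assumption explicit (the function name and parameters are illustrative):

import random

def add_class_noise(labels, noise_rate, classes=(0, 1), seed=0):
    """Return a copy of labels in which each label is independently
    replaced, with probability noise_rate, by a class drawn uniformly
    at random.  This is the uncorrelated-noise assumption built into
    many artificial-data experiments; real domains may violate it."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            noisy.append(rng.choice(classes))
        else:
            noisy.append(y)
    return noisy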

Artificial data is not required in order to conduct experiments aimed
at understanding why one algorithm outperforms another. One can also
construct and test modifications of the algorithms.

> 3. Overemphasis on performance measures obscures analysis of why algorithms
> work or fail to work. An algorithm that cannot learn x-or can
> perform very well even if the "correct" hypothesis requires an x-or.
> Many experiments confound the class of situations on which an algorithm
> will fail with how often that situation occurs in a given data set. The
> former is more important to machine learning than the latter.

I agree in part. I don't understand the last sentence, however. It
is equally important to understand (a) those concepts that an
algorithm cannot learn (even approximately) and (b) how often such
concepts arise in real-world domains.

--Tom
----------------------------------------------------------------------
From: Ross Quinlan <munnari!cluster.cs.su.OZ.AU!quinlan@uunet.uu.NET>
Date: Tue, 8 Aug 89 07:54:31 +1000
Subject: Experimentation


I must disagree with Pazzani's comment in MLL 1,5 that experimentation on
real-world data has "little scientific value". If the goal of
learning research is to produce systems that can operate in the
real world, testing on live data tells us how close we are getting
and whether our algorithms are failing to address fundamental
issues of practical learning tasks. (I'm aware that not everyone
would agree with this goal, but a little controversy never hurt.)
For example, if a particular algorithm can't handle XOR, but XOR
turns out to be vanishingly rare in nature, then who cares? But
if the algorithm can't handle disjunctive concepts at all, and these
turn out to be prevalent, then it's a matter of some importance.
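To make the XOR point concrete: a linear threshold unit (perceptron)
can represent the disjunction OR exactly, but no setting of its weights
handles XOR. A small Python sketch (the training loop and example sets
are illustrative, not from any particular system):

def train_perceptron(examples, epochs=100, lr=1.0):
    """Train a linear threshold unit (predict 1 iff w.x + b >= 0) on
    (x, y) pairs with y in {0, 1}; return its final training errors."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b >= 0 else 0
            if pred != y:
                w[0] += lr * (y - pred) * x[0]
                w[1] += lr * (y - pred) * x[1]
                b += lr * (y - pred)
    return sum(1 for x, y in examples
               if (1 if w[0]*x[0] + w[1]*x[1] + b >= 0 else 0) != y)

OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print("errors on OR :", train_perceptron(OR))   # 0: linearly separable
print("errors on XOR:", train_perceptron(XOR))  # > 0: not representable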

I agree in a qualified way with the statement that some experimentation
is done poorly. We all need to improve our act in this regard; but
that's hardly a criticism of experimentation per se.

Finally, experiments are the fundamental mechanism for determining
whether an algorithm works or fails to work. Pazzani's point 3 says that
this is not the end of the story, and I heartily agree.
>3. Overemphasis on performance measures obscures analysis of why algorithms
> work or fail to work.
As Buchanan puts it:
"Many experimental researchers stop after the demonstration step
in the hope that the power of their ideas is now obvious ...
However, the needs of science are not met -- and progress in AI
is impeded -- until there is considerable work on steps 5
(analysis) and 6 (generalization) of the process."

Ross
----------------------------------------------------------------------
Date: Wed, 9 Aug 89 23:09 EST
From: 09-Aug-1989 2307 <LEWIS@cs.umass.EDU>
Subject: Experimentation in ML

Some comments on the last two of Michael Pazzani's concerns (ML-LIST 1.5)
on experimental work in machine learning (re the first concern, one can
hardly argue against the idea that experiments should be clearly thought out):

> 2. Experimentation on "real-world" data has little scientific value....

This implies that we're only interested in understanding machine learning
algorithms in some abstract mathematical sense, not in understanding
machine learning tasks and how to solve them (by hook or crook). If you
want, you can call that the "engineering" part of ML, and not the "science"
part, but it's currently a part engendering great interest. Additionally,
the regularities of some domains (e.g. natural language) are understood far
too poorly to create convincing artificial data. When we do have a sense of
the regularities we're interested in, artificial data can be great (Rendell
& Cho 1988, for example).

> 3. Overemphasis on performance measures obscures analysis of why algorithms
> work or fail to work. An algorithm that cannot learn x-or can
> perform very well even if the "correct" hypothesis requires an x-or.
> Many experiments confound the class of situations on which an algorithm
> will fail with how often that situation occurs in a given data set. The
> former is more important to machine learning than the latter.

Again, this is the engineering vs. theory issue. An empirical demonstration
that an algorithm achieves good performance on a number of real world data
sets, with perhaps loosely understood characteristics, can be as interesting
as a proof of bad worst-case performance on certain classes of data sets.
Both pieces of information are useful. ---Dave

David D. Lewis ph. 413-545-0728
Information Retrieval Laboratory BITNET: lewis@umass
Computer and Information Science (COINS) Dept. ARPA/MIL/CS/INTERnet:
University of Massachusetts, Amherst lewis@cs.umass.edu
Amherst, MA 01003
USA UUCP: ...!uunet!cs.umass.edu!lewis@uunet.uu.net
----------------------------------------------------------------------
Subject: Re: Experimentation
Date: Tue, 08 Aug 89 14:44:32 -0700
From: Michael Pazzani <pazzani@ICS.UCI.EDU>

tgd@cs.orst.edu writes:
>Experiments need not test a particular hypothesis--there are
>exploratory experiments, after all.
>While there may be room for improvement, I have found most published
>experiments to be well-designed and quite valuable. Significance
>testing is of limited utility, since the assumptions underlying the
>test are rarely satisfied exactly. More valuable, at least to me, is
>some report of the standard deviation in addition to the variance.

I'm skeptical of the conclusions that can be drawn from exploratory
experiments. The reason is that the experimenter can choose to
explore the relationships between many dependent and independent
variables. Let's assume that there are 20 such relationships being
compared and we find one very interesting unexpected relationship that
is significant at the .05 level. This is not a solid foundation upon
which to base future findings. However, it is perhaps a sign that
a more rigorous study should be run with a carefully selected hypothesis.
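The arithmetic behind that worry, assuming the 20 comparisons are
independent and all null hypotheses true (a rough sketch):

# Probability of at least one false positive among k independent tests
# run at significance level alpha, when every null hypothesis is true.
alpha, k = 0.05, 20
p_any_false_positive = 1 - (1 - alpha) ** k
print("P(at least one spurious result) = %.2f" % p_any_false_positive)  # ~0.64

# A Bonferroni-style correction tests each comparison at alpha / k instead.
print("Bonferroni per-test level:", alpha / k)  # 0.0025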

I feel that there is room for improvement. Most papers I read
describe experiments such as "We ran algorithm X and algorithm Y on
randomly selected training sets. After every 10 examples, we measured
the accuracy by testing the algorithm on 100 randomly drawn examples".
It is unclear whether algorithms X and Y were run on the same randomly
selected sets or different ones. Different tests (paired and
unpaired) can be used to analyze the significance of the results.
Similarly, different tests are used if the same 100 random examples
are used at each test point than if a new set of random examples is
generated at each test point. I'm not asking just for clarity in the
writing, which varies from paper to paper. If one were to test the
significance of the results, the standard deviation would be
interpreted differently in each case.
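A small Python sketch of the distinction, assuming runs of X and Y that
share the same randomly selected training sets (the accuracy figures
are made up for illustration; scipy supplies the tests):

from scipy import stats

# Accuracies of algorithms X and Y measured after training on the same
# randomly selected training sets (run i of X and run i of Y share a set).
accuracies_x = [0.81, 0.79, 0.84, 0.80, 0.83, 0.78, 0.82, 0.85]
accuracies_y = [0.77, 0.76, 0.82, 0.78, 0.80, 0.75, 0.79, 0.83]

# Paired test: appropriate when the runs share training/test sets,
# since it analyzes the per-run differences.
t_paired, p_paired = stats.ttest_rel(accuracies_x, accuracies_y)

# Unpaired test: appropriate when the runs used independently drawn sets.
t_unpaired, p_unpaired = stats.ttest_ind(accuracies_x, accuracies_y)

print("paired:   t=%.2f p=%.4f" % (t_paired, p_paired))
print("unpaired: t=%.2f p=%.4f" % (t_unpaired, p_unpaired))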

tgd@cs.orst.edu writes
> 2. Experimentation on "real-world" data has little scientific value.

>I'm stunned! Experimentation on real world data is critical. ...
>Suppose someone comes up to you and says "I have a great algorithm for
>learning polyhedra with an odd number of faces, it is immune to noise
>and it works great on artificial datasets". What is your first
>response? Mine is "Do you have any reason to believe that there are
>natural domains in which the concepts to be learned can be represented
>as odd-sided faced polyhedra?"

and munnari!cluster.cs.su.OZ.AU!quinlan@uunet.uu.NET writes
>I must disagree with your comment in MLL 1,5 that experimentation on
>real-world data has "little scientific value". If the goal of
>learning research is to produce systems that can operate in the
>real world, testing on live data tells us how close we are getting
>and whether our algorithms are failing to address fundamental
>issues of practical learning tasks.

I don't think our views are too far apart. The "real world" tells us
which problems are important, and how well we are doing on the
important problems. I was taking a rather narrow view of goals of the
science of machine learning: To gain an understanding of the class of
problems solved by a class of computational processes. You seem to
want to broaden the goals to include: To gain an understanding of the
class of learning problems in the "real world". I am now willing to agree
that this is a noble and worthwhile research topic for machine learning,
and that this goal need not be "demoted" to just an engineering concern.
I'd rather ML have the union of these goals than the intersection. And
I'd like to also include the study of the processes that humans (and
animals) use to learn. To me, the most interesting work is at the
intersection of these three.

tgd@cs.orst.edu writes
>Artificial data is not required in order to conduct experiments aimed
>at understanding why one algorithm outperforms another. One can also
>construct and test modifications of the algorithms.

When someone reports that algorithm X outperforms algorithm Y on the
breast cancer database, the first question that I think of is "Why?".
Too few papers are presenting an insightful analysis of the results of
the experiment. I don't think that we want results such as: Y is better
with modification X at breast cancer and mushrooms, but Y is better
without modification X at thyroid diseases and congressional voting.
It might be interesting if the results were that algorithm A is better
than B on medical diagnosis (cf. Spackman, ML-88). However, my expectation
that the results will be "X is better than Y at linearly separable
concepts, and that Z approximates 2-DNF better than Y when there are
many irrelevant features" led me to stress the artificial experiments
as a tool toward gaining understanding of our algorithms. If the
algorithms were better understood, then by testing them on "real-world"
domains we could find out what sorts of concepts occur naturally.
However, I'm open to other approaches to discovering what's out there.
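A minimal sketch of such an artificial experiment: examples of a fixed
2-DNF target, padded with irrelevant boolean features (the particular
target and feature counts are arbitrary):

import random

def make_2dnf_data(n_examples=200, n_relevant=6, n_irrelevant=20, seed=0):
    """Generate boolean examples labeled by a fixed 2-DNF target:
        (x0 AND NOT x2) OR (x1 AND x4) OR (NOT x3 AND x5)
    The remaining n_irrelevant features carry no information, so the
    data probe how a learner's bias copes with irrelevant attributes."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_examples):
        x = [rng.randint(0, 1) for _ in range(n_relevant + n_irrelevant)]
        y = int((x[0] and not x[2]) or (x[1] and x[4]) or (not x[3] and x[5]))
        data.append((x, y))
    return data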

-Mike
----------------------------------------------------------------------
Date: Sun, 6 Aug 89 15:12:09 PDT
From: Tom Dietterich <tgd@cs.orst.edu>
Subject: Learning from Natural Language


The reason I suggested looking at learning from natural language is
that I think it would help us define interesting new learning tasks.
That is what makes for a good exploratory learning task.

Of course we won't succeed in "learning from natural language".
Indeed, that is still an ill-defined problem. But, by analyzing the
task and looking at specific cases and domains, I think we can define
some problems that CAN be solved. Many of the comments in ML-LIST
have suggested such problems (e.g., knowledge integration within a
given formalism--although that still is not sufficiently defined).

Some of the comments, however, have said "gee, the problems are so
hard, let's look elsewhere." To these, I would respond that it is
important to go exploring in difficult terrain--you learn a lot more
than if you just stay in your comfortable paradigm. It also takes
courage--because it is harder to publish and because you can't
guarantee success at the outset.

To those who say "the hard problems are NLP problems", I have two
responses. (a) I don't believe you--I'll bet there are hard ML
problems too. (b) Exactly where does NLP end and ML begin? The
boundary is very fuzzy, and perhaps both fields would be enriched by
trying to work at their intersection.

Exploration isn't for everyone--and it would be bad for the field if
everyone dropped what they are doing and started exploring NLP/ML.
But I'd like to see 3-5 groups exploring this area. (I know of two.)

--Tom

----------------------------------------------------------------------
From: ml-request@ics.uci.edu
Subject: Notes

ML-LIST 1.7 will be sent in the last week of August (because the moderator
is attending the Cognitive Science and IJCAI conferences). Back issues
of ML-LIST are available by anonymous ftp from ICS.UCI.EDU. The userid
and password are anonymous. After connecting, type
cd ml-list
cd V1
to change to the proper directory and then
get 4
to get ML-LIST 1.4. (FTP commands may vary on different machines.)
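The same retrieval can also be scripted; a sketch using Python's
standard ftplib module, following the host, userid, and directory
names above:

from ftplib import FTP

ftp = FTP("ics.uci.edu")
ftp.login("anonymous", "anonymous")   # userid and password: anonymous
ftp.cwd("ml-list")
ftp.cwd("V1")
with open("ml-list-1.4", "wb") as f:  # fetch issue 4, i.e. ML-LIST 1.4
    ftp.retrbinary("RETR 4", f.write)
ftp.quit()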

If you respond to a message on ML-LIST, please send a copy directly
to the author of the message (if possible). This gives the original
author a chance to respond immediately and avoids the difficult task
of holding a conversation with a week delay between replies.
----------------------------------------------------------------------
END of ML-LIST 1.6
