Machine Learning List: Vol. 2 No. 12
Thursday, July 5, 1990

Contents:
Classification Society of North America
Discovery in databases
Chess
Request for Summary of Business Meeting at MLC

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in /usr2/spool/ftp/pub/ml-list/V<X>/<N> or <N>.Z,
where X and N are the volume and number of the issue; ID & password: anonymous.
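
For those who prefer to script the retrieval, here is a minimal sketch using
Python's ftplib; the host, directory, and anonymous login are those given
above, while the volume and issue numbers are only an example (and the
compressed <N>.Z variant is not handled):

# Minimal sketch: fetch a back issue of ML-LIST by anonymous FTP.
# The volume/issue numbers are an example only; the issue may also be
# stored as the compressed <N>.Z variant, which this sketch does not handle.
from ftplib import FTP

volume, number = 2, 11                       # e.g., Vol. 2 No. 11
ftp = FTP("ics.uci.edu")                     # host named in the header above
ftp.login("anonymous", "anonymous")          # ID & password: anonymous
ftp.cwd("/usr2/spool/ftp/pub/ml-list/V%d" % volume)
with open("ml-list-%d.%d" % (volume, number), "wb") as out:
    ftp.retrbinary("RETR %d" % number, out.write)
ftp.quit()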

------------------------------
Date: Thu, 28 Jun 90 15:13:11 EDT
From: dubes@pixel.cps.msu.edu
Subject: Classification Society of North America

Dear Colleague:

The Classification Society of North America (CSNA) is a nonprofit
interdisciplinary organization whose purposes are to promote the
scientific study of classification and clustering (including
systematic methods of creating classifications from data), and to
disseminate scientific and educational information related to its
fields of interest. CSNA attracts researchers from many fields; two-thirds
of its members list psychology, statistics, computer science, biology,
business applications, education, engineering, mathematics, or sociology
as their specialties. CSNA provides its members with a forum in which
to discuss theories or methodologies relating to classification or
clustering problems irrespective of the particular fields in which
such problems arise.

CSNA's activities include publications and meetings. The Journal of
Classification publishes original and valuable papers on the theory
and methodology of classification. The Classification Literature
Automated Search Service publishes annually a bibliography and
indices of journal papers on classification. A recent volume of the
Service described 1141 papers from 553 journals -- an impressive
indication of how difficult it is to keep abreast of the
classificatory literature. Newsletters containing current announcements,
capsule summaries of research topics, meeting announcements, and current
book information are mailed five times a year. All publications are
included in return for 1990 annual dues of US $45.00.

CSNA's annual scientific meeting encourages and emphasizes
communication of current research in all areas of classification and
among all disciplines. The 1990 meeting was held at Utah State University,
June 21-23, 1990. CSNA regularly cooperates with other
learned societies to sponsor international meetings concerning
classification and related topics. In 1991, for example, CSNA is
participating in the Third Conference of the International Federation of
Classification Societies at Edinburgh, Scotland.

I invite you to consider becoming a member. Please contact either
of us for more information or for a membership application.

Shizuhiko Nishisato
Chair, Membership Committee, CSNA
The Ontario Inst. for Studies in Educ.
252 Bloor St. West
Toronto, Ontario M5S 1V6
Canada
S_NISHIS@UTOROISE.BITNET

Richard C. Dubes
President, CSNA
Computer Science Dept.
Michigan State Univ.
East Lansing, MI 48824
U.S.A.
dubes@cpswh.cps.msu.edu
fax: (517) 336-1061
------------------------------
Date: Tue, 3 Jul 90 14:58:56 -0400
From: Gregory Piatetsky-Shapiro <gps0@gte.COM>
Subject: Discovery in Databases

-------- DISCOVERY in DATABASES -- MYTH or REALITY? --------
An invitation to attend a lively (maybe) and controversial (hopefully)
panel discussion with audience participation.

Panel members: Brian Gaines, George Grinstein, Mary McLeish,
Gregory Piatetsky-Shapiro, and Jan Zytkow.
Place: At AAAI-90 in Boston, Hynes Auditorium, Room 208
Time: August 2, 1990, 10:30 AM - 12:30 PM

Computers have promised us a fountain of wisdom, but delivered a flood of
information. The rapid growth of databases creates both a need and an
opportunity for tools and methods for extracting knowledge from data.
The IJCAI-89 workshop on Knowledge Discovery in Databases brought together
many of the leading researchers in the areas of machine learning, intelligent
databases, and knowledge acquisition.

This year we are planning an informal meeting during AAAI-90, with panel
discussion and audience interaction.
Potential Topics of Discussion include:

Visual and Perceptual ways of Discovery in Data (with a demo)
Use and Re-use of Domain Knowledge in Discovery
Guidelines for Selecting an Application for Discovery in Data
Propriety of Discovery in Social and Demographic Databases
How Discovery is different from Learning

This meeting conveniently falls between two sessions on Machine Learning at
AAAI-90. The room will fit about 100 people.

------------------------------
Date: Mon, 2 Jul 90 17:31:01 PDT
From: Prasad Tadepalli <tadepall@turing.cs.orst.edu>
Subject: chess as a domain for ML


Is Chess a ``bad domain'' for ML research?

In the most recent machine learning conference, Bruce Porter
made the comment that chess is a bad domain for ML research.
Porter must have meant it as a sort of tongue-in-cheek statement,
as he was careful to preface it with the adjective ``controversial.''
Since his statement generated so little controversy, it appears
that many researchers do believe it, at least at some unconscious
level. As a person who did his thesis in this domain, I find the trend
somewhat disturbing, and I believe it is time to make explicit
some of the biases and explode some of the myths that surround chess
as a domain. I welcome any criticism and/or debugging of my views.

It is not clear what exactly people mean when they complain about
chess. Let us consider some of the possible meanings.

Meaning #1: It is essentially a solved problem.

Response: Well, it is solved, if ``solved'' means that there is
a program that plays very good chess. But for us, who are in the
business of learning, the problem is not to play chess, but to learn
to play chess. Not only does this problem remain unsolved; if I might
hazard a guess, none of the existing techniques, including empirical
and/or explanation-based approaches, comes even close! EBL doesn't work
well because of the intractable theory problem and the utility problem.
And empirical learning doesn't work well without careful engineering
of the feature language, even in small subdomains like rook-knight
endings.
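
To make the feature-engineering point concrete, here is a small hypothetical
sketch (my own illustration, not taken from any published system) of the kind
of hand-crafted feature language an empirical learner needs even for a
king-and-rook ending:

# Hypothetical illustration of a hand-engineered feature language for a
# king-and-rook vs. king ending; the specific features are my own examples,
# not taken from any published system.  Squares are (file, rank) pairs, 0..7.
def krk_features(white_king, white_rook, black_king):
    def king_dist(a, b):                     # Chebyshev (king-move) distance
        return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

    return {
        "king_distance": king_dist(white_king, black_king),
        "rook_on_king_line": (white_rook[0] == black_king[0]
                              or white_rook[1] == black_king[1]),
        "black_king_on_edge": (black_king[0] in (0, 7)
                               or black_king[1] in (0, 7)),
        "kings_in_opposition": (king_dist(white_king, black_king) == 2
                                and (white_king[0] == black_king[0]
                                     or white_king[1] == black_king[1])),
    }

# A raw 64-square encoding carries the same information, but without
# features like these an inducer rarely recovers the underlying concepts.
print(krk_features((4, 5), (0, 6), (4, 7)))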

Meaning #2: There is no economic impact of solving this problem.

Response: To evaluate the economic impact of a piece of basic
research is a non-trivial undertaking, to say the least. It is my
opinion that a number of basic algorithms and techniques originated
from careful study of the so-called ``toy domains.'' Various search
techniques, from alpha-beta to A*, first originated in puzzles and games.
How can one confidently estimate the economic impact of
EBL or A*? Who is to say that there aren't any such fundamental
techniques waiting to be uncovered by studying chess? If history
is any guide, this is already obvious. A lot of early work
on empirical learning by Quinlan, Michie et al. was done in the
domain of chess. Nick Flann's work on treating universal quantification
in EBL is a fundamental technique that can be used in any domain.
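
As a reminder of how compactly these game-derived techniques can be stated,
here is a minimal textbook-style alpha-beta sketch; it is generic rather than
taken from any particular program, and the children and evaluate procedures
are assumed to be supplied by the caller:

# Minimal textbook alpha-beta sketch; generic, not tied to any particular
# chess program.  children(state) and evaluate(state) are supplied by the
# caller and are assumptions of this illustration.
def alpha_beta(state, depth, alpha, beta, maximizing, children, evaluate):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in moves:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:                # cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in moves:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                          True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:                # cutoff
                break
        return value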

Meaning #3: It is so impoverished a domain that none of the real problems
that need to be solved in AI research occur in chess.

This meaning again takes several forms:

3(a) Chess has a clean theory that can be stated in a few lines of
code. None of the real world theories have this property.

Yes and No. Yes because the rules of chess can be
compactly described and in some sense can be treated as the
``theory'' of chess. But this is not a reasonable interpretation
of a ``theory.'' [I plead guilty to being partly responsible for
propagating this interpretation of ``theory.'' It made sense for
the problem I wanted to solve, but it is not a useful notion
otherwise.]

This ``theory'' is no more useful for making a move than quantum
mechanics is useful for making a cup of coffee! One can argue that
the real physical world has a simply stated theory as well
-- Maxwell's equations or some such. If you take a look at
a chess book, you will find that very few pages are devoted to
explaining the rules. The major portion of the book is used to
explain strategies, common errors, tactics, what to do
and what not to do, and why. In other words, the theory of chess
is as complex, as interesting, and as challenging as that of any
other domain like economics or design. No wonder people take
years of study and playing to become grandmasters. The fact that
chess operates on simple rules shouldn't blind us to its
complexity! In fact, a good challenge problem for CYC is to take a
page in a typical chess book, represent it, and draw the
inferences drawn in that page!

3(b) Chess assumes certainty; the real world is uncertain.

There is again a conceptual confusion here. Chess is certain
in the sense that you can clearly see what piece is where.
But that is not what grandmasters see (ref. DeGroot). They
see ``threats,'' ``open lines,'' ``possible attacks,''
``chunks of pieces,'' ``controlled centers,'' ``blocked
paths,'' ``safe regions,'' etc. ad infinitum. It is by now
widely believed that ``perception'' is at the heart of good
play of chess. If that is true, chess is as certain as
perception is! The point is that a scene which is
``certain'' at one level can be ``uncertain'' at a higher
level of abstraction. If it still looks like low level
features somehow ``determine'' the high level features,
ponder this question: how does a chess player determine that
his king is safe? Is there a chance for a player to be
surprised about his perception of safety?

3(c) Chess theory is correct and complete. None of the real world
theories are.

Again, this is only true if you interpret the rules of chess as
its ``theory.'' If you take a more realistic definition -- what
you find in a chess book, say -- it has all the properties of a
``real world'' theory. It is incomplete, incorrect, inconsistent,
and intractable.

3(d) It is completely artificial and does not share the properties of
``natural theories.''

Sorry, but this isn't true either. Chess problem-solving routinely
uses complex real-world concepts -- like going around
barricades, reaching a point faster than the enemy, travelling by
the shortest route, accomplishing multiple goals, preventing the
opponent from preventing our own success, and so on.

3(e) Chess is too hard. The real world domains are actually easier.

I sympathize with this view to some extent. But I don't believe
that we can get human-like intelligence without understanding
chess. This is a problem that must be solved. It is challenging
and interesting. We may not be able to solve all aspects of
chess -- from chess learning to perception of chess -- in our
lifetime. But we can keep making gradual progress, trying to apply the
lessons of chess in other domains. This can only be done if
people take a critical, rational view and are not carried away by
its superficial simplicity.

3(f) The solutions people find for chess problems are not easily
transferable to other domains.

SOME solutions are not transferable to SOME other domains.
Certainly, I can't see how DEEP-THOUGHT's principles can be
applied to economic forecasting. The trick in using chess for
AI research -- as opposed to building chess machines -- is
exactly this: posing the problem in such a way that its solution
has the potential to transfer to other domains.

3(g) Nobody is interested in chess.

Yes! But why?

------------------------------------------

Prasad Tadepalli
Department of Computer Science
Oregon State University

------------------------------
Date: Tue, 3 Jul 90 11:19:49 PDT
From: Tom Dietterich <tgd@turing.cs.orst.edu>
Subject: chess as an AI domain

It seems to me that we must ask, "For what research questions is chess
a good domain or a bad domain?" Without addressing particular
research questions, we can't decide.

I think chess can be a good domain for studying knowledge compilation
tasks. As in many engineering tasks, a complete and correct,
but extremely intractable, domain theory is available for chess. The
learning problem is to convert this domain knowledge into a form that
gives efficient performance of the highest possible quality (ideally,
efficient and correct performance). With this goal in mind, let us
consider the various points you made:

Meaning #1: It is essentially a solved problem.

The knowledge compilation problem for chess has hardly even been
attempted, let alone solved.

Meaning #2: There is no economic impact of solving this problem.

This is true, but IF the methods developed in chess generalize, then
there would be tremendous payoff.

Meaning #3: It is so impoverished a domain that none of the real problems
that need to be solved in AI research occur in chess.

3(a) Chess has a clean theory that can be stated in a few lines of
code. None of the real world theories have this property.

I know of at least two domains that have simple, complete, correct
domain theories. These are scheduling problems and structural design
problems.

3(b) Chess assumes certainty; the real world is uncertain.

In scheduling and structural design, the particular problem instance
that must be solved is known with certainty.

3(c) Chess theory is correct and complete. None of the real world
theories are.

Same as 3(a)


3(d) It is completely artificial and does not share the properties of
``natural theories.''

This is somewhat true. Chess and other games explore problems that
arise from geometric permutations. They do not, for example, require
reasoning about real-valued attributes of objects. Most objects in
engineering domains have such attributes, so this increases the risk
that methods developed in chess will not generalize to other domains.

3(e) Chess is too hard. The real world domains are actually easier.

This is also somewhat true. There are efficient methods for reasoning
about real-valued quantities (e.g., differential equations) that may
make it easier to do knowledge compilation in engineering fields than
in chess.

3(f) The solutions people find for chess problems are not easily
transferable to other domains.

This is the key question that we have already addressed.

3(g) Nobody is interested in chess.

Because it is a game.


One important advantage of chess is that it is a well-known,
non-proprietary domain. Audiences are familiar with it, so it is
easier to present talks and conduct research on chess than on some
other proprietary or classified domains.


--Tom

------------------------------
Subject: Request for Summary of Business Meeting at MLC
Date: Thu, 05 Jul 90 13:20:08 -0700
From: Michael Pazzani <pazzani@ICS.UCI.EDU>
Message-ID: <9007051320.aa29736@ICS.UCI.EDU>

Will someone who attended the business meeting at the machine learning
conference be kind enough to post a summary? The only thing I recall
with certainty is that the next workshop will be held at Northwestern
University. A call for topics will be issued shortly.

Mike
------------------------------

END of ML-LIST 2.12

