Machine Learning List: Vol. 2 No. 13
Tuesday, July 17, 1990

Contents:
Chess vs. Structural Design
Summary of Business Meeting at MLC
ML at ECAI
Archive Server for Machine Learning Databases at UCI
Using learning to learn about learning

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in /usr2/spool/ftp/pub/ml-list/V<X>/<N> or N.Z
where X and N are the volume and number of the issue; ID & password: anonymous

------------------------------
Subject: Chess vs. Structural Design
Date: Fri, 06 Jul 90 09:57:51 EDT
From: Yoram.Reich@natasha.mach.cs.cmu.edu

In ML-LIST Vol. 2 No. 12, Prasad Tadepalli raised an important issue about
Chess, on which Tom Dietterich then commented.

The motivation for this post is Tom's comparison between Chess and
structural design. Its aim is to explain why structural design is not, and
never will be, like Chess. This is not to say that nothing learned in Chess
can transfer to structural design.


My comments are based on my own experience as a practitioner and researcher
in structural design over the last 10 years ... but if this is not enough,
I'll use Simon's paper "The structure of ill structured problems" as
additional support.

In this post,
> refers to Prasad's statements and
>> refers to Tom's statements.


> 3(a) Chess has a clean theory that can be stated in a few lines of
> code. None of the real world theories have this property.
>
>>I know of at least 2 domains that have simple, complete, correct
>>domain theories. These are scheduling problems and structural design
>>problems.


The theory of Chess states exactly (1) how to represent states, at least at
the ground level; (2) what the legal moves are; and (3) what the starting and
termination states are. These aspects completely define, without any
uncertainty, the problem space in which one COULD generate a solution. The
only problem is that there is an "immense gap between computability in
principle and practical computability in problem spaces as large as those of
games like chess" (Simon, 1973). The problem in Chess is ONLY due to
computational difficulties. The COULD is capitalized since one might want to
choose another representation to alleviate the computational difficulty, but
in principle the ground representation is a complete problem definition.
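
To make this concrete, the complete ground theory fits in a handful of
declarations. Here is a minimal sketch in Python; the names (State,
legal_moves, is_terminal) and the representation details are mine, purely
for illustration:

    # A minimal problem-space skeleton illustrating why the ground
    # theory of Chess is a complete problem definition. Names and
    # representation details are hypothetical.

    from dataclasses import dataclass
    from typing import Iterator

    @dataclass(frozen=True)
    class State:
        board: tuple       # (1) the ground-level state representation
        to_move: str       # 'white' or 'black'

    def legal_moves(s: State) -> Iterator[State]:
        """(2) Enumerate the successor states permitted by the rules."""
        ...

    def is_terminal(s: State) -> bool:
        """(3) Checkmate, stalemate, or draw by rule."""
        ...

    # Given these three pieces, exhaustive search COULD find the best
    # move; the obstacle is purely computational, never uncertainty
    # about what the pieces mean.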



> This ``theory'' is no more useful for making a move than quantum
> mechanics is useful for making a cup of coffee! One can argue that
> the real physical world has a simply stated theory as well
> -- the Maxwell's equations or some such.


NOT TRUE. It is very tempting to accept your argument, but there is a major
flaw in it: in Chess, there is no uncertainty or ambiguity in the ground
theory. In structural design (slightly more complex than making a cup of
coffee), we don't know for sure how structures behave (not to mention that
quantum mechanics is a statistical theory). If we did, the Tacoma Narrows
Bridge wouldn't have fallen due to aerodynamic instability, the highway in
Oakland wouldn't have collapsed during the last earthquake, and my car air
conditioner wouldn't have stopped working yesterday due to a leak -- that
is, if everything were known and predictable in design.

This is exactly why there are factors of safety in design. This is why
engineering journals are full of papers on probability, fuzziness, etc.,
aimed at dealing with uncertainties.


> 3(b) Chess assumes certainty; the real world is uncertain.

>>In scheduling and structural design, the particular problem instance
>>that must be solved is known with certainty.

See the above comments. In addition:
I don't quite understand Tom's comment stating that the particular problem
instance is known with certainty, but let me try to refute it.

The simplest way to look at design (and this is indeed a simplification) is
as a multi-stage process of:

(1) problem formulation: the understanding process taking place while
receiving the problem specifications;

(2) synthesis: the generation of design alternatives;

(3) analysis: calculating the behavior of the alternatives;

(4) redesign: modifying alternatives to conform to design codes; and

(5) evaluation: choosing, based on subjective criteria such as
aesthetics, the most appealing candidate.

Of course, design usually loops several times through conceptual,
preliminary, and detail phases. Tom's comment can be refuted because none of
these stages is well defined, well specified, or accurate. Specifications of
a problem are always underdefined (which does not contradict the fact that
some aspects may be contradictory), and the problem formulation evolves as
the design process progresses. There is NO theory of systematic synthesis in
structural design. I don't know of a good problem-space, state, or
legal-move representation. Although analysis is perceived as accurate, it is
in fact not: the loads are not known or are probabilistic; the model of the
artifact prepared for analysis makes assumptions that are only
approximations; and the interpretation of results is therefore also only
approximate. Redesign shares some characteristics with synthesis and
analysis; it is also not well defined. Finally, evaluation is subjective.
There is no real termination state except the boss sitting on you.
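
To contrast this with the complete problem definition of Chess above, here
is the same five-stage view as a Python sketch. All names are mine and
purely illustrative; note that every step is a stub, and the only honest
termination test is external:

    # The five-stage design loop, sketched to contrast with Chess: every
    # step below is a stub, because no step has a complete ground theory.
    # All names are hypothetical.

    def formulate(specs): ...                # (1) understanding the specifications
    def synthesize(formulation): ...         # (2) no systematic theory exists
    def analyze(candidate): ...              # (3) approximate loads and models
    def redesign(candidate, analysis): ...   # (4) conform to design codes
    def evaluate(candidates): ...            # (5) subjective (e.g., aesthetics)
    def boss_is_sitting_on_you(): ...        # the only real stopping rule
    def reformulate(formulation, best): ...  # the formulation evolves

    def design(specs):
        formulation = formulate(specs)
        while True:                          # no well-defined termination state
            candidates = synthesize(formulation)
            analyses = [analyze(c) for c in candidates]
            candidates = [redesign(c, a) for c, a in zip(candidates, analyses)]
            best = evaluate(candidates)
            if boss_is_sitting_on_you():
                return best
            formulation = reformulate(formulation, best)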



> 3(e) Chess is too hard. The real world domains are actually easier.

>>This is also somewhat true. There are efficient methods for reasoning
>>about real-valued quantities (e.g., differential equations) that may
>>make it easier to do knowledge compilation in engineering fields than
          ^^^^^^       ^^^^^^^^^ ^^^^^^^^^^^
>>in chess.

Differential equations, matrix algebra, and time series analysis are all
good tools for handling numbers, but ONLY for analysis and partially for
redesign purposes. They have nothing to do with synthesizing real objects.
Even good optimization techniques do not scale up to real problems. As for
the compilation aspect of numeric values, my experience is that it is much
harder than dealing with nominal values.



> and what not to do, and why. In other words the theory of chess
> is as complex, as interesting, and as challenging as that of any
                    ^^^^^^^^^^^         ^^^^^^^^^^^
> other domain like economics or design. No wonder people take
> years of study and playing to become grandmasters.

Interesting and challenging are subjective viewpoints. They probably won't
bring grant money unless you can specify what the impact is. This is why not
ALL of us play chess for a living, although it is interesting and
challenging.

The crucial point is not whether chess is an appropriate domain, but whether
it can help us understand general principles. It is the researcher's bias to
choose a specific domain for his research, but it is his obligation (if he
wishes to get attention) to generalize from his results.

If you care for my own opinion on Chess, I agree with Tom's general view,
although not with his specific comments. I think that Chess is a valuable
domain for learning, especially since it is precisely defined at one level
while most grandmasters perceive it at a higher level. Translating from one
level to the other seems to have an impact on other domains rather than
being idiosyncratic to Chess. To phrase it differently, in the area of
intractable problems, Chess can serve as an excellent "toy" domain.



Yoram Reich
Department of Civil Engineering and
Engineering Design Research Center
Carnegie Mellon University
------------------------------
From: Jeff.Schlimmer@cs.cmu.edu
Date: Fri, 06 Jul 90 10:30:18 EDT
Subject: Summary of Business Meeting at MLC


As for the upcoming meetings, here is the scoop:

------------------------------------------------------------------
What         Where                 When
------------------------------------------------------------------
Workshop     Northwestern, IL      Summer 1991
             (Chair: Larry Birnbaum, birnbaum@ils.nwu.edu)
Conference   Aberdeen, Scotland    Summer 1992
             (Chair: Derek Sleeman, sleeman%abdn.cs@nsfnet-relay.ac.uk)
Conference   U. Mass, Amherst      Summer 1993
             (Chair: Paul Utgoff, utgoff@cs.umass.edu)
------------------------------------------------------------------


For your reference, I thought I'd review the basic points of the
general meeting held at last week's conference, including `straw'
votes taken of those in attendance.

Three issues were discussed at some length: (a) size and expected
functions of a machine learning society, (b) policy on multiple,
concurrent submissions, and (c) policy on submitting workshop
publications to conferences.

MACHINE LEARNING SOCIETY

The appeal of a machine learning society is based on two
primary capabilities: carrying over funds for annual meetings and
negotiating a better subscription rate for the journal. On the first
point, John Laird had previously commented that Michigan was unable to
pass along surplus from the conference in 1988 to subsequent meetings.
Conversely, Bruce Porter mentioned that the current funds at Austin
were designed to be passed along to the institution supporting the
next annual meeting. Previous discussions were inconclusive as to
whether AAAI would be able to perform this function for us. Tom
Dietterich volunteered to get a definitive answer from AAAI.

The second point, negotiating a better subscription rate for the
journal (about 25% discount), is dependent on our guaranteeing a
certain number of subscriptions (based on the membership), and taking
over some of the administrative load (such as maintaining an address
list), but not necessarily on guaranteeing a large number of
subscriptions. However, it appears to be the case that the more
subscriptions we could guarantee, the greater the discount.

A third, supplementary point is that the society might provide an
elected body to make decisions on behalf of researchers interested in
machine learning. Legally, at least a treasurer and one other board
member are required.

Four options regarding the machine learning society were put to a
vote. They are:

- Avoid forming a machine learning society (0 votes);
- Form the minimal society to carry over funds and negotiate
subscription rates (8 votes);
* Form a society that also includes a small elected board to
make decisions for the field as a whole (32 votes); and
- Form a society that includes a large, well-structured board
to make decisions for the field as a whole (1 vote).

POLICY ON MULTIPLE CONCURRENT SUBMISSIONS

Authors sometimes submit the same, or nearly the same, draft to
more than one workshop or conference at the same time. This increases
the likelihood that the submission will be published, but because it
can only be published in one place, multiple submissions also increase
the burden on reviewers and those planning the conference schedule.
Existing policies range from disallowing multiple submissions (e.g.,
IJCAI) to allowing them as long as they are labeled as such (e.g.,
AAAI). In either case, the call for submissions should clearly state
the intended policy. For any given policy, moving the submission and
notification dates may help implement the policy. For example, if
multiple submissions are allowed, making conference submission dates
simultaneous may encourage submission of exact copies and thus ease
reviewing. If multiple submissions are discouraged, setting the
notification date of one conference before the submission date of
another may help minimize duplicates without putting authors in a
bind.

Two policy options were put to a vote. They are similar to those
used by IJCAI and AAAI, respectively:

- Disallow multiple submissions of any kind; those found to be
  highly similar to other submissions in the same time frame
  are rejected (17 votes); and
- Allow multiple submissions so long as they are labeled as
  such (24 votes).
- (The option of being silent on multiple submissions but
  unofficially allowing them is inconsistent.)

This lack of general consensus was taken as an indication that the
next program chairs should select a policy at their discretion.

POLICY ON SUBMITTING WORKSHOP PUBLICATIONS TO CONFERENCES

The differential role of workshop versus conference publications
is relatively unclear, and this leaves authors and reviewers wondering
about the appropriateness of submitting work already published in a
workshop to a conference. In the past, a workshop publication has
varied in status from that of an invited paper to an extended abstract
to a short but reviewed paper. In some of these cases, publication of
a longer version of the research in a conference format may be
appropriate. Moreover, many workshop proceedings have only been
informally distributed, but some have been published through the same
channels normally associated with conferences. This year's program
committee adopted the policy that submissions similar to those
published in last year's workshop would be considered case-by-case on
the basis of significance. Though several policies were discussed,
the assembly did not feel comfortable with voting on options. For
reference, the options were:

- Allow submissions of work previously published in any
workshop;
- Allow submissions of work previously published in workshops
other than machine learning;
- Allow submissions only if not previously published in
workshops; and
- Consider previously appearing submissions case-by-case on
the basis of significance.

The first two policies may also be modified to account for the
potential difference between workshops that do not have distributed
proceedings and those that have published proceedings. Again, this
issue was left open, presumably to be decided by the next program
chairs. Since the next meeting is a workshop, some intent of policy
should appear in the call for submissions indicating whether or not
papers appearing in that workshop can be submitted to other
conferences.
------------------------------
Subject: 9th European Conference on Artificial Intelligence
Date: 6 Jul 90 22:00:42 GMT
Sender: denny@tss.com (Denny Page)

[I have edited this long message about ECAI to highlight the ML talks- Mike]

ECAI90
9th European Conference on Artificial Intelligence
August 6-10, 1990
Stockholm, Sweden


Organizers:
European Coordinating Committee for Artificial Intelligence (ECCAI)
In collaboration with Swedish Artificial Intelligence Society (SAIS)

For information on registration, hotel reservation, social program etc.
ECAI
c/o Stockholm Convention Bureau
P O Box 6911
S-102 39 Stockholm, Sweden
Telephone: +46 8 23 09 90
Telex: 11556 Congres S
Telegram: Congressus
Telefax: +46 8 34 84 41

Technical Program - Wednesday
=============================

WC2 - Machine Learning 1
* Automating the Refinement of Knowledge-Based Systems. 14:30
S. Craw, D. Sleeman
* Learning from Observation in Noisy Environments Via 14:55
Integration of EBL and SBL Techniques.
L. Di Pace, F. Fabrocini
* Heuristic Refinement of Logic Programs. 15:20
M. Aben, M. van Someren
* Knowledge-Intensive Case-Based Reasoning and 15:45
Sustained Learning.
A. Aamodt

Technical Program - Thursday
============================
TC3 - Machine Learning 2
* A Hybrid-Rule-Based/Bayesian Classifier. 16:30
P. Smyth, R. M. Goodman, C. Higgins
* Combining Similarity and Causality in Creative Analogy. 16:55
Y. Kodratoff
* Saturation: Postponing Choices when Inverting Resolution. 17:20
C. Rouveirol
* A Hybrid Genetic Algorithm for a Logic Problem. 17:45
R. Young, A. Reel

Technical Program - Friday
==========================
FC1 - Machine Learning 3
* On Negation and Three-Valued Logic in Interactive 10:30
Concept-Learning.
L. De Raedt, M. Bruynooghe
* Biasing Induction by Using a Domain Theory: An Experimental 10:55
Evaluation.
F. Bergadano, A. Giordana, L. Saitta
* PCS: A Classifier System that Builds a Predictive Internal 11:20
World Model.
P. Spiessens
* Combining EBL from Success and EBL from Failure with 11:45
Parameter Version Spaces.
C. Carpineto
* Estimating Probabilities: A Crucial Task in Machine 12:00
Learning.
B. Cestnik
* General Limitations on Machine Learning. 12:15
A. Hoffman

FD2 - Summary Speeches 2
* Machine Learning. 14:00
P. Brazdil

FC3 - Machine Learning 4
* Analytical Learning of Inductive Inference. 16:30
M. Numao
* Lamarckian Sub-Goal Reward in Genetic Algorithm. 16:55
Y. Davidor
* Genetic Programming: Evolution of a Time Dependent Neural 17:20
Network Module that Teaches a Part of Stick Legs to Walk.
H. de Garis
------------------------------
From: nagel@wintermute.ICS.UCI.EDU (Mark Nagel)
Subject: email archive server on ics.uci.edu
Message-ID: <5007.648144813@wintermute.ics.uci.edu>
Newsgroups: ics.system
Reply-To: nagel@ICS.UCI.EDU
Organization: University of California, Irvine - Dept of ICS
Lines: 19
To: ics.system@wintermute.ICS.UCI.EDU
Date: 16 Jul 90 09:14:08 PDT
Phone: (714) 856-5039

[Note: This archive server may be used to receive via e-mail
the machine learning databases- MP]

An archive server package has been installed on ics that provides
email access to files in our anonymous ftp/uucp area (~ftp). If
people have no other access to our archives, you can tell them to
send mail to:

archive-server@ics.uci.edu

Commands to the server may be given in the body. Some of these
commands are:

help
send <archive> <file>
find <archive> <string>

The help command replies with a useful help message.
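
For example, a request body might look like the following (the archive and
file names here are my guesses, for illustration only; use the help command
to discover the actual layout):

    To: archive-server@ics.uci.edu

    help
    send ml-list V2/13

The server replies by mail with the help text and the requested file.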
------------------------------
Subject: Using learning to learn about learning
Date: Wed, 11 Jul 90 10:02:45 EDT
Message-ID: <7540.647704965@NATASHA.MACH.CS.CMU.EDU>
From: Yoram.Reich@natasha.mach.cs.cmu.edu



Over the last decade, a vast amount of work in machine learning has produced
many systems, algorithms, and techniques for concept learning. Many of these
techniques are modifications, enhancements, variations, or (hopefully not
even one) duplications of existing work. Take, for example, this arbitrary
list of 20 programs: Focussing, HYDRA, ID4, GID3, AGAPE, AQ11, ML-SMART,
HILLARY, STAGGER, OTIS, CHARADE, L-ML, BEAGLE, CONFUCIUS, GEM, VERSEL, LAIR,
DISCIPLE, METAXA, NEDDIE (which is only a small portion of the set of
techniques), and tell me which does what and what the underlying
differences between them are. Moreover, tell me when I should use each of
the systems and why.

There have been several pieces of work that tried to compare learning
techniques, for example (Bundy, Silver, & Plummer, 1985; AI Journal). Many
recent papers have also compared some techniques to others, but these
comparisons are scattered across many different papers.

What is missing is a paper or survey such as the one the knowledge
acquisition people have produced: (Boose, 1989) in Knowledge Acquisition,
vol. 1, titled "A survey of knowledge acquisition techniques." The
interesting aspect is not the survey itself, which is very important, but
the technique of producing it. Apparently, knowledge acquisition researchers
BELIEVE in their own tools to the extent that they extract knowledge about
their own domain with a knowledge acquisition system, in this case AQUINAS.
I think it is about time the machine learning community did the same.

There are several steps to conduct such a survey:
(1) Try to generate a comprehensive list of dimensions along which a
learning system should be described (a suggestion is appended at the
end of this message).
(2) Post the list to the community and correct it based on criticism.
This stage relies on contributions from the researchers.
A good mechanism to see if the list is appropriate is to explain
your systems with it. If important aspects are missing, the list is
incomplete.
(3) Post the corrected list with some examples of systems and invite
researchers to input the data on their own programs. Researchers with
experience on other techniques will be invited to input their
comments as well.
(4) Collect the input, assemble it in the form of a database (see the
    sketch after this list), and ship it to the UCI collection.
(5) At this stage, any researcher would be able to use the database with
    his or her own program, discover clusters, generate concepts, etc. This
    will not be just another REAL-WORLD domain; it can direct people
    to valuable references, gaps in our research, etc.
(6) Besides the effort that each individual researcher would be able to
    make, a central effort of analyzing the database will be valuable.
    Such a study can be conducted with various clustering techniques.
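
As a minimal sketch of what one record in such a database might look like
(in Python; the property names follow the dimension list appended below,
and all values are illustrative placeholders, not claims about any real
system):

    # One hypothetical record in the proposed database of learning
    # systems, keyed by the dimensions listed at the end of this message.

    record = {
        "name":                    "EXAMPLE-1",         # placeholder system
        "function":                "concept learning",  # nominal
        "handles noise":           True,                # boolean
        "implementation language": "Lisp",              # nominal
        "first published":         1986,                # continuous
        "teacher intervention":    3,                   # ordinal: supervised
        "presentation":            3,                   # ordinal: incremental
        "example descriptors":     {"boolean", "continuous"},  # set type
        "concept representation":  "decision tree",     # nominal
        "performance on standard databases": [          # list-set of tuples:
            ("Soybean", 92.0, "10-fold cross-validation"),  # (database,
        ],                                              # % accuracy, experiment)
    }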

If we are not prepared to learn about our own domain using our own tools,
how can we convince anybody else that machine learning has any value?

As a first pass, I suggest viewing the appended list of dimensions as step
(1). I volunteer to collect the comments and post the corrected list. Also,
I would like to limit the survey mainly to SBL techniques (SBL techniques
that make use of a domain theory are also fine).

It's time to go to work!

- Yoram

yoram@cs.cmu.edu



List of dimensions
-----------------------------------------------------------------------------
(additions, alterations, or any other corrections are welcome):
Note that in the current format no learning program can operate on the raw
data, but this can change, or the data can be converted intelligibly.

-----------------------------------------------------------------------------
property name             type          possible values
-----------------------------------------------------------------------------
********** general characteristics **********
-----------------------------------------------------------------------------
function                  nominal       concept learning
                                        concept formation (clustering)

handles noise             boolean       yes
                                        no

performance               nominal       only learning; learned knowledge
                                        should be converted before use
                                        integrated with learning
                                        ...

implementation language   nominal       Lisp
                                        Prolog
                                        ...
                                        Assembly (... but the work is
                                        not done yet)

first published           continuous    (so we'll know who should get the
                                        credit ...)
-----------------------------------------------------------------------------
********** learning protocol **********
-----------------------------------------------------------------------------
teacher intervention      ordinal       (1) unsupervised
                                        (2) asking questions
                                        (3) supervised

presentation              ordinal       1 (batch)
                                        2 (with deferred learning, e.g.,
                                        Lebowitz, ML W, 1988)
                                        3 (incremental)

example classification    nominal (?)   none necessary
                                        positive only
                                        positive + negative

-----------------------------------------------------------------------------
********** bias **********
-----------------------------------------------------------------------------
example descriptors       set (*)       boolean
                                        structured
                                        continuous
                                        ordinal
                                        ...

use background know.      boolean       yes
                                        no

type of knowledge         set           description-language (the list of
                                        properties and their values)
                                        domain axioms

concept representation    nominal       CNF
                                        DNF
                                        probabilistic hierarchy
                                        symbolic + numeric weights
                                        decision tree
                                        production rules
                                        ...

-----------------------------------------------------------------------------
********** internal mechanisms **********
-----------------------------------------------------------------------------
learning operators (**)   set

save examples             ordinal       none, just the concept description
                                        partial set
                                        all

time complexity           ?             ...

-----------------------------------------------------------------------------
********** performance **********
-----------------------------------------------------------------------------
performance on standard   list-set (***) Soybean, % accuracy, type of
databases                               experiment
                                        Breast cancer, % accuracy, ...
                                        ...

theoretical results       list-set      exponential time to learn DNF
                                        ...

-----------------------------------------------------------------------------
********** source of power **********
-----------------------------------------------------------------------------
good                      list          a partial set of the previous
                                        properties that impact good
                                        performance the most (****)

bad                       list          similar for the bad aspects



-----------------------------------------------------------------------------
(*) A set type is a nominal type that can take several values in the
description of a single example.
(**) This might be a property that characterizes families of programs. For
example, concept learning programs can have the following operators:
generalization, specialization (triggered by any negative example or
just by a near miss), contradiction handling (e.g., Focussing can alter
description trees), dropping a property, adding a property, creating
a property, replacing a constant by a variable, creating a range, adding
a disjunct, splitting a disjunct, etc.

(***) List-set is a property with several values where each value contains a
tuple (database name, % accuracy, type of experiment)


(****) This is a tough section. The best information should not be in the
form of a list of properties; rather, for each property,
give a simple explanation in the form of a chain of facts, for example:
(1) probabilistic concept representation ->
handles imprecise concepts ->
handles noise;
(2) saving examples -> ability to handle contradictions (by adding a
disjunct); or
(3) incremental mode -> exhibits order effects in performance quality.
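
As a toy illustration of one operator from footnote (**) -- replacing a
constant by a variable -- here is a Python sketch; the representation and
names are mine, not taken from any of the systems above:

    from itertools import count

    def generalize(lit_a, lit_b):
        """Generalize two literals with the same predicate by replacing
        each pair of disagreeing constants with a fresh variable."""
        pred_a, args_a = lit_a
        pred_b, args_b = lit_b
        assert pred_a == pred_b and len(args_a) == len(args_b)
        fresh = (f"?X{i}" for i in count(1))
        args = [a if a == b else next(fresh)
                for a, b in zip(args_a, args_b)]
        return (pred_a, args)

    # generalize(("on", ["block1", "table"]), ("on", ["block2", "table"]))
    # -> ("on", ["?X1", "table"])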

------------------------------
END of ML-LIST 2.13
