Machine Learning List: Vol. 5, No. 1
Monday, January 4, 1993

Contents:
IJCAI-93 Workshop on Machine Learning and Knowledge Acquisition
Minimum Description Length & Transformations in Machine Learning

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>

----------------------------------------------------------------------

Date: Tue, 29 Dec 92 13:51:08 EST
From: Gheorghe Tecuci <tecuci@aic.gmu.EDU>
Subject: IJCAI-93 Workshop on Machine Learning and Knowledge Acquisition


CALL FOR PAPERS

IJCAI-93 WORKSHOP
MACHINE LEARNING AND KNOWLEDGE ACQUISITION:
Common Issues, Contrasting Methods, and Integrated Approaches

29 August 1993, Chambery, France

Machine learning and knowledge acquisition share the common goal
of acquiring and organizing the knowledge of a knowledge-based
system. However, each field has a different focus, and most
research in each is still done in isolation from the other. The focus of
knowledge acquisition has been to improve and partially automate
the acquisition of knowledge from human experts. In contrast,
machine learning focuses on mostly autonomous algorithms for
acquiring or improving the organization of knowledge, often in
simple prototype domains. Also, in knowledge acquisition, the
acquired knowledge is directly validated by the expert who
expresses it, while in machine learning, the acquired knowledge
requires experimental validation on data sets independent of those
on which learning took place. As machine learning moves to more
'real' domains, and knowledge acquisition attempts to automate
more of the acquisition process, the two fields increasingly find
themselves investigating common issues with complementary methods.
However, the lack of common research methodologies, terminology, and
underlying assumptions often hinders close collaboration.
The purpose of this workshop is to bring together machine
learning and knowledge acquisition researchers in order to
facilitate cross-fertilization and collaboration, and to promote
integrated approaches which could take advantage of the
complementary nature of machine learning and knowledge
acquisition.

Topics of interest include, but are not limited to, the following:
Case Studies
  Case studies of integrated ML/KA methods, with analysis of
  successes and failures; integrated architectures for ML and KA;
  interactive learning systems; automated knowledge acquisition
  systems.
Comparative Studies
  Comparative studies of KA and ML methods solving similar
  problems (e.g., knowledge base refinement methods in KA versus
  theory revision methods in ML, constructive induction in ML
  versus knowledge elicitation in KA). Analysis of the
  complementarity of the KA and ML approaches to knowledge base
  construction (e.g., KA primarily addresses the problems of KB
  elicitation and refinement, while ML primarily addresses issues
  of KB refinement and optimization).
Hard Problems
  Analysis of hard problems in KA or ML that could be simplified
  by employing techniques from the other area, as well as
  presentation of specific solutions (e.g., the problem of new
  terms in ML could be simplified by employing knowledge
  elicitation techniques developed in KA; the credit/blame
  assignment problem in ML could be simplified by employing
  knowledge refinement techniques developed in KA; KA of
  problem-solving rules could be automated by using
  apprenticeship learning techniques).
Knowledge Representation
  Knowledge representation issues in KA and ML (adequate
  representations for KA, adequate representations for ML,
  approaches to knowledge representation in integrated ML/KA
  systems such as translation between representations, common
  representations, etc.).
Key Issues
  Key issues in ML or KA (e.g., dynamic versus static knowledge
  acquisition or learning, the role of explanations in KA and ML,
  the validation of knowledge in KA and ML).
Overviews
  Overviews of the state of the art in ML, in KA, or in the
  integration of the two.
Position Papers
  Position papers on methodology for integrated ML/KA systems or
  on improving collaboration between the ML and KA communities.

It is recommended that the papers make explicit the research
methodology, the underlying assumptions, definitions of technical
terms, important future issues, and potential points of
collaboration. Papers should not exceed 15 pages. The organizers
intend to publish a selection of the accepted papers as a book or
a special issue of a journal, and encourage authors to take this
into account while preparing their papers.
The format of the workshop will be paper sessions with discussion
at the end of each session, and a concluding panel on
integrated approaches, guidelines for successful collaboration,
and concrete action items. The number of participants in the
workshop is limited to 40.
Each workshop attendee must also register for the IJCAI conference
and must pay an additional 300 FF (about $60) fee for the workshop.
One student attendee, who will be in charge of taking notes, will
be exempted from the additional 300 FF fee; volunteers for this
role are invited.

WORKSHOP Co-CHAIRS

Smadar Kedar, NASA Ames & Institute for the Learning Sciences
  (kedar@ils.nwu.edu)
Yves Kodratoff, CNRS & Universite de Paris-Sud
  (yk@lri.lri.fr)
Gheorghe Tecuci, George Mason University & Romanian Academy
  (tecuci@aic.gmu.edu)

PROGRAM COMMITTEE

Ray Bareiss, Institute for the Learning Sciences
Catherine Baudin, NASA Ames
Guy Boy, European Inst. of Cognitive Sciences and Eng.
Brian Gaines, University of Calgary
Matjaz Gams, Jozef Stefan Institute
Jean-Gabriel Ganascia, Univ. Pierre and Marie Curie
Nathalie Mathe, European Space Agency and NASA Ames
Ryszard Michalski, George Mason University
Raymond Mooney, University of Texas at Austin
Katharina Morik, Dortmund University
Mark Musen, Stanford University
Michael Pazzani, Univ. of California at Irvine
Luc De Raedt, Catholic University of Leuven
Alan Schultz, Naval Research Laboratory
Mildred Shaw, University of Calgary
Maarten van Someren, University of Amsterdam
Walter Van de Velde, University of Brussels

ADDRESS FOR CORRESPONDENCE

Gheorghe Tecuci
Artificial Intelligence Center, Computer Science Department
George Mason University, 4400 University Dr., Fairfax, VA 22030
email: mlka93@aic.gmu.edu, fax: (703)993-3729

SUBMISSIONS

Four copies of each paper (five to fifteen pages in length) should
arrive at the above address by March 31, 1993.
Notification of acceptance or rejection will be sent by May 10.
Final papers should arrive by June 10, 1993.

Those who would like to attend without a presentation should send
a one to two-page description of relevant research interests and a
list of selected publications.

------------------------------

Subject: Minimum Description Length & Transformations in Machine Learning
From: aboulang@bbn.COM
Date: Sat, 2 Jan 93 19:00:10 EST

Minimum Description Length & Transformations in Machine Learning

Or, Is there a Principle of Least Action for Machine Learning?

In this short note I want to posit that MDL-like methodologies will
become the unifying "Least Action Principles" of machine learning.
Furthermore, machine learning architectures will evolve to include a
fundamental capability for doing coordinate transformations, and this
capability will be intimately tied to the use of MDL-like
methodologies in machine learning.

By MDL-like methodologies I mean the use of information-theoretic
metrics on the results of any machine learning algorithm in its
generalization phase. Such a metric is used as a decision criterion
for overtraining, by comparing the MDL-like metric of the results of
the machine learning algorithm against that of the data itself.
MDL-like methodologies are applicable to supervised and unsupervised
learning. By the term "MDL-like" I mean to indicate that there is an
applicable body of work in this area -- including the work of
Wallace, Akaike, and Rissanen. It is possible to use MDL-like metrics
in the generation phase as well.
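
To make this concrete, here is a minimal sketch (in present-day
Python/NumPy; the particular two-part code length follows Rissanen's
formulation, and the toy data and degree range are my own choices) of
an MDL-like metric used as a decision criterion for overtraining:

    # Two-part MDL sketch: DL(model) ~ (n/2)log2(RSS/n) + (k/2)log2(n).
    # The first term codes the residuals, the second the k parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 60)
    y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.size)  # toy data

    def description_length(degree):
        """Two-part code length of a polynomial fit (constants dropped)."""
        coeffs = np.polyfit(x, y, degree)
        rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
        n, k = x.size, degree + 1
        return 0.5 * n * np.log2(rss / n) + 0.5 * k * np.log2(n)

    # Raising the degree keeps shrinking RSS, but the parameter cost
    # grows; the minimum marks where fitting turns into overtraining.
    print("degree chosen by MDL:", min(range(1, 10), key=description_length))

Comparing the winning model's code length against the cost of coding
the raw data directly decides, in the same currency, whether the
learned model is worth keeping at all.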

Transformations and Machine Learning

Many paradigmatic problems in machine learning become
"embarrassingly" simple under straightforward coordinate
transformations. For instance, the two spirals problem becomes two
simple lines under a polar coordinate transformation. Much of the
activity of a physicist consists of examining appropriate
coordinate-system hostings of a problem in order to exploit its
symmetries. I posit that at least one phase of any machine learning
system should include a search for an appropriate coordinate-system
hosting.
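
To see the two-spirals case concretely, here is a small sketch
(Python/NumPy; the construction is mine, not part of the original
note): after rehosting in polar coordinates, r - theta is constant
within each class, so the two interleaved spirals become two parallel
lines.

    import numpy as np

    t = np.linspace(np.pi / 4, 4 * np.pi, 200)             # spiral parameter
    x0, y0 = t * np.cos(t), t * np.sin(t)                  # class 0
    x1, y1 = t * np.cos(t + np.pi), t * np.sin(t + np.pi)  # class 1 (rotated)

    def to_polar(x, y):
        r = np.hypot(x, y)
        theta = np.unwrap(np.arctan2(y, x))  # continuous (unwrapped) angle
        return theta, r

    # Each spiral satisfies r = theta + const, so r - theta is constant
    # per class: two parallel lines, trivially separable.
    for label, (x, y) in enumerate([(x0, y0), (x1, y1)]):
        theta, r = to_polar(x, y)
        print(f"class {label}: r - theta = {np.mean(r - theta):+.3f}"
              f" (spread {np.ptp(r - theta):.3f})")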

These transformations come in many different colors. For example,
temporal differencing is a relativising transformation in time
coordinates. Another example is the growing use of wavelets for
time-frequency features.
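
A minimal sketch of both examples (mine, on toy data): differencing
rehosts a series in relative time coordinates, while one Haar wavelet
step rehosts it in joint time-frequency coordinates.

    import numpy as np

    s = np.cumsum(np.random.default_rng(2).standard_normal(8))  # toy series

    diffs = np.diff(s)  # relativising transform: s[t+1] - s[t]

    # One level of the (unnormalized) Haar wavelet transform: pairwise
    # averages carry the coarse content, pairwise differences the detail.
    pairs = s.reshape(-1, 2)
    approx = pairs.mean(axis=1)
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
    print(diffs, approx, detail, sep="\n")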

A significant contributor to the complexity of the description of a
problem is its chosen coordinate-system hosting. Coordinate
transformations can be of two types: local and global. An example of a
global transformation is the aforementioned polar hosting for the two
spirals problem. The Fukushima network makes use of local
transformations for robust pattern recognition. MDL can be used as
the selection criterion in the transformation search.
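
Here is a toy version of such a search (my sketch, reusing the
two-part code length from the first sketch): fit the simplest model
in each candidate hosting and keep the hosting that codes the data
most compactly.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(np.pi / 4, 4 * np.pi, 200)
    x = t * np.cos(t) + 0.05 * rng.standard_normal(t.size)  # noisy spiral arm
    y = t * np.sin(t) + 0.05 * rng.standard_normal(t.size)

    def dl_linear(u, v):
        """Two-part code length of a line v = a*u + b (constants dropped)."""
        a, b = np.polyfit(u, v, 1)
        rss = np.sum((a * u + b - v) ** 2) + 1e-12  # guard against log2(0)
        n, k = u.size, 2
        return 0.5 * n * np.log2(rss / n) + 0.5 * k * np.log2(n)

    hostings = {
        "cartesian": (x, y),                                     # y vs x
        "polar": (np.unwrap(np.arctan2(y, x)), np.hypot(x, y)),  # r vs theta
    }
    for name, (u, v) in hostings.items():
        print(f"{name:9s} relative DL = {dl_linear(u, v):9.1f}")
    # The polar hosting codes the same points far more compactly,
    # so MDL selects it.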

MDL as a Least Action Principle for Machine Learning

MDL-like methods hold promise as a unifying principle in machine
learning -- much as Lagrangian methods, which make use of the action
and its minimization, are *the* unifying approach in physics, cutting
across classical physics, relativistic physics, and quantum mechanics.
MDL-like metrics are a type of *action* for machine learning. (In
fact, for certain types of search in machine learning, Lagrangian
optimization can be used.)

(Recent work in machine vision at MIT has suggested the use of MDL as
a principle for 3-D object recognition and disambiguation. It is
posited that what is perceived is related to an MDL description of the
3-D scene. By the way, who is doing this work?)

There are a couple of long-standing conceptual issues in machine learning:

The relationship between learning methodologies -- supervised,
unsupervised, reinforcement learning, etc. Somehow, one would like a
unifying framework for all of them. The fact that MDL-like methods
can be used across several of these methodologies means that they
could help in building such a framework.

The relationship between optimization and machine learning. MDL-like
metrics are posited to be the *general* optimization criterion for
machine learning.

MDL has broad applicability in machine learning. It can be used to
guide search in both unsupervised and supervised learning. It can be
used as the common optimization criterion for "multi-algorithm machine
learning systems". Finally, it can be used to tie the search in
feature space to the search for a coordinate-system hosting.


Seeking a higher form for machine learning,
Albert Boulanger
aboulanger@bbn.com

------------------------------

End of ML-LIST (Digest format)
****************************************
