Machine Learning List: Vol. 4 No. 11
Tuesday, May 26, 1992
Contents:
Position at NASA Ames
ML92 provisional program
Georgia Tech TR's
Parallel problem solving from nature conference
The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>
----------------------------------------------------------------------
Date: Fri, 15 May 92 20:05:29 PDT
From: Steve Minton <minton@ptolemy.arc.nasa.GOV>
Subject: Position at NASA AMES
A full-time research assistant/programmer position is available in the
Artificial Intelligence Research Branch at NASA/Ames in Mountain View.
We are looking for
someone interested in machine learning and problem solving. The
project involves automatically optimizing a constraint satisfaction
engine that is used to solve combinatorial problems (e.g., scheduling
problems). This position is appropriate for someone with a good
Artificial Intelligence and Lisp programming background, who is
interested in participating in basic research in AI.
Ames Research Center is located in the San Francisco Bay area, about
15 minutes from Stanford University. If you are interested in finding out
more, please send a resume to:
Steve Minton
NASA Ames Research Center
Mail Stop 269-2
Moffett Field, CA 94035
Telephone: 415-604-6522
Email: Minton@ptolemy.arc.nasa.gov
------------------------------
Date: Mon, 18 May 92 14:14:39 BST
From: Derek Sleeman <sleeman@computing-science.aberdeen.ac.UK>
Subject: ML92 provisional program
FURTHER REMINDER: NORMAL REGISTRATION DEADLINE is Friday 29th MAY
(Accommodation is only guaranteed up to that date.)
*************************************
Machine Learning Conference, July 1-4 1992
CONFERENCE PROGRAM (Provisional)
Tuesday 30th June
18.30 - 21.30 REGISTRATION & RECEPTION, Elphinstone Hall.
Wednesday 1st JULY
8.15 - 13.00 REGISTRATION, Foyer of Fraser Noble Building
9.00 - 10.30 INVITED TALK: "Machine Learning & Qualitative Reasoning"
Ivan BRATKO (Ljubljana, Slovenia)
Chairman: Derek SLEEMAN
Lecture Theatre 1 Fraser Noble Building
10.30 - 11.00 COFFEE
11.00 - 12.30 Paper Session I: Chair:
Lecture Theatre 1 Fraser Noble Building
Henrik Boström
Eliminating redundancy in explanation-based learning
Peter Clark & Rob Holte
Lazy partial evaluation: an integration of explanation-based generalization
and partial evaluation
Devika Subramanian & Scott Hunter
Measuring utility and the design of provably good EBL algorithms
12.30 - 14.00 LUNCH
14.00 - 15.30 Paper Session II: Chair:
Lecture Theatre 1 Fraser Noble Building
Oren Etzioni
An asymptotic analysis of speedup learning
Philip Laird
Dynamic optimization
Jonathan Gratch & Gerald DeJong
An analysis of learning to plan as a search problem
15.30 - 16.00 TEA
16.00 - 17.30 Paper Session III: Chair:
Lecture Theatre 1 Fraser Noble Building
Alan D Christiansen
Learning to predict in uncertain continuous tasks
Sridhar Mahadevan
Enhancing transfer in reinforcement learning by building stochastic
models of robot actions
Claude Sammut, Scott Hurst, Dana Kedzier & Donald Michie
Learning to fly
18.30 Buses leave Crombie-Johnston Halls for
CIVIC RECEPTION, TOWN HOUSE, Union Street, Central Aberdeen.
Thursday 2nd July
9.00 - 10.30 INVITED TALK: "Children, adults and machines as
discovery systems"
David KLAHR (CMU, Pittsburgh)
Chairman: Ryszard Michalski
Lecture Theatre 1 Fraser Noble Building
10.30 - 11.00 COFFEE
11.00 - 12.30 Paper Session IV: Chair:
Lecture Theatre 1 Fraser Noble Building
Jerzy W Bala, Ryszard S Michalski & Janusz Wnek
The principal axes method for constructive induction
Tom E Fawcett & Paul E Utgoff
Automatic feature generation for problem solving systems
Cullen Schaffer
Deconstructing the digit recognition problem
12.30 - 14.00 LUNCH
14.00 - 15.30 Paper Session V: Chair:
Lecture Theatre 1 Fraser Noble Building
David W Aha
Generalizing from case studies: a case study
Jianping Zhang
Selecting typical instances in instance-based learning
Jeffery A Clouse & Paul E Utgoff
A teaching method for reinforcement learning
15.30 - 16.00 TEA
16.00 - 16.30 Paper Session VI: Chairman: Peter Edwards
Lecture Theatre 1 Fraser Noble Building
Lawrence Hunter, Nomi L Harris & David J States
Mega-classification: finding motifs in massive datastreams
16.30 - 18.00 Previews of Long Posters: Chairman: Peter Edwards
Lecture Theatre 1 Fraser Noble Building
Peter C-H Cheng & Herbert A Simon
The right representation for discovery: finding the conservation of
momentum
Timothy M Converse & Kristian J Hammond
Learning to satisfy conjunctive goals
Cao Feng & Stephen Muggleton
Towards inductive generalisation in higher order logic
Ray J Hickey
Artificial universes - towards a systematic approach to evaluating
algorithms which learn from examples
Wayne Iba & Pat Langley
Induction of one-level decision trees
Cezary Z Janikow
Combining competition and cooperation in supervised inductive learning
Kenji Kira & Larry A Rendell
A practical approach to feature selection
Rey-Long Liu & Von-Wun Soo
Augmenting and efficiently utilizing domain theory in explanation-based
natural language acquisition
Chengjiang Mao
THOUGHT: an integrated learning system for acquiring knowledge structure
Thierry Van de Merckt
NFDT: a system that learns flexible concepts based on decision trees for
numerical attributes
Satinder P Singh
Scaling reinforcement learning algorithms by learning variable temporal
resolution models
18.00 - 19.00 COMMUNITY MEETING, LT1 Fraser Noble building.
19.00 - 21.00 POSTER SESSION & RECEPTION, Elphinstone Hall
Friday 3rd July
9.00 - 10.30 INVITED TALK: "Combining symbolic and neural learning"
Jude SHAVLIK (Wisconsin-Madison)
Chairman: Tom Mitchell
Lecture Theatre 1 Fraser Noble Building
10.30 - 11.00 COFFEE
11.00 - 12.30 Paper Session VII: Chair:
Lecture Theatre 1 Fraser Noble Building
Attilio Giordana & Claudio Sale
Learning structured concepts using genetic algorithms
John J Grefenstette & Connie L Ramsey
An approach to anytime learning
Padhraic Smyth & Jeff Mellstrom
Detecting novel classes with applications to fault diagnosis
12.30 - 14.00 LUNCH
14.00 - 15.30 Paper Session VIII: Chair:
Lecture Theatre 1 Fraser Noble Building
Gerald J Tesauro
Temporal difference learning of backgammon strategy
Hussein Almuallim & Thomas G Dietterich
On learning more concepts
William W Cohen
Compiling prior knowledge into an explicit bias
15.30 - 16.00 TEA
16.00 - 17.30 Paper Session IX: Chair:
Lecture Theatre 1 Fraser Noble Building
Stéphane Lapointe & Stan Matwin
Sub-unification: a tool for efficient induction of recursive programs
Stephen Muggleton, Ashwin Srinivasan & Michael Bain
Compression, significance and accuracy
Somkiat Tangkitvanich & Masamichi Shimura
Refining a relational theory with multiple faults in the concept and
subconcepts
17.30 - 18.00 CLOSING SESSION, LT1, Fraser Noble building
19.15 Buses leave Crombie-Johnston Halls & Central Hotels for
CONFERENCE BANQUET at PITTODRIE HOUSE HOTEL
*******************************
Saturday 4th July
The informal Workshops will be held in several rooms on the central Campus.
(See posters around the Conference area, and your individual Workshop
Proceedings, for details.)
------------------------------
Date: Tue, 26 May 92 15:45:17 EDT
From: Ashwin Ram <ashwin@cc.gatech.EDU>
Subject: New tech reports and papers available by anonymous FTP
The following technical reports and electronic reprints of published
articles have been added to the archives on ftp.cc.gatech.edu
(130.207.3.245), and can be FTP'd anonymously from the directory
pub/ai. (Downloading information follows the abstracts.) The current
list of titles includes:
git-cc-91-37 Learning Momentum: On-line Performance Enhancement for Reactive
Systems
git-cc-92-02 A Theory of Questions and Question Asking
git-cc-92-03 Indexing, Elaboration and Refinement: Incremental Learning of
Explanatory Cases
git-cc-92-04 The Use of Explicit Goals for Knowledge to Guide Inference and
Learning
git-cc-92-19 Introspective Reasoning using Meta-Explanations for
Multistrategy Learning
er-90-01 Incremental Learning of Explanation Patterns and their Indices
er-90-02 Knowledge Goals: A Theory of Interestingness
er-90-03 Decision Models: A Theory of Volitional Explanation
er-91-01 Learning Indices for Schema Selection
er-91-02 A Goal-based Approach to Intelligent Information Retrieval
er-91-03 Evaluation of Explanatory Hypotheses
er-91-04 Using Introspective Reasoning to Select Learning Strategies
er-91-05 Interest-based Information Filtering and Extraction in Natural
Language Understanding Systems
er-92-01 Knowledge-Based Diagnostic Problem Solving and Learning in the
Test Area of Electronics Assembly Manufacturing
er-92-03 Multistrategy Learning with Introspective Meta-Explanations
er-92-04 An Architecture for Integrated Introspective Learning
er-92-05 Learning to Troubleshoot in Electronics Assembly Manufacturing
er-92-06 An Explicit Representation of Forgetting
Abstracts of the new additions follow:
File: git-cc-92-19.ps.Z
TECH-REPORT: GIT-CC-92/19
TITLE: Introspective Reasoning using Meta-Explanations for
Multistrategy Learning
AUTHORS: Ashwin Ram and Michael T. Cox
APPEARS-IN: Machine Learning: A Multistrategy Approach, Vol. IV,
R.S. Michalski and G. Tecuci (eds.), Morgan Kaufmann,
1992, to appear.
ABSTRACT: In order to learn effectively, a reasoner must not only
possess knowledge about the world and be able to improve that
knowledge, but must also introspectively reason about how it
performs a given task and what particular pieces of knowledge it needs
to improve its performance at the current task. Introspection
requires declarative representations of meta-knowledge of the
reasoning performed by the system during the performance task, of the
system's knowledge, and of the organization of this knowledge. This
paper presents a taxonomy of possible reasoning failures that can
occur during a performance task, declarative representations of these
failures, and associations between failures and particular learning
strategies. The theory is based on Meta-XPs, which are explanation
structures that help the system identify failure types, formulate
learning goals, and choose appropriate learning strategies in order to
avoid similar mistakes in the future. The theory is implemented in a
computer model of an introspective reasoner that performs
multistrategy learning during a story understanding task.
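To make the flavour of such failure-to-strategy associations concrete, here is
a minimal Python sketch; the failure types, strategy names and association
table are illustrative assumptions only, not the taxonomy from the report.

# A minimal sketch, assuming hypothetical failure types and strategy names;
# it only illustrates the general shape of a declarative association between
# reasoning failures and learning strategies, not the report's actual taxonomy.

from dataclasses import dataclass

@dataclass
class ReasoningFailure:
    failure_type: str     # e.g. "incorrect-inference", "retrieval-failure"
    expectation: str      # what the reasoner expected to happen
    observation: str      # what actually happened

# Hypothetical association table from failure types to learning strategies.
STRATEGY_TABLE = {
    "incorrect-inference": ["explanation-based learning", "theory refinement"],
    "retrieval-failure":   ["index learning"],
    "missing-knowledge":   ["knowledge acquisition", "similarity-based learning"],
}

def learning_goals_for(failure):
    """Return candidate learning strategies for an identified failure type."""
    return STRATEGY_TABLE.get(failure.failure_type, ["ask the user"])

failure = ReasoningFailure("retrieval-failure",
                           expectation="a relevant prior case is retrieved",
                           observation="no case was retrieved")
print(learning_goals_for(failure))    # ['index learning']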
File: er-92-01.ps.Z
TITLE: Knowledge-Based Diagnostic Problem Solving and Learning
in the Test Area of Electronics Assembly Manufacturing
AUTHORS: S. Narayanan, A. Ram, S.M. Cohen, C.M. Mitchell, T.
Govindraj
APPEARS-IN: SPIE Symposium on Applications of AI X: Knowledge-Based
Systems, Orlando, FL, April 1992
ABSTRACT: A critical area in electronics assembly manufacturing is the
test and repair area. Computerized decision aids in this area can
enhance system performance. A key to developing
computer-based aids is gaining an understanding of the human problem
solving process in the complex task of troubleshooting in electronics
manufacturing. In this paper, we present a computational model of
troubleshooting and learning in electronics assembly manufacturing.
The model is based on a theory of knowledge representation, reasoning,
and learning, which is grounded in observations of human problem
solving. The theory provides a foundation for developing applications
of AI in complex, real world domains.
File: er-92-03.ps.Z
TITLE: Multistrategy Learning with Introspective Meta-Explanations
AUTHORS: Michael T. Cox and Ashwin Ram
APPEARS-IN: Machine Learning: Ninth International Conference,
Aberdeen, Scotland, July 1992, to appear
ABSTRACT: Given an arbitrary learning situation, it is difficult to
determine the most appropriate learning strategy. The goal of this
research is to provide a general representation and processing
framework for introspective reasoning for strategy selection. In this
framework, an introspective system first performs some reasoning task,
recording as it does so a trace of the reasoning itself along with the
results of that reasoning. If a
reasoning failure occurs, the system must retrieve and apply an
introspective explanation of the failure in order to understand the
error and repair the knowledge base. A knowledge structure called a
Meta-Explanation Pattern is used to explain both how conclusions are
derived and why such conclusions fail. If reasoning is represented in
an explicit, declarative manner, the system can examine its own
reasoning, analyze its reasoning failures, identify what it needs to
learn, and select appropriate learning strategies in order to learn
the required knowledge without overreliance on the programmer.
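As a rough illustration of the cycle described here (record a trace, detect a
failure, explain it, then learn), the following self-contained Python sketch
uses a toy lookup task; it is not the authors' implementation, and all names,
the failure test and the repair step are assumptions.

# A toy, self-contained sketch of the record-trace / explain-failure /
# select-strategy cycle described above; it is NOT the authors' system,
# and the task, failure test and repair are illustrative assumptions.

def perform_task(query, kb, trace):
    """Toy 'reasoning task': look the query up in the knowledge base."""
    trace.append(("lookup", query))
    return kb.get(query)                 # None signals a reasoning failure

def explain_failure(trace, result):
    """Toy meta-explanation: classify why the reasoning failed."""
    return "missing-knowledge" if result is None else None

def introspective_cycle(query, kb, oracle):
    trace = []
    result = perform_task(query, kb, trace)
    if result is not None:
        return result
    failure = explain_failure(trace, result)     # introspect on the failure
    if failure == "missing-knowledge":           # pursue a learning goal
        kb[query] = oracle(query)                # repair the knowledge base
    return kb.get(query)

kb = {"bird": "can fly"}
print(introspective_cycle("penguin", kb, oracle=lambda q: "cannot fly"))
print(kb)    # the repaired knowledge base now covers "penguin"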
File: er-92-04.ps.Z
TITLE: An Architecture for Integrated Introspective Learning
AUTHORS: A. Ram, M.T. Cox, S. Narayanan
APPEARS-IN: Machine Learning: Ninth International Conference,
Workshop on Computational Architectures, Aberdeen,
Scotland, July 1992, to appear
ABSTRACT: This paper presents a computational model of introspective
learning, which is a deliberative learning process in which a reasoner
introspects about its own performance on a reasoning task, identifies
what it needs to learn to improve its performance, formulates learning
goals to acquire the required knowledge, and pursues its learning
goals using multiple learning strategies. We discuss two case studies
of integrated introspective learning in two different task domains.
The first case study deals with learning diagnostic knowledge during a
troubleshooting task, and is based on observations of human operators
engaged in a real-world troubleshooting task at an electronics
assembly plant. The second case study deals with learning multiple
kinds of causal and explanatory knowledge during a story understanding
task. The model is computationally justified as a uniform and
extensible framework for deliberative learning using multiple learning
strategies, and cognitively justified as a plausible model of human
deliberative learning.
File: er-92-05.ps.Z
TITLE: Learning to Troubleshoot in Electronics Assembly Manufacturing
AUTHORS: S. Narayanan and Ashwin Ram
APPEARS-IN: Ninth International Machine Learning Conference, Workshop on
Integrated Learning in Real-world Domains, Aberdeen,
Scotland, July 1992, to appear
ABSTRACT: This paper presents a case study on the application of
multistrategy learning methods through introspective reasoning in the
complex task of troubleshooting in electronics assembly manufacturing.
The paper describes a computational model of problem solving and
learning which was built based on observations of troubleshooting
operators and protocol analysis of the data gathered in the test area
of an operational manufacturing plant. The learning model is based on
the theory of Meta-XPs, which are meta-explanation structures that
help the system identify reasoning failures, formulate learning goals,
and choose appropriate learning strategies in order to cope effectively
with the complexity of the domain.
File: er-92-06.ps.Z
TITLE: An Explicit Representation of Forgetting
AUTHORS: Michael T. Cox and Ashwin Ram
APPEARS-IN: Sixth International Conference on Systems Research,
Informatics and Cybernetics, Baden-Baden, Germany,
August, 1992, to appear
ABSTRACT: A pervasive, yet much ignored, factor in the analysis of
processing failures is the problem of misorganized knowledge. If a
system's knowledge is not indexed or organized correctly, it may make
an error, not because it does not have either the general capability
or specific knowledge to solve a problem, but rather because it does
not have the knowledge sufficiently organized so that the appropriate
knowledge structures are brought to bear on the problem at the
appropriate time. In such cases, the system can be said to have
"forgotten" the knowledge, if only in this context. This is the
problem of forgetting or retrieval failure. This research presents an
analysis along with a declarative representation of a number of types
of forgetting errors. Such representations can extend the capability
of introspective failure-driven learning systems, allowing them to
reduce the likelihood of repeating such errors. Examples are
presented from the Meta-AQUA program, which learns to improve its
performance on a story understanding task through an introspective
meta-analysis of its knowledge, its organization of its knowledge, and
its reasoning processes.
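A toy example of the kind of retrieval failure described here (not code from
Meta-AQUA; the memory layout, cues and error record are assumptions): a case
filed under the wrong index is effectively forgotten for the cue at hand, and
the failure itself can be recorded declaratively and used to re-index memory.

# Toy illustration only: a case memory reachable only through indices.
# A case filed under the wrong cue cannot be retrieved even though it
# exists; the system has "forgotten" it in this context.  The failure
# is recorded declaratively so a learner could later re-index memory.

cases   = {"case-17": "handle a bird that cannot fly"}
indices = {"ostrich": "case-17"}        # mis-indexed: reachable only via "ostrich"

forgetting_errors = []                  # declarative records of retrieval failures

def retrieve(cue):
    case_id = indices.get(cue)
    if case_id is None:
        forgetting_errors.append({"type": "retrieval-failure", "cue": cue})
        return None
    return cases[case_id]

print(retrieve("penguin"))              # None; a forgetting error is recorded
indices["penguin"] = "case-17"          # re-indexing repairs memory organization
print(retrieve("penguin"))              # 'handle a bird that cannot fly'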
ABOUT THE ARCHIVE:
This archive contains technical reports published by the AI Group, College of
Computing, Georgia Tech, as well as electronic reprints of articles from major
journals and conferences. Each publication is available as a compressed
PostScript file (.ps.Z), and should be printable using any standard method
of printing PostScript files. Let me know if you have any problems with
downloading or printing.
Check the README file for more details, and the ABSTRACTS file for more
information, including abstracts, authors and publication information.
HOW TO DOWNLOAD:
All files are retrievable using anonymous FTP from ftp.cc.gatech.edu
(130.207.3.245) from the directory /pub/ai. Login as anonymous and enter
your real name as the password. The compressed PostScript reports (*.ps.Z)
should be transferred in binary mode (using the ftp binary command) and
decompressed with uncompress before printing. Here is a sample session
illustrating how to download the ABSTRACTS file:
% ftp ftp.cc.gatech.edu
Connected to solaria.cc.gatech.edu.
220 solaria FTP server (SunOS 4.1) ready.
Name (ftp.cc.gatech.edu:ashwin): anonymous
331 Guest login ok, send ident as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd pub/ai
250 CWD command successful.
ftp> ls
200 PORT command successful.
150 ASCII data connection for /bin/ls (130.207.4.33,1518) (0 bytes).
ABSTRACTS
README
er-90-01.ps.Z
er-90-02.ps.Z
er-90-03.ps.Z
er-91-02.ps.Z
er-91-03.ps.Z
git-cc-92-02.ps.Z
git-cc-92-03.ps.Z
git-cc-92-04.ps.Z
226 ASCII Transfer complete.
135 bytes received in 0.64 seconds (0.21 Kbytes/s)
ftp> get ABSTRACTS
200 PORT command successful.
150 ASCII data connection for ABSTRACTS (130.207.4.33,1519) (15222 bytes).
226 ASCII Transfer complete.
local: ABSTRACTS remote: ABSTRACTS
15461 bytes received in 1.9 seconds (8 Kbytes/s)
ftp> quit
221 Goodbye.
--
Ashwin Ram <ashwin@cc.gatech.edu>
Assistant Professor, College of Computing
Georgia Institute of Technology, Atlanta, Georgia 30332-0280
(404) 853-9372
------------------------------
Date: Mon, 25 May 92 19:28:42 +0200
From: Bernard Manderick <bernard@arti1.vub.ac.be>
Subject: Parallel problem solving from nature conference
PPSN92
PARALLEL PROBLEM SOLVING FROM NATURE CONFERENCE
ARTIFICIAL INTELLIGENCE LAB
FREE UNIVERSITY OF BRUSSELS
BELGIUM
28 - 30 SEPTEMBER 1992
General Information
===================
The second Parallel Problem Solving from Nature conference (PPSN92) will be
held at the Free University of Brussels, September 28-30, 1992.
The unifying theme of the PPSN conference is natural computation,
i.e. the design, the theoretical and empirical understanding, and the
comparison of algorithms gleaned from nature and their
application to real-world problems in science and technology.
Examples are genetic algorithms, evolution strategies, algorithms
based on neural networks and immune systems.
Since last year, there has been a collaboration with the International
Conference on Genetic Algorithms (ICGA), with the two conferences
alternating: the ICGA conferences will be held in the US in odd years,
while the PPSN conferences will be held in Europe in even years.
The major objective of this conference is to provide a biennial
international forum for scientists from all over the world where they
can discuss new theoretical, empirical and implementational
developments in natural computation in general and evolutionary
computation in particular together with their applications in science,
technology and administration.
The conference will feature invited speakers as well as technical and
poster sessions.
More information and a registration form are available from:
Bernard Manderick
Artificial Intelligence Lab
VUB
Pleinlaan 2
B-1050 Brussels
BELGIUM
Phone: +32 2 641 35 75
Fax:   +32 2 641 35 82
Email: ppsn@arti.vub.ac.be
End of ML-LIST 4.11 (Digest format)
****************************************