Machine Learning List: Vol. 5 No. 13
Monday, June 21, 1993
Contents:
  ML93 Reinforcement Learning Workshop - Prelim Program
  Announcements from Kluwer
  ICGA workshop proposal/participation request
  Call for participation - Australian AI'93 Machine Learning workshop
  GA Conference workshop; CFP
The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> (or <N>.Z, compressed),
where X and N are the volume and number of the issue; ID: anonymous
PASSWORD: <your mail address>
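For example, this issue (Volume 5, Number 13) can be retrieved with a
standard anonymous-FTP session (an illustrative transcript; exact prompts
vary by client):
   ftp ics.uci.edu
   Name: anonymous
   Password: <your mail address>
   ftp> cd pub/ml-list/V5
   ftp> get 13
   ftp> quit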
----------------------------------------------------------------------
Date: Tue, 15 Jun 93 13:09:20 -0400
From: Rich Sutton <sutton@gte.COM>
Subject: ML93 Reinforcement Learning Workshop - Prelim Program
LAST CALL FOR PARTICIPATION
"REINFORCEMENT LEARNING: What We Know, What We Need"
an Informal Workshop to follow ML93 (10th Int. Conf. on Machine Learning)
June 30 & July 1, University of Massachusetts, Amherst
Reinforcement learning is a simple way of framing the problem of an
autonomous agent learning and interacting with the world to achieve a goal.
This has been an active area of machine learning research for the last 5
years. The objective of this workshop is to present concisely the current
state of the art in reinforcement learning and to identify and highlight
critical open problems.
The intended audience is all learning researchers interested in reinforcement
learning. The first half of the workshop will be mainly tutorial while the
second half will define and explore open problems. The entire workshop will
last approximately one and three-quarter days. It is possible to register
for the workshop without registering for the conference, but attending the
conference is highly recommended: many new RL results will be presented
there and will not be repeated in the workshop. Registration information is
given at the end of this message.
Program Committee: Rich Sutton (chair), Nils Nilsson, Leslie Kaelbling,
Satinder Singh, Sridhar Mahadevan, Andy Barto, Steve Whitehead
.............................................................................
PROGRAM INFORMATION
The following draft program is divided into "sessions", each consisting of a
set of presentations on a single topic. The earlier sessions lean toward
"What We Know" and the later sessions toward "What We Need", although some
of each will be covered in all sessions. Sessions last 60-120 minutes and
are separated by 30-minute breaks. Each session has an organizer and a
series of speakers, one of whom is likely to be the organizer. In most cases
the speakers are meant to cover a body of work, not just their own, as a
survey directed at identifying and explaining the key issues and open
problems. The organizer works with the speakers to assure this (the organizer
also has primary responsibility for picking the speakers, and chairs the
session).
*****************************************************************************
PRELIMINARY SCHEDULE:
June 30:
9:00--10:30 Session 1: Defining Features of RL
10:30--11:00 Break
11:00--12:30 Session 2: RL and Dynamic Programming
12:30--2:00 Lunch
2:00--3:30 Session 3: Theory: Stochastic Approximation and Convergence
3:30--4:00 Break
4:00--5:00 Session 4: Hidden State and Short-Term Memory
July 1:
9:00--11:00 Session 5: Structural Generalization: Scaling RL to Large
State Spaces
11:00--11:30 Break
11:30--12:30 Session 6: Hierarchy and Abstraction
12:30--1:30 Lunch
1:30--2:30 Session 7: Strategies for Exploration
2:30--3:30 Session 8: Relationships to Neuroscience and Evolution
*****************************************************************************
PRELIMINARY PROGRAM
===========================================================================
Session 1: Defining Features of Reinforcement Learning
Organizer: Rich Sutton, rich@gte.com
"Welcome and Announcements" by Rich Sutton, GTE (10 minutes)
"History of RL" by Harry Klopf, WPAFB (25 minutes)
"Delayed Reward: TD Learning and TD-Gammon" by Rich Sutton, GTE (50 minutes)
The intent of the first two talks is to start getting across certain key
ideas about reinforcement learning: 1) RL is a problem, not a class of
algorithms, 2) the distinguishing features of the RL problem are
trial-and-error search and delayed reward. The third talk is a tutorial
presentation of temporal-difference learning, the basis of learning methods
for handling delayed reward. This talk will also present Gerry Tesauro's
TD-Gammon, a TD-learning system that learned to play backgammon at a
grandmaster level. (There is still an outside chance that Tesauro will be able
to attend the workshop and present TD-Gammon himself.)
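For readers new to TD methods, the core update can be stated in one line (a
standard formulation of TD(0), given here for orientation; the notation is
ours, not the speaker's). On a transition from state s_t to s_{t+1} with
reward r_{t+1}, the value estimate V is adjusted by the TD error:

   V(s_t) \leftarrow V(s_t) + \alpha [ r_{t+1} + \gamma V(s_{t+1}) - V(s_t) ]

where \alpha is a step size and \gamma a discount factor. The bracketed
prediction error is what lets credit for a delayed reward propagate back
through the states that preceded it.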
===========================================================================
Session 2: RL and Dynamic Programming
Organizer: Andy Barto, barto@cs.umass.edu
"Q-learning" by Chris Watkins, Morning Side Inc (30 minutes)
"RL and Planning" by Andrew Moore, MIT (30 minutes)
"Asynchronous Dynamic Programming" by Andy Barto, UMass (30 minutes)
These talks will cover the basic ideas of RL and its relationship to dynamic
programming and planning, including Markov Decision Tasks.
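As orientation for this session (our summary, not the speakers'): classical
dynamic programming methods for Markov Decision Tasks repeatedly apply
"full" backups, which require a model of the transition probabilities, e.g.
the value-iteration backup

   V(s) \leftarrow \max_a \sum_{s'} P(s'|s,a) [ R(s,a,s') + \gamma V(s') ].

Q-learning and related RL methods can be viewed as sampled, model-free
versions of such backups, a connection Session 3 develops in detail.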
===========================================================================
Session 3: New Results in RL and Asynchronous DP
Organizer: Satinder Singh, singh@cs.umass.edu
"Introduction, Notation, and Theme" by Satinder P. Singh, UMass
"Stochastic Approximation: Convergence Results" by T Jaakkola & M Jordan, MIT
"Asychronous Policy Iteration" by Ron Williams, Northeastern
"Convergence Proof of Adaptive Asynchronous DP" by Vijaykumar Gullapalli, UMass
"Discussion of *some* Future Directions for Theoretical Work" by ?
This session consists of two parts. In the first part we present a new and
fairly complete theory of (asymptotic) convergence for reinforcement learning
(with lookup tables as function approximators). This theory explains RL
algorithms as replacing the full-backup operator of classical dynamic
programming algorithms by a random backup operator that is unbiased. We
present an extension to classical stochastic approximation theory (e.g.,
Dvoretzky's) to derive probability-one convergence proofs for Q-learning,
TD(0), and TD(lambda) that are different from, and perhaps simpler than,
previously available proofs. We will also use the stochastic approximation
framework to highlight the contribution made by reinforcement learning
algorithms such as TD, and Q-learning, to the entire class of iterative
methods for solving the Bellman equations associated with Markovian Decision
Tasks.
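To make the "random backup operator" view concrete (our illustration of the
idea, not the presenters' notation): the full backup for action values,

   Q(s,a) \leftarrow \sum_{s'} P(s'|s,a) [ R(s,a,s') + \gamma \max_{a'} Q(s',a') ],

is replaced in Q-learning by a backup from a single sampled successor state
s' and reward r,

   Q(s,a) \leftarrow (1-\alpha) Q(s,a) + \alpha [ r + \gamma \max_{a'} Q(s',a') ],

which is unbiased because s' is drawn from P(.|s,a); the stochastic
approximation machinery then handles the sampling noise.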
The second part deals with contributions by RL researchers to
asynchronous DP. Williams will present a set of algorithms (and convergence
results) that are asynchronous at a finer grain than classical asynchronous
value iteration, but still use "full" backup operators. These algorithms are
related to the modified policy iteration algorithm of Puterman and Shin, as
well as to the ACE/ASE (actor-critic) architecture of Barto, Sutton and
Anderson. Subsequently, Gullapalli will present a proof of convergence for
"adaptive" asynchronous value iteration showing that, to ensure convergence
with probability one, constraints must be placed on how many model-building
steps are performed between two consecutive updates of the value function.
Lastly we will discuss some pressing theoretical questions
regarding rate of convergence for reinforcement learning algorithms.
===========================================================================
Session 4: Hidden State and Short-Term Memory
Organizer: Lonnie Chrisman, lonnie.chrisman@cs.cmu.edu
Speakers: Lonnie Chrisman & Michael Littman, CMU
Many realistic agents cannot directly observe every relevant aspect of their
environment at every moment in time. Such hidden state causes problems for
many reinforcement learning algorithms: it can make temporal-difference
methods unstable, and it renders policies that simply map sensory input
to action insufficient.
In this session we will examine the problems of hidden state and of learning
how to best organize short-term memory. I will review and compare existing
approaches such as those of Whitehead & Ballard, Chrisman, Lin & Mitchell,
McCallum, and Ring. I will also give a tutorial on the theories of Partially
Observable Markovian Decision Processes, Hidden Markov Models, and related
learning algorithms such as Baum-Welch/EM as they are relevant to
reinforcement learning.
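As one concrete piece of the POMDP machinery (a standard result, stated here
for orientation rather than as a preview of the talk): when state is hidden,
a sufficient statistic for the agent's history is the belief state b, a
probability distribution over states, updated after taking action a and
observing o by

   b'(s') \propto O(o|s',a) \sum_s T(s'|s,a) b(s)

(normalized to sum to one). Policies can then map beliefs, rather than raw
sensory input, to actions, one principled response to the insufficiency
noted above.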
Note: Andrew McCallum will present a paper on this topic as part of the
conference; that material will not be repeated in the workshop.
===========================================================================
Session 5: Structural Generalization: Scaling RL to Large State Spaces
Organizer: Sridhar Mahadevan, sridhar@watson.ibm.com
"Motivation and Introduction" by Sridhar Mahadevan, IBM
"Neural Nets" by Long-Ji Lin, Siemens
"CMAC" by Tom Miller, Univ. New Hampshire
"Kd-trees and CART" by Marcos Salganicoff, UPenn
"Learning Teleo-Reactive Trees" by Nils Nilsson, Stanford
"Function Approximation in RL: Issues and Approaches" by Richard Yee, UMass
"RL with Analog State and Action Vectors", Leemon Baird, WPAFB
RL is slow to converge in tasks with high-dimensional continuous state
spaces, particularly given sparse rewards. One fundamental issue in
scaling RL to such tasks is structural credit assignment, which deals
with inferring rewards in novel situations. This problem can be
viewed as a supervised learning task, the goal being to learn a
function from instances of states, actions, and rewards. Of course,
the function cannot be stored exhaustively as a table, and the
challenge is to devise more compact storage methods. In this session we
will discuss some of the different approaches to the structural
generalization problem.
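To fix ideas, here is a minimal sketch (ours, with assumed names, not any
speaker's method) of the pattern underlying several of these talks: replace
the Q table with a parameterized function and turn the TD error into a
gradient-style weight update, so that experience generalizes across similar
states.

   # Minimal sketch: Q-learning with a linear function approximator in
   # place of a lookup table.  feats() is a hypothetical featurizer; any
   # fixed mapping from (state, action) to a real vector will do.
   import random

   NUM_FEATURES = 8
   ALPHA, GAMMA = 0.1, 0.9          # step size, discount factor
   weights = [0.0] * NUM_FEATURES   # the entire "table" is this vector

   def feats(state, action):
       # Illustrative stand-in: deterministic pseudo-random features.
       rng = random.Random(hash((state, action)))
       return [rng.uniform(-1.0, 1.0) for _ in range(NUM_FEATURES)]

   def q(state, action):
       return sum(w * x for w, x in zip(weights, feats(state, action)))

   def update(state, action, reward, next_state, actions):
       # TD error against the greedy one-step target, then a
       # gradient-style adjustment of the shared weights.
       target = reward + GAMMA * max(q(next_state, a) for a in actions)
       error = target - q(state, action)
       for i, x in enumerate(feats(state, action)):
           weights[i] += ALPHA * error * x

Neural nets, CMACs, kd-trees, and the other function classes above can be
read as different choices for the approximator in roughly this role.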
Note: Steve Whitehead & Rich Sutton will present a paper on this topic as
part of the conference; that material will not be repeated in the workshop.
===========================================================================
Session 6: Hierarchy and Abstraction
Organizer: Leslie Kaelbling, lpk@cs.brown.edu
Speakers: To be determined
Too much of RL is concerned with low-level actions and low-level (single time
step) models. How can we model the world, and plan about actions, at a higher
level, or over longer time scales? How can we integrate models and actions at
different time scales and levels of abstraction? To address these questions,
several researchers have proposed models of hierarchical learning and
planning, e.g., Satinder Singh, Mark Ring, Chris Watkins, Long-ji Lin, Leslie
Kaelbling, and Peter Dayan & Geoff Hinton. The format for this session will
be a brief introduction to the problem by the session organizer followed by
short talks and discussion. Speakers have not yet been determined.
Note: Kaelbling will also speak on this topic as part of the conference; that
material will not be repeated in the workshop.
=============================================================================
Session 7: Strategies for Exploration
Organizer: Steve Whitehead, swhitehead@gte.com
Exploration is essential to reinforcement learning, since it is through
exploration that an agent learns about its environment. Naive exploration
can easily result in intractably slow learning. On the other hand,
exploration strategies that are carefully structured or exploit external
sources of bias can do much better.
A variety of approaches to exploration have been devised over the last few
years (e.g., Kaelbling, Sutton, Thrun, Koenig, Lin, Clouse, Whitehead). The
goal of this session is to review these techniques, understand their
similarities and differences, understand when and why they work, determine
their impact on learning time, and to the extent possible organize them
taxonomically.
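As a point of reference (our example; the session itself concerns more
structured strategies), the simplest undirected rule is epsilon-greedy
action selection:

   # Minimal sketch: epsilon-greedy exploration.  With probability
   # epsilon take a random action; otherwise take the greedy one.
   import random

   def epsilon_greedy(q_values, epsilon=0.1):
       # q_values: dict mapping each action to its estimated value.
       if random.random() < epsilon:
           return random.choice(list(q_values))   # explore
       return max(q_values, key=q_values.get)     # exploit

Many of the approaches listed above improve on this by directing exploration
toward actions or states whose values are uncertain or rarely visited.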
The session will consist of a short introduction by the session organizer
followed by an open discussion. The discussion will be informal but aimed at
issues raised during the introduction. An informal panel of researchers will be
on hand to participate in the discussion and answer questions about their
work in this area.
=============================================================================
Session 8: Relationships to Neuroscience and Evolution
Organizer: Rich Sutton, rich@gte.com
We close the workshop with a reminder of RL's links to neuroscience and to
Genetic Algorithms / Classifier Systems:
"RL in the Brain: Developing Connections Through Prediction" by R Montague, Salk
"Classifier Systems as Reinforcement Learners" by Stewart Wilson, Rowland
Institute
Abstract of first talk:
Both vertebrates and invertebrates possess diffusely projecting
neuromodulatory systems. In the vertebrate, it is known that these systems
are involved in the development of cerebral cortical structures and can
deliver reward and/or salience signals to the cerebral cortex and other
structures to influence learning in the adult. Recent data in primates
suggest that this latter influence obtains because changes in firing in
nuclei that deliver the neuromodulators reflect the difference between the
predicted and actual reward, i.e., a prediction error. This relationship is
qualitatively similar to that predicted by Sutton and Barto's classical
conditioning theory. These systems innervate large expanses of cortical and
subcortical turf through extensive axonal projections that originate in
midbrain and basal forebrain nuclei and deliver such compounds as dopamine,
serotonin, norepinephrine, and acetylcholine to their targets. The small
number of neurons comprising these subcortical nuclei relative to the extent
of the territory their axons innervate suggests that the nuclei are reporting
scalar signals to their target structures. These facts are synthesized into a
single framework which relates the development of brain structures and
conditioning in adult brains. We postulate a modification to Hebbian accounts
of self-organization: Hebbian learning is conditional on an incorrect
prediction of future delivered reinforcement from a diffuse neuromodulatory
system. The reinforcement signal is derived both from externally driven
contingencies, such as proprioception from eye movements, and from
internal pathways leading from cortical areas to subcortical nuclei. We
suggest a specific model for how such predictions are made in the vertebrate
and invertebrate brain. We illustrate the framework with examples ranging
from the development of sensory and sensory-motor maps to foraging behavior
in bumble-bees.
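In symbols (our paraphrase of the postulate, not an equation from the talk):
where a plain Hebbian rule changes a weight in proportion to correlated pre-
and postsynaptic activity,

   \Delta w_{ij} = \eta x_i y_j,

the proposed modification gates the change by the broadcast prediction error
\delta(t),

   \Delta w_{ij} = \eta \delta(t) x_i y_j,

so that weights change only when predicted and actual reinforcement disagree,
in the spirit of the TD error in Sutton and Barto's theory.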
******************************************************************************
GENERAL INFO ON REGISTERING FOR ML93 AND WORKSHOPS:
Tenth International Conference on Machine Learning (ML93)
=========================================================
The conference will be held at the University of Massachusetts in Amherst,
Massachusetts, from June 27 (Sunday) through June 29 (Tuesday). The
conference will feature four invited talks and forty-six paper presentations.
The invited speakers are Leo Breiman (U.C. Berkeley, Statistics), Micki Chi
(U. Pittsburgh, Psychology), Michael Lloyd-Hart (U. Arizona, Adaptive Optics
Group of Steward Observatory), and Pat Langley (Siemens, Machine Learning).
Following the conference, there will be three informal workshops:
Workshop #A:
Reinforcement Learning: What We Know, What We Need (June 30 - July 1)
Organizers: R. Sutton (chair), N. Nilsson, L. Kaelbling, S. Singh,
S. Mahadevan, A. Barto, S. Whitehead
Workshop #B:
Fielded Applications of Machine Learning (June 30 - July 1)
Organizers: P. Langley, Y. Kodratoff
Workshop #C:
Knowledge Compilation and Speedup Learning (June 30)
Organizers: D. Subramanian, D. Fisher, P. Tadepalli
Options and fees:
Conference registration fee $140 regular
$110 student
Breakfast/lunch meal plan (June 27-29) $33
Dormitory housing (nights of June 26-28) $63 single occupancy
$51 double occupancy
Workshop A (June 30-July 1) $40
Workshop B (June 30-July 1) $40
Breakfast/lunch meal plan (June 30-July 1) $22
Dormitory housing (nights of June 29-30) $42 single occupancy
$34 double occupancy
Workshop C (June 30) $20
Breakfast/lunch meal plan (June 30) $11
Dormitory housing (night of June 29) $21 single occupancy
$17 double occupancy
Administrative fee (required) $10
Late fee (received after May 10) $30
To obtain a FAX of the registration form, send an email request to Paul Utgoff
at ml93@cs.umass.edu or utgoff@cs.umass.edu.
------------------------------
Date: Mon, 7 Jun 1993 16:20:12 -0400 (EDT)
From: Suzanne E Bulson <sbkluwer@world.std.COM>
Subject: Announcements from Kluwer
ROBOT LEARNING, Edited by Jonathan H. Connell and Sridhar Mahadevan
(T.J. Watson Research Center, IBM); 1993; 256 pages; ISBN 0-7923-
9365-1; $88.00
25% PRE-PUBLICATION DISCOUNT - DISCOUNT PRICE $66.00
VALID UNTIL 7/31/93 AND VALID ONLY THROUGH
THE NORTH AMERICAN OFFICE
Instead of relying on the simple simulations found in many papers
on this subject, ROBOT LEARNING provides detailed experimental
results from a number of real robot systems. Explore the new
generation of robotics with ROBOT LEARNING, the insider's guide to
the leading research projects in artificial intelligence. With the
resurgence of interest in robot learning, the editors have provided
an easily accessible collection of papers on the subject, many of
which contain new material that has yet to be published elsewhere.
The research for these papers is being conducted at leading
universities and research labs, including MIT, Carnegie Mellon
University, Brown University, the University of Texas at Austin,
the University of Rochester, and IBM.
The main research directions covered in ROBOT LEARNING include:
reinforcement learning; behavior-based architectures; neural
networks; map learning; action models; navigation; and guided
exploration. This book is an ideal reference for a wide variety of
readers, including computer scientists, roboticists, mechanical
engineers, psychologists, ethologists, neurophysiologists,
mathematicians, and philosophers. ROBOT LEARNING also includes a
detailed bibliography of research papers and books on robot
learning for further reading.
CONTENTS:
CONTRIBUTORS
PREFACE
INTRODUCTION TO ROBOT LEARNING/Jonathan H. Connell and Sridhar
Mahadevan (T.J. Watson Research Center, IBM)
KNOWLEDGE-BASED TRAINING OF ARTIFICIAL NEURAL NETWORKS FOR
AUTONOMOUS ROBOT DRIVING/Dean M. Pomerleau (School of Computer
Science, Carnegie Mellon University)
LEARNING MULTIPLE GOAL BEHAVIOR VIA TASK DECOMPOSITION AND DYNAMIC
POLICY MERGING/Steve Whitehead, Jonas Karlsson and Josh Tenenberg
(Department of Computer Science, University of Rochester)
MEMORY-BASED REINFORCEMENT LEARNING: CONVERGING WITH LESS DATA AND
LESS REAL TIME/Andrew W. Moore and Christopher G. Atkeson
(Artificial Intelligence Laboratory, Massachusetts Institute of
Technology)
RAPID TASK LEARNING FOR REAL ROBOTS/Jonathan H. Connell and Sridhar
Mahadevan (T.J. Watson Research Center, IBM)
THE SEMANTIC HIERARCHY IN ROBOT LEARNING/Benjamin Kuipers, Richard
Froom, Wan Yik Lee and David Pierce (Artificial Intelligence
Laboratory, University of Texas at Austin)
UNCERTAINTY IN GRAPH-BASED MAP LEARNING/Thomas Dean, Kenneth Basye,
and Leslie Kaelbling (Department of Computer Science, Brown
University)
REAL ROBOTS, REAL LEARNING PROBLEMS/Rodney A. Brooks and Maja J.
Mataric (Artificial Intelligence Laboratory, Massachusetts
Institute of Technology)
BIBLIOGRAPHY
INDEX
*******************************************************************
KLUWER ACADEMIC PUBLISHERS
PRE-PUBLICATION OFFER
ORDER FORM
Please send me ____ copy(ies) of ROBOT LEARNING at this special
pre-publication price of $66.00
1993 256 pages ISBN 0-7923-9365-1 $88.00
____ Check Enclosed for $____________
____ Send me/my office a pro-forma invoice
Charge my: ____ VISA ____ Mastercard ____ American Express
Account Number _______________________________ Exp. Date __________
Signature _________________________________________________________
Daytime Phone _____________________________________________________
Send the books to:
Name ______________________________________________________________
Address ___________________________________________________________
City ____________________________ State ____________ Zip __________
Free shipping and handling in North America on pre-paid book
orders - Overseas orders please add $10.00 for shipping and
handling - Massachusetts residents please add 5% sales tax - Pre-
publication price available to pre-paid orders only - Prices
subject to change without notice - Additional discounts are not
applicable to the pre-publication price
RETURN THIS FORM TO:
KLUWER ACADEMIC PUBLISHERS, Order Department, PO Box 358, Accord
Station, Hingham, MA 02018-0358; Phone (617) 871-6600; Fax (617)
871-6528; E-mail kluwer@world.std.com
MACHINE LEARNING
Full contents, as well as instructions for authors, aims and scope,
and ordering information, are available via anonymous ftp from
world.std.com. The filenames are mach.inf (for authors' info, aims
and scope, and ordering) and mach.toc (for the table of contents).
The 1993 (Volumes 10-13, 3 issues/volume) institutional
subscription rate is $524.00/Dfl.972.00.
The special subscription rate for AAAI members is $132.00.
CONTENTS VOLUME 13
VOLUME 13, ISSUE 1, September 1993
- Cost-Sensitive Learning of Classification Knowledge and Its
Applications in Robotics by Ming Tan
- Explanation-Based Learning for Diagnosis by El Fattah/O'Rorke
- Extracting Refined Rules from Knowledge-Based Neural Networks
by Towell/Shavlik
- Prioritized Sweeping: Reinforcement Learning with Less Data and
Less Real Time by Moore/Atkeson
- Research Note on Decision Lists by R. Kohavi and S. Benson
- Technical Note
Selecting a Classification Method by Cross-Validation by Cullen
Schaffer
- Book Review and Response
- Review by Lisa Hellerstein of Machine Learning: A
Theoretical Approach by Balas K. Natarajan
- Response from B.K. Natarajan about the review by L.
Hellerstein
CONTENTS VOLUME 12
VOLUME 12, ISSUE 1/2/3, August 1993
SPECIAL ISSUE ON MACHINE DISCOVERY
- Introduction: Cognitive Autonomy in Machine Discovery by Jan M.
Zytkow
- An Integrated Framework for Empirical Discovery by Bernd
Nordhausen and Pat Langley
- Experience Selection and Problem Choice in an Exploratory
Learning System by Paul D. Scott and Shaul Markovitch
- Discovery by Minimal Length Encoding: A Case Study in Molecular
Evolution by Aleksandar Milosavljevic and Jerzy Jurka
- Design Methods for Scientific Hypothesis Formation and Their
Application to Molecular Biology by Peter D. Karp
- Machine Discovery of Effective Admissible Heuristics by Armand
E. Prieditis
- Discovery as Autonomous Learning from the Environment by Wei-
Min Shen
- Bivariate Scientific Function Finding in a Sampled, Real-Data
Testbed by Cullen Schaffer
- The Design of Discrimination Experiments by Shankar A.
Rajamoney
CONTENTS VOLUME 11
VOLUME 11, ISSUE 2/3, May/June 1993
SPECIAL ISSUE ON MULTISTRATEGY LEARNING
- Introduction by R.S. Michalski
- Inferential Theory of Learning as a Conceptual Basis for
Multistrategy Learning by R.S. Michalski
- Multistrategy Learning and Theory Revision by L. Saitta and M.
Botta
- Learning Causal Patterns: Making a Transition from Data-driven
to Theory-driven Learning by M. Pazzani
- Refining Algorithms with Knowledge-based Neural Networks:
Improving the Chou-Fasman Algorithm for Protein Folding by
R. Maclin and J.W. Shavlik
- Balanced Cooperative Modeling by K. Morik
- Plausible Justification Trees: A Framework for Deep and Dynamic
Integration of Learning Strategies by G. Tecuci
VOLUME 11, ISSUE 1, April 1993
- Coding Decision Trees by C.S. Wallace and J.D. Patrick
- Active Learning Using Arbitrary Binary Valued Queries by S.R.
Kulkarni, S.K. Mitter & J.N. Tsitsiklis
- Noise-Tolerant Occam Algorithms and their Applications to
Learning Decision Trees by Y. Sakakibara
- Very Simple Classification Rules Perform Well on Most Commonly-
Used Datasets by Robert C. Holte
- An Analysis of the Witt Algorithm by
Talmon/Braspenning/Fonteijn
------------------------------
Subject: ICGA workshop proposal/participation request
Date: Tue, 08 Jun 93 20:35:30 -0600
From: "Robert Elliott Smith.dat" <rob@comec4.mh.ua.EDU>
Call for Workshop Proposals
and Workshop Participation
ICGA-93
The Fifth International Conference on
Genetic Algorithms
17-21 July, 1993
University of Illinois at
Urbana-Champaign
Early this Spring, the organizers of ICGA solicited proposals for workshops.
Proposals for six workshops have been received and accepted thus far.
These workshops are listed below.
ICGA attendees are encouraged to contact the organizers of workshops in
which they would like to participate. Email addresses for workshop
organizers are included below.
The organizers would also like to encourage proposals for additional
workshops. If you would like to organize and chair a workshop, please
submit a one-paragraph proposal, including a description of the workshop's
topic, and some idea of how the workshop will be organized.
Workshop proposals will be accepted by email only at
icga93@pele.cs.unm.edu
At ICGA-91 (in San Diego), the workshops served an important role,
providing smaller, less formal meetings for the discussion of specific topics
related to genetic algorithms research. The organizers hope that this
tradition will continue at ICGA-93.
ICGA-93 workshops (if you wish to participate, please write directly to the
workshop's organizer):
========================================================================
Genetic Programming
Organizer: Kim Kinnear (kim.kinnear@sun.com)
Engineering Applications of GAs (structural shape and topology optimization)
Organizer: Mark Jakiela (jakiela@MIT.EDU)
Discovery of long-action chains and emergence of hierarchies in classifier
systems
Organizers: Alex Shevorshkon
Erhard Bruderer (Erhard.Bruderer@um.cc.umich.edu)
Niching Methods
Organizers: Alan Schultz (schultz@aic.nrl.navy.mil)
Sam Mahfoud (mahfoud@gal4.ge.uiuc.edu)
Combinations of GAs and Neural Nets (COGANN)
Organizer: J. David Schaffer (ds1@philabs.Philips.Com)
GAs in control systems
Organizer: Terry Fogarty (tc_fogar@pat.uwe-bristol.ac.uk)
------------------------------
Date: Wed, 2 Jun 93 10:20:43 +1000
From: Sabrina Sestito <sestito@latcs1.lat.oz.au>
Subject: Call for participation - Australian AI'93 Machine Learning workshop
CALL FOR PARTICIPATION
Australian AI'93 workshop on Machine Learning :
recent progress and future directions
Topic and Issues
Machine learning is concerned with building and using computer programs
able to construct new knowledge or to efficiently use knowledge already known.
All natural intelligent systems have a mechanism for adaptive response.
Machine learning is a mechanism whereby implicit information in corporate and
scientific records can be extracted. Without a learning mechanism, all
knowledge used by a program must be explicitly represented by the programmer.
Consequently, progress in machine learning has become central to the
development of the field of artificial intelligence as a whole and affects
almost all of its sub-areas, as learning can accompany any kind of problem
solving or process.
The topics of this workshop include, but are not limited to:
- symbolic methods
- sub-symbolic methods
- integrating different strategies
- experimental comparisons of methods
- theoretical comparisons of methods
- choice of method for a particular domain
- the future of machine learning
A major issue to be discussed in this workshop is the pragmatics of using
machine learning in the construction of classifiers and expert systems.
The workshop will provide a forum for machine learning researchers (both
nationally and internationally) to come together and exchange ideas in an
informal manner. It will be held as part of the Australian Joint AI
conference (AI'93) in Melbourne.
Format
The aim of the workshop is to offer an opportunity for researchers working in,
or interested in, the field of machine learning to exchange ideas and interact
with others. Participation in the workshop will be decided by a submitted
extended abstract of no more than 6 pages. Overall topics of discussion will
be extracted from these abstracts in order to produce a coherent programme.
Some participants will be invited to give formal presentations (approx 10-15
minutes each), while others will be invited to attend. The organizers intend
to collate the extended abstracts into a handout. Submissions of controversial
and/or innovative abstracts are encouraged.
Deadlines
Submission deadline: 9th August, 1993
Participants notified: 10th September, 1993
Workshop date: 16th November (morning), 1993
COST: $30
VENUE: Grand Hyatt, Melbourne
123 Collins St.
How to participate
Participants should submit 4 copies of the extended abstract. This abstract
should be in a 12pt font with 1-inch margins on all sides of an A4 page. Send
submissions to:
Dr Sabrina Sestito
Aeronautical Research Laboratory
506 Lorimer St.,
Fishermen's Bend,
Victoria, AUSTRALIA 3207
Phone: (03) 647 7271 Fax: (03) 646 7868
Email: sestito@latcs1.oz.au
Workshop Organisers
Dr. Sabrina Sestito, Air Operations Division,
DSTO-Aeronautical Research Laboratory,
506 Lorimer St., Fishermen's Bend 3207,
Tel: 03-647 7271, Fax: 03 646 7868, Email: sestito@latcs1.oz.au
Dr. Simon Goss, Air Operations Division,
DSTO-Aeronautical Research Laboratory,
506 Lorimer St., Fishermen's Bend 3207,
Tel: 03-647 7274, Fax: 03 646 7868, Email: goss@mulga.cs.mu.oz.au
Mr. Gary Briscoe, Vision Laboratory,
University of Melbourne,
723 Swanston St, Carlton
Tel: 03-419 5403, Fax: 282 2490, Email: briscoe@vis.citri.edu.au
Prof. Terry Caelli, Vision Laboratory,
University of Melbourne,
723 Swanston St, Carlton
Tel: 03-282 2400, Fax: 282 2490, Email: tmc@vis.citri.edu.au
------------------------------
Date: Thu, 17 Jun 93 12:03:44 EDT
From: schultz@aic.nrl.navy.MIL
Subject: GA Conference workshop; CFP
CALL FOR PARTICIPATION
Workshop on Niching Methods at ICGA-93
When: July 20 or 21 during ICGA-93 (exact date/time to be decided)
Description: This workshop will examine current and proposed methods
for the formation and maintenance of subpopulations in the GA. Topics
include, but are not limited to:
1. Niching Methodology:
--Modifications to the GA which induce subpopulations
--Modifications to the GA which maintain stable subpopulations
--Full niching algorithms
--Expected Behavior
--Theoretical Results
--Maintenance of Diversity
2. Applications of Niching Methods:
--Multimodal function optimization
--Multi-objective function optimization
--Machine Learning and Classification
--Simulating biological, adaptive, and ecological systems
--Practical/Real-World Applications
The workshop will consist of short summaries of current research,
followed by a group discussion in which all attendees are encouraged to
make comments or pose questions.
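For attendees new to the area, here is a minimal sketch (ours) of one
classic niching method, fitness sharing in the style of Goldberg and
Richardson, in which an individual's fitness is divided by a niche count so
that crowded regions of the search space become less attractive:

   # Minimal sketch: fitness sharing.  distance(i, j) is a
   # problem-specific (hypothetical) distance between individuals.
   def shared_fitness(fitnesses, distance, sigma_share=1.0, alpha=1.0):
       n = len(fitnesses)
       shared = []
       for i in range(n):
           niche_count = 0.0
           for j in range(n):
               d = distance(i, j)
               if d < sigma_share:
                   niche_count += 1.0 - (d / sigma_share) ** alpha
           shared.append(fitnesses[i] / niche_count)  # d(i,i)=0, so count >= 1
       return shared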
Research Summaries: If you would like to present a short summary
(3-5 minutes) of your research related to niching, please send a one- or
two-paragraph description of the research, along with your name and e-mail
address to one of the organizers (e-mail addresses below). The exact
time limit for summaries will depend upon the number of participants,
but will not exceed 5 minutes and will be strictly enforced. An overhead
projector will be available; we recommend no more than 3 transparencies
per summary. Participants are encouraged to bring copies of their
recent, related research to distribute, including drafts of preliminary
papers or lists of their publications. DEADLINE: July 1st.
Other Attendees: To register, please send your name and e-mail address
to one of the organizers (e-mail addresses below), so that they can
adequately plan for attendance and keep you informed of the workshop.
DEADLINE: July 1st.
Organizers: Sam Mahfoud (mahfoud@gal4.ge.uiuc.edu)
Alan Schultz (schultz@aic.nrl.navy.mil)
------------------------------
End of ML-LIST (Digest format)
****************************************