Machine Learning List: Vol. 6 No. 2
Tuesday, January 25, 1994
Contents:
MLnet NEWS 2.1 (November '93)
The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>
----------------------------------------------------------------------
Date: Mon, 24 Jan 94 14:44:22 0000
From: MLnet Admin <mlnet@computing-science.aberdeen.ac.uk>
Subject: MLnet NEWS 2.1 (November '93)
*********************************************************************
*********************************************************************
MLnet NEWS 2.1 (November 1993)
The Newsletter of the European Network of Excellence in Machine Learning
(Electronic version)
*********************************************************************
*********************************************************************
The MLnet Community at Blanes
Eighty or so people from the Network and three invited speakers (Guy Boy,
Toulouse; Leslie Kaelbling, Brown, USA; and Jan Zytkow, Kansas) met for a
two-and-a-half-day Familiarization Workshop in Blanes at the end of September.
Four workshops were held: "Learning and Problem Solving" (coordinator:
Maarten van Someren, Amsterdam), "Multi-Strategy Learning" (coordinator:
Lorenza Saitta, Torino), "Machine Discovery" (coordinator: Pete Edwards,
Aberdeen) and "Learning in Autonomous Agents" (coordinator: Walter van de
Velde, Brussels).
The principal aims of the several workshops were to give young researchers
an opportunity to present their ideas and plans, receive feedback, and meet,
in an informal setting, other workers (both young and old) in their areas of
specialisation. A further useful feature of the workshops was the final session
where each of the workshop coordinators presented the key issues which had
arisen in their sessions. This led to a very important exchange of ideas,
where technical details, issues of research methodology and industrial
relevance were intermingled. (For more details of the highlights of the
workshops see the coordinators' summaries on p. 6)
The meeting also gave the community a chance to consider what the network
had so far organized, what was planned, and to discuss in some detail its goals
and rationale. (see pp. 1-2 for details).
I believe the workshop was very successful and has, by and large,
achieved its several objectives. Certainly it was very well and very
discreetly organized by Ramon Mantaras and his team. However, I think
if/when we run such familiarisation workshops in the future, we should also
have a commentator at the end of a paper session - preferably two
commentators, so presenters can get the benefit of both an academic and an
industrial perspective on their work.
At the community meeting, we discussed again the major role which the EEC
sees for Networks of Excellence, namely, the wide dissemination of relevant
expertise (in the case of MLnet, of Machine Learning and Knowledge
Acquisition techniques) to enquirers, to enable a greater awareness of these
important technologies. But we agreed that there needs to be a two-way
flow of this expertise. Not only should the NoEs be "beacons" for enquirers,
they should also seek to be "beacons" for the Commission, attempting to
influence future programs so that the technologies which we represent are
appropriately featured.
The Framework-4 program is currently being formulated by the
Commission, and I can assure you that the dialogue with the Commission has
already started.
Derek Sleeman
*********************************************************************
NEWS From the Technical Committees
At the community meeting in Blanes, reports were given on each of the
committee's activities/plans:
Electronic Communication Technical Committee
The Convener (Bob Wielinga) reported on the meeting with other Networks
held in Amsterdam in May and specifically on the decision taken by several
Networks to adopt the distributed computer environment "Andrew File
System" as the infrastructure & networking main facility. No decision was
taken by the MLnet Committee on whether to adopt the system but further
investigation was planned.
Amsterdam have implemented the ML mailing list and GMD have implemented
an ftp repository (see page 13).
The problem of nodes without ftp service arose and the decision was made to
investigate the costs of potential installations.
Industrial Liaison Technical Committee
The Convener (Yves Kodratoff) reported that the attempt to compile a DB of
Active Researchers/Research in European IT Industries in the fields of
ML/KA had failed, due to the almost complete absence of feedback from the
industries contacted. It was decided to undertake the compilation of an
informal DB which could then elicit some response once circulated.
The possibility of a regular column on Industrial ML/KA Products &
Activities in the newsletter was discussed, and the decision was taken to
start the column in the first issue of MLnet NEWS in 1994.
Research Technical Committee
Lorenza Saitta reported that the ML Journal board at its June meeting
decided:
a) IML94 would be held at Rutgers, New Brunswick, with a paper deadline of
8 February.
b) IML95 is to be held at Asilomar, south of San Francisco.
c) IML96 will be held in Europe.
d) A three year cycle has been agreed for IML between the East Coast of North
America, the West coast of North America and Europe.
e) The ML Journal will be available in electronic form, for a trial period of
two years from 1994, to all institutions which have an institutional
subscription.
Training Technical Committee
Derek Sleeman (acting Convener) reported that several strong proposals had
been received to run the 1994 Summer School, and after considerable
discussion it had been decided that this should be held near Paris (with Celine
Rouveirol being the Summer School's course director). The provisional date
of early July was discussed, but there are problems with this as the German
Universities are still in session. Celine promised to look into alternative
dates (September was suggested).
Derek Sleeman also reported that the TC was collecting information on
courses in ML & KA, and that once this had been collected it would be made
available through a Database. We then discussed how this information could
be made available to people/students outside the network.
Written Communication Technical Committee
The Convener (Derek Sleeman) reported that about 1500 copies of the "new
style" Newsletter had been circulated; copies were made available at ML93,
AAAI and IJCAI 93. Additionally, it had been distributed in text only form via
ML LIST.
Derek Sleeman asked for items for future issues, and ideas for new regular
features. He asked whether short technical notes should be included, and
noted he hoped to include notes from each of the Invited Speakers at the
workshop in the Spring '94 issue.
*********************************************************************
The 10th International Machine Learning Conference ML-93
Amherst 27-29 June 1993
reviewed by Claudio Carpineto
(FUB, Roma)
The 10th International Machine Learning Conference was held in the peaceful
setting of the University of Massachusetts at Amherst, 27 June to 29 June.
The conference brought together over 250 researchers; 44 papers were
selected from 164 submissions and, for the first time, the program was
organised into plenary and parallel sessions, without poster presentations.
The papers covered a wide range of problems in concept induction, neural
networks, genetic algorithms and reinforcement learning, although most papers
did not fit neatly into these conventional subareas. Much attention was paid to
the use of multiple learning paradigms for which novel integrated techniques
were proposed, such as combining model-based and instance-based learning,
qualitative models and inductive learning, queries and learning from
examples. Also, in expanding the focus of application, a number of different
learning domains were explored, including problem solving, databases,
natural language, qualitative reasoning, robotics.
On the whole, the papers seemed to strike a better balance between
theoretical contributions and experimental applications than past
conferences, except for the research on inductive logic programming which
was probably under-represented. Two papers at the extremes of this
spectrum were especially noteworthy. Mitchell and Thrun presented an
extension of the Explanation-Based Learning framework to neural networks
showing how well-known symbolic concepts map into their neural analogs.
Fayyad, Weir and Djorgovski presented a system for cataloguing photographic
sky surveys showing that simple decision tree based learning techniques can
solve scientifically significant problems.
In addition to the technical papers, there were a few invited talks, mainly
aimed at sharing research and perspectives with long-established fields
related to machine learning, such as statistics and psychology. The need to
reach a wider audience and to learn about their perception of machine
learning was strongly emphasized in the closing humorous talk given by
Langley.
The conference program also featured three workshops (Reinforcement
Learning: What We Know, What We Need; Fielded Applications of Machine
Learning; Knowledge Compilation and Speedup Learning) that ran in parallel
for two days after the end of the main conference.
*********************************************************************
BENELEARN-93
reviewed by
Maarten van Someren and Peter Terpstra
(SWI University of Amsterdam)
BENELEARN is the Dutch-Belgian workshop on Machine Learning that is
organised almost every year. Because different languages are spoken in the
lowlands, the workshop language is English. This has the additional benefit of
making the workshop accessible to others. This year the third BENELEARN
workshop was held on June 1, in Brussels, organised by Walter van de Velde
and his colleagues from the Vrije Universiteit Brussel. There were about 40
participants. The programme consisted of 12 presentations, including
invited talks by Luc de Raedt on ILP and by Anke Rieger on robot learning.
Luc de Raedt (KU Leuven) gave an overview of the main concepts and
variations of ILP. Anke Rieger and Katharina Morik (Univ. Dortmund)
presented a model of robot perception and action that includes multiple levels
of coupling between perception and action. This raises the fascinating
question of whether and why we need higher conceptual levels to find the
right action, and how the knowledge at these levels can be acquired. Francis
van Aeken and Geert
Sinnave (Vrije Universiteit Brussel) showed a video of a robot learning to
find its way in a room with obstacles. This robot was based directly on
"subsymbolic" perception-action coupling. It learned by Pavlov-style
conditioning and provided an interesting reference point in relation to
Rieger's talk: where are the limits of the Pavlov approach? Why do we build
concepts?
There were two talks on the use of ML in the context of knowledge acquisition.
Herman van Dompselaar (with Maarten van Someren, Univ. of Amsterdam)
presented a system that learns knowledge in the context of KADS models,
exploiting these models as bias, and Peter Terpstra (Univ. of Amsterdam)
described an interactive induction system based on an extended version of the
AQ algorithm. The system basically follows AQ's set covering algorithm but at
any time the user can guide the learning process, add or select additional
examples, add initial rules, etc.
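The set-covering scheme that AQ-style learners follow can be sketched in a few lines. This is a hedged illustration of the general idea only, not Terpstra's system: the dict-based rule representation and the greedy choice of tests are our own assumptions.

```python
# Sketch of AQ-style set covering: repeatedly learn one conjunctive rule
# from a positive "seed" example, remove the positives it covers, repeat.

def covers(rule, example):
    """A rule is a dict of attribute -> required value; it covers an
    example when every test holds."""
    return all(example.get(a) == v for a, v in rule.items())

def learn_rule(seed, negatives):
    """Grow a rule from the seed's own attribute tests, greedily adding
    the test that excludes the most negatives, until no negative is
    covered (assumes the data are consistent)."""
    rule, remaining, attrs = {}, list(negatives), list(seed)
    while remaining and attrs:
        best = max(attrs, key=lambda a: sum(n.get(a) != seed[a] for n in remaining))
        attrs.remove(best)
        rule[best] = seed[best]
        remaining = [n for n in remaining if covers(rule, n)]
    return rule

def learn_rules(positives, negatives):
    """Set covering: one rule per pass until every positive is covered."""
    rules, uncovered = [], list(positives)
    while uncovered:
        rule = learn_rule(uncovered[0], negatives)
        rules.append(rule)
        uncovered = [p for p in uncovered if not covers(rule, p)]
    return rules
```

An interactive system in the style described above would let the user intervene between passes, e.g. to edit a learned rule or supply extra examples before the next covering step.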
Jan Bioch, Wilem Verbeke and Marc van Dijk (Erasmus Universiteit,
Rotterdam) presented a comparison between neural nets and statistical
analysis techniques. There was one presentation about learning natural
language. Walter Daelemans, Steven Gillis and Gert Durieux (Universitaire
Instelling Antwerpen) applied ML to learning rules that indicate the position
of stress in words ("posItion" and not "pOsition"). Rules for this can readily
be learned from a set of examples. Jan Paredis (RIKS, Maastricht) studied
the effect of genetic algorithms in constraint propagation. In some conditions
GAs give an improvement over standard techniques. Niels Taatgen (RU
Groningen) presented his work on the effect of chunking on the complexity of
human problem solving, using think aloud protocols.
There were 3 papers on ILP, all from the Erasmus Universiteit. M. Polman
and S.H. Nienhuys-Cheng studied PAC learnability of (clausal) logical
languages, Nienhuys-Cheng and Van der Laag gave a specialisation method that
is complete (finds all possible specialisations), and Vermeer and Nienhuys-
Cheng presented a search strategy for generalisation of clauses that searches
both "down" from general to specific and "up", which allows more flexible
search.
The proceedings Benelearn-93 can be obtained from Walter van de Velde
W. van de Velde,
Artificial Intelligence Laboratory,
Vrije Universiteit Brussel,
Pleinlaan 2, B-1050 Brussel
email: walter@arti.vub.be
The next BENELEARN workshop will be held in spring or summer 1994 at the
Erasmus Universiteit Rotterdam, the Netherlands.
*********************************************************************
The 11th National Conference on Artificial Intelligence AAAI-93
Washington, DC, 11-15 July 1993
reviewed by Marco Botta
(University of Torino)
The Eleventh National Conference on Artificial Intelligence (AAAI-93) was
held in Washington, DC, on July 11-15, 1993. As in past years, the
Intelligent Systems at Work, Fifth Innovative Applications of Artificial
Intelligence Conference (IAAI-93), was held at the same time.
The conference program, cochaired by Richard Fikes (Stanford University)
and Wendy Lehnert (University of Massachusetts, Amherst), included twenty
tutorials, sixteen workshops and ten sessions of three parallel subsections
each, covering all aspects of Artificial Intelligence. Moreover, nine invited
talks were presented and a videotape of Allen Newell's lecture was shown.
In particular, the following tutorials covered Machine Learning topics:
- Symbolic and Neural Network Approaches to Machine Learning (Haym
Hirsh and Jude Shavlik)
- Multistrategy Learning (Ryszard Michalski and George Tecuci)
- Genetic Algorithms and Genetics-Based Machine Learning (David Goldberg
and John Koza)
Two workshops were dedicated to Machine Learning:
- Learning Action Models (organized by Wei-Min Shen)
- Knowledge Discovery in Databases (organized by Gregory Piatetsky-
Shapiro)
Among the invited presentations, the following deserve special mention.
The keynote address by Herbert Simon, "Artificial Intelligence as an
Experimental Science", discussed why a large part of our understanding of
intelligence - artificial as well as natural - will continue to depend upon
experimentation, and why much theory in AI will be relatively qualitative
and informal. The talk by Richard Sutton on temporal-difference learning
surveyed its theory and practice and compared it with conventional
supervised learning methods for prediction. Finally, the talk by Raymond
Reiter, "Thirteen Years of Nonmonotonic Reasoning Research: Where (and
What) is the Beef?", provided
examples where nonmonotonic theorizing has provided important insights
about, and solutions to, many outstanding problems, not only in AI but in
computer science in general.
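The core of the temporal-difference idea Sutton surveyed fits in a few lines: the prediction for a state is moved toward the immediate reward plus the discounted prediction for the next state, rather than toward an outcome known only at the end. A minimal tabular TD(0) sketch follows; the state names, rewards, step size and discount below are invented for illustration, not taken from the talk.

```python
# Tabular TD(0) prediction over a fixed list of (state, reward, next_state)
# transitions. V maps states to value estimates.

def td0_update(V, trajectory, alpha=0.1, gamma=0.9):
    """One TD(0) pass: move V(s) toward the bootstrapped target
    r + gamma * V(s')."""
    for s, r, s_next in trajectory:
        v_next = V.get(s_next, 0.0)            # unseen/terminal states predict 0
        td_error = r + gamma * v_next - V.get(s, 0.0)
        V[s] = V.get(s, 0.0) + alpha * td_error
    return V

# Usage: repeated passes over the same two-step episode A -> B -> terminal
# drive V('B') toward 1.0 and V('A') toward 0.9 * V('B').
V = {}
for _ in range(2000):
    td0_update(V, [('A', 0.0, 'B'), ('B', 1.0, None)])
```

The contrast with supervised prediction is visible in the target: a supervised learner would wait for the final outcome of the episode, while TD(0) bootstraps from its own next-state estimate after every step.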
Only one session out of 32 was specific to Machine Learning; three other
sessions covered Novel Methods in Knowledge Acquisition, Plan Learning and
Complexity in Machine Learning, with a total of sixteen research papers. Due
to the limited number of sessions on the topic, it is impossible to draw
conclusions about new trends in the field; however, from the computational
point of view, interesting results were presented by W. Cohen and by S.
Koenig and R. Simmons. Moreover, it is worth mentioning the paper by M.
Pazzani and C. Brunk, which describes an extension of FOCL.
The conference was attended by more than 1500 people, mostly from the USA,
but also from Europe, Canada, Australia and Japan. As the sessions on machine
learning and related topics were on the last day of the conference, only the
morning sessions had a large attendance.
*********************************************************************
The Second International Workshop on Multistrategy Learning, MSL-93
Harpers Ferry, May 1993
reviewed by Nicolas Graner
(University of Aberdeen & LRI Paris)
The Second International Workshop on Multistrategy Learning was held in
May 1993 in Harpers Ferry (West Virginia, USA), organised by Ryszard
Michalski and Gheorghe Tecuci. Although the audience was 80% North
American, Europe was also well represented.
Multistrategy learning is a fairly recent area of Machine Learning, concerned
with the issues arising when more than one learning paradigm is used to solve
a single problem. The papers presented at MSL-93 covered a broad range of
topics, from theoretical results concerning cooperating learners, through
techniques for combining several learning algorithms or representation
formalisms into a single system, to specific applications where more than one
strategy is required. Two main approaches can be identified: multiple
strategies can be applied "in parallel", each strategy being used to acquire or
refine part of the required knowledge, or "sequentially", where several
learning techniques cooperate to produce one piece of knowledge.
General considerations ranged from an analytical theory of how several
classifiers can collectively perform better than one, and a PAC-like model of
multi-source learners, to a psychological investigation of the tradeoffs
between analogical reasoning and problem solving.
A number of papers discussed multistrategy knowledge base revision,
including extending the inference capabilities of a knowledge base with
plausible reasoning, learning to plan in an unknown environment through
experimentation and induction, and combining ML with knowledge elicitation.
Proposed cooperations of several learning techniques applied to a single data
set included combinations of numeric and symbolic methods, induction and
deduction, data-driven and model-driven methods, and the generation of
several independent decision trees combined by a majority vote. Relations
between symbolic and sub-symbolic learning (neural networks and genetic
algorithms) were also investigated.
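The majority-vote combination mentioned above is simple to state: each independently trained tree classifies the example, and the ensemble returns the most frequent label. A toy sketch with stub classifiers (the "trees" below are placeholder threshold functions of our own invention, not the systems presented at the workshop):

```python
# Combining independent classifiers by simple majority vote.
from collections import Counter

def majority_vote(classifiers, x):
    """Each classifier is a callable returning a label; the ensemble
    answer is the most common label among the votes."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Stub "decision trees": threshold rules standing in for induced trees.
trees = [lambda x: 'pos' if x > 0 else 'neg',
         lambda x: 'pos' if x > 1 else 'neg',
         lambda x: 'pos' if x > -1 else 'neg']
```

With an odd number of binary voters there are no ties, and the vote can correct an individual tree's mistake whenever the majority is right, which is the informal argument for why such ensembles can outperform any single member.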
Finally, a number of practical applications, at various stages of development,
were presented, in the fields of 3D vision, robot navigation, game playing,
document understanding, and public health data analysis.
As compared with, for instance, ECML-93, there were relatively few
presentations of fielded real world applications. This is probably due to the
relative youth of the field of multistrategy learning, but possibly also to a
lesser emphasis on applications in the American ML community than in
Europe. Another striking difference with other conferences was the virtual
absence of Inductive Logic Programming, which plays an increasingly
prominent role in ML publications.
The workshop had been organised as a highly interactive forum, with plenty
of time for questions and discussion after each presentation, and a final
general discussion about the future of Multistrategy Learning. Several
demonstrations of operational systems also helped to prove that multistrategy
learning, while still struggling for recognition as a full-fledged area of ML,
has already some significant contributions and results to offer to the
community.
The proceedings of MSL-93 are available from the Center for Artificial
Intelligence, George Mason University, Fairfax, Virginia, USA.
*********************************************************************
Machine Learning Conference 1996
During the Editorial Board Meeting of the Machine Learning Journal, held in
Amherst (USA) in July 1993, it was decided that the Machine Learning
Conference 1996 shall be held in Europe, according to the "East Coast - West
Coast - Europe" cycle.
Any site willing to host the Conference should present a proposal, specifying
as many details as possible regarding:
- Chairperson
- Format of the conference
- Location of the conference and hosting institution
- Accommodation facilities and their costs
- Travelling information
- Conference budget
The proposals should be sent to the Convener of ML Network, Professor D
Sleeman, and to Professor Tom Dietterich. The proposals will be examined at
the MLnet's Research Technical Committee Meeting in April 1994 and the
final decision will be taken during the next Machine Learning Conference, in
July 1994.
Lorenza Saitta
Convener of MLnet's
Research Technical Committee
*********************************************************************
Focus on Machine Learning Research at the
University of Dortmund
by Katharina Morik
The chair for Artificial Intelligence (LS8) at the University of Dortmund
was established in 1991. It is led by Katharina Morik (full professor of
computer science). The three assistants are Siegfried Bell, Anke Rieger, and
Steffo Weber. A project researcher is Volker Klingspor. Six master students
and 12 PhD students are supervised by Katharina Morik. Most PhD students
do not have a position at the University of Dortmund; some are even from
other institutions. A close collaboration with the German National Research
Institute for Computer Science (GMD) is due to Katharina Morik leading the
GMD group of the ESPRIT project P2154 "Machine Learning Toolbox". The
group has developed the integrated knowledge acquisition, learning, and
revision system, MOBAL. This system, its underlying theoretical framework
and its applications are documented in a recent book (Morik, Wrobel,
Kietz, Emde 1993). Particular case studies are available (Morik et al.
1993; Sommer et al. 1993). The system has recently been extended to
serve as the ILP-toolbox, integrating ILP learning algorithms in a
comfortable environment.
In research, the AI chair emphasizes work on machine learning in the logic
paradigm. It is involved in the following projects:
B-LEARN II (ESPRIT P7274 with additional funding of Nordrhein-
Westfalen) The work package of the University of Dortmund is to apply
inductive logic learning to robotics. From the sensor data of a moving robot,
action-oriented features are learned. These features are then used for
learning operational concepts, such as moving through a doorway (Morik,
Rieger, 1993).
ILP (ESPRIT P6020) Work at Dortmund deals with the semantic foundation
of representations used for learning. In particular, multi-valued logics are
investigated. The theoretical work is applied to learning in databases (Bell,
Weber, 1993).
Human and Machine Learning (ESF) Inductive logic systems can be used to
acquire new knowledge, but also in order to perform knowledge revision. The
interaction of inductive and deductive reasoning is studied as a formal model
of children's development of complex concepts. The empirical data of Stella
Vosniadou about children's explanations of the day-night-cycle are modeled
using the MOBAL system which was developed at GMD. Order effects are
investigated in this formal representation.
The aim of the AI chair at Dortmund is to apply inductive logics to a variety of
performance systems (e.g., robots, databases, expert systems) and to use
formal models of learning as an experimental environment for cognitive
studies.
A series of technical reports has been started:
% Katharina Morik, 1993: Maschinelles Lernen, LS-8 Report No.1 (in
German)
% Anke Rieger, 1993: Neuronale Netzwerke, LS-8 Report No. 2 (in German)
% Katharina Morik, Anke Rieger, 1993: Learning Action-Oriented Perceptual
Features for Robot Navigation, LS-8 Report No. 3
% Siegfried Bell, Steffo Weber, 1993: A Three-valued Logic for Inductive
Logic Programming, LS-8 Report No. 4
Other publications:
% Katharina Morik, 1989: Sloppy Modeling, in: Morik (ed): Knowledge
Representation and Organization in Machine Learning, Springer, pp. 107 -
134.
% Katharina Morik, 1990: Integrating Manual and Automatic Knowledge
Acquisition, in: McGraw and Westfal (eds): Readings in Knowledge Acquisition
- Current Practices and Trends, Ellis Horwood, pp. 213 - 232.
% Joerg-Uwe Kietz, Katharina Morik, 1991: Constructive Induction -
Learning Concepts for Learning, Arbeitspapiere der GMD No. 543, St.
Augustin.
% Volker Klingspor, 1991: MOBAL's Predicate Structuring Tool,
Arbeitspapiere der GMD No. 592, St. Augustin.
% Katharina Morik, 1991: Underlying Assumptions of Knowledge Acquisition
and Machine Learning, in Knowledge Acquisition Journal, No.3, pp. 137 -
156.
% Anke Rieger, 1991: Modellbasierte Objekterkennung und neuronale
Netzwerke, TASSO-Report No. 9, GMD, St. Augustin.
% Anke Rieger, 1991: Matching-Verfahren fuer die wissensbasierte
Interpretation von Bildern, Arbeitspapiere der GMD No. 475, St. Augustin.
% Birgit Tausend, Siegfried Bell, 1992: Analogical Reasoning for Logic
Programming, in: Muggleton (ed) Inductive Logic Programming, Academic
Press, pp. 397 - 408.
% Katharina Morik, 1992: Applications of Machine Learning, in: Wetter et al.
(eds): Current Developments in Knowledge Acquisition, Springer.
% Siegfried Bell, Steffo Weber, 1993: On the Close Relationship between FOIL
and the frameworks of Helft and Plotkin, in: Procs. of the 3rd international
workshop on inductive logic programming.
% Joerg-Uwe Kietz, Katharina Morik, 1993: A Polynomial Approach to
Constructive Induction of Structural Knowledge, Arbeitspapiere der GMD No.
716, St. Augustin, to appear in Machine Learning Journal 1994.
% Katharina Morik, 1993: Balanced Cooperative Modeling, in: Machine
Learning Journal, 11, pp. 217 - 235.
% Katharina Morik, 1993: Maschinelles Lernen, in: Goerz (ed): Einfuehrung
in die Kuenstliche Intelligenz, Addison Wesley, pp. 247 - 301.
% Katharina Morik, George Potamias, Vassilis Moustakis, George Charissis,
1993: Using Model-Based Learning to Support Clinical Decision Making - A
Case Study, in: Procs. 4th Conference on AI in Medicine 1993.
% Katharina Morik, Stefan Wrobel, Joerg-Uwe Kietz, Werner Emde, 1993:
Knowledge Acquisition and Machine Learning - Theory, Methods and
Applications, Academic Press.
% Edgar Sommer, Katharina Morik, Jean-Michel Andre, Marc Uszinsky,
1993: What Online Machine Learning Can Do For Knowledge Acquisition - A
Case Study, Arbeitspapiere der GMD No. 757, St. Augustin.
For further information contact:
Prof. Dr. Katharina Morik
Univ. Dortmund, Fb4 LS VIII
44221 Dortmund Germany
Tel.: +49 231 755 5101
Fax: +49 231 755 5105
email: morik@ls8.informatik.uni-dortmund.de
*********************************************************************
Familiarization at Blanes: The Workshops
*********************************************************************
Report on WS1: Learning and Problem Solving
by Maarten van Someren and Remco Straatman
(SWI University of Amsterdam)
During the workshop on Learning and Problem Solving three topics emerged
that were addressed in several presentations:
(a) How can performance feedback and bias be represented and acquired?
(b) How can information about the problem solver, the current knowledge
and performance feedback be used to improve the knowledge of a learning
system?
(c) How can learning processes (e.g. involving different learning
techniques), problem solving processes and acquisition of bias and feedback
be controlled (or even planned)?
REPRESENTING AND ACQUIRING FEEDBACK AND BIAS
An important topic in machine learning is induction of general knowledge
from examples. Finding good generalisations and finding them efficiently
requires some kind of bias. In many systems this bias must be acquired from
the user. Often the examples must also be acquired from the user (or by
directed exploration or experimentation). How can bias be represented and
how can feedback be obtained? We list some ideas presented in the workshop
(without claiming that these are new or that the collection is exhaustive):
* Let the user solve the credit assignment task and ask him for examples in
the context of the task. APT (developed at LRI, Paris-Sud) is an integrated
knowledge acquisition tool that combines knowledge elicitation and machine
learning to acquire rules. APT's problem solver first tries to solve a
problem using the domain theory and the existing rules. The user is then
asked to comment on the trace of the problem-solving process, and machine
learning techniques are used to find suggestions for new rules or new
concepts.
* Let the user specify a metalevel description of the knowledge to be
acquired and use this as bias. Use special, user-oriented languages for
acquiring this. The ENIGME system (developed at LAFORIA, Paris) uses
the "inference layer" and "task layer" of models that are used in the
KADS method for knowledge acquisition as bias for learning rules from
examples.
* First elicit information that is easy to obtain such as terminology and a
first draft of the knowledge base, then use a simple, strongly biased learning
system to refine and generalise this. Use failure of this learning as a guide to
modify terminology. In SCALE (Cupit, Nottingham) this does not lead to
impressive learning but it facilitates collaboration with the user because it
is easy to understand what the learning system is doing.
USING PROBLEM SOLVING INFORMATION FOR LEARNING
If learning is driven by impasses in problem solving then the learning
process needs information about the impasse and its context. Which
information is needed and how can it be used?
* In an action planner, the derivation of an action plan along with
observations about the effect of actions can be used to refine the planning
knowledge. A declarative representation (in this case the event calculus)
makes it possible to abstract from search control in finding the plan. This
model naturally includes generating observations (in the style of CLINT and
declarative debugging). Sablon and Bruynooghe (Leuven) presented a system
based on this method.
* Plaza and Arcos (Blanes) use a full-scale model of the problem solving
architecture to assign credit and blame to the knowledge that was used in
problem solving. This means that learning is meta-level inference requiring
a self-model of the system. The self-model is needed to integrate learning
methods and problem solving systems by providing a model of what are
'successes' and 'failures' in the architecture. Learning operators use the
self-model and performance feedback to construct new knowledge.
* For many induction systems statistical properties of the training problems
are important. These can be used to select and combine learning systems. This
was mentioned by several authors.
CONTROLLING LEARNING AND PROBLEM SOLVING
The third issue that came up in the workshop was how learning and problem
solving control each other. Currently most systems that integrate learning
and problem solving are based on "failure-driven learning": performance
failure in problem solving hands control over to a (single) learning
component. Examples of the need for multiple learning methods and flexible
switching between learning and problem solving are:
* If a system can undertake actions that are specifically oriented to
learning (e.g. for a knowledge-based system: asking questions to a
domain expert, for a robot: exploring the environment, for a scientific
discovery system: making observations or performing experiments, for all
systems: looking for general structures and patterns; forgetting
observations) then it faces a control problem: at a particular moment, will
it try to learn or will it try to perform its "performance task"? If it
decides to learn, which learning operation will it select?
* Although most learning systems are based on algorithms, there are many
learning tasks for which no algorithm exists, while combinations of already
known learning algorithms may be able to solve the problem. These tasks
seem to involve knowledge about the system itself, learning goals and
knowledge about the environment (what can be expected from observations
and exploration or what are characteristics of the domain).
Four presentations contributed to this issue. One (by Mladenic, Ljubljana)
concerned the combination of several learning systems to find a best
generalisation for a set of cases, which can be viewed as a search process
(currently performed by hand but clearly a candidate for an automated
method). Mladenic presented empirical results of combinations of stochastic
and deterministic local optimization algorithms in different domains.
Combinations of algorithms enabled good speed vs. accuracy tradeoffs.
Three architectures were presented where the learning component was in
control instead of the problem solver. These addressed the problem of
selecting a combination of learning systems to achieve a complex learning
goal. These architectures use information about available knowledge (or
knowledge that can be elicited), about available problem solvers and
available learning systems to select systems which together achieve the
goal: constructing a knowledge-based system.
Nicolas Graner described MUSKRAT, a knowledge refinement and acquisition
tool that assists the user in choosing a knowledge acquisition tool, be it a
machine learning, knowledge elicitation or knowledge refinement tool. To do
this, problem solvers are related to tools by descriptions of the kind of data
they require and their functionality. A common representation language for
data (CKRL) is used to enable sharing and reuse of data between tools and
problem solvers.
Maarten van Someren discussed a similar architecture. Given learning data
and a learning goal it is still difficult to select and combine machine-
learning techniques that achieve the goal. A language that describes
learning tasks that can be used to specify and reduce learning goals was
presented. This language describes learning goals as knowledge systems
(a combination of a problem solver and knowledge base), the form of
the learning data, and learning techniques as operators that produce
knowledge systems from learning data. A prototype succeeded in selecting
learning systems for multiple learning tasks.
Aamodt and Althoff aim to help developers build systems that integrate
learning and problem solving, so that such systems learn continuously from
problem solving experience. Case-based reasoning is used as the method for
learning from experience within these systems because it is simple and
well-suited to the task. Describing the function of learning and problem
solving systems in a uniform way enables one to select and instantiate a
system by starting at the "knowledge level", after which a symbol-level
architecture can be chosen and instantiated.
Interestingly enough, the issue of controlling learning, problem solving and
acquisition of feedback and other data was a theme that came up in the other
workshops as well: building "multi-strategy learning systems" by
combining existing methods and systems tends to introduce a control problem,
controlling data acquisition is a key issue in "discovery learning" and also in
"autonomous learning systems".
The invited speaker, Guy Boy, discussed his experience with machine
learning and knowledge acquisition in different projects in the aerospace
industry. One intriguing observation was that machine learning research and
applications usually assume a task environment that is fixed. However, in
practice the result of learning is often not just better performance of a
system on the same task but redefinition of the task. Boy gave an example of
learning to operate a system via a special user interface. The process of
learning to operate the system motivated a series of improvements in the
interface that actually changed the initial task. One could say that the
result of learning appeared not only as better knowledge in the controller
but also as a better definition of the task.
The workshop clearly fulfilled its role of acquainting participants with
each other's work and clarifying the issues related to learning and
problem solving.
*********************************************************************
Report on WS2: Multi-Strategy Learning
by Lorenza Saitta
(University of Torino)
In the workshop on Multistrategy Learning, fourteen papers were
presented, spanning a wide range of topics, both methodological
and applied.
The main issue that emerged from the presentations and from the discussion
seemed to concern the very core of the subfield: "What is multistrategy
learning"? In fact, this label has been applied with different semantics to
different projects. In some of them, multistrategy may be located at the
approach level (for instance, combining genetic or statistical and symbolic
approaches), whereas, in others, multistrategy envisages the presence of
different methodologies within the same approach (for instance inductive and
deductive methods in symbolic learning). Finally, multistrategy has also
been equated with the availability of a library of systems or algorithms to a
supervisor, which has to choose among them.
The ways in which a multistrategy learning process is supposed to
operate also differ: in some cases the various learning components (be they
approaches, systems or methods) offer alternative solutions to the same
task, and the supervisor has to select among them the "most suitable" one
according to its current learning goal. In other cases all the components must
be used (in different phases or at the same time) in order to achieve a
common goal, which no single component could achieve alone.
Because of this variety of definitions, one may wonder whether
"multistrategy learning" can be meaningfully characterized as a unique
approach, or whether it simply coincides with "learning".
One difficulty in finding an appropriate definition, and, hence, in answering
the preceding questions, seems to arise from the undefined semantics of the
word "strategy" as applied to learning. A strategy, in general, is a planned
sequence of elementary steps, directed to reach a specified goal. How can
basic moves be defined in learning? In what way may they differ? How can
they be combined to form strategies?
The workshop left these questions unanswered, leaving them as future
challenges.
*********************************************************************
Report on WS3: Machine Discovery
by Peter Edwards
(University of Aberdeen)
The workshop attracted a considerable number of submissions, 12 of which
were accepted for presentation. A broad range of discovery-related activities
was represented, including law discovery (Dzeroski, Van Laer, Moulet,
Cheng), knowledge discovery in databases/large datasets (Klosgen, Richeldi,
Carpineto, Wallis), experimentation (Wendel), and theory refinement
(Alberdi & Sleeman, Gordon, Metaxas). A wide variety of applications were
described: chemical kinetics (Dzeroski), evaluation of DBMS performance
(Moulet), communication network management (Richeldi), neurophysiology
(Wendel), botany (Alberdi), solution chemistry (Gordon), NMR of
carbohydrates (Metaxas), Alzheimer's disease (Wallis).
The workshop began with an invited talk by Jan Zytkow of Wichita State
University, USA. In his talk, entitled "Putting Together a Machine
Discoverer: Basic Building Blocks", Zytkow gave an overview of the field of
Machine Discovery, before addressing issues such as the difference between
discovery and learning, and the requirements for a discoverer. The main
thrust of his first argument was that discoverers are autonomous, whereas
learners depend on a teacher. According to Zytkow, the aim of Machine
Discovery is thus to limit the amount of "external" assistance. Statements
such as: "All good learners are still discoverers" and "We were discoverers
before we became learners" led to some interesting discussion! Zytkow listed
the following techniques which he felt were required to build a powerful
discoverer: linkage to empirical systems, experimentation strategies, theory
formation from data, recognition of the unknown, identification of similar
patterns.
The first session contained four papers, all of which addressed the issue of law
discovery. Dzeroski described the LAGRANGE system, which extends the scope
of discovery techniques to deal with dynamic systems. LAGRANGE is able to
find a set of differential and/or algebraic equations which govern the
behaviour of a dynamic system. This is in contrast to existing systems such as
BACON, which have focused on laws describing static situations. The second
paper (Van Laer) described an extension to the CLAUDIEN inductive logic
programming system to allow it to handle numerical information. A simple
conjunctive learning algorithm is employed, capable of finding inequalities
such as: X * Y <= 5.17. An application-oriented view of numerical
discovery was presented by Moulet. The ARC.2 system has been applied to the
evaluation of Database Management System (DBMS) performance. The system
attempts to discover a "cost model", relating the time spent executing a
database query to the characteristics of the query. ARC.2 has discovered a
number of simple cost models for the GeoSabrina DBMS, which are in
agreement with those derived by human experts. The HUYGENS system
(Cheng) uses a different approach to discover quantitative laws. A search
through a space of diagrammatic representations of the problem is
performed, rather than a search through algebraic formulas. Cheng described
a series of diagrammatic operators and heuristics which were used to
simulate the discovery of Black's Law.
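The numerical extension to CLAUDIEN described above finds inequalities such as X * Y <= 5.17 from examples. The core of that idea, fitting the tightest threshold consistent with all positive examples of a numeric term, can be sketched as follows (a toy illustration only, not CLAUDIEN's actual algorithm; the term set and function names are invented):

```python
def learn_upper_bounds(examples):
    """For each numeric term, find the tightest upper bound `term <= c`
    satisfied by all positive examples: c is simply the maximum value
    the term takes over the examples."""
    # hypothetical term language: X, Y, and their product
    terms = {
        "X": lambda x, y: x,
        "Y": lambda x, y: y,
        "X * Y": lambda x, y: x * y,
    }
    return {name: max(f(x, y) for x, y in examples)
            for name, f in terms.items()}
```

Any positive example violating a candidate bound forces the bound upward, so the learned constant is the most specific bound still covering all positives.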
The second session focused on discovery of knowledge in databases (and large
datasets). Klosgen described the Explora KDD system, which employs a
number of techniques to reduce the potentially huge search space encountered
when searching for regularities in databases. The system constructs a
hierarchical search space of hypotheses based on a user-defined pattern and
organises and controls the search for interesting patterns within this space.
The system also supports the presentation of discovered information through
a graphical user-interface. In the next presentation, Richeldi described a
comparison of statistical and connectionist approaches for detection of
relevant features in a large database containing information on telephone
network maintenance activities. The data contain large numbers of irrelevant
and redundant features, as well as inter-dependent features. The GALOIS
system, for incremental determination of concept lattices, was described by
Carpineto. A concept lattice is a set of conceptual clusters linked by the
general/specific relation. A number of applications of concept lattices were
presented, including discovery of dependencies in databases. A large scale
medical application was discussed by Wallis (25,000 examples, approx. 300
attributes). A pre-processing method employing transformations such as
elimination of irrelevant features, application of functional dependencies and
aggregation of attributes was outlined.
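The concept lattices that GALOIS builds incrementally can, for tiny object/attribute contexts, be computed by brute force: a formal concept pairs a set of objects (extent) with the set of attributes they all share (intent), and the intents are exactly the closed attribute sets. A minimal non-incremental sketch (function names and the toy context are illustrative assumptions, not from GALOIS):

```python
from itertools import combinations

def closure(context, attrs):
    """Attributes shared by every object that has all of `attrs`."""
    objs = [o for o, a in context.items() if attrs <= a]
    if not objs:  # no object matches: closure is the full attribute set
        return frozenset().union(*context.values())
    common = set(context[objs[0]])
    for o in objs[1:]:
        common &= context[o]
    return frozenset(common)

def concepts(context):
    """All formal concepts (extent, intent) of a binary object/attribute
    context, by closing every attribute subset (naive; fine for tiny data)."""
    all_attrs = sorted(frozenset().union(*context.values()))
    intents = set()
    for r in range(len(all_attrs) + 1):
        for combo in combinations(all_attrs, r):
            intents.add(closure(context, set(combo)))
    return [({o for o, a in context.items() if i <= a}, i)
            for i in sorted(intents, key=lambda s: (len(s), sorted(s)))]
```

The general/specific links of the lattice then follow from set inclusion between intents; incremental algorithms such as GALOIS avoid the exponential subset enumeration used here.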
The final session focused on experimentation and theory refinement. The
MOBIS system (Wendel) is a case-based tool designed to assist
neurophysiologists in the design and analysis of simulation experiments with
biological neural networks. The system performs experiments with varying
parameter settings in order to identify surprising new phenomena in a
network's behaviour. Wendel described techniques used to transform
numerical data derived from the simulation into a symbolic description of
neuronal behaviour which is compared with previous experimental cases. The
remaining presentations addressed issues in theory refinement; Alberdi
described a psychological study to determine the search strategies and
heuristics employed by expert botanists when performing plant classification
tasks. Subjects were presented with puzzling phenomena and their
refinement strategies were studied. A computational model of theory revision
in the context of scientific classification was proposed based on the
psychological results. The next speaker (Gordon) described the HUME system
which employs simple qualitative models as part of a multistrategy
architecture which integrates both data and theory-driven discovery
methods. Gordon presented an overview of experimental results from
eighteenth and nineteenth century solution chemistry and demonstrated the
utility of qualitative models in guiding the theory construction process. The
final workshop paper (Metaxas) discussed the CRITON system - an
incremental concept learning system operating in the domain of NMR of
carbohydrates. CRITON is able to deal with an incomplete instance language by
introducing new descriptors. It does this by incorporating descriptors from a
library or through interaction with an oracle (user).
*********************************************************************
Report on WS4: Learning in Autonomous Agents
by Walter Van de Velde
(VUB, Brussels)
1 Workshop Topic and Scope
This workshop focussed on the issue of learning and autonomy. Since the
beginnings of Machine Learning the idea of learning as a means for making
systems more autonomous has been widely publicized. However, to date the
demonstrations of autonomy through learning have not been very convincing.
Learning systems have to be carefully engineered, primed with a language or
lots of background knowledge, and carefully trained by a benevolent teacher.
The learnable part is usually only a fraction of the total system, and what is
learned is more often than not foreseen by the teacher or designer.
Genuine autonomy is the capacity to develop intelligent behavior in a wide
variety of situations that can not be enumerated in advance, i.e., at design
time. We call a system intelligent if it acts rationally, i.e., according to some
knowledge about the world that we ascribe to it (whether it is explicitly
represented in the system or not is irrelevant at this point). Since it is in
general impossible to foresee beforehand what the characteristics and factors
are that will determine whether an action is rational or not, the behavior of
an autonomous system can not be programmed. It needs to be developed by the
agent and is aimed at self-preservation while acting in the environment.
This workshop focused on learning techniques in autonomous systems. In
particular it looked at those techniques that fall within the broad paradigm of
behavior orientation, as opposed to knowledge orientation. Learning tasks
within this scheme include the learning of new behaviors and the coordination
of behaviours. Learning related to the link between behaviour orientation
and symbolic knowledge also falls within the scope of this workshop.
2 Organization
This workshop was organized in the context of the MLnet familiarization
workshop, Blanes, September 23-25, 1993. The workshop chair was Walter Van
de Velde; the co-organizers were Attilio Giordana and Joel Quinqueton. Thanks
to Derek Sleeman and Ramon Lopez de Mantaras for the overall organization of
the MLnet familiarization workshop.
3 Workshop Schedule
Friday 24
% 9:30-10:30 Invited Talk (Leslie Pack Kaelbling)
% 10:30-11:00 Introduction to the workshop (Walter Van de Velde)
% 11:15-13:15 MOSCA: A Multi-Agent Framework for Machine Learning
(Philippe Reitz and Joel Quinqueton).
Learning for Multi-agents Problem Solving (Laurence Vignal).
Memory Limitations and Optimization of Training Sequences for Incremental
Learners (Antoine Cornuejols).
% 15:00-17:00 New challenges from Autonomous Agents Research (Walter
Van de Velde).
Techniques for Adaptability (Leslie Pack Kaelbling).
Discussion.
4 The papers
The invited talk by Leslie Kaelbling was a perfect introduction to the
workshop. In her clear and crisp presentation she described the problem of
building autonomous agents that learn to act, and a common framework for
reinforcement type of learning algorithms (e.g. Watkins' Q-learning). A
series of experiments of increasing complexity was reported on. Leslie also
indicated current problems with the approach of reinforcement learning.
These were discussed at greater length during her second presentation in the
afternoon.
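Watkins' Q-learning, mentioned above as an instance of the common reinforcement learning framework, reduces to a simple tabular update rule applied after every action. A minimal sketch (the environment interface, parameter values, and the corridor example below are assumptions for illustration, not from the talk):

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, start_state=0):
    """Tabular Q-learning (Watkins): after each step, apply
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = start_state, False
        while not done:
            if random.random() < epsilon:      # explore
                a = random.randrange(n_actions)
            else:                              # exploit, breaking ties randomly
                best = max(Q[s])
                a = random.choice([i for i, v in enumerate(Q[s]) if v == best])
            s2, r, done = env_step(s, a)       # env_step -> (next_state, reward, done)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

On a toy four-state corridor where action 1 moves right and reaching the last state pays reward 1, the learned greedy policy moves right in every state.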
The first abstract that was presented at the workshop described MOSCA.
MOSCA is a contribution to a descriptive framework for learning in a
learner-teacher situation. The MOSCA model is based on five elements, the
Master, the Oracle, the Probe, the Customer and the Apprentice. These
elements are roles played by at least two agents, a learner and a teacher. It
was shown how various messages are used to control knowledge evolution.
The MOSCA work at first seemed far from autonomous learning.
However, it was clarified in the discussion that a learning scenario involving
a teacher does not necessarily make the learner less autonomous, as long as
the learner maintains its autonomy with respect to exploiting the resources
that it has for learning (of which the teacher is one). It was also noted that
the MOSCA model includes, as a special case, the reinforcement learning
scenario that was presented by Leslie. The framework might
therefore be used to study, for example, the meaning of optimality in various
learning situations.
The second abstract presented a multi-agent system for the analysis of
chromosome sequences. The main idea presented was to use multiple agents
that each look at parts of the problem and that are in a kind of competition to
extend their domain. Successful agents combine into a single new one. They
may also disintegrate. The learning technique being explored here is
reminiscent of genetic algorithms. However, it starts from fairly elaborate
agents (i.e., a strong bias in the population) so that the genetic search can
progress much faster. An interesting idea would be to explore the evolution of
the set of agents over multiple problems.
The third abstract addressed the problem of characterizing the best ordering
of data to train an incremental learner to obtain a certain performance.
Autonomous learners need to learn incrementally, need to maintain a best
hypothesis to secure performance and need to do all this in real time and
within given (imposed) resource bounds. Given such constraints, what is the
best teaching strategy? Antoine investigates two criteria that involve
algorithmic information theory and the minimum description length principle.
One is a constraint expressing that the learner cannot store all past
observations; the other is the expression of a minimization principle on
the "optimal learning trajectory". It can be shown that these two criteria
together yield the intuitive ideas about the best teaching strategy. The model
being used is, of course, fairly simple. For example, it assumes that the
teacher knows a lot about how the learner learns, and it does not include the
possibility of intermediate probing of the learner's evolution. Nevertheless
the general spirit of the work looked promising and generated lots of
discussion.
Walter, in his presentation, started from a perceived shift in autonomous
agents research. The traditional approach is to realize a cycle of perception,
interpretation, planning, control and execution. Machine learning applies
quite naturally to this scheme in two roles, first to enhance each of these
functions, and second to shortcut them. However, in autonomous agents'
research an important shift is taking place toward a different orthogonal
decomposition of intelligence, namely as the parallel activity of simple
behavior entities that each in themselves implement a simple but complete
behavior. More generally the shift was characterized as a move from
knowledge to behavior, from representation to structure, from inference to
dynamics, from learning to adaptation and from cooperation to coordination.
Within the paradigm of behaviour orientation the issues for machine learning
have to be reconsidered.
The absence of discovery work from the autonomous agents workshop nicely
illustrates the particular approach that it wanted to emphasize, namely the
shift toward behaviour rather than toward knowledge. Knowledge, it is
argued, is a means to an end (intelligent behaviour) and it is worthwhile to
investigate to what extent its representation can be avoided as an
implementation technique. Nevertheless some of the abstracts still relied on
knowledge and representation. For example Joel described learning as the
acquisition of knowledge, and Leslie uses a hierarchy of more and more
abstract action maps to improve learning. Walter argued that
representations have their origin in an emergent structure outside the agent
that plays a role in the coordination of processes within the agent. These
structures, it is hypothesized, can then be "internalized" by squeezing in a
new process that, under the right conditions, functions as a catalyzer for the
same coordination. Of course, as usually happens whenever it is mentioned,
the notion of emergence in complex systems was argued about at great length.
Here, the main controversy was whether emergence is an objective notion or
not (i.e., relative to the ontology used by the observer). Also, some (e.g., Bob
Wielinga) argued that reflection is a necessary capacity for continuous
build-up of intelligence, for example, for an agent to recognize an emergent
behavior and to exploit it toward more complex or effective behavior.
Interesting as they were, these high-level arguments could not be
substantiated with concrete experimental results. The techniques currently
being experimented with go only some way in dealing with the challenges of
autonomy. For example the requirements of real-time performance (for
behaviour and for learning) remain hard to satisfy as witnessed by some of
the results obtained by Leslie and Laurence. Also, the models of learning that
are being used are often too simple. The limitations of Antoine's teaching
scenario and of the basic reinforcement model are clear. Nevertheless ideas
from game-theory, operations research and complex systems research were
put forward as hints to potential solutions. Also, although not enough yet,
concrete applications and experiments with real robots are keeping the
research grounded.
6 Conclusion
It was expected that this workshop would be exploratory in scanning what can
be done in this area. As witnessed by the heterogeneous collection of papers
the topic of autonomy was interpreted in many different ways. Yet, the
discussions demonstrated that the machine learning community is open-minded
toward results that have been achieved in other paradigms, including such
non-traditional ones as behavior oriented approaches. As in those areas, the
discussions are often hampered by terminological problems and background
mismatches. It is clear that much work is still needed and only few people are
working on a comprehensive research program to achieve full autonomy. This
workshop, however, succeeded in addressing important and challenging
directions for research in machine learning.
*********************************************************************
The Fourth International Workshop on Inductive Logic Programming (ILP94)
September 12-14, 1994, Bad Honnef/Bonn, Germany
Call for Papers (Short Version)
General Information
Originating from the intersection of Machine Learning and Logic
Programming, Inductive Logic Programming (ILP) is an important and
rapidly developing field that focuses on theory, methods, and
applications of learning in relational, first-order logic representations.
ILP94 is the fourth in a series of international workshops designed to bring
together developers and users of ILP in a format that allows a detailed
exchange of ideas and discussions. Reflecting the growing maturity of the
field, ILP94 for the first time will offer a systems and applications exhibit
as an opportunity to demonstrate the practical results and capabilities of
ILP. ILP94 will take place in Bad Honnef, a small resort town close to Bonn
in the Rhine valley and adjacent to the Siebengebirge nature park.
Participants will be able to take advantage of Bad Honnef's proximity to
medieval castles and of the new wine season that starts at the time of the
workshop.
Submission of papers
Reflecting the broadening scope of the field, ILP94 invites papers covering
the three main aspects of ILP, namely inductive data analysis and
learning in first-order representations, inductive synthesis of non-
trivial logic programs from examples, and inductive tools for software
engineering.
Please submit four hard copies of your paper to the workshop chair:
Stefan Wrobel GMD,
I3.KI, Schloss Birlinghoven,
53757 Sankt Augustin,
Germany.
E-Mail: ilp-94@gmd.de,
Fax:+49 2241 14 2889
Tel: +49 2241 14 2670
to be received on or before May 31, 1994.
Length of papers should be reasonable and adequate for the topic, but no more
than 20 pages. Please use LaTeX if at all possible. Authors will be notified
of acceptance or rejection by July 15, 1994, and camera-ready copy will
be due on August 9, 1994. Accepted papers will be published as a GMD
technical report to be distributed at the workshop and officially available
to others from GMD afterwards. It is planned to produce an edited book after
the workshop.
Program Committee
Francesco Bergadano (Italy) Ivan Bratko (Slovenia)
Wray Buntine (USA) William W. Cohen (USA)
Luc de Raedt (Belgium) Koichi Furukawa (Japan)
Jorg-Uwe Kietz (Germany) Nada Lavrac (Slovenia)
Stan Matwin (Canada) Stephen Muggleton (UK)
Celine Rouveirol (France) Claude Sammut (Australia)
Further Information
A full call for papers can be obtained via anonymous FTP from
the ML Archive at GMD (server ftp.gmd.de, file
/MachineLearning/general/CallsForPapers/ilp94.ascii or .ps). To
receive the complete registration brochure, as soon as it is available,
please send E-Mail to ilp-94@gmd.de, specifying your name and address,
E-Mail & Fax; please also indicate in your message whether you plan to
submit a paper.
*********************************************************************
Procedures To Join MLnet
Initial enquiries will receive a standard information pack (including a copy
of the Technical Annex).
All centres interested in joining MLnet are asked to send the following to
MLnet's Academic Coordinator:
% A signed statement on Institutional notepaper saying that you have read and
agreed with the general aims of MLnet given in the Technical Annex;
% One hard-copy document listing the Machine Learning (and related)
activities at the proposed node, and three copies of any enclosures; the
document should include a list of the scientists involved in these field(s), a
half-page curriculum vitae for each senior scientist, a list of current
research students, and lists of recent grants and relevant publications over
the last 5-year period;
% A statement of the Technical Committees which the Centre would be
interested in joining, and a succinct statement of the potential contributions
of the Centre to the Network and its Technical Committees.
Two members of the Management Board will be asked to look at the material
in detail and will present the proposal at the next Management Board meeting.
Through the Network's Coordinator, the members may ask for additional
information.
The Academic Coordinator will be in touch with the Centre as soon as possible
after the Management Board meeting.
The Management Board is not planning to set a fixed timetable for
applications, but advises potential nodes that it currently holds Management
Board meetings in November, April and September, and that papers would
have to be received at least six weeks before a Management Board meeting to
be considered.
*********************************************************************
Forthcoming Events
(Each entry lists: event and location; date; submission deadline, where given;
contact; tel.; fax.; email)
DCCA-4, 4th IFIP Working Conf. on Dependable Computing for Critical
Applications, San Diego, CA
4-6 Jan 94
Keith Marzullo
[+1] 619 534 3729
[+1] 619 534 7029
dcca@cs.ucsd.edu
2nd World Congress on Expert Systems, Lisbon, Portugal.
10-14 Jan 94
Congress Secretariat
[+1] 301 469 3355
[+1] 301 469 3360
8th Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff,
Canada
30 Jan-4 Feb 94
Brian Gaines
[+1] 403 220 5901
gaines@cpsc.Ucalgary.ca
2nd International Conference on Expert Systems for development, Bangkok,
Thailand
28-31 Mar 94
Dr. R. Sadananda
[+66] 2 524 5702
[+66] 2 524 5721
sada@cs.ait.ac.th
3rd Annual Conference on Evolutionary Programming (EP'94), San Diego,
CA.
24-25 Feb 94
30 Jun 93
Dr. L. J. Fogel
fogel@sunshine.ucsd.edu
Knowledge-Based Artificial Intelligence Systems in Aerospace and Industry,
Orlando, Florida
5-6 Apr 94
7 Sep 93
Orlando '94
[+1] 206 676 3290
[+1] 206 647 1445
abstracts@mom.spie.org
The 7th Int. Conf. on Industrial & Engineering applications of AI & Expert
Systems, Austin, Texas
31 May-3 Jun 94
5 Oct 93
Dr. Frank D. Anger
[+1] 904 474 3022
[+1] 904 474 3129
fa@cis.ufl.edu
12th European Meeting on Cybernetics and Systems Research (EMCSR'94),
Vienna, Austria
5-8 Apr 94
8 Oct 93
I. Ghobrial-Willmann
[+43] 1 5353 2810
[+43] 1 5320 652
sec@ai.univie.ac.at
3rd Foresight Conf. on Molecular Nanotechnology: CAD of Molecular Systems,
Palo Alto, California
14-16 Oct 93
14 Oct 93
Foresight Institute
[+1] 415 324 2490
[+1] 415 324 2497
foresight@cup.portal.com
AAAI-94 Spring Symposium on AI in Medicine: Interpreting Clinical Data
(AIM-94), Stanford, CA
21-23 Mar 94
15 Oct 93
Dr. Serdar Uckun
[+1] 415 723 1915
[+1] 415 725 5850
aim-94@camis.stanford.edu
AAAI-94 Spring Symposium on Goal-Driven Learning, Stanford
21-23 Mar 94
15 Oct 93
Prof. Ashwin Ram
[+1] 404 853 9372
ashwin@cc.gatech.edu
AAAI-94 Spring Symposium on Believable Agents, Stanford
21-23 Mar 94
15 Oct 93
Joseph Bates
joseph.bates@cs.cmu.edu
AAAI-94 Spring Symposium on Active NLP: NL Understanding in Integrated
Systems, Stanford
21-23 Mar 94
15 Oct 93
Charles Martin
martin@cs.uchicago.edu
AAAI-94 Spring Symposium on Applications of Computer Vision in Medical
Image Proc., Stanford
21-23 Mar 94
15 Oct 93
William M. Wells III
[+1] 617 278 0622
sw@ai.mit.edu
AAAI-94 Spring Symposium on Computational Organization Design, Stanford
21-23 Mar 94
15 Oct 93
Ingemar Hulthage
[+1] 213 740 4044
[+1] 213 740 9732
hulthage@usc.edu
AAAI-94 Spring Symposium on Decision Theoretic Planning, Stanford
21-23 Mar 94
15 Oct 93
Steve Hanks
[+1] 206 543 4784
hanks@cs.washington.edu
AAAI-94 Spring Symposium on Detecting and Resolving Errors in
Manufacturing Systems, Stanford
21-23 Mar 94
15 Oct 93
Maria Gini
[+1] 612 625 5582
gini@cs.umn.edu
AAAI-94 Spring Symposium on Intelligent Multi-Media Multi-Modal
Systems, Stanford
21-23 Mar 94
15 Oct 93
Joe Marks
[+1] 617 621 6667
marks@crl.dec.com
AAAI-94 Spring Symposium on Software Agents, Stanford
21-23 Mar 94
15 Oct 93
Oren Etzioni
AAAI-94 Spring Symposium on Physical Interaction and Manipulation,
Stanford
21-23 Mar 94
15 Oct 93
Steven Whitehead
[+1] 617 466 2193
[+1] 890 9320
swhitehead@gte.com
Computers in Healthcare Education Symposium, Philadelphia, PA
27-29 Apr 94
15 Oct 93
Jerilyn Garofalo
garofalo@shrsys.hslc.org
First International Conference on Medical Physics and Biomedical
Engineering, Nicosia, Cyprus
5-7 May 94
15 Oct 93
E. Keravnou
[+357] 2 360 589
[+357] 2 360 881
elpida@jupiter.cca.ucy.cy
Fourth International Workshop on Computer Aided Systems Technology
(CAST'94), Ottawa, Canada
16-20 May 94
15 Oct 93
Dr. Tucer Oren
[+1] 613 564 5068
[+1] 613 564 7089
oren@csi.ottawa.ca
Florida AI Research Symposium (FLAIRS-94), Pensacola Beach, Florida
5-7 May 94
18 Oct 93
D. D. Dankel II
[+1] 904 392 1387
[+1] 904 392 1220
ddd@panther.cis.ufl.edu
AAAI Fall Symposium on Computational Biology, Pittsburgh, PA
22-24 Oct 93
22 Oct 93
[+1] 415 328 3123
fss@aaai.org
The Second International Conference on the Practical Application of Prolog,
London, UK
27-29 Apr 94
22 Oct 93
World Conference on Educational Multimedia and Hypermedia, Vancouver,
Canada
25-29 Jun 94
22 Oct 93
4th Annual Symposium on Computational Biology, Pittsburgh, PA
24-25 Oct 93
24 Oct 93
Lucille Jarzynka
[+1] 412 624 6978
jarzynka@cs.pitt.edu
15th Annual Int. Conf. of the IEEE EMBS'93, San Diego, CA
28-31 Oct 93
28 Oct 93
IEEE EMBS'93 Sec.
[+1] 619 453 6222
[+1] 619 535 3880
70750.345@compuserve.com
IEE Colloquium on Symbolic and Neural Cognitive Engineering, Savoy Place
14-Feb-94
29 Oct 93
Prof. I. Aleksander
[+44] 71 225 8501
[+44] 71 823 8125
IEE Colloquium on Molecular Bioinformatics, Savoy Place
28-Feb-94
29 Oct 93
Dr. S. Schulze-Kremer
[+49] 30 463 3040
[+49] 30 4644 097
steffen@kristall.chemie.fu-berlin.de
Am. Ac. of Pediatrics, Section on Computers and Other Technologies (SCOT),
Washington, D.C.
31-Oct-93
31 Oct 93
M. J. Feldman, M.D.
mfeldman@warren.med.harvard.edu
First World Congress on Computational Medicine, Public Health, and
Biotechnology, Austin, TX
24-28 Apr 94
1 Nov 93
Compmed 1994
[+1] 512 471 2472
[+1] 512 471 2445
compmed94@chpc.utexas.edu
4th Int. Conf. on Principles of Knowledge Representation and Reasoning
(KR'94), Bonn, Germany
24-27 May 94
8 Nov 93
Werner Horn
[+43] 1 5353 2810
[+43] 1 5320 652
werner@ai.univie.ac.at
World Congress on Medical Physics and Biomedical Engineering, Rio de
Janeiro, Brazil
21-26 Aug 94
20 Nov 93
Congrex do Brazil
[+55] 21 224 6080
[+55] 21 231 1492
14th Int. Conf. on AI KBS ES and NL, Avignon '94, Avignon, France
30 May-3 Jun 94
26 Nov 93
Jean-Claude Rault
[+33] 1 4780 7000
[+33] 1 4780 6629
Artificial Life: a Bridge Towards a New AI, San Sebastian, Spain
10-11 Dec 93
30 Nov 93
Dr. Alvaro Moreno
[+34] 43 218 000
[+34] 43 311 056
biziart@si.ehu.es
ECOOP-94, The 8th European Conf. on Object Oriented Programming,
Bologna, Italy
4-8 Jul 94
1 Dec 93
Paola Mello
[+39] 51 644 3033
[+39] 51 644 3073
ECOOP94@deis33.cineca.it
Third IEEE International Conference on Fuzzy Systems, (FUZZ-IEEE '94),
Orlando, Florida
26-29 Jun 94
10 Dec 93
Meeting Management
[+1] 619 453 6222
[+1] 619 535 3880
IEEE International Conference on Neural Networks, Orlando, Florida
28 Jun-2 Jul 94
10 Dec 93
Meeting Management
[+1] 619 453 6223
[+1] 619 535 3881
The IEEE Conference on Evolutionary Computation, Orlando Florida
29 Jun-1 Jul 94
10 Dec 93
Meeting Management
[+1] 619 453 6224
[+1] 619 535 3882
Int. Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH-94),
Pensacola, Florida
9-10 May 94
15 Dec 93
LiMin Fu
[+1] 904 392 1485
[+1] 904 392 1220
fu@cis.ufl.edu
AIA-94, 2nd Int. Round-Table on Abstract Intelligent Agents, Rome, Italy
23-25 Feb 94
17 Dec 93
J.M. Zytkow
[+1] 316 689 3925
[+1] 316 689 3984
zytkow@wise.cs.twsu.edu
2nd IEEE Mediterranean Symposium on New Directions in Control &
Automation, Chania, Crete
16-22 Jun 94
1 Jan 94
Kimon P. Valavanis
[+30] 318 231 5779
[+30] 318 231 5791
kimon@cacs.usl.edu
ACL-94, 32nd Meeting of the Assoc. for Computational Linguistics, Las
Cruces, New Mexico
27 Jun-1 Jul 94
6 Jan 94
James Pustejovsky
[+1] 617 736 2709
[+1] 617 736 2741
jamesp@cs.brandeis.edu
11th European Conference on Artificial Intelligence (ECAI'94), Amsterdam,
The Netherlands
8-12 Aug 94
8 Jan 94
Marcel van Marrewijk
[+31] 10 408 2302
[+31] 10 453 0784
M.M.deLeeuw@apv.oos.eur.nl
16th Annual Conf. of the Cognitive Science Society, Atlanta, Georgia
27-30 Jul 94
14 Jan 94
Kurt Eiselt
cogsci94@cc.gatech.edu
Int. Conf. on Computer-Aided Learning and Instruction in Science and
Engineering, Paris, France
31 Aug-2 Sept 94
15 Jan 94
Jean-Louis Dessalles
[+33] 1 4581 7870
[+33] 1 4581 3119
dessalles@enst.fr
6th Annual Innovative Applications of AI Conf., IAAI-94, Seattle, Washington
17 Jan 94
[+1] 415 328 3123
iaai@aaai.org
Twelfth National Conference on Artificial Intelligence (AAAI-94), Seattle,
Washington
31 Jul-4 Aug 94
24 Jan 94
Barbara Hayes-Roth
bhr@ksl.stanford.edu
Tenth Annual Conference on Uncertainty in Artificial Intelligence, Seattle,
Washington
29-31 Jul 94
1 Feb 94
David Heckerman
[+1] 206 936 2662
[+1] 206 644 1899
heckerma@microsoft.com
COLT-94, 7th ACM Conf. on Computational Learning Theory, New Brunswick,
New Jersey
12-15 Jul 94
3 Feb 94
Manfred Warmuth
colt94@research.att.com
ML94, 11th Int. Conf. on Machine Learning, New Brunswick, New Jersey
10-13 Jul 94
8 Feb 94
Russell Greiner
ml94@cs.rutgers.edu
2nd Int. Conf. on Intelligent Systems Engineering, Hamburg, Germany
5-8 Sep 94
25 Feb 94
Jane Chopping
[+49] 71 344 5477
[+39] 71 497 3633
UM-94, 4th Int. Conf. on User Modeling, Hyannis, Cape Cod, MA
15-19 Aug 94
28 Feb 94
Alfred Kobsa
um94-abstracts@inf-wiss.uni-konstanz.de
FME-94, Formal Methods Europe Symposium, Barcelona, Spain
24-28 Oct 94
31 Mar 94
Antoni Garrell
[+34] 3 280 1935
[+34] 3 200 4804
100140.2140@compuserve.com
Second International Colloquium on Grammatical Inference (ICGI-94),
Alicante, Spain
21-23 Sep 94
15 Apr 94
J. Oncina
[+34] 6 590 3464
JOncina@EALIUN11.BITNET
Fourth International Workshop on Inductive Logic Programming (ILP'94),
Bonn, Germany
12-14 Sep 94
31 May 94
Stefan Wrobel
[+49] 2241 142 670
[+49] 2241 142 889
ilp-94@gmd.de
*********************************************************************
Machine Learning Archive
Announcement and Call for Contributions
The ML-archive ftp.gmd.de:/MachineLearning [129.26.8.90] contains a
growing collection of Machine Learning related papers, articles, tech
reports, data, and software with a particular focus on results achieved
by the European ESPRIT research projects "Machine Learning Toolbox"
(MLT) and "Inductive Logic Programming" (ILP), the European Network of
Excellence in Machine Learning (MLnet) and the Inductive Logic
Programming Pan-European Scientific Network (ILPnet).
For example, the archive presently contains
- the source code of Stephen Muggleton's and Cao Feng's learning system
  Golem (in "/MachineLearning/ILP/public/software/golem"),
- a BibTeX file with around 325 entries of articles related to ILP
  ("/MachineLearning/ILP/public/bib"),
- the knowledge acquisition and machine learning system MOBAL 2.2 for
  non-commercial academic use
  ("/MachineLearning/ILP/public/software/Mobal"), and
- PROLOG implementations of basic machine learning algorithms (e.g.,
  COBWEB, ID3, ARCH) ("/MachineLearning/general/ML-Program-Library").
  This library is maintained by Thomas Hoppe (for more details, see the
  README file in the subdirectory).
Here is how the anonymous FTP server works. To access or store material on
the server, ftp to ftp.gmd.de, log in with ID "anonymous" and your full
e-mail address as password, then change to directory /MachineLearning,
where the ML-related material is located. Remember to use binary mode when
retrieving compressed or compacted (.Z) files.
The directory structure is subject to change.
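The access steps above can also be scripted for non-interactive use. The
sketch below only writes the FTP command file; the host and directory are
taken from the announcement, while the command file name is arbitrary and
"you@your.site" stands in for your own e-mail address:

```shell
# Write a command file for a non-interactive anonymous FTP session.
# "binary" matters when retrieving compressed (.Z) files.
cat > ftp-cmds <<'EOF'
open ftp.gmd.de
user anonymous you@your.site
binary
cd /MachineLearning
dir
EOF

# The session itself would then be run with:
#   ftp -n < ftp-cmds
```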
Please note: Wherever appropriate and possible, material has been cross-
indexed between the different subdirectories using symbolic links.
You are invited to contribute your own software, papers etc. to the ML-
archive. If you have ML-related material, which might be relevant for
other researchers or potential users of Machine Learning techniques,
place it in one of the subdirectories of "/ftp/incoming/Learning"
AND also send mail to "ml-archive@gmd.de" saying what you placed in
"incoming". Our ml-archive manager Marcus Luebbe will read these mails
and install all contributions in the proper place. As for papers, please
send them in compressed PostScript (.ps.Z) form. Please send us also a file
with a plain text bibliographic entry and, if possible, a corresponding
BibTeX entry with names of all authors, title, how and where the paper
has been published. As for software, please send both a compressed tarfile
containing your software and manuals as well as a README file
describing the software and its installation. Please let us know which
name to use for the subdirectory that stores your software.
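For software contributions, the bundling described above might look like the
following sketch. All file and directory names here are hypothetical, and
while the archive's convention is compress-style .Z files, gzip is
substituted so the example runs on systems that lack the old Unix compress
tool:

```shell
# Assemble a hypothetical software contribution for the ML-archive.
mkdir -p mylearner
printf 'MyLearner: a toy rule inducer.\nInstall: see Makefile.\n' > mylearner/README
printf '%% toy PROLOG source\nhello :- write(hello), nl.\n' > mylearner/mylearner.pl

# Bundle sources, manual, and README into one tarfile ...
tar cf mylearner.tar mylearner

# ... and compress it. The archive expects "compress" (.Z);
# gzip is used here as a stand-in where compress is unavailable.
gzip -f mylearner.tar
```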
Please include the following statement in your mail: COPYRIGHT
CLEARANCE: I understand that the material I have submitted will be made
publicly available worldwide on an anonymous FTP server. I have made sure
that this does not conflict with any relevant copyrights on the material.
Please send questions and suggestions to: ml-archive@gmd.de
*********************************************************************
MLnet Main Nodes
Professor D Sleeman, Aberdeen University (GB)
Tel No: +44 224 27 2288/2304
Fax No: +44 224 27 3422
Professor B J Wielinga, University of Amsterdam (NL)
Tel No: +31 20 525 6789/6796
Fax No: +31 20 525 6896
Professor R Lopez de Mantaras, IIIA CEAB,
Blanes (ES)
Tel No: +34 72 336 101
Fax No: +34 72 337 806
Professor K Morik, Dortmund University (DE)
Tel No: +49 231 755 5101
Fax No: +49 231 755 5105/2047
Professor M Bruynooghe/Dr L DeRaedt, Leuven Katholieke Universiteit (BE)
Tel No: +32 16 20 10 19/15
Fax No: +32 16 20 53 08
Dr Y Kodratoff, Paris Sud University, Orsay (FR)
Tel No: +33 1 69 41 69 04
Fax No: +33 1 69 41 65 86
Professor L Saitta, Torino University (IT)
Tel No: +39 11 742 9214/5
Fax no: +39 11 751 603
Mr M Uszynski, Alcatel Alsthom Recherche (FR)
Tel No: +33 1 6449 1004
Fax No: +33 1 6449 0695
Mr T Parsons, British Aerospace plc (GB)
Tel No: +44 272 363 458
Fax no: +44 272 363 733
Mr F Malabocchia, CSELT S.p.A. (IT)
Tel No: +39 11 228 6778
Fax No: +39 11 228 5520
Mr D Cornwell, CEC Project Officer
Tel No: +32 2 296 8664/8071
Fax No: +32 2 296 8390/8397
Associate Nodes
% ARIAI, Vienna (AT) % Bari University (IT) % Bradford University (GB) %
Coimbra University (PT) % CRIM-ERA, Montpellier (FR) % FORTH, Crete
(GR) % Frankfurt University (DE) % GMD, Bonn (DE) % Kaiserslautern
University (DE) % Karlsruhe University (DE) % Ljubljana AI Labs (SL) %
Nottingham University (GB) % Oporto University (PT) % Paris VI University
(FR) % Pavia University (IT) % Reading University (GB) % Savoie
University, Chambery (FR) % Stockholm University (SE) % Tilburg
University (NL) % Trinity College, Dublin (IE) % Ugo Bordoni Foundation,
Roma (IT) % VUB, Brussels (BE) % ISoft (FR) % Matra Marconi Space (FR) %
Siemens AG (DE)
Academic Coordinator:
Derek Sleeman
Department of Computing Science
University of Aberdeen
King's College
Aberdeen AB9 2UE
Scotland, UK
Tel: +44 224 27 2288/2304
Fax: +44 224 27 3422
email: {mlnet, sleeman}@csd.abdn.ac.uk
Documents available from Aberdeen:
State of the Art Overview of ML and KA
Recently Announced projects (ESPRIT III)
MLnet Flyer
*********************************************************************
Attention! Attention! Communication problems
MLnet receives a large amount of email. Sometimes we are unable to work out
how to reply to some addresses. So, if you do not hear from us in 48 hours,
we suggest you send us a Fax (or preferably include your Fax & Phone
numbers in your original email message, so we can be sure to establish
contact).
*********************************************************************
*********************************************************************
If you want to receive MLnet NEWS by post please contact us at the
following address:
MLnet
Department of Computing Science
University of Aberdeen
King's College
Aberdeen AB9 2UE
Scotland, UK
Tel: +44 224 27 2304
Fax: +44 224 27 3422
email: mlnet@csd.abdn.ac.uk
*********************************************************************
************************ End of MLnet NEWS 2.1 ***********************
*********************************************************************