Machine Learning List: Vol. 6 No. 32
Friday, December 30, 1994

Contents:
JAIR ML pubs
Re: disjunctive concept learning
More Info, Special Issue of AIJ on Empirical AI
Workshop on Data Engineering for Inductive Learning
IJCAI95-workshop: Learning for Natural Language Processing
Connectionist-Symbolic Integration
ML in Engineering - Call for participation
Biological and Evolutionary Information Processing



The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>
URL- http://www.ics.uci.edu/AI/ML/Machine-Learning.html

----------------------------------------------------------------------

Date: Tue, 20 Dec 94 14:35:58 PST
From: Steve Minton <minton@ptolemy-ethernet.arc.nasa.gov>
Subject: JAIR ML pubs


Readers of this group may be interested in the following two ML-related
papers recently published in JAIR. Instructions for obtaining JAIR articles
are given below:


Soderland, S. and Lehnert, W. (1994)
"Wrap-Up: a Trainable Discourse Module for Information Extraction",
Volume 2, pages 131-158.
Postscript: volume2/soderland94a.ps (442K)

Abstract: The vast amounts of on-line text now available have led to
renewed interest in information extraction (IE) systems that analyze
unrestricted text, producing a structured representation of selected
information from the text. This paper presents a novel approach that
uses machine learning to acquire knowledge for some of the higher
level IE processing. Wrap-Up is a trainable IE discourse component
that makes intersentential inferences and identifies logical relations
among information extracted from the text. Previous corpus-based
approaches were limited to lower level processing such as
part-of-speech tagging, lexical disambiguation, and dictionary
construction. Wrap-Up is fully trainable, and not only automatically
decides what classifiers are needed, but even derives the feature set
for each classifier automatically. Performance equals that of a
partially trainable discourse module requiring manual customization
for each domain.


Buntine, W.L. (1994)
"Operations for Learning with Graphical Models", Volume 2, pages 159-225
Postscript: volume2/buntine94a.ps (1.53M)
compressed, volume2/buntine94a.ps.Z (568K)

Abstract: This paper is a multidisciplinary review of empirical,
statistical learning from a graphical model perspective. Well-known
examples of graphical models include Bayesian networks, directed
graphs representing a Markov chain, and undirected networks
representing a Markov field. These graphical models are extended to
model data analysis and empirical learning using the notation of
plates. Graphical operations for simplifying and manipulating a
problem are provided including decomposition, differentiation, and the
manipulation of probability models from the exponential family. Two
standard algorithm schemas for learning are reviewed in a graphical
framework: Gibbs sampling and the expectation maximization algorithm.
Using these operations and schemas, some popular algorithms can be
synthesized from their graphical specification. This includes
versions of linear regression, techniques for feed-forward networks,
and learning Gaussian and discrete Bayesian networks from data. The
paper concludes by sketching some implications for data analysis and
summarizing how some popular algorithms fall within the framework
presented.

The main original contributions here are the decomposition techniques
and the demonstration that graphical models provide a framework for
understanding and developing complex learning algorithms.
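As a minimal illustration of one of the algorithm schemas the abstract
mentions, here is a sketch (my own, not taken from the paper) of the
expectation-maximization algorithm fitting a two-component one-dimensional
Gaussian mixture; all starting values and data are illustrative assumptions:

```python
import random, math

random.seed(0)
# Synthetic 1-D data from two well-separated Gaussians.
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def norm_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Illustrative initial guesses (hypothetical, not from the paper).
mu = [-1.0, 6.0]
sigma = [1.0, 1.0]
pi = [0.5, 0.5]

for _ in range(30):
    # E-step: posterior responsibility of each component for each point.
    resp = []
    for x in data:
        w = [pi[k] * norm_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: responsibility-weighted parameter updates.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk)
        pi[k] = nk / len(data)
```

After a few dozen iterations the estimated means approach the true cluster
centers, which is the alternating-expectation behaviour the paper reviews in
its graphical framework.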


The PostScript files are available via:

-- comp.ai.jair.papers

-- World Wide Web: The URL for our World Wide Web server is
http://www.cs.washington.edu/research/jair/home.html

-- Anonymous FTP from either of the two sites below:
CMU: p.gp.cs.cmu.edu directory: /usr/jair/pub/volume2
Genoa: ftp.mrg.dist.unige.it directory: pub/jair/pub/volume2

-- automated email. Send mail to jair@cs.cmu.edu or jair@ftp.mrg.dist.unige.it
with the subject AUTORESPOND, and the body GET VOLUME2/BUNTINE94A.PS
OR GET VOLUME2/SODERLAND94A.PS. (Either upper or lowercase is fine.)
Note: Your mailer might find these files too large to handle.
The compressed version of this paper cannot be mailed.

-- JAIR Gopher server: At p.gp.cs.cmu.edu, port 70.

For more information about JAIR, check out our WWW or FTP sites, or
send electronic mail to jair@cs.cmu.edu with the subject AUTORESPOND
and the message body HELP, or contact jair-ed@ptolemy.arc.nasa.gov.



------------------------------

Date: Tue, 20 Dec 1994 11:09:39 +1100
From: Ross Quinlan <quinlan@ml2.cs.su.oz.au>
Subject: Re: disjunctive concept learning


> I've been doing some experiments in multistrategy learning, and I've been
> surprised at how poorly the ML techniques I've been using (lfc++, c4.5 & NNs
> trained with conjugate gradient descent backprop) do at learning disjunctive
> concepts.
>
> For example, none of the learners do very well at learning the concept "any
> three consecutive bits on" in a length 9 bit string. The neural nets do the
> best, but are impractical for similar but larger problems. There must be
> symbolic learners that are good at disjunctive concepts, but a quick search
> of the literature (well, of the MLj) didn't turn up anything clearly
> relevant. With all the CoLT talk of k-DNF learning, I would have thought
> there would be a good one somewhere.
>
> Does anyone have any suggestions for algorithms, or, better yet, ftp-able
> programs that do well at these kinds of concepts?

No decision-tree method will do well on the 9-bit task above unless it
has a lot of data (since the tree representation of the concept is so
verbose). I would have thought that FRINGE might do well, and CN2 would
certainly be worth a try.

However, I think all attribute-value learners would require examples
and counter-examples for each of the seven possible positions for
consecutive `on' bits. First-order learners have an advantage in this
respect. For instance, this problem can be encoded for FOIL using the
relations

on(A,N) bit N is on in vector A
succ(N,N') bit N' follows bit N (i.e. the usual successor relation)
threebit(A) vector A is an instance of the concept

From only ten randomly-selected examples and counter-examples of the concept,
FOIL generated the correct definition:

threebit(A) :- on(A,B), succ(C,B), on(A,C), succ(D,C), on(A,D).

Another advantage of using first-order learning is that the vectors do
not have to be all the same length -- the representation above allows for
arbitrary lengths.

[FOIL is available by anonymous ftp from ftp.cs.su.oz.au, file pub/foil6.sh.]
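Quinlan's learned clause can be checked mechanically. The short sketch below
(my own illustration, not part of FOIL) encodes both the target concept and
the learned clause and verifies that they agree on all 512 length-9 bit
vectors:

```python
from itertools import product

def threebit(bits):
    # Target concept: some run of three consecutive bits is on.
    return any(bits[i] and bits[i + 1] and bits[i + 2]
               for i in range(len(bits) - 2))

def learned(bits):
    # FOIL's clause: threebit(A) :- on(A,B), succ(C,B), on(A,C),
    #                               succ(D,C), on(A,D).
    # Reading succ(N,N') as "bit N' follows bit N", the clause says
    # some bit B is on together with its two predecessors.
    return any(bits[b] and bits[b - 1] and bits[b - 2]
               for b in range(2, len(bits)))

# The two definitions agree on every length-9 vector.
assert all(threebit(v) == learned(v) for v in product([0, 1], repeat=9))
```

The same check works unchanged for vectors of any length, which illustrates
the point that the first-order representation is not tied to a fixed vector
length.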



------------------------------

From: Edgar Sommer <Edgar.Sommer@gmd.de>
Subject: Re: disjunctive concept learning
Date: Mon, 19 Dec 94 10:08:20 +0100

Perhaps you'd like to stray away from the attr/val world
a bit? Disjunctive concepts are quite common in relational
learning. (Though I must admit I've never tried learning
any three consecutive bits ...)

Try Foil, Golem, or Mobal (which includes interfaces to
both).

http://nathan.gmd.de/projects/ml/home.html (the Machine Learning Group@GMD)

ftp://ftp.gmd.de/gmd/mlt/Mobal



e. sommer http://nathan.gmd.de/persons/edgar.sommer.html
AI Division GMD German National Research Center for Computer Science


------------------------------

From: Matt Schmill <schmill@earhart.cs.umass.edu>
Subject: More Info, Special Issue of AIJ on Empirical AI
Date: Thu, 22 Dec 1994 15:54:23 -0500 (EST)

This is to remind you that papers for the Special Issue of the AI
Journal on Empirical AI, edited by Paul Cohen and Bruce Porter, are
due on January 10, 1995. The Call for Papers is published at

ftp://ftp.cs.umass.edu/pub/eksl/misc/cfp.txt or
http://eksl-www.cs.umass.edu/cfp.html

Or you can send email to cohen@cs.umass.edu.

Please send three copies of your paper to:

Paul Cohen
Computer Science Department, LGRC
University of Massachusetts
Box 34610
Amherst, MA 01003-4610

------------------------------

Date: Mon, 19 Dec 1994 08:40:40 +0500
From: Peter Turney <peter@ai.iit.nrc.ca>
Subject: Workshop on Data Engineering for Inductive Learning





CALL FOR PARTICIPATION: Workshop on Data Engineering for Inductive Learning
___________________________________________________________________________

IJCAI-95, Montreal (Canada), August 19/20/21, 1995


Objective
_________

In inductive learning, algorithms are applied to data. It is
well understood that attention to both elements is critical -- no
inductive learner can succeed unless instances are represented so as
to make its generalization methods appropriate. In applied work, it is
not uncommon for practitioners to spend the bulk of their time
exploring and transforming data in efforts to enable the use of
existing induction techniques.

Despite widespread acceptance of these facts, however, research reports
normally give data work short shrift. In fact, a report devoted mainly
to the data in an induction problem rather than to the algorithms that
process it might well be difficult to publish in mainstream machine
learning and neural network venues.

Our goal in this workshop is to counterbalance the predominant focus on
algorithms by providing a forum in which data takes center stage.
Specifically, we invite discussion of issues relevant to data
engineering, which we define as the transformation of raw data into a
form useful as input to algorithms for inductive learning. Data
engineering is a concern in industrial and commercial applications of
machine learning, neural networks, genetic algorithms, and traditional
statistics. Among others, papers of the following kind are welcome:

1. Detailed case studies of data engineering in real-world applications
of inductive learning.

2. Descriptions of data engineering techniques or methods that have
proven useful across a number of applications.

3. Studies of the data requirements of important inductive learning
algorithms, the specifications to which data must be engineered for
these algorithms to function.

4. Reports on software tools and environments for data engineering,
including papers on "interactive induction" algorithms.

5. Empirical studies documenting the effect of data engineering on the
success of induced models.

6. Surveys of data engineering practice in related fields: statistics,
pattern recognition, etc. (but not problem-solving or
theorem-proving).

7. Papers on constructive induction, feature selection and related
techniques.

8. Papers on (re)formulating a problem, to make it suitable
for inductive learning techniques. For example, a paper
on reformulating the problem of information filtering as
learning to classify.

This workshop will enable an overview of current work in data
engineering. Since the problem of data engineering has received
relatively little published attention, it is difficult to anticipate
the work that will be presented at this workshop. We expect that the
workshop will make it possible to see common trends, shared problems,
and clever solutions that we cannot guess at, given our current,
limited view of data engineering. We have allowed ample time for
discussion of each paper (10 minutes), to foster an atmosphere
that will encourage data engineers to share their stories and to seek
common elements. We aim to leave the workshop with a vision of the
research directions that might bring science into data engineering.


Participation
_____________

During the workshop, we anticipate approximately 14 presentations. Each
paper will be given 25 minutes, 15 minutes for presentation and 10
minutes for discussion. There will be at most 30 participants in the
workshop. If you wish to participate in the workshop, you may either
submit a paper or a description of work that you have done (are doing,
plan to do) that is relevant to the workshop. Papers should be at most
10 pages long. The first page should include the title, the author's
name(s) and affiliation(s), a complete mailing address, phone number,
fax number, e-mail, an abstract of at most 300 words, and up to five
keywords. For those who do not choose to submit a paper, a description
of relevant work should be at most 1 page long and should include
complete address information. Workshop participants are required to
register for the main IJCAI-95 conference.

All submissions (papers or descriptions of relevant work) will be
reviewed by at least two members of the organizing committee. Please
send your submissions to the contact address below. Submissions should
be PostScript files, sent by e-mail. Accepted submissions will be
available before the workshop through ftp. Workshop participants will
also be given copies of the papers on the day of the workshop.

In selecting the papers, the committee will aim for breadth of coverage
of the topics listed above. Ideally, each of the eight kinds of papers
listed above would have at least one representative in the workshop.
A paper with new ideas on data engineering will be preferred to a high-
quality paper on a familiar idea.

The workshop organizers plan to publish revised versions of selected
papers from the workshop. The papers would be published either as a
book or as a special issue of a journal.

The exact date for the workshop has not yet been decided by IJCAI.
The workshop is one day in duration and will be held on one of
August 19, 20, or 21.


Schedule
________

Deadline for submissions: March 31, 1995
Notification of acceptance: April 21, 1995
Submissions available by ftp: April 28, 1995
Actual Workshop: August 19/20/21, 1995


Organizing Committee
____________________

Peter Turney, National Research Council (Canada)
Cullen Schaffer, CUNY/Hunter College (USA)
Rob Holte, University of Ottawa (Canada)


Contact Address
_______________

Dr. Peter Turney
Knowledge Systems Laboratory
Institute for Information Technology
National Research Council Canada
Ottawa, Ontario, Canada
K1A 0R6
(613) 993-8564 (office)
(613) 952-7151 (fax)
peter@ai.iit.nrc.ca








------------------------------

Date: Tue, 20 Dec 94 12:28:41 +0100
From: Stefan Wermter <wermter@nats2.informatik.uni-hamburg.de>
Subject: IJCAI95-workshop: Learning for Natural Language Processing


CALL FOR PAPERS AND PARTICIPATION
IJCAI-95 Workshop on
New Approaches to Learning for Natural Language Processing


International Joint Conference on Artificial Intelligence (IJCAI-95)
Palais de Congres, Montreal, Canada

currently scheduled for August 21, 1995



ORGANIZING COMMITTEE
____________________

Stefan Wermter Gabriele Scheler Ellen Riloff
University of Hamburg Technical University Munich University of Utah


PROGRAM COMMITTEE
_________________

Jaime Carbonell, Carnegie Mellon University, USA
Joachim Diederich, Queensland University of Technology, Australia
Georg Dorffner, University of Vienna, Austria
Jerry Feldman, ICSI, Berkeley, USA
Walther von Hahn, University of Hamburg, Germany
Aravind Joshi, University of Pennsylvania, USA
Ellen Riloff, University of Utah, USA
Gabriele Scheler, Technical University Munich, Germany
Stefan Wermter, University of Hamburg, Germany


WORKSHOP DESCRIPTION
____________________

In the last few years, there has been a great deal of interest and activity in
developing new approaches to learning for natural language processing. Various
learning methods have been used, including

- connectionist methods/neural networks
- machine learning algorithms
- hybrid symbolic and subsymbolic methods
- statistical techniques
- corpus-based approaches.

In general, learning methods are designed to support automated knowledge
acquisition, fault tolerance, plausible induction, and rule inference. Using
learning methods for natural language processing is especially important
because language learning is an enabling technology for many other language
processing problems, including noisy speech/language integration, machine
translation, and information retrieval. Different methods support language
learning to various degrees but, in general, learning is important for
building more flexible, scalable, adaptable, and portable natural language
systems.

This workshop is of interest particularly at this time because systems built by
learning methods have reached a level where they can be applied to real-world
problems in natural language processing and where they can be compared with
more traditional encoding methods. The workshop will bring together researchers
from the US/Canada, Europe, Japan, Australia and other countries working on new
approaches to language learning.

The workshop will provide a forum for discussing various learning approaches
for supporting natural language processing. In particular the workshop will
focus on questions like:

- How can we apply suitable existing learning methods for language processing?

- What new learning methods are needed for language processing and why?

- What language knowledge should be learned and why?

- What are similarities and differences between different approaches for
language learning? (e.g., machine learning algorithms vs neural networks)

- What are strengths and limitations of learning rather than manual encoding?

- How can learning and encoding be combined in symbolic/connectionist systems?

- Which aspects of system architectures and knowledge engineering have to
be considered? (e.g., modular, integrated, hybrid systems)

- What are successful applications of learning methods in various fields?
(speech/language integration, machine translation, information retrieval)

- How can we evaluate learning methods using real-world language?
(text, speech, dialogs, etc.)


WORKSHOP FORMAT
_______________

The workshop will provide a forum for the interactive exchange of ideas and
knowledge. Approximately 30-40 participants are expected, and there will be
time for up to 15 presentations depending on the number and quality of paper
contributions received. Normal presentation length will be 15+5 minutes,
leaving time for direct questions after each talk. There may be a few invited
talks of 25+5 minutes. In addition to prepared talks, there will be time for
moderated discussions after two related sessions, providing an opportunity
for an open exchange of comments, questions, reactions, and opinions.

PUBLICATION
___________

Workshop proceedings will be published by AAAI. If there is sufficient
interest among the participants, the results of the workshop may also be
published as a book.

REGISTRATION
____________

This workshop will take place directly before the general IJCAI conference.
It is an IJCAI policy that workshop participation is not possible without
registration for the general conference.

SUBMISSIONS
___________

All submissions will be refereed by the program committee and other experts in
the field. Please submit 4 hardcopies AND a postscript file. The paper format
is the IJCAI95 format: 12pt article-style LaTeX, no more than 43 lines per
page, 15 pages maximum, including title, address and email address, abstract,
figures, and references. Papers should fit on 8 1/2" x 11" pages.
Notifications will be sent by email to the first author.

Postscript files can be uploaded with anonymous ftp:

ftp nats4.informatik.uni-hamburg.de (134.100.10.104)
login: anonymous
password: <your email address>
cd incoming/ijcai95-workshop
binary
put <yourfile.Z or yourfile.gz>
quit

Hardcopies AND postscript files must arrive not later than 24th February 1995
at the address below.

##############Submission Deadline: 24th February 1995
##############Notification Date: 24th March 1995
##############Camera ready Copy: 13th April 1995



Please send correspondence and submissions to:


################################################
Dr. Stefan Wermter
Department of Computer Science
University of Hamburg
Vogt-Koelln-Strasse 30
D-22527 Hamburg
Germany

phone: +49 40 54715-531
fax: +49 40 54715-515
e-mail: wermter@informatik.uni-hamburg.de
################################################

































------------------------------

Date: Tue, 20 Dec 1994 08:43:30 -0600
From: Ron Sun <rsun@cs.ua.edu>
Subject: Connectionist-Symbolic Integration



Call For Papers
to the Workshop

Connectionist-Symbolic Integration:
From Unified to Hybrid Approaches

to be held at IJCAI'95
Montreal, Canada
August 19-20, 1995


There has been a considerable amount of research in integrating
connectionist and symbolic processing. While such an approach has
clear advantages, it also encounters serious difficulties and
challenges. Therefore, various models and ideas have been proposed to
address various problems and aspects in this integration. There is a
growing interest from many segments of the AI community, ranging from
expert systems, to cognitive modeling, to logical reasoning.

Two major trends can be identified in the state of the art: the unified
(purely connectionist) and the hybrid approaches to integration.
Whereas the purely connectionist ("connectionist-to-the-top") approach
claims that complex symbol processing functionalities can be achieved
via neural networks alone, the hybrid approach is premised on the
complementarity of the two paradigms and aims at their synergistic
combination in systems comprising both neural and symbolic components.
In fact, these trends can be viewed as two ends of an entire spectrum.

To date, however, there has been relatively little work on comparing
and combining these fairly isolated efforts. This workshop
will provide a forum for discussions and exchanges of ideas in this
area, to foster cooperative work. The workshop will tackle important
issues in integrating connectionist and symbolic processing.


A tentative Schedule

Day 1:

A. Introduction:

* Invited talks
These talks will provide an overview of the field and set the tone for
ensuing discussions.
* Theoretical foundations for integrating connectionist and symbolic
processing

B. Definition of the two approaches:

* Do they exhaust the space of current research in
connectionist-symbolic integration, or is there room for additional
categories?
* How do we compare the unified and hybrid approaches?
* Do the unified and hybrid approaches constitute a clearcut dichotomy or
are they just endpoints of a continuum?
* What class of processes and problems is well-suited to unified or
hybrid integration? The relevant motivations and objectives.
* What type of model is suitable for what type of application?
Enumerate viable target domains.

C. State of the art:

* Recent or ongoing theoretical or experimental research work
* Implemented models belonging to either the unified or hybrid approach
* Practical applications of both types of systems

Research addressing key issues concerning:

* the unified approach: theoretical or practical
issues involving systematicity, compositionality and variable
binding, biologically inspired models, connectionist knowledge
representation, other high-level connectionist models.

* the hybrid approach: modes and methods of coupling, task
sharing between various components of a hybrid system, knowledge
representation and sharing.

* both: commonsense reasoning, natural language processing,
analogical reasoning, and more generally applications of
unified and hybrid models.


Day 2:

D. Cognitive Aspects:

* Cognitive plausibility and relations to other AI paradigms
* In cognitive modeling, why should we integrate
connectionist and symbolic processing?
* Is there a clear cognitive rationale for such integration? (we may
need to examine in detail some typical areas, such as commonsense
reasoning, and natural language processing)
* Is there psychological and/or biological evidence for
existing models? If so, what is it?

E. Open research issues:

* Can we now propose a common terminology with precise
definitions for both approaches to connectionist-symbolic integration
and for locations along the continuum?
* How far can unified systems go?
Can unified models be supplemented by hybrid models?
Can hybrid models be supplanted by unified models?
* Limitations and barriers faced by both approaches
* What breakthroughs are needed for both approaches?
* Is it possible to synthesize various existing models?


Workshop format
_ panel discussions
_ mini-group discussions: participants will break into groups of 7/8
to discuss a given theme; group leaders will then form a panel to
report on group discussions and attempt a synthesis with audience
participation
_ interactive talks: this is a novel type of oral presentation
we will experiment with. Instead of a classical presentation, the
speaker will present a problem or issue and give a brief statement
of his personal stand (5 min) to launch discussions which he will
then moderate and conclude.
_ classical slide talks followed by Q/A and discussions.


Workshop Co-chairs:
Frederic Alexandre, Crin-Cnrs/Inria-Lorraine
Ron Sun, The University of Alabama

Organizing Committee:
John Barnden, New Mexico State University
Steve Gallant, Belmont Research Inc.
Larry Medsker, American University
Christian Pellegrini, University of Geneva
Noel Sharkey, Sheffield University

Program Committee:
Lawrence Bookman (Sun Laboratory, USA)
Michael Dyer (UCLA, USA)
Wolfgang Ertel (FRW, Germany)
LiMin Fu (University of Florida, USA)
Jose Gonzalez-Cristobal (UPM, Spain)
Ruben Gonzalez-Rubio (University of Sherbrooke, Canada)
Jean-Paul Haton (Crin-Inria, France)
Melanie Hilario (University of Geneva, Switzerland)
Abderrahim Labbi (IMAG, France)
Ronald Yager (Iona College, USA)


Schedule:
_ The submission deadline for participants is February 1, 1995.
_ Authors and potential participants will be notified of the acceptance
decision by March 15, 1995.
_ Camera-ready copies of working notes papers will be due on April 15, 1995.


Submission:
_ If you wish to present a talk, specify the preferred type of
presentation (classical or interactive talk) and submit 5 copies of an
extended abstract (within the limit of 5-7 pages) to:

Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487
rsun@cs.ua.edu
(205) 348-6363

_ If you only wish to attend the workshop, send 5 copies of a short
(no more than one page) description of your interest to the same address above.

_ Please be sure to include your e-mail address in all submissions.


------------------------------

From: Benoit Julien <julien@magnum.crim.ca>
Date: Thu, 22 Dec 94 10:58:35 EST
Subject: ML in Engineering - Call for participation

CALL FOR PARTICIPATION

*** Workshop on Machine Learning in Engineering ***

International Joint Conference on Artificial Intelligence 1995
IJCAI-95

Montreal, Quebec, Canada
August 19-25, 1995


WORKSHOP OBJECTIVES

The last ten years have seen a significant increase in the development of
knowledge-based systems for engineering applications. As in other domains,
the success of knowledge-based approaches in engineering depends critically
on the quality of the knowledge acquisition process. Computer-aided
engineering system developers in the early nineties quickly recognized the
potential offered by emerging machine learning techniques.

As machine learning moves from "toy" problems to "real" engineering
applications, a concerted R&D effort becomes essential to identify and
overcome critical engineering knowledge acquisition bottlenecks. In that
perspective, this workshop will bring together researchers applying or
developing machine learning techniques for various engineering disciplines in
order to establish important commonalities and differences in engineering
learning problems. This forum will permit the definition of basic engineering
learning tasks and their relationships with appropriate machine learning
strategies. By presenting the state-of-the-art in machine learning
applications to engineering, this event should also bridge many gaps between
machine learning theory and engineering practice.

TOPICS OF INTEREST

All researchers and practitioners actively applying or developing machine
learning techniques to engineering problems are encouraged to submit papers
for this workshop. Topics of interest include, but are not limited to, the
following:

* Case studies
Case studies of application of machine learning in engineering,
with analysis of successes and failures. Examples of application
topics:
- Knowledge mining of engineering databases;
- Engineering learning apprentice systems;
- Semi-automated engineering knowledge acquisition;
- Constructive induction in engineering;
- Engineering knowledge discovery systems;
- Engineering model acquisition and refinement;
- Learning from sensory data.

* Comparative studies
Comparative studies of machine learning techniques solving similar
engineering learning tasks;

* Overviews
Overviews of the state-of-the-art of machine learning in engineering;

* Position papers on key issues
Position papers discussing and proposing methodologies for solving
important engineering learning issues. Examples of key issues:
- Prior knowledge in engineering learning problems;
- Tracking engineering concept drifts (dynamic knowledge);
- Mapping of generic engineering tasks with learning techniques;
- Multistrategy learning for engineering problems;
- Machine learning for engineering data analysis;
- Learning from very small or very large training sets;
- Learning from noisy and incomplete information;
- Integration of machine learning and knowledge acquisition.

Papers describing strictly case studies of manual knowledge acquisition and
maintenance are discouraged. This workshop does not cover applications of
subsymbolic learning techniques such as neural networks and genetic algorithms.

SUBMISSIONS

Submitted papers should not exceed 15 pages. The organizers intend to
publish a selection of the accepted papers as a book or a special issue of a
journal. The authors should take this into account while preparing their
papers. To encourage the submission of work-in-progress reports, 5-page
extended abstracts will also be accepted for submission. However,
accepted extended abstracts will not be considered for later publication.
Copies of the workshop proceedings containing all accepted papers and
extended abstracts will be prepared and made available by IJCAI at the
workshop.

Each paper and extended abstract should provide a clear description of the
engineering task and the learning problem so that other participants not
familiar with the application can easily understand the key characteristics
and objectives of the research. The papers should also define all technical
terms and make explicit the research methodology and the underlying
characteristics and assumptions of the learning problem(s) and technique(s).
The authors should also discuss important future issues as well as
implications and possible extensions of their work to other engineering
domains.

Each submitted paper and extended abstract will be reviewed by at least
three members of an international program committee and will be judged
on significance, originality, and clarity. Papers submitted simultaneously
to other conferences or journals must state so on the title page.

Those who would like to attend the workshop without giving a presentation
should send a 1 page description of relevant research interests with a short
list of selected publications.

Please send general inquiries to julien@crim.ca.

DEADLINES

Four (4) hard copies of each paper or extended abstract must be received
by the workshop organiser by February 17, 1995. Alternatively, electronic
submissions in PostScript are encouraged. FAX submissions will not be accepted.

Notification of acceptance or rejection will be sent to the first
(or designated) author with the reviewers' comments by March 24, 1995.
Final camera-ready papers and extended abstracts should arrive by
April 21, 1995. This one-day workshop will be held between Saturday
19 August and Monday 21 August 1995 inclusive.

PAPER FORMAT

Submissions must be clearly legible, with good quality print. Papers and
extended abstracts are respectively limited to 15 and 5 pages
including title page, bibliography, tables and figures. Papers must be
printed on 8.5 x 11 inch paper or A4 paper using 12 point type (10 characters
per inch) with 1-inch margins and no more than 40 lines per page. The
title page must include the names, postal and electronic (e-mail) addresses
and phone and FAX numbers of all authors together with an abstract (200 words
maximum) and a list of key words. The first key words should specify the
engineering domain (e.g., electrical, civil, mechanical, industrial,
chemical, environmental, metallurgy, mining), the engineering generic task
(e.g., classification, scheduling, control, maintenance, planning, design),
and the machine learning technique(s) used (e.g., case-based learning,
conceptual clustering, explanation-based learning, rule induction, inductive
predicate logic).

Papers that do not follow this format will not be reviewed. To save paper and
postage costs, please use double-sided printing or, preferably, send a
PostScript file via the Internet to the workshop organizer.

WORKSHOP FORMAT

The workshop will consist of paper sessions with discussion at the end
of each session. The day will be divided into four (4) thematic sessions of
an hour and a half each. A commentator from the program committee will be
assigned to each presentation to initiate and supervise the
discussions. The workshop will conclude with a panel discussion. The panel
discussions will be instrumental in establishing guidelines for future
integrations and collaborations and a research agenda for the next five years
based on the key multidisciplinary issues identified.

The number of participants in the workshop is limited to 40. All workshop
participants are expected to register for the main IJCAI conference and to
pay an additional fee ($US 50) for the workshop.

WORKSHOP CHAIRS

Benoit Julien (workshop organiser)
Centre de recherche informatique de Montreal (CRIM)
1801, McGill College avenue, Suite 800
Montreal (Quebec) H3A 2N4
Canada
phone: 1-514-398-5862
fax: 1-514-398-1244
e-mail: julien@crim.ca

Steven J. Fenves
Department of Civil Engineering
Carnegie Mellon University
Pittsburgh, PA, 15213
United States
phone: 1-412-268-2944
fax: 1-412-268-7813
e-mail: fenves@ce.cmu.edu

Tomasz Arciszewski
Systems Engineering Department
School of Information Technology and Engineering
George Mason University
Fairfax, VA, 22030
United States
phone: 1-703-993-1513
fax: 1-703-993-1706
e-mail: tarcisze@aic.gmu.edu

------------------------------

Date: Mon, 19 Dec 94 12:27:26 PST
From: Russell Anderson <rwa@milo.berkeley.edu>
Subject: Biological and Evolutionary Information Processing


BioSystems Vol. 34, Nos.1-3, pp.1-276 (1995)

Special Issue of BioSystems:
"Biological and Evolutionary Information Processing ---
A Festschrift for Hans J. Bremermann"

Editor: Russell W. Anderson

_____________________________________________________
CONTENTS

Hans J. Bremermann: A pioneer in mathematical biology
R.W. Anderson and M. Conrad pp. 1-10.

Elements of Systematic Search in Animal Behavior and Model Simulations
W. Alt pp. 11-26.

Partitioning nonlinearities in the response of honey bee
olfactory receptor neurons to binary odors
W.M. Getz and R. P. Akers pp.27-40.

Return to equilibrium - or - Why we may be losing information
from our physiologic experiments
S. Zietz pp. 41-46.

Network thermodynamics revisited
E.R. Lewis pp. 47-64.

Intercellular signaling in neuronal-glial networks.
M.S. Cooper pp. 65-86.

On the maternal transmission of immunity: A `molecular attention' hypothesis
R.W. Anderson pp. 87-105

Characterization of prediction in the primate visual smooth pursuit system
D.C. Deno, W.F. Crandall, K. Sherman and E.L. Keller pp.107-128.

Parameter identification of a neurological control model for pathological
head movements of cerebellar patients
C.F. Ramos, W.H. Zangemeister and J. Dee pp.129-141.

The role of dehydro-Alanine in the design of peptides
S. Bhatnagar, G. Subba-Rao and T.P. Singh pp.143-148.

Constantly 'awake' brain and Bremermann's question
R.K. Mishra pp. 149-160.

Evolutionary credit apportionment and its application to time-
dependent neural processing
R. Smalz and M. Conrad pp.161-172.

Computing with dynamic attractors in neural networks
M.W. Hirsch and B. Baird pp.173-195.

A shape representation for computer vision based on differential topology
A.P. Blicher pp. 197-224.

A precondition prover for analogy
W.W. Bledsoe pp. 225-247.

Wavelet variations on the Shannon sampling theorem
H. Bray, K. McCormick, R.O. Wells and X. Zhou pp. 249-257.

Analytic representation of compactly supported wavelets
H.L. Resnikoff pp. 259-272


Russell W. Anderson
2415 College Ave #33
Berkeley, CA 94704
(510) 848-1576
rwa@milo.berkeley.edu


------------------------------

End of ML-LIST (Digest format)
****************************************
