Alife Digest Number 051

ALIFE LIST: Artificial Life Research List Number 51 Sunday, January 6th 1991

ARTIFICIAL LIFE RESEARCH ELECTRONIC MAILING LIST
Maintained by the Indiana University Artificial Life Research Group

Contents:

Technical report available: The Evolution of Learning
ML91 Final Call for Papers

----------------------------------------------------------------------

Date: Wed, 5 Dec 90 00:00:24 EST
From: David Chalmers <dave@cogsci1.cogsci.indiana.edu>
Subject: Technical report available: The Evolution of Learning

The following technical report is now available electronically from the
Center for Research on Concepts and Cognition at Indiana University.

THE EVOLUTION OF LEARNING: AN EXPERIMENT IN GENETIC CONNECTIONISM

David J. Chalmers

Center for Research on Concepts and Cognition
Indiana University
CRCC-TR-47

This paper explores how an evolutionary process can produce systems that
learn. A general framework for the evolution of learning is outlined, and
is applied to the task of evolving mechanisms suitable for supervised
learning in single-layer neural networks. Dynamic properties of a
network's information-processing capacity are encoded genetically, and these
properties are subjected to selective pressure based on their success in
producing adaptive behavior in diverse environments. As a result of
selection and genetic recombination, various successful learning mechanisms
evolve, including the well-known delta rule. The effect of environmental
diversity on the evolution of learning is investigated, and the role of
different kinds of emergent phenomena in genetic and connectionist systems
is discussed.
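For readers unfamiliar with the delta rule mentioned in the abstract, here is a minimal illustrative sketch of it for a single-layer threshold unit. This code is not from the report; the task (logical AND), learning rate, and epoch count are invented for the example.

```python
# Illustrative sketch of the delta rule for a single-layer threshold
# unit: on each example, adjust each weight in proportion to the error
# (target - output) times the corresponding input.

def train_delta(examples, n_inputs, lr=0.1, epochs=50):
    """examples: list of (inputs, target) pairs with values in {0, 1}."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, t in examples:
            o = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
            err = t - o
            # Delta rule: dw_i = lr * (t - o) * x_i
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable task
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_delta(data, 2)
```

In the paper's setting, update rules of this general form are not hand-coded but encoded genetically and selected for their success across environments.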

A version of this paper appears in _Proceedings of the 1990 Connectionist
Models Summer School_ (Touretzky, Elman, Sejnowski and Hinton, eds).

-----------------------------------------------------------------------------

This paper may be retrieved by anonymous ftp from cogsci.indiana.edu
(129.79.238.6). The file is chalmers.evolution.ps.Z, in the directory pub.
To retrieve, do the following:

unix-1> ftp cogsci.indiana.edu # (or ftp 129.79.238.6)
Connected to cogsci.indiana.edu
Name (cogsci.indiana.edu:): anonymous
331 Guest login ok, sent ident as password.
Password: [identification]
230 Guest login ok, access restrictions apply.
ftp> cd pub
ftp> binary
ftp> get chalmers.evolution.ps.Z
ftp> quit
unix-2> uncompress chalmers.evolution.ps.Z
unix-3> lpr -P(your_local_postscript_printer) chalmers.evolution.ps

If you do not have access to ftp, hardcopies may be obtained by sending e-mail
to dave@cogsci.indiana.edu.

Dave Chalmers (dave@cogsci.indiana.edu)
Center for Research on Concepts and Cognition
Indiana University.



------------------------------

Date: Tue, 18 Dec 90 12:02:01 CST
From: birnbaum@fido.ils.nwu.edu (Lawrence Birnbaum)
Subject: ML91 Final Call for Papers

THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING

CALL FOR PAPERS

On behalf of the organizing committee, and the individual workshop committees,
we are pleased to announce submission details for the eight workshop tracks
that will constitute ML91, the Eighth International Workshop on Machine
Learning, to be held at Northwestern University, Evanston, Illinois, USA, June
27-29, 1991. The eight workshops are:

o Automated Knowledge Acquisition
o Computational Models of Human Learning
o Constructive Induction
o Learning from Theory and Data
o Learning in Intelligent Information Retrieval
o Learning Reaction Strategies
o Learning Relations
o Machine Learning in Engineering Automation

Please note that submissions must be made to the workshops individually, at
the addresses given below, by March 1, 1991. The Proceedings of ML91 will be
published by Morgan Kaufmann. Questions concerning individual workshops
should be directed to members of the workshop committees. All other questions
should be directed to the program co-chairs at ml91@ils.nwu.edu. Details
concerning the individual workshops follow.

Larry Birnbaum
Gregg Collins

Northwestern University
The Institute for the Learning Sciences
1890 Maple Avenue
Evanston, IL 60201
phone (708) 491-3500

----------------------------------------------------------------------------

AUTOMATED KNOWLEDGE ACQUISITION

Research in automated knowledge acquisition shares the primary objective of
machine learning research: building effective knowledge bases. However, while
machine learning focuses on autonomous "knowledge discovery," automated
knowledge acquisition focuses on interactive knowledge elicitation and
formulation. Consequently, research in automated knowledge acquisition
typically stresses different issues, including how to ask good questions, how
to learn from problem-solving episodes, and how to represent the knowledge
that experts can provide. In addition to the task of classification, which is
widely studied in machine learning, automated knowledge acquisition studies a
variety of performance tasks such as diagnosis, monitoring, configuration, and
design. In doing so, research in automated knowledge acquisition is exploring
a rich space of task-specific knowledge representations and problem solving
methods.

Recently, the automated knowledge acquisition community has proposed hybrid
systems that combine machine learning techniques with interactive tools for
developing knowledge-based systems. Induction tools in expert system shells
are being used increasingly as knowledge acquisition front ends, to seed
knowledge engineering activities and to facilitate maintenance. The
possibilities of synergistic human-machine learning systems are only beginning
to be explored.

This workshop will examine topics that span autonomous and interactive
knowledge acquisition approaches, with the aim of productive cross-
fertilization of the automated knowledge acquisition and machine learning
communities.

Submissions to the automated knowledge acquisition track should address basic
problems relevant to the construction of knowledge-based systems using
automated techniques that take advantage of human input or human-generated
knowledge sources and provide computational leverage in producing operational
knowledge.

Possible topics include:

o Integrating autonomous learning and focused interaction with an
expert.
o Learning by asking good questions and integrating an expert's
responses into a growing knowledge base.
o Using existing knowledge to assist in further knowledge acquisition.
o Acquiring, representing, and using generic task knowledge.
o Analyzing knowledge bases for validity, consistency, completeness,
and efficiency, then providing recommendations and support for revision.
o Automated assistance for theory / model formation and discovery.
o Novel techniques for knowledge acquisition, such as explanation,
analogy, reduction, case-based reasoning, model-based reasoning,
and natural language understanding.
o Principles for designing human-machine systems that integrate the
complementary computational and cognitive abilities of programs and
users.

Submissions on other topics relating automated knowledge acquisition and
autonomous learning are also welcome. Each submission should specify the basic
problem addressed, the application task, and the technique for addressing the
problem.

WORKSHOP COMMITTEE

Ray Bareiss (Northwestern Univ.)
Bruce Buchanan (Univ. of Pittsburgh)
Tom Gruber (Stanford Univ.)
Sandy Marcus (Boeing)
Bruce Porter (Univ. of Texas)
David Wilkins (Univ. of Illinois)

SUBMISSION DETAILS

Papers should be approximately 4000 words in length. Authors should submit
six copies, by March 1, 1991, to:

Ray Bareiss
Northwestern University
The Institute for the Learning Sciences
1890 Maple Avenue
Evanston, IL 60201
phone (708) 491-3500

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.

----------------------------------------------------------------------------

COMPUTATIONAL MODELS OF HUMAN LEARNING

Details concerning this workshop will be forthcoming as soon as possible.

----------------------------------------------------------------------------

CONSTRUCTIVE INDUCTION

Selection of an appropriate representation is critical to the success of
most learning systems. In difficult learning problems (e.g., protein folding,
word pronunciation, relation learning), considerable human effort is often
required to identify the basic terms of the representation language.
Constructive induction offers a partial solution to this problem by
automatically introducing new terms into the representation as needed.
Automatically constructing new terms is difficult because the environment or
teacher usually provides only indirect feedback, thus raising the issue of
credit assignment. However, as learning systems face tasks of greater
autonomy and complexity, effective methods for constructive induction are
becoming increasingly important.
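The idea of introducing new terms can be illustrated with a standard textbook example, not drawn from any of the systems this workshop covers: XOR is not expressible as a linear threshold over its raw inputs, but constructing a single product term makes it so.

```python
# Constructive induction, minimally: augment the raw inputs with a
# constructed term (here, the product x1*x2), so that a concept that
# was not linearly separable becomes expressible as a linear threshold.

def construct_terms(x):
    """Augment raw features with a candidate product term."""
    x1, x2 = x
    return (x1, x2, x1 * x2)   # constructed term: conjunction of inputs

def xor_with_constructed(x):
    x1, x2, x1x2 = construct_terms(x)
    # XOR = x1 + x2 - 2*x1*x2, a linear function of the augmented features
    return 1 if (x1 + x2 - 2 * x1x2) > 0.5 else 0
```

The hard part, of course, is the credit-assignment problem noted above: deciding automatically which of the many candidate terms is worth constructing.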

The objective of this workshop is to provide a forum for the interchange
of ideas among researchers actively working on constructive induction issues.
It is intended to identify commonalities and differences among various
existing and emerging approaches such as knowledge-based term construction,
relation learning, theory revision in analytic systems, learning of hidden
units in multi-layer neural networks, rule creation in classifier systems,
inverse resolution, and qualitative-law discovery.

Submissions are encouraged in the following topic areas:

o Empirical approaches and the use of inductive biases
o Use of domain knowledge in the construction and evaluation of new terms
o Construction of or from relational predicates
o Theory revision in analytic-learning systems
o Unsupervised learning and credit assignment in constructive induction
o Interpreting hidden units as constructed features
o Constructive induction in human learning
o Techniques for handling noise and uncertainty
o Experimental studies of constructive induction systems
o Theoretical proofs, frameworks, and comparative analyses
o Comparison of techniques from empirical learning, analytical learning,
classifier systems, and neural networks

WORKSHOP COMMITTEE

Organizing Committee:

Christopher Matheus (GTE Laboratories)
George Drastal (Siemens Corp.)
Larry Rendell (Univ. of Illinois)
Paul Utgoff (Univ. of Massachusetts)

Program Committee:

Chuck Anderson (Colorado State)
Gunar Liepins (Oak Ridge National Lab)
Douglas Medin (Univ. of Michigan)

SUBMISSION DETAILS

Papers should be a maximum of 4000 words in length. Authors should include a
cover page with authors' names, addresses, phone numbers, electronic mail
addresses, paper title, and a 300 (maximum) word abstract. Do not indicate or
allude to authorship anywhere within the paper. Send six copies of paper
submissions, by March 1, 1991, to:

Christopher Matheus
GTE Laboratories
40 Sylvan Road, MS-45
Waltham MA 02254
(matheus@gte.com)

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.

----------------------------------------------------------------------------

LEARNING FROM THEORY AND DATA

Research in machine learning has primarily focused on either (1) inductively
generalizing a large collection of training data (empirical learning) or (2)
using a few examples to guide transformation of existing knowledge into a more
usable form (explanation-based learning). Recently there has been growing
interest in combining these two approaches to learning in order to overcome
their individual weaknesses. Preexisting knowledge can be used to focus
inductive learning and to reduce the amount of training data needed.
Conversely, inductive learning techniques can be used to correct imperfections
in a system's theory of the task at hand (commonly called "domain theories").

This workshop will discuss techniques for reconciling imperfect domain
theories with collected data. Most systems that learn from theory and data
can be viewed from the perspective of both data-driven learning (how
preexisting knowledge biases empirical learning) and theory-driven learning
(how empirical data can compensate for imperfect theories). A primary goal of
the workshop will be to explore the relationship between these two
complementary viewpoints. Papers are solicited on the following (and related)
topics:

o Techniques for inductively refining domain theories and knowledge bases.
o Approaches that use domain theories to initialize an incremental,
inductive-learning algorithm.
o Theory-driven design and analysis of scientific experiments.
o Systems that tightly couple data-driven and theory-driven learning
as complementary techniques.
o Empirical studies, on real-world problems, of approaches
to learning from theory and data.
o Theoretical analyses of the value of preexisting knowledge in inductive
learning.
o Psychological experiments that investigate the relative roles
of prior knowledge and direct experience.

WORKSHOP COMMITTEE

Haym Hirsh (Rutgers Univ.), hirsh@cs.rutgers.edu
Ray Mooney (Univ. of Texas), mooney@cs.utexas.edu
Jude Shavlik (Univ. of Wisconsin), shavlik@cs.wisc.edu

SUBMISSION DETAILS

Papers should be single-spaced and printed using 12-point type. Authors must
restrict their papers to 4000 words. Papers accepted for general presentation
will be allocated 25 minutes during the workshop and four pages in the
proceedings published by Morgan Kaufmann. There will also be a poster
session; due to the small number of proceedings pages allocated to each
workshop, poster papers will not appear in the Morgan Kaufmann proceedings.
Instead, they will be allotted five pages in an informal proceedings
distributed at this particular workshop only. Please indicate your preference
for general or poster presentation. Also include your mailing and e-mail
addresses, as well as a short list of keywords.

People wishing to discuss their research at the workshop should submit four
(4) copies of a paper, by March 1, 1991, to:

Jude Shavlik
Computer Sciences Department
University of Wisconsin
1210 W. Dayton Street
Madison, WI 53706

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.

----------------------------------------------------------------------------


LEARNING IN INTELLIGENT INFORMATION RETRIEVAL

The intent of this workshop is to bring together researchers from the
Information Retrieval (IR) and Machine Learning (ML) communities to explore
areas of common interest. Interested researchers are encouraged to submit
papers and proposals for panel discussions.

The main focus will be on issues relating learning to the intelligent
retrieval of textual data. Such issues include, for example:

o Descriptive features, clustering, category formation, and
indexing vocabularies in the domain of queries and documents.
+ Problems of very large, sparse feature sets.
+ Large, structured indexing vocabularies.
+ Clustering for supervised learning.
+ Connectionist cluster learning.
+ Content theories of indexing, similarity, and relevance.

o Learning from failures and explanations:
+ Dealing with high proportions of negative examples.
+ Explaining failures and successes.
+ Incremental query formulation, incremental concept
learning.
+ Exploiting feedback.
+ Dealing with near-misses.

o Learning from and about humans:
+ Intelligent apprentice systems.
+ Acquiring and using knowledge about user needs and
goals.
+ Learning new search strategies for differing user
needs.
+ Learning to classify via user interaction.

o Information Retrieval as a testbed for Machine Learning.

o Particularities of linguistically-derived features.

WORKSHOP COMMITTEE

Christopher Owens (Univ. of Chicago), owens@gargoyle.uchicago.edu
David D. Lewis (Univ. of Massachusetts), lewis@cs.umass.edu
Nicholas Belkin (Rutgers Univ.)
W. Bruce Croft (Univ. of Massachusetts)
Lawrence Hunter (National Library of Medicine)
David Waltz (Thinking Machines Corporation)

SUBMISSION DETAILS

Authors should submit 6 copies of their papers. Preference will be given to
papers that sharply focus on a single issue at the intersection of Information
Retrieval and Machine Learning, and that support specific claims with concrete
examples and/or experimental data. To be printed in the proceedings, papers
must not exceed 4 double-column pages (approximately 4000 words).

Researchers who wish to propose a panel discussion should submit 6 copies of a
proposal consisting of a brief (one page) description of the proposed topic,
followed by a list of the proposed participants and a brief (one to two
paragraph) summary of each participant's relevant work.

Both papers and panel proposals should be received by March 1, 1991, at the
following address:

Christopher Owens
Department of Computer Science
The University of Chicago
1100 East 58th Street
Chicago, IL 60637
Phone: (312) 702-2505

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.

----------------------------------------------------------------------------


LEARNING REACTION STRATEGIES

The computational complexity of classical planning and the need for real-time
response in many applications have led many in AI to focus on reactive systems,
that is, systems that can quickly map situations to actions without extensive
deliberation. Efforts to hand-code such systems have made it clear that when
agents must interact with complex environments the reactive mapping cannot be
fully specified in advance, but must be adaptable to the agent's particular
environment.

Systems that learn reaction strategies from external input in a complex domain
have become an important new focus within the machine learning community.
Techniques used to learn strategies include (but are not limited to):

o reinforcement learning
o using advice and instructions during execution
o genetic algorithms, including classifier systems
o compilation learning driven by interaction with the world
o sensorimotor learning
o learning world models suitable for conversion into reactions
o learning appropriate perceptual strategies
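As an illustration of the first technique on the list, here is a minimal tabular Q-learning sketch. The corridor environment and every parameter are invented for this example; none of it comes from the workshop description.

```python
import random

# Tabular Q-learning on a 5-state corridor: the agent starts at state 0
# and must reach the goal state 4 using actions -1 (left) and +1 (right).
# The learned table directly yields a reactive situation->action mapping.

def q_learn(n_states=5, goal=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.3):
    random.seed(0)
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice((-1, +1))
            else:
                a = max((-1, +1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == goal else 0.0
            # Q-learning update toward reward plus discounted best next value
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    # The resulting reaction strategy: best action in each state
    return {s: max((-1, +1), key=lambda act: q[(s, act)]) for s in range(n_states)}

policy = q_learn()
```

The point of the sketch is that no deliberation happens at execution time: once learned, the policy is a direct lookup from situation to action.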

WORKSHOP COMMITTEE

Leslie Kaelbling (Teleos), leslie@teleos.com
Charles Martin (Univ. of Chicago), martin@cs.uchicago.edu
Rich Sutton (GTE), rich@gte.com
Jim Firby (Univ. of Chicago), firby@cs.uchicago.edu
Reid Simmons (CMU), reid.simmons@cs.cmu.edu
Steve Whitehead (Univ. of Rochester), white@cs.rochester.edu

SUBMISSION DETAILS

Papers must be kept to four two-column pages (approximately 4000 words) for
inclusion in the proceedings. Preference will be given to submissions with a
single, sharp focus. Papers must be received by March 1, 1991.

Send 3 copies of the paper to:

Charles Martin
Department of Computer Science
University of Chicago
1100 East 58th Street
Chicago, IL 60637

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.

---------------------------------------------------------------------------

LEARNING RELATIONS

In the past few years, there have been a number of developments in empirical
learning systems that learn from relational data. Many applications (e.g.
planning, design, programming languages, molecular structures, database
systems, qualitative physical systems) are naturally represented in this
format. Relations have also been the common language of many advanced
learning styles such as analogy, learning plans and problem solving. This
workshop is intended as a forum for those researchers doing relational
learning to address common issues such as:

Representation: Is the choice of representation a relational language, a
grammar, a plan or explanation, an uncertain or probabilistic variant, or
second-order logic? How is the choice extended or restricted for the purposes
of expressiveness or efficiency? How are relational structures mapped into
neural architectures?

Principles: What are the underlying principles guiding the system? For
instance: similarity measures to find analogies between relational structures
such as plans, "minimum encoding" and other approaches to hypothesis
evaluation, the use of additional knowledge to constrain
hypothesis generation, mechanisms for retrieval or adaptation of prior plans or
explanations.

Theory: What theories have supported the development of the system? For
instance, computational complexity theory, algebraic semantics, Bayesian and
decision theory, psychological learning theories, etc.

Implementation: What indexing, hashing, or programming methodologies have been
used to improve performance and why? For instance, optimizing the performance
for commonly encountered problems (a la CYC).

The committee is soliciting papers that fall into one of three categories:
Theoretical papers are encouraged that define a new theoretical framework,
prove results concerning programs that carry out constructive or relational
learning, or compare theoretical issues in various frameworks. Implementation
papers are encouraged that provide sufficient details to allow
reimplementation of learning algorithms, and discuss the key time/space
complexity details motivating the design. Experimentation papers are
encouraged that compare methods or address hard learning problems, with
appropriate results and supporting statistics.

WORKSHOP COMMITTEE

Wray Buntine (RIACS and NASA Ames Research Center), wray@ptolemy.arc.nasa.gov
Stephen Muggleton (Turing Institute), steve@turing.ac.uk
Michael Pazzani (Univ. of California, Irvine), pazzani@ics.uci.edu
Ross Quinlan (Univ. of Sydney), quinlan@cs.su.oz.au

SUBMISSION DETAILS

Those wishing to present papers at the workshop should submit a paper or an
extended abstract, single-spaced on US letter or A4 paper, with a maximum
length of 4000 words. Those wishing to attend but not present papers should
send a 1 page description of their prior work and current research interests.

Three copies should be sent to arrive by March 1, 1991 to:

Michael Pazzani
ICS Department
University of California
Irvine, CA 92717 USA

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.

---------------------------------------------------------------------------

MACHINE LEARNING IN ENGINEERING AUTOMATION

Engineering domains present unique challenges to learning systems, such as
handling continuous quantities, mathematical formulas, and large problem
spaces, incorporating engineering knowledge, and supporting user-system
interaction.
This session concerns using empirical, explanation-based, case-based,
analogical, and connectionist learning techniques to solve engineering
problems such as design, planning, monitoring, control, diagnosis, and
analysis. Papers should describe new or modified machine learning systems
that are demonstrated with real engineering problems and overcome limitations
of previous systems.

Papers should satisfy one or more of the following criteria:

o Present new learning techniques for engineering problems.
o Present a detailed case study which illustrates shortcomings preventing
application of current machine learning technology to engineering problems.
o Present a novel application of existing machine learning techniques to an
engineering problem indicating promising areas for applying machine learning
techniques to engineering problems.

Machine learning programs being used by engineers must meet complex
requirements. Engineers are accustomed to working with statistical programs
and expect learning systems to handle noise and imprecision in a reasonable
fashion. Engineers often prefer rules and classifications of events that are
more general than characteristic descriptions and more specific than
discriminant descriptions. Engineers have considerable domain expertise and
want systems that enable application of this knowledge to the learning task.

This session is intended to bring together machine learning researchers
interested in real-world engineering problems and engineering researchers
interested in solving problems using machine learning technology.

We welcome submissions including but not limited to discussions of
machine learning applied to the following areas:

o manufacturing automation
o design automation
o automated process planning
o production management
o robotic and vision applications
o automated monitoring, diagnosis, and control
o engineering analysis

WORKSHOP COMMITTEE

Bradley Whitehall (Univ. of Illinois)
Steve Chien (JPL)
Tom Dietterich (Oregon State Univ.)
Richard Doyle (JPL)
Brian Falkenhainer (Xerox PARC)
James Garrett (CMU)
Stephen Lu (Univ. of Illinois)

SUBMISSION DETAILS

Submission format will be similar to AAAI-91: 12 point font, single-spaced,
text and figure area 5.5" x 7.5" per page, and a maximum length of 4000 words.
The cover page should include the title of the paper, names and addresses of
all the authors, a list of keywords describing the paper, and a short (less
than 200 words) abstract. Only hard-copy submissions will be accepted (i.e.,
no fax or email submissions).

Four (4) copies of submitted papers should be sent to:

Dr. Bradley Whitehall
Knowledge-Based Engineering Systems Research Laboratory
Department of Mechanical and Industrial Engineering
University of Illinois at Urbana-Champaign
1206 West Green Street
Urbana, IL 61801
ml-eng@kbesrl.me.uiuc.edu

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.


------------------------------
End of ALife Digest
********************************
