Machine Learning List: Vol. 2 No. 11
Thursday, June 14, 1990

Contents:
Re: To tolerate or to discover
AAAI Schedule
Cog Sci Schedule
Moderator's Note

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in /usr2/spool/ftp/pub/ml-list/V<X>/<N> or N.Z
where X and N are the volume and number of the issue; ID & password: anonymous

------------------------------
Date: Mon, 11 Jun 1990 22:56:37 PDT
From: Andrew Baker <baker@neon.stanford.edu>
Subject: Re: To tolerate or to discover

> Date: Tue, 5 Jun 90 12:53 CDT
> From: Shen Wei-Min <ai.wshen@MCC.COM>
> Subject: To tolerate or to discover?

> The question is that when a program's expectation does not match
> the external observation, based on what information shall we
> tolerate the observation as noise or discover new theories to
> revise our background knowledge?

This sounds like just the standard issue of how to determine the appropriate
complexity of a model, i.e., how to avoid overfitting or underfitting
the data. This is what statistical significance tests are for; generally,
one only accepts the more elaborate hypothesis if it passes such a test.
Probably the best approach to statistical significance is the Bayesian
approach, where one starts off by assigning prior probabilities to the
various hypotheses (presumably, as hypotheses get more and more complicated,
one's prior probabilities for these hypotheses will get smaller and smaller).
Bayes' theorem then provides the appropriate trade-off between a theory's
prior plausibility and its fit to the data.
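
As a rough illustration of this trade-off, here is a minimal sketch in
Python (not part of the original posting): a "simple" hypothesis (the
coin is fair) is compared against a more "elaborate" one (the bias is a
free parameter with a uniform prior), with an invented prior of 0.9 on
the simple hypothesis.

    from math import lgamma, log

    def log_marginal_fair(heads, tails):
        # P(data | fair coin) = 0.5 ** (heads + tails)
        return (heads + tails) * log(0.5)

    def log_marginal_free_bias(heads, tails):
        # P(data | bias ~ Uniform(0,1)) = heads! * tails! / (heads + tails + 1)!
        return lgamma(heads + 1) + lgamma(tails + 1) - lgamma(heads + tails + 2)

    def log_posterior_odds(heads, tails, prior_fair=0.9):
        # Bayes' theorem: posterior odds = prior odds * ratio of marginal
        # likelihoods; the elaborate hypothesis pays for its extra flexibility.
        prior_odds = log(prior_fair) - log(1.0 - prior_fair)
        return (prior_odds
                + log_marginal_fair(heads, tails)
                - log_marginal_free_bias(heads, tails))

    print(log_posterior_odds(52, 48))   # > 0: keep the simpler theory
    print(log_posterior_odds(80, 20))   # < 0: the data overwhelm the prior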

This use of Bayes' theorem to choose among theories with different
complexities was advocated in a 1939 book by Harold Jeffreys [1].
Solomonoff [2,3] showed how one could use descriptive complexity to
define "universal" probability distributions (in effect, providing a
quantitative version of Occam's razor). And there are lots of learning
programs that make use of the Bayesian solution to significance testing;
see e.g. Cheeseman's classification program [4], or Quinlan and Rivest's
paper on probabilistic decision trees [5].
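
The description-length view can be made concrete with a toy two-part
code. The sketch below is only illustrative (the encodings are generic
MDL bookkeeping, not the scheme of [5]): the cost of a theory is the
bits needed to state it plus the bits needed to state the data given
the theory, and the shortest total wins.

    from math import comb, log2

    def cost_raw(bits):
        # Null theory: transmit every observation literally.
        return float(len(bits))

    def cost_mostly_zero(bits):
        # Theory "everything is 0", plus a list of exceptional positions.
        n, k = len(bits), sum(bits)
        theory_bits = log2(n + 1)          # state how many exceptions there are
        exception_bits = log2(comb(n, k))  # state where they are
        return theory_bits + exception_bits

    data = [0] * 95 + [1] * 5
    print(cost_raw(data))          # 100.0 bits
    print(cost_mostly_zero(data))  # about 33 bits: the shorter theory wins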

So I don't think that the general principle involved is an open problem.
On the other hand, actually formalizing one's priors (so that the system
will know in advance that Mendel's genes are more plausible than aliens
from outer space, to use your example) is another matter entirely.

Andrew Baker
baker@neon.stanford.edu
Department of Computer Science
Stanford University

[1] H. Jeffreys, {\em Theory of Probability}, Clarendon Press, 1939
(third edition, 1961).
[2] R.J. Solomonoff, A Formal Theory of Inductive Inference (Part I),
{\em Information and Control}, 7:1--22, 1964.
[3] R.J. Solomonoff, Complexity-Based Induction Systems: Comparisons
and Convergence Theorems, {\em IEEE Transactions on Information
Theory}, 24:422--432, 1978.
[4] P. Cheeseman {\em et al.}, AutoClass: A Bayesian Classification
System, {\em Proceedings of the Fifth International Conference on
Machine Learning}, pages 54--64, 1988.
[5] J.R. Quinlan and R.L. Rivest, Inferring Decision Trees Using the
Minimum Description Length Principle, MIT Tech. Report MIT/LCS/TM-339,
1987.

------------------------------
Date: Tue, 12 Jun 90 10:14 CDT
From: Shen Wei-Min <ai.wshen@MCC.COM>
Subject: Re: To tolerate or to discover

The problem still remains, as I put it in my question, when learning is
incremental. Take learning a finite automaton as an example, and suppose
a priori we prefer deterministic machines to non-deterministic ones. Now,
at time t, based on all the evidence we have so far, we decide the best
model is a deterministic machine M_t and discard an alternative
non-deterministic machine M'_t (which might lead to the correct one).
Once the decision is made, predictions for further observations will be
based on M_t. Now suppose that, at a later time t_i, we find that M_t no
longer fits the new evidence well. We have trouble going back to M'_t
because we did not keep all the options open.

This is also like searching in a space with restricted backtracking. At
each node we receive a new observation and make a decision based on the
prior distribution and the best model built so far. If we have all the
observations (or a fair sample), the prior distribution can guide us to
the best model. But if we can only observe bit by bit, how do we avoid
shifting too dramatically?
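
To make the difficulty concrete, here is a hypothetical sketch (an
illustration only, not a proposal from this exchange): if running scores
are kept for several candidate models, a discarded alternative can be
revived later without replaying the whole history; if only the current
winner's score is kept, that option is gone. The three candidate noise
rates are invented for the example.

    from math import log

    # Candidate models for a 0/1 prediction task: each predicts "0" but
    # tolerates a different noise rate.
    candidates = {"near-deterministic": 1e-6, "5% noise": 0.05, "20% noise": 0.20}
    log_score = {name: 0.0 for name in candidates}  # running log-likelihoods
    # (a prior over the candidates could be added as an initial score)

    def observe(bit):
        # Incremental update: only the new observation is needed, because
        # each running score is a sum of per-observation log-likelihoods.
        for name, noise in candidates.items():
            p = noise if bit == 1 else 1.0 - noise
            log_score[name] += log(p)

    for bit in [0, 0, 0, 1, 0, 0, 1, 0]:
        observe(bit)
        print(bit, max(log_score, key=log_score.get))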

Finally, deciding one's priors and formalizing one's priors are equally
important. When we face a new problem or enter a new environment, we need
priors, but I don't think anyone wants to commit to his priors. That is
why, if the problem we face is a truly new one, we normally set the prior
to a uniform distribution. AutoClass does that, if I remember correctly.


Wei-Min

------------------------------
Date: Tue, 12 Jun 1990 19:29:23 PDT
From: Andrew Baker <baker@neon.stanford.edu>
Subject: Re: To tolerate or to discover

You seem to be using "incremental" to mean that after choosing a
hypothesis, the program discards its training data. I am not sure
I really like this definition. Many incremental learning programs (like
Utgoff's ID5) store *all* of their training data; when an incremental
program of this sort gets additional data, it can then modify its current
theory in a reasonable way.

In any event, I'll agree that for large problems, the program won't be
able to store its whole life history; it will have to rely on selective
memory, summary statistics, etc. In this case, the induction program
will need another module to determine which facts to remember, which
statistics to accumulate, etc. For example, suppose the program currently
believes in a weak probabilistic theory, and then it finds a potentially
attractive deterministic theory that explains the few data points that it
has bothered to remember. When it collects more data, it should start
noticing whether this data is consistent or not with the new hypothesis.
But at any fixed point, the program will have to evaluate the hypotheses
based on the data that it *currently* has -- and I don't see why the methods
for doing this should differ from the non-incremental scenario.
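
For instance, the weak probabilistic theory and the newly attractive
deterministic one can be scored at any fixed point from a pair of
counters alone, just as in the non-incremental case. The sketch below is
hypothetical (the agree/disagree counters and the 1% noise rate are
invented; it is not meant as either author's method).

    from math import log

    agree, disagree = 0, 0   # summary statistics kept instead of the raw data

    def note(consistent):
        # Selective memory: remember only whether each new observation was
        # consistent with the deterministic theory.
        global agree, disagree
        if consistent:
            agree += 1
        else:
            disagree += 1

    def log_likelihood_ratio(noise=0.01):
        # Deterministic theory (with a small noise rate) versus the weak
        # probabilistic theory (coin-flip predictions), scored from counts.
        deterministic = agree * log(1.0 - noise) + disagree * log(noise)
        probabilistic = (agree + disagree) * log(0.5)
        return deterministic - probabilistic

    for consistent in [True] * 20 + [False]:
        note(consistent)
    print(log_likelihood_ratio())   # > 0: the deterministic theory fits better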

------------------------------
From: Tom Dietterich <tgd@turing.cs.orst.edu>
Subject: AAAI Schedule
Date: Wed, 13 Jun 90 11:28:35 PDT

Here are selected portions of the AAAI schedule. I believe everything
is correct, but this is from an old file, so it may not be exactly
right. There is an ML session in all but one of the time slots, so
there are unfortunately many conflicts with interesting parallel
sessions.

--Tom

SUNDAY July 29

Tutorial Program
2:00 pm-6:00 pm
Tutorial SP1: Basic Theory of Neural Networks
Geoffrey Hinton and Michael Jordan

MONDAY July 30

9:00 am-1:00 pm
Tutorial MA1: Applying Neural Network Techniques
David Touretzky and Yann LeCun

2:00 pm-6:00 pm
Tutorial MP1: Principles and Practice of Knowledge Acquisition
Thomas Gruber and Mark Musen

Tutorial MP5: Case-Based Reasoning
Janet Kolodner and Chris Riesbeck


TUESDAY July 31

11:00 am--12:40 pm Machine Learning: Knowledge-Based Inductive Learning
Ballroom B, Hynes Center

On Analytical and Similarity-Based Classification
Marc Vilain, Phyllis Koton and Melissa P. Chase, The
MITRE Corporation

Learning from Textbook Knowledge using Analogical Abductive EBL
William W. Cohen, Rutgers University

Changing the Rules: A Comprehensive Approach to Theory Refinement
Dirk Ourston and Raymond J. Mooney, University of Texas

Theory Reduction, Theory Revision, and Retranslation
Allen Ginsberg, AT&T Bell Laboratories

2:10 pm--3:50 pm Machine Learning: Connectionist Methods
Ballroom B, Hynes Center

Refinement of Approximate Domain Theories by Knowledge-Based
Artificial Neural Networks
Geoffrey G. Towell, Jude W. Shavlik and Michiel O.
Noordewier, University of Wisconsin

Empirical Studies on the Speed of Convergence of Neural Network Training
using Genetic Algorithms
Hiroaki Kitano, Carnegie Mellon University

A Hybrid Connectionist, Symbolic Learning System
Lawrence O. Hall and Steve G. Romaniuk, University of
South Florida

Explaining Temporal-Differences to Create Useful Concepts for
Evaluating States
Richard C. Yee, Sharad Saxena, Paul E. Utgoff and Andrew G.
Barto, University of Massachusetts


4:40 pm--6:20 pm Invited Talk: Probably Approximately Correct Learning
Hynes Auditorium

This talk surveys recent theoretical results on the efficiency of
machine learning algorithms. The main tool is the notion of Probably
Approximately Correct (PAC) learning, introduced by Valiant. The first
part of the talk defines PAC learning and gives some examples and simple
theorems. The second part briefly describes some recent extensions and
modifications of this model, including distribution-specific results,
learning with queries, noise models, learning multivalued functions
(including real- and vector-valued functions), incremental learning and
unsupervised learning.

Speaker: David Haussler, University of California at
Santa Cruz

WEDNESDAY Aug 1

8:30 am--10:10 am Machine Learning: Learning and Problem Solving
Ballroom B, Hynes Center

Learning Effective Abstraction Hierarchies
Craig A. Knoblock, Carnegie Mellon University

Learning Search Control for Constraint-Based Scheduling
Megan Eskey and Monte Zweben, NASA Ames Research
Center

Adaptive Search by Online Explanation-Based Learning of Approximate
Negative Heuristics
Neeraj Bhatnagar, Siemens Corporate Research;
Jack Mostow, Rutgers University

Empirical Comparisons of Some Design Replay Algorithms
Brad Blumenthal, University of Texas at Austin


11:00 am--12:40 pm Machine Learning: Inductive Learning II --
Relational and Probabilistic Models
Ballroom B, Hynes Center

Effective Generalization of Relational Descriptions
Larry Watanabe and Larry Rendell, University of Illinois
at Urbana-Champaign

Inductive Learning in Probabilistic Domain
Yoichiro Nakakuki, Yoshiyuki Koseki and Midori Tanaka,
NEC Corporation

Learning Causal Trees from Empirical Data
Dan Geiger, Azaria Paz and Judea Pearl, University of
California at Los Angeles

Constructor: A System for the Induction of Probabilistic Models
Robert M. Fung and Stuart L. Crawford, Advanced Decision
Systems


Tutorial Program
2:00 pm--6:00 pm
Tutorial WP1: Explanation-Based Learning: Problems and Methods
Smadar Kedar and Steven Minton

Tutorial WP5: Genetic Algorithms and Classifier Systems
David E. Goldberg and John R. Koza

Thursday, August 2

8:30 am--10:10 am Machine Learning: Inductive Learning III
Ballroom B, Hynes Center

Adding Domain Knowledge to SBL through Feature Construction
Christopher John Matheus, GTE Laboratories Incorporated

What Should Be Minimized in a Decision Tree?
Usama M. Fayyad and Keki B. Irani, University of Michigan

Myths and Legends in Learning Classification Rules
Wray Buntine, Turing Institute

Inductive Learning in a Mixed Paradigm Setting
David B. Skalak and Edwina L. Rissland, University of
Massachusetts


2:10 pm--3:50 pm Machine Learning: Discovery and Learning Robots
Ballroom B, Hynes Center

Learning to Coordinate Behaviors
Pattie Maes and Rodney A. Brooks, Massachusetts Institute
of Technology

Two Case Studies in Cost-Sensitive Concept Acquisition
Ming Tan and Jeffrey C. Schlimmer, Carnegie Mellon
University

A Proven Domain-Independent Scientific Function-Finding Algorithm
Cullen Schaffer, Rutgers University

Automated Discovery in a Chemistry Laboratory
Jan M. Zytkow, Jieming Zhu and Abul Hussam, George
Mason University

4:40 pm--6:20 pm Machine Learning: Speedup Learning
Ballroom B, Hynes Center

Extending EBG to Term-Rewriting Systems
Philip Laird and Evan Gamble, NASA Ames Research Center

Why Prodigy/EBL Works
Oren Etzioni, Carnegie Mellon University

Operationality Criteria for Recursive Predicates
Stanley Letovsky, Carnegie Mellon University

The Utility of EBL in Recursive Domain Theories
Devika Subramanian and Ronen Feldman, Cornell University


FRIDAY, August 3

8:30 am--10:10 am Machine Learning: Generalization and Specialization
Ballroom B, Hynes Center

Generalization with Taxonomic Information
Alan M. Frisch and C. David Page, University of Illinois

A Duality Between Generalization and Discrimination
Wei-Min Shen, MCC

Incremental Non-Backtracking Focusing: A Polynomially Bounded
Generalization Algorithm for Version Spaces
Benjamin D. Smith and Paul S. Rosenbloom, University of
Southern California

Knowledge Level and Inductive Uses of Chunking (EBL)
Paul S. Rosenbloom, University of Southern California - ISI;
Jans Aasman, Rijksuniversiteit Groningen

11:00 am--12:40 pm PLENARY ADDRESS
Hynes Auditorium
Introduction by Daniel G. Bobrow, President of AAAI

Invited Speaker: Craig Fields, DARPA
Title: "AI and the National Technological Infrastructure"
------------------------------
Subject: Cognitive Science Society Meeting
From: Stevan Harnad <harnad@clarity.Princeton.EDU>
Date: Tue, 12 Jun 90 00:11:15 -0400

The XII Annual Conference of the Cognitive Science Society will take
place at MIT, July 25-28, 1990. (Immediately preceding the meeting of
the AAAI, also to take place in the Boston area).

Conference Chair: M. Piattelli-Palmarini (MIT)
Scientific Advisors: Beth Adelson (Tufts), Stephen M. Kosslyn (Harvard),
Steven Pinker (MIT), Kenneth Wexler (MIT)

Registration fees (before July 1 / after July 1):
Members       $150 / $200
Non-members   $185 / $225
Students       $90 / $110

Contact: MIT Conference Services, Room 7-111, Cambridge, MA 02139
Tel. (617) 253-1700
_______________________________________________

Outline of the program

Tuesday July 24, Wednesday July 25
Tutorials: "Cognitive Aspects of Linguistic Theory",
"Logic and Computability",
Cognitive Neuroscience"
(Require separate registrations)

Wednesday, July 25
4.00 - 7.30 pm Registration at Kresge Auditorium
7.30 - 9.00 pm First plenary session: Kresge Main Auditorium

Welcoming address by Samuel Jay Keyser, Assistant Provost of MIT,
Co-Director of the MIT Center for Cognitive Science; Welcoming address by
David E. Rumelhart (Stanford), Chairman of the Board of the Cognitive
Science Society

Keynote speaker: Noam Chomsky (MIT), "Language and Cognition"

__________

Thursday, July 26 9.00 am - 11.15 am

Symposia:

Execution-Time Response: Applying Plans in a Dynamic World
Kristian J. Hammond (University of Chicago), Chair
Phil Agre (University of Chicago)
Richard Alterman (Brandeis University)
Reid Simmons (Carnegie Mellon University)
R. James Firby (NASA Jet Propulsion Lab)

Cognitive Aspects of Linguistic Theory
Howard Lasnik (University of Connecticut), Chair
David Pesetsky (Massachusetts Institute of Technology), Chair
James T. Higginbotham (Massachusetts Institute of Technology)
John McCarthy (University of Massachusetts)

Perception, Computation and Categorization
Whitman Richards (Massachusetts Institute of Technology), Chair
Aaron Bobick (SRI International)
Ken Nakayama (Harvard University)
Allan Jepson (University of Toronto)

Paper Presentations:

Rule-Based Reasoning, Explanation and Problem-Solving

Reasoning II: Planning

11.30 - 12.45 Plenary Session (Kresge Main Auditorium)
Keynote Speaker: Morris Halle (MIT), "Words and their Parts"
Chair: Kenneth Wexler (MIT)
__________________

Thursday, July 26 Afternoon 2.00 pm - 4.15 pm

Symposia:

Principle-Based Parsing
Robert C. Berwick (Massachusetts Institute of Technology), Chair
Steven P. Abney (Bell Communications Research)
Bonnie J. Dorr (Massachusetts Institute of Technology)
Sandiway Fong (Massachusetts Institute of Technology)
Mark Johnson (Brown University)
Edward P. Stabler, Jr. (University of California, Los Angeles)

Recent Results in Formal Learning Theory
Kevin T. Kelly (Carnegie Mellon University)
Clark Glymour (Carnegie Mellon University), Chair

Self-Organizing Cognitive and Neural Systems
Stephen Grossberg (Boston University), Chair
Ennio Mingolla (Boston University)
Michael Rudd (Boston University)
Daniel Bullock (Boston University)
Gail A. Carpenter (Boston University)

Action Systems: Planning and Execution
Emilio Bizzi (Massachusetts Institute of Technology), Chair
Michael I. Jordan (Massachusetts Institute of Technology)

Paper presentations

Reasoning : Analogy

Learning and Memory : Acquisition

4.30 - 5.45 Plenary Session (Kresge Main Auditorium)
Keynote Speaker: Amos Tversky (Stanford), "Decision under Conflict"
Chair: Daniel N. Osherson (MIT)

Banquet
______________

Friday July 27 9.00 - 11.45 am

Symposia:

What's New in Language Acquisition?
Steven Pinker and Kenneth Wexler (MIT), Chair
Stephen Crain (University of Connecticut)
Myrna Gopnik (McGill University)
Alan Prince (Brandeis University)
Michelle Hollander, John Kim, Gary Marcus,
Sandeep Prasada, Michael Ullman (MIT)

Attracting Attention
Anne Treisman (University of California, Berkeley), Chair
Patrick Cavanagh (Harvard University)
Ken Nakayama (Harvard University)
Jeremy M. Wolfe (Massachusetts Institute of Technology)
Steven Yantis (Johns Hopkins University)

A New Look at Decision Making
Susan Chipman (Office of Naval Research) and Judith Orasanu (Army
Research Institute and Princeton University), Chair
Gary Klein (Klein Associates)
John A. Swets (Bolt Beranek & Newman Laboratories)
Paul Thagard (Princeton University)
Marvin S. Cohen (Decision Science Consortium, Inc.)

Designing an Integrated Architecture: The Prodigy View
Jaime G. Carbonell (Carnegie Mellon University), Chair
Yolanda Gil (Carnegie Mellon University)
Robert Joseph (Carnegie Mellon University)
Craig A. Knoblock (Carnegie Mellon University)
Steve Minton (NASA Ames Research Center)
Manuela M. Veloso (Carnegie Mellon University)

Paper presentations:

Reasoning : Categories and Concepts

Language : Pragmatics and Communication

11.30 - 12.45 Plenary Session (Kresge Main Auditorium)
Keynote speaker: Margaret Livingstone (Harvard), "Parallel Processing of Form, Color and Depth"
Chair: Richard M. Held (MIT)
_________________________

Friday, July 27 afternoon 2.00 - 4.15 pm

Symposia:

What is Cognitive Neuroscience?
David Caplan (Harvard Medical School) and
Stephen M. Kosslyn (Harvard University), Chair
Michael S. Gazzaniga (Dartmouth Medical School)
Michael I. Posner (University of Oregon)
Larry Squire (University of California, San Diego)

Computational Models of Category Learning
Pat Langley (NASA Ames Research Center) and
Michael Pazzani (University of California, Irvine), Chair
Dorrit Billman (Georgia Institute of Technology)
Douglas Fisher (Vanderbilt University)
Mark Gluck (Stanford University)

The Study of Expertise: Prospects and Limits
Anders Ericsson (University of Colorado, Boulder), Chair
Neil Charness (University of Waterloo)
Vimla L. Patel and Guy Groen (McGill University)
Yuichiro Anzai (Keio University)
Fran Allard and Jan Starkes (University of Waterloo)
Keith Holyoak (University of California, Los Angeles), Discussant

Paper presentations:

Language (Panel 1) : Phonology
Language (Panel 2) : Syntax

4.30 - 5.45 Keynote speaker: Anne Treisman (UC Berkeley), "Features and Objects"
Chair: Stephen M. Kosslyn (Harvard)

Poster Sessions
I. Connectionist Models
II. Machine Simulations and Algorithms
III. Knowledge and Problem-Solving
__________________________________

Saturday, July 28 9.00 am - 11.15 am

Symposia:

SOAR as a Unified Theory of Cognition: Spring 1990
Allen Newell (Carnegie Mellon University), Chair
Richard L. Lewis (Carnegie Mellon University)
Scott B. Huffman (University of Michigan)
Bonnie E. John (Carnegie Mellon University)
John E. Laird (University of Michigan)
Jill Fain Lehman (Carnegie Mellon University)
Paul S. Rosenbloom (University of Southern California)
Tony Simon (Carnegie Mellon University)
Shirley G. Tessler (Carnegie Mellon University)

Neonate Cognition
Richard Held (Massachusetts Institute of Technology), Chair
Jane Gwiazda (Massachusetts Institute of Technology)
Renee Baillargeon (University of Illinois)
Adele Diamond (University of Pennsylvania)
Jacques Mehler (CNRS, Paris, France) Discussant

Conceptual Coherence in Text and Discourse
Arthur C. Graesser (Memphis State University), Chair
Richard Alterman (Brandeis University)
Kathleen Dahlgren (Intelligent Text Processing, Inc.)
Bruce K. Britton (University of Georgia)
Paul van den Broek (University of Minnesota)
Charles R. Fletcher (University of Minnesota)
Roger J. Kreuz (Memphis State University)
Richard M. Roberts (Memphis State University)
Tom Trabasso and Nancy Stein

Paper presentations:

Causality, Induction and Decision-Making

Vision (Panel 1) : Objects and Features
Vision (Panel 2) : Imagery

Language : Lexical Semantics

Case-Based Reasoning


11.30 - 12.45 Keynote Speaker: Ellen Markman (Stanford), "Constraints Children Place on Possible Word Meanings"
Chair: Susan Carey (MIT)

Lunch presentation: "Cognitive Science in Europe: A Panorama"
Chair: Willem Levelt (Max Planck, Nijmegen).
Informal presentations by: Jacques Mehler (CNRS,
Paris), Paolo Viviani (University of Geneva), Paolo
Legrenzi (University of Trieste), Karl Wender
(University of Trier).
_____________________________

Saturday, July 28 Afternoon 2.00 - 3.00 pm

Paper presentations:

Vision : Attention

Language Processing

Educational Methods

Learning and Memory

Agents, Goals and Constraints

3.15 - 4.30 Keynote Speaker: Roger Schank (Northwestern), "The Story is the Message: Memory and Instruction"
Chair: Beth Adelson (Tufts)
4.30 - 5.45 Keynote Speaker: Stephen Jay Gould (Harvard), "Evolution and Cognition"
Chair: Steven Pinker (MIT)
-----------------------------
Moderator's Note: If you reply directly to the author of an article in ML-LIST
and you want your response forwarded to everyone on the list, please cc:
ml@ics.uci.edu. I can never decide what to do when the original author
forwards replies to me. I usually contact the person who replied to ask
for permission, but the mail doesn't always get through.
-----------------------------
END of ML-LIST 2.11
