Machine Learning List: Vol. 2 No. 18
Friday, Sept 14, 1990

Contents:
Neural Nets v. Decision Trees
m-of-n learning
International Conference on the Learning Sciences
Call for papers: Special Issue of MLJ on Reinforcement Learning
COLT '91 Call For Papers

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in /usr2/spool/ftp/pub/ml-list/V<X>/<N> or N.Z
where X and N are the volume and number of the issue; ID & password: anonymous

------------------------------
To: ml@ICS.UCI.EDU
Subject: Quinlan on Neural Nets v. Decision Trees


Sometime during the past year Quinlan gave a talk at Princeton comparing
the relative merits of decision trees and neural nets.

Does anyone have a transcript of the talk or a reference? Or thoughts about
the issue?


-- michael de la maza thefool@athena.mit.edu horse@ai.mit.edu

[See Dietterich et al.'s excellent paper in the 1990 MLC, and three
papers in the 1989 IJCAI, for starters. - Mike]
------------------------------
From: Steven Ritter <sr0o+@andrew.cmu.edu>
Subject: Re: m-of-n learning

I'm not sure if this is too theoretical for you, but Pitt and Valiant
(1988, JACM, 35, 965-984) have shown that m-of-n concepts (they call
them "Boolean threshold functions") are not learnable. The result says
that a system which considers only m-of-n concepts will not be able to
discover a target m-of-n concept (or a close approximation of one) in
polynomial time (with respect to various parameters).
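[To make the class concrete - a minimal sketch, not from the Pitt and
Valiant paper, with hypothetical function names: an m-of-n concept is
true exactly when at least m of n designated Boolean attributes are on.
For instance, 2-of-3 is equivalent to the disjunction of all pairwise
conjunctions. - Mike]

```python
from itertools import product

def m_of_n(m, x):
    # True iff at least m of the Boolean attributes in x are on
    return int(sum(x) >= m)

# 2-of-3 equals the disjunction of all pairwise conjunctions:
for x1, x2, x3 in product([0, 1], repeat=3):
    dnf = (x1 and x2) or (x1 and x3) or (x2 and x3)
    assert m_of_n(2, (x1, x2, x3)) == int(dnf)
```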

I'd be interested in any responses you get to your query.

Steve
sritter@psy.cmu.edu
------------------------------
From: "Ryszard S. Michalski" <michalsk@aic.gmu.edu>
Subject: Re: m-of-n learning

My student, J. Zhang, just defended a PhD thesis in which he showed
how to learn concepts using the so-called "extended complexes."
Such complexes include "m out of n" concepts as a special case, and thus
Zhang's method can learn both standard conjunctions and such
"counting" concepts.
A report on his thesis is being prepared. To request a copy, write to
Janet at the AI Center, GMU (jholmes@aic.gmu.edu; 703 764 6259;
alternatively, since there have recently been problems with her email
connection, jwnek@aic.gmu.edu).
---Ryszard
------------------------------
From: David Haussler <haussler@saturn.ucsc.edu>
Subject: learning m-out-of-n

Giulia Pagallo extended her FRINGE algorithm (Machine Learning, Vol. 5, No. 1)
to learn m-of-n concepts, and even disjunctions of m-of-n concepts.
Her experimental results were quite promising. The work is written up
in her thesis and may be published later. The thesis is available
by writing to jean@luna.ucsc.edu (I think)
or by writing to Giulia at giulia@cis.ucsc.edu
------------------------------
From: Chuck Anderson <chuck%henry@gte.COM>
Subject: m of n

Steve Hampson at UCI has done quite a bit of work in training
connectionist units to be m of n functions. See

"Disjunctive Models of Boolean Category Learning"
Biological Cybernetics, 55, 7-72, 1987

and

"Linear Function Neurons: Structure and Training"
Biological Cybernetics, 53, 203-217, 1986.


Chuck Anderson

GTE Laboratories Inc.
40 Sylvan Road
Waltham, MA 02254

617-466-4157
canderson@gte.com
------------------------------
From: Tom Dietterich <tgd@turing.cs.orst.edu>
Subject: Re: m of n concepts

Nick Littlestone did a lot of stuff on this. These are relatively
easy for perceptrons to learn, aren't they?
--Tom
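[They are. An m-of-n concept is a linear threshold function - all
weights 1, threshold between m-1 and m - so it is linearly separable
and the perceptron convergence theorem applies. A small sketch under
those assumptions; the choice of m=2, n=5 is arbitrary, and the names
are hypothetical. - Mike]

```python
import itertools

N, M = 5, 2  # target concept: 2-of-5

def target(x):
    return 1 if sum(x) >= M else 0

# Standard perceptron rule. The target is linearly separable
# (take all weights 1 and threshold M - 0.5), so a finite number
# of mistakes - and hence convergence - is guaranteed.
w = [0.0] * N
b = 0.0
examples = list(itertools.product([0, 1], repeat=N))
for epoch in range(500):
    mistakes = 0
    for x in examples:
        yhat = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = target(x) - yhat
        if err:
            mistakes += 1
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    if mistakes == 0:
        break  # a clean pass: the concept is learned exactly

accuracy = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == target(x)
    for x in examples
) / len(examples)
```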
------------------------------
From: Chuck Chang <chuck@aristotle.ils.nwu.edu>
Subject: International Conference on the Learning Sciences


CALL FOR PAPERS / CONFERENCE ANNOUNCEMENT

The International Conference on the Learning Sciences
(formerly The International Conference on Artificial
Intelligence and Education)
will be held August 4 - 7, 1991 at Northwestern University

Leading authorities will present keynote addresses exploring
new ideas and tools to improve teaching and learning in all
settings (schools, corporate training programs, etc.). In
addition, scholars in cognitive science, artificial
intelligence (AI), education, and psychology are invited to
submit proposals for papers. Areas of interest include, but
are not limited to, the following: cognitive development,
theories of teaching, applications of AI to educational
software, computational models of human learning, user models
and student models, innovative educational software,
simulation as a teaching tool, and evaluation of teaching
strategies.

All presentations will be 20 minutes long and followed by a
10-minute discussion. Persons wishing to present a paper
should submit (in English) 3 copies of a 300-word abstract
accompanied by a cover letter. Cover letters should include
paper title, author(s), postal and e-mail addresses, and
telephone number and should be sent to the following address:

Professor Roger C. Schank
The Institute for the Learning Sciences
Northwestern University
1890 Maple Avenue
Evanston, Illinois 60201-3142 USA

Important dates: Deadline for submission: January 4, 1991
Notification of acceptance: April 1, 1991

For further information, contact the Conference Director at
the above address, or call (708) 491-3500.
------------------------------
From: Rich Sutton <rich@gte.COM>
Subject: Call for papers: Special Issue of MLJ on Reinforcement Learning

CALL FOR PAPERS

The journal Machine Learning will be publishing a special issue on
REINFORCEMENT LEARNING in 1991. By "reinforcement learning" I mean
trial-and-error learning from performance feedback without an explicit
teacher other than the external environment. Of particular interest is
the learning of mappings from situation to action in this way.
Reinforcement learning has most often been studied within connectionist
or classifier-system (genetic) paradigms, but it need not be.
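As a toy illustration of learning from performance feedback alone (a
sketch of the general idea, not any particular method from the
literature; the two-action setup and names are hypothetical): an agent
that estimates action values purely from the rewards it receives,
occasionally exploring, otherwise exploiting its current estimates.

```python
import random

random.seed(0)

def reward(action):
    # The environment's feedback; the agent never sees this rule,
    # only the rewards it returns. Action 1 happens to be better.
    return 1.0 if action == 1 else 0.0

Q = [0.0, 0.0]   # estimated value of each action
counts = [0, 0]
epsilon = 0.1    # exploration rate

for step in range(1000):
    if random.random() < epsilon:
        a = random.randrange(2)        # explore: try an arbitrary action
    else:
        a = 0 if Q[0] > Q[1] else 1    # exploit the current estimate
    r = reward(a)
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]     # incremental mean of observed rewards
```

By trial and error the agent's estimate for the better action rises
toward its true value, with no teacher supplying correct actions.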

Manuscripts must be received by March 1, 1991, to assure full
consideration. One copy should be mailed to the editor:

Richard S. Sutton
GTE Laboratories, MS-44
40 Sylvan Road
Waltham, MA 02254
USA

In addition, four copies should be mailed to:

Karen Cullen
MACH Editorial Office
Kluwer Academic Publishers
101 Philip Drive
Assinippi Park
Norwell, MA 02061
USA

Papers will be subject to the standard review process.
------------------------------
From: Ming Li <mli@water.waterloo.edu>
Subject: COLT '91 Call For Papers



CALL FOR PAPERS

COLT '91
Fourth Workshop on Computational Learning Theory
Santa Cruz, CA, August 5-7, 1991

The fourth workshop on Computational Learning Theory will be held at
the Santa Cruz Campus of the University of California. Registration
is open, within the limits of the space available (about 150 people).

In previous years COLT has focused primarily on developments in the
analysis of learning algorithms within certain computational learning
models. This year we would like to widen the scope of the workshop by
encouraging papers in all areas that relate directly to the theory of
machine learning, including artificial and biological neural networks,
robotics, pattern recognition, information theory, decision theory,
Bayesian/MDL estimation, and cryptography. We look forward to a lively,
interdisciplinary meeting.

As part of our program, we are pleased to present two special invited talks.

"Gambling, Inference and Data Compression"
Prof. Tom Cover of Stanford University

"The Role of Learning in Autonomous Robots"
Prof. Rodney Brooks of MIT

Authors should submit an extended abstract that consists of:

(1) A cover page with title, authors' names,
(postal and e-mail) addresses, and a 200 word summary.
(2) A body not longer than 10 pages in twelve-point font.

Be sure to include a clear definition of the theoretical model used,
an overview of the results, and some discussion of their significance,
including comparison to other work. Proofs or proof sketches should be
included in the technical section. Experimental results are welcome,
but are expected to be supported by theoretical analysis. Authors
should send 11 copies of their abstract to L.G. Valiant, COLT '91,
Aiken Computing Laboratory, Harvard University, Cambridge, MA 02138.
The deadline for receiving submissions is February 15, 1991. This
deadline is FIRM. Authors will be notified by April 8; final
camera-ready papers will be due May 22. The proceedings will be
published by Morgan-Kaufmann. Each individual author will keep the
copyright to his/her abstract, allowing subsequent journal submission
of the full paper.

Chair: Manfred Warmuth (UC Santa Cruz).

Local arrangements chair: David Helmbold (UC Santa Cruz).

Program committee: Leslie Valiant (Harvard, chair), Dana Angluin (Yale),
Andrew Barron (U. Illinois), Eric Baum (NEC, Princeton), Tom Dietterich
(Oregon State U.), Mark Fulk (U. Rochester), Alon Itai (Technion, Israel),
Michael Kearns (Int. Comp. Sci. Inst., Berkeley), Ron Rivest (MIT),
Naftali Tishby (Bell Labs, Murray Hill), Manfred Warmuth (UCSC).

Hosting Institution: Department of Computer and Information Science,
UC Santa Cruz.

Papers that have appeared in journals or other conferences, or that are
being submitted to other conferences are not appropriate for
submission to COLT. Unlike previous years, this includes papers
submitted to the IEEE Symposium on Foundations of Computer Science
(FOCS). We no longer have a dual submission policy with FOCS.

Note: this call is being distributed to THEORY-NET, ML-LIST,
CONNECTIONISTS, Alife, NEWS.ANNOUNCE.CONFERENCES, COMP.THEORY,
COMP.AI, COMP.AI.EDU, COMP.AI.NEURAL-NETS, and COMP.ROBOTICS. Please
help us by forwarding it to colleagues who may be interested and
posting it on any other relevant electronic networks.
------------------------------
END of ML-LIST 2.18
