Machine Learning List: Vol. 2 No. 8
Tuesday, May 15, 1990

Generating random data to test machine learning programs.
Machine Learning Conference
AUTOCLASS III

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in /usr2/spool/ftp/pub/ml-list/V<X>/<N> or N.Z
where X and N are the volume and number of the issue; ID & password: anonymous

------------------------------
Date: Thu, 26 Apr 90 03:53:32 -0500
From: Ganesh Mani <ganesh@cs.wisc.edu>
Subject: Generating random data to test machine learning programs.

> Date: Wed, 25 Apr 90 23:18:48 -0700
> From: Michael Pazzani <pazzani@ICS.UCI.EDU>

> Positive examples can be generated easily by taking the leaves of a
> random proof tree (and perhaps adding some irrelevant features).

> However, it's not clear that negative examples can be generated as easily.
> Any suggestions?

In the case of domain theories where the proof trees can be enumerated
and are small in number (this will not be true for intractable or recursive
domain theories), negative examples can be generated by changing feature
values until the example satisfies none of the proof trees. This can be
done incrementally to generate, in some sense, negative examples that are
near misses as well as negative examples that are "far away" from the
concept (on some similarity or information-theoretic distance metric).
Thus, if we have

apple(X) :- apple_shape(X), apple_color(X),.........
apple_shape(X) :- shape(X, round).
apple_color(X) :- color(X, green).
apple_color(X) :- color(X, red).
.....


there will be at least two proof trees for the concept of an apple, and
a near-miss negative example
can be generated by including color(X, yellow) or color(X, foo) in the
example. A more severe negative example can be generated by also
including shape(X, square).
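
As a minimal sketch of this near-miss generation (hypothetical code, not
from the original post): assume an example is represented as a list of
ground feature facts, and that the apple theory above is flattened into a
single provability test, is_apple/2. In standard Prolog:

:- use_module(library(lists)).          % member/2, select/3

% is_apple/2: the apple theory, proved from the facts in the example.
is_apple(X, Example) :-
    member(shape(X, round), Example),
    member(color(X, C), Example),
    member(C, [green, red]).

% near_miss/2: flip one attribute to a value sanctioned by no rule, then
% check that no proof of the concept survives.
near_miss(Example, NearMiss) :-
    select(color(X, _), Example, Rest),
    member(Bad, [yellow, foo]),         % values outside the theory
    NearMiss = [color(X, Bad) | Rest],
    \+ is_apple(X, NearMiss).

For instance, near_miss([shape(a, round), color(a, red)], N) gives
N = [color(a, yellow), shape(a, round)], a negative example differing
from a positive one in a single feature value.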

It turns out that generating all proof trees for this purpose is not
all that expensive; moreover, critical paths in the proof tree(s) can be
identified easily (akin to the notion of cut sets for partitioning networks),
especially if the domain theory rules are ordered. For example, in the case
above the path rooted at apple_shape(X) is a critical path of degree 1.
Thus, by including shape(X, bar), where bar is not equal to round, we can
guarantee a negative example without generating the full proof trees.
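
Under the same hypothetical encoding (and reusing is_apple/2 from the
sketch above), the critical-path shortcut amounts to falsifying the one
literal that every proof needs:

% guaranteed_negative/2: falsify the critical literal shape(X, round);
% since the path rooted at apple_shape(X) has degree 1, no proof-tree
% enumeration is required.
guaranteed_negative(Example, Negative) :-
    select(shape(X, _), Example, Rest),
    Negative = [shape(X, square) | Rest],   % square stands in for any bar \== round
    \+ is_apple(X, Negative).               % redundant given the guarantee; kept as a sanity check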


---Ganesh Mani

------------------------------
From: Machine Learning Conference 1990 <ml90@cs.utexas.edu>
Date: Fri, 27 Apr 90 09:51:02 CDT
Subject: Machine Learning Conference


SEVENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING:
CALL FOR PARTICIPATION


The Seventh International Conference on Machine Learning will be
held at the University of Texas at Austin, June 21-23, 1990.
Its goal is to bring together researchers from all areas of
machine learning. The conference will include presentations of
refereed papers and invited talks by Tom Mitchell, Doug Lenat,
Lenny Pitt, and Don Michie. The deadline for early registration is
June 1, 1990.

For more information, contact:

Bruce Porter or Raymond Mooney
Department of Computer Sciences
University of Texas
Austin, Texas 78712
ml90@cs.utexas.edu
(512) 471-7316

------------------------------
Date: Fri, 11 May 90 15:18 PDT
From: Robin Hanson <Hanson@CHARON.arc.nasa.GOV>
Subject: Classes from Data! Autoclass III is here! Elvis weds Alien!

Announcing AUTOCLASS III, the mostest Bayesian Classifier ever! You
describe a bunch of cases with as many attributes as you like, and
before you can factor a 500 digit number, out pops a set of classes
defined by a small set of class parameters, with cases assigned to
classes.

FEATURES:

* Finds the best number of classes automatically.
* Can deal with positions, magnitudes, discretes, and unknown values.
* Can leap tall buildings in a single bound. (scratch that, wrong ad)
* Class/Case assignments are probabilistic, not just either/or.
* Indicates which attributes are most influential for which classes.
* Described in respectable publications (see below).
* You can stop the search and get the current best answer at any time.
* Estimates rate of progress in search, to aid deciding when to quit.
* Found meaningful new classes in the infra red spectra of stars (see below).
* Fully Bayesian - uses priors and finds max posterior classification.

How much would you pay for all this? Don't answer! There's more.
If you act now you also get:

* In Common Lisp, with source code as standard equipment.
* Organically grown and bug-free (we call them features).
* Our cheerful support staff will answer all your calls 3:00-3:01am
* Runs lickety-split on Symbolics, Sun (Franz, Lucid), & Explorer.
* Spiffy results graphics interface available on Symbolics.
* Developed by NASA, the guys who gave you the moon.
* Price: FREE! Or make an offer!
* Available via anonymous ftp: riacs.edu /pub/autoclass/read-me.text
(Internet address: 128.102.16.8)

Questions? Comments? Want the Movie Rights? Write us at:

Will Taylor <taylor@pluto.arc.nasa.gov>
NASA Ames Research Center, MS 244-17, Moffett Field, CA 94035

REFERENCES:

P. Cheeseman, et al. "AutoClass: A Bayesian Classification System",
Proceedings of the Fifth International Conference on Machine Learning,
pp. 54-64, Ann Arbor, MI, June 12-14, 1988.
P. Cheeseman, et al. "Bayesian Classification", Proceedings of the
Seventh National Conference on Artificial Intelligence (AAAI-88),
pp. 607-611, St. Paul, MN, August 22-26, 1988.
J. Goebel, et al. "A Bayesian classification of the IRAS LRS Atlas",
Astron. Astrophys. 222, L5-L8 (1989).

The Bayes Boys,

Peter Cheeseman, John Stutz, Robin Hanson, Will Taylor



------------------------------
END of ML-LIST 2.8
