Machine Learning List: Vol. 1 No. 4
Sunday, July 30, 1989

Contents:
1990 Machine Learning Conference
Cups (2)
Machine Learning in Europe
A Challenge Less Grand
Notes: Employment Advertisements

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu

----------------------------------------------------------------------
Date: Thu, 27 Jul 89 13:02:55 CDT
From: "B. Porter and R. Mooney" <ml90@cs.utexas.EDU>
Subject: ML90

Seventh International Conference on Machine Learning

The Seventh International Conference on Machine Learning will be held
at the University of Texas in Austin during June 21-23, 1990.
The conference will include presentations of refereed papers, invited
talks, and poster sessions.

The deadline for submitting papers is February 1, 1990.
Papers are limited to 12 double-spaced pages (including figures and
references) and should be formatted in a 12-point font.
Authors will be notified of acceptance by March 20, 1990 and
camera-ready copy is due by April 23, 1990.
In addition to reporting advances in current areas of machine learning,
authors are encouraged to report results on exploring novel learning tasks.

Please send papers (3 copies) to:

Machine Learning Conference
Department of Computer Sciences
University of Texas at Austin
Austin, Texas 78712-1188

For information, please contact:

Bruce Porter or Raymond Mooney
ml90@cs.utexas.edu
(512)471-7316
----------------------------------------------------------------------
Date: Mon, 24 Jul 1989 8:36:53 EDT
From: Bernard Silver <bs30%sirius@gte.COM>
Subject: Cups

The discussion about cups vs. emotions as valid domains seems to me to
miss an important point. Simplifying assumptions are not always bad
(and are usually needed); the "cheating" comes when the assumptions made
are tailored to the learning algorithm. Doing EBL using a cup theory developed
for a robot vision planner seems fine to me: it may not be a
full-fledged theory of real-world cups, but at least I haven't chosen which
simplifying assumptions are "clearly OK".
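For readers who haven't seen it, the EBL cup setup can be sketched very compactly. The rules and predicate names below are illustrative assumptions in the style of the textbook cup example, not any particular planner's theory:

```python
# Illustrative Horn-clause cup theory (hypothetical predicate names,
# in the style of the classic EBL cup example).
RULES = {
    "cup": ["liftable", "stable", "open_vessel"],
    "liftable": ["light", "has_handle"],
    "stable": ["flat_bottom"],
    "open_vessel": ["concave_upward"],
}

def explain(goal, facts):
    """Backward-chain through RULES: return the observable leaves that
    support `goal`, or None if the theory plus facts cannot prove it."""
    if goal in facts:
        return [goal]
    if goal not in RULES:
        return None
    leaves = []
    for sub in RULES[goal]:
        proof = explain(sub, facts)
        if proof is None:
            return None
        leaves.extend(proof)
    return leaves

# A training instance described in observable terms:
obj = {"light", "has_handle", "flat_bottom", "concave_upward", "made_of_china"}
print(explain("cup", obj))
# → ['light', 'has_handle', 'flat_bottom', 'concave_upward']
```

The leaves of the explanation form an operational definition of "cup"; note that the irrelevant feature (made_of_china) is dropped because it plays no role in the proof.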

I think that using other people's domains/problems is the best way to
stay honest. Our (non-learning) work at Edinburgh with PRESS was an
example of this. As our problem set, we used EVERY equation-solving
question set in a standard British exam over a 15-year period. We were
forced to drop many "obvious" simplifying assumptions as the first few
questions came in, and as a result were eventually driven to construct a
program that performed very well (about 90% correct).
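To illustrate the style of equation solving involved: PRESS was a Prolog program built on meta-level methods such as isolation. The code below is a toy reconstruction of the isolation idea only (my own sketch, not PRESS itself), limited to addition and multiplication so the inverse step is the same whichever argument holds the unknown:

```python
# Toy sketch of the "isolation" method: when the unknown x occurs
# exactly once, repeatedly strip the outermost operation from the
# left-hand side, applying its inverse to the right-hand side.
# Expressions are tuples ("add", e1, e2) or ("mul", e1, e2),
# the symbol "x", or numbers.

def contains_x(e):
    if e == "x":
        return True
    return isinstance(e, tuple) and any(contains_x(a) for a in e[1:])

def isolate(lhs, rhs):
    """Solve lhs == rhs for "x", assuming x occurs exactly once in lhs."""
    while lhs != "x":
        op, a, b = lhs
        if contains_x(a):
            other, lhs = b, a
        else:
            other, lhs = a, b
        if op == "add":        # a + b = r  ->  a = r - b
            rhs -= other
        elif op == "mul":      # a * b = r  ->  a = r / b
            rhs /= other
    return rhs

# 3 * x + 4 = 19  ->  x = 5
print(isolate(("add", ("mul", 3, "x"), 4), 19))   # → 5.0
```

A full solver must of course also handle non-commutative operations, multiple occurrences of the unknown (PRESS's "collection"), and much more.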

(Of course, you can't make everyone happy. One psychologist concluded
that this success just demonstrated the vacuous nature of the exam!
Perhaps he would have preferred us to work on cups.)
----------------------------------------------------------------------
Date: Mon, 24 Jul 89 13:38 EDT
From: Tom Fawcett <Tom%catherine@gte.COM>
Subject: Cups

I didn't intend to defend the cup domain; on the contrary, the main
point of my message was "yes, cups are exceedingly toy, but be careful
of what you choose instead." This was prompted by Pazzani's
concentrating on the cup domain as the (seemingly) only toy domain,
and by a common assumption in ML that a domain theory that deals with
real-world events/objects is somehow more respectable.

I apologize for mentioning O'Rorke's "emotions" domain in my list of
toy domains; it was a mistake to include it. But it doesn't change my
basic claim: many "real world" domain theories are constructed as
arbitrarily (and are as lacking in sophistication) as those that are
classified as "toy". As with a toy domain, the only explicit
requirement of these theories is that the learning method be able to
work with them. It is for this reason that I called them "arbitrary
simplifications" of the real world, and suggested that the
toy/real-world distinction be deemphasized.

Claims that your domain theory is very complex in some way, or is
psychologically plausible, or is actually being used by other AI
systems, are much more meaningful and useful than the simple claim
that yours is a "real world" domain.

----------------------------------------------------------------------
From: unido!gmdzi!morik@uunet.uu.NET
Subject: Employment advertisements
Date: Wed, 26 Jul 89 19:17:30 +0100

Concerning the machine learning database, it might be of interest that there
is a European project (ESPRIT P2154) whose aim is to offer several learning
algorithms in the same (UNIX) environment via a common interface. Moreover, an
expert system called the Consultant recommends a particular algorithm for
a particular application. This presupposes a characterization of the learning
algorithms and of the application areas, which is the real research subject of
the project. As a side effect, communication and exchange among European
researchers will be enhanced.
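A Consultant of this kind might be sketched as a small set of rules mapping application characteristics to candidate algorithms. Purely as an illustration: the characteristics, rules, and algorithm names below are my assumptions, not the actual MLT Consultant's knowledge base:

```python
# Hypothetical sketch of a Consultant-style recommender: coarse
# characteristics of an application select candidate learning methods.
# All rules and names here are illustrative, not the real MLT rules.
def recommend(app):
    recs = []
    if app.get("examples") == "few" and app.get("domain_theory"):
        recs.append("explanation-based learning")
    if app.get("examples") == "many" and app.get("output") == "classifier":
        recs.append("decision-tree induction")
    if not app.get("labels"):
        recs.append("conceptual clustering")
    return recs or ["no recommendation"]

print(recommend({"examples": "many", "output": "classifier", "labels": True}))
# → ['decision-tree induction']
```

The real research problem, as the message notes, is exactly the hard part this sketch assumes away: finding characterizations of algorithms and applications that make such rules sound.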
The project is called "Machine Learning Toolbox" and involves 12 partners
from industry, research institutes, and universities, including Yves
Kodratoff, Derek Sleeman, and Tim Niblett.
Currently, I'm working on the Common Knowledge Representation Language for the
MLT, which will be designed as a communication language (NOT a universal
knowledge representation for ML!). If you have any experience with such a
task, or any comment or hint, don't hesitate to mail:
Katharina Morik
GMD (German National Research Institute for Computer Science)
F3-XPS
POBox 1240
D-5205 Sankt Augustin 1
e-mail: morik@gmdzi.uucp
-- Katharina

----------------------------------------------------------------------
Date: Sun, 30 Jul 89 11:59 EST
From: 30-Jul-1989 1158 <LEWIS@cs.umass.EDU>
Subject: A Challenge Less Grand

I agree with Pazzani and Tadepalli that the state of NLP technology is
insufficient to do what we'd really like, which is to learn from text the kind
of knowledge bases that will support powerful inference. However, I'll argue
that there are currently attackable and interesting learning issues related to
natural language text.

In particular, information retrieval (IR) researchers have long applied
supervised and unsupervised learning techniques, mostly from pattern
recognition, to text retrieval. In recent years, NLP, knowledge-based, and
plausible inference techniques have started to be applied to IR (see recent
SIGIR proceedings), resulting in much more complex text representations. Making
effective use of these new representations will undoubtedly require different
learning approaches. Text retrieval has many advantages as a domain:

--It is an extremely difficult and decidedly non-toy problem in
which there is great real-world interest.
--There are methods in wide operational use which achieve a
reasonable, if far-from-perfect, level of performance, giving a
nontrivial standard of comparison.
--There is a long history of careful evaluation in IR and a number
of standard test collections.
--It's possible that even partial and errorful NLP techniques will
produce text representations that have a significant potential to
improve IR performance, especially if combined with effective learning
methods.
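As a concrete illustration of the pattern-recognition style of text retrieval described above, here is a minimal term-weighting (TF-IDF) ranking sketch. The documents and query are invented for illustration; a real evaluation would use the standard test collections mentioned above:

```python
import math
from collections import Counter

# Minimal TF-IDF ranking over a toy collection (illustrative data only).
docs = [
    "machine learning from text",
    "retrieval of text documents",
    "learning algorithms for pattern recognition",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def idf(term):
    """Inverse document frequency: rarer terms weigh more."""
    df = sum(1 for d in tokenized if term in d)
    return math.log(N / df) if df else 0.0

def score(query, doc):
    """Sum of tf * idf over the query terms."""
    tf = Counter(doc)
    return sum(tf[t] * idf(t) for t in query.split())

ranked = sorted(range(N), key=lambda i: score("text learning", tokenized[i]),
                reverse=True)
print([docs[i] for i in ranked])
```

Learning enters when the term weights themselves are fitted from relevance judgments rather than fixed by the tf * idf formula, and the richer NLP-derived representations discussed above would enlarge the feature space such methods work over.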

So I propose IR as a slightly less grand, but more tractable challenge.
I'll contact David Aha to see about getting some of the standard IR collections
into the UCI repository, and I'd like to encourage machine learning researchers
to submit papers for the 1990 AAAI Spring Symposium on Text-Based Intelligent
Systems. --Dave

David D. Lewis
Information Retrieval Laboratory
Computer and Information Science (COINS) Dept.
University of Massachusetts, Amherst
Amherst, MA 01003, USA
ph. 413-545-0728
BITNET: lewis@umass
ARPA/MIL/CS/INTERnet: lewis@cs.umass.edu
UUCP: ...!uunet!cs.umass.edu!lewis@uunet.uu.net
----------------------------------------------------------------------
From: ml-request@ics.uci.edu
Subject: Employment advertisements

Employment advertisements will continue in ML-LIST, restricted to at most
one screen of text at the end of each issue (due to the strong
support of those looking for employment).
----------------------------------------------------------------------
END of ML-LIST 1.4

