Machine Learning List: Vol. 2 No. 10
Saturday, June 9, 1990

Contents:
Re: Get Real! Comments on ML 2(8)
To tolerate or to discover?
Is there learning or compilation in EBL and/or CBR?
ML talks at the AAAI 90?

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in /usr2/spool/ftp/pub/ml-list/V<X>/<N> or N.Z
where X and N are the volume and number of the issue; ID & password: anonymous

------------------------------
Date: Fri, 25 May 90 14:28:41 -0500
From: Larry Watanabe <watanabe@cs.uiuc.edu>
Subject: Get Real

Jeff Shrager writes:
>What ever happened to the goals of understanding human learning,
>explanation, development, concept formation? Instead of random test data
>or soy bean data, why aren't you feeding your systems protocols of parent
>or teacher input gathered from real homes or classrooms? Yeah, I know,
>you're trying to get at the deep computational properties and scope of
>these algorithms...blah blah blah. There's something to this, but it's a
>shame to see a whole community drop into this black hole. And anyway, how
>do you know that the algorithms you're spending so much time studying the
>details of are even vaguely plausible either as engineering tools or as
>psychological models?

One of the points Jeff raises may be the following:
there is little work on knowledge acquisition from a helpful
teacher (the exceptions I know of are mainly by researchers
in cognitive science, such as John Anderson's work and VanLehn's
Sierra system).

Perhaps learning algorithms for knowledge
acquisition from a helpful teacher should be quite different
from learning algorithms for machine discovery. Most inductive
learning algorithms (both theoretical and implemented systems)
make few assumptions about the training order, the distribution of
examples, the internal models of the teacher and student, etc.
This makes them appropriate for discovery, but
slow in situations where those assumptions do hold.
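As a minimal sketch of how a helpful teacher's assumptions buy speed
(the conjunctive-concept setting, the feature names, and the function below
are hypothetical illustration, not drawn from any of the systems mentioned):

```python
# Hypothetical illustration: learning a conjunctive concept over boolean
# features by intersecting the feature sets of positive examples.

def learn_conjunction(examples):
    """Incrementally generalize by intersecting positive examples."""
    hypothesis = None
    for features, label in examples:
        if label:                             # positive example
            if hypothesis is None:
                hypothesis = set(features)    # start maximally specific
            else:
                hypothesis &= set(features)   # generalize by intersection
    return hypothesis

# Target concept: red AND round.  A helpful teacher chooses two positives
# whose intersection is exactly the target, so the learner converges at once.
helpful_order = [
    ({"red", "round", "small"}, True),
    ({"red", "round", "large"}, True),
]
assert learn_conjunction(helpful_order) == {"red", "round"}
```

Dropping the helpful-teacher assumption does not change the algorithm here,
only the number of examples needed -- one concrete sense in which teacher
assumptions buy speed rather than competence.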

I would appreciate it if others would add to Jeff's reference
list. In particular, I would be interested in work
that characterizes the assumptions that are
appropriate for learning from a teacher, and in empirical
evidence that these assumptions hold.

The problem with just feeding parent or teacher input to
a system, without making these kinds of characterizations,
is that one may just end up with a large, complicated
system that may not be generalizable to other domains
(or even other examples!). Further, the inductive engine
may just look like a less powerful version of a discovery
system.

-Larry Watanabe watanabe@cs.uiuc.edu

------------------------------
Subject: Get Real
Date: Mon, 28 May 90 13:43:31 -0700
From: John Gennari <gennari@harlie.ICS.UCI.EDU>

Jeff: an interesting commentary, indeed. If this doesn't wake up the
ML-list, I don't know what will. I won't respond to all of your points,
but I'll give you my answer as to why ML has become "boring" (in your
words).

> What ever happened to the goals of understanding human learning,
> explanation, development, concept formation? Instead of random test data
> or soy bean data, why aren't you feeding your systems protocols of parent
> or teacher input gathered from real homes or classrooms? . . . .
> And anyway, how do you know that the algorithms you're spending so much
> time studying the details of are even vaguely plausible either as
> engineering tools or as psychological models?

> Try this. Tape record "All Things Considered" or "McNeil-Lehrer" one day
> and transcribe the explanations that take place there.

For me, those high-level goals are still there, but that does not
mean that I'm ready to go try my system out on McNeil-Lehrer. As I see it,
one of the major shortcomings of early work in AI is that it tried to claim
too much, and to solve too many goals at once. A classic example is the
history of work on machine translation.
I don't believe that tasks like understanding McNeil-Lehrer are
solvable without advances far beyond the state of the art in machine
learning. Understanding human learning is extremely difficult, and the only
way to tackle the problem is very slowly, piece by painstaking piece. This
includes understanding algorithms under different conditions, mathematical
analysis of limitations, etc, etc. If this is boring to you, I'm sorry.
I still find plenty of time to dream about how all the pieces might fit
together someday, as a real psychological model. But I'm willing to state
up front that these are just dreams, and I don't think they should get
published in the Machine Learning Journal.

I realize that this view may be just as controversial as yours. I'll
add a caveat that the above is pure opinion, and opinion of a relatively
junior researcher at that. However, I think it's important that we remember
the failures of early AI work, and not try to tackle too much, too soon.

-John Gennari
------------------------------
Date: Tue, 5 Jun 90 12:53 CDT
From: Shen Wei-Min <ai.wshen@MCC.COM>
Subject: To tolerate or to discover?

This is a question that's bothered me for a long time. I don't have an answer
for it. But since it seems fundamental, I'd like to hear other people's
ideas or opinions.

The question is this: when a program's expectation does not match
an external observation, on what basis should we
tolerate the observation as noise, and on what basis discover new
theories and revise our background knowledge?

For example, when Mendel observed that most offspring of green peas are
green but some are yellow, he could treat the yellow offspring as
noise instead of hypothesizing the theory of genes. On the other hand,
if someone told you that he or she saw an alien from outer space, you
would probably treat that report as noise and keep your belief that
nothing can travel faster than light.

This question occurs in many domains. In learning concepts from
examples, it is well known that if an algorithm is too noise-sensitive
(i.e., it changes its hypothesized concept every time a prediction fails),
then it will do poorly on noisy data. If an algorithm is too
theory-stubborn, then it will not find the target concept.
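The trade-off can be made concrete with a toy experiment (everything below
is a hypothetical sketch, not a reference to any particular system): a
constant boolean concept observed through 20% label noise, learned once by a
flip-on-failure learner and once by a majority-vote learner.

```python
import random

def run(update, labels):
    """Return prediction accuracy of an incremental learner on a label stream."""
    hyp, correct, state = True, 0, {}
    for y in labels:
        correct += (hyp == y)
        hyp = update(hyp, y, state)
    return correct / len(labels)

def noise_sensitive(hyp, y, state):
    return y                            # adopt the last observation outright

def theory_weighing(hyp, y, state):
    state[y] = state.get(y, 0) + 1      # accumulate evidence for each label
    return max(state, key=state.get)    # follow the majority so far

random.seed(0)
labels = [random.random() > 0.2 for _ in range(1000)]   # true concept: True

# The majority learner approaches the 80% accuracy ceiling set by the noise;
# the flip-on-failure learner hovers near 0.8**2 + 0.2**2 = 0.68.
assert run(theory_weighing, labels) > run(noise_sensitive, labels)
```

Pushed to the other extreme, a learner that never updates its counts at all
would be exactly the "theory-stubborn" case: immune to noise, but unable to
find the target concept if its initial guess is wrong.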

In inductive inference (e.g., learning finite automata from
input/output observations), a "tolerant" learning program will always
settle on a non-deterministic machine even if the target is
deterministic, because a failure of prediction can always be
treated as another non-deterministic branch of some state. On the other
hand, a "discovery" program will do poorly if the target is
non-deterministic, even if it can learn deterministic machines well,
because it tries to predict the behavior precisely.
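A minimal sketch of the "tolerant" behavior (the representation and class
name here are hypothetical): every prediction failure is absorbed as an
extra transition, so the recorded machine can only drift toward
non-determinism, never back.

```python
from collections import defaultdict

class TolerantFALearner:
    """Record every observed transition; never revise on a failed prediction."""
    def __init__(self):
        self.delta = defaultdict(set)      # (state, symbol) -> successor states

    def observe(self, state, symbol, next_state):
        self.delta[(state, symbol)].add(next_state)

    def predict(self, state, symbol):
        return self.delta.get((state, symbol), set())

    def is_deterministic(self):
        return all(len(s) == 1 for s in self.delta.values())

learner = TolerantFALearner()
learner.observe("q0", "a", "q1")
assert learner.is_deterministic()
learner.observe("q0", "a", "q2")      # surprise: absorbed as a new branch
assert not learner.is_deterministic()
```

A "discovery" learner would instead have to merge or split states to restore
a single successor, which is exactly what fails when the target really is
non-deterministic.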

In statistics, if learning must be incremental (although one may argue
about the necessity of incremental learning), then we face the same problem:
shall we change the patterns discovered so far to incorporate
new data that do not fit, or shall we keep the pattern and hope the
ill-fitting data will eventually prove to be noise?
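In the incremental-statistics setting the dilemma can be sketched as follows
(the 3-sigma threshold and the warm-up length are arbitrary, hypothetical
choices): one estimator incorporates every observation, the other keeps its
pattern and discards anything it deems an outlier.

```python
import math

def running_mean(xs, reject_outliers=False):
    """Welford's online mean/variance; optionally skip 3-sigma outliers."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        if reject_outliers and n > 10:     # arbitrary warm-up of 10 points
            sd = math.sqrt(m2 / n)
            if abs(x - mean) > 3 * sd:
                continue                   # tolerate: call it noise, skip it
        n += 1
        d = x - mean
        mean += d / n
        m2 += d * (x - mean)
    return mean

data = [1.0, 1.1, 0.9] * 20 + [50.0]       # one wild observation at the end
assert abs(running_mean(data, reject_outliers=True) - 1.0) < 0.01
assert running_mean(data) > 1.5            # the revising estimator moves
```

Of course, if the wild observation were the first hint of a genuinely new
regime, the rejecting estimator would never discover it -- which is exactly
the dilemma at issue.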

In default reasoning, when a new, unexpected observation occurs, one can
either treat it as an exception to the default theory or change the
default to something new. For example, before Newton the default was
that everything is motionless; things that are moving are exceptions to
the default because they are influenced by some force. Newton changed
this default: nowadays we believe by default that everything is
moving, and things that are motionless are exceptions.

Finally, I think the question "to tolerate or to discover" is of more than
philosophical interest, because in the real world we rarely know the
real truth behind our observations. Unless we have a computational solution,
computer programs will be either too ignorant to discover new theories
from natural data or too suspicious to learn anything of practical
application.

------------------------------
Date: Fri, 8 Jun 90 17:42:34 +0200
From: Fred Tusveld <hcsrnd!tusveld@relay.EU.NET>
Subject: Is there learning or compilation in EBL and/or CBR?


There seems to be a controversy around the learning competence of CBR.
In Russell's dissertation [S. J. Russell, Pitman, 1989], the term "knowledge
compilation" is coined to describe the EBL area. I'm wondering how researchers
on EBL feel about such a description.

In the Working Notes of the 1990 AAAI Spring Symposium Series on CBR,
an article appears which compares EBL and CBR. ("Towards a Unification
of CBR and EBL", by M S Braverman and R Wilensky). In general, it
seems they judge the competence of EBL and CBR to be equal. As I
haven't attended the workshop, I would like to hear some
counter-arguments.


Fred Tusveld, HCS Technology, Industrial Automation, R&D
Landdrostlaan 51, 7302 HA Apeldoorn The Netherlands
tel. +31 55 498600
tusveld@hcsrnd.uucp.nl
or hcsrnd!tusveld@relay.EU.net

------------------------------
Date: Fri, 8 Jun 90 17:44:40 +0200
From: Fred Tusveld <hcsrnd!tusveld@relay.EU.NET>
Subject: ML talks at the AAAI 90?

Does anyone know when talks are scheduled on ML, EBL and/or CBR at AAAI 90?

[I'd be happy to forward schedules of AAAI, MLC, or CogSci if anyone has them
on line: Mike]
------------------------------
END of ML-LIST 2.10
