Machine Learning List: Vol. 1 No. 7
Saturday, Sept 9, 1989
Contents:
Review: Cognizers by R. Colin Johnson
Double Auction Tournament
Natural Language and Machine Learning
Use of known feature dependencies in (un)supervised learning
The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
of Volume 1 may be FTP'd from /usr2/spool/ftp/pub/ml-list/V1/<N> where N
is the number of the issue; Host ics.uci.edu; Userid & password: anonymous
----------------------------------------------------------------------
Date: Fri, 1 Sep 89 12:50:56 -0500
From: "Carl M. Kadie" <kadie@cs.uiuc.EDU>
Subject: Review: Cognizers by R. Colin Johnson
I just paged through a book called Cognizers by R. Colin Johnson. The
author is the former editor of a trade newspaper.
The book is about artificial neural nets (ANNs), which the author prefers
to call cognizers, a term he coined because it has a nice verb form.
Here are the three most provocative arguments of the book:
+ "The Brain is Not A Computer" A computer can perhaps
*simulate* the brain, but it cannot *model* the brain. ANN's
can perhaps *model* the brain.
I won't try to explain the distinction he sees between simulating and
modeling, or how it is that ANNs overcome the distinction while computers
do not.
He draws an analogy between the skyline of a city and the skyline of a
model of a city. I think most people in computer science believe that
computation is different from skylines. Something that simulates a
skyline may not be a skyline. Something that fully simulates
a computer *is* a computer. Whether the brain/mind/soul follows the
pattern of computers or skylines is a philosophical question that
the book does little to settle.
+ ANNs are better than computers because they don't use symbols. Symbols
are a problem because no one knows how to tie them to things in the
real world (the symbol-grounding problem).
One could just as easily make the opposite argument. People use
symbols (for example, in language). ANNs don't. Computers do. Therefore,
computers are better than ANNs. Is symbol use a bug or a feature?
The book does little to settle the question.
+ AI systems can't learn.
"Although the expert system can take the knowledge acquired by a human
expert and automate the reasoning process, they still cannot learn any
thing new on their own." (p. 42)
"While cognizers [ANNs] will probably fare no better as a general
problem solver, they do hold out the promise of easing the development
of specific problem solvers, because they can learn. An AI system
must be explicitly programmed by a knowledge engineer after human experts
in the domain have been interviewed." (p. 156)
It appears that the author is not familiar with the AI subfield of
machine learning. Like ANN systems, these systems learn
from examples. Unlike (most) ANN systems, (most) machine learning
systems produce rules that can be understood by people.
The recent International Joint Conference on Artificial Intelligence
included several studies that compared AI systems to ANNs on
a variety of learning problems. Both approaches learned with
high accuracy. The AI systems were generally much faster (they
learned in seconds rather than in hours).
In sum, the author does a disservice to ANNs by hyping them
for the wrong reasons. The publisher apparently realizes
this: the book begins with a disclaimer.
Carl Kadie
Beckman Institute
Department of Computer Science
University of Illinois at Urbana-Champaign
[Apparently, there was an interesting discussion led by Ross Quinlan
on the papers comparing learning algorithms at IJCAI. Anyone care
to summarize? - MP]
----------------------------------------------------------------------
Date: Sun, 3 Sep 89 13:51:45 -0500
From: "Norman H. Packard" <n@tiger.ccsr.uiuc.EDU>
Subject: double auction tournament
[This note has been condensed- MP]
DOUBLE AUCTION TOURNAMENT
Computer Strategy Challenge
Rewards: $10,000
Entries in the form of computer programs are invited for a Double
Auction Tournament to be conducted at the Santa Fe Institute in March
1990. The Double Auction game is based on the operation of the New
York Stock Exchange and the Chicago Board of Trade, and is of current
interest to economists and game theorists. Developing an effective
strategy for playing the game appears to be a simple challenge, but
is actually rather subtle.
In the computer tournament each player will be represented by a
separate computer program. A central monitor program manages the
game. Reward money totaling $10,000 will be distributed among the
participants in proportion to the total trading profit earned by their
programs. Each player's program may be written in C, Fortran or
Pascal. Skeleton programs are provided in these languages so that
participants only need to develop the routines that make the strategic
decisions, such as how high to bid or when to accept an offer.
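For concreteness, a strategic-decision routine might look something
like the C sketch below. The structure, field names, and numbers here
are invented for illustration only; the real interface is defined by
the skeleton programs distributed with the tournament.

    #include <stdio.h>

    /* Hypothetical market state; the actual skeleton programs define
       their own interface to bids, asks, and trade history. */
    struct market {
        double current_bid;   /* best standing bid */
        double current_ask;   /* best standing ask */
        double my_value;      /* this trader's valuation of the unit */
    };

    /* Return a price to bid, or a negative number to stay silent.
       Rule: improve the standing bid slightly, but never bid at or
       above our own valuation, so any resulting trade is profitable. */
    double decide_bid(const struct market *m)
    {
        double bid = m->current_bid + 0.10 * (m->my_value - m->current_bid);
        if (bid >= m->my_value || bid <= m->current_bid)
            return -1.0;      /* no profitable improving bid exists */
        return bid;
    }

    /* Accept the standing ask only when it leaves a profit. */
    int accept_offer(const struct market *m)
    {
        return m->current_ask < m->my_value;
    }

    int main(void)
    {
        struct market m = { 2.00, 3.50, 3.00 };
        printf("bid %.2f, accept=%d\n", decide_bid(&m), accept_offer(&m));
        return 0;
    }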
For further details about the tournament, please include an electronic
mail address, if available, and write to dat@sfi.santafe.edu or:
Double Auction Tournament
Santa Fe Institute
1120 Canyon Road
Santa Fe, NM 87501
----------------------------------------------------------------------
Date: Mon, 4 Sep 89 02:22 EST
From: KROVETZ@cs.umass.EDU
Subject: natural language and machine learning
In Vol. 1, No. 1, Pazzani claims that some problems involving learning
from text are "pure natural language problems", and gives anaphora
resolution and word-sense disambiguation as examples. Although I'm
not sure about the ways in which machine learning and anaphora
resolution interact, I do not understand his distinction about what
constitutes a "pure natural language" problem. I also strongly
disagree with his claim that word-sense disambiguation doesn't
involve learning. Any lexicon is always going to be incomplete due to
the dynamic nature of language; it must be augmented by techniques
that acquire lexical information from text. Sometimes a lexeme will
be a completely new word (such as "Irangate"), and sometimes it will
constitute a new sense of a word we already know. The information that
the system acquires about the sense can be used to help in the
disambiguation process. This certainly seems like learning to me!
What isn't clear is just what information the system needs to acquire
in order to be effective in disambiguation. Research on that problem
is in progress, and the recent Lexical Acquisition Workshop at IJCAI
had several papers addressing it.
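As a minimal sketch of the simplest case above (spotting a completely
new word such as "Irangate"), a system can flag tokens absent from its
lexicon as candidates for acquisition. The toy lexicon and names in
this C sketch are invented for illustration; what to learn *about*
such a candidate is the open question.

    #include <stdio.h>
    #include <string.h>

    /* A toy lexicon standing in for a real system's dictionary. */
    static const char *lexicon[] = { "the", "senate", "hearings", "began" };
    #define LEXICON_SIZE (sizeof lexicon / sizeof lexicon[0])

    static int in_lexicon(const char *word)
    {
        size_t i;
        for (i = 0; i < LEXICON_SIZE; i++)
            if (strcmp(word, lexicon[i]) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        /* Tokens from running text; "irangate" is not in the
           lexicon and so is flagged for acquisition. */
        const char *text[] = { "the", "irangate", "hearings", "began" };
        size_t i;
        for (i = 0; i < sizeof text / sizeof text[0]; i++)
            if (!in_lexicon(text[i]))
                printf("new lexeme candidate: %s\n", text[i]);
        return 0;
    }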
Bob Krovetz
----------------------------------------------------------------------
Date: Fri, 8 Sep 89 22:19 EST
From: LEWIS@cs.umass.EDU
Subject: Use of known feature dependencies in (un)supervised learning
Can anyone recommend references on clustering (conceptual or statistical)
when logical or statistical dependencies between features are known a
priori? (For instance, if we have both the feature AUTOMOBILE and the
feature VEHICLE.) Similarly, any references on choosing and training
nonlinear discriminant functions when there is a priori knowledge of
statistical dependencies between features?
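To illustrate the issue rather than answer it: if AUTOMOBILE logically
implies VEHICLE, an unweighted distance measure counts the same
underlying fact twice. One crude fix is to down-weight the implied
feature; the features and weights in this C sketch are invented for
illustration.

    #include <stdio.h>
    #include <math.h>

    #define AUTOMOBILE 0
    #define VEHICLE    1
    #define N_FEATURES 2

    /* Weighted Euclidean distance over binary features. */
    double dist(const double *a, const double *b, const double *w)
    {
        double s = 0.0;
        int i;
        for (i = 0; i < N_FEATURES; i++) {
            double d = a[i] - b[i];
            s += w[i] * d * d;
        }
        return sqrt(s);
    }

    int main(void)
    {
        double car[N_FEATURES], rock[N_FEATURES];
        car[AUTOMOBILE]  = 1.0; car[VEHICLE]  = 1.0;  /* automobile, hence vehicle */
        rock[AUTOMOBILE] = 0.0; rock[VEHICLE] = 0.0;  /* neither */

        double naive[N_FEATURES]    = { 1.0, 1.0 };
        double weighted[N_FEATURES] = { 1.0, 0.25 }; /* discount the implied VEHICLE */

        /* Naively the car and the rock differ on two features, but
           only one underlying fact distinguishes them. */
        printf("naive distance:    %.3f\n", dist(car, rock, naive));
        printf("weighted distance: %.3f\n", dist(car, rock, weighted));
        return 0;
    }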
Thanks, Dave
David D. Lewis ph. 413-545-0728
Information Retrieval Laboratory BITNET: lewis@umass
Computer and Information Science (COINS) Dept. ARPA/MIL/CS/INTERnet:
University of Massachusetts, Amherst lewis@cs.umass.edu
Amherst, MA 01003
USA UUCP: ...!uunet!cs.umass.edu!lewis@uunet.uu.net
----------------------------------------------------------------------
END of ML-LIST 1.7