Machine Learning List: Vol. 6 No. 20
Monday, July 25, 1994

Contents:
Revised Version of C4.5
CFP: AIJ Special Issue Devoted to Empirical AI
A conservation law of generalization performance
CMU Artificial Intelligence Repository

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>

----------------------------------------------------------------------

Date: Wed, 20 Jul 1994 15:12:03 +1000
From: Ross Quinlan <quinlan@ml2.cs.su.oz.au>
Subject: Revised Version of C4.5

There have been several small changes (minor bug fixes and improvements)
since the code was published in 1992. If you have Release 5 (i.e. the
disk from Morgan Kaufmann), you can obtain the altered files by anonymous
ftp from ftp.cs.su.oz.au, directory pub/ml, file patch.tar.Z. The file
Modifications summarizes the changes since Release 5.
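
For readers who prefer to script the transfer rather than type the ftp
session by hand, a minimal sketch using Python's ftplib follows. The host,
directory, file name, and anonymous-login convention are taken from the
announcement above; everything else (including whether the host is still
reachable) is an assumption for illustration only.

    # Sketch only: anonymous-ftp retrieval of the C4.5 patch file as
    # described above.  Replace the password with your own mail address.
    from ftplib import FTP

    ftp = FTP("ftp.cs.su.oz.au")
    ftp.login("anonymous", "user@host")            # password = your mail address
    ftp.cwd("pub/ml")
    with open("patch.tar.Z", "wb") as out:
        ftp.retrbinary("RETR patch.tar.Z", out.write)  # binary mode for the .Z archive
    ftp.quit()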

Needless to say, it is advisable to retain the old files until you are
satisfied with Release 6!

Ross Quinlan


------------------------------

Date: Mon, 25 Jul 94 16:22:31 -0500
From: David Hart <dhart@cs.umass.edu>
Subject: CFP: AIJ Special Issue Devoted to Empirical AI


Call for Papers

Special Issue of the Artificial Intelligence Journal
Devoted to Empirical Artificial Intelligence

Editors: Paul Cohen and Bruce Porter

We are looking for papers that characterize and explain the
behaviors of systems in task environments. Papers should report
results of studies of AI systems, or new techniques for studying
systems. The studies should be empirical, by which we mean "based
on observation"
(not exclusively "experimental," and certainly not
exclusively statistical hypothesis testing). Examples (some of which
are already in the AI literature) include:

A report of performance comparisons of message-understanding
systems, explaining why some systems perform better than
others in some task environments

A study of commonly-used benchmarks or test sets, explaining why
a simple algorithm performs well on many of them

A study of the empirical time and space complexity of an
important algorithm or sample of algorithms

Results of corpus-based machine-translation projects

A paper that introduces a feature of a task that suggests why
some task instances are easy and others difficult, and tests
this claim

Theoretical explanations (with appropriate empirical backing)
of unexpected empirical results, such as constant-time
performance on the million-queens problem

A statistical procedure for comparing performance profiles
such as learning curves

A resampling method for confidence intervals for statistics
computed from censored data (e.g., due to cutoffs on run times)

A paper that postulates (on empirical or theoretical grounds)
an equivalence class of systems that appear superficially
different, providing empirical evidence that, on some
important measures, members of the class are more similar
to each other than they are to nonmembers.

The empirical orientation will not preclude theoretical articles; it
is often difficult to explain and generalize results without a
theoretical framework. However, the overriding criterion for papers
will be whether they attempt to characterize, compare, predict,
explain and generalize what we observe when we run AI systems.

This is an atypical special issue because many of us think there is
nothing special about empirical AI. It isn't a subfield or a
particular topic, but rather a methodology that applies to many
subfields and topics. We are concerned, however, that despite the
scope of empirical AI, it might be underrepresented in the pages of
the Artificial Intelligence Journal. This special issue is an
experiment to find out: if the number of submitted, publishable papers
is high, then we may conclude that the Journal could publish a higher
proportion of such papers in the future, and this issue might be
inaugural rather than special.

Three principles will guide reviewers: Papers should be interesting,
they should be convincing, and in most cases they should pose a
question or make a claim. A paper might be unassailable from a
methodological standpoint, but if it is an unmotivated empirical
exercise (e.g., "I wonder, for no particular reason, which of these
two algorithms is faster"
), it won't be accepted. In the other
corner, we can envision fascinating papers devoid of convincing
evidence. Different interpretations of "convincing" are appropriate
at different stages of projects and for different kinds of projects;
for example, the standards for hypothesis testing are stricter than
those for exploratory studies, and the standards for new empirical
methods are of a different kind, pertaining to power and validity.
If, however, the focus of a paper is a claim, then convincing evidence
must be provided.

Deadline: Jan. 10, 1995.

Please contact either of the editors as soon as possible to tell us
whether you intend to submit a paper, and include a few lines
describing the paper, so we can gauge the level of interest and the
sorts of work we'll be receiving.

Request: Due to the broad nature of this call, it will be difficult
to reach all potential contributors. So, please tell a friend...



The Editorial Board for this issue includes:

B. Chandrasekaran, Eugene Charniak, Mark Drummond, John Fox, Steve
Hanks, Lynette Hirschman, Adele Howe, Rob Holte, Steve Minton, Jack
Mostow, Martha Pollack, Ross Quinlan, David Waltz, Charles Weems.


------------------------------

Date: Wed, 20 Jul 94 14:51:59 -0400
From: Larry Hunter <hunter@work.nlm.nih.gov>
Subject: A conservation law of generalization performance

Schaffer's Law states that there is no perfect learning algorithm for all
learning tasks (i.e. all distributions of examples D, mappings from examples
to classes C and sizes of training set n). In fact, when summed over all
possible learning tasks, all algorithms exhibit precisely the same
generalization performance, namely 0.

This is an elegant statement that further refines Dietterich's ML '89 paper
on the Limits of Inductive Learning, where Tom demonstrated that "there are
no general purpose learning methods that can learn any concept (from a
sample of reasonable size)."


Schaffer's Law does not preclude a learning method from exhibiting
superior generalization performance on a _particular_ learning task,
or a particular broad family of learning tasks. Don't let these
fundamental results persuade you that ML is hopeless in particular
(rather than in general). Showing my Schankian roots, I will point to
human performance as a clear existence proof that it is possible to
exhibit useful generalization performance in the extremely broad class
of complex and difficult learning problems that tend to appear in our
world.

My personal approach has been to try to identify the characteristics of
learning tasks that make them suited to a particular learning method, or
combination of methods (see my "Planning to Learn" paper in Cognitive
Science '90, or my chapter in the Machine Learning IV book). Another more
formal approach to this idea is described in Rendell & Cho's "Empirical
Learning as a Function of Concept Character,"
(MLj 1990, 5(3):267-98). An
interesting follow up to that article is Rendell & Ragavan's "Improving the
Design of Induction Methods by Analyzing Algorithmic Functionality and
Data-based Concept Complexity"
(Proc IJCAI '93, pp.952-958) and the Rendell,
et al, LFC++ paper (IJCAI '93 pp. 946-51) which actually proposes a new
(and, in my experience, very useful) learning algorithm based on that
analysis.

Anyone questing for the Holy Grail of a perfect (machine) learning method
should have given up long ago. The rest of us have been trying to figure
out how to match learning methods to tasks. As Schaffer says "In short,
empirical success in generalization is always due to problem selection.
Although it is tempting to interpret such successes as evidence in favor of
a learner, it rather shows that the learner has been well applied."
I have
been looking at the problem of automatically determining what learning
method to use, and how best to apply it for more than five years, and I
welcome Schaffer's result as validation of the appropriateness of this
research agenda.

Larry Hunter
hunter@nlm.nih.gov


------------------------------

From: Tom Dietterich <tgd@chert.cs.orst.edu>
Date: Wed, 20 Jul 94 11:15:01 PDT
Subject: A conservation law of generalization performance


Yes, of course the law is correct. Furthermore, it is just a
corollary of the results from PAC learning theory (as well as from my
own paper "Limitations of Inductive Learning" from ML-89). A learning
algorithm must "place its bets" on some space of hypotheses. If the
target concept is in (or near) this space, it will do well, otherwise
it will do poorly (worse than random). I agree with Cullen that some
people in the ML community have said and written things that suggest
that they believe a "general purpose" learning algorithm is possible.
This is particularly evident in papers about constructive induction,
but authors of multistrategy learning papers also appear to support
this view occasionally. The key lesson people should draw from this
paper (and all of the theoretical work) is that general purpose
learning is impossible.

On the other hand, this paper will not change my research nor do I
believe it should change other people's research. From PAC learning
theory, we know that the number of examples required for learning is
roughly O(log |H|), where H is the hypothesis space (for fixed values
of epsilon and delta). Hence, if we increase the size of the
hypothesis space by a factor of 10, we only require O(log 10 + log
|H|) examples. In short, we can afford to do a tremendous amount of
searching through large hypothesis spaces, constructive induction
features, multiple learning strategies, etc., without significantly
hurting our accuracy. Each factor of 10 in hypotheses costs us only
about 20 additional training examples (if we are going for 90%
accuracy).
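
To make the arithmetic concrete, here is a small illustrative calculation.
It is only a sketch: it uses the standard consistent-learner PAC bound
m >= (1/epsilon)(ln|H| + ln(1/delta)), with epsilon = 0.1 (90% accuracy) and
an arbitrarily chosen delta, which is one way of cashing out the O(log |H|)
claim above.

    # Illustrative PAC sample-size bound for a consistent learner:
    #   m >= (1/epsilon) * (ln|H| + ln(1/delta))
    # Each factor of 10 in |H| adds about ln(10)/epsilon examples,
    # i.e. roughly 23 when epsilon = 0.1.
    import math

    def pac_sample_size(h_size, epsilon=0.1, delta=0.05):
        return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

    for h_size in (10**3, 10**4, 10**5):
        print(h_size, pac_sample_size(h_size))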

What Cullen's result and all of the other theory tell us is that even if we do
all of this searching, we are still fundamentally limited to
accurately learning only a small fraction of all possible concepts.
So the key to successful application of machine learning is to ensure
that the target concept is in (or near) the hypothesis space. This is
where we employ prior knowledge. We never apply machine learning
algorithms to data unless we believe there is a pattern in that data
(with the possible exception of the folks searching for
extraterrestrial intelligence!).

Four research directions:

* we must find ways to incorporate prior knowledge into our learning
algorithms. This includes understanding how to do good vocabulary
engineering for each algorithm.

* we must understand the range of applicability of our current
algorithms (what assumptions do they make? How can we decide in
advance whether the target hypothesis is likely to be found by the
algorithm?)

* we must develop diagnostics that can tell us when our algorithms
are not doing well and can help us identify what is going wrong.
In linear regression, many diagnostics are known (e.g., various things
to look for in analyzing the residuals). But for tree- and rule-
learning algorithms and neural networks, we have very few
diagnostics.

* we must define properties of data sets that can be measured to
understand those data sets and to generate synthetic data. I
recommend two papers that are examples of excellent experimental
methodology aimed at deepening our understanding of algorithms:

1. Aha, D. W. (1992). Generalizing from case studies: A case
study. In {\it Proceedings of the Ninth International Conference on
Machine Learning} (pp. 1--10). San Francisco: Morgan Kaufmann.

2. Frank, I. E., Friedman, J. H. (1993). A statistical view of
some chemometrics regression tools. {\it Technometrics, 35} (2)
109--135. This paper applies methods from linear algebra to
conduct a large experimental study of three regularized linear
regression algorithms. I wish we could do something like this for
comparing tree pruning algorithms and neural network regularization
algorithms.

--Tom

------------------------------

Date: Wed, 20 Jul 94 15:47:55 MDT
From: dhw@santafe.edu
Subject: A conservation law of generalization performance



Yes, the conservation law is correct. (At least the general concept is;
I haven't seen the final version of Cullen's paper, so I can't vouch
for the precise form of the law presented there.) In fact, there are
many alternative versions of the basic law, some of which are much
more counter-intuitive.

And the law certainly changes the way I do my research ... In
particular, I'm currently doing work (joint w/ Manny Knill and Tal
Grossman) that uses the law as a starting point for investigating a
number of theoretical issues. Several papers are in the works already.

Some discussion of (some of the) alternate forms of the law is
contained in a paper of mine "The relationship between PAC, the
statistical physics framework, the Bayesian framework, and the VC
framework"
.

This paper will be in a proceedings I'm editing that should be
completed w/in a month or so. I plan to post ftp instructions for the
paper shortly.

(FYI, the other contributors to the proceedings: Leo Breiman, David
Haussler, Tali Tishby, Tom Dietterich, Grace Wahba, Peter Cheeseman,
Geoff Hinton, John Denker.)


David Wolpert

------------------------------

Date: Thu, 21 Jul 94 08:40:00 MDT
From: dhw@santafe.edu
Subject: A conservation law of generalization performance


A quick follow-up message concerning Schaffer's MLC '94 paper:

Cullen's conservation law is an elegant re-expression of some results
that have been discussed in the neural net community for about a
year now. (Although Cullen's paper summarizes the results well,
I have to say I prefer Haussler's phrase for them - the "no free
lunch theorems"
- to Cullen's "conservation law".)

When I originally derived them I formulated the results as follows:

Let 'f' indicate a target function, 'd' a training set, and
'm' the training set's size. Let E_i indicate expectation value
using learning algorithm i. Let C indicate off-training-set
misclassification rate. Then for *any* two algorithms '1' and '2':

i) Averaged over all f, E_1(C | f, m) = E_2(C | f, m).
ii) Averaged over all f, for any d, E_1(C | f, d) = E_2(C | f, d).
iii) Averaged over all priors P(f), E_1(C | m) = E_2(C | m).
iv) Averaged over all priors P(f), for any d, E_1(C | d) = E_2(C | d).

***

So for example (iv) refers to the usual scenario in Bayesian decision
theory (if one is using zero-one loss, rather than say quadratic loss),
and (i) to the usual scenario in the statistical mechanics of learning,
much of PAC, etc.
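
As a concrete illustration of the flavor of (i) and (ii) in the smallest
possible setting, the sketch below enumerates every boolean target function
on a five-point domain, trains two deliberately opposite rules on a fixed set
of three training inputs, and averages their off-training-set
misclassification rates over all targets; both come out to exactly 0.5. The
domain size, training inputs, and the two toy learners are assumptions chosen
for illustration, not anything taken from the paper.

    # Toy no-free-lunch check: averaged over ALL boolean targets, any two
    # learners have the same off-training-set misclassification rate.
    from itertools import product

    domain = range(5)
    train_x = [0, 1, 2]              # fixed training inputs
    test_x = [3, 4]                  # off-training-set inputs

    def majority(labels):            # learner 1: guess the majority training label
        return int(sum(labels) * 2 > len(labels))

    def minority(labels):            # learner 2: guess the opposite
        return 1 - majority(labels)

    learners = {"majority": majority, "minority": minority}
    errors = {name: 0.0 for name in learners}
    targets = list(product([0, 1], repeat=len(domain)))   # all 2^5 target functions
    for f in targets:
        train_labels = [f[x] for x in train_x]
        for name, learner in learners.items():
            guess = learner(train_labels)
            errors[name] += sum(guess != f[x] for x in test_x) / len(test_x)

    for name, total in errors.items():
        print(name, total / len(targets))    # both print 0.5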

These results have to be heavily modified if one is interested in things
like off-training-set quadratic loss; there *are* reasons to a priori
prefer not to use certain algorithms in that scenario. (For example,
one should never use an algorithm whose guesses are random; one should
instead guess the average of those random guesses.) But for most purposes,
those preferences can be viewed as 'baselines'; one can in essence
'mod them out' of the analysis to recover no-free-lunch theorems.

As I mentioned previously, there are many variations and corollaries of
these theorems. For example, one can introduce the empirical
misclassification rate into the conditioning event. The resultant
equations have strong implications for what one can/can't infer from
the VC theorems.

***

If anyone's interested, there is a TR of mine called "On overfitting
avoidance as bias"
that was placed in the connectionist neuroprose
archive about a year and a half ago. In part it is a theoretical
treatment of Cullen's nice "Overfitting avoidance as bias" paper
published in Machine Learning. It also presents (a by now rather
dated, I'm afraid) introduction to the ideas behind the no-free-lunch
theorems. The ftp address is archive.cis.ohio-state.edu. The file is
called wolpert.overfitting.ps.Z; it is compressed postscript, and
lives in the directory pub/neuroprose.

An up-to-date treatment of the no-free-lunch theorems is in the article
I referred to in my previous posting on this subject.

David Wolpert

------------------------------

From: Cullen Schaffer <schaffer@roz.hunter.cuny.edu>
Date: Sun, 24 Jul 94 11:16:35 EDT
Subject: A conservation law of generalization performance


Here are a few points in response to comments from David Wolpert, Larry
Hunter and Tom Dietterich regarding my Conservation Law paper.

1. David mainly points out that my conservation law is a small subpart
of a much larger body of theory he has developed. This is absolutely
true--all I've done is to present this small subpart in a way I
thought would be particularly comprehensible and compelling for the
machine learning research community. An important purpose of my paper
was to induce machine learning researchers to tackle David's somewhat
forbidding papers.

2. I very much like Tom's list of research directions suggested by
both the conservation law and his own past work:

* Focus on the explicit use of prior knowledge.
* Understand the bias of existing algorithms
* Develop data diagnostics to check if assumptions are justified
(or, more precisely, to suggest when they're not)
* Define important measurable data set characteristics (though, of
course, these can't be used to circumvent the conservation law;
only information _external_ to the data can be used to justify
the bias inherent in any generalization).

3. On the other hand, I think Tom is too quick to assume that the
machine learning community as a whole understands the implications of
the conservation law as well as he himself normally does. If this
were true, the paper--however sound technically--would have been an
empty exercise. Naturally, I don't think it was. Here are three
small examples to support my view that confusion on basic points is
rather widespread.

First, I myself have had a hard time swallowing some basic
consequences of the conservation law. I doubt that I'm alone in
finding it rather unintuitive, for example, (1) that the standard
default learning algorithm (the one that uniformly predicts the class
seen most often in training) performs worse than chance in
generalization on a wide variety of realistic problems and (2) that
information loss is a better decision tree splitting criterion than
information gain for some concepts of real-world importance.

Second, Tom slips up too, in his note. After agreeing that general
purpose learning is impossible, he goes on to say that, however,

    We never apply machine learning algorithms to data unless we believe
    there is a pattern in that data.

This statement--though often paraphrased in arguments regarding the
conservation law--is highly misleading. The point is not whether
there is a pattern in the data, but whether it is one of the ones our
algorithms are biased toward guessing. I'm sure Tom understands this
distinction completely, when he takes the time to be careful.
Nevertheless, like a Freudian slip, the particular way his informal
statement strays from the key point suggests that, when he is not being
careful, Tom falls into the same trap as the rest of us--assuming that
the simple fact of an underlying regularity is enough to justify the
generalization biases of standard induction algorithms.

Finally, Larry's comments (though, again, we agree much more than we
disagree) provide a third striking example of how an intuitive
understanding of the conservation law may be seriously at variance
with the law itself. Larry paraphrases the law as I'm afraid many
would:

    Schaffer's Law states that there is no perfect learning algorithm
    for all learning tasks.

In fact, this is a much weaker statement, since it leaves open the
possibility of induction algorithms that are, if not perfect
everywhere, at least considerably better than chance for a large
majority of all learning tasks. The conservation law imposes much
stricter limits on what can be achieved. It rules out, not just
learners that are perfect, but even learners that are good in any
general sense. According to the law, no learner is better than any
other for generalization, except with respect to knowledge of the
problem distribution.

4. As a final item of interest, I'd like to mention that Rob Holte is
the first (and only one so far) to respond correctly to my challenge
to identify a simple, non-technical concept for which information loss
is a better splitting criterion than information gain, as far as
generalization is concerned. I'll be happy to divulge the concept to
anyone interested, but I won't give the answer here in case, as I
hope, some people are still thinking.



------------------------------

Date: Fri, 22 Jul 94 10:04:48 +0200
From: Roberto Piola <piola@di.unito.it>
Subject: A conservation law of generalization performance

[This message arrived after Cullen was asked to respond. -MP]

Yes, I think it is correct, if we consider that in the concept space
there are "deceptive" concepts, like concepts having only positive
examples in the learning set and all negative examples in the test
set, or concepts defined "at random".

However, these theoretical considerations will not change my research,
because we usually make some assumptions about the concepts to be learnt.

It may, however, change the way in which we present our results (including
examples of where our algorithms fail and why).


Dr. Roberto Piola (piola@di.unito.it)
Department of Computer Science
University of Turin, Italy

------------------------------

Date: Thu, 21 Jul 94 15:41:56 -0400
From: Mark Kantrowitz <Mark_Kantrowitz@glinda.oz.cs.cmu.edu>
Subject: CMU Artificial Intelligence Repository


++++++++++++++++++++++++++++++++++++++++++
+ CMU Artificial Intelligence Repository +
+ and +
+ Prime Time Freeware for AI +
++++++++++++++++++++++++++++++++++++++++++

July 1994

The CMU Artificial Intelligence Repository was established by Carnegie
Mellon University to contain public domain and freely distributable
software, publications, and other materials of interest to AI researchers,
educators, students, and practitioners. The AI Repository currently
contains more than a gigabyte of material and is growing steadily.

The AI Repository is accessible for free by anonymous FTP, AFS, and WWW.
A selection of materials from the AI Repository is also being published
on CD-ROM by Prime Time Freeware and should be available for purchase
at AAAI-94 or direct by mail or fax from Prime Time Freeware (see below).

============================
Accessing the AI Repository:
============================

To access the AI Repository by anonymous FTP, ftp to:
ftp.cs.cmu.edu [128.2.206.173]
and cd to the directory:
/user/ai/
Use username "anonymous" (without the quotes) and type your email
address (in the form "user@host") as the password.

To access the AI Repository by AFS (Andrew File System), use the directory:
/afs/cs.cmu.edu/project/ai-repository/ai/

To access the AI Repository by WWW, use the URL:
http://www.cs.cmu.edu:8001/Web/Groups/AI/html/repository.html

Be sure to read the files 0.doc and readme.txt in this directory.

==============================
Contents of the AI Repository:
==============================

The AI Programming Languages and the AI Software Packages sections of
the repository are "complete". These can be accessed in the lang/ and
areas/ subdirectories of the AI Repository. Compression and archiving
utilities may be found in the util/ subdirectory. Other directories,
which are in varying states of completion, are events/ (Calendar of
Events, Conference Calls) and pubs/ (Publications, including technical
reports, books, mail/news archives).

The AI Programming Languages section includes directories for Common Lisp,
Prolog, Scheme, Smalltalk, and other AI-related programming languages.

The AI Software Packages section includes subdirectories for:

agents/ Intelligent Agent Architectures
alife/ Artificial Life and Complex Adaptive Systems
anneal/ Simulated Annealing
blackbrd/ Blackboard Architectures
bookcode/ Code From AI Textbooks
ca/ Cellular Automata
classics/ Classical AI Programs
constrnt/ Constraint Processing
dai/ Distributed AI
discover/ Discovery and Data-Mining
doc/ Documentation
edu/ Educational Tools
expert/ Expert Systems/Production Systems
faq/ Frequently Asked Questions
fuzzy/ Fuzzy Logic
games/ Game Playing
genetic/ Genetic Algorithms, Genetic Programming,
Evolutionary Programming
icot/ ICOT Free Software
kr/ Knowledge Representation, Semantic Nets, Frames, ...
learning/ Machine Learning
misc/ Miscellaneous AI
music/ Music
neural/ Neural Networks, Connectionist Systems, Neural Systems
nlp/ Natural Language Processing (Natural Language
Understanding, Natural Language Generation, Parsing,
Morphology, Machine Translation)
planning/ Planning, Plan Recognition
reasonng/ Reasoning (Analogical Reasoning, Case Based Reasoning,
Defeasible Reasoning, Legal Reasoning, Medical Reasoning,
Probabilistic Reasoning, Qualitative Reasoning, Temporal
Reasoning, Theorem Proving/Automated Reasoning, Truth
Maintenance)
robotics/ Robotics
search/ Search
speech/ Speech Recognition and Synthesis
testbeds/ Planning/Agent Testbeds
vision/ Computer Vision

The repository has standardized on using 'tar' for producing archives
of files and 'gzip' for compression.

====================================
Keyword Searching of the Repository:
====================================

To search the keyword index by mail, send a message to:
ai+query@cs.cmu.edu
with one or more lines containing calls to the keys command, such as:
keys lisp iteration
in the message body. You'll get a response by return mail. Do not
include anything else in the Subject line of the message or in the
message body. For help on the query mail server, include:
help
instead.
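
For those who would rather script the query than compose the message by
hand, a minimal sketch using Python's smtplib is given below. The recipient
address and the "keys" command syntax come from the instructions above; the
sender address and the use of a local SMTP host are assumptions for
illustration only.

    # Sketch only: mail a keyword query to the repository's query server.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "user@host"                # replace with your own address
    msg["To"] = "ai+query@cs.cmu.edu"
    msg.set_content("keys lisp iteration\n") # one or more 'keys' lines only

    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail transfer agent
        smtp.send_message(msg)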

A Mosaic interface to the keyword searching program is in the works. We
also plan to make the source code (including indexes) to this program
available, as soon as it is stable.

=========================================
Contributing Materials to the Repository:
=========================================

Contributions of software and other materials are always welcome, but
must be accompanied by an unambiguous copyright statement that grants
permission for free use, copying, and distribution, such as:

- a declaration that the materials are in the public domain, or

- a copyright notice that states that the materials are subject to
the GNU General Public License (cite version), or

- some other copyright notice (we will tell you if the copying
permissions are too restrictive for us to include the materials
in the repository)

Inclusion of materials in the repository does not modify their copyright
status in any way.

Materials may be placed in:
ftp.cs.cmu.edu:/user/ai/new/
When you put anything in this directory, please send mail to
ai+contrib@cs.cmu.edu giving us permission to distribute the files, and
state whether this permission is just for the AI Repository, or also
includes publication on the CD-ROM version (Prime Time Freeware for AI).

We would appreciate it if you would include a 0.doc file for your package;
see /user/ai/new/package.doc for a template. (If you don't have the
time to write your own, we can write it for you based on the
information in your package.)

====================================
Prime Time Freeware for AI (CD-ROM):
====================================

A portion of the contents of the repository is published annually by
Prime Time Freeware. The first issue consists of two ISO-9660 CD-ROMs
bound into a 224-page book. Each CD-ROM contains approximately 600
megabytes of gzipped archives (more than 2 gigabytes uncompressed and
unpacked). Prime Time Freeware for AI is particularly useful for folks
who do not have FTP access, but may also be useful as a way of saving
disk space and avoiding annoying FTP searches and retrievals.

Prime Time Freeware helped establish the CMU AI Repository, and sales
of Prime Time Freeware for AI will continue to help support the
maintenance and expansion of the repository. It sells (list) for US$60
plus applicable sales tax and shipping and handling charges. Payable
through Visa, MasterCard, postal money orders in US funds, and checks
in US funds drawn on a US bank. For further information on Prime Time
Freeware for AI and other Prime Time Freeware products, please contact:

Prime Time Freeware
370 Altair Way, Suite 150
Sunnyvale, CA 94086 USA
Tel: +1 408-433-9662
Fax: +1 408-433-0727
E-mail: ptf@cfcl.com

======================
Repository Maintainer:
======================

The AI Repository was established by Mark Kantrowitz in 1993 as an
outgrowth of the Lisp Utilities Repository (established 1990) and his
work on the FAQ (Frequently Asked Questions) postings for the AI, Lisp,
Scheme, and Prolog newsgroups. The Lisp Utilities Repository has been
merged into the AI Repository.

Bug reports, comments, questions and suggestions concerning the repository
should be sent to Mark Kantrowitz <AI.Repository@cs.cmu.edu>. Bug reports,
comments, questions and suggestions concerning a particular software
package should be sent to the address indicated by the author.


------------------------------

End of ML-LIST (Digest format)
****************************************
