Machine Learning List: Vol. 5 No. 2


Contents:
     MDL references
     ML program library
     Goal-Driven Learning: Fundamental Issues and Symposium Report
     IND Version 2.1 - creation and manipulation of decision trees from data
     Job announcement: Siemens Corporate Research
     Call for papers: Uncertainty in Artificial Intelligence

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>


I'm afraid this issue of ML-LIST isn't formatted as well as I'd like.
I'm on sabbatical at the University of Sydney until March, and I don't
have my formatting utilities installed yet. Mike


----------------------------------------------------------------------

Received: from ics.uci.edu by Paris.ics.uci.edu id aa23426; 9 Jan 93 16:49 PST
Received: from kariba.bbn.com by q2.ics.uci.edu id aa04332; 9 Jan 93 16:49 PST
To: ml@ics.uci.edu
Subject: MDL references
From: aboulang@bbn.COM
Sender: aboulang@bbn.COM
Reply-to: aboulanger@bbn.COM
Date: Sat, 9 Jan 93 19:49:29 EST
Source-Info: From (or Sender) name not authenticated.
Message-ID: <9301091649.aa04332@q2.ics.uci.edu>



Several folk, in response to my posting, asked for some references on
MDL-like methods. Some references are incomplete, so perhaps you can
help *me* complete them.


First off, one can probably say that the spirit of MDL (stochastic
complexity) methods was already present in the early work on algorithmic
complexity theory -- especially Solomonoff's work on a general
methodology for inductive inference. The big problem with algorithmic
complexity is that it is not computable.
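For those who have not run across the term: the algorithmic (Kolmogorov)
complexity of a string x relative to a universal machine U is the length of
the shortest program that makes U print x,

    K_U(x) = \min \{\, |p| : U(p) = x \,\}.

Since deciding whether a still shorter program exists runs into the halting
problem, K_U cannot be computed, and practical MDL methods substitute code
lengths under restricted, computable model classes.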

****************

Akaike's early work on an MDL-like metric for model selection is
notable in that work has been done to relate it formally to
cross-validation, asymptotically:

"An Asymptotic Equivalence of Choice of Model by Cross-Validation and
Akaike's Criterion", M. Stone, ????, 44-47.


Akaike did not take sample size into account in his metric.
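To make the sample-size point concrete (this comparison is standard and is
my addition, not something taken from the reference above): for a model with
k free parameters fitted to n data points with maximized likelihood \hat{L},

    \mathrm{AIC} = 2k - 2 \ln \hat{L}, \qquad
    \mathrm{MDL/BIC} \approx \tfrac{k}{2} \ln n - \ln \hat{L},

so the per-parameter penalty in Akaike's criterion is a constant, while the
MDL-style penalty grows with the sample size n.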

****************
There is a pair of back-to-back papers in the Journal of the Royal
Statistical Society B (1987), by Jorma Rissanen and then by C.S. Wallace &
P.R. Freeman, on their contributions to MDL-like metrics:


"Stochastic Complexity", Jorma Rissanen, J. R. Statist. Soc. B (1987)
49, No. 3, pp. 223-239 and 252-265.

"Estimation and Inference by Compact Coding", C.S. Wallace & P.R. Freeman,
J. R. Statist. Soc. B (1987) 49, No. 3, pp. 240-265.

These two papers are of interest along with the discussion section on
pages 252-265. The crux of the issue (besides the silly "who published
first") is Rissanen's use of a "universal" prior. Wallace does not use
one, being a die-hard believer in explicit priors. (Personally, I feel
that the choice of prior can often lead to radically different answers,
and any attempt at making a more "robust" method should be welcomed. This
observation comes from some work I did on a health-risk assessment
program, where I realized that misinformation in the system can really
derail a normal Bayesian framework -- we should seek a more robust
approach.)
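What both camps share, whatever the prior, is the two-part code view of
inference: choose the hypothesis H that minimizes the total length of
encoding the hypothesis plus encoding the data given the hypothesis (my
paraphrase, not a quotation from either paper),

    L(H, D) = L(H) + L(D \mid H).

Rissanen's universal prior fixes L(H) without domain knowledge; Wallace's
MML instead takes L(H) = -\log P(H) for an explicitly chosen prior P.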

Two other references to Rissanen are:

J. Rissanen, "Universal Coding, Information, Prediction &
Estimation,", IEEE Trans. Inform. Theory, vol 30, pp 629-636, 1984

& his book:

J. Rissanen, "Stochastic Complexity in Statistical Inquiry", World
Scientific, N.J., 1989


****************

An application of MDL to *unsupervised* clustering is:

"Finding Natural Clusters Having Minimum Description Length"
Richard S. Wallace & Takeo Kanade, IEEE ???, 438-442, 1990

****************
MDL is the basis for pruning methods for tree classifiers (*supervised*):

J. R. Quinlan and R. Rivest, "Inferring Decision Trees Using the
Minimum Description Length Principle," Information and Computation,
80, 227-248.
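To give the flavor of such pruning (a minimal, generic two-part-code sketch
in Python, not the coding scheme Quinlan & Rivest actually use): a subtree is
kept only if encoding its structure plus its misclassified examples costs
fewer bits than encoding the misclassified examples of the collapsed leaf.

import math

def log2_choose(n, k):
    # log2 of the binomial coefficient C(n, k), computed via lgamma
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

def leaf_cost_bits(n, errors):
    # Bits to transmit the exceptions of a majority-class leaf covering n
    # examples: log2(n+1) for the error count, plus log2 C(n, errors) to
    # say which examples are the errors.
    return math.log2(n + 1) + log2_choose(n, errors)

def prune(node, split_cost_bits=4.0):
    # node is ('leaf', n_examples, n_errors) or
    # ('split', n_examples, n_errors_if_collapsed, [children]).
    # split_cost_bits is an assumed flat cost for encoding one split.
    if node[0] == 'leaf':
        return node, leaf_cost_bits(node[1], node[2])
    _, n, errs_collapsed, children = node
    subtree_cost, pruned = split_cost_bits, []
    for child in children:
        child, cost = prune(child, split_cost_bits)
        pruned.append(child)
        subtree_cost += cost
    collapsed_cost = leaf_cost_bits(n, errs_collapsed)
    if collapsed_cost <= subtree_cost:   # cheaper described as a single leaf
        return ('leaf', n, errs_collapsed), collapsed_cost
    return ('split', n, errs_collapsed, pruned), subtree_cost

Run bottom-up over a fully grown tree, this collapses exactly those subtrees
whose extra structure does not pay for itself in saved exception bits.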

****************

As I mentioned in the short note, one can push the use of MDL earlier
into the generation phase of machine learning programs. In this paper,
it is used for both growing and pruning the decision tree:

"Construction of Tree Structured Classifiers by the MDL Principle",
Mati Wax, ICASSP (??) Proceedings, M7.10, 2157-2160, 1990.

****************

Padhraic Smyth has applied it to model selection for Markov random
fields, decision tree classifiers, and multilevel feedforward NNets:

"Admissible Stochastic Complexity Models for Classification
Problems", ??? 26.1-26.8

See also:

"A General Selection Criterion for Inductive Inference"
M.P. Georgeff & C.S. Wallace, Advances in Artificial Intelligence,
T. O'Shea (ed.), Elsevier, ECCAI, 1985.

****************

Finally, a really neat thesis by George Hart:

"Minimum Information Estimation of Structure"
MIT Laboratory for Information and Decision Systems, LIDS-TH-1664, April 1987.

His main development is the inference of FSMs from strings. The main
application is a really neat inverse problem -- inferring the different
electrical loads of a house solely from the recording of a load meter
external to the house.

Wallace also applied it to figuring out patterns in stone circles.


***************

Again, I am sorry for the incomplete references.

Regards,
Albert Boulanger
aboulanger@bbn.com



------------------------------

Received: from ics.uci.edu by Paris.ics.uci.edu id aa17834; 11 Jan 93 0:28 PST
Received: from gmdzi.gmd.de by q2.ics.uci.edu id aa13550; 11 Jan 93 0:28 PST
Received: from localhost by gmdzi.gmd.de with SMTP id AA09561
(5.65c/IDA-1.4.4 for ml@ics.uci.edu); Mon, 11 Jan 1993 09:27:24 +0100
Message-Id: <199301110827.AA09561@gmdzi.gmd.de>
To: ml@ics.uci.edu
Cc: Werner Emde <werner.emde@gmd.de>
Subject: ML program library
Date: Mon, 11 Jan 93 09:27:22 +0100
From: Werner.Emde@gmd.de
X-Mts: smtp

The Machine-Learning-Program-Library with PROLOG implementations of
basic Machine Learning algorithms has been moved from the ftp-server
of the University of Osnabrueck to the ftp-server of the German
National Research Center for Computer Science (GMD).

The programs are now accessible via ftp from 'ftp.gmd.DE' within the
directory 'gmd/mlt/ML-Program-Library'.

Please consult the attached README file of the library for further
details.

% file: gmd/mlt/ML-Program-Library/README:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This is the Machine Learning Program Library
% of the
% Special Interest Group on Machine Learning (FG 1.1.3)
% of the German Society for Computer Science (GI e.V.)
% 7 January 1993
% Anonymous ftp-Server: ftp.gmd.DE (129.26.8.90)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Included in this library are several PROLOG implementations of
basic machine learning algorithms. The contents of the repository can be
remotely copied to other network sites via ftp from
'ftp.gmd.de'. The login name is 'anonymous'; as password, enter your
own e-mail address. To find the program directories with the programs,
some small test data sets, and demonstration LOG files, enter
cd gmd/mlt/ML-Program-Library

The names and addresses of the authors, references to the origin of
the algorithms, and hints on how the programs can be started are usually
included in the program files.

The file LOGFILE lists changes and modifications to the library. This
file should make it easy for you to determine what's new since you
last looked at it.

Notes
-----

1) Software delivery: If you have implemented a basic machine learning
algorithm in PROLOG that is free of copyright restrictions, please send it to
Thomas Hoppe. Request his software documentation file for more details.

2) Bug detection: The algorithms are more or less tested. Sometimes
bugs may occur as a consequence of subtle differences between
the different PROLOG dialects (especially in built-in predicates).
If you find a 'new feature' that you did not expect, inform
Thomas Hoppe so that others can benefit from your experience.

3) Copyright: Please note the remarks on the copyright and allowed
modifications made by some program authors at the beginning of
the program files.

4) The files may also be ordered via surface or electronic mail.
People without access to the archive should send a short notice
to Thomas Hoppe using the address given below.

5) We would like to draw your attention to the fact that the Knowledge
Acquisition and Machine Learning System MOBAL (2.0) is also
accessible via the same anonymous ftp server. The system, a user
guide, and a README file are located in the directory
gmd/mlt/Mobal
(MOBAL has been developed using QUINTUS PROLOG 3.1.1 on a SUN4.)


Brief Overview of the Program Library
-------------------------------------

Each sub-directory contains a PROLOG (re-)implementation of a basic
machine learning algorithm, one (or more) test data files, and (in
some cases) a small log file produced by running the program on
the test data set using QUINTUS PROLOG (release 2.4).

README this file
LOGFILE description of last changes and additions
aq1/
aq1.pro reimplementation of Jeffrey M. Becker's AQ-PROLOG (based
on Michalski's AQ) (author: Thomas Hoppe)
aq1_1.pro a simple data set
aq1_2.pro Extensions to aq1_1.pro
arch1/
arch1.pro Winston's incremental learning procedure for
structural descriptions (author: Stefan Wrobel)
arch1_1.pro Winston's example archs
arch1.log Log-file of a sample run
arch2/
arch2.pro a minimal implementation of Winston's ARCH
(author: Ivan Bratko)
arch2_1.pro a small test set
arch2.log Log-file of a sample run
attdsc/
attdsc.pro Ivan Bratko's algorithm for learning attributional
descriptions
attdsc_1.pro Small example set for learning to recognize objects
from their silhouettes
cobweb/
cobweb.pro a PROLOG implementation of Fisher's COBWEB using
CLASSIT's evaluation function to deal with numeric
attributes (author: Joerg-Uwe Kietz)
cobweb_1.pro a simple data set describing some hotels (numeric and
nominal attributes)
cobweb_2.pro Gennari, Langley, and Fisher's rectangle
classification example (numeric attributes)
cobweb_3.pro Fisher's animal classification example (nominal
attributes)
cobweb_4.pro Gennari, Langley, and Fisher's cell classification
example (numeric attributes)
cobweb.log Log-file of running the program on the example data
sets
discr/
discr.pro Brazdil's generation of discriminations from
derivation trees (author: Thomas Hoppe)
discr_1.pro Simple abstract example
discr_2.pro Abstract example generating useful and not
useful discriminants (author: Thomas Hoppe)
ebg/
ebg.pro Basic algorithms for explanation based generalisa-
tion and partial evaluation based on Kedar-Cabelli
& McCarty's idea. Different kinds of simple PROLOG
meta-interpreters.
ebg_1.pro Suicide example for EBG
ebg_2.pro Safe_to_stack example for EBG
id3/
idt.pro ID3.1 Implementation of Quinlan's ID3 algorithm
based on the 'gain-ratio'-measure
(authors: Luis Torgo, Thomas Hoppe)
idt_1.pro simple example data set
idt_2.pro simple example data set
idt_3.pro simple example data set
invers/
invers.pro Implementation of absorption and intra-construction
operators for inverse resolution
(author: Thomas Hoppe)
invers_1.pro example calls
logic/
logic.pro Substitution matching, term generalizations,
generalized subsumption
logic_1.pro Example calls
multagent/
multagent.pro Yiu Cheung HO's implementation of Brazdil's tutoring
setting
teacher.pro Teacher's knowledge base
learner1.pro A correct Learner's knowledge base
learner2.pro An erroneous Learner's knowledge base
calls_1.pro Example calls concerning correct knowledge
calls_2.pro Example calls concerning wrong knowledge
vs/
vs.pro Implementation of Mitchell's version space algorithm
vs_1.pro a simple shape and color taxonomy
vs_1.log Log-file of a sample run




Suggestions and complaints regarding access to the ftp library
or the Log-files are welcome at any time; please send them to Werner Emde.

Additional PROLOG implementations of machine learning algorithms are
welcomed by Thomas Hoppe, who is responsible for maintaining the
program library. Thomas Hoppe has made slight changes to the programs
supplied by the different authors in order to make them independent of
a specific PROLOG dialect (as far as possible).


Thomas Hoppe
Projektgruppe KIT
Technische Universitaet Berlin
Franklinstr. 28/29
D-1000 Berlin 10
Germany
email: hoppet@cs.tu-berlin.de
Phone: +49.30.314-25494
FAX: +49.30.314-24929

Dr. Werner Emde
GMD, FIT.KI
Postfach 13 16
Schloss Birlinghoven
D-W-5205 Sankt Augustin 1
Germany
email: werner.emde@gmd.de
Phone: +49.2241.14-2282
FAX: +49.2241.14-2072



------------------------------

Received: from ics.uci.edu by Paris.ics.uci.edu id aa10718; 14 Jan 93 7:39 PST
Received: from gatech.edu by q2.ics.uci.edu id aa19545; 14 Jan 93 7:38 PST
Received: from burdell.cc.gatech.edu by gatech.edu (4.1/Gatech-9.1)
id AA24159 for ml@ics.uci.edu; Thu, 14 Jan 93 10:41:29 EST
Received: from leo.cc.gatech.edu by burdell.cc.gatech.edu (4.1/SMI-4.1)
id AA18894; for ml@ics.uci.edu; Thu, 14 Jan 93 10:38:55 EST
Received: by leo.cc.gatech.edu (4.1/SMI-4.1)
id AA00335; Thu, 14 Jan 93 10:38:51 EST
Date: Thu, 14 Jan 93 10:38:51 EST
From: Ashwin Ram <ashwin@cc.gatech.EDU>
Message-Id: <9301141538.AA00335@leo.cc.gatech.edu>
To: ml@ics.uci.edu
Cc: l-barsalou@uchicago.EDU, leake@cs.indiana.EDU, michalski@aic.gmu.EDU,
evelyn_ng@sfu.ca, pault@clarity.princeton.EDU, ashwin@cc.gatech.EDU
Subject: Goal-Driven Learning: Fundamental Issues and Symposium Report
Reply-To: Ashwin Ram <ashwin@cc.gatech.EDU>

The following report on the symposium on Goal-Driven Learning held at the
CogSci-92 conference is now available by anonymous FTP:

File: er-93-01.ps.Z
TITLE: Goal-Driven Learning: Fundamental Issues and Symposium Report
AUTHORS: David Leake and Ashwin Ram
APPEARS-IN: Technical Report #85, Cognitive Science Program, Indiana
University, Bloomington, IN, 1993
ABSTRACT:
In Artificial Intelligence, Psychology, and Education, a growing body of
research supports the view that learning is a goal-directed process.
Psychological experiments show that people with different goals process
information differently; studies in education show that goals have strong
effects on what students learn; and functional arguments from machine
learning support the necessity of goal-based focusing of learner effort. At
the Fourteenth Annual Conference of the Cognitive Science Society, a
symposium brought together researchers in AI, psychology, and education to
discuss goal-driven learning. This article presents the fundamental points
illuminated by the symposium, placing them in the context of open questions
and current research directions in goal-driven learning.

The file is retrievable using anonymous FTP from ftp.cc.gatech.edu
(130.207.3.245) from the directory /pub/ai. Login as anonymous and enter your
real name as the password. Use binary mode to download the compressed (.Z)
file, then uncompress the file using 'uncompress er-93-01.ps.Z'. See the README
file in the same directory for more information.

Ashwin Ram <ashwin@cc.gatech.edu>
Assistant Professor, College of Computing
Georgia Institute of Technology, Atlanta, Georgia 30332-0280
(404) 853-9372 (phone)
(404) 853-9378 (fax)

------------------------------

Received: from ics.uci.edu by Paris.ics.uci.edu id aa25392; 14 Jan 93 16:00 PST
Received: from ptolemy-ethernet.arc.nasa.gov by q2.ics.uci.edu id aa18058;
14 Jan 93 16:00 PST
Received: from madonna.arc.nasa.gov by ptolemy.arc.nasa.gov (4.1/) id <AA29374>; Thu, 14 Jan 93 16:00:02 PST
Date: Thu, 14 Jan 93 16:00:02 PST
From: Wray Buntine <wray@ptolemy.arc.nasa.GOV>
Message-Id: <9301150000.AA29374@ptolemy.arc.nasa.gov>
Received: by madonna.arc.nasa.gov (4.1/SMI-4.1)
id AA04893; Thu, 14 Jan 93 16:01:09 PST
To: ml@ics.uci.edu
Subject: please change my previous contribution to this!!!!


IND Version 2.1 - creation and manipulation of decision trees from data
----------------------------------------------------------------------

A common approach to supervised classification and prediction in
artificial intelligence and statistical pattern recognition
is the use of decision trees. A tree is "grown" from
data using a recursive partitioning algorithm, with the aim of producing
a tree that (hopefully) predicts classes well on new data.
Standard algorithms are CART (by Breiman, Friedman, Olshen and Stone)
and ID3 and its successor C4.5 (by Quinlan). More recent techniques
are Buntine's smoothing and option trees, Wallace and Patrick's MML method,
and Oliver and Wallace's MML decision graphs, which extend the tree
representation to graphs. IND reimplements and integrates these
methods. The newer methods produce more accurate class probability
estimates, which are important in applications like diagnosis.
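The recursive partitioning idea itself is simple enough to sketch (a generic
illustration using a plain information-gain split on symbolic attributes, not
IND's code and not any of the Bayesian/MML scores mentioned above):

import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    return -sum((c / len(labels)) * math.log2(c / len(labels))
                for c in counts.values())

def grow(rows, labels, attributes):
    # rows: list of dicts mapping attribute name -> symbolic value
    # labels: class label per row
    if len(set(labels)) <= 1 or not attributes:
        return ('leaf', Counter(labels).most_common(1)[0][0])
    def gain(attr):
        groups = {}
        for row, y in zip(rows, labels):
            groups.setdefault(row[attr], []).append(y)
        remainder = sum(len(ys) / len(labels) * entropy(ys)
                        for ys in groups.values())
        return entropy(labels) - remainder
    best = max(attributes, key=gain)       # attribute with highest gain
    children = {}
    for value in set(row[best] for row in rows):
        subset = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = zip(*subset)
        children[value] = grow(list(sub_rows), list(sub_labels),
                               [a for a in attributes if a != best])
    return ('split', best, children)

Prediction then walks the tree, following at each split the child matching
the new instance's value for the split attribute, until a leaf is reached.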

IND is applicable to most data sets consisting of
independent instances, each described by a fixed length vector of
attribute values. An attribute value may be a number, one of a
set of attribute-specific symbols, or omitted. One of the
attributes is designated the "target", and IND grows trees
to predict the target. Prediction can then be done on new data or
the decision tree printed out for inspection.

IND provides a range of features and styles with convenience
for the casual user as well as fine-tuning for the advanced user or
those interested in research. Advanced
features allow more extensive search, interactive control and display
of tree growing, and Bayesian and MML
algorithms for tree pruning and smoothing. These often produce
more accurate class probability estimates at the leaves.
IND also comes with a comprehensive experimental control suite.
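In its simplest form, the "smoothing" of leaf estimates amounts to adding a
pseudo-count per class so that small leaves do not report probabilities of
exactly 0 or 1. The Laplace correction below is only the textbook version of
this idea; IND's Bayesian and MML smoothing is more elaborate.

def leaf_class_probs(class_counts, n_classes, alpha=1.0):
    # class_counts: dict mapping class index -> training examples of that
    # class reaching the leaf; alpha: pseudo-count per class (alpha = 1 is
    # the classic Laplace correction)
    total = sum(class_counts.values()) + alpha * n_classes
    return {c: (class_counts.get(c, 0) + alpha) / total
            for c in range(n_classes)}

# A leaf holding 3 examples of class 0 and none of class 1:
# leaf_class_probs({0: 3}, 2)  ->  {0: 0.8, 1: 0.2}  rather than  {0: 1.0, 1: 0.0}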

IND consists of four basic kinds of routines: data manipulation
routines, tree generation routines, tree testing routines, and
tree display routines. The data manipulation routines are used
to partition a single large data set into smaller training and
test sets. The generation routines are used to build
classifiers. The test routines are used to evaluate classifiers
and to classify data using a classifier. And the display
routines are used to display classifiers in various formats.

IND is written in K&R C, with controlling scripts in the "csh"
shell of UNIX, and extensive UNIX man entries. It is designed to be
used on any UNIX system, although it has only been thoroughly tested
on SUN platforms. Assistance with porting to other machines will
be given in some cases.
IND comes with a manual giving a guide to tree methods
and pointers to the literature, and several companion documents.

New Features Over Versions 1.X
==============================

Improved user interface and documentation.
Debugged version of CART classification trees (not regression).
Reimplementation of some features of C4.5.
Decision graphs a la Oliver and Wallace, implemented by Jon Oliver.
Bayesian option trees no longer need hand-holding.
More portable.

Availability
============

IND Version 2.0 will shortly be available through NASA's COSMIC
facility. IND Version 2.1 is available strictly as unsupported
beta-test software. If you're interested in obtaining a beta-test copy,
with no obligation on your part to provide feedback, contact

Wray Buntine
NASA Ames Research Center
Mail Stop 269-2
Moffett Field, CA, 94035
email: wray@kronos.arc.nasa.gov

Unfortunately, the beta-test version is not available overseas.
This is standard NASA policy. Version 2.0, however, should
be available soon at a modest price from NASA's COSMIC center
in Georgia, USA. Enquiries should be directed to:

mail (to customer support): service@cossack.cosmic.uga.edu
Phone: (706) 542-3265 and ask for customer support
FAX: (706) 542-4807.



------------------------------

Received: from ics.uci.edu by Paris.ics.uci.edu id aa13666; 15 Jan 93 6:14 PST
Received: from siemens.siemens.com by q2.ics.uci.edu id aa06604;
15 Jan 93 6:13 PST
Received: from sol.siemens.com by siemens.siemens.com with smtp
(Smail3.1.28.1 #5) id m0nCro3-0019RAC; Fri, 15 Jan 93 09:13 EST
Received: by sol.siemens.com (4.1/RTL-CLIENT-SUBSIDIARY)
id AA02994; Fri, 15 Jan 93 09:13:50 EST
Date: Fri, 15 Jan 93 09:13:50 EST
From: Ellen Voorhees <ellen@sol.siemens.COM>
Message-Id: <9301151413.AA02994@sol.siemens.com>
To: ml@ics.uci.edu
Subject: Job announcement

The learning department of Siemens Corporate Research in Princeton, New Jersey
is looking to hire a researcher interested in statistical
and knowledge-based methods for natural language processing, text retrieval,
and text categorization. The position requires either a PhD (preferred)
or a master's degree with some experience in an appropriate field.
The main responsibility of the successful candidate will be to conduct research
in automatic information retrieval and (statistical) natural language
processing. Tasks include setting up and running experiments, programming, etc.

People interested in the position should send a PLAIN ASCII resume
to ellen@learning.siemens.com or a hardcopy of the resume to:
Human Services
Department EV
Siemens Corporate Research, Inc.
755 College Road East
Princeton, NJ 08540
Siemens is an equal opportunity employer.


Ellen Voorhees
Member of Technical Staff
Siemens Corporate Research, Inc.

------------------------------

Received: from ics.uci.edu by Paris.ics.uci.edu id aa06095; 15 Jan 93 22:53 PST
Received: from lanai.cs.ucla.edu by q2.ics.uci.edu id aa06227;
15 Jan 93 22:53 PST
Received: by lanai.cs.ucla.edu
(Sendmail 5.61d+YP/3.21) id AA24696;
Fri, 15 Jan 93 22:53:44 -0800
Date: Fri, 15 Jan 93 22:53:42 PST
From: David Heckerman <heckerma@cs.ucla.EDU>
Message-Id: <930116.065342z.24694.heckerma@lanai.cs.ucla.edu>
To: ML@ics.uci.edu
Subject: call for papers

To: uai-list
Subject: Call for Papers
-------

NINTH ANNUAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE

July 9-11, 1993, Washington D.C.

CALL FOR PAPERS

The ninth annual Conference on Uncertainty in Artificial Intelligence will be
devoted to methods for reasoning under uncertainty as applied to problems in
artificial intelligence. The conference's scope covers the full range of
approaches to automated and interactive reasoning and decision making under
uncertainty, including both qualitative and numeric methods.

We seek papers on fundamental theoretical issues, on computational techniques
for uncertain reasoning, and on the foundations of alternative paradigms of
uncertain reasoning. Topics of interest include:

- Foundations of uncertainty concepts
- Representations of uncertain knowledge and their semantics
- Knowledge acquisition
- Construction of uncertainty models from data
- Uncertainty in machine learning
- Automated planning and decision making under uncertainty
- Algorithms for uncertain inference
- Pooling of uncertain evidence
- Belief updating and inconsistency handling in uncertain knowledge bases
- Explanation and summarization of uncertain information
- Control of reasoning and real-time architectures

This year, we hope to attract more contributions that emphasize real-world
applications of uncertain reasoning. Questions of particular interest
include:

- Why was it necessary to represent uncertainty in your domain?
- What kind of uncertainties does your application address?
- Why did you decide to use your particular uncertainty formalism?
- What theoretical problems, if any, did you encounter?
- What practical problems did you encounter?
- Did users of your system find the results or recommendations useful?
- Did the introduction of your system lead to improvements in reasoning
or decision making?
- What methods were used to validate the effectiveness of the systems?

Papers will be carefully refereed for originality, significance, technical
soundness, and clarity of exposition. Papers may be accepted for presentation
in plenary or poster sessions. Some key applications-oriented work may be
presented both in a plenary session and in a poster session where more
technical details can be discussed. All accepted papers will be included in
the published proceedings. Outstanding student papers may be selected for
special distinction.

Five copies of each paper should be sent to one of the Program Co-Chairs by
February 5, 1993. The first page should include a descriptive title, the
names, addresses, and student status of all authors, a brief abstract, and
salient keywords or other topic indicators. Acceptance notices will be sent
by March 29, 1993. Final camera-ready papers, incorporating reviewers'
suggestions, will be due approximately five weeks later. There will be an
eight-page limit on proceedings papers, with a few extra pages available for
a fee.

Program Co-Chairs (paper submissions):

David Heckerman
Department of Computer Science, UCLA
Boelter Hall, Room 3531
405 Hilgard Avenue
Los Angeles, CA 90024-1596
tel: (310) 825-2695, fax: (310) 825-2273
email: heckerman@cs.ucla.edu

Abe Mamdani
Department of Electronic Engineering
Queen Mary & Westfield College
Mile End Road
London E1 4NS
tel: +44-71-975-5341, fax: +44-81-981-0259
e-mail: e.h.mamdani@qmw.ac.uk

General Co-Chair (conference inquiries):

Michael P. Wellman
Department of EECS, University of Michigan
Artificial Intelligence Laboratory
Ann Arbor, MI 48109
tel: (313) 764-6894, fax: (313) 763-1260
email: wellman@engin.umich.edu

Conference Committee: Piero Bonissone, Peter Cheeseman, Mike Clarke, Bruce
D'Ambrosio, Didier Dubois, Max Henrion, John Fox, Rudolf Kruse, Henry Kyburg,
John Lemmer, Tod Levitt, Ramon Lopez de Mantaras, Serafin Moral, Ramesh Patil,
Judea Pearl, Enrique Ruspini, Ross Shachter, Glenn Shafer, Philippe Smets,
Kurt Sundermeyer, Lotfi Zadeh.



------------------------------

End of ML-LIST (Digest format)
****************************************
