Machine Learning List: Vol. 6 No. 1
Saturday, January 22, 1994
Contents:
KDD-94: AAAI Workshop on Knowledge Discovery in Databases
Call for Participation
AI and Stats announcement
CFP: ALT'94
book announcement
Data mining report available
INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE
The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>
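The retrieval recipe above can also be scripted; here is a sketch using Python's standard ftplib, assuming the host, directory layout, and anonymous-login convention stated above (the helper names are ours, and the 1994 server may no longer exist):

```python
from ftplib import FTP

def backissue_path(volume, number, compressed=False):
    """Path of a back issue under the scheme pub/ml-list/V<X>/<N> (or N.Z)."""
    name = f"{number}.Z" if compressed else str(number)
    return f"pub/ml-list/V{volume}/{name}"

def fetch_backissue(volume, number, email):
    # Anonymous login: ID "anonymous", password = your mail address.
    with FTP("ics.uci.edu") as ftp:
        ftp.login(user="anonymous", passwd=email)
        with open(f"V{volume}N{number}", "wb") as out:
            ftp.retrbinary(f"RETR {backissue_path(volume, number)}", out.write)

print(backissue_path(6, 1))  # pub/ml-list/V6/1
```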
----------------------------------------------------------------------
Sender: rick@sparky.sterling.com (Richard Ohnemus)
Date: Fri, 7 Jan 1994 14:39:53 GMT
Subject: KDD-94: AAAI Workshop on Knowledge Discovery in Databases
C a l l F o r P a p e r s
KDD-94: AAAI Workshop on Knowledge Discovery in Databases
Seattle, Washington, July 31-August 1, 1994
Knowledge Discovery in Databases (KDD) is an area of common interest for
researchers in machine learning, machine discovery, statistics, intelligent
databases, knowledge acquisition, data visualization and expert systems. The
rapid growth of data and information created a need and an opportunity for
extracting knowledge from databases, and both researchers and application
developers have been responding to that need. KDD applications have been
developed for astronomy, biology, finance, insurance, marketing, medicine,
and many other fields. Core problems in KDD include representation issues,
search complexity, the use of prior knowledge, and statistical inference.
This workshop will continue in the tradition of the 1989, 1991, and 1993 KDD
workshops by bringing together researchers and application developers from
different areas, and focusing on unifying themes such as the use of domain
knowledge, managing uncertainty, interactive (human-oriented) presentation,
and applications. The topics of interest include:
Applications of KDD Techniques
Interactive Data Exploration and Discovery
Foundational Issues and Core Problems in KDD
Machine Learning/Discovery in Large Databases
Data and Knowledge Visualization
Data and Dimensionality Reduction in Large Databases
Use of Domain Knowledge and Re-use of Discovered Knowledge
Functional Dependency and Dependency Networks
Discovery of Statistical and Probabilistic models
Integrated Discovery Systems and Theories
Managing Uncertainty in Data and Knowledge
Machine Discovery and Security and Privacy Issues
We also invite working demonstrations of discovery systems. The workshop
program will include invited talks, a demo and poster session, and panel
discussions. To encourage active discussion, workshop participation will be
limited. The workshop proceedings will be published by AAAI. As in previous
KDD Workshops, a selected set of papers from this workshop will be considered
for publication in journal special issues and as chapters in a book.
Please submit 5 *hardcopies* of a short paper (maximum 12 single-spaced
pages, 1-inch margins, 12pt font; the cover page must show the authors'
full addresses and e-mail and include a 200-word abstract plus 5 keywords)
to reach the workshop chairman on or before March 1, 1994.
Usama M. Fayyad (KDD-94) | Fayyad@aig.jpl.nasa.gov
AI Group M/S 525-3660 |
Jet Propulsion Lab | (818) 306-6197 office
California Institute of Technology | (818) 306-6912 FAX
4800 Oak Grove Drive |
Pasadena, CA 91109 |
************************************* I m p o r t a n t D a t e s **********
* Submissions Due: March 1, 1994 *
* Acceptance Notice: April 8, 1994 Final Version due: April 29, 1994 *
******************************************************************************
Program Committee
=================
Workshop Co-Chairs:
Usama M. Fayyad (Jet Propulsion Lab, California Institute of Technology)
Ramasamy Uthurusamy (General Motors Research Laboratories)
Program Committee:
Rakesh Agrawal (IBM Almaden Research Center)
Ron Brachman (AT&T Bell Laboratories)
Leo Breiman (University of California, Berkeley)
Nick Cercone (University of Regina, Canada)
Peter Cheeseman (NASA AMES Research Center)
Greg Cooper (University of Pittsburgh)
Brian Gaines (University of Calgary, Canada)
Larry Kerschberg (George Mason University)
Willi Kloesgen (GMD, Germany)
Chris Matheus (GTE Laboratories)
Ryszard Michalski (George Mason University)
Gregory Piatetsky-Shapiro (GTE Laboratories)
Daryl Pregibon (AT&T Bell Laboratories)
Evangelos Simoudis (Lockheed Research Center)
Padhraic Smyth (Jet Propulsion Laboratory)
Jan Zytkow (Wichita State University)
------------------------------
Date: Wed, 29 Dec 1993 15:20:20 -0700 (MST)
From: Jeffrey Van Baalen <Jeff.Vanbaalen@uwyo.edu>
Subject: Call for Participation
WORKSHOP ON THEORY REFORMULATION
AND ABSTRACTION
Jackson Hole, Wyoming, May 22-24, 1994
Workshop Chairman:
Jeffrey Van Baalen (University of Wyoming, USA)
Program Committee:
Tom Ellman (Rutgers University, USA)
Fausto Giunchiglia (IRST and University of Trento, Italy)
Robert Holte (University of Ottawa, Canada)
Michael Lowry (NASA Ames Research Center, USA)
CALL FOR PARTICIPATION
From the inception of artificial intelligence research it has been
recognized that problem reformulation and abstraction are crucial
capabilities for intelligent behavior. A common belief persists that
improvement in our understanding of these capabilities could
revolutionize knowledge-based and learning systems. However, only
limited success has been achieved in incorporating problem
reformulation and abstraction into such systems.
Considerable interest in recent years in improving the theory and
practice of problem reformulation has led to a series of workshops on
problem reformulation. The first was the ``Representation Issues in
Machine Learning'' workshop organized as a part of the 1989 Machine
Learning Workshop. In 1990, the first Workshop on Change of
Representation and Problem Reformulation was held in Menlo Park, CA.
Then in 1992, the second in this series was held in Monterey, CA.
In parallel there has been a series of workshops on abstraction
methods. Two of these workshops were, "Automatic Generation of
Approximations and Abstractions," a AAAI workshop held in Boston
(1990) and, "Approximation and Abstraction of Computational Theories,"
held at AAAI in San Jose (1992). The workshops were intended to address
two distinct computational tasks: (1) The synthesis problem: Given a
complete, correct but intractable domain theory, construct an
approximate or abstract version of the theory; (2) The selection
problem: Given a set of different approximate or abstract domain
theories, select one that is most appropriate to the problem at hand.
There was considerable overlap between the attendees of the two
separate workshop series, and many noted that the goals of the two
research lines were remarkably similar. The present workshop
represents an attempt to merge these two series and will enable
intensive interaction among researchers from a variety of disciplines
with an interest in representation change, problem reformulation, and
abstraction. It will include selected presentations, discussions, and
a keynote address by Saul Amarel. Attendance is by invitation only
and is limited. Interested parties should send a research summary to
be reviewed by the committee. Submissions are welcomed in the areas
of:
- Techniques for automating reformulation or abstraction.
- Theoretical analyses of reformulation or abstraction.
- Reformulation or abstraction techniques applied to
planning, scheduling, control, constraint satisfaction,
theorem proving or other search tasks.
- Reformulation or abstraction techniques for simulation,
monitoring, design or diagnosis of physical systems.
- Reformulation or abstraction for reasoning by analogy.
- Reformulation or abstraction for constructive induction.
Investigators studying reformulation and abstraction in all areas of
Artificial Intelligence are encouraged to apply to attend. In the
previous workshops, diverse groups of participants from fields such as
Software Synthesis, Machine Learning, Reasoning about Physical
Systems, Automated Design, Logic Programming and Knowledge
Representation have contributed to a rich and lively exchange of
ideas. We hope and expect that the upcoming workshop will include an
equally diverse group of participants.
In addition to their research summary, participants who wish to
present should submit an extended abstract to be reviewed by the
committee. Abstracts should be 3000-5000 words. This is a workshop:
presentations do not have to be on new work.
Accepted participants will be invited to submit full-length papers, up to
twenty pages, for the proceedings. The proceedings will be distributed to
the workshop participants.
Please send submissions to the chairman at the address below or by
e-mail, in PostScript form only. If submitting by FAX, only one copy
is needed; if submitting by mail please send three copies. Please
include several ways of contacting the principal author: electronic
mail addresses and telephone numbers are preferred, in that order. In
case of multiple authors, please indicate which authors wish to
participate.
Submissions received after 15 March 1994 will not be considered. The
decisions of the committee will be mailed 11 April 1994; full length papers
are due 6 May 1994.
Chairman: Jeffrey Van Baalen
Computer Science Department
P.O. Box 3682
University of Wyoming
Laramie, WY 82071
E-mail: jvb@uwyo.edu
FAX: (307) 766-4036; Telephone: (307) 766-6231
A hardcopy version of this call can be obtained by sending your address or
FAX number to the chairman.
------------------------------
Date: Fri, 14 Jan 94 08:40:08 CST
From: "Douglas H. Fisher" <dfisher@vuse.vanderbilt.edu>
Subject: AI and Stats announcement
Call For Papers
Fifth International Workshop on
Artificial Intelligence
and
Statistics
January 4-7, 1995
Ft. Lauderdale, Florida
PURPOSE:
This is the fifth in a series of workshops which has
brought together researchers in Artificial Intelligence and in
Statistics to discuss problems of mutual interest. The exchange has
broadened research in both fields and has strongly encouraged
interdisciplinary work.
This workshop will have as its primary theme:
``Learning from data''
Papers on other aspects of the interface between AI & Statistics
are *strongly* encouraged as well (see TOPICS below).
FORMAT:
To encourage interaction and a broad exchange of ideas, the
presentations will be limited to about 20 discussion papers in single
session meetings over three days (Jan. 5-7). Focussed poster
sessions will provide the means for presenting and discussing the
remaining research papers. Papers for poster sessions will be treated
equally with papers for presentation in publications.
Attendance at the workshop will *not* be limited.
The three days of research presentations will be preceded by a day
of tutorials (Jan. 4). These are intended to expose researchers
in each field to the methodology used in the other field. The Tutorial
Chair is Prakash Shenoy. Suggestions on tutorial topics can be sent to
him at pshenoy@ukanvm.bitnet.
LANGUAGE:
The language will be English.
TOPICS OF INTEREST:
The fifth workshop has a primary theme of
``Learning from data''
At least one third of the workshop schedule will be set aside for
papers with this theme. Other themes will be developed according
to the strength of the papers in other areas, including but not
limited to:
- integrated man-machine modeling methods
- empirical discovery and statistical methods for knowledge
acquisition
- probability and search
- uncertainty propagation
- combined statistical and qualitative reasoning
- inferring causation
- quantitative programming tools and integrated software for
data analysis and modeling.
- discovery in databases
- meta data and design of statistical data bases
- automated data analysis and knowledge representation for
statistics
- cluster analysis
SUBMISSION REQUIREMENTS:
Three copies of an extended abstract (up to four pages) should be
sent to
H. Lenz, Program Chair or D. Fisher, General Chair
5th Int'l Workshop on AI & Stats 5th Int'l Workshop on AI & Stats
Free University of Berlin Box 1679, Station B
Department of Economics Department of Computer Science
Institute for Statistics Vanderbilt University
and Econometrics Nashville, Tennessee 37235
14185 Berlin, Garystr 21 USA
Germany
or electronically (postscript or latex documents preferred) to
ai-stats-95@vuse.vanderbilt.edu
Submissions for discussion papers (and poster presentations) will
be considered if *postmarked* by June 30, 1994. If the submission
is electronic (e-mail), then it must be *received* by midnight
June 30, 1994. Abstracts postmarked after this date but *before*
July 31, 1994, will be considered for poster presentation *only*.
Please indicate which topic(s) your abstract addresses and include
an electronic mail address for correspondence. Receipt of all
submissions will be confirmed via electronic mail. Acceptance
notices will be mailed by September 1, 1994. Preliminary papers (up
to 20 pages) must be returned by November 1, 1994. These preliminary
papers will be copied and distributed at the workshop.
PROGRAM COMMITTEE:
General Chair: D. Fisher Vanderbilt U., USA
Program Chair: H. Lenz Free U. Berlin, Germany
Members:
W. Buntine NASA (Ames), USA
J. Catlett AT&T Bell Labs, USA
P. Cheeseman NASA (Ames), USA
P. Cohen U. of Mass., USA
D. Draper UCLA, USA
Wm. Dumouchel Columbia U., USA
A. Gammerman U. of London, UK
D. J. Hand Open U., UK
P. Hietala U. Tampere, Finland
R. Kruse TU Braunschweig, Germany
S. Lauritzen Aalborg U., Denmark
W. Oldford U. of Waterloo, Canada
J. Pearl UCLA, USA
D. Pregibon AT&T Bell Labs, USA
E. Roedel Humboldt U., Germany
G. Shafer Rutgers U., USA
P. Smyth JPL, USA
D. Spiegelhalter Cambridge U., UK
MORE INFORMATION:
For more information write dfisher@vuse.vanderbilt.edu
or write to ai-stats-request@watstat.uwaterloo.ca to
subscribe to the AI and Statistics mailing list.
------------------------------
To: alt94announce@rifis.sci.kyushu-u.ac.jp
Subject: CFP: ALT'94
Date: Mon, 17 Jan 1994 11:27:39 +0900
*** CALL FOR PAPERS ***
ALT'94
Fifth International Workshop on Algorithmic Learning Theory
Reinhardsbrunn Castle, Germany
October 13-15, 1994
The Fifth International Workshop on Algorithmic Learning Theory (ALT'94)
will be held at the Reinhardsbrunn Castle, Friedrichroda, Germany during
October 13-15, 1994. The workshop will be supported by the German Computer
Science Society (GI) in cooperation with the Japanese Society for
Artificial Intelligence (JSAI) and it will be coupled with the Fourth
International Workshop on Analogical and Inductive Inference for Program
Synthesis (AII'94), which will be held October 10-11. We invite
submissions to ALT'94 from researchers in algorithmic learning or its
related fields, such as (but not limited to) the theory of machine
learning, computational logic of/for machine discovery, inductive
inference, query learning, learning by analogy, neural networks, pattern
recognition, and applications to databases, gene analysis, etc. The
conference will include presentations of refereed papers and invited talks
by Dr. Naoki Abe from NEC, Prof. Michael M. Richter from Kaiserslautern and
Prof. Carl H. Smith from Maryland.
SUBMISSION. Authors must submit six copies of their extended abstracts to :
Prof. Setsuo Arikawa - ALT'94
RIFIS, Kyushu University 33
Fukuoka 812, Japan
Abstracts must be received by
April 15, 1994.
Notification of acceptance or rejection will be mailed to the first (or
designated) author by June 1, 1994.
Camera-ready copy of accepted papers will be due July 4, 1994.
FORMAT. The submitted abstract should consist of a cover page with
title, authors' names, postal and e-mail addresses, and an
approximately 200-word summary, and a body not longer than ten (10)
pages of size A4 or 7x10.5 inches in twelve-point font. Papers not
adhering to this format may be returned without review.
Double-sided printing is strongly encouraged.
POLICY. Each submitted abstract will be reviewed by at least four
members of the program committee, and be judged on clarity,
significance, and originality. Submissions should contain new
results that have not been published previously. Submissions to
ALT'94 may be submitted to AII'94, but if so a statement to this
effect must appear on the cover page or the first page.
Proceedings will be published as a volume in the Lecture Notes Series
in Artificial Intelligence from Springer-Verlag, and some selected
papers will be included in a special issue of the Annals of Mathematics
and Artificial Intelligence.
For more information, contact :
ALT94@informatik.th-leipzig.de
alt94@rifis.sci.kyushu-u.ac.jp
CONFERENCE CHAIR :
K.P. Jantke
HTWK Leipzig (FH)
Fachbereich IMN
Postfach 66
04251 Leipzig, Germany
janos@informatik.th-leipzig.de
PROGRAM COMMITTEE CHAIR :
Setsuo Arikawa, Kyushu Univ.
alt94@rifis.sci.kyushu-u.ac.jp
PROGRAM COMMITTEE :
N. Abe (NEC), D. Angluin (Yale U.), J. Barzdins (U.Latvia),
A. Biermann (Duke U.), J. Case (U.Delaware), R. Daley (U.Pittsburgh),
P. Flach (Tilburg U.), R. Freivalds (U.Latvia), M. Haraguchi (TiTech),
H. Imai(U.Tokyo), B. Indurkhya (Northeastern U.), P. Laird (NASA),
Y. Kodratoff (U.Paris-Sud), A. Maruoka (Tohoku U.),
S. Miyano (Kyushu U.), H. Motoda (Hitachi), S. Muggleton (Oxford U.),
M. Numao (TiTech), L. Pitt (U. Illinois),
Y. Sakakibara (Fujitsu Lab.), P. Schmitt (U. Karlsruhe),
T. Shinohara (KIT), C. Smith (U.Maryland), E. Ukkonen (U.Helsinki),
O. Watanabe (TiTech), R. Wiehagen (U.Kaiserslautern),
T. Yokomori (U.Electro-Comm.), T. Zeugmann (TH Darmstadt)
LOCAL ARRANGEMENTS COMMITTEE CHAIR :
Erwin Keusch
HTWK Leipzig (FH)
Fachbereich IMN
Postfach 66
04251 Leipzig, Germany
erwin@informatik.th-leipzig.de
------------------------------
Date: Fri, 14 Jan 1994 16:09:16 +0100
From: Paul.Vitanyi@cwi.nl
Subject: book announcement
Ming Li and Paul Vitanyi,
AN INTRODUCTION TO KOLMOGOROV COMPLEXITY AND ITS APPLICATIONS,
Springer Verlag, September 1993, xx+546 pp, 38 illus.
Hardcover \$59.00/ISBN 0-387-94053-7/ISBN 3-540-94053-7.
(Texts and Monographs in Computer Science Series)
BLURB:
Written by two experts in the field, this is the only
comprehensive and unified treatment of the central ideas and
applications of Kolmogorov complexity, the theory dealing
with the quantity of information in individual objects.
Kolmogorov complexity is known variously as `algorithmic
information', `algorithmic entropy', `Kolmogorov-Chaitin
complexity', `descriptional complexity', `shortest program length',
`algorithmic randomness', and others.
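Kolmogorov complexity itself is uncomputable, but any lossless compressor yields a computable upper bound on it, which is a useful intuition pump. A minimal sketch of ours (not from the book), using Python's zlib:

```python
import random
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Compressed length of s: an upper bound, up to an additive
    constant for the decompressor, on the Kolmogorov complexity of s."""
    return len(zlib.compress(s, 9))

regular = b"01" * 500                     # 1000 bytes with a 2-byte period
random.seed(42)
irregular = bytes(random.getrandbits(8) for _ in range(1000))

# The regular string has a short description; the pseudo-random one
# is essentially incompressible, so the bound stays near 1000.
assert complexity_upper_bound(regular) < 50
assert complexity_upper_bound(irregular) > 900
```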
The book presents a thorough, comprehensive treatment of the subject
with a wide range of illustrative applications. Such applications
include randomness of individual finite objects or infinite sequences,
Martin-Loef tests for randomness,
Goedel's incompleteness result, information theory of individual objects,
universal probability, general inductive reasoning,
inductive inference, prediction, mistake bounds, computational
learning theory, inference in statistics,
the incompressibility method, combinatorics,
time and space complexity of computations,
average case analysis of algorithms such as HEAPSORT,
language recognition, string matching,
formal language and automata theory,
parallel computation, Turing machine complexity,
lower bound proof techniques, probability theory, structural complexity theory,
oracles, logical depth, universal optimal search, physics and computation,
dissipationless reversible computing, information distance
and picture similarity, thermodynamics of computing, statistical
thermodynamics and Boltzmann entropy.
The book is ideal for advanced undergraduate students, graduate students
and researchers in computer science, mathematics, cognitive sciences,
philosophy, electrical engineering, statistics and physics.
The text provides enough material for a two-semester course and is
flexible enough for a one-semester course. Although it discusses
the mathematical theories of Kolmogorov complexity and randomness tests
in detail, it does not presuppose a heavy mathematical background.
The book is self-contained in the sense that it covers the basic requirements
of computability theory, probability theory, information theory, and coding.
Included are numerous problem sets, comments, source references and hints to
solutions of problems, as well as extensive course outlines for classroom use.
CONTENTS:
Preface v
How to Use This Book viii
Acknowledgements x
Outlines of One-Semester Courses xii
List of Figures xix
1 Preliminaries 1
1.1 A Brief Introduction 1
1.2 Mathematical Preliminaries 6
1.2.1 Prerequisites and Notation 6
1.2.2 Numbers and Combinatorics 7
1.2.3 Binary Strings 11
1.2.4 Asymptotic Notation 14
1.3 Basics of Probability Theory 16
1.3.1 Kolmogorov Axioms 17
1.3.2 Conditional Probability 18
1.3.3 Continuous Sample Spaces 19
1.4 Basics of Computability Theory 22
1.4.1 Effective Enumerations and Universal Machines 26
1.4.2 Undecidability of the Halting Problem 32
1.4.3 Enumerable Functions 34
1.4.4 Feasible Computations 35
1.5 The Roots of Kolmogorov Complexity 45
1.5.1 Randomness 46
1.5.2 Prediction and Probability 55
1.5.3 Information Theory and Coding 61
1.5.4 State Symbol Complexity 79
1.6 History and References 80
2 Algorithmic Complexity 87
2.1 The Invariance Theorem 90
2.2 Incompressibility 95
2.3 Complexity C(x) as an Integer Function 101
2.4 Random Finite Sequences 105
2.5 *Random Infinite Sequences 112
2.6 Statistical Properties of Finite Sequences 126
2.6.1 Statistics of 0's and 1's 127
2.6.2 Statistics of Blocks 130
2.6.3 Length of Runs 132
2.7 Algorithmic Properties of C(x) 134
2.8 Algorithmic Information Theory 140
2.9 History and References 165
3 Algorithmic Prefix Complexity 169
3.1 The Invariance Theorem 171
3.2 Incompressibility 175
3.3 Prefix Complexity K(x) as an Integer Function 177
3.4 Random Finite Sequences 177
3.5 *Random Infinite Sequences 180
3.6 Algorithmic Properties of K(x) 188
3.7 *The Complexity of the Complexity Function 190
3.8 *Symmetry of Algorithmic Information 194
3.9 History and References 209
4 Algorithmic Probability 211
4.1 Enumerable Functions Revisited 212
4.2 A Nonclassical Approach to Measures 214
4.3 Discrete Sample Space 216
4.3.1 Universal Enumerable Semimeasure 217
4.3.2 A Priori Probability 221
4.3.3 Algorithmic Probability 223
4.3.4 The Coding Theorem 223
4.3.5 Randomness by Sum Tests 228
4.3.6 Randomness by Payoff Functions 232
4.4 Continuous Sample Space 234
4.4.1 Universal Enumerable Semimeasure 234
4.4.2 A Priori Probability 238
4.4.3 *Solomonoff Normalization 242
4.4.4 *Monotone Complexity and a Coding Theorem 243
4.4.5 *Relation Between Complexities 246
4.4.6 *Randomness by Integral Tests 247
4.4.7 *Randomness by Martingale Tests 254
4.4.8 *Randomness by Martingales 256
4.4.9 *Relations Between Tests 258
4.5 History and References 268
5 Inductive Reasoning 275
5.1 Introduction 275
5.2 Bayesian Reasoning 279
5.3 Solomonoff's Induction Theory 282
5.3.1 Formal Analysis 284
5.3.2 Application to Induction 290
5.4 Recursion Theory Induction 291
5.4.1 Inference of Hypotheses 291
5.4.2 Prediction 292
5.4.3 Mistake Bounds 293
5.4.4 Certification 294
5.5 Pac-Learning 295
5.5.1 Definitions 296
5.5.2 Occam's Razor Formalized 296
5.6 Simple Pac-Learning 300
5.6.1 Discrete Sample Space 301
5.6.2 Continuous Sample Space 305
5.7 Minimum Description Length 308
5.8 History and References 318
6 The Incompressibility Method 323
6.1 Two Examples 324
6.2 Combinatorics 328
6.3 Average Case Complexity of Algorithms 334
6.3.1 Heapsort 334
6.3.2 Longest Common Subsequence 338
6.3.3 m-Average Case Complexity 340
6.4 Languages 344
6.4.1 Formal Language Theory 344
6.4.2 On-Line CFL Recognition 349
6.4.3 Multihead Automata 351
6.5 Machines 356
6.5.1 *Turing Machine Time Complexity 356
6.5.2 Parallel Computation 362
6.6 History and References 370
7 Resource-Bounded Complexity 377
7.1 Mathematical Theory 378
7.1.1 Computable Majorants 381
7.1.2 Resource-Bounded Hierarchies 386
7.2 Language Compression 392
7.2.1 With an Oracle 393
7.2.2 Without an Oracle 396
7.2.3 Ranking 399
7.3 Computational Complexity 401
7.3.1 Constructing Oracles 402
7.3.2 P-Printability 405
7.3.3 Instance Complexity 406
7.4 Kt Complexity 410
7.4.1 Universal Optimal Search 411
7.4.2 Potential 413
7.5 Logical Depth 421
7.6 History and References 428
8 Physics and Computation 433
8.1 Reversible Computation 434
8.1.1 Energy Dissipation 434
8.1.2 Reversible Logic Circuits 435
8.1.3 A Ballistic Computer 436
8.1.4 Reversible Turing Machines 439
8.2 Information Distance 441
8.2.1 Max Distance 442
8.2.2 Picture Distance 446
8.2.3 Reversible Distance 448
8.2.4 Sum Distance 450
8.2.5 Metrics Relations and Dimensional Properties 452
8.2.6 Thermodynamics of Computing 455
8.3 Thermodynamics 458
8.3.1 Classical Entropy 458
8.3.2 Statistical Mechanics and Boltzmann Entropy 461
8.3.3 Gibbs Entropy 467
8.4 Entropy Revisited 468
8.4.1 Algorithmic Entropy 469
8.4.2 Algorithmic Entropy and Randomness Tests 473
8.4.3 Entropy Stability and Nondecrease 478
8.5 Chaos, Biology, and All That 486
8.6 History and References 490
Bibliography 493
Index 527
If you are seriously interested in using the text in a course,
contact Springer-Verlag's Editor for Computer Science, Martin
Gilchrist, for a complimentary copy.
Martin Gilchrist gilchris@sccm.Stanford.edu
Suite 200, 3600 Pruneridge Ave. (408) 249-9314
Santa Clara, CA 95051
If you are interested in the text but won't be teaching a course,
we understand that Springer-Verlag sells the book, too.
To order, call toll-free 1-800-SPRINGER (1-800-777-4643); N.J.
residents call 201-348-4033. For information regarding
examination copies for course adoptions, write Springer-Verlag
New York, Inc. Attn: Jacqueline Jeng, 175 Fifth Avenue, New York,
NY 10010. E-mail address: jeng@spint.compuserve.com
------------------------------
Date: Tue, 18 Jan 1994 17:40:12 +0100
From: Marcel.Holsheimer@cwi.nl
Subject: Data mining report available
The following report can be obtained by ftp:
DATA MINING
The Search for Knowledge in Databases
Marcel Holsheimer, Arno Siebes
Abstract
Data mining is the search for relationships and global patterns that
exist in large databases, but are `hidden' among the vast amounts of
data, such as a relationship between patient data and their medical
diagnosis. These relationships represent valuable knowledge about the
database and the objects in the database and, if the database is a
faithful mirror, about the real world registered by the database.
One of the main problems for data mining is that the number of
possible relationships is very large, thus prohibiting the search for
the correct ones by simply validating each of them. Hence, we need
intelligent search strategies, as taken from the area of machine
learning.
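The size of this search space can be made concrete by counting candidate rules. A rough sketch (the counting helper and toy setting are ours, not from the report): even for a handful of attributes, the number of rules of the form "IF condition AND ... THEN attribute = value" explodes.

```python
from itertools import combinations

def count_conjunctive_rules(attributes):
    """Count candidate rules 'IF cond1 AND ... AND condk THEN a = v',
    where each condition fixes a distinct attribute to one of its values
    and the conclusion uses an attribute not mentioned in the body."""
    items = list(attributes.items())
    total = 0
    for k in range(1, len(items)):            # size of the rule body
        for body in combinations(items, k):
            body_size = 1
            for _, values in body:            # value choices per condition
                body_size *= len(values)
            heads = sum(len(values) for attr, values in items
                        if (attr, values) not in body)
            total += body_size * heads
    return total

# Four 3-valued attributes already give 756 candidate rules; six give
# 18414 -- validating every candidate separately does not scale.
print(count_conjunctive_rules({f"a{i}": list("xyz") for i in range(4)}))  # 756
print(count_conjunctive_rules({f"a{i}": list("xyz") for i in range(6)}))  # 18414
```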
Another important problem is that information in data objects is often
corrupted or missing. Hence, statistical techniques should be applied
to estimate the reliability of the discovered relationships.
The report provides a survey of current data mining research. It
presents the main underlying ideas, such as inductive learning, and the
search strategies and knowledge representations used in data mining
systems. Furthermore, it describes the most important problems and
their solutions, and provides an overview of research projects.
CR subject classification (1991):
Database applications (H.2.8),
Information search and retrieval (H.3.3),
Learning (I.2.6) concept learning, induction, knowledge acquisition,
Clustering (I.5.3)
keywords: database applications, machine learning, inductive learning,
knowledge acquisition, data summarization
_____________________________________________________________________
The report can be obtained by anonymous ftp:
& ftp ftp.cwi.nl
Name: ftp
331 Guest login ok, send ident (your e-mail address) as password.
Password:
ftp> binary
ftp> cd pub/CWIreports/AA
ftp> get CS-R9406.ps.Z
ftp> bye
________________________________________________________________________
Marcel Holsheimer | Centre for Mathematics and Computer Science (CWI)
phone +31 20 592 4134 | Kruislaan 413, Amsterdam, The Netherlands
------------------------------
Date: Fri, 7 Jan 94 13:30:28 EST
From: Isabelle Guyon <isabelle@neural.att.com>
Subject: INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE
**************************************************************************
SPECIAL ISSUE
OF THE
INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE
ON
NEURAL NETWORKS
**************************************************************************
ISSN: 0218-0014
Advances in Pattern Recognition Systems using Neural Networks,
Eds. I. Guyon and P.S.P. Wang, IJPRAI, vol. 7, number 4, August 1993.
**************************************************************************
Among the many applications that have been proposed for neural networks,
pattern recognition has been one of the most successful. Why?
This collection of papers will satisfy your curiosity!
The commonplace rationale for using Neural Networks is that a machine whose
architecture imitates that of the brain should inherit its remarkable
intelligence. This logic usually contrasts with the reality of the performance
of Neural Networks. In this special issue, however, the authors have kept some
distance from the biological foundations of Neural Networks. The success of
their applications relies, to a large extent, on careful engineering. For
instance, many novel aspects of the works presented here are concerned with
combining Neural Networks with other ``non neural'' modules.
With:
[1] Y. Bengio.
A Connectionist Approach to Speech Recognition.
[2] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, L. Jackel, Y. Le Cun,
C. Moore, E. Sackinger, and R. Shah.
Signature Verification with a Siamese TDNN.
[3] C. Burges, J. Ben, Y. Le Cun, J. Denker and C. Nohl.
Off-line Recognition of Handwritten Postal Words using Neural Networks.
[4] H. Drucker, Robert Schapire and Patrice Simard.
Boosting Performance in Neural Networks.
[5] F. Fogelman, B. Lamy and E. Viennet.
Multi-Modular Neural Network Architectures for Pattern Recognition:
Applications in Optical Character Recognition and Human Face Recognition.
[6] A. Gupta, M. V. Nagendraprasad, A. Liu, P. S. P. Wang and S. Ayyadurai.
An Integrated Architecture for Recognition of
Totally Unconstrained Handwritten Numerals.
[7] E. K. Kim, J. T. Wu, S. Tamura, R. Close, H. Taketani, H. Kawai,
M. Inoue and K. Ono.
Comparison of Neural Network and K-NN Classification Methods in Vowel
and Patellar Subluxation Image Recognitions.
[8] E. Levin, R. Pieraccini and E. Bocchieri.
Time-Warping Network: A Neural Approach to Hidden Markov Model based
Speech Recognition.
[9] H. Li and J. Wang.
Computing Optical Flow with a Recurrent Neural Network.
[10] W. Li and N. Nasrabadi.
Invariant Object Recognition Based on Neural Network of Cascaded RCE Nets.
[11] G. Martin, M. Rashid and J. Pittman.
Integrated Segmentation and Recognition Through Exhaustive Scans or
Learned Saccadic Jumps.
[12] C. B. Miller and C. L. Giles.
Experimental Comparison of the Effect of Order in Recurrent Neural Networks.
[13] L. Miller and A. Gorin.
Structured Networks for Adaptive Language Acquisition.
[14] N. Morgan, H. Bourlard, S. Renals, M. Cohen and H. Franco.
Hybrid Neural Network / Hidden Markov Model Systems for Continuous Speech
Recognition.
[15] K. Peleg and U. Ben-Hanan.
Adaptive Classification by Neural Net Based Prototype Populations.
[16] L. Wiskott and C. von der Malsburg.
A Neural System for the Recognition of Partially Occluded Objects in
Cluttered Scenes - A Pilot Study.
[17] G. Zavaliagkos, S. Austin, J. Makhoul and R. Schwartz.
A Hybrid Continuous Speech Recognition System Using Segmental Neural
Nets with Hidden Markov Models.
------------------------------
End of ML-LIST (Digest format)
****************************************