Neuron Digest Sunday, 4 Dec 1988 Volume 4 : Issue 30
Today's Topics:
Call for Papers (ISMIS'89)
Tech Report - Connectionist Speech Recognition
IEEE ICNN 1989 Call for Papers
Explanatory Coherence: BBS Call for Commentators
TR available
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
------------------------------------------------------------
Subject: Call for Papers (ISMIS'89)
From: wong@gvax.cs.cornell.edu (Mike Wong)
Organization: Cornell Univ. CS Dept, Ithaca NY
Date: 21 Nov 88 16:08:44 +0000
CALL FOR PAPERS
FOURTH INTERNATIONAL SYMPOSIUM ON METHODOLOGIES
FOR INTELLIGENT SYSTEMS
Charlotte, North Carolina, Hilton Hotel, University Place
October 12-14, 1989
SPONSORS: Energy Division of the ORNL, Martin Marietta Energy Systems,
University of North Carolina - Charlotte, University of Turin (ITALY)
PURPOSE OF THE SYMPOSIUM: This Symposium is intended to attract individuals
who are actively engaged in both theoretical and practical aspects of
intelligent systems. The goal is to provide a platform for a useful
exchange between theoreticians and practitioners, and to foster the
cross-fertilization of ideas in the following areas: approximate reasoning,
expert systems, intelligent databases, knowledge representation, learning
and adaptive systems, logic for A.I., neural networks.
SYMPOSIUM CHAIRMAN: Zbigniew W. Ras (UNC-Charlotte)
ORGANIZING COMMITTEE: Bill Chu (UNC-C), Mary Emrich (ORNL),
Attilio Giordana (Turin, Italy), Zbigniew Michalewicz (New Zealand),
Alberto Pettorossi (Rome, Italy), Pietro Torasso (Turin, Italy),
S.K.Michael Wong (Cornell), Maria Zemankova (NSF & UT-Knoxville),
Jan Zytkow (George Mason)
PROGRAM COMMITTEE: Luigia Aiello (Italy), Andrew G. Barto (UM-Amherst),
James Bezdek (Boeing), Alan W. Bierman (Duke), John Bourne (Vanderbilt),
Jaime Carbonell (CMU), Peter Cheeseman (NASA), Su-shing Chen (UNC-C),
Melvin Fitting (CUNY), Brian R. Gaines (Canada), Peter E. Hart
(Syntelligence), Marek Karpinski (West Germany), Kurt Konolige (SRI),
Catherine Lassez (IBM T.J. Watson), R. Lopez de Mantaras (Spain),
Ryszard Michalski (George Mason), Jack Minker (Maryland),
Jose Miro (Spain), Masao Mukaidono (Japan), Ephraim Nissan (Israel),
Rohit Parikh (CUNY), Reind van de Riet (The Netherlands),
Colette Rolland (France), Lorenza Saitta (Italy), Eric Sandewall
(Sweden), Joachim W. Schmidt (West Germany), Richmond Thomason
(Pittsburgh), David S. Warren (SUNY-Stony Brook)
INVITED SPEAKERS: Jon Doyle (MIT), Ryszard Michalski (George Mason),
Richard Waldinger (SRI)
SUBMISSION AND INFORMATION: Send four copies of a complete paper to one of
the addresses below:
Dr. S.K. Michael Wong, Cornell Univ., Comp. Sci., Upson Hall,
Ithaca, New York 14853-7501
or
Dr. A. Giordana, Univ. of Turin, Comp. Sci., Corso Svizzera 185,
10149 Torino, Italy
TIME SCHEDULE:
Submission of papers..........................March 15, 1989
Notification of acceptance....................May 15, 1989
Final paper to be included in proceedings.....June 15, 1989
------------------------------
Subject: Tech Report - Connectionist Speech Recognition
From: watrous@linc.cis.upenn.edu (Raymond Watrous)
Date: Wed, 23 Nov 88 15:53:30 -0500
The following technical report is available from the Department of Computer
and Information Science, University of Pennsylvania:
Speech Recognition Using Connectionist Networks
Raymond L. Watrous
MS-CIS-88-96
LINC LAB 138
Abstract
The use of connectionist networks for speech recognition is assessed using
a set of representative phonetic discrimination problems. The problems are
chosen with respect to the physiological theory of phonetics in order to
give broad coverage to the space of articulatory phonetics. Separate
network solutions are sought for each phonetic discrimination problem.
A connectionist network model called the Temporal Flow Model is defined,
consisting of simple processing units with single-valued outputs
interconnected by links of variable weight. The model represents temporal
relationships using delay links and permits general patterns of
connectivity, including feedback. It is argued that the model has
properties appropriate for time-varying signals such as speech.
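As a rough illustration of this kind of model (not code from the report),
the short Python sketch below builds a two-unit network in which each
weighted link carries an integer delay and feedback connections are
permitted; the class names, the sigmoid output function, and the toy
topology are assumptions made only for the example.

import math

class Link:
    def __init__(self, src, dst, weight, delay=1):
        self.src, self.dst = src, dst   # indices of source and target units
        self.weight = weight            # variable link weight
        self.delay = delay              # signal arrives `delay` steps later

def step(links, history, t, n_units, inputs):
    """Outputs at time t from delayed, weighted past outputs plus inputs."""
    net = [inputs.get(j, 0.0) for j in range(n_units)]
    for ln in links:
        if t - ln.delay >= 0:
            net[ln.dst] += ln.weight * history[t - ln.delay][ln.src]
    return [1.0 / (1.0 + math.exp(-x)) for x in net]   # single-valued outputs

# Two units with a delayed feedback loop (unit 1 back to unit 0).
links = [Link(0, 1, 0.8, delay=1), Link(1, 0, -0.5, delay=2)]
history = []
for t, x in enumerate([0.2, 0.9, 0.4, 0.7]):   # short input signal to unit 0
    history.append(step(links, history, t, 2, {0: x}))
print(history[-1])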
Methods for selecting network architectures for different recognition
problems are presented. The architectures discussed include random
networks, minimally structured networks, hand-crafted networks, and networks
automatically generated based on samples of speech data.
Networks are trained by modifying their weight parameters so as to minimize
the mean squared error between the actual and the desired response of the
output units. The desired output unit response is specified by a target
function. Training is accomplished by a second-order method of iterative
nonlinear optimization based on gradient descent, which incorporates a
procedure for computing the complete gradient of recurrent networks.
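To make the criterion concrete, here is a self-contained sketch of the
training idea only: weights are moved down the gradient of the mean squared
error between the unit's actual output and the desired (target-function)
output. The report's procedure uses a second-order optimizer and the
complete gradient of recurrent networks; the single sigmoid unit, the
training pairs, and the learning rate below are illustrative assumptions.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical training pairs: (input vector, desired target output).
data = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]
w, b, lr = [0.1, -0.1], 0.0, 0.5

for epoch in range(500):
    for x, target in data:
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = y - target                 # derivative of the squared error in y
        grad_pre = err * y * (1.0 - y)   # chain rule through the sigmoid
        w = [wi - lr * grad_pre * xi for wi, xi in zip(w, x)]   # descent step
        b -= lr * grad_pre

print([round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b), 2)
       for x, _ in data])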
Network solutions are demonstrated for all eight phonetic discrimination
problems for one male speaker. The network solutions are analyzed carefully
and are shown in every case to make use of known acoustic phonetic cues.
The network solutions vary in the degree to which they make use of context
dependent cues to achieve phoneme recognition.
The network solutions were tested on data not used for training and
achieved an average accuracy of 99.5%.
Methods for extending these results to a single network for recognizing the
complete phoneme set from continuous speech obtained from different
speakers are outlined.
It is concluded that acoustic phonetic speech recognition can be
accomplished using connectionist networks.
+++++++++++++++++++++++++++++++++++++++++++++++++++++
This report is available from:
James Lotkowski
Technical Report Facility
Room 269/Moore Building
Computer Science Department
University of Pennsylvania
200 South 33rd Street
Philadelphia, PA 19104-6389
or james@central.cis.upenn.edu
Please do not request copies of this report from me. Copies of the report
cost approximately $19.00, which covers duplication (300 pages) and postage.
I will bring a 'desk copy' to NIPS.
As of December 1, I will be affiliated with the University of Toronto. My
address will be:
Department of Computer Science
University of Toronto
10 King's College Road
Toronto, Canada M5S 1A4
watrous@ai.toronto.edu
------------------------------
Subject: IEEE ICNN 1989 Call for Papers
From: pwh@ece-csc.ncsu.edu (Paul Hollis)
Date: Fri, 25 Nov 88 14:21:29 -0500
NEURAL NETWORKS
CALL FOR PAPERS
IEEE International Conference on Neural Networks
June 19-22, 1989
Washington, D.C.
The 1989 IEEE International Conference on Neural Networks (ICNN-89)
will be held at the Sheraton Washington Hotel in Washington, D.C., USA
from June 19-22, 1989. ICNN-89 is the third annual conference in a
series devoted to the technology of neurocomputing in its academic,
industrial, commercial, consumer, and biomedical engineering aspects.
The series is sponsored by the IEEE Technical Activities Board Neural
Network Committee, created in Spring 1988. ICNN-87 and ICNN-88 were huge
successes, both in terms of large attendance and the high quality of the
technical presentations. ICNN-89 continues this tradition. It will
be by far the largest and most important neural network meeting of
1989. As in the past, the full text of papers presented orally in the
technical sessions will be published in the Conference Proceedings
(along with some particularly outstanding papers from the Poster
Sessions). The Abstract portions of all poster papers not published
in full will also be published in the Proceedings. The Conference
Proceedings will be distributed at the registration desk to all
regular conference registrants as well as to all student registrants.
This gives conference participants the full text of every paper
presented in each technical session -- which greatly increases the
value of the conference. ICNN is the only major neural network
conference in the world to offer this feature. As is now the
tradition, ICNN-89 will include a day of tutorials (June 18), the
exhibit hall (the neurocomputing industry's primary annual tradeshow),
plenary talks, and social events. Mark your calendar today and plan
to attend IEEE ICNN-89 -- the definitive annual progress report on the
neurocomputing revolution!
DEADLINE FOR SUBMISSION OF PAPERS for ICNN-89 is February 1, 1989.
Papers of 8 pages or less are solicited in the following areas:
  - Real World Applications            - Associative Memory
  - Supervised Learning Theory         - Image Processing
  - Reinforcement Learning Theory      - Self-Organization
  - Robotics and Control               - Neurobiological Models
  - Optical Neurocomputers             - Vision
  - Optimization                       - Electronic Neurocomputers
  - Neural Network Theory & Architectures
Papers should be prepared in standard IEEE Conference Proceedings
Format, and typed on the special forms provided in the Author's Kit.
The Title, Author Name, Affiliation, and Abstract portions of the
first page of the paper must be less than a half page in length.
Indicate in your cover letter which of the above subject areas you
wish your paper included in and whether you wish your paper to be
considered for oral presentation, presentation as a poster, or both.
For papers with multiple authors, indicate the name and address of the
author to whom correspondence should be sent. Papers submitted for
oral presentation may, at the referees' discretion, be designated for
poster presentation instead if the referees feel this would be more
appropriate. FULL PAPERS in camera-ready form (1 original and 5
copies) should be submitted to Nomi Feldman, Conference Coordinator,
at the address below. For more details, or to request your IEEE
Author's Kit, call or write:
Nomi Feldman
ICNN-89 Conference Coordinator
3770 Tansy Street
San Diego, CA 92121
(619) 453-6222
------------------------------
Subject: Explanatory Coherence: BBS Call for Commentators
From: harnad@Princeton.EDU (Stevan Harnad)
Date: Sun, 27 Nov 88 12:35:11 -0500
Below is the abstract of a forthcoming target article to appear in
Behavioral and Brain Sciences (BBS), an international, interdisciplinary
journal providing Open Peer Commentary on important and controversial
current research in the biobehavioral and cognitive sciences. To be
considered as a commentator or to suggest other appropriate commentators,
please send email to:
harnad@confidence.princeton.edu or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]
____________________________________________________________________
EXPLANATORY COHERENCE
Paul Thagard
Cognitive Science Laboratory
Princeton University
Princeton NJ 08542
Keywords: Connectionist models, artificial intelligence, explanation,
coherence, reasoning, decision theory, philosophy of science
This paper presents a new computational theory of explanatory coherence
that applies both to the acceptance and rejection of scientific hypotheses
and to reasoning in everyday life. The theory consists of seven principles
that establish relations of local coherence between a hypothesis and other
propositions that explain it, are explained by it, or contradict it. An
explanatory hypothesis is accepted if it coheres better overall than its
competitors. The power of the seven principles is shown by their
implementation in a connectionist program called ECHO, which has been
applied to such important scientific cases as Lavoisier's argument for
oxygen against the phlogiston theory and Darwin's argument for evolution
against creationism, and also to cases of legal reasoning. The theory of
explanatory coherence has implications for artificial intelligence,
psychology, and philosophy.
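As a rough sketch of the style of computation described above (not
Thagard's actual ECHO program), the Python fragment below builds a tiny
coherence network: propositions are units, explanation relations are
excitatory links, a contradiction is an inhibitory link, evidence receives
activation from a clamped special unit, and the network settles before the
better-cohering hypothesis is read off. All weights, the decay value, and
the update rule are assumptions made for the illustration.

# Evidence E1 is explained by rival hypotheses H1 and H2, which contradict
# each other; H1 explains E1 somewhat better.
props = ["E1", "H1", "H2"]
links = {("H1", "E1"): 0.05, ("H2", "E1"): 0.03,   # excitatory (explanation)
         ("H1", "H2"): -0.06}                      # inhibitory (contradiction)
act = {p: 0.01 for p in props}
act["SPECIAL"] = 1.0                  # clamped unit feeding the evidence
links[("E1", "SPECIAL")] = 0.05
decay = 0.05

def weight(a, b):
    return links.get((a, b), links.get((b, a), 0.0))   # symmetric links

for _ in range(200):                  # settle by repeated synchronous updates
    new = dict(act)
    for p in props:
        net = sum(weight(p, q) * act[q] for q in act if q != p)
        grow = (1.0 - act[p]) if net > 0 else (act[p] + 1.0)
        new[p] = max(-1.0, min(1.0, act[p] * (1.0 - decay) + net * grow))
    act = new

accepted = max(["H1", "H2"], key=lambda h: act[h])
print({p: round(act[p], 2) for p in props}, "accepted:", accepted)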
------------------------------
Subject: TR available
From: Bruno Olshausen <bruno@riacs.edu>
Date: Fri, 02 Dec 88 15:31:14 -0800
The following technical report is available. Please send email
to bruno@riacs.edu, or phone 415-694-4997, for requests:
A Survey of Visual Preprocessing and Shape Representation Techniques
Bruno A. Olshausen
Research Institute for Advanced Computer Science
NASA Ames Research Center
Abstract. This survey summarizes many recent theories and methods
proposed for visual preprocessing and shape representation. The survey
brings together research from the fields of biology, psychology, computer
science, electrical engineering, and most recently, neural networks. This
report was motivated by the need to preprocess images for a sparse
distributed memory (SDM), but the techniques presented herein may also
prove useful for applying other associative memories to visual pattern
recognition. The material of this survey is divided into three sections: 1)
an overview of biological visual processing, 2) methods of preprocessing
(extracting parts of shape, texture, motion, and depth), and 3) shape
representation and recognition (form invariance, primitives and structural
descriptions, and theories of attention).
------------------------------
End of Neurons Digest
*********************