Neuron Digest Monday, 18 Nov 1991 Volume 8 : Issue 9
Today's Topics:
Announcement of NIPS Workshop
Program information: NIPS91 Workshops
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (128.91.2.173). Back issues
requested by mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: Announcement of NIPS Workshop
From: Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU
Date: Mon, 28 Oct 91 00:10:20 -0500
The Neural Information Processing Systems Conference will be followed by a
program of workshops in Vail, Colorado on December 6 and 7, 1991. The
following one-day workshop will be offered on December 6:
Constructive and Destructive Learning Algorithms
Workshop Leader: Scott E. Fahlman
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Internet: fahlman@cs.cmu.edu
Most existing neural network learning algorithms work by adjusting
connection weights in a fixed network. Recently we have seen the
emergence of new learning algorithms that alter the network's topology as
they learn. Some of these algorithms start with excess connections and
remove any that are not needed; others start with a sparse network and
add hidden units as needed, sometimes in multiple layers; some algorithms
do both. These algorithms eliminate the need to guess in advance what
network topology will best fit a given problem. In addition, some of
these algorithms claim significant improvements in learning speed and
generalization.
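To make the two kinds of moves concrete, here is a minimal, purely
illustrative sketch in Python (the weight shapes and the pruning threshold
are arbitrary choices of mine, not taken from any of the algorithms above):

import numpy as np

def prune_small_weights(W, threshold=1e-2):
    # Destructive step: zero out connections whose magnitude is negligible.
    mask = np.abs(W) >= threshold
    return W * mask

def add_hidden_unit(W_in, W_out, n_inputs, n_outputs, rng):
    # Constructive step: append one hidden unit with small random weights.
    new_in = rng.normal(scale=0.1, size=(1, n_inputs))
    new_out = rng.normal(scale=0.1, size=(n_outputs, 1))
    W_in = np.vstack([W_in, new_in])     # one more row: hidden units x inputs
    W_out = np.hstack([W_out, new_out])  # one more column: outputs x hidden units
    return W_in, W_out

rng = np.random.default_rng(0)
W_in = rng.normal(size=(3, 4))           # 3 hidden units, 4 inputs
W_out = rng.normal(size=(2, 3))          # 2 outputs, 3 hidden units
W_in = prune_small_weights(W_in)
W_in, W_out = add_hidden_unit(W_in, W_out, n_inputs=4, n_outputs=2, rng=rng)
print(W_in.shape, W_out.shape)           # (4, 4) (2, 4)

Algorithms of this class differ mainly in when they decide to prune or grow
(error plateaus, weight saliency, candidate-unit correlations) and in how the
rest of the network is retrained afterwards.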
A successful two-day workshop on this topic was presented at the NIPS-90
conference. A number of algorithms were presented by their authors and
were critically evaluated. The past year has seen a great deal of
additional work in this area, so a second workshop on this topic seems
appropriate. We will briefly review the major algorithms presented last
year. Then we will turn to more recent developments, including both new
algorithms and experience gained in using the older ones. Finally, we
will consider current trends and will try to identify open problems for
future research.
I would like to hear from people who are interested in presenting new
algorithms or results at this workshop. I would particularly like to
hear from people with application results or comparative studies using
algorithms of this kind. The tentative plan, depending on the response
we get, is to allow 15-20 minutes for each presentation, with ample time for
discussion. If you would like to present something, please send a short
description to Scott Fahlman, at the internet address listed above.
For Cascade-Correlation fans, I will be presenting a new variation called
"Cascade 2" that performs better than the original in a number of
situations, especially in problems with continuous analog outputs.
------------------------------
Subject: Program information: NIPS91 Workshops
From: Gerald Tesauro <tesauro@watson.ibm.com>
Date: Mon, 28 Oct 91 11:41:58 -0500
The NIPS91 post-conference workshops will take place Dec. 5-7, 1991,
at the Marriott Mark Resort Hotel in Vail, Colorado. The following
message gives information on the program schedule and local
arrangements, and is organized as follows:
I. Summary schedule
II. Workshop schedule
III. Arrangements information
IV. Workshop abstracts
I. Summary Schedule:
Thursday, Dec. 5th    5:00 pm          Registration Open
                      7:00 pm          Orientation Meeting
                      8:00 pm          Reception
Friday, Dec. 6th      7:00 am          Breakfast
                      7:30 - 9:30 am   Workshop Sessions
                      4:30 - 6:30 pm   Workshop Sessions
                      7:00 pm          Banquet
Saturday, Dec. 7th    7:00 am          Breakfast
                      7:30 - 9:30 am   Workshop Sessions
                      4:30 - 6:30 pm   Workshop Sessions
                      6:30 - 7:00 pm   Wrap-up
                      7:30 pm          Barbecue Dinner (optional)
II. Workshop schedule:
Friday, Dec. 6th:
Character recognition
Projection pursuit and neural networks
Constructive and destructive learning algorithms II
Modularity in connectionist models of cognition
VLSI neural networks and neurocomputers (1st day)
Recurrent networks: theory and applications (1st day)
Active learning and control (1st day)
Self-organization and unsupervised learning in vision (1st day)
Developments in Bayesian methods for neural networks (1st day)
Saturday, Dec. 7th:
Oscillations and correlations in neural information processing
Optimization of neural network architectures for speech recognition
Genetic algorithms and neural networks
Complexity issues in neural computation and learning
Computer vision vs. network vision
VLSI neural networks and neurocomputers (2nd day)
Recurrent networks: theory and applications (2nd day)
Active learning and control (2nd day)
Self-organization and unsupervised learning in vision (2nd day)
Developments in Bayesian methods for neural networks (2nd day)
III. Arrangements information:
Accommodations:
The conference sessions will be held in the banquet area at
the Marriott Mark Resort, in Vail, CO, 90 miles west of Denver.
For accommodations, call the Marriott at (303)-476-4444. Our
room rate is $74 (single or double). Condos for larger
groups can be arranged through Destination Resorts, at
(303)-476-1350.
Registration:
Registration fee for the workshops is $100 ($50 for students).
Transportation:
CME (Colorado Mountain Express) will be running special shuttles
from the Sheraton in Denver up to the Marriott in Vail Thursday
afternoon at a price of $31.00 per person. Call them at
1-800-525-6363, at least 24 hours in advance, to reserve and give a
credit card number for prepayment. CME also runs shuttles down
from Vail to the Denver airport, same price, on Sunday at many
convenient times. The earlier you call CME, the more vans will
be made available for our use. Be sure to mention our special
group code "NIPS".
Hertz has a desk in the Sheraton, and will rent cars at a weekend
rate for the trip up to Vail and back to the airport in Denver.
This is an unlimited mileage rate; prices start at $60 (three
days, plus tax). To make reservations call the Sheraton at
1-800-552-7030 and ask for Kevin Kline at the Hertz desk.
Skiing:
Skiing at Vail can be expensive. The lift tickets this year were
slated to rise to $40 per day. The conference has negotiated
very attractive group rates for tickets bought in advance:
$56 for a 2-day ticket
$84 for a 3-day ticket
$108 for a 4-day ticket
You can purchase these by sending a check to the conference
registration office: NIPS*91 Registration, Siemens Research
Center, 755 College Road East, Princeton, NJ 08540.
The tickets will be printed for us, and available
when we get to Vail on Thursday evening.
There are several sources for rental boots and skis in Vail.
The rental shop at the lifts and Banner Sports (located in
the Marriott) are offering the following packages to those
who identify themselves as NIPS attendees:
                        skis, boots, poles     skis, poles
  standard package          $ 8 / day            $6 / day
  performance package       $11 / day            $9 / day
Banner will, as an extra incentive, stay open for us after the
Thursday orientation meeting, and give a 10% discount on
anything else in the store.
Optional Gourmet barbecue dinner(!):
Finally, in addition to the conference banquet (which is included
in the registration fee), there will be an optional dinner on Saturday
night at Booco's Station, a few miles outside of Vail and world
famous for its barbecued meats and special sauces. Dinner will
include transportation (if you need it), appetizers,
all-you-can-eat barbecue, cornbread, vegetables, dessert,
and more than 40 kinds of beer at the cash bar. Tickets will
be on sale at the Sheraton and at the Marriott. Price: $27.
IV. Workshop Abstracts:
=========================================================================
Modularity in Connectionist Models of Cognition
Organizer: Jordan Pollack, Ohio State Univ.
Speakers: Michael Mozer, Univ of Colorado
Robert Jacobs, MIT
John Barnden, New Mexico State University
Rik Belew, UCSD
Abstract:
Classical modular theories of mind presume mental "organs" -
function-specific, put in place by evolution - which communicate in a symbolic
language of thought. In the 1980s, connectionists radically rejected
this view in favor of more integrated architectures, uniform learning
systems which would be very tightly coupled and communicate through
many feedforward and feedback connections. However, as connectionist
attempts at cognitive modeling have grown more ambitious, ad hoc
modular structuring has become more prevalent. But there are concerns
regarding how much architectural bias is allowable. There has been a
flurry of work on resolving these concerns by seeking the principles
by which modularity could arise in connectionist architectures. This
will involve solving several major problems - data decomposition,
structural credit assignment, and shared adaptive representations.
This workshop will bring together proponents of modular connectionist
architectures to discuss research directions, recent progress, and
long-term challenges.
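For readers who want a concrete picture of modular structuring, one
well-known modular connectionist architecture is a gated mixture of expert
networks. The sketch below is an illustration only (not a model from any of
the speakers) and shows just the forward pass, with the gating outputs
deciding how much each module contributes:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_experts = 3, 2, 4
experts = rng.normal(size=(n_experts, n_out, n_in))  # one linear expert module each
gate = rng.normal(size=(n_experts, n_in))            # gating network weights

def forward(x):
    g = np.exp(gate @ x)
    g = g / g.sum()          # softmax: how much to trust each expert for this input
    outs = experts @ x       # each module's answer, shape (n_experts, n_out)
    return g @ outs          # blended output; the gate also routes credit in learning

print(forward(rng.normal(size=n_in)))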
=========================================================================
Character Recognition
Organizers: C. L. Wilson and M. D. Garris, National Institute
of Standards and Technology
Speakers: Jon Hull, SUNY Buffalo
Tom Vogl, ERIM
Jim Keeler, MCC
Chris Schofield, Nestor
C. L. Wilson, NIST
R. G. Casey, IBM
Abstract:
This workshop will consider issues related to present and future testing
needs for character recognition including:
1) What has been users' experience with the NIST and other
publicly available databases?
2) What types of databases will be required in the future?
3) What are future testing needs, such as x-y coordinate stream or
gray level data?
4) How can the evaluation of current research problems, such as
segmentation, be enhanced through carefully designed databases,
standard testing procedures, and automated evaluation methodologies?
5) Is the incorporation of context important in testing?
6) What other issues face the research and development of large scale
recognition systems?
The target audience includes those interested in or working on
hand-print recognition, as well as developers who wish to include
character recognition in systems that recognize documents.
=========================================================================
Genetic Algorithms and Neural Networks
Organizer: Rik Belew, Univ. of Calif. at San Diego
Speakers: Rik Belew and Dave Rogers
Abstract:
This workshop will examine theoretical and algorithmic
interactions between GA and NNet techniques, as well as
models of the evolutionary constraints on nervous systems.
Specific topics include:
1) Comparison and composition of global GA sampling techniques
with the local (gradient) search of NNet methods.
2) Use of the GA to evolve additional higher-order function
approximation terms (``hidden units'').
3) The dis/advantages of GA recombination and its impact
on appropriate representations for NNets.
4) Trade-offs between NNet training time and GA generational time.
5) Parallel implementations of GAs that facilitate NNet simulation.
6) A role for ontogenesis between GA evolution and NNet learning.
7) The role optimality (doesn't!) play in evolution.
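As a toy illustration of the first topic (illustrative only; it uses
mutation and selection but no recombination, and nothing here comes from the
speakers' work), the following sketch searches the weight space of a tiny
2-2-1 network on XOR using only fitness evaluations:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])                     # XOR targets

def forward(w, x):
    # w packs 9 numbers: 2x2 hidden weights, 2 hidden biases, 2 output weights, 1 bias
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - T) ** 2)           # higher is better

rng = np.random.default_rng(0)
pop = rng.normal(size=(50, 9))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-25:]]             # keep the best half
    children = parents[rng.integers(0, 25, size=25)] + rng.normal(scale=0.3, size=(25, 9))
    pop = np.vstack([parents, children])                # elites plus mutated copies

best = max(pop, key=fitness)
print(np.round(forward(best, X), 2))                    # should approach 0 1 1 0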
=========================================================================
Projection Pursuit and Neural Networks
Organizers: Ying Zhao, Chris Atkeson and Peter Huber, MIT
Speakers: R.Douglas Martin, University of Washington
John Moody, Yale University
Ying Zhao, MIT
Andrew R. Barron, University of Illinois
Nathan Intrator, Brown University
Trevor Hastie, Bell Labs
Abstract: Projection Pursuit is a nonparametric statistical technique
to find "interesting" low dimensional projections of high dimensional
data sets. We hope to improve our understanding of neural networks
and projection pursuit by discussing issues such as fast training
algorithms based on PP, duality with kernel approximation, possible
avoidance of the "curse of dimensionality", and the sample complexity
for PP.
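For readers new to the technique, here is a crude, purely illustrative
sketch of the basic idea: search (here by random sampling) for a unit
direction whose one-dimensional projection of the data scores highest on an
"interestingness" index. The index used here, absolute excess kurtosis, is
my choice for the example, not necessarily the speakers':

import numpy as np

def interestingness(z):
    z = (z - z.mean()) / z.std()
    return abs(np.mean(z ** 4) - 3.0)         # absolute excess kurtosis; ~0 if Gaussian

def projection_pursuit(X, n_trials=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_dir, best_score = None, -np.inf
    for _ in range(n_trials):
        d = rng.normal(size=X.shape[1])
        d = d / np.linalg.norm(d)              # random unit direction
        score = interestingness(X @ d)         # score its 1-D projection of the data
        if score > best_score:
            best_dir, best_score = d, score
    return best_dir, best_score

# Toy data: one "interesting" bimodal coordinate hidden among 9 Gaussian ones.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
X[:, 3] += np.where(rng.random(500) < 0.5, -3.0, 3.0)
d, s = projection_pursuit(X, rng=rng)
print(np.round(d, 2))                          # should weight coordinate 3 most heavily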
=========================================================================
Constructive and Destructive Learning Algorithms II
Organizer: Scott E. Fahlman, Carnegie Mellon University
Speakers: TBA
Abstract:
Recently we have seen the emergence of new learning algorithms that
alter the network's topology. Some of these algorithms start with
excess connections and remove any that are not needed; others start
with a sparse network and add hidden units as needed, sometimes in
multiple layers; some algorithms do both. In a two-day workshop
on this topic at NIPS-90, a number of learning algorithms that
modify network topology were presented by their authors and were
critically evaluated. The past year has seen a great deal of
additional work in this area. We will briefly review the major
algorithms presented last year. Then we will turn to more recent
developments, including both new algorithms and experience gained
in using the older ones. Finally, we will consider current trends
and will try to identify open problems for future research.
=========================================================================
Oscillations and Correlations in Neural Information Processing
Organizer: Ernst Niebur, Caltech
Speakers: Bard Ermentrout, U. of Pittsburgh
Hennric Jokeit, U. of Munich
Marius Usher, Weizmann Institute
Ernst Niebur, Caltech
Abstract:
This workshop will address models proposed for tasks such as tying
together the different parts of one object in the visual field
or for binding the different representations of an object in
different cortical areas. Both oscillation-based models and
alternative models based on phase coherence (correlations)
will be considered in the light of the latest experimental findings.
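A minimal illustration of the oscillation-based account (a toy example of
mine, not one of the models to be presented): two coupled phase oscillators
with slightly different natural frequencies phase-lock, which is the
mechanism such models use to "tie together" features belonging to the same
object:

import numpy as np

dt, K = 0.01, 2.0                     # time step, coupling strength
omega = np.array([10.0, 11.0])        # slightly different natural frequencies
theta = np.array([0.0, 2.5])          # initial phases, far apart

for _ in range(5000):
    coupling = K * np.sin(theta[::-1] - theta)   # each oscillator pulled toward the other
    theta = theta + dt * (omega + coupling)

phase_diff = np.angle(np.exp(1j * (theta[0] - theta[1])))
print(round(abs(float(phase_diff)), 2))          # locks near 0.25 rad instead of drifting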
=========================================================================
Optimization of Neural Network Architectures for Speech Recognition
Organizers: Uli Bodenhausen, Universitaet Karlsruhe
Alex Waibel, Carnegie Mellon University
Speakers: Kenichi Iso, NEC Corporation, Japan
Patrick Haffner, CNET, France
Mike Franzini, Telefonica I + D, Spain
Abstract:
A variety of neural network algorithms have recently been applied to
speech recognition tasks. Besides having learning algorithms for
weights, optimization of the network architectures is required to
achieve good performance. Also of critical importance is the
optimization of neural network architectures within hybrid systems
for best performance of the system as a whole. Parameters that have
to be optimized within these constraints include the number of hidden
units, number of hidden layers, time-delays, connectivity within the
network, input windows, the number of network modules, number of states
and others. The proposed workshop intends to discuss and evaluate the
importance of these architectural parameters and different integration
strategies for speech recognition systems. Participating researchers
interested in speech recognition are welcome to present short case
studies on the optimization of neural networks, preferably with an
evaluation of the optimization steps. The workshop could also be of
interest to researchers working on constructive/destructive learning
algorithms because the relevance of different architectural parameters
should be considered for the design of these algorithms.
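As a rough sketch of what "optimizing the architecture" amounts to in
practice (hypothetical names and parameter values; no speaker's system is
implied), the parameters listed above can be treated as an explicit
configuration that an outer search loop evaluates, each evaluation being a
full train-and-validate run:

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class SpeechNetConfig:
    hidden_units: int     # units per hidden layer
    hidden_layers: int    # number of hidden layers
    time_delays: int      # TDNN-style delays per connection
    input_window: int     # frames of acoustic context fed to the network

def evaluate(cfg: SpeechNetConfig) -> float:
    # Placeholder: train the network described by cfg and return
    # recognition accuracy on a validation set.
    raise NotImplementedError

grid = [SpeechNetConfig(h, l, d, w)
        for h, l, d, w in product((32, 64, 128), (1, 2), (2, 3), (3, 5, 7))]
# best = max(grid, key=evaluate)      # each evaluation is a full training run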
=========================================================================
Self-Organization and Unsupervised Learning in Vision
Organizer: Jonathan A. Marshall, Univ. of North Carolina
Speakers: Suzanna Becker, University of Toronto
Irving Biederman, University of Southern California
Thomas H. Brown, Yale University
Joachim M. Buhmann, Lawrence Livermore National Laboratory
Heinrich Bulthoff, Brown University
Edward Callaway, Duke University
Allan Dobbins, McGill University
Gillian Einstein, Duke University
Charles Gilbert, The Rockefeller University
John E. Hummel, UCLA
Daniel Kersten, University of Minnesota
David Knill, University of Minnesota
Laurence T. Maloney, New York University
Jonathan A. Marshall, University of North Carolina at Chapel Hill
Paul Munro, University of Pittsburgh
Albert L. Nigrin, American University
Alice O'Toole, The University of Texas at Dallas
Jurgen Schmidhuber, University of Colorado
Nicol Schraudolph, University of California at San Diego
Michael P. Stryker, University of California at San Francisco
Patrick Thomas, Technische Universitat Muenchen
Rich Zemel, University of Toronto
Abstract:
This workshop considers the role that unsupervised learning
procedures (e.g. Hebb-type rules) may play in the self-organization
of cortical structures involved in the processing of visual
information. Researchers in visual neuroscience, visual psychophysics
and neural network modeling will be brought together to address
head-on the key issue of how animal visual systems got the way
they are. We hope that this will lead to a better understanding
of the factors that shape the structure of animal visual systems,
as well as better models of the neurophysiological processes
underlying vision.
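As one concrete, standard example of a Hebb-type unsupervised rule
(included for illustration only; the speakers' own models differ), Oja's
rule drives a single linear unit's weights toward the first principal
component of its inputs:

import numpy as np

rng = np.random.default_rng(0)
# Toy "visual input": zero-mean 2-D data elongated along one direction.
C = np.array([[3.0, 1.5], [1.5, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=2000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                        # the unit's linear response
    w += eta * y * (x - y * w)       # Hebbian term y*x, with Oja's y^2*w decay

principal = np.linalg.eigh(C)[1][:, -1]          # true leading eigenvector
print(np.round(w, 3), np.round(principal, 3))    # w converges to +/- the principal direction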
=========================================================================
Developments in Bayesian methods for neural networks
Organizers: David MacKay, Caltech
Steve Nowlan, Salk Institute
Abstract:
The first day of this workshop will be 50% tutorial in content,
reviewing some new ways Bayesian methods may be applied to neural
networks. The rest of the workshop will be devoted to discussions of
the frontiers and challenges facing Bayesian work in neural networks,
including issues such as Monte Carlo clustering, data selection,
active query learning, prediction of generalisation, missing inputs,
unlabelled data, and discriminative training. Discussion will be
moderated by John Bridle.
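As a toy illustration of the Bayesian flavour (my own sketch, not any
speaker's material): if the hidden layer of a network is held fixed, exact
Bayesian inference over the output weights is possible and yields error bars
on the predictions, the simplest version of the "prediction of
generalisation" theme:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 15)
y = np.sin(x) + rng.normal(scale=0.1, size=15)

centers = np.linspace(-3, 3, 8)
def features(x):
    # Fixed tanh "hidden units" standing in for a trained hidden layer.
    return np.tanh(np.subtract.outer(x, centers))       # shape (len(x), 8)

alpha, beta = 0.5, 100.0           # prior precision (weight decay), noise precision
Phi = features(x)
A = alpha * np.eye(8) + beta * Phi.T @ Phi               # posterior precision matrix
w_mean = beta * np.linalg.solve(A, Phi.T @ y)            # posterior mean of the weights

x_test = np.linspace(-3, 3, 5)
Phi_t = features(x_test)
pred_mean = Phi_t @ w_mean
pred_var = 1.0 / beta + np.sum(Phi_t * np.linalg.solve(A, Phi_t.T).T, axis=1)
print(np.round(pred_mean, 2))                            # predictions at 5 test points
print(np.round(np.sqrt(pred_var), 2))                    # Bayesian error bars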
Speakers: Radford Neal
Jurgen Schmidhuber
John Moody
David Haussler + Michael Kearns
Sara Solla + Esther Levin
Steve Renals
Reading up before the workshop
------------------------------
People intending to attend this workshop are encouraged to obtain
preprints of relevant material before NIPS. A selection of preprints
are available by anonymous ftp, as follows:
unix> ftp hope.caltech.edu (or ftp 131.215.4.231)
login: anonymous
password: <your name>
ftp> cd pub/mackay
ftp> get README.NIPS
ftp> quit
Then read the file README.NIPS for further information.
Problems? Contact David MacKay, mackay@hope.caltech.edu
=========================================================================
Active Learning and Control
Organizers: David Cohn, Univ. of Washington
Don Sofge, MIT
Speakers: C. Atkeson, MIT
A. Barto, Univ. of Massachusetts, Amherst
J. Hwang, Univ. of Washington
M. Jordan, MIT
A. Moore, MIT
J. Schmidhuber, University of Colorado, Boulder
R. Sutton, GTE
S. Thrun, Carnegie-Mellon University
Abstract:
An "active" learning system is one that is not merely a passive
observer of its environment, but instead plays an active role in
determining its inputs. This definition includes classification
networks that query for values in "interesting" parts of their domain,
learning systems that actively "explore" their environment, and
adaptive controllers that learn how to produce control outputs to
achieve a goal.
Common facets of these problems include building world models in
complex domains, exploring a domain safely and efficiently, and
planning future actions based on one's model.
In this workshop, our main focus will be on key unsolved problems
that may be holding up progress in these areas, rather than on
polished, finished results. Our hope is that work on unsolved
problems in one field can draw on insights from research in the
other fields.
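A toy sketch of the "query for values in interesting parts of the domain"
idea (an illustration only, not a workshop result): a committee of fits of
different complexity disagrees most where the labelled data are sparse, and
that is where the learner queries next:

import numpy as np

def target(x):                                  # the unknown function being learned
    return np.sin(3 * x)

x_lab = np.array([-1.0, -0.8, 0.9, 1.0])        # labels so far, clustered at the ends
y_lab = target(x_lab)
candidates = np.linspace(-1, 1, 201)            # pool of possible queries

# Committee: polynomial fits of degree 1, 2 and 3 to the same labelled data.
preds = [np.polyval(np.polyfit(x_lab, y_lab, deg=d), candidates) for d in (1, 2, 3)]
disagreement = np.var(preds, axis=0)

next_query = candidates[np.argmax(disagreement)]
print(round(float(next_query), 2))              # falls in the unexplored middle of the domain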
=========================================================================
Computer Vision vs Network Vision
Organizers: John Mayhew and Terry Sejnowski
Speakers: TBA
Abstract:
Computer vision has developed a methodology based on sound
engineering practice: 1. Break the problem down into well-defined
subproblems and mathematically analyze each part; 2. Develop efficient
algorithms for each module; 3. Implement each algorithm with the best
available technology. These are Marr's three levels: computational,
algorithmic, and implementational.
In contrast, proponents of neural networks have developed a different
methodology: 1. Find a good representation for the input data that makes
explicit the features needed to solve the problem; 2. Use learning algorithms
to cluster and categorize the data; 3. Glue together networks that solve
different parts of the problem with more learning. Networks are memory
intensive and constraints from the hardware level are as important as
constraints from the computational level.
This workshop is intended to provoke a lively and free-wheeling
discussion of the central issues in vision.
=========================================================================
Complexity Issues in Neural Computation and Learning
Organizers: Kai-Yeung Siu and Vwani Roychowdhury, Stanford Univ.
Speakers: TBA
Abstract: The goal of this workshop is to address recent developments
in understanding the capabilities and limitations of various models
for neural computation and learning. Topics will include: 1) circuit
complexity of neural networks, 2) capacity of neural networks, and
3) complexity issues in learning algorithms.
=========================================================================
Recurrent Networks: Theory and Applications
Organizers: Luis Borges de Almeida, INESC
C. Lee Giles, NEC Research Institute
Richard Rohwer, Edinburgh University
Speakers: TBA
Abstract:
Recurrent neural networks have a very large potential for handling
dynamical / sequential problems, e.g. recognition and classification
of time-dependent signals like speech, modelling and control of
dynamical systems, learning of grammars and symbolic processing, etc.
However, the fulfillment of this potential remains an important
open issue. Training algorithms are very inefficient in terms of
memory and computational demands. Little is known about convenient
architectures. The number of known successful applications is very
limited. This is true even for static applications (operation in the
"fixed point mode").
The first day of this two-day workshop will focus on the outstanding
theoretical issues in recurrent neural networks, and the second
day will examine existing and potential real-world applications.
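For concreteness, here is a minimal sketch of the kind of network the
workshop is about: an Elman-style recurrent net whose hidden state is fed
back at every time step (forward pass only; training such networks
efficiently is exactly the open issue described above):

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2
W_xh = rng.normal(scale=0.3, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.3, size=(n_hidden, n_hidden))   # the recurrent weights
W_hy = rng.normal(scale=0.3, size=(n_out, n_hidden))

def run(sequence):
    h = np.zeros(n_hidden)                     # initial hidden state
    outputs = []
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)       # state depends on the input AND the past state
        outputs.append(W_hy @ h)
    return np.array(outputs)

seq = rng.normal(size=(10, n_in))              # a length-10 input sequence
print(run(seq).shape)                          # (10, 2): one output per time step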
=========================================================================
VLSI Neural Networks and Neurocomputers
Organizers: Clifford Lau, Office of Naval Research
Jim Burr, Stanford University
Speakers: TBA
Abstract:
This two-day workshop will address the latest advances in VLSI
implementations of neural nets, and the design of high performance
neurocomputers. We will present an updated list of currently
available neurochips, and discuss a wide range of issues, including:
1) Design issues: Advantages and disadvantages of analog and digital
approaches; how much arithmetic precision is necessary;
which algorithms have been implemented; importance of on-chip
learning; neurochip design in existing CAD environments.
2) Performance issues: Critical factors in achieving robust performance;
tradeoffs between capacity and performance; scaling limits on
constructing large neural networks.
3) Use of neurochips: What input/output devices are necessary;
what programming support environments are necessary.
4) Application areas for supercomputing neurocomputers.
------------------------------
End of Neuron Digest [Volume 8 Issue 9]
***************************************