Neuron Digest   Thursday, 11 Nov 1993                Volume 12 : Issue 14 

Today's Topics:
information on NIPS*93 workshop accommodations
Music and Audition at NIPS (1st day at Vail)
NIPS*93 Hybrid Systems Workshop
NIPS-93 Workshop on catastrophic interference
NIPS Workshop on Spatial Perception
Post-NIPS Workshop on Robot Learning
NIPS*93 Workshop on Stability and Observability/program


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: information on NIPS*93 workshop accommodations
From: "Michael C. Mozer" <mozer@dendrite.cs.colorado.edu>
Date: Tue, 19 Oct 93 12:40:14 -0700

The NIPS*93 brochure is a bit sketchy concerning accommodations at the
NIPS workshops, to be held at the Radisson Resort Vail December 2-4.

To make reservations at the Radisson, call (800) 648-0720. For general
information on the resort, the central number is (303) 476-4444. Reservations
can also be made by fax: (303) 476-1647. And if you would like to let the
glowing power of a live psychic answer your very personal questions, the
number is (900) 820-7131.

Note that rooms will be held for us only until the beginning of November, and
last year many participants had to sleep in the snow due to lack of foresight
in making reservations.

Concerning lift tickets: Unfortunately, the NIPS brochure was published
before we were able to obtain this year's lift ticket prices. The prices have
increased roughly $5/day over those published in the brochure. If you wish
to purchase tickets in advance, though, we ask that you send in the amounts
published in the brochure. We will collect the difference on site. (Sorry,
it's the only feasible way to do recordkeeping at this point.) Lift tickets
may also be purchased on site at an additional expense of roughly $1/day.
Very sorry for the inconvenience.


Mike Mozer
NIPS*93 Workshop Chair


------------------------------

Subject: Music and Audition at NIPS (1st day at Vail)
From: "E. large" <large@cis.ohio-state.edu>
Date: Mon, 25 Oct 93 09:43:53 -0500


Resonance and the Perception of Musical Meter

Edward W. Large and John F. Kolen

The perception of musical rhythm is traditionally described as
involving, among other things, the assignment of metrical structure to
rhythmic patterns. In our view, the perception of metrical structure
is best described as a dynamic process in which the temporal
organization of musical events entrains the listener in much the same
way that two pendulum clocks hanging on the same wall synchronize
their motions so that they tick in lock step. In this talk, we
re-assess the notion of musical meter, and show how the perception of
this sort of temporal organization can be modeled as a system of
non-linearly coupled oscillators responding to musical rhythms.
Individual oscillators phase- and frequency-lock to components of
rhythmic patterns, embodying the notion of musical pulse, or beat. The
collective behavior of a system of oscillators represents a
self-organized response to rhythmic patterns, embodying a "perception"
of metrical structure. When exposed to performed musical rhythms, the
system shows the ability to simultaneously perform quantization
(categorization of temporal intervals) and assignment of metrical
structure in real time. We discuss implications for psychological
theories of temporal expectancy, "categorical" perception of temporal
intervals, and the perception of metrical structure.
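
The entrainment idea can be made concrete with a toy adaptive oscillator.
The sketch below (plain Python; the onset times, gain values, and the
one-onset-per-beat assumption are illustrative inventions, not the Large &
Kolen model) nudges a single oscillator's beat times and period toward a
gradually slowing rhythm:

def entrain(onsets, period=0.60, phase_gain=0.5, period_gain=0.2):
    """Track a train of onset times (seconds) with one adaptive oscillator.
    Each onset is assumed to fall near the next expected beat; the
    oscillator corrects its phase (next beat time) and its period by a
    fraction of the timing error -- a crude phase-locked loop."""
    beat = onsets[0]                          # latch onto the first event
    beats = [beat]
    for onset in onsets[1:]:
        expected = beat + period
        error = onset - expected              # positive if the rhythm runs late
        beat = expected + phase_gain * error  # phase lock
        period += period_gain * error         # frequency lock
        beats.append(beat)
    return beats, period

# a rhythm that slows from roughly 600 ms to roughly 700 ms between onsets
onsets = [0.00, 0.60, 1.22, 1.86, 2.52, 3.20, 3.90]
beats, final_period = entrain(onsets)
print("beats:", [round(b, 2) for b in beats])
print("adapted period:", round(final_period, 3))

A bank of such units with different intrinsic periods, coupled to one
another, is the kind of system the abstract describes; this fragment only
shows the locking behaviour of a single unit.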


------------------------------

Subject: NIPS*93 Hybrid Systems Workshop
From: "Michael P. Perrone" <mpp@cns.brown.edu>
Date: Mon, 25 Oct 93 10:23:54 -0500

=====================================================================

NIPS*93 Postconference Workshop
December 4, 1993


Pulling It All Together: Methods for Combining Neural Networks

=====================================================================

Intended Audience:

Those interested in optimization algorithms for improving
neural network performance.


Organizer:

Michael P. Perrone, Brown University (mpp@cns.brown.edu)


Abstract:

This workshop will examine current hybrid methods for improving
neural network performance.

The past several years have seen a tremendous growth in the complexity of
the recognition, estimation and control tasks expected of neural networks.
In solving these tasks, one is faced with a large variety of learning
algorithms and a vast selection of possible network architectures. After
all the training, how does one know which is the best network? This
decision is further complicated by the fact that, even though standard
techniques such as MLPs and RBF nets are theoretically sufficient for
solving any task, they can be severely limited by problems such as
overfitting, data sparsity and local optima.

The usual solution to these problems is a winner-take-all cross-validatory
model selection. However, recent experimental and theoretical work
indicates that we can improve performance by considering methods for
combining neural networks.

This workshop will discuss several such methods including Boosting,
Competing Experts, Ensemble Averaging, the GENSEP algorithm, Metropolis
algorithms, Stacked Generalization and Stacked Regression. The issues we
will cover in this workshop include Bayesian considerations, the role of
complexity, the role of cross-validation, integrating a priori knowledge,
error orthogonality, task decomposition, network selection techniques,
overfitting, data sparsity and local optima.
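
To make the contrast with winner-take-all selection concrete, the
following self-contained Python sketch (synthetic predictors standing in
for independently trained networks; every name and number in it is
illustrative) compares keeping only the single net with the best
validation error against averaging the outputs of all of them:

import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(x)

def make_net():
    # stand-in for an independently trained network: the true function
    # plus that network's own systematic error
    bias, slope = rng.normal(0, 0.3), rng.normal(0, 0.2)
    return lambda x: np.sin(x) + bias + slope * x

nets = [make_net() for _ in range(10)]

x_val = np.linspace(0, 3, 50);    y_val = true_f(x_val)
x_test = np.linspace(0, 3, 200);  y_test = true_f(x_test)

def mse(predict, x, y):
    return float(np.mean((predict(x) - y) ** 2))

# winner-take-all model selection: keep the net with the best validation error
best = min(nets, key=lambda net: mse(net, x_val, y_val))

# ensemble averaging: combine all the nets by averaging their outputs
def averaged(x):
    return np.mean([net(x) for net in nets], axis=0)

print("best single net,   test MSE:", round(mse(best, x_test, y_test), 4))
print("averaged ensemble, test MSE:", round(mse(averaged, x_test, y_test), 4))

When the members' errors are reasonably independent, the averaged
ensemble's error is no worse than the average member's and is often
better than that of the single selected net, which is the motivation
behind several of the combination methods listed above.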

Lively audience participation is encouraged.


Schedule December 4, 1993
======== ================


7:30-7:35 Opening Remarks

7:35-7:55 M. Perrone, (Brown University)
"Averaging Methods: Theoretical Issues and Real World Examples"

7:55-8:15 J. Friedman, (Stanford)
"A New Approach to Multiple Outputs Using Stacking"

8:15-8:35 S. Nowlan, (Salk Institute)
"Competing Experts"

8:35-8:55 H. Drucker, (AT&T)
"Boosting Compared to Other Ensemble Methods"

8:55-9:30+ Discussion


9:30-4:30 FREE TIME


4:30-4:50 C. Scofield, (Nestor Inc)
"Commercial Applications: Modular Approaches to Real World Tasks"

4:50-5:10 W. Buntine, (NASA Ames Research Center)
"Averaging and Probabilistic Networks: Automating the Process"

5:10-5:30 D. Wolpert, (Santa Fe Institute)
"Inferring a Function vs. Inferring an Inference Algorithm"

5:30-5:50 H. Thodberg, (Danish Meat Research Institute)
"Error Bars on Predictions from Deviations among Committee
Members (within Bayesian Backprop)"

5:50-6:10 S. Hashem, (Purdue University)
"Merits of Combining Neural Networks: Potential Benefits and Risks"

6:10-6:30+ Discussion & Closing Remarks

7:00 Workshop Wrap-Up (common to all sessions)

=====================================================================

General NIPS information and registration:

An electronic copy of the 1993 NIPS registration brochure is available
in postscript format via anonymous ftp at helper.systems.caltech.edu
in /pub/nips/NIPS_93_brochure.ps.Z.

For a hardcopy of the brochure or other information, please send a
request to nips93@systems.caltech.edu or to:

NIPS Foundation,
P.O. Box 60035,
Pasadena, CA 91116-6035

-----------------------------------------------------------------
Michael P. Perrone                        Email: mpp@cns.brown.edu
Institute for Brain and Neural Systems    Tel:   401-863-3920
Brown University                          Fax:   401-863-3934
Providence, RI 02912
-----------------------------------------------------------------



------------------------------

Subject: NIPS-93 Workshop on catastrophic interference
From: Bob French <french@willamette.edu>
Date: Mon, 25 Oct 93 09:07:29 -0800



NIPS-93 Workshop:
================

CATASTROPHIC INTERFERENCE IN CONNECTIONIST NETWORKS:
CAN IT BE PREDICTED, CAN IT BE PREVENTED?


Date: Saturday, December 4, 1993, at Vail, Colorado
====

Intended audience: Connectionists, cognitive scientists and
================= applications-oriented users of connectionist
networks interested in a better understanding
of:
i) when and why their networks can suddenly
and completely forget previously learned
information;
ii) how it is possible to reduce or even
eliminate this phenomenon.

Organizer: Bob French
========= Computer Science Department
Willamette University, Salem OR
french@willamette.edu

Program:
========
When connectionist networks learn new information, they can
suddenly and completely forget everything they had previously learned.
This problem is called catastrophic forgetting or catastrophic
interference. Given the demonstrated severity of the problem, it is
intriguing that it has to date received very little attention. When new
information must be added to an already-trained connectionist network, it
is currently taken for granted that the network will simply cycle through
all of the old data again. Since relearning all of the old data is both
psychologically implausible and impractical for very large data sets, is
it possible to do
otherwise? Can connectionist networks be developed that do not forget
catastrophically -- or perhaps that do not forget at all -- in the
presence of new information? Or is catastrophic forgetting perhaps
the inevitable price for using fully distributed representations?
Under what circumstances will a network forget or not forget?
Further, can the amount of forgetting be predicted with any
reliability? These questions are of particular interest to anyone who
intends to use connectionist networks as a memory/generalization
device.
This workshop will focus on:
- the theoretical reasons for catastrophic interference;
- the techniques that have been developed to eliminate
it or to reduce its severity;
- the side-effects of catastrophic interference;
- the degree to which a priori prediction of catastrophic
forgetting is or is not possible.
As connectionist networks become more and more a part of
applications packages, the problem of catastrophic interference will
have to be addressed. This workshop will bring the audience up to
date on current research on catastrophic interference.
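
For readers who have not seen the effect, the following small Python
sketch illustrates it (a made-up two-task setup with a from-scratch
one-hidden-layer net; it is an illustration of the phenomenon only, not
any presenter's model). It trains on one set of associations, then trains
on a second set alone, and reports how performance on the first set
degrades:

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(W1, W2, X, Y, epochs=4000, lr=0.5):
    """Plain full-batch backprop on a one-hidden-layer net (biases omitted)."""
    for _ in range(epochs):
        H = sigmoid(X @ W1)                  # hidden activations
        O = sigmoid(H @ W2)                  # outputs
        dO = (O - Y) * O * (1 - O)           # output deltas
        dH = (dO @ W2.T) * H * (1 - H)       # hidden deltas
        W2 -= lr * H.T @ dO
        W1 -= lr * X.T @ dH
    return W1, W2

def error(W1, W2, X, Y):
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2))

# two disjoint sets of random binary associations: "old" and "new" items
X_old = rng.integers(0, 2, (8, 10)).astype(float)
Y_old = rng.integers(0, 2, (8, 3)).astype(float)
X_new = rng.integers(0, 2, (8, 10)).astype(float)
Y_new = rng.integers(0, 2, (8, 3)).astype(float)

W1 = rng.normal(0, 0.5, (10, 12))
W2 = rng.normal(0, 0.5, (12, 3))

W1, W2 = train(W1, W2, X_old, Y_old)
print("old-item error after learning old items:", round(error(W1, W2, X_old, Y_old), 3))

# sequential training on the new items only -- no interleaving of old data
W1, W2 = train(W1, W2, X_new, Y_new)
print("old-item error after learning new items:", round(error(W1, W2, X_old, Y_old), 3))

The second figure is typically many times the first; interleaving the old
data with the new avoids the collapse, which is exactly the impractical
remedy questioned above.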


Speakers: Stephan Lewandowsky (lewan@constellation.ecn.uoknor.edu)
======== Department of Psychology
University of Oklahoma

Phil A. Hetherington (het@blaise.psych.mcgill.ca)
Department of Psychology
McGill University

Noel Sharkey (noel@dcs.exeter.ac.uk)
Connection Science Laboratory
Dept. of Computer Science
University of Exeter, U.K.

Bob French (french@willamette.edu)
Computer Science Department
Willamette University


Morning session:
---------------
7:30 - 7:45 Bob French: An Introduction to the Problem of
Catastrophic Interference in Connectionist Networks

7:45 - 8:15 Stephan Lewandowsky: Catastrophic Interference: Causes,
Solutions, and Side-Effects

8:15 - 8:30 Brief discussion

8:30 - 9:00 Phil Hetherington: Sequential Learning in Connectionist
Networks: A Problem for Whom?

9:00 - 9:30 General discussion


Afternoon session
-----------------

4:30 - 5:00 Noel Sharkey: Catastrophic Interference and
Discrimination.

5:00 - 5:15 Brief discussion

5:15 - 5:45 Bob French: Prototype Biasing and the
Problem of Prediction

5:45 - 6:30 General discussion and closing remarks


Below are the abstracts for the talks to be presented in this workshop:


CATASTROPHIC INTERFERENCE: CAUSES, SOLUTIONS, AND SIDE-EFFECTS
Stephan Lewandowsky
Department of Psychology
University of Oklahoma

I briefly review the causes for catastrophic interference in
connectionist models and summarize some existing solutions. I then
focus on possible trade-offs between resolutions to catastrophic
interference and other desirable network properties. For example, it
has been suggested that reduced interference might impair
generalization or prototype formation. I suggest that these
trade-offs occur only if interference is reduced by altering the
response surfaces of hidden units.
--------------------------------------------------------------------------

SEQUENTIAL LEARNING IN CONNECTIONIST NETWORKS: A PROBLEM FOR WHOM?
Phil A. Hetherington
Department of Psychology
McGill University

Training networks in a strictly blocked, sequential manner normally
results in poor performance because new items overlap with old items
at the hidden unit layer. However, catastrophic interference is not a
necessary consequence of using distributed representations. First,
examination by the method of savings demonstrates that much of the
early information is still retained: Items thought lost can be
relearned within a couple of trials. Second, when items are learned
in a windowed, or overlapped fashion, less interference obtains. And
third, when items are presented in a strictly blocked, sequential
manner to a network that already possesses a relevant knowledge base,
interference may not occur at all. Thus, when modeling normal human
learning there is no catastrophic interference problem. Nor is there
a problem when modeling strictly sequential human memory experiments
with a network that has a relevant knowledge base. There is only a
problem when simple, unstructured, tabula rasa networks are expected
to model the intricacies of human memory.
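
The "method of savings" mentioned above has a simple operational form:
count the training epochs needed to learn the old items initially, train
on new items alone, then count the epochs needed to re-reach the same
criterion on the old items. The toy Python sketch below uses a linear
associator trained by the delta rule purely to keep the bookkeeping short
(all sizes, rates, and tolerances are made up, and interference in this
linear case is mild); it is not Hetherington's simulation:

import numpy as np

rng = np.random.default_rng(2)

def epochs_to_criterion(W, X, Y, lr=0.02, tol=0.01, max_epochs=5000):
    """Delta-rule training until mean squared error on (X, Y) drops below tol."""
    for epoch in range(1, max_epochs + 1):
        W += lr * X.T @ (Y - X @ W)
        if np.mean((X @ W - Y) ** 2) < tol:
            return epoch, W
    return max_epochs, W

X_old, Y_old = rng.normal(size=(6, 20)), rng.normal(size=(6, 5))
X_new, Y_new = rng.normal(size=(6, 20)), rng.normal(size=(6, 5))

W = np.zeros((20, 5))
first, W = epochs_to_criterion(W, X_old, Y_old)      # learn old items
_, W = epochs_to_criterion(W, X_new, Y_new)          # learn new items only
relearn, W = epochs_to_criterion(W, X_old, Y_old)    # relearn old items

print("epochs to learn old items initially:   ", first)
print("epochs to relearn them after new items:", relearn)
print("savings:", round(1.0 - relearn / first, 2))

Substantial savings indicate that much of the old knowledge survives in
the weights even when direct recall looks poor, which is the first point
made in the abstract above.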
--------------------------------------------------------------------------

CATASTROPHIC INTERFERENCE AND DISCRIMINATION
Noel Sharkey
Connection Science Laboratory
Dept. of Computer Science
University of Exeter
Exeter, U.K.

Connectionist learning techniques, such as backpropagation, have
been used increasingly for modelling psychological phenomena.
However, a number of recent simulation studies have shown that when a
connectionist net is trained, using backpropagation, to memorize sets
of items in sequence and without negative exemplars, newly learned
information seriously interferes with old. Three converging methods
were employed to show why and under what circumstances such
retroactive interference arises. First, a geometrical analysis
technique, derived from perceptron research, was introduced and
employed to determine the computational and representational
properties of feedforward nets with one and two layers of weights.
This analysis showed that the elimination of interference always
resulted in a breakdown of old-new discrimination. Second, a formally
guaranteed solution to the problems of interference and discrimination
was presented as the HARM model and used to assess the relative merits
of other proposed solutions. Third, two simulation studies were
reported that assessed the effects of providing nets with experience
of the experimental task. Prior knowledge of the encoding task was
provided to the nets either by block training them in advance or by
allowing them to extract the knowledge through sequential training.
The overall conclusion was that the interference and discrimination
problems are closely related. Sequentially trained nets employing the
backpropagation learning algorithm will unavoidably suffer from either
one or the other.
--------------------------------------------------------------------------

PROTOTYPE BIASING IN CONNECTIONIST NETWORKS
Bob French
Computer Science Dept.
Willamette University

Previously learned representations bias new representations. If
subjects are told that a newly encountered object X belongs to an
already familiar category P, they will tend to emphasize in their
representation of X features of the prototype they have for the
category P. This is the basis of prototype biasing, a technique that
appears to significantly reduce the effects of catastrophic forgetting.
The 1984 Congressional Voting Records database is used to
illustrate prototype biasing. This database contains the yes-no
voting records of Republican and Democratic members of Congress in
1984 on 16 separate issues. This database lends itself conveniently
to the use of a network having 16 "yes-no" input units, a hidden layer
and one "Republican/Democrat" output node. A "Republican" prototype
and a "Democrat" prototype are built, essentially by separately
averaging over Republican and Democrat hidden-layer representations.
These prototypes then "bias" subsequent representations of new
Democrats towards the Democrat prototype and of new Republicans
towards the Republican prototype.
Prototypes are learned by a second, separate backpropagation
network that associates teacher patterns with their respective
prototypes. Thus, ideally, when the "Republican" teacher pattern is
fed into it, it produces the "Republican" prototype on output. The
output from this network is continually fed back to the hidden layer
of the primary network and is used to bias new representations.
Also discussed in this paper are the problems involved in
predicting the severity of catastrophic forgetting.
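
The full mechanism described above routes the output of a second
backpropagation network back into the primary network's hidden layer. The
Python fragment below sketches only the two ingredients that fit in a few
lines -- class prototypes formed by averaging hidden-layer
representations, and a pull of a new item's hidden representation toward
its class prototype -- with made-up data and an untrained weight matrix
standing in for a trained one; it is not the author's implementation:

import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid = 16, 8                      # 16 yes/no votes -> hidden layer
W1 = rng.normal(0, 0.5, (n_in, n_hid))   # stand-in for trained input-to-hidden weights

# toy stand-ins for already-learned voting records (1 = yes, 0 = no)
rep_votes = rng.integers(0, 2, (20, n_in)).astype(float)
dem_votes = rng.integers(0, 2, (20, n_in)).astype(float)

# 1. one prototype per class: the average hidden-layer representation
rep_proto = sigmoid(rep_votes @ W1).mean(axis=0)
dem_proto = sigmoid(dem_votes @ W1).mean(axis=0)

# 2. bias a new item's hidden representation toward its class prototype
def biased_hidden(x, prototype, alpha=0.3):
    h = sigmoid(x @ W1)
    return (1.0 - alpha) * h + alpha * prototype

new_democrat = rng.integers(0, 2, n_in).astype(float)
print("plain hidden rep: ", np.round(sigmoid(new_democrat @ W1), 2))
print("biased hidden rep:", np.round(biased_hidden(new_democrat, dem_proto), 2))

The biased representation is what subsequent learning would then work
with, so new Democrats are drawn toward the existing Democrat prototype
instead of overwriting it.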







------------------------------

Subject: NIPS Workshop on Spatial Perception
From: Terry Sejnowski <terry@helmholtz.sdsc.edu>
Date: Mon, 25 Oct 93 21:15:43 -0800

NIPS*93 WORKSHOP ANNOUNCEMENT

Title:
Processing of visual and auditory space and its modification by experience.

Intended Audience:
Researchers interested in spatial perception, sensory fusion and learning.

Organizers:
Josef P. Rauschecker                 Terrence Sejnowski
josef@helix.nih.gov                  terry@helmholtz.sdsc.edu

Program:
This workshop will address the question of how spatial information is
represented in the brain, how it is matched and compared by the visual and
auditory systems, and how early sensory experience influences the
development of these space representations.
We will discuss neurophysiological and computational data from cats,
monkeys, and owls that suggest how the convergence of different sensory
space representations may be handled by the brain. In particular, we will
look at the role of early experience and learning in establishing these
representations. Lack of visual experience affects space processing in cats
and owls differently. We will therefore discuss various kinds of plasticity
in different spatial representations.
Half the available time has been reserved for discussion and informal
presentations. We will encourage lively audience participation.


Morning Session

(7:30 - 8:30) Presentations

Predictive Hebbian learning and sensory fusion
(Terry Sejnowski)

A connectionist model of the owl's sound localization system
(Dan Rosen)

Intermodal compensatory plasticity of sensory systems
(Josef Rauschecker)

8:30 - 9:30 Discussion


Afternoon Session

(4:30 - 5:30) Presentations

Neurophysiological processing of visual and auditory space in monkeys
(Richard Andersen)

Learning map registration in the superior colliculus with predictive
Hebbian learning
(Alex Pouget)

A neural network model for the detection of heading direction from optic
flow in the cat's visual system
(Markus Lappe)

5:30 - 6:30 Discussion

=====================================================================

General NIPS information and registration:

An electronic copy of the 1993 NIPS registration brochure is available
in postscript format via anonymous ftp at helper.systems.caltech.edu
in /pub/nips/NIPS_93_brochure.ps.Z.

For a hardcopy of the brochure or other information, please send a
request to nips93@systems.caltech.edu or to:

NIPS Foundation,
P.O. Box 60035,
Pasadena, CA 91116-6035


=====================================================================



------------------------------

Subject: Post-NIPS Workshop on Robot Learning
From: David Cohn <cohn@psyche.mit.edu>
Date: Tue, 26 Oct 93 13:53:43 -0500

The following workshop will be held on Friday, December 3rd in Vail,
CO as one of the Post-NIPS workshops. To be added to a mailing list
for further information about the workshop, send electronic mail to
"robot-learning-request@psyche.mit.edu".
---------------------------------------------------------------------------

NIPS*93 Workshop: Robot Learning II: Exploration and Continuous Domains
=================

Intended Audience: Researchers interested in robot learning, exploration,
================== and active learning systems in general

Organizer: David Cohn (cohn@psyche.mit.edu)
========== Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139

Overview:
=========

The goal of this workshop will be to provide a forum for researchers
active in the area of robot learning and related fields. Due to the
limited time available, we will focus on two major issues: efficient
exploration of a learner's state space, and learning in continuous
domains.

Robot learning is characterized by sensor noise, control error,
dynamically changing environments and the opportunity for learning by
experimentation. A number of approaches, such as Q-learning, have
shown great practical utility for learning under these difficult
conditions. However, these approaches have only been proven to
converge to a solution if all states of a system are visited
infinitely often. What has yet to be determined is whether we can
efficiently explore a state space so that we can learn without having
to visit every state an infinite number of times, and how we are to
address problems in continuous domains, where there are effectively an
infinite number of states to be visited.
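
To make the caveat concrete, here is a minimal tabular Q-learning sketch
on a made-up six-state chain (not any presenter's system). The
epsilon-greedy choice is what keeps every state-action pair being
revisited; it has no direct analogue once the state space is continuous:

import random

random.seed(0)

# Toy chain world: states 0..5, actions 0 = left, 1 = right.
# Reaching state 5 ends the episode with reward 1; episodes start at state 0.
N_STATES, GOAL = 6, 5

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def greedy(s):
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration: the term that keeps every state-action
        # pair being tried, which the convergence guarantees rely on
        a = random.randrange(2) if random.random() < epsilon else greedy(s)
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

print("greedy action per state:", [greedy(s) for s in range(N_STATES)])
print("Q at the start state:", [round(q, 2) for q in Q[0]])

The table-based update converges here because the six states keep being
revisited; the questions above ask what replaces that guarantee when the
state space is large or continuous.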

This workshop is intended to serve as a followup to last year's
post-NIPS workshop on machine learning. The two problems to be
addressed this year were identified as two (of the many) crucial
issues facing robot learning.

The morning session of the workshop will consist of short
presentations discussing theoretical approaches to exploration and to
learning in continuous domains, followed by general discussion guided
by a moderator. The afternoon session will center on practical and/or
heuristic approaches to these problems in the same format. As time
permits, we may also attempt to create an updated "Where do we go from
here?" list, like that drawn up in last year's workshop.

Video demos will be encouraged. If feasible, we will attempt to have a
VCR set up after the workshop to allow for informal demos.

Preparatory readings from the presenters will be ready by early
November. To be placed on a list to receive continuing information about
the workshop (such as where and when the readings appear on-line), send
email to "robot-learning-request@psyche.mit.edu".

Tentative Program:
==================
December 3, 1993

Morning Session: Theoretical Approaches
---------------------------------------
7:30-8:30   Synopses of different approaches (20 min each):

            Andrew Moore, CMU
            "The Parti-game approach to exploration"

            Leemon Baird, USAF
            "Reinforcement learning in continuous domains"

            Juergen Schmidhuber, TUM
            Reinforcement-directed information acquisition in
            Markov Environments

8:30-9:30 Open discussion

Afternoon Session: Heuristic Approaches
---------------------------------------
4:30-5:50   Synopses of different approaches (20 min each):

            Long-Ji Lin, Siemens
            "RatBot: A mail-delivery robot"

            Stephan Schaal, MIT
            "Efficiently exploring high-dimensional spaces"

            Terry Sanger, MIT/JPL
            "Trajectory extension learning"

            Jeff Schneider, Rochester
            "Learning robot skills in high-dimensional action spaces"

5:50-6:30 Open discussion


------------------------------

Subject: NIPS*93 Workshop on Stability and Observability/program
From: GARZONM@hermes.msci.memst.edu
Date: 27 Oct 93 15:29:32 -0600


A day at NIPS*93 on
STABILITY AND OBSERVABILITY
3 December 1993 at Vail, Colorado

Intended Audience: neuroscientists, computer and cognitive
================= scientists, neurobiologists, mathematicians/
dynamical systems, electrical engineers, and
anyone interested in questions such as:

* what effects can noise, bounded precision and
uncertainty in inputs, weights and/or
transfer functions have on the i/o
behavior of a neural network?
* what is missed and what is observable in
computer simulations of the networks they
purport to simulate?
* how much architecture can be observed
in the behavior of a network-in-a-box?
* what can be done to improve and/or accelerate
convergence to stable equilibria during
learning and network updates while preserving
the intended dynamics of the process?
Organizers:
==========

Fernanda Botelho                     Max Garzon
botelhof@hermes.msci.memst.edu       garzonm@hermes.msci.memst.edu
Mathematical Sciences                Institute for Intelligent Systems
Memphis State University
Memphis, TN 38152 U.S.A.
[botelhof,garzonm]@hermes.msci.memst.edu


Program:
=======
Following is a (virtually) final schedule. Each talk is scheduled for
15 minutes, with 5 minutes in between for questions and comments. One or
two contributed talks might still be added to the schedule (and would
cut into the panel discussion in the afternoon).

Morning Session:
---------------

7:30-7:50 M. Garzon, Memphis State University, Tennessee
Introduction and Overview

7:50-8:10 S. Kak, Louisiana State University, Baton Rouge
Stability and Observability in Feedback Networks

8:10-8:30 S. Piche, Microelectronics Technology Co., Austin, Texas
Sensitivity of Neural Networks to Errors

8:30-8:50 R. Rojas, Int. Computer Science Institute UCB
and Freie Universität Berlin
Stability of Learning in Neural Networks

8:50-9:10 G. Chauvet and P. Chauvet,
Institut de Biologie Théorique, U. d'Angers, France
Stability of Purkinje Units in the Cerebellar Cortex

9:10-9:30 N. Peterfreund and Y. Baram, Technion, Israel
Trajectory Control of Convergent Networks


Afternoon Session:
------------------

4:30-4:50 X. Wang, U. of Southern California and UCLA
Consistencies of Stability and Bifurcation

4:50-5:10 M. Casey, UCSD, San Diego, California
Computation Dynamics in Discrete-time Recurrent Nets

5:10-5:30 M. Cohen, Boston University, Massachusetts
Synthesis of Decision Regions in Dynamical Systems

5:30-5:50 F. Botelho, Memphis State University, Tennessee
Observability of Discrete and Analog Networks

5:50-6:10 U. Levin and K. Narendra,
OGI/CSE Portland/Oregon and Yale University,
Recursive Identification Using Feedforward Nets

6:10-6:30 Panel Discussion

7:00 All-workshop wrap-up


Max Garzon                  (preferred) garzonm@hermes.msci.memst.edu
Math Sciences                           garzonm@memstvx1.memst.edu
Memphis State University    Phone: (901) 678-3138/-2482
Memphis, TN 38152 USA       Fax:   (901) 678-2480/3299


------------------------------

End of Neuron Digest [Volume 12 Issue 14]
*****************************************
