Neuron Digest Thursday, 11 Nov 1993 Volume 12 : Issue 13
Today's Topics:
NIPS*93 workshops
NIPS*93 workshop
NIPS*93 program
NIPS Workshop: Selective Attention
Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@psych.upenn.edu". The ftp archives are
available from psych.upenn.edu (130.91.68.31). Back issues requested by
mail will eventually be sent, but may take a while.
----------------------------------------------------------------------
Subject: NIPS*93 workshops
From: "Michael C. Mozer" <mozer@dendrite.cs.colorado.edu>
Date: Mon, 06 Sep 93 20:21:07 -0700
For the curious, a list of topics for the NIPS*93 post-conference workshops
is attached. The workshops will be held in Vail, Colorado, on December 3 and
4, 1993.
For further info concerning the individual workshops, please contact the
workshop organizers, whose names and e-mail addresses are listed below. Abstracts are
not available at present, but will be distributed prior to the workshops.
For NIPS conference and workshop registration info, please write to: NIPS*93
Registration / NIPS Foundation / PO Box 60035 / Pasadena, CA 91116-6035 USA
----------------
December 3, 1993
----------------
Complexity Issues in Neural Computation and Learning
Vwani Roychowdhury & Kai-Yeung Siu
vwani@ecn.purdue.edu
Connectionism for Music and Audition
Andreas Weigend & Dick Duda
weigend@cs.colorado.edu
Memory-based Methods for Regression and Classification
Thomas Dietterich
tgd@cs.orst.edu
Neural Networks and Formal Grammars
Simon Lucas
sml@essex.ac.uk
Neurobiology, Psychophysics, and Computational Models of Visual Attention
Ernst Niebur & Bruno Olshausen
ernst@acquine.cns.caltech.edu
Robot Learning: Exploration and Continuous Domains
David Cohn
cohn@psyche.mit.edu
Stability and Observability
Max Garzon & F. Botelho
garzonm@maxpc.msci.memst.edu
VLSI Implementations
William O. Camp, Jr.
camp@owgvm6.vnet.ibm.com
What Does the Hippocampus Compute?
Mark Gluck & Bruce McNaughton
gluck@pavlov.rutgers.edu
----------------
December 4, 1993
----------------
Catastrophic Interference in Connectionist Networks: Can it be Predicted,
Can it be Prevented?
Bob French
french@willamette.edu
Connectionist Modeling and Parallel Architectures
Joachim Diederich & Ah Chung Tsoi
joachim@fitmail.fit.qut.edu.au
Dynamic Representation Issues in Connectionist Cognitive Modeling
Jordan Pollack
pollack@cis.ohio-state.edu
Functional Models of Selective Attention and Context Dependency
Thomas Hildebrandt
thildebr@aragorn.csee.lehigh.edu
Learning in Computer Vision and Image Understanding -- An Advantage over
Classical Techniques?
Hayit Greenspan
hayit@micro.caltech.edu
Memory-based Methods for Regression and Classification
Thomas Dietterich
tgd@cs.orst.edu
Neural Network Methods for Optimization Problems
Arun Jagota
jagota@cs.buffalo.edu
Processing of Visual and Auditory Space and its Modification by Experience
Josef Rauschecker
josef@helix.nih.gov
Putting it all Together: Methods for Combining Neural Networks
Michael Perrone
mpp@cns.brown.edu
---------------------------------------------------------
NOTE: The assignment of workshops to dates is tentative.
---------------------------------------------------------
------------------------------
Subject: NIPS*93 workshop
From: Arun Jagota <jagota@cs.Buffalo.EDU>
Date: Fri, 10 Sep 93 17:08:05 -0500
CALL FOR PARTICIPATION
NIPS*93 workshop on
Neural Network Methods for Optimization Problems
There are 4-5 slots remaining for brief oral presentations of 20-30 minutes
each. To be considered, submit either (i) a title and one page abstract or
(ii) a bibliography of recent work on the topic.
Please submit materials by electronic mail to Arun Jagota
(jagota@cs.buffalo.edu) by October 5. Later submissions risk finding that
no open slots remain.
Program:
-------
Ever since the work of Hopfield and Tank, neural networks have found
increasing use for the approximate solution of hard optimization problems.
Past successes have, however, been limited when compared to traditional
methods. In this workshop we will discuss the state of the art of neural
network algorithms for optimization, examine their strengths and weaknesses,
and discuss the potential for improvement. Second, as the algorithms arise
from different areas (e.g., some from statistical physics, others from
computer science), we hope that researchers from these disciplines will
share their insights with one another. Third, we also hope to discuss
theoretical issues that arise in using neural network algorithms for
optimization. Finally, we hope to have participants discuss parallel
implementation issues and case studies.
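[Editor's illustrative sketch, not part of the original call: the
Hopfield/Tank approach mentioned above can be summarized as a network whose
asynchronous updates greedily lower a quadratic energy
E(x) = -1/2 x^T W x - b^T x over binary states, settling into a local
minimum that serves as an approximate solution. The matrix W and bias b
below are toy values chosen only for demonstration.]

    # Minimal Hopfield-style sketch: asynchronous greedy updates that lower
    # a quadratic energy over binary states (illustrative toy problem only).
    import numpy as np

    def hopfield_minimize(W, b, steps=1000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(b)
        x = rng.integers(0, 2, size=n)          # random initial binary state
        for _ in range(steps):
            i = rng.integers(n)                  # pick one unit at random
            local_field = W[i] @ x + b[i]        # net input to unit i
            x[i] = 1 if local_field > 0 else 0   # greedy asynchronous update
        energy = -0.5 * x @ W @ x - b @ x
        return x, energy

    # Toy quadratic (QUBO-like) problem; W symmetric with zero diagonal.
    W = np.array([[0., -2., 1.], [-2., 0., 1.], [1., 1., 0.]])
    b = np.array([1., 1., -1.])
    print(hopfield_minimize(W, b))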
---------------------
Arun Jagota
------------------------------
Subject: NIPS*93 program
From: Bartlett Mel <mel@cns.caltech.edu>
Date: Mon, 04 Oct 93 14:41:11 -0800
NIPS*93 MEETING PROGRAM and REGISTRATION REMINDER
The 1993 Neural Information Processing Systems (NIPS*93) meeting is
the seventh meeting of an inter-disciplinary conference which brings
together neuroscientists, engineers, computer scientists, cognitive
scientists, physicists, and mathematicians interested in all aspects
of neural processing and computation. There will be an afternoon of
tutorial presentations (Nov. 29), two and a half days of regular
meeting sessions (Nov. 30 - Dec. 2), and two days of focused workshops
at a nearby ski area (Dec. 3-4).
An electronic copy of the 1993 NIPS registration brochure is available
in postscript format via anonymous ftp at helper.systems.caltech.edu
in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or
other information, please send a request to nips93@systems.caltech.edu
or to: NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035.
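[Editor's illustrative sketch: the brochure can be fetched with any
anonymous-ftp client; the snippet below uses Python's standard ftplib, with
the host and path taken from the paragraph above (whether they remain
reachable today is not guaranteed).]

    # Anonymous-ftp retrieval of the registration brochure (illustrative).
    from ftplib import FTP

    ftp = FTP('helper.systems.caltech.edu')
    ftp.login()                                  # anonymous login
    ftp.cwd('/pub/nips')
    with open('NIPS_93_brochure.ps.Z', 'wb') as f:
        ftp.retrbinary('RETR NIPS_93_brochure.ps.Z', f.write)
    ftp.quit()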
EARLY REGISTRATION DEADLINE (for $100 discount): Oct. 30
_________________
NIPS*93 ORAL PRESENTATIONS PROGRAM
Tues. AM: Cognitive Science
8:30 Invited Talk: Jeff Elman, UC San Diego:
From Weared to Wore: A Connectionist Account of the
History of the Past Tense
9:00 Richard O. Duda, San Jose State Univ.:
Connectionist Models for Auditory Scene Analysis
9:20 Reza Shadmehr and Ferdinando A. Mussa-Ivaldi, MIT:
Computational Elements of the Adaptive Controller of the Human Arm
9:40 Catherine Stevens and Janet Wiles, University of Queensland:
Tonal Music as a Componential Code: Learning Temporal Relationships
Between and Within Pitch and Timing Components
10:00 Poster Spotlights:
Thea B. Ghiselli-Crispa and Paul Munro, Univ. of Pittsburgh:
Emergence of Global Structure from Local Associations
Tony A. Plate, University of Toronto:
Estimating Structural Similarity by Vector Dot Products of
Holographic Reduced Representations
10:10 BREAK
Speech Recognition
10:40 Jose C. Principe, Hui-H. Hsu and Jyh-M. Kuo, Univ. of Florida:
Analysis of Short Term Neural Memory Structures for Nonlinear Prediction
11:00 Eric I. Chang and Richard P. Lippmann, MIT Lincoln Laboratory:
Figure of Merit Training for Detection and Spotting
11:20 Gregory J. Wolff, K. Venkatesh Prasad, David G. Stork and
Marcus Hennecke, Ricoh California Research Center:
Lipreading by Neural Networks: Visual Preprocessing, Learning and
Sensory Integration
11:40 Poster Spotlights:
Steve Renals, Mike Hochberg and Tony Robinson, Cambridge University:
Learning Temporal Dependencies In Large-Scale Connectionist
Speech Recognition
Ying Zhao, John Makhoul, Richard Schwartz and George
Zavaliagkos, BBN Systems and Technologies:
Segmental Neural Net Optimization for Continuous Speech Recognition
11:50 Rod Goodman, Caltech: Posner Memorial Lecture
Tues. PM: Temporal Prediction and Control
2:00 Invited Talk: Doyne Farmer, Prediction Co.:
Time Series Analysis of Nonlinear and Chaotic Time Series: State Space
Reconstruction and the Curse of Dimensionality
2:30 Kenneth M. Buckland and Peter D. Lawrence, Univ. of British Columbia:
Transition Point Dynamic Programming
2:50 Gary W. Flake, Guo-Zhen Sun, Yee-Chun Lee and Hsing-Hen Chen,
University of Maryland:
Exploiting Chaos to Control The Future
3:10 Satinder P. Singh, Andrew G. Barto, Roderic Grupen and
Christopher Connolly, University of Massachusetts:
Robust Reinforcement Learning in Motion Planning
3:30 BREAK
Theoretical Analysis
4:00 Scott Kirkpatrick, Naftali Tishby, Lidror Troyansky,
The Hebrew Univ. of Jerusalem, and Geza Gyorgyi, Eotvos Univ.:
The Statistical Mechanics of K-Satisfaction
4:20 Santosh S. Venkatesh, Changfeng Wang, Univ. of Pennsylvania,
and Stephen Judd, Siemens Corporate Research:
When To Stop: On Optimal Stopping And Effective Machine Size In Learning
4:40 Wolfgang Maass, Technische Univ. Graz:
Agnostic PAC-Learning Functions on Analog Neural Nets
5:00 H.N. Mhaskar, California State Univ. and Charles A. Micchelli, IBM:
How To Choose An Activation Function
5:20 Poster Spotlights
Iris Ginzburg, Tel Aviv Univ. and Haim Sompolinsky, Hebrew Univ.:
Correlation Functions on a Large Stochastic Neural Network
Xin Wang, Qingnan Li and Edward K. Blum, USC:
Asynchronous Dynamics of Continuous-Time Neural Networks
Tal Grossman and Alan Lapedes, Los Alamos National Laboratory:
Use of Bad Training Data for Better Predictions
Wed. AM: Learning Algorithms
8:30 Invited Talk: Geoff Hinton, Univ. of Toronto:
Using the Minimum Description Length Principle to Discover Factorial
Codes
9:00 Richard S. Zemel, Salk Institute, and G. Hinton, Univ. of Toronto:
Developing Population Codes By Minimizing Description Length
9:20 Sreerupa Das and Michael C. Mozer, University of Colorado:
A Hybrid Gradient-Descent/Clustering Technique for Finite State
Machine Induction
9:40 Eric Saund, Xerox Palo Alto Research Center:
Unsupervised Learning of Mixtures of Multiple Causes in Binary Data
10:00 BREAK
10:30 A. Uzi Levin and Todd Leen, Oregon Graduate Institute:
Fast Pruning Using Principal Components
10:50 Christoph Bregler and Stephen Omohundro, ICSI:
Surface Learning with Applications to Lip Reading
11:10 Melanie Mitchell, Santa Fe Inst. and John H. Holland, Univ. Michigan:
When Will a Genetic Algorithm Outperform Hill Climbing
11:30 Oded Maron and Andrew W. Moore, MIT:
Hoeffding Races: Accelerating Model Selection Search for Classification
and Function Approximation
11:50 Poster Spotlights:
Zoubin Ghahramani and Michael I. Jordan, MIT:
Supervised Learning from Incomplete Data via an EM Approach
Mats Osterberg and Reiner Lenz, Linkoping Univ.
Unsupervised Parallel Feature Extraction from First Principles
Terence D. Sanger, LAC-USC Medical Center:
Two Iterative Algorithms for Computing the Singular Value Decomposition
from Input/Output Samples
Patrice Y. Simard and Edi Sackinger, AT&T Bell Laboratories:
Efficient Computation of Complex Distance Metrics Using Hierarchical
Filtering
Wed. PM: Neuroscience
2:00 Invited Talk: Eve Marder, Brandeis Univ.:
Dynamic Modulation of Neurons and Networks
2:30 Ojvind Bernander, Rodney Douglas and Christof Koch, Caltech:
Amplifying and Linearizing Apical Synaptic Inputs to
Cortical Pyramidal Cells
2:50 Christiane Linster and David Marsan, ESPCI,
Claudine Masson and Michel Kerzberg, CNRS:
Odor Processing in the Bee: a Preliminary Study of the
Role of Central Input to the Antennal Lobe
3:10 M.G. Maltenfort, R. E. Druzinsky, C. J. Heckman and
W. Z. Rymer, Northwestern Univ.:
Lower Boundaries of Motoneuron Desynchronization Via
Renshaw Interneurons
3:30 BREAK
Visual Processing
4:00 K. Obermayer, The Salk Institute, L. Kiorpes, NYU and
Gary G. Blasdel, Harvard Medical School:
Development of Orientation and Ocular Dominance Columns
in Infant Macaques
4:20 Yoshua Bengio, Yann Le Cun and Donnie Henderson, AT&T Bell Labs:
Globally Trained Handwritten Word Recognizer using Spatial
Representation, Spatial Displacement Neural Networks and
Hidden Markov Models
4:40 Trevor Darrell and A. P. Pentland, MIT:
Classification of Hand Gestures using a View-based
Distributed Representation
5:00 Ko Sakai and Leif H. Finkel, Univ. of Pennsylvania:
A Network Mechanism for the Determination of Shape-from-Texture
5:20 Video Poster Spotlights (to be announced)
Thurs. AM: Implementations and Applications
8:30 Invited Talk: Dan Seligson, Intel:
A Radial Basis Function Classifier with On-chip Learning
9:00 Michael A. Glover, Current Technology, Inc. and
W. Thomas Miller III, University of New Hampshire:
A Massively-Parallel SIMD Processor for Neural Network and
Machine Vision Applications
9:20 Steven S. Watkins, Paul M. Chau, and Mark Plutowski, UCSD,
Raoul Tawel and Bjorn Lambrigsten, JPL:
A Hybrid Radial Basis Function Neurocomputer
9:40 Gert Cauwenberghs, Caltech :
A Learning Analog Neural Network Chip with Continuous-Time
Recurrent Dynamics
10:00 BREAK
10:30 Invited Talk: Paul Refenes, University College London:
Neural Network Applications in the Capital Markets
11:00 Jane Bromley, Isabelle Guyon, Yann Le Cun, Eduard Sackinger
and Roopak Shah, AT&T Bell Laboratories:
Signature Verification using a "Siamese" Time Delay Neural Network
11:20 John Platt and Ralph Wolf, Synaptics, Inc.:
Postal Address Block Location Using a Convolutional Locator Network
11:40 Shumeet Baluja and Dean Pomerleau, Carnegie Mellon University:
Non-Intrusive Gaze Tracking Using Artificial Neural Networks
12:00 Adjourn to Vail for Workshops
_____________________
NIPS*93 POSTER PROGRAM
Tues. PM Posters:
Cognitive Science (CS)
CS-1 Blasig Using Backpropagation to Automatically Generate Symbolic Classification Rules
CS-2 Munro, Ghiselli-Crispa Emergence of Global Structure from Local Associations
CS-3 Plate Estimating structural similarity by vector dot products of Holographic Reduced Representations
CS-4 Shultz, Elman Analyzing Cross Connected Networks
CS-5 Sperduti Encoding of Labeled Graphs by Labeling RAAM
Speech Processing (SP)
SP-1 Farrell, Mammone Speaker Recognition Using Neural Tree Networks
SP-2 Hirayama, Vatikiotis-Bateson, Kawato Inverse Dynamics of Speech Motor Control
SP-3 Renals, Hochberg, Robinson Learning Temporal Dependencies In Large-Scale Connectionist Speech Recognition
SP-4 Zhao, Makhoul, Schwartz, Zavaliagkos Segmental Neural Net Optimization for Continuous Speech Recognition
Control, Navigation and Planning (CT)
CT-1 Atkeson Using Local Trajectory Optimizers To Speed Up Global Optimization In Dynamic Programming
CT-2 Boyan, Littman A Reinforcement Learning Scheme for Packet Routing Using a Network of Neural Networks
CT-3 Cohn Queries and Exploration Using Optimal Experiment Design
CT-4 Duff, Barto Monte Carlo Matrix Inversion and Reinforcement Learning
CT-5 Gullapalli, Barto Convergence of Indirect Adaptive Asynchronous Dynamic Programming Algorithms
CT-6 Jaakkola, Jordan, Singh Stochastic Convergence Of Iterative DP Algorithms
CT-7 Moore The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-spaces
CT-8 Nowlan, Cacciatore Mixtures of Controllers for Jump Linear and Non-linear Plants
CT-9 Wada, Koike, Vatikiotis-Bateson, Kawato A Computational Model for Cursive Handwriting Based on the Minimization Principle
Learning Theory, Generalization and Complexity (LT)
LT-01 Cortes, Jackel, Solla, Vapnik, Denker Learning Curves: Asymptotic Values and Rates of Convergence
LT-02 Fefferman Recovering A Feed-Forward Net From Its Output
LT-03 Grossman, Lapedes Use of Bad Training Data for Better Predictions
LT-04 Hassibi, Sayed, Kailath H-inf Optimality Criteria for LMS and Backpropagation
LT-05 Hush, Horne Bounds on the complexity of recurrent neural network implementations of finite state machines
LT-06 Ji A Bound on Generalization Error Using Network-Parameter-Dependent Information and Its Applications
LT-07 Kowalczyk Counting function theorem for multi-layer networks
LT-08 Mangasarian, Solodov Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization
LT-09 Plutowski, White Delete-1 Cross-Validation Estimates IMSE
LT-10 Schwarze, Hertz Discontinuous Generalization in Large Committee Machines
LT-11 Shapiro, Prugel-Bennett Non-Linear Statistical Analysis and Self-Organizing Competitive Networks
LT-12 Wahba Structured Machine Learning for 'Soft' Classification, with Smoothing Spline ANOVA Models and Stacked Tuning, Testing and Evaluation
LT-13 Watanabe Solvable models of artificial neural networks
LT-14 Wiklicky On the Non-Existence of a Universal Learning Algorithm for Recurrent Neural Networks
Dynamics/Statistical Analysis (DS)
DS-1 Coolen, Penney, Sherrington Coupled Dynamics of Fast Neurons and Slow Interactions
DS-2 Garzon, Botelho Observability of neural network behavior
DS-3 Gerstner, van Hemmen How to Describe Neuronal Activity: Spikes, Rates, or Assemblies?
DS-4 Ginzburg, Sompolinsky Correlation Functions on a Large Stochastic Neural Network
DS-5 Leen, Orr Momentum and Optimal Stochastic Search
DS-6 Ruppin, Meilijson Optimal signalling in Attractor Neural Networks
DS-7 Wang, Li, Blum Asynchronous Dynamics of Continuous-Time Neural Networks
Recurrent Networks (RN)
RN-1 Baird, Troyer, Eeckman Grammatical Inference by Attentional Control of Synchronization in an Oscillating Elman Net
RN-2 Bengio, Frasconi Credit Assignment through Time: Alternatives to Backpropagation
RN-3 Kolen Fool's Gold: Extracting Finite State Machines From Recurrent Network Dynamics
RN-4 Movellan A Reinforcement Algorithm to Learn Trajectories with Stochastic Neural Networks
RN-5 Saunders, Angeline, Pollack Structural and behavioral evolution of recurrent networks
Applications (AP)
AP-01 Baldi, Brunak, Chauvin, Krogh Hidden Markov Models in Molecular Biology: Parsing the Human Genome
AP-02 Eeckman, Buhmann, Lades A Silicon Retina for Face Recognition
AP-03 Flann A Hierarchical Approach to Recognizing On-line Cursive Handwriting
AP-04 Graf, Cosatto, Ting Locating Address Blocks with a Neural Net System
AP-05 Karunanithi Identifying Fault-Prone Software Modules Using Feed-Forward Networks: A Case Study
AP-06 Keymeulen Comparison Training for a Rescheduling Problem in Neural Networks
AP-07 Lapedes, Steeg Use of Adaptive Networks to Find Highly Predictable Protein Structure Classes
AP-08 Schraudolph, Dayan, Sejnowski Using the TD(lambda) Algorithm to Learn an Evaluation Function for the Game of Go
AP-09 Smyth Probabilistic Anomaly Detection in Dynamic Systems
AP-10 Tishby, Singer Decoding Cursive Scripts
Wed. PM posters:
Learning Algorithms (LA)
LA-01 Gold, Mjolsness Clustering with a Domain-Specific Distance Metric
LA-02 Buhmann Central and Pairwise Data Clustering by Competitive Neural Networks
LA-03 de Sa Learning Classification without Labeled Data
LA-04 Ghahramani, Jordan Supervised learning from incomplete data via an EM approach
LA-05 Tresp, Ahmad, Neuneier Training Neural Networks with Deficient Data
LA-06 Osterberg, Lenz Unsupervised Parallel Feature Extraction from First Principles
LA-07 Sanger Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples
LA-08 Leen, Kambhatla Fast Non-Linear Dimension Reduction
LA-09 Schaal, Atkeson Assessing The Quality of Learned Local Models
LA-10 Simard, Sackinger Efficient computation of complex distance metrics using hierarchical filtering
LA-11 Tishby, Ron, Singer The Power of Amnesia
LA-12 Wettschereck, Dietterich Locally Adaptive Nearest Neighbor Algorithms
LA-13 Liu Robust Parameter Estimation and Model Selection for Neural Network Regression
LA-14 Wolpert Bayesian Backpropagation Over Functions Rather Than Weights
LA-15 Thodberg Bayesian Backprop in Action: Pruning, Ensembles, Error Bars and Application to Spectroscopy
LA-16 Dietterich, Jain, Lathrop Dynamic Reposing for Drug Activity Prediction
LA-17 Ginzburg, Horn Combined Neural Networks For Time Series Analysis
LA-18 Graf, Simard Backpropagation without Multiplication
LA-19 Harget, Bostock A Comparative Study of the Performance of a Modified Bumptree with Radial Basis Function Networks and the
Standard Multi-Layer Perceptron
LA-20 Najafi, Cherkassky Adaptive Knot Placement Based on Estimated Second Derivative of Regression Surface
Constructive/Pruning Algorithms (CP)
CP-1 Fritzke Supervised Learning with Growing Cell Structures
CP-2 Hassibi, Stork, Wolff Optimal Brain Surgeon: Extensions, streamlining and performance comparisons
CP-3 Kamimura Generation of Internal Representations by alpha-transformation
CP-4 Leerink, Jabri Constructive Learning Using Internal Representation Conflicts
CP-5 Utans Learning in Compositional Hierarchies: Inducing the Structure of Objects from Data
CP-6 Watanabe An Optimization Method of Layered Neural Networks Based on the Modified Information Criterion
Neuroscience (NS)
NS-01 Bialek, Ruderman Statistics of Natural Images: Scaling in the Woods
NS-02 Boussard, Vibert Dopaminergic neuromodulation brings a dynamical plasticity to the retina
NS-03 Doya, Selverston, Rowat A Hodgkin-Huxley Type Neuron Model that Learns Slow Non-Spike Oscillation
NS-04 Gusik, Eaton Directional Hearing by the Mauthner System
NS-05 Horiuchi, Bishofberger, Koch Building an Analog VLSI, Saccadic Eye Movement System
NS-06 Lewicki Bayesian Modeling and Classification of Neural Signals
NS-07 Montague, Dayan, Sejnowski Foraging in an Uncertain Environment Using Predictive Hebbian Learning
NS-08 Rosen, Rumelhart, Knudsen A Connectionist Model of the Owl's Sound Localization System
NS-09 Sanger Optimal Unsupervised Motor Learning Predicts the Internal Representation of Barn Owl Head Movements
NS-10 Siegal An Analog VLSI Model Of Central Pattern Generation In The Medicinal Leech
NS-11 Usher, Stemmler, Koch High spike rate variability as a consequence of network amplification of local fluctuations
Visual Processing (VP)
VP-1 Ahmad Feature Densities are Required for Computing Feature Correspondences
VP-2 Buracas, Albright Proposed function of MT neurons' receptive field surrounds: computing shapes of objects from velocity fields
VP-3 Geiger, Diamantaras Resolving motion ambiguities
VP-4 Mjolsness Two-Dimensional Object Localization by Coarse-to-fine Correlation Matching
VP-5 Sajda, Finkel Dual Mechanisms for Neural Binding and Segmentation and Their Role in Cortical Integration
VP-6 Yuille, Smirnakis, Xu Bayesian Self-Organization
Implementations (IM)
IM-01 Andreou, Edwards VLSI Phase Locking Architecture for Feature Linking in Multiple Target Tracking Systems
IM-02 Coggins, Jabri WATTLE: A Trainable Gain Analogue VLSI Neural Network
IM-03 Elfadel, Wyatt The "Softmax" Nonlinearity: Derivation Using Statistical Mechanics and Useful Properties as a Multiterminal
Analog Circuit Element
IM-04 Muller, Kocheisen, Gunzinger High Performance Neural Net Simulation on a Multiprocessor System with "Intelligent"
Communication
IM-05 Murray, Burr, Stork, et al. Digital Boltzmann VLSI for constraint satisfaction and learning
IM-06 Niebur, Brettle Efficient Simulation of Biological Neural Networks on Massively Parallel Supercomputers with Hypercube
Architecture
IM-07 Oliveira, Sangiovanni-Vincentelli Learning Complex Boolean Functions: Algorithms and Applications
IM-08 Shibata, Kotani, Yamashita et al. Implementing Intelligence on Silicon Using Neuron-Like Functional MOS Transistors
IM-09 Watts Event-Driven Simulation of Networks of Spiking Neurons
------------------------------
Subject: NIPS Workshop: Selective Attention
From: Thomas Hildebrandt <thildebr@aragorn.csee.lehigh.edu>
Date: Tue, 05 Oct 93 14:08:11 -0500
I wish to call your attention to a workshop on selective attention
which I will be hosting at this year's NIPS conference.
===================================================================
NIPS*93 Postconference Workshop
Functional Models of Selective Attention and Context Dependency
December 4, 1993
Intended Audience: Those applying NNs to vision and speech analysis
and pattern recognition tasks, as well as computational
neurobiologists modelling attentional mechanisms.
Organizer: Thomas H. Hildebrandt
thildebr@athos.eecs.lehigh.edu
ABSTRACT: Classification based on trainable models still fails to
achieve the current ideal of human-like performance. One identifiable
reason for this failure is the disparity between the number of
training examples needed to achieve good performance (large) and the
number of labelled samples available for training (small). On certain
tasks, humans are able to generalize well when given only one
exemplar. Clearly, a different mechanism is at work.
In human behavior, there are numerous examples of selective attention
improving a person's recognition capabilities. Models using context
or selective attention seek to improve classification performance by
modifying the behavior of a classifier based on the current (and
possibly recent) input data. Because they treat learning and
contextual adaptation as two different processes, these models solve
the memory/plasticity dilemma by incorporating both. In other words,
they differ fundamentally from models which attempt to provide
contextual adaptation by allowing all the weights in the network to
continue evolving while the system is in operation.
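[Editor's illustrative sketch, not drawn from any workshop paper: the
separation described above can be caricatured as a classifier whose trained
weights stay frozen while an input-dependent attention gain, computed from
recent inputs, re-weights the features at run time. The running-average
context used here is an assumed stand-in for whatever attentional mechanism
a particular model proposes.]

    # Fixed learned weights + context-dependent attention gain (illustrative).
    import numpy as np

    class AttentiveClassifier:
        def __init__(self, W, decay=0.9):
            self.W = W                          # fixed weights from prior training
            self.context = np.ones(W.shape[1])  # running estimate of feature activity
            self.decay = decay

        def classify(self, x):
            # contextual adaptation: update the context from recent inputs
            self.context = self.decay * self.context + (1 - self.decay) * np.abs(x)
            gain = self.context / (self.context.sum() + 1e-9)   # attention gain
            scores = self.W @ (gain * x)        # the weights themselves never change
            return int(np.argmax(scores))

    # Usage: W is learned beforehand; only the context changes at run time.
    clf = AttentiveClassifier(W=np.random.randn(3, 5))
    print(clf.classify(np.random.randn(5)))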
Schedule December 4, 1993
======== ================
7:30 - 7:35 Opening Remarks
7:35 - 8:00 Current Research in Selective Attention
Thomas H. Hildebrandt, Lehigh University
8:00 - 8:30 Context-varying Preferences and Traits in a Class of
Neural Networks
Daniel S. Levine, University of Texas at Arlington
Samuel J. Leven, For a New Social Science
8:30 - 9:00 ETS - A Formal Model of an Evolving Learning Machine
L.Goldfarb, J.Abela, V.Kamat, University of New Brunswick
9:00 - 9:30 Recognizing Handwritten Digits Using a Selective
Attention Mechanism
Ethem Alpaydin, Bogazici University, Istanbul TURKEY
9:30 - 4:30 FREE TIME
4:30 - 5:00 Context and Selective Attention in the Capital Markets
P. N. Refenes, London Business School
5:00 - 5:30 The Global Context-Sensitive Constraint Satisfaction Property
in Adaptive Perceptual Pattern Recognition
Jonathan A. Marshall, University of North Carolina
5:30 - 6:00 Neural Networks for Context Sensitive Representation
of Synonymous and Homonymic Patterns
Albert Nigrin, American University
6:00 - 6:30 Learn to Pay Attention, Young Network!
Barak A. Pearlmutter, Siemens Corp. Research Ctr.,
Princeton NJ
6:30 - 6:35 Closing Remarks
7:00 Workshop Wrap-Up (common to all sessions)
=====================================================================
The topic to be covered differs from that recently announced by Ernst
Niebur and Bruno Olshausen, in that "functional" models are not
necessarily tied to neurophysiological structures. Thanks to the
Workshop Chair, Mike Mozer, the two workshops were scheduled on
different days, so that it is possible for interested parties to
attend both.
An electronic copy of the 1993 NIPS registration brochure is available
in postscript format via anonymous ftp at helper.systems.caltech.edu
in /pub/nips/NIPS_93_brochure.ps.Z. For a hardcopy of the brochure or
other information, please send a request to nips93@systems.caltech.edu
or to: NIPS Foundation, P.O. Box 60035, Pasadena, CA 91116-6035.
Feel free to contact me for more information on the workshop.
Thomas H. Hildebrandt
Electrical Engineering & Computer Science
Lehigh University
Bethlehem, PA 18015
Work: (215) 758-4063
FAX: (215) 758-6279
thildebr@athos.eecs.lehigh.edu
------------------------------
End of Neuron Digest [Volume 12 Issue 13]
*****************************************