Neuron Digest   Saturday, 28 Oct 1989                Volume 5 : Issue 42 

Today's Topics:
NIPS'89 Postconference Workshops
IJCNN 1990 - Request for Volunteers
NIPS-89 workshop


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------

Subject: NIPS'89 Postconference Workshops
From: Alex.Waibel@SPEECH2.CS.CMU.EDU
Date: Fri, 06 Oct 89 21:19:38 -0400


Below are the preliminary program and brief descriptions of the workshop
topics covered during this year's NIPS Postconference Workshops, to be
held in Keystone from November 30 through December 2 (immediately following
the NIPS conference). Please register for both the conference and the workshops
using the general NIPS conference registration forms. With your registration,
please indicate which of the workshop topics below you are most interested in
attending. Your preferences are in no way binding and do not limit you to any
particular workshop, but they will help us allocate suitable meeting rooms
and schedule workshop sessions in an optimal way. For your convenience,
you may simply include a copy of the form below with your registration
material, marking your three preferred workshop choices in order
of preference (1, 2, and 3).

For registration information (for both the NIPS conference and the Postconference
Workshops), please contact the Local Arrangements Chair, Kathie Hibbard,
by sending email to hibbard@boulder.colorado.edu, or by writing to:
Kathie Hibbard
NIPS '89
University of Colorado
Campus Box 425
Boulder, Colorado 80309-0425

For technical questions relating to individual conference workshops, please
contact the individual workshop leaders listed below. Please feel free to
contact me with any questions you may have about the workshops in general.
See you in Denver/Keystone,

Alex Waibel
NIPS Workshop Program Chairman
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
412-268-7676, waibel@cs.cmu.edu


================================================================
 ____________________________________________________________
!           POST CONFERENCE WORKSHOPS AT KEYSTONE            !
!     THURSDAY, NOVEMBER 30 - SATURDAY, DECEMBER 2, 1989     !
!____________________________________________________________!

Thursday, November 30, 1989
5:00 PM: Registration and Reception at Keystone

Friday, December 1, 1989
7:30 - 9:30 AM: Small Group Workshops
4:30 - 6:30 PM: Small Group Workshops
7:30 - 10:30 PM: Banquet and Plenary Discussion

Saturday, December 2, 1989
7:30 - 9:30 AM: Small Group Workshops
4:30 - 6:30 PM: Small Group Workshops
6:30 - 7:15 PM: Plenary Discussion, Summaries
7:30 - 11:00 PM: Fondue Dinner, MountainTop Restaurant

================================================================


PLEASE MARK YOUR PREFERENCES (1,2,3) AND ENCLOSE WITH REGISTRATION MATERIAL:
-----------------------------------------------------------------------------

______1. LEARNING THEORY: STATISTICAL ANALYSIS OR VC DIMENSION?
______2. STATISTICAL INFERENCE IN NEURAL NETWORK MODELLING
______3. NEURAL NETWORKS AND GENETIC ALGORITHMS
______4. VLSI NEURAL NETWORKS: CURRENT MILESTONES AND FUTURE HORIZONS
______5. APPLICATION OF NEURAL NETWORK PROCESSING TECHNIQUES TO REAL
WORLD MACHINE VISION PROBLEMS
______6. IMPLEMENTATIONS OF NEURAL NETWORKS ON DIGITAL, MASSIVELY
PARALLEL COMPUTERS
______7. LARGE, FAST, INTEGRATED SYSTEMS BASED ON ASSOCIATIVE MEMORIES
______8. NEURAL NETWORKS FOR SEQUENTIAL PROCESSING WITH APPLICATIONS IN
SPEECH RECOGNITION
______9. LEARNING FROM NEURONS THAT LEARN
______10. NEURAL NETWORKS AND OPTIMIZATION PROBLEMS
11. (withdrawn)
______12. NETWORK DYNAMICS
______13. ARE REAL NEURONS HIGHER ORDER NETS?
______14. NEURAL NETWORK LEARNING: MOVING FROM BLACK ART TO A PRACTICAL
TECHNOLOGY
______15. OTHERS ?? __________________________________________________






1. LEARNING THEORY: STATISTICAL ANALYSIS OR VC DIMENSION?

Sara A. Solla
AT&T Bell Laboratories
Crawfords Corner Road
Holmdel, NJ 07733-1988

Phone: (201) 949-6057
E-mail: solla@homxb.att.com

Recent success at describing the process of learning in layered neural
networks and the resulting generalization ability has emerged from two
different approaches. Work based on the concept of VC dimension emphasizes
the connection between learning and statistical inference in order to analyze
questions of bias and variance. The statistical approach uses an ensemble
description to focus on the prediction of network performance for a
specific task.

Participants interested in learning theory are invited to discuss the
differences and similarities between the two approaches, the mathematical
relation between them, and their respective range of applicability. Specific
questions to be discussed include comparison of predictions for required
training set sizes, for the distribution of generalization abilities, for
the probability of obtaining good performance with a training set of
fixed size, and for estimates of problem complexity applicable to the
determination of learning times.



2. STATISTICAL INFERENCE IN NEURAL NETWORK MODELLING

Workshop Chair: Richard Golden
Stanford University
Psychology Department
Stanford, CA 94305
(415) 725-2456
E-mail: golden@psych.stanford.edu

This workshop is designed to show how the theory of statistical inference
is directly applicable to some difficult neural network modelling problems.
The format will be tutorial in nature (85% informal lecture, 15%
discussion). Topics to be discussed include: obtaining probability
distributions for neural networks, interpretation and derivation of optimal
learning cost functions, evaluating the generalization performance of
networks, asymptotic sampling distributions of network weights, statistical
mechanics calculation of learning curves in some simple examples,
statistical tests for comparing internal representations and deciding
which input units are relevant to the prediction task. Dr. Naftali Tishby
(AT&T Bell Labs) and Professor Halbert White (UCSD Economics Department)
are the invited experts.



3. Title: NEURAL NETWORKS AND GENETIC ALGORITHMS

Organizers: Lawrence Davis (Bolt Beranek and Newman, Inc.)
Michael Rudnick (Oregon Graduate Center)

Description: Genetic algorithms have many interesting relationships with
neural networks. Recently, a number of researchers have investigated some
of these relationships. This workshop will be the first forum bringing
those researchers together to discuss the current and future directions of
their work. The workshop will last one day and will have three parts.
First, a tutorial on genetic algorithms will be given, to ground those
unfamiliar with the technology. Second, seven researchers will summarize
their results. Finally, there will be an open discussion of the topics
raised in the workshop. We expect that anyone familiar with neural network
technology will be comfortable with the content and level of discussion in
this workshop.
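
For readers new to one of these relationships, here is a minimal, hypothetical
sketch (in Python) of a simple genetic algorithm evolving the weights of a tiny
feedforward network on the XOR task. The network shape, fitness function, and
GA parameters are illustrative assumptions chosen for brevity, not material
from the workshop itself.

# Minimal, illustrative sketch: a simple genetic algorithm evolving the
# weights of a tiny 2-2-1 sigmoid network on XOR. All sizes and GA
# parameters are assumptions chosen for brevity, not the workshop's method.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_GENES = 9   # W1 (2x2) + b1 (2) + W2 (2) + b2 (1)

def forward(genes, x):
    """Evaluate the 2-2-1 sigmoid network whose weights are packed in genes."""
    W1, b1 = genes[:4].reshape(2, 2), genes[4:6]
    W2, b2 = genes[6:8], genes[8]
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(genes):
    """Negative sum-squared error over the four XOR patterns."""
    preds = np.array([forward(genes, x) for x in X])
    return -np.sum((preds - y) ** 2)

POP, GENS, MUT_STD = 50, 300, 0.3
pop = rng.normal(0.0, 2.0, size=(POP, N_GENES))

for gen in range(GENS):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]   # best first
    parents = pop[order[:POP // 2]]                           # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_GENES)                        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0.0, MUT_STD, size=N_GENES)       # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = max(pop, key=fitness)
print("evolved XOR outputs:", np.round([forward(best, x) for x in X], 2))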




4. VLSI NEURAL NETWORKS:
CURRENT MILESTONES AND FUTURE HORIZONS

Moderators:

Joshua Alspector
Bell Communications Research
445 South Street
Morristown, NJ 07960-19910
(201) 829-4342
e-mail: josh@bellcore.com

Daniel B. Schwartz
GTE Laboratories, Inc.
40 Sylvan Road
Waltham, MA 02254
(617) 466-2414
e-mail: dbs%gte.com@relay.cs.net

This workshop will explore the areas of applicability of neural network
implementations in VLSI. Several speakers will discuss their present
implementations and speculate about where their work may lead. Workshop
attendees will then be encouraged to organize working groups to address
several issues which will be raised in connection with the presentations.
Although it is difficult to predict which issues will be selected, some
examples might be:

1) Analog vs. digital implementations.
2) Limits to VLSI complexity for neural networks.
3) Algorithms suitable for VLSI architectures.

The working groups will then report results which will be included in the
workshop summary.



5. APPLICATION OF NEURAL NETWORK PROCESSING TECHNIQUES TO REAL
WORLD MACHINE VISION PROBLEMS


Paul J. Kolodzy (617) 981-3822 kolodzy@ll.ll.mit.edu
Murali M. Menon (617) 981-5374


This workshop will discuss the application of neural networks to
machine vision tasks, including image restoration and pattern
recognition. Participants will be asked to present their specific
applications for discussion, to highlight the relevant issues.
Examples of such issues include, but are not limited to, the use
of deterministic versus stochastic search procedures for neural
network processing, the use of networks to extract shape, scale, and
texture information for recognition, and the use of network mapping
techniques to increase data separability. The discussions will
be driven by actual applications, with an emphasis on the advantages
of using neural networks at the system level as well as in the
individual processing steps. The workshop will attempt to cover a
wide breadth of network architectures and invites participation
from researchers in machine vision, neural network modeling,
pattern recognition, and biological vision.




6. IMPLEMENTATIONS OF NEURAL NETWORKS ON
DIGITAL, MASSIVELY PARALLEL COMPUTERS

Dr. K. Wojtek Przytula
and
Prof. S.Y. Kung
Hughes Research Laboratories, RL 69
3011 Malibu Cyn. Road
Malibu, CA 90265

Phone: (213) 317-5892
E-mail: wojtek%csfvax@hac2arpa.hac.com


Implementations of neural networks span a full spectrum from software
realizations on general-purpose computers to strictly special-purpose hardware
realizations. Implementations on programmable, parallel machines, which are to
be discussed during the workshop, constitute a compromise between the two
extremes. The architectures of programmable parallel machines reflect the
structure of neural network models better than those of sequential machines,
thus resulting in higher processing speed. The programmability provides more
flexibility than is available in specialized hardware implementations and
opens the way for realization of various models on a single machine. The issues
to be discussed include: mapping neural network models onto existing parallel
machines; design of specialized programmable parallel machines for neural
networks; evaluation of the performance of parallel machines for neural networks;
and uniform characterization of the computational requirements of various neural
network models from the point of view of parallel implementations.



7. LARGE, FAST, INTEGRATED SYSTEMS
BASED ON ASSOCIATIVE MEMORIES

Michael R. Raugh

Director of Learning Systems Division
Research Institute for Advanced Computer Science (RIACS)
NASA Ames Research Center, MS 230-5
Moffett Field, CA 94035

e-mail: raugh@riacs.edu
Phone: (415) 694-4998


This workshop will address issues in the construction of large systems
that have thousands or even millions of hidden units. It will present and
discuss alternatives to backpropagation that allow large systems to learn
rapidly. Examples from image analysis, weather prediction, and speech
transcription will be discussed.

The focus on backpropagation with its slow learning has kept researchers from
considering such large systems. Sparse distributed memory and related
associative-memory structures provide an alternative that can learn,
interpolate, and abstract, and can do so rapidly.

The workshop is open to everyone, with special encouragement to those working
in learning, time-dependent networks, and generalization.
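
As background, the following is a minimal, hypothetical sketch (in Python) of
the write/read cycle of a Kanerva-style sparse distributed memory of the kind
referred to above. The dimensions, number of hard locations, activation radius,
and test pattern are illustrative assumptions, not parameters from the workshop.

# Minimal, illustrative sketch of a Kanerva-style sparse distributed memory.
# Sizes, the Hamming radius, and the random hard-address set are assumptions
# chosen for brevity.
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 256        # dimension of address and data vectors
N_LOCS = 2000       # number of hard locations
RADIUS = 112        # Hamming-distance activation radius

hard_addresses = rng.integers(0, 2, size=(N_LOCS, N_BITS))
counters = np.zeros((N_LOCS, N_BITS), dtype=int)

def active(address):
    """Indices of hard locations within RADIUS of the given address."""
    dist = np.sum(hard_addresses != address, axis=1)
    return np.where(dist <= RADIUS)[0]

def write(address, data):
    """Add the bipolar form of data to every activated location's counters."""
    counters[active(address)] += 2 * data - 1

def read(address):
    """Sum counters of activated locations and threshold back to bits."""
    total = counters[active(address)].sum(axis=0)
    return (total > 0).astype(int)

# Store a pattern autoassociatively, then recall it from a noisy cue.
pattern = rng.integers(0, 2, size=N_BITS)
write(pattern, pattern)

cue = pattern.copy()
flip = rng.choice(N_BITS, size=20, replace=False)   # corrupt 20 bits of the cue
cue[flip] ^= 1

recalled = read(cue)
print("bits wrong after recall:", int(np.sum(recalled != pattern)))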



8. NEURAL NETWORKS FOR SEQUENTIAL PROCESSING WITH
APPLICATIONS IN SPEECH RECOGNITION

Herve Bourlard

Philips Research Laboratory Brussels
Av. Van Becelaere 2, Box 8
B-1170 Brussels, Belgium

Phone: 011-32-2-674-22-74

e-mail address: bourlard@prlb.philips.be
or: prlb2!bourlard@uunet.uu.net

Speech recognition must contend with the statistical and sequential nature of
the human speech production system. Hidden Markov Models (HMMs) provide
a powerful method for coping with both, and their use led to a
breakthrough in speech recognition. On the other hand, neural networks
have recently been recognized as an alternative tool for pattern recognition
problems such as speech recognition. Their main useful properties are their
discriminative power and their ability to deal with non-explicit knowledge.
However, the sequential aspect remains difficult to handle in connectionist
models. If connections are supplied with delays, feedback loops can be added,
providing dynamic and implicit memory. However, in the framework of
continuous speech recognition, it is still difficult to use neural
networks alone to segment and recognize a sentence as a sequence
of speech units, a problem that is solved efficiently in the HMM approach
by the well-known "Dynamic Time Warping" algorithm.

This workshop should provide an opportunity to review neural network
architectures that are potentially able to deal with sequential and
stochastic inputs. We should also discuss the extent to which the different
architectures can be useful in recognizing isolated units (phonemes,
words, ...) or continuous speech. Among others, we should consider
spatiotemporal models, time-delay neural networks (Waibel, Sejnowski),
temporal flow models (Watrous), hidden-to-input (Elman) or output-to-input
(Jordan) recurrent models, focused back-propagation networks (Mozer), and
hybrid approaches mixing neural networks and standard sequence-matching
techniques (Sakoe, Bourlard).
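
For reference, here is a minimal, hypothetical sketch (in Python) of the classic
dynamic time warping alignment mentioned above as the standard sequence-matching
step in the HMM approach. The Euclidean local distance, symmetric step pattern,
and toy sequences are illustrative assumptions, not any particular recognizer
discussed in the workshop.

# Minimal, illustrative sketch of dynamic time warping (DTW) between two
# feature sequences. The Euclidean local distance, step pattern, and toy
# sequences are assumptions chosen for brevity.
import numpy as np

def dtw(a, b):
    """Return the DTW alignment cost between sequences a (n x d) and b (m x d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

# Toy "templates": the same ramp at two speaking rates, plus a different shape.
template = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
slow     = np.linspace(0.0, 1.0, 17).reshape(-1, 1)
other    = np.ones((12, 1))

print("ramp vs slow ramp:", round(dtw(template, slow), 3))
print("ramp vs constant :", round(dtw(template, other), 3))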



9. LEARNING FROM NEURONS THAT LEARN

Moderated by
Thomas P. Vogl
Environmental Research Institute of Michigan
1501 Wilson Blvd.
Arlington, VA 22209
Phone: (703) 528-5250
E-mail: TVP%nihcu.bitnet@cunyvm.cuny.edu
FAX: (703) 524-3527

In furthering our understanding of artificial and biological neural
systems, the insights that can be gained from the perceptions of
those trained in other disciplines can be particularly fruitful.
Computer scientists, biophysicists, engineers, psychologists,
physicists, and neurobiologists tend to have different perspectives
and conceptions of the mechanisms and components of "neural networks,"
and to weigh their relative importance differently. The
insights obvious to practitioners of one of these disciplines are
often far from obvious to those trained in another, and therefore
may be especially relevant to the solutions of ornery problems.

The workshop provides a forum for the interdisciplinary discussion
of biological and artificial networks and neurons and their
behavior. Informal group discussion of ongoing research, novel
ideas, approaches, comparisons, and the sharing of insights will
be emphasized. The specific topics to be considered and the depth
of the analysis/discussion devoted to any topic will be determined
by the interest and enthusiasm of the participants as the
discussion develops. Participants are encouraged to consider
potential topics in advance, and to present them informally but
succinctly (under five minutes) at the beginning of the workshop.



10. NEURAL NETWORKS AND OPTIMIZATION PROBLEMS
----------------------------------------

Prof. Carsten Peterson
University of Lund
Dept. of Theoretical Physics
Solvegatan 14A
S-223 62 Lund
Sweden
phone: 011-46-46-109002
bitnet: THEPCAP%SELDC52

Workshop description:

The purpose of the workshop is twofold: to establish the present state
of the art and to generate novel ideas. With respect to the former, firm
answers to the following questions should emerge: (1) Does the Hopfield-
Tank approach, or variants thereof, really work with respect to quality,
reliability, parameter insensitivity, and scalability? (2) If so, how does
it compare with other cellular approaches such as the "elastic snake"
and genetic algorithms? Novel ideas should focus on new encoding
schemes and new application areas (in particular, scheduling problems).
Also, if time allows, optimization of neural network learning architectures
will be covered.

People interested in participating are encouraged to communicate their
interests and expertise to the chairman via e-mail. This would facilitate
the planning.


12. Title: NETWORK DYNAMICS

Chair: Richard Rohwer
Centre for Speech Technology Research
Edinburgh University
80, South Bridge
Edinburgh EH1 1HN, Scotland

Phone: (44 or 0) (31) 225-8883 x280

e-mail: rr%uk.ac.ed.eusip@nsfnet-relay.ac.uk

Summary:

This workshop will be an attempt to gather and improve our
knowledge about the time dimension of the activation patterns produced
by real and model neural networks. This broad subject includes the
description, interpretation and design of these temporal patterns. For
example, methods from dynamical systems theory have been used to
describe the dynamics of network models and real brains. The design
problem is being approached using dynamical training algorithms.
Perhaps the most important but least understood problems concern the
cognitive and computational significance of these patterns. The
workshop aims to summarize the methods and results of researchers from
all relevant disciplines, and to draw on their diverse insights in order
to frame incisive, approachable questions for future research into network
dynamics.


Richard Rohwer
Centre for Speech Technology Research
Edinburgh University
80, South Bridge
Edinburgh EH1 1HN, Scotland

JANET:  rr@uk.ac.ed.eusip
ARPA:   rr%uk.ac.ed.eusip@nsfnet-relay.ac.uk
BITNET: rr@eusip.ed.ac.uk, rr%eusip.ed.UKACRL
UUCP:   ...!{seismo,decvax,ihnp4}!mcvax!ukc!eusip!rr
PHONE:  (44 or 0) (31) 225-8883 x280
FAX:    (44 or 0) (31) 226-2730



13. ARE REAL NEURONS HIGHER ORDER NETS?

Most existing artificial neural networks have processing elements which
are computationally much simpler than real neurons. One approach to
enhancing the computational capacity of artificial neural networks is to
simply scale up the number of processing elements, but there are limits
to this. An alternative is to build modules or subnets and link these
modules in a larger net. Several groups of investigators have begun
to analyze the computational abilities of real single neurons in terms of
equivalent neural nets, in particular higher-order nets in which the
inputs interact explicitly (e.g., sigma-pi units). This workshop will
introduce participants to the results of these efforts and examine the
advantages and problems of applying these complex processors in larger
networks.
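
As a concrete reference point for the sigma-pi units mentioned above, here is a
minimal, hypothetical sketch (in Python) of a second-order unit in which pairs
of inputs interact multiplicatively. The weights and inputs are illustrative
assumptions chosen for brevity.

# Minimal, illustrative sketch of a second-order "sigma-pi" unit: the output
# depends on products of input pairs as well as on the inputs themselves.
# Weights and inputs are assumptions chosen for brevity.
import numpy as np
from itertools import combinations

def sigma_pi_unit(x, w_first, w_second, bias=0.0):
    """Logistic unit with first-order terms w_i * x_i and
    second-order terms w_ij * x_i * x_j (i < j)."""
    net = bias + np.dot(w_first, x)
    for (i, j), w in zip(combinations(range(len(x)), 2), w_second):
        net += w * x[i] * x[j]
    return 1.0 / (1.0 + np.exp(-net))

x = np.array([1.0, 0.5, -1.0])
w_first = np.array([0.2, -0.4, 0.1])
w_second = np.array([0.8, -0.3, 0.5])   # weights for input pairs (0,1), (0,2), (1,2)

print(round(sigma_pi_unit(x, w_first, w_second), 3))

A single unit of this form can, for instance, capture an XOR-like interaction
between two inputs that no first-order threshold unit can represent.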

Dr. Thomas McKenna
Office of Naval Research
Div. Cognitive and Neural Sciences
Code 1142 Biological Intelligence
800 N. Quincy St.
Arlington, VA 22217-5000

phone: 202-696-4503
email: mckenna@nprdc.arpa
mckenna@nprdc.navy.mil



14. NEURAL NETWORK LEARNING:
MOVING FROM BLACK ART TO A PRACTICAL TECHNOLOGY


Scott E. Fahlman
School of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213

Internet: fahlman@cs.cmu.edu
Phone: (412) 268-2575

There are a number of competing algorithms for neural network learning, all
rather new and poorly understood. Where theory is lacking, a reliable
technology can be built on shared experience, but it usually takes a long
time for this experience to accumulate and propagate through the community.
Currently, each research group has its own bag of tricks and its own body
of folklore about how to attack certain kinds of learning tasks and how to
diagnose the problem when things go wrong. Even when groups are willing to
share their hard-won experience with others, this can be hard to accomplish.

This workshop will bring together experienced users of back-propagation and
other neural net learning algorithms, along with some interested novices,
to compare views on questions like the following:

I. Which algorithms and variations work best for various classes of
problems? Can we come up with some diagnostic features that tell us what
techniques to try? Can we predict how hard a given problem will be?

II. Given a problem, how do we go about choosing the parameters for various
algorithms? How do we choose what size and shape of network to try? If
our first attempt fails, are there symptoms that can tell us what to try
next?

III. What can we do to bring more coherence into this body of folklore, and
facilitate communication of this informal kind of knowledge? An online
collection of standard benchmarks and public-domain programs is one idea,
already implemented at CMU. How can we improve this, and what other ideas
do we have?




------------------------------

Subject: IJCNN 1990 - Request for Volunteers
From: Karen Haines <khaines@GALILEO.ECE.CMU.EDU>
Date: Mon, 09 Oct 89 14:33:05 -0400


This is a first call for volunteers to help at the IJCNN conference,
to be held at the Omni Shoreham Hotel in Washington, D.C., on January
15-19, 1990.

Full admittance to the conference and a copy of the proceedings are
offered in exchange for your assistance throughout the conference. In general,
each volunteer is expected to work one shift, either in the morning or
the afternoon, each day of the conference. Hours for the morning shift are
approximately 7:00 am until 12:00 noon, and for the afternoon, 12:00 noon
to 5:00 pm. In addition, assistance will be required for the social events.
If you can't work the whole week, please contact Karen Haines to see what
can be worked out. There will be a mandatory meeting for all volunteers
on January 14.

To sign up please contact:

Karen Haines - Volunteer Coordinator
3138 Beechwood Blvd.
Pittsburgh, PA 15217

office: (412) 268-3304
message: (412) 422-6026
email: khaines@galileo.ece.cmu.edu
or,

Nina Kowalski - Assistant Volunteer Coordinator
209 W. 29th St. FLR 2
Baltimore, MD 21211

message: (301) 889-0587
email: nina@alpha.ece.jhu.edu

If you have further questions, please feel free to contact me.

Thank you,
Karen Haines



------------------------------

Subject: NIPS-89 workshop
From: Scott.Fahlman@B.GP.CS.CMU.EDU
Date: Wed, 18 Oct 89 11:18:58 -0400


The following workshop is one of those scheduled for December 1 and 2 in
Keystone Colorado, immediately following the NIPS-89 conference. I am
sending the announcement to this list because the issues to be discussed at
the workshop ought to be of particular interest to readers of this list.

It is my hope to get a bunch of experienced net-hackers together in one
place and to compare notes about the practical issues that arise in
applying this technology -- the sort of stuff that doesn't usually show up
in published papers. In addition to backprop people, I hope to have some
people in the workshop who have practical experience with various other
learning algorithms.

I'd like to get some idea of who might attend this workshop. Please send a
note to me (sef@cs.cmu.edu -- NOT to nn-bench!) if you think you might be
there. Please indicate whether your attendance is probable or just possible,
depending on which other workshops look interesting. Also, please indicate
whether you think you'll have some experiences of your own to share, or
whether you're basically a spectator, hoping to pick up some tips from
others. The more active participants we get, the more valuable this will
be for everyone.

This is a one-day workshop, so it can be combined with certain others.
I've requested a slot on the first day, but that's not settled yet. If
there's some other one-day post-NIPS workshop on the list that you'd really
like to attend as well, please tell me and I'll pass a summary along to the
people doing the scheduling.

-- Scott

***************************************************************************

NEURAL NETWORK LEARNING:
MOVING FROM BLACK ART TO A PRACTICAL TECHNOLOGY


Scott E. Fahlman
School of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213

Internet: fahlman@cs.cmu.edu
Phone: (412) 268-2575

There are a number of competing algorithms for neural network learning, all
rather new and poorly understood. Where theory is lacking, a reliable
technology can be built on shared experience, but it usually takes a long
time for this experience to accumulate and propagate through the community.
Currently, each research group has its own bag of tricks and its own body
of folklore about how to attack certain kinds of learning tasks and how to
diagnose the problem when things go wrong. Even when groups are willing to
share their hard-won experience with others, this can be hard to accomplish.

This workshop will bring together experienced users of back-propagation and
other neural net learning algorithms, along with some interested novices,
to compare views on questions like the following:

1. Which algorithms and variations work best for various classes of
problems? Can we come up with some diagnostic features that tell us what
techniques to try? Can we predict how hard a given problem will be?

2. Given a problem, how do we go about choosing the parameters for various
algorithms? How do we choose what size and shape of network to try? If
our first attempt fails, are there symptoms that can tell us what to try
next?

3. What can we do to bring more coherence into this body of folklore, and
facilitate communication of this informal kind of knowledge? An online
collection of standard benchmarks and public-domain programs is one idea,
already implemented at CMU. How can we improve this, and what other ideas
do we have?


------------------------------

End of Neuron Digest
*********************
