Neuron Digest   Wednesday, 10 Apr 1991                Volume 7 : Issue 19 

Today's Topics:
TR - On planning and exploration in non-discrete worlds
Paper announcements
NN application in molecular biology
TR - Segmentation, Binding, and illusory conjunctions
preprint of paper on visual binding
Two papers on information transfer / problem decomposition


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: TR - On planning and exploration in non-discrete worlds
From: Sebastian Thrun <thrun@gmdzi.uucp>
Date: Tue, 19 Mar 91 03:39:54 -0100

Well, there is a new TR available on the neuroprose archive which is
more or less an extended version of the NIPS paper I announced some weeks
ago:



ON PLANNING AND EXPLORATION IN NON-DISCRETE WORLDS

Sebastian Thrun
German National Research Center for Computer Science
St. Augustin, FRG

Knut Moeller
Bonn University
Bonn, FRG



The application of reinforcement learning to control problems has
received considerable attention in the last few years [Anderson86,
Barto89, Sutton84]. In general, there are two classes of techniques for
solving reinforcement learning problems, direct and indirect, each with
its own advantages and disadvantages.

We present a system that combines both methods. Through interaction with
an unknown environment, a world model is progressively constructed using
the backpropagation algorithm. To optimize actions with respect to
future reinforcement, planning is applied in two steps: an experience
network proposes a plan, which is subsequently optimized by gradient
descent through a chain of model networks. While the system operates in
a goal-oriented manner due to the planning process, the experience
network is trained; its accumulating experience is fed back into the
planning process in the form of initial plans, so that planning can be
gradually reduced. To ensure complete system identification, a
competence network is trained to predict the accuracy of the model.
This network enables purposeful exploration of the world.

The appropriateness of this approach to reinforcement learning is
demonstrated in three different control experiments: a target tracking,
a robotics, and a pole balancing task.

Keywords: backpropagation, connectionist networks, control, exploration,
planning, pole balancing, reinforcement learning, robotics, neural
networks, and, and, and...
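
As a rough illustration of the planning step (a minimal sketch, not the
authors' code), the following Python fragment optimizes a plan by
gradient descent through a chain of models, substituting a known linear
model for the learned backpropagation world model; all matrices and
constants are invented for illustration:

import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # toy linear state transition
B = np.array([[0.0], [0.1]])              # toy action effect
x0 = np.zeros(2)
goal = np.array([1.0, 0.0])
T = 20
plan = np.zeros((T, 1))                   # initial plan: all-zero actions

for step in range(500):                   # gradient descent on the plan
    xs = [x0]                             # forward: roll plan through the model chain
    for t in range(T):
        xs.append(A @ xs[-1] + B @ plan[t])
    adj = 2.0 * (xs[-1] - goal)           # d cost / d x_T for cost = ||x_T - goal||^2
    for t in reversed(range(T)):          # backward: chain rule through the models
        plan[t] -= 0.1 * (B.T @ adj)      # gradient step on action u_t
        adj = A.T @ adj                   # propagate gradient one time step back
print("final state:", xs[-1], "goal:", goal)

In the TR the model is itself a learned network, and the initial plan
comes from the experience network rather than from zeros.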

=-------------------------------------------------------------------------

The TR can be retrieved by ftp:

unix> ftp cheops.cis.ohio-state.edu

Name: anonymous
Guest Login ok, send ident as password
Password: neuron
ftp> binary
ftp> cd pub
ftp> cd neuroprose
ftp> get thrun.plan-explor.ps.Z
ftp> bye

unix> uncompress thrun.plan-explor.ps
unix> lpr thrun.plan-explor.ps

=-------------------------------------------------------------------------


If you have trouble ftping the files, do not hesitate to contact me.


--- Sebastian Thrun
(st@gmdzi.uucp, st@gmdzi.gmd.de)



------------------------------

Subject: Paper announcements
From: thanasis kehagias <ST401843@brownvm.brown.edu>
Date: Tue, 19 Mar 91 18:29:54 -0500


I have just placed two papers of mine in the ohio-state archive.

The first one is in the file kehagias.srn1.ps.Z and the relevant figures
in the companion file kehagias.srn1fig.ps.Z.

The second one is in the file kehagias.srn2.ps.Z and the relevant figures
in the companion file kehagias.srn2fig.ps.Z.

Detailed instructions for getting and printing these files are included
at the end of this message.

Some of you have already received versions of these files by email. In
that case, please read the postscript at the end of this message.

=-----------------------------------------------------------------------
Stochastic Recurrent Network training
by the Local Backward-Forward Algorithm

Ath. Kehagias

Brown University
Div. of Applied Mathematics

We introduce Stochastic Recurrent Networks, which are collections of
interconnected finite state units. At every discrete time step, each
unit goes into a new state, following a probability law that is
conditional on the state of neighboring units at the previous time step.
A network of this type can learn a stochastic process, where ``learning''
means maximizing the Likelihood function of the model. A new learning
(i.e. Likelihood maximization) algorithm is introduced, the
Local Backward-Forward Algorithm. The new algorithm is based on the Baum
Backward-Forward Algorithm (for Hidden Markov Models) and improves speed
of learning substantially. Essentially, the local Backward-Forward
Algorithm is a version of Baum's algorithm which estimates local
transition probabilities rather than the global transition probability
matrix. Using the local BF algorithm, we train SRN's that solve the 8-3-8
encoder problem and the phoneme modelling problem.
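
For readers unfamiliar with the model class, here is a minimal Python
sketch (not the paper's code) of the forward dynamics only: binary
finite-state units whose next state is drawn from a probability law
conditioned on the current states of the other units. The logistic form
of the conditional and all weights are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps = 4, 10
W = rng.normal(size=(n_units, n_units))   # hypothetical coupling weights
b = rng.normal(size=n_units)
state = rng.integers(0, 2, size=n_units)  # binary unit states

for t in range(n_steps):
    # P(unit i is on at t+1 | all states at t): a logistic conditional law
    p_on = 1.0 / (1.0 + np.exp(-(W @ state + b)))
    state = (rng.random(n_units) < p_on).astype(int)
    print(t, state)

Fitting such conditional laws to data is what the paper's Local
Backward-Forward Algorithm does; the learning step is not shown here.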

This is the paper kehagias.srn1.ps.Z, kehagias.srn1fig.ps.Z. The paper
srn1 has undergone significant revision: it had too many typos, bad
notation, and poor organization, all of which have now been fixed.
Thanks to N. Chater, S. Nowlan, A. T. Tsoi, and M. Perrone for many
useful suggestions along these lines.

=--------------------------------------------------------------------
Stochastic Recurrent Network training:
Prediction and Classification of Time Series

Ath. Kehagias

Brown University
Div. of Applied Mathematics

We use Stochastic Recurrent Networks of the type introduced in [Keh91a]
as models of finite-alphabet time series. We develop the Maximum
Likelihood Prediction Algorithm and the Maximum A Posteriori
Classification Algorithm (which can both be implemented in recurrent PDP
form). The prediction problem is: given the output up to the present
time, Y^1,...,Y^t, and the input up to the immediate future,
U^1,...,U^(t+1), predict with Maximum Likelihood the output Y^(t+1) that
the SRN will produce in the immediate future. The classification problem
is: given the output up to the present time, Y^1,...,Y^t, and the input
up to the present time, U^1,...,U^t, as well as a number of candidate
SRN's, M_1, M_2, ..., M_K, find the network that has Maximum Posterior
Probability of producing Y^1,...,Y^t. We apply our algorithms to
prediction and classification of speech waveforms.
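
As a toy illustration of the classification problem (not the paper's
algorithm), the sketch below performs maximum a posteriori selection
among candidate models, using plain Markov chains as stand-ins for the
candidate SRN's M_1,...,M_K; with a uniform prior, MAP classification
reduces to picking the model with the highest likelihood. The matrices
and the sequence are invented:

import numpy as np

# two hypothetical candidate models: row-stochastic transition matrices
models = [
    np.array([[0.9, 0.1], [0.2, 0.8]]),   # M_1: sticky states
    np.array([[0.5, 0.5], [0.5, 0.5]]),   # M_2: memoryless
]
y = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]        # observed symbol sequence

def log_likelihood(P, seq):
    # sum of log transition probabilities (initial-state term omitted)
    return sum(np.log(P[a, b]) for a, b in zip(seq, seq[1:]))

scores = [log_likelihood(P, y) for P in models]
print("log-likelihoods:", scores)
print("MAP choice: M_%d" % (int(np.argmax(scores)) + 1))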


This is the paper kehagias.srn2.ps.Z, kehagias.srn2fig.ps.Z.
=-----------------------------------------------------------------------
To get these files, do the following:

gvax> ftp cheops.cis.ohio-state.edu
220 cheops.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
ftp> binary
200 Type set to I.
ftp> get kehagias.srn1.ps.Z
ftp> get kehagias.srn1fig.ps.Z
ftp> get kehagias.srn2.ps.Z
ftp> get kehagias.srn2fig.ps.Z
ftp> quit
gvax> uncompress kehagias.srn1.ps.Z
gvax> uncompress kehagias.srn1fig.ps.Z
gvax> uncompress kehagias.srn2.ps.Z
gvax> uncompress kehagias.srn2fig.ps.Z
gvax> lqp kehagias.srn1.ps
gvax> lqp kehagias.srn1fig.ps
gvax> lqp kehagias.srn2.ps
gvax> lqp kehagias.srn2fig.ps



POSTSCRIPT: All of the people who sent a request (about a month ago) for
srn1 in its original form are on my mailing list, and most got copies of
the new versions of srn1 and srn2 by email. Some of these files did not
make it through the Internet because of size restrictions and the like,
so you may want to ftp them now. Incidentally, if you want to be removed
from the mailing list (for when the next paper in the series comes out),
send me mail.


Thanasis Kehagias

------------------------------

Subject: NN application in molecular biology
From: BRUNAK@nbivax.nbi.dk
Date: Wed, 20 Mar 91 13:06:00 +0100


The following preprint is available in PostScript form by anonymous ftp:

"Prediction of human mRNA donor and acceptor sites from the DNA
sequence"
S. Brunak, J. Engelbrecht and S. Knudsen.

Journal of Molecular Biology, to appear.


Abstract:

Artificial neural networks have been applied to the prediction of splice
site location in human pre-mRNA. A joint prediction scheme where
prediction of transition regions between introns and exons regulates a
cutoff level for splice site assignment was able to predict splice site
locations with confidence levels far better than previously reported in
the literature. The problem of predicting donor and acceptor sites in
human genes is hampered by a large number of false positives; in the
paper, the distribution of these false splice sites is examined and
linked to a possible scenario for the splicing mechanism in vivo. When
the presented method detects 95% of the true donor and acceptor sites,
it makes less than 0.1% false donor site assignments and less than 0.4%
false acceptor site assignments. For the large data set used in this
study, this means that on average there are one and a half false donor
sites per true donor site and six false acceptor sites per true acceptor
site. With the joint assignment method, more than a fifth of the true
donor sites and around one fourth of the true acceptor sites could be
detected without any accompanying false positive predictions. Highly
confident splice sites could not be isolated with a
widely used weight matrix method or by separate splice site networks. A
complementary relation between the confidence levels of the
coding/non-coding and the separate splice site networks was observed,
with many weak splice sites having sharp transitions in the
coding/non-coding signal and many stronger splice sites having more
ill-defined transitions between coding and non-coding.
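
To make the joint assignment scheme concrete, here is a schematic
Python fragment (an invented sketch, not the authors' method): a sharp
predicted coding/non-coding transition lowers the cutoff that a
candidate site's splice-network score must clear. All scores, weights
and thresholds are made up:

import numpy as np

splice_score = np.array([0.95, 0.60, 0.85, 0.40])      # per candidate site
transition_sharpness = np.array([0.9, 0.2, 0.8, 0.9])  # coding/non-coding signal

base_cutoff = 0.9
# a sharp coding/non-coding transition relaxes the required splice score
cutoff = base_cutoff - 0.3 * transition_sharpness
accepted = splice_score >= cutoff
print("accepted candidate sites:", np.nonzero(accepted)[0])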

Subject category: Genes, under the sub-headings: expression, sequence
and structure.

Keywords: Intron splicing, human genes, exon selection, neural network,
computer prediction.


=-----------------------------------------------------------------------

You will need a PostScript printer to print the file.
To obtain a copy of the preprint, use anonymous ftp from
cheops.cis.ohio-state.edu (here is what the transaction looks like):

unix> ftp
ftp> open cheops.cis.ohio-state.edu
Connected to cheops.cis.ohio-state.edu.
220 cheops.cis.ohio-state.edu FTP server (Version blah blah) ready.
Name (cheops.cis.ohio-state.edu:yourname): anonymous
331 Guest login ok, send ident as password.
Password: anything
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose
250 CWD command successful.
ftp> bin
200 Type set to I.
ftp> get brunak.netgene.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for brunak.netgene.ps.Z
226 Transfer complete.
local: brunak.netgene.ps.Z remote: brunak.netgene.ps.Z
ftp> quit
221 Goodbye.
unix> uncompress brunak.netgene.ps.Z
unix> lpr brunak.netgene.ps




Hardcopies are also available:

S. Brunak and J. Engelbrecht
Department of Structural Properties of Materials
Building 307
The Technical University of Denmark
DK-2800 Lyngby, Denmark
brunak@nbivax.nbi.dk


------------------------------

Subject: TR - Segmentation, Binding, and illusory conjunctions
From: HORN%TAUNIVM.BITNET@bitnet.CC.CMU.EDU
Date: Thu, 21 Mar 91 18:29:54 -0500


The following preprint is available. Requests can be sent to HORN at
TAUNIVM.BITNET:


Segmentation, Binding and Illusory Conjunctions

D. HORN, D. SAGI and M. USHER

Abstract

We investigate binding within the framework of a model of excitatory and
inhibitory cell assemblies which form an oscillating neural network. Our
model is composed of two such networks which are connected through their
inhibitory neurons. The excitatory cell assemblies represent memory
patterns. The latter have different meanings in the two networks,
representing two different attributes of an object, such as shape and
color.

The networks segment an input which contains mixtures of such pairs into
staggered oscillations of the relevant activities. Moreover, the phases
of the oscillating activities representing the two attributes in each
pair lock with each other to demonstrate binding. The system works very
well for two inputs, but displays faulty correlations when the number of
objects is larger than two. In other words, the network conjoins
attributes of different objects, thus showing the phenomenon of
"illusory conjunctions", as in human vision.
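
A minimal Python sketch (not the paper's model) of binding by phase
locking: two Kuramoto-style oscillators, standing in for the two
attributes of one object, lock despite different intrinsic frequencies,
while an uncoupled third drifts freely. Frequencies and couplings are
invented:

import numpy as np

rng = np.random.default_rng(1)
omega = np.array([1.00, 1.05, 1.60])   # intrinsic frequencies
K = np.array([[0.0, 2.0, 0.0],         # coupling: only units 0 and 1 interact
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
theta = rng.uniform(0, 2 * np.pi, 3)   # oscillator phases
dt = 0.01
for _ in range(5000):
    dtheta = omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(1)
    theta += dt * dtheta
print("phase difference 0-1:", (theta[0] - theta[1]) % (2 * np.pi))
print("phase difference 0-2:", (theta[0] - theta[2]) % (2 * np.pi))

In the paper the oscillations arise from excitatory and inhibitory cell
assemblies rather than from explicit phase variables.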


------------------------------

Subject: preprint of paper on visual binding
From: "Erik D. Lumer" <lumer@parc.xerox.com>
Date: Fri, 05 Apr 91 10:27:48 -0800

The following paper is available in hardcopy form. If interested, send
e-mail requests to "lumer@parc.xerox.com"
=------------------------------------------------------------

"Binding Hierarchies: A Basis for Dynamic Perceptual Grouping"

Erik Lumer and Bernardo Huberman

Dynamics of Computation Group,
Xerox Palo Alto Research Center
and
Stanford University

(submitted to Neural Computation)
-----------------------------------

Since it has been argued that the brain binds its fragmentary
representations of perceptual events via phase-locking of stimulated
neuron oscillators, it is important to determine how extended
synchronization can occur in a clustered organization of cells possessing
an intrinsic distribution of firing rates. In order to answer that
question, we establish the basic conditions for the existence of a
binding mechanism based on phase-locked oscillations. In addition, we
present a simple hierarchical architecture of feedback units which not
only induces robust synchronization and segregation of perceptual groups,
but also serves as a generic binding machine.
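
A rough sketch (not the authors' architecture) of the hierarchical
idea: one feedback unit per group tracks the mean phase of its members
and pulls them toward it, so each perceptual group synchronizes despite
a spread of intrinsic rates. Everything below is invented for
illustration:

import numpy as np

rng = np.random.default_rng(2)
omega = rng.uniform(0.9, 1.1, size=6)   # spread of intrinsic firing rates
group = np.array([0, 0, 0, 1, 1, 1])    # two perceptual groups
theta = rng.uniform(0, 2 * np.pi, 6)    # oscillator phases
K, dt = 3.0, 0.01
for _ in range(5000):
    for g in (0, 1):
        members = group == g
        # feedback unit: the mean phase (order-parameter angle) of its group
        mean_phase = np.angle(np.exp(1j * theta[members]).mean())
        theta[members] += dt * (omega[members]
                                + K * np.sin(mean_phase - theta[members]))
for g in (0, 1):
    coherence = abs(np.exp(1j * theta[group == g]).mean())
    print("group %d coherence (1.0 = fully locked): %.3f" % (g, coherence))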


------------------------------

Subject: Two papers on information transfer / problem decomposition
From: "Lorien Y. Pratt" <pratt@paul.rutgers.edu>
Date: Fri, 05 Apr 91 17:08:52 -0500

The following two papers are now available via FTP from the neuroprose
archives. The first is for AAAI-91, so it is written for an AI/machine
learning audience. The second is for IJCNN-91, so it is more neural
network-oriented. There is some overlap between them: the AAAI paper
reports briefly on the study described in more detail in the IJCNN
paper. Instructions for retrieval are at the end of this message.

--Lori

#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@#@

Direct Transfer of Learned Information Among Neural Networks

To appear: Proceedings of AAAI-91


Lorien Y. Pratt and Jack Mostow and Candace A. Kamm

Abstract

A touted advantage of symbolic representations is the ease of
transferring learned information from one intelligent agent to
another. This paper investigates an analogous problem: how to use
information from one neural network to help a second network learn a
related task. Rather than translate such information into symbolic
form (in which it may not be readily expressible), we investigate the
direct transfer of information encoded as weights.

Here, we focus on how transfer can be used to address the important
problem of improving neural network learning speed. First we present
an exploratory study of the somewhat surprising effects of pre-setting
network weights on subsequent learning. Guided by hypotheses from this
study, we sped up back-propagation learning for two speech recognition
tasks. By transferring weights from smaller networks trained on
subtasks, we achieved speedups of up to an order of magnitude compared
with training starting with random weights, even taking into account
the time to train the smaller networks. We include results on how
transfer scales to a large phoneme recognition problem.
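
A schematic Python fragment (not the authors' code) of direct weight
transfer: the weights of a small trained subtask network are copied
into the corresponding corner of a larger network's weight matrices,
and the remaining entries get small random values. All shapes are
invented:

import numpy as np

rng = np.random.default_rng(0)
# stand-ins for a small trained subtask network with layout 10-5-3
W1_small = rng.normal(size=(5, 10))
W2_small = rng.normal(size=(3, 5))

# larger target network with layout 10-8-4, mostly small random weights
W1_big = 0.1 * rng.normal(size=(8, 10))
W2_big = 0.1 * rng.normal(size=(4, 8))
W1_big[:5, :] = W1_small      # transferred hidden units keep their features
W2_big[:3, :5] = W2_small     # transferred outputs reuse those hidden units

# back-propagation on the full task would now start from W1_big, W2_big
# rather than from purely random weights
print(W1_big.shape, W2_big.shape)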

@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@%@

Improving a Phoneme Classification Neural Network
through Problem Decomposition

To appear: Proceedings of IJCNN-91

L. Y. Pratt and C. A. Kamm

Abstract

In the study of neural networks, it is important to determine whether
techniques that have been validated on smaller experimental tasks can
be scaled to larger real-world problems. In this paper we discuss how
a methodology called "problem decomposition" can be applied to
AP-net, a neural network for mapping acoustic spectra to phoneme
classes. The network's task is to recognize phonemes from a large
corpus of multiple-speaker, continuously-spoken sentences. We review
previous AP-net systems and present results from a decomposition study
in which smaller networks trained to recognize subsets of phonemes are
combined into a larger network for the full signal-to-phoneme mapping
task. We show that, by using this problem decomposition methodology,
comparable performance can be obtained in significantly fewer
arithmetic operations.
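
A schematic Python fragment (not the authors' system) of the
combination step: the hidden layers of two subset networks over the
same input are stacked, and their output weights are wired block-wise
into one larger network covering all classes. Shapes and names are
invented:

import numpy as np

rng = np.random.default_rng(0)
# stand-ins for two trained subset networks over the same 12-dim input:
# net A classifies 2 phonemes, net B classifies 3 others
W1_a, W2_a = rng.normal(size=(6, 12)), rng.normal(size=(2, 6))
W1_b, W2_b = rng.normal(size=(8, 12)), rng.normal(size=(3, 8))

# combined network: hidden layers stacked, outputs wired block-wise
W1 = np.vstack([W1_a, W1_b])              # (14, 12) combined hidden layer
W2 = np.zeros((5, 14))                    # 5 phoneme classes in total
W2[:2, :6] = W2_a                         # subset-A outputs see A's hidden units
W2[2:, 6:] = W2_b                         # subset-B outputs see B's hidden units
# fine-tuning from here covers the full signal-to-phoneme mapping task
print(W1.shape, W2.shape)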


%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%^%

To retrieve:

unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get pratt.aaai91.ps.Z
ftp> get pratt.ijcnn91.ps.Z
ftp> quit
unix> uncompress pratt.aaai91.ps.Z pratt.ijcnn91.ps.Z
unix> lpr pratt.aaai91.ps pratt.ijcnn91.ps


------------------------------

End of Neuron Digest [Volume 7 Issue 19]
****************************************
