Neuron Digest   Sunday, 11 Dec 1988                Volume 4 : Issue 32 

Today's Topics:
Tech report abstracts
NN training program at UCSD
No hidden neurons, BP vs Perceptrons. Report available.
Stefan Shrier on Abduction Machines for Grammars
Stanford Adaptive Networks Colloquium
TR from ICSI on "Knowledge-Intensive Recruitment Learning"
INTERFACE Call for Commentators and/or Original Contributions.

[[ Editor's Note: In keeping with reader requests, this issue is
strictly tech reports and announcements. "Discussions" next issue. -PM ]]

Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: Tech report abstracts
From: honavar@cs.wisc.edu (A Buggy AI Program)
Date: Wed, 30 Nov 88 17:23:01 -0600


The following technical reports are now available.
Requests for copies may be sent to:
Linda McConnell
Technical reports librarian
Computer Sciences Department
University of Wisconsin-Madison
1210 W. Dayton St.
Madison, WI 53706.
USA.

or by e-mail, to: linda@shorty.cs.wisc.edu

PLEASE DO NOT REPLY TO THIS MESSAGE, BUT WRITE TO THE
TECH REPORTS LIBRARIAN FOR COPIES.

-- Vasant

Computer Sciences TR 793 (also in the Proceedings of the 1988
Connectionist Models Summer School, eds. Sejnowski, Hinton, and
Touretzky, Morgan Kaufmann, San Mateo, CA)

A NETWORK OF NEURON-LIKE UNITS THAT LEARNS TO PERCEIVE
BY GENERATION AS WELL AS REWEIGHTING OF ITS LINKS

Vasant Honavar and Leonard Uhr

Computer Sciences Department
University of Wisconsin-Madison
Madison, WI 53706. U.S.A.

Abstract

Learning in connectionist models typically involves the modification
of weights associated with the links between neuron-like units; the
topology of the network itself does not change. This paper describes
a new connectionist learning mechanism, generation, that enables a
network of neuron-like elements to modify its own topology by growing
links and recruiting units as needed (possibly from a pool of
available units). A combination of generation and reweighting of
links, together with appropriate brain-like constraints on network
topology and with regulatory mechanisms and neuronal structures that
monitor the network's performance and enable the network to decide
when to generate, is shown capable of discovering, through
feedback-aided learning, substantially more powerful, and potentially
more practical, networks for perceptual recognition than those
obtained through reweighting alone.

The recognition cones model of perception (Uhr 1972, Honavar 1987,
Uhr 1987) is used to demonstrate the feasibility of the approach.
Results of simulations of carefully pre-designed recognition cones
illustrate the usefulness of brain-like topological constraints such
as near-neighbor connectivity and converging-diverging heterarchies
for the perception of complex objects (such as houses) from digitized
TV images. In addition, preliminary results indicate that
brain-structured recognition cone networks can successfully learn to
recognize simple patterns (such as letters of the alphabet and
drawings of objects like cups and apples), using generation-discovery
as well as reweighting, whereas systems that attempt to learn using
reweighting alone fail to learn.
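To make the generation-plus-reweighting idea concrete, here is a
minimal sketch in Python. It is an illustration only, not the report's
recognition-cones algorithm: the unit type, step sizes, stopping rule,
and the toy XOR task are all invented for this example.

import numpy as np

rng = np.random.default_rng(0)

def recruit_unit(n_inputs):
    """Generation: grow links from the inputs to one newly recruited unit."""
    return rng.normal(scale=1.0, size=n_inputs)

def train(X, y, max_units=25, epochs=2000, lr=0.5, tol=1e-2):
    n, d = X.shape
    units = [recruit_unit(d)]              # input->unit links grown so far
    out_w = np.zeros(1)                    # unit->output links (reweighted)
    out_b = 0.0
    for _ in range(max_units):
        H = np.tanh(X @ np.array(units).T)     # activities of all units
        lr_k = lr / len(units)                 # keep the step size stable
        for _ in range(epochs):                # reweighting phase
            grad = H @ out_w + out_b - y
            out_w -= lr_k * H.T @ grad / n
            out_b -= lr_k * grad.mean()
        err = np.mean((H @ out_w + out_b - y) ** 2)
        if err < tol:                          # good enough: stop growing
            break
        units.append(recruit_unit(d))          # generate: recruit a new unit
        out_w = np.append(out_w, 0.0)
    return units, out_w, out_b, err

# Toy task (XOR) that reweighting of a single unit cannot solve alone.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
units, w, b, err = train(X, y)
print(len(units), "units recruited; final squared error %.4f" % err)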

-----------------------------------------------------
Computer Sciences TR 805

Experimental Results Indicate that
Generation, Local Receptive Fields and Global Convergence
Improve Perceptual Learning in Connectionist Networks

Vasant Honavar and Leonard Uhr
Computer Sciences Department
University of Wisconsin-Madison


Abstract


This paper presents and compares results for three types of
connectionist networks:

[A] Multi-layered converging networks of neuron-like units, with
    each unit connected to a small randomly chosen subset of units in
    the adjacent layers, that learn by re-weighting of their links;

[B] Networks of neuron-like units structured into successively larger
    modules under brain-like topological constraints (such as layered,
    converging-diverging heterarchies and local receptive fields) that
    learn by re-weighting of their links;

[C] Networks with brain-like structures that learn by
    generation-discovery, which involves the growth of links and the
    recruiting of units in addition to re-weighting of links.


Preliminary empirical results from simulation of these networks for
perceptual recognition tasks show large improvements in learning from
using brain-like structures (e.g., local receptive fields, global
convergence) over networks that lack such structure; further
substantial improvements in learning result from the use of
generation in addition to reweighting of links. We examine some of
the implications of these results for perceptual learning in
connectionist networks.
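The difference between the random connectivity of [A] and the local
receptive fields of [B] and [C] can be stated as connectivity masks.
The following Python sketch is illustrative only (unit counts, fan-in,
and function names are assumptions, not taken from the report):

import numpy as np

rng = np.random.default_rng(0)

def random_mask(n_lower, n_upper, fan_in):
    """[A] Each upper unit links to a random subset of the lower layer."""
    mask = np.zeros((n_upper, n_lower), dtype=bool)
    for i in range(n_upper):
        mask[i, rng.choice(n_lower, size=fan_in, replace=False)] = True
    return mask

def local_mask(n_lower, n_upper, fan_in):
    """[B]/[C] Each upper unit links only to near neighbors below it,
    so successively smaller layers form a converging hierarchy of
    local receptive fields."""
    mask = np.zeros((n_upper, n_lower), dtype=bool)
    for i in range(n_upper):
        centre = int(round(i * (n_lower - 1) / max(n_upper - 1, 1)))
        lo = max(0, centre - fan_in // 2)
        mask[i, lo:lo + fan_in] = True
    return mask

# A 16-unit layer converging onto a 4-unit layer, fan-in of 4 per unit.
print(random_mask(16, 4, 4).astype(int))
print(local_mask(16, 4, 4).astype(int))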


------------------------------

Subject: NN training program at UCSD
From: elman@amos.ling.ucsd.edu (Jeff Elman)
Date: Wed, 30 Nov 88 19:47:48 -0800


RESEARCH AND TRAINING PROGRAM IN NEURAL MODELLING FOR
DEVELOPMENTAL PSYCHOLOGISTS
University of California, San Diego

The Center for Research in Language at UCSD has just
obtained a pilot grant from the John D. and Catherine T.
MacArthur Foundation, to provide 5 - 10 developmental
psychologists at any level (dissertation students through
senior investigators) with short-term training in neural
computation. The program has two goals:

(1) To encourage developmental psychologists in target
interest areas (speech, language, early visual-motor
and cognitive development, future oriented processes)
to begin making use of connectionist modelling as a
tool for evaluating theories of learning and change;

(2) To encourage greater use of realistic developmental
data in the connectionist enterprise.

Our experience at UCSD suggests that a well-prepared and computer
literate developmental psychologist can learn to make productive use
of neural modelling techniques in a relatively short period of time,
i.e. 2 weeks to 3 months, depending on level of interest and prior
experience. Applicants may request training periods in this range at
any point from 9/89 through 8/90. Depending on the trainee's needs
and resources, we will provide (1) lodging at UCSD, (2) travel (in
some cases), (3) access to SUN and VAX workstations with all
necessary software, and (4) hourly services of an individual
programmer/tutor who will supervise the trainee's progress through
self-paced learning materials while assisting in the implementation
of the trainee's proposed developmental project. Trainees are also
welcome to attend seminars and workshops, and to consult with the
relatively large number of faculty involved in connectionist
modelling at UCSD.

Applicants are asked to submit 5 - 10 page proposals outlining a
specific modelling project in a well-defined domain of developmental
psychology. Criteria for evaluating proposals will include (1) the
scientific merit and feasibility of the project itself, (2) the
applicant's computer sophistication and probability of success with
short-term training, and (3) the probability that the applicant can
and will continue working at the interface between neural modelling
and developmental psychology (including access to adequate computer
facilities at the applicant's home site). Applicants should indicate
the preferred duration and starting date for the training program.

Applications should be submitted to Jeff Elman, Director, Center for
Research on Language, University of California, San Diego, La Jolla,
CA 92093. For further information, contact Jeff Elman (619-534-1147)
or Elizabeth Bates (619-534-3007). Email inquiries may be sent to
elman@amos.ling.ucsd.edu or bates@amos.ling.ucsd.edu.


------------------------------

Subject: No hidden neurons, BP vs Perceptrons. Report available.
From: sontag@fermat.rutgers.edu
Date: Fri, 02 Dec 88 10:45:58 -0500

The following technical report is now available from the Rutgers Center for
Systems and Control. Please send requests to
sycon@fermat.rutgers.edu
including your complete mailing address. If an electronic version (latex
file) is sufficient, please specify. (This is far better for us, since it
saves printing and mailing costs.)

-eduardo sontag
____________________________________________________________________________
Report SYCON-88-12
Backpropagation Separates when Perceptrons Do, E.D. Sontag and H.J. Sussmann,
Nov. 88. (9 pages.)

We consider in this paper the behavior of the least squares problem
that arises when one attempts to train a feedforward net with no
hidden neurons. It is assumed that the net has monotonic non-linear
output units. Under the assumption that a training set is
**separable**, that is, that there is a set of achievable outputs for
which the error is zero, we show that there are no non-global minima.
More precisely, we assume that the error is of a **threshold** LMS
type, in that the error function is zero for values "beyond" the
target value.

Our proof gives in addition the following stronger result: the
continuous gradient adjustment procedure is such that **from any
initial weight configuration** a separating set of weights is obtained
**in finite time**. Thus we have a precise analogue of the perceptron
learning theorem.

We contrast our results with the more classical pattern recognition
problem of threshold LMS with linear output units.
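As a rough illustration of a threshold LMS error of this kind, here is
a small Python sketch. It is not taken from the report: the tanh output
unit, the margin value standing in for the target, and the toy data are
all assumptions made for the example.

import numpy as np

def threshold_lms(w, X, y, margin=0.5):
    """y in {-1,+1}; the error is zero once the output is beyond the
    target value, and an ordinary squared error otherwise."""
    out = np.tanh(X @ w)                       # monotone nonlinear output unit
    short = np.maximum(0.0, margin - y * out)  # shortfall from the target
    return 0.5 * np.mean(short ** 2)

def threshold_lms_grad(w, X, y, margin=0.5):
    s = X @ w
    out = np.tanh(s)
    short = np.maximum(0.0, margin - y * out)
    coeff = -short * y * (1.0 - out ** 2)      # chain rule through tanh
    return X.T @ coeff / len(y)

# Continuous gradient adjustment on a separable two-class set.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (20, 2)), rng.normal(-2, 1, (20, 2))])
y = np.r_[np.ones(20), -np.ones(20)]
w = np.zeros(2)
for _ in range(2000):
    w -= 0.1 * threshold_lms_grad(w, X, y)
print("weights:", w, " error:", threshold_lms(w, X, y))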
____________________________________________________________________________

NOTE: the report now includes comments on its relation to the following works:

Shrivastava, Y., and S. Dasgupta, ``Convergence issues in perceptron
based adaptive neural network models,'' in {\it Proc. 25th Allerton
Conf. Comm. Contr. and Comp.}, U. of Illinois, Urbana, Oct. 1987,
pp. 1133-1141.

and

Wittner, B.S., and J.S. Denker, ``Strategies for teaching layered
networks classification tasks,'' in {\it Proc. Conf. Neural Info.
Proc. Systems,} Denver, 1987, Dana Anderson (Ed.), AIP Press.

Both of these were brought to our attention after Geoff's posting to
the net. In summary, the main difference with the latter is that our
convergence theorem does allow for sigmoidal nonlinearities. But the
idea that "thresholds" (or, as Steve Hanson and others prefer,
"margins") are needed was clearly stated in their paper, which should
get all the credit in that regard. The main differences with the
first of the above papers are also explained.


------------------------------

Subject: Stefan Shrier on Abduction Machines for Grammars
From: pratt@zztop.rutgers.edu (Lorien Y. Pratt)
Organization: Rutgers Univ., New Brunswick, N.J.
Date: 02 Dec 88 21:54:04 +0000

This is the last talk of the semester. Thanks for helping to make this
a successful colloquium series!
--Lori

Fall, 1988
Neural Networks Colloquium Series
at Rutgers

Abduction Machines for Grammar Discovery
----------------------------------------

Stefan Shrier
Grumman-Ctec, McLean, VA

Room 705, Hill Center, Busch Campus
Friday December 9, 1988 at 11:10 am
Refreshments served before the talk


Abstract

Abduction machines (AMs) discover regularity structure in patterns.
For language patterns (e.g., English sentences) several such machines
demonstrate how they learn some aspects of language. The machines
embody algorithms that train to learn word classes and grammars.
These machines exhibit linguistic competence in the sense that they
can produce and process "new" sentences to which they had not been
exposed during training. A computer model, which simulates a
learner, acquires an interesting subset of English grammar from
another computer model which simulates a teacher who knows the
language.

Lorien Y. Pratt
Computer Science Department, Rutgers University
Busch Campus, Piscataway, NJ 08854
pratt@paul.rutgers.edu
(201) 932-4634

------------------------------

Subject: Stanford Adaptive Networks Colloquium
From: netlist@psych.Stanford.EDU (Mark Gluck)
Date: Mon, 05 Dec 88 06:57:35 -0800

Stanford University Interdisciplinary Colloquium Series:
Adaptive Networks and their Applications
Dec. 6th (Tuesday, 3:15pm)

**************************************************************************

Self-Organization in a Perceptual Network

RALPH LINSKER

IBM T. J. Watson Research Center
Yorktown Heights, New York
Tel.: (914)-945-1077; e-mail: linsker@ibm.com

**************************************************************************

Abstract

What principles might help to account for the strikingly
complex sets of feature-analyzing properties found in
mammalian perceptual systems, and for their organization and
integration?

A Hebb-type synaptic modification rule causes model cells in a
feedforward network to develop feature-analyzing properties
(R. Linsker, Proc. Natl. Acad. Sci. USA 83, 7508-12, 8390-94,
8779-83 (Oct.-Nov. 1986)). These include center-surround and
orientation-selective cells (arranged in orientation columns) that
have qualitative similarities to cells of the first several stages
of the mammalian visual pathway. Furthermore, under certain
conditions Hebb-type rules generate model cells each of whose output
activities conveys maximum information about the input activity
values presented to it (R. Linsker, Computer 21 (3) 105-117
(March 1988)).
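A minimal sketch of a Hebb-type rule of this general flavor follows
(this is not Linsker's actual development rule; the saturation limits,
learning rate, and the toy correlated input ensemble are assumptions):

import numpy as np

rng = np.random.default_rng(0)

def hebb_train(inputs, n_steps=5000, lr=1e-3, w_max=1.0):
    """Weight change proportional to presynaptic times postsynaptic
    activity, with the weights clipped at saturation limits."""
    w = rng.uniform(-0.1, 0.1, inputs.shape[1])
    for t in range(n_steps):
        x = inputs[t % len(inputs)]        # presynaptic activities
        y = w @ x                          # linear postsynaptic response
        w += lr * y * x                    # Hebb-type update
        w = np.clip(w, -w_max, w_max)      # saturation limits
    return w

# Positively correlated two-input ensemble: the weights saturate with
# matching signs, reflecting the structure of the input correlations.
C = np.array([[1.0, 0.8], [0.8, 1.0]])
inputs = rng.multivariate_normal([0.0, 0.0], C, size=1000)
print(hebb_train(inputs))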

These results suggest a potential organizing principle, which I call
"maximum information preservation," for each processing stage of a
multilayered perceptual network having feedforward and lateral
(intralayer) connections. According to this principle, each
processing stage develops so that the output signal values (from
that stage) jointly convey maximum information about the input
values (to that stage), subject to certain constraints. The quantity
that is maximized is a Shannon information rate. I will discuss some
consequences of the principle, and its possible role in biological
and machine perceptual systems.
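In symbols (the notation here is assumed, not taken from the abstract),
the quantity maximized at each stage is the Shannon mutual information
between that stage's input values X and output values Y,

    I(Y;X) = H(Y) - H(Y \mid X)
           = \int p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)}\, dx\, dy ,

with the maximization taken over the stage's connection strengths,
subject to the constraints mentioned above.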


**************************************************************************

Location: Room 380-380W, which can be reached through the lower level
between the Psychology and Mathematical Sciences buildings.

Technical Level: These talks will be technically oriented and are intended
for persons actively working in related areas. They are not intended
for the newcomer seeking general introductory material.

Information: To be added to the network mailing list, netmail to
netlist@psych.stanford.edu. For additional information, contact
Mark Gluck (gluck@psych.stanford.edu).

Co-Sponsored by: Departments of Electrical Engineering (B. Widrow) and
Psychology (D. Rumelhart, M. Pavel, M. Gluck), Stanford Univ.


------------------------------

Subject: TR from ICSI on "Knowledge-Intensive Recruitment Learning"
From: baker%icsi.Berkeley.EDU@berkeley.edu (Paula Ann Baker)
Date: Mon, 05 Dec 88 16:02:05 -0800

*********************************************************************


Technical Report available from the
International Computer Science Institute

"Knowledge-Intensive Recruitment Learning"

TR-88-010

Joachim Diederich

International Computer Science Institute
1947 Center Street
Berkeley, CA 94704

Abstract

The model described here is a knowledge-intensive connectionist
learning system which uses a built-in knowledge representation module
for inferencing, and this reasoning capability in turn is used for
knowledge-intensive learning. The method requires only the
presentation of a single example to build a new concept
representation. On the connectionist network level, the central
process is the recruitment of new units and the assembly of units to
represent new conceptual information. Free, uncommitted subnetworks
are connected to the built-in knowledge network during learning. The
goal of knowledge-intensive connectionist learning is to improve the
operationality of the knowledge representation: mediated inferences,
i.e., complex inferences which require several inference steps, are
transformed into immediate inferences; in other words, recognition is
based on the immediate excitation from features directly associated
with a concept.
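As a toy illustration of the recruitment step only (the report's
representation module and inference machinery are not reproduced here;
the unit names and the example are invented), in Python:

# Recruitment: commit a free, uncommitted unit and wire the observed
# features directly to it, so that recognition no longer requires a
# chain of mediated inference steps.
class Network:
    def __init__(self):
        self.links = {}                            # unit -> units it excites
        self.free_units = ["free%d" % i for i in range(100)]

    def add_link(self, src, dst):
        self.links.setdefault(src, set()).add(dst)

    def recruit(self, features, parent_concept):
        """Build a new concept representation from a single example."""
        unit = self.free_units.pop()               # take an uncommitted unit
        for f in features:                         # immediate feature links
            self.add_link(f, unit)
        self.add_link(unit, parent_concept)        # attach to built-in knowledge
        return unit

net = Network()
# Built-in knowledge: a mediated chain of inferences about containers.
net.add_link("has-handle", "graspable")
net.add_link("graspable", "container")
# One presented example suffices to recruit a unit whose excitation comes
# immediately from the observed features.
cup = net.recruit(["has-handle", "holds-liquid", "small"], "container")
print(cup, "->", sorted(net.links[cup]))
print({f: sorted(net.links[f]) for f in ["has-handle", "holds-liquid", "small"]})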


This technical report is an extended version of: J. Diederich: Steps
toward knowledge-intensive connectionist learning. To appear in:
Pollack, J. & Barnden, J. (Eds.): Advances in Connectionist and Neural
Computation Theory. Ablex Publ. 1988


Please send requests for copies by e-mail to:
info@icsi.berkeley.edu

or by post to:

Librarian
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704


**************************************************************


------------------------------

Subject: INTERFACE Call for Commentators and/or Original Contributions.
From: MUSICO%BGERUG51.BITNET@CUNYVM.CUNY.EDU
Date: Fri, 09 Dec 88 14:38:00 +0100

INTERFACE Call for Commentators and/or Original Contributions.
--------------------------


MUSIC AND DYNAMIC SYSTEMS
=========================

INTERFACE - Journal of New Music Research - is an international
journal published by Swets & Zeitlinger B.V., Lisse, The Netherlands
(this year vol. 17). It is devoted to the discussion of all questions
that fall into the borderline areas between music on the one hand and
the physical and human sciences or related technologies on the other.
New fields of research, as well as new methods of investigation in
known fields, receive special emphasis.

INTERFACE is planning a special issue on MUSIC AND DYNAMIC SYSTEMS.
The motivation comes from two sources:

First, there is the renewed interest in Dynamic Systems Theory from
the point of view of massively parallel computing and artificial
intelligence research. Massively parallel techniques and technology
have very recently been applied to music perception/cognition and to
strategies for automated composition. The approach is an alternative
to the classical symbol-based approaches to cognition and problem
solving, and it is believed that it may establish a new paradigm that
will dominate research in the coming decades.

The second motivation comes from a recently received original
contribution to INTERFACE by two Romanian scientists: Cosmin and
Mario Georgescu. They propose a system approach to musicology based
on General Systems Theory. The paper ("A System Approach to Music")
is challenging in that it raises a number of methodological problems
(e.g., problems of verification) in musicology. The authors claim
that "The paper should be considered primarily as an exposition of
principles and as an argument in favour of the credibility degree of
the system approach in musicology. The change of this approach into
an effective analysis tool for musical work is a future task that
goes beyond the aim of this paper."

However, General Systems Theory is by no means the only possible
application of Systems Theory to music. The massively parallel
approach in computing and the application of Dynamic Systems Theory
to the field of music perception and cognition, automated
compositional strategies, or historical musicology allow new insights
into our understanding and comprehension of the complex phenomenon
which we all admire. How far can we go in modeling the complex
dynamics of MUSIC?

--------------------------


- Contributions to this special issue of INTERFACE on MUSIC AND
DYNAMIC SYSTEMS may be sent to Marc Leman before June 30 (publication
of this issue is planned for the fall of 1989).

- Commentators interested in the Georgescus' paper (61 pp.) may ask
for a copy.

---------------------------

Please send your correspondence for this issue to:

Marc Leman (editor)
University of Ghent
Institute for Psychoacoustics and Electronic Music
Blandijnberg 2
B-9000 GHENT
Belgium
e-mail: musico@bgerug51.bitnet

The address of the publisher is:
Swets Publishing Service
Heereweg 347
2161 CA Lisse
The Netherlands


------------------------------

End of Neurons Digest
*********************
