Neuron Digest   Monday, 21 Nov 1988                Volume 4 : Issue 27 

Today's Topics:
Neural Nets as A/D Converter
Notes on Neural Networks (Two Experiments)
Questionnaire about Axon/Netset
Neural Nets and Search
Neural nets and edge detection
Wanted: references to unsupervised learning
Technical definition of trainability
Stanford Adaptive Networks Colloquium


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: Neural Nets
From: U89_APEREZ1@VAXC.STEVENS-TECH.EDU (Andres Perez)
Date: Mon, 14 Nov 88 14:46:28 -0500


I am currently a Senior at Stevens Institute of Technology, working on
my Senior Design Thesis on Neural Networks and their circuit
implementation; more specifically, I'm using Neural Nets to build an A/D
converter using Hopfield's Model as outlined in his article in IEEE
Trans. on Circuits and Systems, vol. CAS-33, no. 5, May 1986, p. 533.

Is there any information on the specific circuit implementation of
Hopfield's model (such as the type of op-amps used, etc.), or some
documentation where this model is clearly explained? Or perhaps you could
tell me where such information can be obtained.

I appreciate the attention to this letter and thank you in advance.

Andres E. Perez

Mail: Stevens Tech
P.O. Box S-1009
Hoboken, NJ 07030 (USA)

BITNET-mail: U89_AEP@SITVXB

[[ Editor's Note: I betray my ignorance about this particular subject.
Perhaps some kind reader could help this fellow? -PM ]]
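
For readers who want to experiment numerically before attempting a circuit,
here is a minimal simulation sketch in Python. It assumes the weight and bias
formulas commonly quoted from the Tank & Hopfield A/D paper (T_ij = -2^(i+j)
for i != j, and I_i = -2^(2i-1) + 2^i * x); please check them against the
article before relying on them, and note that the discrete network can settle
in a local minimum (i.e., a wrong code) for some inputs.

import numpy as np

def hopfield_adc(x, n_bits=4, sweeps=20, seed=0):
    # Discrete-time sketch of a Hopfield-style A/D converter.
    # Assumed parameters (verify against the 1986 article):
    #   T[i][j] = -2**(i+j) for i != j, T[i][i] = 0
    #   I[i]    = -2**(2*i - 1) + (2**i) * x
    rng = np.random.default_rng(seed)
    idx = np.arange(n_bits)
    T = -(2.0 ** (idx[:, None] + idx[None, :]))
    np.fill_diagonal(T, 0.0)
    I = -(2.0 ** (2 * idx - 1)) + (2.0 ** idx) * x
    V = rng.integers(0, 2, n_bits).astype(float)
    for _ in range(sweeps):
        # Asynchronous threshold updates, which never increase the energy.
        for i in rng.permutation(n_bits):
            V[i] = 1.0 if T[i] @ V + I[i] > 0 else 0.0
    return V          # V[i] is the i-th output bit, least significant first

if __name__ == "__main__":
    for x in range(16):
        bits = hopfield_adc(float(x))
        print(x, "->", int(bits @ (2 ** np.arange(4))))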

------------------------------

Subject: Notes on Neural Networks
From: kanecki@VACS.UWP.WISC.EDU (David Kanecki)
Organization: The Internet
Date: 15 Nov 88 04:35:47 +0000

[[ Editor's Note: This was recently posted on AI-Digest. I have lightly
edited it for format and the usual punctuation et al. Recently, Mr.
Kanecki became a Neuron Digest subscriber as well. He is also looking for
a position, but I would prefer not to publish resumes in the Digest.
Contact him directly for a copy. -PM ]]


Notes on Neural Networks:

During the month of September, while trying various experiments on neural
networks, I made two observations:

1. Depending on how the data for the A and B matrices are set up, the
learning equation

w(n) = w(n-1) + nn * (t(n) - o(n)) * i(n)^T

(where nn is the learning rate, t(n) the target output, o(n) the actual
output, and i(n) the input vector) may take more presentations for the
system to learn the A and B output.

2. Neural networks are self-correcting, in that if an incorrect W matrix
is given, then by using the presentation/update process the W matrix will
come to give the correct answers, but the values of its individual
elements will differ when compared to a correct W matrix.


Case 1: Different A and B matrix setup

For example, in applying neural networks to the XOR problem I used the
following A and B matrices:

   A    H  |  H  B
 ----------|-------
 0  0   0  |  0  0
 0  1   0  |  0  1
 1  0   0  |  0  1
 0  1   1  |  1  1

My neural network learning system took 12 presentations to arrive at the
correct B matrix when presented with the corresponding A matrix. The W
matrix was:

W(12) = | -0.5   0.75 |
        | -0.5   0.75 |
        |  3.5  -1.25 |

For the second test I set the A and B matrices as follows:

   A    H  |  B
 ----------|----
 0  0   0  |  0
 0  1   0  |  1
 1  0   0  |  1
 1  1   1  |  0

This setup took 8 presentations for my neural network learning system to
arrive at a correct B matrix when presented with the corresponding A
matrix. The final W matrix was:

W(8) = | -0.5 -0.5 2.0 |

Conclusion: These experiments indicate to me that a system's learning
rate can be increased by presenting as little extraneous data as possible.
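
As a rough illustration (not Mr. Kanecki's own program), here is a minimal
Python sketch of the presentation/update loop for the second, smaller setup
above. The learning rate nn and the output threshold are not given in the
notes, so the values below (0.25 and 0.5) are assumptions, and the
presentation count and final W need not match the figures quoted.

import numpy as np

# Second XOR setup: inputs are (A1, A2, H), with H = A1 AND A2 supplied
# as an extra input; the target B is XOR(A1, A2).
A = np.array([[0, 0, 0],
              [0, 1, 0],
              [1, 0, 0],
              [1, 1, 1]], dtype=float)
B = np.array([0, 1, 1, 0], dtype=float)

nn = 0.25            # learning rate (assumed)
w = np.zeros(3)

for n in range(1, 101):
    o = (A @ w > 0.5).astype(float)     # assumed 0.5 threshold on the output
    w = w + nn * (B - o) @ A            # w(n) = w(n-1) + nn*(t(n)-o(n))*i(n)^T,
                                        # summed over the whole pattern set
    if np.array_equal((A @ w > 0.5).astype(float), B):
        print("learned after", n, "presentations; W =", w)
        break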


--------------


Case 2: Self Correction of Neural Networks

In this second experiment I found that neural networks exhibit great
flexibility. The experiment turned out to be a happy accident. Before I
had developed my neural network learning system, I was doing neural
network experiments by spreadsheet and hand transcription. During the
transcription, three elements in a 6 x 5 W matrix got the wrong sign. The
resulting W matrix was:

W(0)  = |  0.0   2.0   2.0   2.0   2.0 |
        | -2.0   0.0   4.0   0.0   0.0 |
        |  0.0   2.0  -2.0   2.0  -2.0 |
        |  0.0   2.0   0.0  -2.0   2.0 |
        | -2.0   4.0   1.0   0.0   0.0 |
        |  2.0  -4.0   2.0   0.0   0.0 |

By applying the learning algorithm, it took 24 presentations for this W
matrix to give the correct B matrix when presented with the corresponding
A matrix:

W(24) = |  0.0    2.0    2.0    2.0    2.0  |
        | -1.53   1.18   1.18  -0.25  -0.15 |
        |  0.64   0.12  -0.69   1.16  -0.50 |
        |  0.27  -0.26  -0.06  -0.53   0.80 |
        | -1.09   1.62   0.79  -0.43  -0.25 |
        |  1.53  -1.18  -0.68   0.25   0.15 |

But when the experiment was run on my neural network learning system, I
started from a W(0) matrix of:

W(0)  = |  0.0   2.0   2.0   2.0   2.0 |
        | -2.0   0.0   4.0   0.0   0.0 |
        |  0.0   2.0  -2.0   2.0  -2.0 |
        |  0.0   2.0  -2.0  -2.0   2.0 |
        | -2.0   4.0   0.0   0.0   0.0 |
        |  2.0  -4.0   0.0   0.0   0.0 |

After 5 presentations the W(5) matrix came out to be:

W(5)  = |  0.0   2.0   2.0   2.0   2.0 |
        | -2.0   0.0   4.0   0.0   0.0 |
        |  0.0   2.0  -2.0   2.0  -2.0 |
        |  0.0   2.0  -2.0  -2.0   2.0 |
        |  2.0  -4.0   0.0   0.0   0.0 |

Conclusion: Neural networks are self-correcting, but the final W matrix
may have different element values. Also, if a W matrix does not have to go
through the test/update procedure, the W matrix can be used both ways, in
that an A matrix generates the B matrix and a B matrix generates the A
matrix, as in the second example.
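
The self-correction effect can be reproduced in the same way. The sketch
below reuses the XOR patterns from Case 1 (the A and B matrices of the
6 x 5 experiment are not listed above); the corrupted starting W, learning
rate, and threshold are again assumptions.

import numpy as np

A = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0], [1, 1, 1]], dtype=float)
B = np.array([0, 1, 1, 0], dtype=float)
nn = 0.25

def train(w, max_presentations=200):
    # Presentation/update loop: w(n) = w(n-1) + nn*(t(n)-o(n))*i(n)^T.
    for n in range(1, max_presentations + 1):
        o = (A @ w > 0.5).astype(float)
        w = w + nn * (B - o) @ A
        if np.array_equal((A @ w > 0.5).astype(float), B):
            return n, w
    return None, w

good = np.array([1.0, 1.0, -2.0])           # a W that already gives B for every A row
corrupted = good * np.array([1, -1, -1])    # flip the sign of two elements

n, w = train(corrupted)
print("recovered after", n, "presentations; final W =", w)
# The recovered W again gives the correct B for every A row, but its
# individual elements differ from those of the original 'good' W.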

----------------


I am interested in communicating and discussing various aspects of neural
networks. I can be contacted at:

kanecki@vacs.uwp.wisc.edu

or at:

David Kanecki
P.O. Box 93
Kenosha, WI 53140

------------------------------

Subject: Questionnaire
From: cs162faw@sdcc18.ucsd.EDU (Phaedrus)
Organization: University of California, San Diego
Date: 15 Nov 88 18:14:44 +0000


About two weeks ago I posted a request for Axon/Netset information; I'm
afraid my scope was much too small, considering I only received two
responses. I'm sorry to muck up the newsgroup, but I really do need this
information, and my posting disappeared after a week or so. If you've ever
used a neural network simulator, or if you have opinions regarding
representations, please respond. Provided below is a questionnaire
regarding Neural Networking/PDP.

Information from this questionnaire will be used to design a user
interface for an industrial neural network program which may perform any
of the traditional PDP problems (e.g., back propagation,
counter-propagation, constraint satisfaction, etc.). The program can
handle connections set up in any fashion (e.g., fully connected, feedback
connected, whatever), and it can also toggle between synchronous and
asynchronous modes.

What we're really interested in is what you feel is "hard" or "easy" about
neural net representations.

1. What type of research have you done ?

2. What type of research are you likely to do in the future ?

3. What is your programming background ?

4. What simulators have you used before ?
What did you like about their interfaces ?

5. Have you used graphical interfaces before ?
Did you like them ?
Do you think that you could use them for research-oriented problems ?
Why or why not ?

6. Do you prefer to work with numerical representations of
networks ? Weight matrices ? Connection Matrices ?
Why or why not ?

7. Would you like to use a graphical PDP interface if it could
craft complicated networks easily ? Why or why not ?

8. Do you foresee any difficulties you might have with graphical
interfaces ?

Any other comments in the same vein would be appreciated.

Your opinion is REALLY wanted, so please take 5 minutes and hit 'r-'!!!
Thank you,
James Shaw

[[ Editor's note: I almost hesitated to include this, since the
commercial overtones are a bit much. I personally would suggest that,
rather than blindly polling the ubiquitous net, Mr. Shaw should either
recruit some volunteers to do bona fide user-interface research or expend
some energy and identify some selected customers qua users.

On the other hand, his questionnaire provides some interesting items to
think about when considering ANN simulators. -PM ]]

------------------------------

Subject: Neural Nets and Search
From: zeiden@ai.cs.wisc.edu (Matthew Zeidenberg)
Date: 15 Nov 88 23:44:51 +0000

Does anyone know of any papers on neural network solutions to problems
involving heuristic search? I do not mean optimization problems such as
Traveling Salesman, although these are obviously related.

------------------------------

Subject: Neural nets and edge detection
From: sda@cs.exeter.ac.uk (Steven Dakin)
Date: 16 Nov 88 10:59:28 +0000


Recently I read an interesting article on applications of neural nets to
visual edge detection. Basically, it showed that single-layer nets cannot
be trained with a simple learning rule (e.g., the delta rule) to recognize
lines of various orientations. The problem is that a set of weights that
recognizes verticals destroys all the information about horizontality
inherent in the pattern of connectivity. The author proposed a solution
using hidden units.
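
A small numerical sketch of the effect being described: on a 3 x 3 "retina",
the three vertical and the three horizontal line patterns are not linearly
separable, so a single-layer unit trained with the delta/perceptron rule
never settles on a correct weight set. (The retina size and learning rate
here are illustrative assumptions, not taken from the article.)

import numpy as np

def line(orientation, k):
    # Return a 3x3 image containing one vertical or horizontal line, flattened.
    img = np.zeros((3, 3))
    if orientation == "v":
        img[:, k] = 1.0
    else:
        img[k, :] = 1.0
    return img.ravel()

X = np.array([line("v", k) for k in range(3)] + [line("h", k) for k in range(3)])
T = np.array([1, 1, 1, 0, 0, 0], dtype=float)   # 1 = vertical, 0 = horizontal

w, b, eta = np.zeros(9), 0.0, 0.1
for epoch in range(1000):                        # simple delta-rule updates
    for x, t in zip(X, T):
        o = float(w @ x + b > 0)
        w += eta * (t - o) * x
        b += eta * (t - o)

print("outputs:", (X @ w + b > 0).astype(int), "  targets:", T.astype(int))
# Each pixel lies on exactly one vertical and one horizontal line, so the
# summed score over the verticals always equals the summed score over the
# horizontals -- no single weight vector (plus bias) can separate them.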

So far so good.

However, some months on, I can't for the life of me remember the
reference or the author. I have a feeling it may be unpublished, but if
anyone has come across this piece, or indeed any related work in the
field, please mail me.

I'll compile and mail a list if there is enough response.

Thanks.

Steven Dakin (sda@uk.ac.exeter.cs)

------------------------------

Subject: Wanted: references to unsupervised learning
From: Jose A Ambros-Ingerson (Dept of ICS, UC Irvine)
<jose%ci7.ics.uci.edu@PARIS.ICS.UCI.EDU>
Date: Wed, 16 Nov 88 18:29:16 -0800

I'm currently writing a survey paper on the subject of unsupervised
learning. I intend to cover both traditional pattern classification
techniques and neural network approaches. I will, however, try to place
particular emphasis on biologically plausible models.

I would greatly appreciate any suggestions for reading material. To
reduce duplication, I include my present selections:

Gail A. Carpenter and Stephen Grossberg, Neural Dynamics of Category
   Learning and Recognition: Structural Invariants, Reinforcement and
   Evoked Potentials, in Pattern Recognition and Concepts in Animals,
   People and Machines, M. L. Commons, S. M. Kosslyn, and R. J.
   Herrnstein (eds.), Lawrence Erlbaum Associates, 1986.
Richard O. Duda and Peter E. Hart, Pattern Classification and Scene
   Analysis, John Wiley & Sons, 1973.
Gerald M. Edelman and George N. Reeke, Jr., Selective Networks Capable
   of Representative Transformations, Limited Generalizations, and
   Associative Memory, Proc. Natl. Acad. Sci., 79:2091-2095, 1982.
Brian Everitt, Cluster Analysis, second edition, Halsted Press, 1980.
Leif H. Finkel and Gerald M. Edelman, Interaction of Synaptic
   Modification Rules within Populations, Proc. Natl. Acad. Sci.,
   82:1291-1295, 1985.
Douglas H. Fisher, Knowledge Acquisition Via Incremental Conceptual
   Clustering, Machine Learning, 2:139-172, 1987.
K. Fukushima, Cognitron: A Self-organizing Multilayered Neural
   Network, Biol. Cybernetics, 20:121-136, 1975.
K. Fukushima, Neocognitron: A Self-organizing Neural Network Model
   for a Mechanism of Pattern Recognition Unaffected by Shift in
   Position, Biol. Cybernetics, 36:193-202, 1980.
Mark A. Gluck and James E. Corter, Information, Uncertainty, and the
   Utility of Categories, Proceedings of the Cognitive Science Society
   Conference, Lawrence Erlbaum, 1985.
S. Grossberg, Adaptive Pattern Classification and Universal Recoding:
   I. Parallel Development and Coding of Neural Feature Detectors,
   Biol. Cybernetics, 23:121-134, 1976.
S. Grossberg, Adaptive Pattern Classification and Universal Recoding:
   II. Feedback, Expectation, Olfaction, Illusions, Biol. Cybernetics,
   23:187-202, 1976.
Teuvo Kohonen, Self-Organization and Associative Memory,
   Springer-Verlag, 1984.
Chr. von der Malsburg, Self-Organization of Orientation Sensitive
   Cells in the Striate Cortex, Kybernetik, 14:85-100, 1973.
J. Pearson, L. Finkel, and G. Edelman, Plasticity in the Organization
   of Adult Cerebral Cortical Maps: A Computer Simulation Based on
   Neuronal Group Selection, The Journal of Neuroscience,
   7(12):4209-4223, 1987.
David E. Rumelhart and David Zipser, Feature Discovery by Competitive
   Learning, Cognitive Science, 9:75-112, 1985.
T. Y. Young and T. W. Calvert, Classification, Estimation and Pattern
   Recognition, Elsevier, 1974.

Thanks in advance,

Jose A. Ambros-Ingerson ArpaNet: jambros@ics.uci.edu
Dept. of Information and Computer Science Phone: (714) 856-7310
University of California
Irvine CA, 92717

------------------------------

Subject: Technical definition of trainability
From: jwang@cwsys2.CWRU.EDU (Jun Wang)
Date: Wed, 16 Nov 88 22:06:05 -0500

I am currently doing research on the theory and methodology of artificial
neural nets in a general setting, from a systems point of view. My
approach to the problem is through formalization, categorization and
characterization. I hope that a complete theory and methodology of neural
systems as a means of deriving decision rules can be developed, based on
in-depth analysis and synthesis. I have some elementary results in this
direction.

I am very interested in the trainability of neural nets. The following is
my definition of trainability, from a working paper under preparation. It
is a TeX file (I have made some modifications); I hope it is readable.

**********************************************************************
Definition 3.10 (Trainability): Given an architecture, a propagation
rule, and a learning rule, an artificial neural net (ANN) is trainable if
and only if a set of definite parameters $w$ can be obtained which
results in a minimum of error; precisely, an ANN is trainable iff

\forall \epsilon > 0, \exists T > 0, \exists w(T) \in W such that, for all
t >= T, || w(t + \delta t) - w(t) || <= \epsilon
and
E = \min \sum_{p=1}^{P} \mu_p || z^p - y(x^p, w(t)) ||_p,
where (x^p, z^p) \in S_{tr}.

An ANN is globally trainable if it is trainable under arbitrary
initial conditions. An ANN is globally and absolutely trainable if it is
globally trainable at the optimum parameters with respect to the given E(w),
i.e. \min_{w \in W} E(w(t)) = \sum_{p=1}^{P} || t^p - o^p(w(t)) ||_p = 0, or
\lim_{t \to \infty} E(w(t)) = 0.

*************************************************************************
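
As a rough illustration of the two conditions in this definition (parameter
convergence and error minimization), here is a toy numerical check in Python
for a one-weight linear unit trained by gradient descent on
E(w) = \sum_p (z^p - w x^p)^2. The data, learning rate, and tolerance are
assumptions chosen only for illustration.

import numpy as np

X = np.array([1.0, 2.0, 3.0])
Z = np.array([2.0, 4.0, 6.0])        # realizable (z = 2x), so min E = 0
eta, eps, w = 0.01, 1e-6, 0.0

for t in range(10000):
    grad = -2.0 * np.sum((Z - w * X) * X)    # dE/dw
    w_prev, w = w, w - eta * grad
    if abs(w - w_prev) <= eps:               # ||w(t+dt) - w(t)|| <= epsilon
        break

print("t =", t, " w =", w, " E =", np.sum((Z - w * X) ** 2))
# Both conditions hold: the weight change has fallen below eps and E is
# (numerically) at its minimum, here 0 -- so this toy ANN is trainable in
# the sense above, and in fact globally and absolutely trainable, since
# E -> 0 from any initial w.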

If anybody has comments or suggestions on this property of neural nets,
or knows of someone who has been working on this, please tell me via
e-mail or postal mail. Thanks.

Jun Wang
Dept. of Systems Engg.
Case Western Reserve Univ.
Cleveland, Ohio 44106
jwang@cwsys2.cwru.edu
jwang@cwcais.cwru.edu


------------------------------

Subject: Stanford Adaptive Networks Colloquium
From: netlist@psych.Stanford.EDU (Mark Gluck)
Date: Thu, 17 Nov 88 17:10:19 -0800

Stanford University Interdisciplinary Colloquium Series:
Adaptive Networks and their Applications

Nov. 22nd (Tuesday, 3:15pm)

**************************************************************************

Toward a model of speech acquisition: Supervised learning
and systems with excess degrees of freedom

MICHAEL JORDAN

E10-034C
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
<jordan@psyche.mit.edu>

**************************************************************************

Abstract

The acquisition of speech production is an interesting domain for the
development of connectionist learning methods. In this talk, I will focus
on a particular component of the speech learning problem, namely, that of
finding an inverse of the function that relates articulatory events to
perceptual events. A problem for the learning of such an inverse is that
the forward function is many-to-one and nonlinear. That is, there are many
possible target vectors corresponding to each perceptual input, but the
average target is not in general a solution.
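
A toy illustration of this point (not from the talk itself): below, a
two-joint planar arm stands in for the articulatory-to-perceptual map. Two
different joint configurations reach the same endpoint, but their average
does not, because the forward map is many-to-one and nonlinear.

import numpy as np

def forward(t1, t2):
    # Endpoint of a two-link planar arm with unit link lengths.
    return np.array([np.cos(t1) + np.cos(t1 + t2),
                     np.sin(t1) + np.sin(t1 + t2)])

a = (0.0,        np.pi / 2)            # elbow bent one way
b = (np.pi / 2, -np.pi / 2)            # elbow bent the other way
print(forward(*a), forward(*b))        # both reach approximately (1, 1)

avg = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
print(forward(*avg))                   # ~ (1.414, 1.414): misses the target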

I will argue that this problem is best resolved if targets are specified
implicitly with sets of constraints, rather than as particular vectors (as
in direct inverse system identification). Two classes of constraints are
distinguished---paradigmatic constraints, which implicitly specify inverse
images in articulatory space, and syntagmatic constraints, which define
relationships between outputs produced at different points in time. (The
latter include smoothness constraints on articulatory representations, and
distinctiveness constraints on perceptual representations). I will discuss
how the interactions between these classes of constraints may account for
two kinds of variability in speech: coarticulation and historical change.

**************************************************************************

Location: Room 380-380W, which can be reached through the lower level
between the Psychology and Mathematical Sciences buildings.

Technical Level: These talks will be technically oriented and are intended
for persons actively working in related areas. They are not intended
for the newcomer seeking general introductory material.

Information: To be added to the network mailing list, netmail to
netlist@psych.stanford.edu. For additional information,
contact Mark Gluck (gluck@psych.stanford.edu).

Upcoming talks:
Dec. 6: Ralph Linsker (IBM)

Co-Sponsored by: Departments of Electrical Engineering (B. Widrow) and
Psychology (D. Rumelhart, M. Pavel, M. Gluck), Stanford Univ.


------------------------------

End of Neurons Digest
*********************
