Neuron Digest   Tuesday, 22 Nov 1988                Volume 4 : Issue 28 

Today's Topics:
Re: Neuron Digest V4 #25 (music and nets)
Refs/opinions wanted -- Neural nets & approximate reasoning
RE: Relaxation Labelling
Re: Refs/opinions wanted -- Neural nets & approximate reasoning
Re: Neuron Digest V4 #26
Talk on ASPN at AI Forum 11-22-88
More on learning arbitrary transfer functions
Free Recurrent Simulator
FTPing full.tar.Z


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: Re: Neuron Digest V4 #25 (music and nets)
From: Stephen Smoliar <smoliar@vaxa.isi.edu>
Date: Fri, 18 Nov 88 18:14:14 -0800

I don't know if you want to publish this. I may be the only reader seeking
clarification. However, it seems relevant to the general issue of what we
expect to read in these digests.

[[ Editor's note: I'm including this, since I believe Stephen has
thoughtfully voiced many of my own concerns. I have not exercised editorial
rights since delaying CLAU's message about the quality of papers at
meetings. In fact, I am inclined to be lax about restrictions.

I am not fond of ad hominem arguments and implore submitters to consider
their words carefully. What do you readers feel? Should I exert more
control over content? On the broader issue, how can one cope with
information overload... even on a relatively specialized subject such as
Neural Nets?

On with Stephen's article... -PM ]]

***************************

Given my rather intense interest in music and my burgeoning curiosity
regarding interesting applications of neural nets, my attention has been
drawn to the recent exchange between Jamshed Bharucha and Eliot Handelman.
I, for one, read the Neuron Digest because of the second word in its name:
My stack of articles to read is already large enough without new items
getting pushed onto it whenever I wish to investigate a new path of
research. I find it very useful to have a service which distills recent
results and helps me to decide whether or not I want to read further in
the matter.

In this respect, I was most grateful to Eliot Handelman for taking the
trouble to review Bharucha's recent publication. When I first became aware
of Bharucha's work, I immediately asked myself if his papers should go into
competition with the others on my stack. On the basis of Handelman's
review, I decided in the negative. My impression was that there may
eventually be something in Bharucha's work which would interest me, but I
certainly did not have to rush to see what he had written. More likely, I
could wait until the material had undergone some refinement.

Having made this conclusion, I found myself somewhat distressed at
Bharucha's recent reply to Handelman. Given the current information
overload, I have very little sympathy for anyone who writes "I suggest you
take the time to read ALL my work, as well as the major sources I cite."

Having spent my graduate school years writing for a newspaper, I have
little sympathy for someone who cannot tell a story with a moderate amount
of brevity in such a way that the underlying message will stand by itself.
(I recently had to engage in this exercise in explaining the work of Gerald
Edelman, so I appreciate that this is no easy job.) Nevertheless, I tend to
hold stubbornly to the creed "If you can't write it right, don't write it
at all"; and, perhaps unreasonably, I expect to find this attitude shared
throughout my professional community. If Bharucha cannot provide us Digest
readers with a concise summary of his position as a defense against
Handelman's review, I suggest that he withdraw until he can muster the
words to do so.

The other observation which caused me some distress was the following
sentence in Bharucha's final paragraph: "It is not uncommon for musicians
to read more into the claims made by psychologists and computer scientists,
and probably vice versa, because of the different histories, code words and
writing styles of the different fields." This danger is, indeed, very
real; and the only way it can be avoided is for the WRITER to show proper
respect for the fact that his reader audience may be broader than usual.
To prepare a paper on the topic of music and not assume that there will be
musicians interested in it is sheer folly. My own thesis advisor was
particularly aware of this and wanted to make sure that anything I wrote
would be acceptable to a music professor who lacked a broad base of
computer expertise. Such a professor sat on my committee, and I welcomed
the challenge of his presence. If Bharucha lacks this ability to
communicate without the argot of his profession, I, for one, would just as
soon wait until this shortcoming has been remedied.

------------------------------

Subject: Refs/opinions wanted -- Neural nets & approximate reasoning
From: bradb@ai.toronto.edu (Brad Brown)
Organization: Department of Computer Science, University of Toronto
Date: 18 Nov 88 06:18:08 +0000


I am working on a paper which compares symbolic and neural
network approaches to approximate reasoning, including fuzzy
sets, probabilistic logic, and approximate solutions to
problems. I would very much appreciate references and
personal comments.

Given current hardware technology and current neural net
(NN) learning algorithms, NNs seem to have some desirable
properties that symbolic systems do not, but suffer from
implementation problems that prevent them from being useful
or efficient in many cases. A summary of my thinking, which
includes many generalizations and omits justification,
follows.

Neural network-based systems have advantages over symbolic
systems for the following reasons.

(1) For some classes of problems, NN learning algorithms
are known. In these cases, "programming" a NN is often
a matter of presenting it with training information and
letting it learn (see the sketch after this list).

Symbolic systems have more known algorithms and can be
applied to more problems than NNs, but constructing
symbolic programs is labour intensive. The resulting
programs are typically problem-specific.

(2) Neural nets can adapt to changes in their environment.
For instance, a financial expert system implemented as
a NN could use new information to modify its
performance over time to reflect changing market
conditions. Symbolic systems are usually either static
or require re-training on a substantial fraction of the
dataset to adapt to new data.

Neural nets are forgiving in their response to input.
Inputs that are similar are treated similarly. In
symbolic systems it is very difficult to give the
system a notion of what constitutes a "similar" input,
so input errors or input noise are big problems for
symbolic systems.

(3) NNs are good at constraint problems and have the
desirable property of finding good compromises when a
single best solution does not exist.

(4) NNs can deal with multiple sources of information. For
instance, a financial system could consider inputs from
both stock market information and internal company
sales information, which are not causally related. The
learning procedure can be expected to find weights that
"weigh" different kinds of evidence and judge
accordingly. Symbolic systems require extensive manual
tuning to be able to effectively use multiple
orthogonal sources of information.
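
[[ As an illustration of point (1): the following is a minimal,
purely hypothetical sketch in C of "programming by example" -- a
single linear unit trained with the LMS (delta) rule on made-up
data. It is not any of the systems discussed in this Digest. ]]

/* Toy "programming by example": one linear unit, LMS (delta) rule. */
#include <stdio.h>

#define NIN    2        /* number of inputs            */
#define NPAT   4        /* number of training patterns */
#define EPOCHS 2000
#define ETA    0.05     /* learning rate               */

int main(void)
{
    /* made-up examples of the target y = 0.1 + 0.5*x1 - 0.2*x2 */
    double x[NPAT][NIN] = { {0,0}, {0,1}, {1,0}, {1,1} };
    double t[NPAT]      = { 0.1, -0.1, 0.6, 0.4 };
    double w[NIN] = {0.0, 0.0}, bias = 0.0;
    int e, p, i;

    for (e = 0; e < EPOCHS; e++)
        for (p = 0; p < NPAT; p++) {
            double y = bias, err;
            for (i = 0; i < NIN; i++) y += w[i] * x[p][i];
            err = t[p] - y;                /* error on this example */
            for (i = 0; i < NIN; i++)      /* delta-rule update     */
                w[i] += ETA * err * x[p][i];
            bias += ETA * err;
        }
    printf("learned: w1=%.3f  w2=%.3f  bias=%.3f\n", w[0], w[1], bias);
    return 0;
}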

On the other hand, practical applications of NNs are held
back by

(1) Lack of well-understood training algorithms for many
tasks. Many interesting tasks simply cannot be solved
with NNs because no one knows how to train them.

(2) Difficulty in running neural nets on commercially
available hardware. Neural net simulations require
vast CPU and memory resources, so NN systems may not be
cost-effective compared to equivalent symbolic systems.

(3) Absence of an ability to easily explain why a
particular result was achieved. Because knowledge is
distributed throughout the network and there is no
concept of the network as a whole proceeding stepwise
toward a solution, explaining results is difficult.

All things considered, I am a believer in neural networks.
I see them as the "natural" way to make big advances towards
"human-level" intelligence, but the field is too new to be
applied to many practical applications right now. Symbolic
approaches draw on a more mature and complete base of
experience. Nevertheless, it is very difficult to get
symbolic systems to show some of the nice traits seen in
neural networks, like an ability to deal with noise and
approximate inputs and to produce good compromise solutions.
An interesting compromise would be the integration of neural
networks into symbolic reasoning systems, which has been
tried with some success by at least one expert system group.

-----------------------------------------------------------


Comments and criticisms on these thoughts would be greatly
appreciated. References to current work on neural networks
for approximate reasoning and comparisons between neural
networks and symbolic processing systems would also be very
much appreciated. Thank you very much for your time and
thoughts.


(-: Brad Brown :-)

bradb@ai.toronto.edu

------------------------------

Subject: RE: Relaxation Labelling
From: <EACARME%EBRUPC51.BITNET@CUNYVM.CUNY.EDU>
Date: Fri, 18 Nov 88 11:35:00 +0100

I studied the relationship between Relaxation Labelling and NN in a couple
of papers:

"Exploring three possibilities in network design: Spontaneous node
activity, node plasticity and temporal coding"
. In "Neural Computers",
edited by R. Eckmiller and C. von der Malsburg, NATO ASI Series F, Vol.41,
pp. 301-310, Springer-Verlag, 1988.

"Relaxation and neural learning: Points of convergence and divergence".
Journal of Parallel and Distributed Computing, to appear.

It turns out that many results (convergence, consequences of symmetric
compatibility/connectivity, etc.) have been proven independently in both
fields. In the papers, I show how both short-term NN functioning and
long-term learning can be formulated as relaxation labelling processes.
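
[[ For readers unfamiliar with relaxation labelling, the classical
updating rule of Rosenfeld, Hummel and Zucker has roughly the form
below (the textbook formulation, not necessarily the exact one used
in the papers above). Here p_i(lambda) is the confidence that object
i carries label lambda, and r_ij(lambda,mu) is the compatibility of
label lambda on i with label mu on j: ]]

    q_i(\lambda) = \sum_j \sum_{\mu} r_{ij}(\lambda,\mu)\, p_j(\mu)

    p_i^{(k+1)}(\lambda) =
        \frac{p_i^{(k)}(\lambda)\,[1 + q_i^{(k)}(\lambda)]}
             {\sum_{\mu} p_i^{(k)}(\mu)\,[1 + q_i^{(k)}(\mu)]}

[[ The p_i play the role of unit activations and the r_ij that of
(symmetric) connection weights, which is where the convergence
results of the two fields overlap. ]]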

You can contact me at the following address:

Carme Torras
Institut de Cibernetica (CSIC-UPC)
Diagonal 647
08028-Barcelona
SPAIN
e-mail: eacarme@ebrupc51.bitnet


------------------------------

Subject: Re: Refs/opinions wanted -- Neural nets & approximate reasoning
From: songw@csri.toronto.edu (Wenyi Song)
Organization: University of Toronto, CSRI
Date: 20 Nov 88 02:02:03 +0000

In the previous article, bradb@ai.toronto.edu (Brad Brown) writes:
>...
> On the other hand, practical applications of NNs are held
> back by
>...
> (3) Absence of an ability to easily explain why a
> particular result was achieved. Because knowledge is
> distributed throughout the network and there is no
> concept of the network as a whole proceeding stepwise
> toward a solution, explaining results is difficult.

It may remain difficult, if not impossible, to explain results of NN in
terms of traditional symbolic processing. However, this is not a drawback if
you do not attempt to unify them into a grand theory of AI :-)

An alternative is to explain the phenomenology in terms of the dynamics of
neural networks. It seems to me that this is the correct way to go. We
gain much better global predictability of information processing in neural
networks by trading off controllability of local quantum steps.

The Journal of Complexity devoted a special issue to neural computation
this year.

[[ Editor's note: Could someone give a reference to this journal and
specific issue? -PM]]

------------------------------

Subject: Re: Neuron Digest V4 #26
From: ganymede!tmb@wheaties.ai.mit.edu
Organization: The Moons of Jupiter (MIT Artificial Intelligence Lab)
Date: Sun, 20 Nov 88 04:14:26 -0500

In a previous issue, Michael R. Hall (Bell Communications Research)
writes:

> Valiant and friends have come up with theories of the sort you
> desire, but only for boolean concepts (binary y's in your notation)
> and learning algorithms in general, not neural nets in particular.
> "Graded concepts" are continuous. To my knowledge, no work has
> addressed the theoretical learnability of graded concepts. Before
> trying to come up with theoretical learnability results for neural
> networks, one should probably address the graded concept learning
> problem in general. The Valiant approach of a Probably Approximately
> Correct (PAC) learning criterion should be applicable to graded
> concepts.

I have been working on the problem of probably approximately correct
learning of real functions of real variables with respect to classes of
sample distributions, different noise models, and constraints (e.g.
smoothness constraints) on the functions.

Some of that work is published (in the form of an extended abstract) in
the proceedings of the 1988 conference of the INNS (Thomas Breuel:
"Problem Intrinsic Bounds on Sample Complexity"). The full paper
corresponding to the abstract and a collection of other results are in
preparation.
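
[[ For reference, the standard criterion of Valiant's framework (my
statement of the textbook definition, not a quotation from either
author): for every target concept c, sample distribution D, and
accuracy/confidence parameters epsilon, delta in (0,1), the learner
must, from a number of examples m polynomial in 1/epsilon and
1/delta, output a hypothesis h_S satisfying ]]

    \Pr_{S \sim D^m}\bigl[\, \mathrm{error}_D(h_S, c) \le \epsilon \,\bigr]
        \ge 1 - \delta

[[ where error_D(h,c) is the probability under D that h and c
disagree; for graded (real-valued) concepts the 0-1 disagreement
would be replaced by a suitable distance. ]]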


------------------------------

Subject: Talk on ASPN at AI Forum 11-22-88
From: aluko@Portia.Stanford.EDU (Stephen Goldschmidt)
Organization: Stanford University
Date: 21 Nov 88 19:05:53 +0000

I will be presenting an informal talk on an algorithm called ASPN which
synthesizes a network of polynomial elements to approximate a function
specified in terms of examples.

The most remarkable aspect of ASPN is that it chooses the network structure
and complexity based on the examples supplied. Also, the network produced
is independent of the order of the examples in the knowledge base.

ASPN has been successfully applied to problems in many areas, most recently
in flight control and guidance law design. It is a successor to, and an
improvement upon, the PNETTR family and the GMDH approaches of the past
30 years.
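
[[ I am not familiar with ASPN's internals; for readers who do not
know the GMDH tradition it builds on, a typical polynomial element
takes two inputs and computes a full quadratic in them, with
coefficients fit to the training examples by least squares. A
minimal sketch of such an element in C (illustrative only, not ASPN
itself): ]]

/* GMDH-style polynomial element: full quadratic in two inputs.
   The coefficients a[0..5] are assumed to have been fit to the
   training examples by least squares. */
double poly_node(const double a[6], double x1, double x2)
{
    return a[0]
         + a[1] * x1
         + a[2] * x2
         + a[3] * x1 * x2     /* cross term      */
         + a[4] * x1 * x1     /* quadratic terms */
         + a[5] * x2 * x2;
}

[[ Roughly speaking, GMDH-style procedures grow networks of such
nodes layer by layer, keeping the nodes that predict held-out
examples best, which is how the structure and complexity can be
chosen from the examples rather than fixed in advance. ]]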

I participated both in the implementation and application of this
little-known algorithm.

I will be speaking on Tuesday night, November 22, 1988, at 7pm at the monthly
AI forum at Bldg. 202 on Hannover St. in the Stanford Industrial Park in
Palo Alto. You can call my answering machine at (415)494-1748 or send
e-mail to aluko@portia.stanford.edu.

*-*-*- Stephen R. Goldschmidt -*-*-*

------------------------------

Subject: More on learning arbitrary transfer functions
From: leary#bob%a.sdscnet@SDS.SDSC.EDU
Date: Mon, 21 Nov 88 20:03:02 +0000

Recently Prof. Halbert White of the UCSD Economics Dept. gave a talk at an
ACM SIGAMS meeting. I quote from the flyer announcing the talk: "Professor
White will discuss his new paper which rigorously establishes that standard
multilayer feedforward networks with as few as one hidden layer using
arbitrary squashing functions are capable of approximating any Borel
measurable (i.e. realistic) function from one Euclidean space to another to
any desired degree of accuracy, provided that sufficiently many hidden
units are available. In this sense, Multi-layered feedforward networks are
a class of Universal Approximators."
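
[[ My paraphrase of the result: the approximating functions have the
familiar one-hidden-layer form, where sigma is any squashing function
(e.g. the logistic) and H is the number of hidden units: ]]

    f_H(x) = \sum_{i=1}^{H} \beta_i\, \sigma\!\bigl(w_i^{\top} x + b_i\bigr),
        \qquad x \in \mathbb{R}^n

[[ The theorem says that, as H grows, functions of this form can
approximate any Borel measurable function to any desired accuracy. ]]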


Bob Leary
San Diego Supercomputer Center
leary@sds.sdsc.edu

------------------------------

Subject: Free Recurrent Simulator
From: Barak.Pearlmutter@F.GP.CS.CMU.EDU
Date: 21 Nov 88 23:09:00 -0500

I wrote a bare bones simulator for recurrent neural networks in C. It
simulates a network of the sort described in "Learning State Space
Trajectories in Recurrent Neural Networks", and is named "full". Full
simulates only fully connected networks, uses only arrays, and has no
user interface at all. It was intended to be easy to translate into
other languages, to vectorize and parallelize well, etc. It vectorized
fully on the Convex on the first try with no source modifications.
Although it is short, it is actually usable and it works well.
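
[[ For the curious, the inner loop of such a simulator is
essentially a forward-Euler integration of fully connected dynamics
of roughly the following form. This is my sketch of the general
scheme only -- not the actual code of "full", nor necessarily its
exact dynamics: ]]

/* One forward-Euler step of a fully connected recurrent net:
       dy_i/dt = -y_i + sigma( sum_j w[j][i]*y[j] ) + I_i
   Sketch only; not the code of "full". */
#include <math.h>

#define N 10     /* number of units (arbitrary) */

static double sigma(double x) { return 1.0 / (1.0 + exp(-x)); }

void step(double y[N], double w[N][N], double I[N], double dt)
{
    double dy[N];
    int i, j;

    for (i = 0; i < N; i++) {          /* compute all derivatives... */
        double net = 0.0;
        for (j = 0; j < N; j++)
            net += w[j][i] * y[j];
        dy[i] = -y[i] + sigma(net) + I[i];
    }
    for (i = 0; i < N; i++)            /* ...then update in place    */
        y[i] += dt * dy[i];
}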

If you wish to use full, I'm allowing access to a compressed tar file
through anonymous ftp from host DOGHEN.BOLTZ.CS.CMU.EDU, user "ftpguest",
password "oaklisp", file "full/full.tar.Z". Be sure to use the BINARY
command, and don't use the CD command or you'll be sorry.

I am not going to support full in any way, and I don't have time to mail
copies out. If you don't have FTP access perhaps someone with access will
post full to the usenet, and perhaps some archive server somewhere will
include it.

Full is copyrighted, but I'm giving people permission to use it for
academic purposes. If someone were to sell it, modified or not, I'd be
really angry.

------------------------------

Subject: FTPing full.tar.Z
From: Barak.Pearlmutter@F.GP.CS.CMU.EDU
Date: 22 Nov 88 12:55:00 -0500

People have been having problems ftping full.tar.Z despite their avoiding
the CD command. The solution is to specify the remote and local file names
separately:

ftp> get
remote file: full/full.tar.Z
local file: full.tar.Z

For the curious, the problem is that when you type "get full/full.tar.Z" to
ftp, it tries to retrieve the file "full/full.tar.Z" from the remote host
and put it in the local file "full/full.tar.Z". If the directory "full/"
does not exist at your end you get an error message, and said message does
not say which host the file or directory does not exist on.
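
[[ Alternatively, most ftp clients also accept the local name as a
second argument to "get", which avoids the problem in a single
command (assuming your client supports this form): ]]

ftp> get full/full.tar.Z full.tar.Z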

Sorry for the inconvenience.

--Barak.

------------------------------

End of Neurons Digest
*********************
