Neuron Digest Friday, 19 May 1989 Volume 5 : Issue 23
Today's Topics:
Administrivia - PLEASE READ!
Back-prop vs. linear regression
Beginner's Books
Looking for Neural Net\Music Applications
Re: Looking for Neural Net/Music Applications
Re: Looking for Neural Net\Music Applications
Looking for Neural Net/Music Applications
Re: Looking for Neural Net/Music Applications
Re: Looking for Neural Net/Music Applications
Neural Net Applications (Weather Forecasting)
RE: Neuron Digest V5 #20
Position Available
request - Parallel Theorem Proving
SAIC's Bomb "Sniffer"
speech recognition
Texture Segmentation using the Boundary Contour System
Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).
------------------------------------------------------------
Subject: Administrivia - PLEASE READ!
From: "Neuron-Digest Moderator -- Peter Marvit" <neuron@hplabs.hp.com>
Date: Fri, 19 May 89 10:52:16 -0700
Several important topics:
1. As the academic year ends, many of your accounts may be disabled.
*PLEASE* tell me ahead of time. In general, if I receive too many "unable
to deliver" messages, I will unceremoniously remove your address from the
mailing list. Thus, if you don't get an issue for several weeks, you may
have been dropped.
2. Some folks have asked if an informal get-together could be held
during IJCNN in Washington this June. I've talked to the organizers and
they'll provide a room for a "Birds of a Feather" session. The problem?
Schedule. The conference is jam-packed. I suggest a (bring your own) lunch
meeting Monday. If you are interested in attending (or organizing) a BOF
for Neuron Digest subscribers, or one on some specialized topic, *PLEASE*
send me mail ASAP. If enough people (e.g., 15) answer, I'll do some
legwork; if lots (e.g., >40), then I'll need help.
3. Regarding the content of the Digest: I give priority to messages
sent directly to <neuron-request> by readers for inclusion. As many of you
know, all postings from the (unmoderated) USENET bulletin board
comp.ai.neural-nets are gatewayed to me; I then edit and include some of
them in Digest form. There has been a lot of activity there recently, and I
have not caught up as I had hoped, due to a disk failure. I will try to get
the backlog out, but I will continue to make messages sent directly to me
the priority. Digests will also (in general) be either all discussion or
all announcements.
4. Thank you, kind readers, for making this Digest so successful.
We now have over 680 addresses, many of which are redistribution points. We
have subscribers from all over the world! I look forward to getting more
postings from all of you! Comments and suggestions on format always
appreciated.
-Peter Marvit
<neuron-request@hplabs.hp.com>
------------------------------
Subject: Back-prop vs. linear regression
From: Brady@udel.edu
Date: Wed, 10 May 89 07:50:54 -0400
Can someone point me to sources describing how backpropagation differs from
linear regression?
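[[Editor's Response: A useful starting point: a network with no hidden
layer and a linear output unit, trained by gradient descent on squared
error, converges to the same weights as ordinary least-squares linear
regression. Back-propagation only buys you something new once you add
hidden layers of nonlinear units (and, with them, a non-convex error
surface). A minimal sketch in Python illustrating the equivalence; the
data are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # 100 samples, 3 inputs
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

    # Ordinary least squares: closed-form solution.
    w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

    # The same cost minimized by plain gradient descent, which is what
    # back-propagation reduces to when there are no hidden layers.
    w = np.zeros(3)
    for _ in range(5000):
        grad = X.T @ (X @ w - y) / len(y)    # gradient of half the MSE
        w -= 0.1 * grad

    print(w_ols, w)   # the two weight vectors agree to several decimals

For the general algorithm, see Rumelhart, Hinton & Williams (1986) in the
PDP volumes. -PM]]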
------------------------------
Subject: Beginner's Books
From: "Bryan Koen" <C484739@UMCVMB.MISSOURI.EDU>
Date: Thu, 11 May 89 13:45:55 -0500
Are there any GOOD beginner's texts out there on Neural nets?
Bryan Koen
C484739@UMCVMB.BITNET
[[Editor's Response: It depends on what you mean by "beginner." I don't know
of any which are suitable for junior high or unsophisticated high school readers.
However, I still regard the PDP series (Rumelhart and McClelland) as the
best overview of the field. We are seeing a blossoming of books, both
specialized and general, which cover many aspects of Neural Nets and related
topics. I would also say that the "best" beginning book might depend on the
field you're coming from. Other opinions? -PM]]
------------------------------
Subject: Looking for Neural Net\Music Applications
From: cs178wbp@sdcc18.ucsd.EDU (Slash)
Organization: University of California, San Diego
Date: Sat, 15 Apr 89 04:03:31 +0000
I am looking for neural network applications in music and/or references to
work that has been done in this area. I am about to begin work on a project
that will attempt to have a network learn how to do basic jazz
improvisation.
More specifically, I am interested in input representation techniques and
schemes for musical notation (i.e., bars, notes, rests, ties, triplets,
dots, etc.).
Any references to prior work in this area are welcome and will be greatly
appreciated.
E-mail Address : cs178wbp@icse4.ucsd.edu
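[[Editor's Note: On the representation question, here is one minimal (and
entirely hypothetical) encoding to make the idea concrete: each note event
becomes a fixed-length vector holding a one-hot pitch class, a scaled
octave, and a duration as a fraction of a whole note, so that a bar can be
presented to a net as a sequence of such vectors. A Python sketch; none of
this comes from published work:

    # Hypothetical note encoding: 12 one-hot pitch classes, plus
    # octave and duration -- 14 numbers per note event.
    PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                     "F#", "G", "G#", "A", "A#", "B"]

    def encode_note(pitch, octave, duration):
        vec = [0.0] * 12
        if pitch is not None:             # pitch None encodes a rest
            vec[PITCH_CLASSES.index(pitch)] = 1.0
        vec.append(octave / 8.0)          # scale octave into [0, 1]
        vec.append(duration)              # 0.25 = quarter note;
        return vec                        # a dotted quarter = 0.375

    # A bar of 4/4 is then a sequence of such vectors:
    bar = [encode_note("C", 4, 0.25), encode_note("E", 4, 0.25),
           encode_note("G", 4, 0.25), encode_note(None, 0, 0.25)]

Ties and triplets are the awkward cases; a quarter-note triplet, for
instance, gets duration 1/6 in this scheme. -PM]]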
------------------------------
Subject: Re: Looking for Neural Net/Music Applications
From: Richard Fozzard <fozzard@BOULDER.COLORADO.EDU>
Organization: University of Colorado, Boulder
Date: 18 Apr 89 18:31:17 +0000
Though this reference is for rock rather than jazz, I thought you (and the
newsgroup) might still be interested :-)
SEMINAR
Jimi Hendrix meets the Giant Screaming Buddha:
Recreating the Sixties via Back Propagation in Time
Garrison W. Cottrell
Department of Sixties Science
Southern California Condominium College
As the Sixties rapidly recede from the cultural memory, one
nagging problem has been the lack of a cultural milieu that could
produce another guitar player like Jimi Hendrix. Recent research
has shown that part of the problem has been the lack of high
microgram dosages of LSD due to the war on drugs. Recent
advances in neural network technology have provided legal ways to
artificially recreate and combine models of Hendrix and LSD in
recurrent PDP networks. The basic idea is to train a recurrent
back propagation network via Williams & Zipser's (1988) algorithm
to learn the map between musical scores of Hendrix music and the
hand movements as recorded in various movies. The network is
then given simulated doses of LSD and allowed to create new music
on its own.
The first component of the model, following Jordan (1988),
is to have the network learn a forward model of the motor
commands that drive robot hands which play a guitar. Usually
this is done by letting the network "babble", i.e., produce random motor
outputs, so that it learns the map between those outputs and the sound
produced. Once
the network has learned the motor-acoustic map, it may then be
exposed to environmental patterns corresponding to the desired
input-output map in acoustic space. Thus for example, the plan
vector for the network will be a representation of the score of
the Star Spangled Banner, presented a bar at a time. Over
several iterations on each bar, the teaching signal is Hendrix'
corresponding rendition[1]. Thus the model learns through the
Jimi Hendrix Experience. Once the model is trained, we now have
a neural network model, Jimi Matrix, that can sight read. We can
now see how Hendrix would have played the hits of today. One of
the first songs we plan to apply this to is the smash hit, "Don't
Worry, Be Happy".
It has long been suspected that the ability to produce maps
of this sort is due to some hidden degrees of freedom[2]. One
form of an extra degree of freedom is lysergic acid diethylamide,
better known as LSD. Current models of the effect of LSD only
produce simple forms of doubly periodic patterns on visual cortex
that correspond to so-called "form constants" perceived by people
while hallucinating (Ermentrout & Cowan, 1979). However, most of
these studies were done on subjects who only ingested .125
Haas[3]. Much more complicated, cognitive level hallucinations
occur at higher doses. In order to model the Giant Screaming
Buddha hallucination that occurs about 45 minutes after ingesting
1 Haas, new models are necessary. The basic idea is that 1 Haas
produces oscillations in association cortex that then feed back
on area 17, producing the visual sensation of the oft-reported
mythic figure. Applying this to the Jimi Matrix model, it is no
wonder that "six turned out to be nine" (Hendrix, 1967). By the
judicious introduction of simulated LSD into the Jimi Matrix
model, we obtain a "chaotic cognitive generator". We
estimate that with this technique, we can produce an album of
all-new Hendrix material every eight hours on a Sun-4.
____________________
[1]"Excess degrees of freedom" does not begin to describe this
map. Hence radical new techniques will be necessary. This is
another area where simulated LSD comes in.
[2]As with hidden units, the big question is where people hide their
extra degrees of freedom. Our research suggests that Hendrix' were
actually hidden in guitar strings pre-soaked in lysergic acid. This
accounts for his habit of playing guitar with his tongue, and destroying
the evidence afterward.
[3]The Haas is a unit of acid dose. 1 Haas == 6 hits, or about
750 micrograms.
Richard Fozzard
University of Colorado "Serendipity empowers"
fozzard@boulder.colorado.edu
------------------------------
Subject: Re: Looking for Neural Net\Music Applications
From: baggi@icsi.berkeley.edu (Denis L. Baggi)
Organization: Postgres Research Group, UC Berkeley
Date: Sat, 29 Apr 89 04:37:07 +0000
I am doing something somewhat related, and I enclose a description of the
state of the project as of a few months ago - excuse the introductory tone;
it's the abstract of a talk:
[[ Editor's Note. Abstract omitted for brevity. Readers should consult
Neuron-Digest Vol. 5 #4 (15 Jan 89) for the original, plus my commentary on
the talk. -PM]]
My network does not improvise solo lines, but generates what a jazz pianist,
bassist and drummer improvise from a harmonic grid. Thus the only problems
of notation I have are those related to harmony: e.g., Cm7, F#7(-5), etc.
One of your problems has to do with the fact that in jazz, by definition,
notated music has no meaning; the music exists only in the instant it is
being played. One could argue that's a truism, but in classical music, in
a certain sense, the notation identifies the music, while in jazz it is only
the instant: i.e., the notation for jazz is the record - since 1917 - just
as the canvas is the medium for painting, and NOT the score.
As for previous work in the area, I know only of David Levitt's, Christopher
Fry's and David Wessel's - the latter two not well published. None uses
connectionist models; the first two use LISP flavors and the last uses C++.
I am at anybody's disposal for further information.
Denis Baggi
International Computer Science Institute, Berkeley
University of California, Berkeley
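[[Editor's Note: Since harmonic grids keep coming up in this thread, here
is one hypothetical way chord symbols such as Cm7 or F#7(-5) could be
turned into network input: a 12-element pitch-class vector marking the
chord tones. A Python illustration only, not Mr. Baggi's actual scheme:

    PITCH_CLASSES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                     "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

    # Chord qualities as semitone intervals above the root.
    QUALITIES = {
        "m7":    [0, 3, 7, 10],    # minor seventh:  Cm7 = C Eb G Bb
        "7":     [0, 4, 7, 10],    # dominant seventh
        "7(-5)": [0, 4, 6, 10],    # dominant seventh, flatted fifth
    }

    def encode_chord(root, quality):
        vec = [0.0] * 12
        for interval in QUALITIES[quality]:
            vec[(PITCH_CLASSES[root] + interval) % 12] = 1.0
        return vec

    encode_chord("C", "m7")        # marks C, Eb, G, Bb
    encode_chord("F#", "7(-5)")    # marks F#, A#, C, E

One such vector per beat (or per bar) then gives a net its harmonic
context. -PM]]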
------------------------------
Subject: Looking for Neural Net/Music Applications
From: androula@cb.ecn.purdue.edu (Ioannis Androulakis)
Organization: Purdue University Engineering Computer Network
Date: Tue, 09 May 89 05:46:08 +0000
I apologize for this posting, since it is actually a question addressed to
Jerry Ricario, concerning one of his postings a long time ago. It is about
the work he is doing attempting to have a NN learn how to do basic jazz
"improvisation." My question is the following: how do you define
"improvisation" and, once you do that, what do you mean by "learn how to
improvise"? I believe that improvisation is not the output of some neurons
that learned how to do something. What I do not understand is what you
expect the network to learn. If we are ever able to construct a network
that can improvise as a human does, then we will have achieved much more
than improvisation itself. Who knows, this way we may be able to
"construct" a new Chopin or a Liszt, both masters of improvisation...
Thank you, and once again I apologize, although I will be waiting for any
answer, since I happen to be interested in both AI and music.
yannis
androula@helium.ecn.purdue.edu
------------------------------
Subject: Re: Looking for Neural Net/Music Applications
From: chank@cb.ecn.purdue.edu (King Chan)
Organization: Purdue University Engineering Computer Network
Date: Tue, 09 May 89 17:30:47 +0000
[[Regarding the question of learning and improvisation]]
I am aware of AI applications to musical composition. Specifically,
research at MIT produced interesting model-based composition programs
for jazz, rock, and ragtime. This was on exhibit at Chicago's Museum of
Science and Industry.
There is a possibility of learning even for improvisation.
Music can be considered as a collection of primitives, patterns of
which make a piece of music. The learning aspect can be spoken of
as the ability to pass a judgement on such a piece as being aesthetically
appealing to a musician or not. It is this judgement that allows an
adaptive approach to the development of music. The judgement is the
part of the musician's knowledge that needs to be learned by the program
if it is to make any good improvisations.
QED
KING CHAN
(chessnut)
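[[Editor's Note: The "learned judgement" idea above amounts to training a
critic on phrases a musician has labeled appealing or not, then using the
critic to filter candidates in a generate-and-test loop. A minimal Python
sketch with a plain perceptron as the critic; the phrase encoding is left
abstract, and everything here is hypothetical:

    import random

    def train_critic(phrases, labels, dim, epochs=100, lr=0.1):
        # Perceptron learning: phrases are feature vectors, labels
        # are +1 ("appealing") or -1 ("not appealing").
        w = [0.0] * dim
        for _ in range(epochs):
            for x, t in zip(phrases, labels):
                s = sum(wi * xi for wi, xi in zip(w, x))
                if s * t <= 0:          # misclassified: adjust weights
                    w = [wi + lr * t * xi for wi, xi in zip(w, x)]
        return w

    def improvise(w, dim, tries=1000):
        # Generate-and-test: keep only phrases the critic approves of.
        keep = []
        for _ in range(tries):
            x = [random.uniform(-1, 1) for _ in range(dim)]
            if sum(wi * xi for wi, xi in zip(w, x)) > 0:
                keep.append(x)
        return keep

Whether a linear critic can capture a musician's taste is, of course,
exactly the open question raised in the previous message. -PM]]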
------------------------------
Subject: Re: Looking for Neural Net/Music Applications
From: lwyse@cochlea.usa (Wyse)
Organization: Boston University Center for Adaptive Systems
Date: Wed, 10 May 89 18:54:13 +0000
Two exciting publications are coming this year: Computer Music Journal (MIT
Press) and INTERFACE (a journal of research in music; sorry, publisher
unknown) are both devoting special issues to neural networks and music.
INTERFACE will have a more "systems-theoretic" flavor.
------------------------------
Subject: Neural Net Applications (Weather Forecasting)
From: cs178wbg@sdcc18.ucsd.EDU (-___^___-)
Organization: University of California, San Diego
Date: Mon, 17 Apr 89 04:47:42 +0000
We are currently investigating the future possibilities of incorporating
a neural network (possibly back propagation) for weather forecasting. We
are still in the early stages of programming, and have not yet decided
which simulator would be most appropriate for this project. We are
interested in any previous or present work done on this particular subject.
Your replies will be greatly appreciated.
Please e-mail your response to cs178wbg@sdcc18.ucsd.edu.
Thank you,
Ian M. Dacanay
Rodel Agpaoa
------------------------------
Subject: RE: Neuron Digest V5 #20
From: GEURDES%HLERUL55.BITNET@CUNYVM.CUNY.EDU
Date: Tue, 02 May 89 15:42:00 +0700
I (J.F. Geurdes) am interested in the biochemistry of cognition. When I was
a student I wrote a doctoral thesis on the subject of 'quantum biochemistry
of arginine-9-vasopressine', a study in which molecular parameters like net
atomic charge and gross conformation were correlated with the effectivity of
substituents of Arg-VP (effectivity data were obtained from experiments of
De Wied, an authority on the subject of neuropeptides). I regretted quitting
this type of research.
The main conclusion of my study was that the subject is terribly difficult
but equally interesting. A preliminary conclusion could be drawn, however:
the electrostatic picture of the 'tail' of this peptide seems to be of some
importance in the binding to 'memory intermediate' receptors in the brain.
I am eager to hear what your reference (Perth, was it?) has to say on the
subject.
Greetings
J.F. Geurdes
------------------------------
Subject: Position Available
From: plunkett@daimi.dk (Kim Plunkett)
Organization: DAIMI: Computer Science Department, Aarhus University, Denmark
Date: Thu, 27 Apr 89 15:11:52 +0000
The Institute of Psychology, University of Aarhus, Denmark, is announcing a
new position at the Associate Professor level. Applicants should document
research within psychology or cognitive science involving the relation
between information and computer technology and psychological processes.
Qualifications within the latter area - the relation between computer
technology and psychology - will be given special consideration.
For further details, please contact Dr. Kim Plunkett:
psykimp@dkarh02.bitnet
(Deadline for receipt of applications: June 2nd, 1989)
------------------------------
Subject: request - Parallel Theorem Proving
From: rawlins%etive.edinburgh.ac.uk@NSFnet-Relay.AC.UK
Date: Sat, 13 May 89 13:54:19 -0000
I am interested in hearing about any work that is being done in the area of
automatic theorem proving using massively parallel systems. Besides the
work of Derthick at CMU and of Ballard at Rochester, are there any other
novel approaches?
If anyone is actively researching this area, perhaps we could exchange
ideas.
Thanks,
Pete Rawlins.
email: rawlins@ed.ac.uk
------------------------------
Subject: SAIC's Bomb "Sniffer"
From: john%mpl@ucsd.edu (John McInerney)
Date: Wed, 03 May 89 10:18:11 -0700
I just saw this reported in EE Times:
Neural nose to sniff out explosives at JFK airport
Santa Clara, Calif.-- The Federal Aviation Administration next month will
install the first neural-based bomb detector at New York's JFK International
Airport.
Later, the article quotes an FAA spokesman as saying, "in line with
the basic premise the FAA is trying to follow of getting the human being out
of the loop."
I find the above very interesting, especially in contrast to Winston's
plenary talk at AAAI 87 where he said that he would not trust a neural
network in a nuclear power plant. (I hope I am not misquoting him.) My
feeling at the time was that a neural network is exactly the kind of system
that you would want. Instead of having the system die with "IBSLOG SYSTEM
ERROR SYSTAT *.* CORE DUMPED; EXPECTED '= ON INPUT LINE 17" the net would do
something "more reasonable." What the net would do might not be exactly what
a human operator might do, but it is certainly better than crashing because
that specific input had never been tested before.
Like many others I am concerned about the second statement above regarding
"getting the human being out of the loop." In this case there seems to be a
problem with the amount of luggage that goes through these systems and the
small probability of ever finding anything. Hopefully the machine can
deal with the tedium much better than a human. I guess I feel uneasy with a
fully automatic system, given the inherent unreliability of hardware and
software (even nets!), in such life-and-death situations.
John McInerney
john%cs@ucsd.edu
[[Editor's Note: In fact, quite a bit of work is being done in England
(University of Glastonberry?) on an "artificial nose." While most of the
applications (and funding) have been in food and tobacco, the general
principles appear as sound as any. The physical setup is a number of
different sensors (the chemistry of which still eludes me) which provide
variable electric signals when presented with "odors." By combining the
output of these sensors in a Neural Network, a reasonably unique signature
is obtained by way of traditional signal recognition techniques. The setup
has been described as "biologically inspired," though certainly not as
complex as Walter Freeman's models.
If someone could provide contacts, or a specific reference, I would
appreciate it. Otherwise, I'll try to dig out the name of the speaker from
that University. I will leave the ethical question of implementing the
system described in the message above to readers' discussion. -PM]]
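[[Editor's Note, continued: In outline, the "nose" is pattern
classification on a vector of sensor readings. A minimal Python sketch,
assuming a hypothetical four-sensor array and simple nearest-signature
matching in place of a trained network (the signature values are
invented):

    import math

    # Hypothetical reference signatures: mean response of each sensor
    # to a known odor, recorded during calibration.
    SIGNATURES = {
        "coffee":  [0.9, 0.1, 0.4, 0.7],
        "tobacco": [0.2, 0.8, 0.6, 0.1],
    }

    def classify(reading):
        # Return the known odor whose signature lies closest (in
        # Euclidean distance) to the current sensor reading.
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(SIGNATURES, key=lambda k: dist(SIGNATURES[k], reading))

    print(classify([0.8, 0.2, 0.5, 0.6]))    # -> "coffee"

A network replaces the fixed signature table with learned weights, which
is presumably where the robustness to drift and to odor mixtures comes
from. -PM]]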
------------------------------
Subject: speech recognition
From: HAHN_K%DMRHRZ11.BITNET@CUNYVM.CUNY.EDU
Date: Thu, 04 May 89 13:58:50 +0700
I'm looking for references concerning connectionist speech recognition,
early stages (phoneme recognition, feature extraction) as well as later
processing (word recognition etc.). I'd appreciate any pointers.
Thanx,
Klaus.
Klaus Hahn Bitnet: HAHN_K@DMRHRZ11
Department of Psychology
Gutenbergstr. 18
University of MARBURG
D-3550 Marburg
West-Germany
------------------------------
Subject: Texture Segmentation using the Boundary Contour System
From: dario@techunix.BITNET (Dario Ringach)
Organization: Technion - Israel Inst. Tech., Haifa Israel
Date: Wed, 12 Apr 89 05:49:36 +0000
How can the Boundary Contour System segment textures with identical
second-order statistics? I mean, first-order differences are easily
detected by the "contrast sensitive" cells at the first stage of the BCS
(the OC filter), while the CC-loop can account for second-order (dipole)
statistics; but how can the BCS segment textures, such as the ones
presented by Julesz [1], which have identical third-order statistics as
well but are easily discriminable? Is the BCS/FCS model consistent with
Julesz's texton theory? If so, in which way?
Thanks!
Dario Ringach
dario@techunix.bitnet
[1] B. Julesz and J. R. Bergen, "Textons, the Fundamental Elements in
Preattentive Vision and Perception of Textures," The Bell System
Technical Journal, Vol. 62, No. 6, p. 1619, 1983.
------------------------------
End of Neuron Digest
*********************