Neuron Digest	Sunday,  2 Oct 1988		Volume 4 : Issue 9 

Today's Topics:
1988 Tech Report
Connectionism and Spatial Reasoning
scaling in neural networks
Request for help with ART 2 implementation
proc. INNS
Analog Vs. Digital Weights
Re: temporal domain in vision
Re: Neuron Digest V4 #6 (Proceedings)


Send submissions, questions, mailing list maintenance and requests for back
issues to "Neuron-request@hplabs.hp.com"

------------------------------------------------------------

Subject: 1988 Tech Report
From: jam@bu-cs.bu.edu (Jonathan Marshall)
Date: Fri, 16 Sep 88 14:22:16 -0400


The following material is available as Boston University Computer Science
Department Tech Report #88-010. It may be obtained from rmb@bu-cs.bu.edu
or by writing to Regina Blaney, Computer Science Dept., Boston Univ., 111
Cummington St., Boston, MA 02215, U.S.A. I think the price is $7.00.
-----------------------------------------------------------------------

SELF-ORGANIZING NEURAL NETWORKS
FOR PERCEPTION OF VISUAL MOTION

Jonathan A. Marshall

ABSTRACT

The human visual system overcomes ambiguities, collectively known as
the aperture problem, in its local measurements of the direction in
which visual objects are moving, producing unambiguous percepts of
motion. A new approach to the aperture problem is presented, using an
adaptive neural network model. The neural network is exposed to
moving images during a developmental period and develops its own
structure by adapting to statistical characteristics of its visual
input history. Competitive learning rules ensure that only connection
``chains'' between cells of similar direction and velocity sensitivity
along successive spatial positions survive. The resultant
self-organized configuration implements the type of disambiguation
necessary for solving the aperture problem and operates in accord with
direction judgments of human experimental subjects. The system not
only accommodates its structure to long-term statistics of visual
motion, but also simultaneously uses its acquired structure to
assimilate, disambiguate, and represent visual motion events in
real-time.
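
For readers who want to experiment with the general flavor of this
approach, here is a minimal sketch of a plain winner-take-all competitive
learning update in Python/NumPy. It is not Marshall's actual model; the
layer sizes, learning rate, and random input vectors are illustrative
assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_cells = 16, 4            # assumed sizes, for illustration only
W = rng.random((n_cells, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit-length weight vectors
rate = 0.05                          # assumed learning rate

for _ in range(1000):
    x = rng.random(n_inputs)
    x /= np.linalg.norm(x)
    winner = np.argmax(W @ x)        # cell whose weights best match the input
    # Only the winning cell moves its weights toward the input; losers are
    # unchanged, so cells gradually specialize on distinct input clusters.
    W[winner] += rate * (x - W[winner])
    W[winner] /= np.linalg.norm(W[winner])

The abstract above describes competition over connection "chains" across
successive spatial positions, a much richer structure than this toy
single-layer update, but the specialize-by-winning principle is the same.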
------------------------------------------------------------------------

I am now at the Center for Research in Learning, Perception, and
Cognition, 205 Elliott Hall, University of Minnesota, Minneapolis,
MN 55414. I can still be reached via my account jam@bu-cs.bu.edu .

--J.A.M.


------------------------------

Subject: Connectionism and Spatial Reasoning
From: prassler@lan.informatik.tu-muenchen.dbp.de (Erwin Prassler)
Date: Thu, 22 Sep 88 18:06:32 +0000

To people working or interested in the field of representation of
large-scale space and spatial reasoning!

I'm a member of an AI and Cognitive Science group at the Technical University
of Munich, West-Germany, working on connectionist models for spatial reasoning
processes. I'm currently working on a parallel processing model for
cognitive-map-based path/route-finding.
Since I am planning a research visit to the United States I am looking for
people working on similar topics, who might be interested in a collaboration.
I expect to be financially independent through a six-month scholarship from
the German Academic Exchange Service.

Some personal data:

Name:
Erwin Prassler
Education:
Technical University of Munich
Diploma in Computer Science, 1985
Address:
Department of Computer Science
Technical University of Munich
Arcisstr.21
D-8000 Munich 2
West-Germany
e-mail:
unido!tumult!prassler@uunet.UU.NET

P.S. If anybody out there is interested, I could mail a copy of an extended
abstract that I have submitted to SGAICO-88 in Zurich.

------------------------------

Subject: scaling in neural networks
From: Alex.Waibel@SPEECH2.CS.CMU.EDU
Date: Thu, 22 Sep 88 14:13:06 -0400


Below is the abstract of a paper describing our recent research addressing
the problem of scaling in neural networks for speech recognition. We show
that by exploiting the hidden structure (previously learned abstractions)
of speech in a modular way and applying "connectionist glue", larger, more
complex networks can be constructed at only small additional cost in
learning time and complexity. The resulting recognition performance is as
good as or better than that of comparable monolithically trained nets, and
as good as that of the smaller network modules. This work was performed at
ATR Interpreting Telephony Research Laboratories, in Japan.

I am now working at Carnegie Mellon University, so you may request copies
from me here or directly from Japan.
From CMU:

Dr. Alex Waibel
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213
phone: (412) 268-7676
email: ahw@speech2.cs.cmu.edu

From Japan, please write for technical report TR-I-0034
(with CC to me), to:

Ms. Kazumi Kanazawa
ATR Interpreting Telephony Research Laboratories
Twin 21 MID Tower,
2-1-61 Shiromi, Higashi-ku,
Osaka, 540, Japan
email: kddlab!atr-la.atr.junet!kanazawa@uunet.UU.NET
Please CC to: ahw@speech2.cs.cmu.edu

-------------------------------------------------------------------------

Modularity and Scaling in Large Phonemic Neural Networks
Alex Waibel, Hidefumi Sawai, Kiyohiro Shikano
ATR Interpreting Telephony Research Laboratories

ABSTRACT

Scaling connectionist models to larger connectionist systems is difficult,
because larger networks require increasing amounts of training time and
data and the complexity of the optimization task quickly reaches
computationally unmanageable proportions. In this paper, we train several
small Time-Delay Neural Networks aimed at all phonemic subcategories
(nasals, fricatives, etc.) and report excellent fine phonemic
discrimination performance for all cases. Exploiting the hidden structure
of these smaller phonemic subcategory networks, we then propose several
techniques that allow us to "grow" larger nets in an incremental and
modular fashion without loss in recognition performance and without the
need for excessive training time or additional data. These techniques
include {\em class discriminatory learning, connectionist glue,
selective/partial learning and all-net fine tuning}.

A set of experiments shows that stop consonant networks (BDGPTK)
constructed from subcomponent BDG- and PTK-nets achieved up to 98.6%
correct recognition compared to 98.3% and 98.7% correct for the component
BDG- and PTK-nets. Similarly, an incrementally trained network aimed at
{\em all} consonants achieved recognition scores of 95.9% correct. These
results were found to be comparable to the performance of the subcomponent
networks and significantly better than several alternative speech
recognition strategies.
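
As a rough illustration of the modular growing idea (not the authors' TDNN
implementation), the sketch below freezes the hidden weights of two
pretrained sub-networks, adds a few new trainable "glue" hidden units, and
trains only the glue and output weights. All sizes, the random toy data,
and the plain logistic/gradient training are assumptions of this sketch.

import numpy as np

rng = np.random.default_rng(0)
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_glue, n_out = 20, 8, 4, 6        # assumed sizes
W_bdg = rng.normal(size=(n_hid, n_in))          # frozen: stand-in "BDG" module
W_ptk = rng.normal(size=(n_hid, n_in))          # frozen: stand-in "PTK" module
W_glue = 0.1 * rng.normal(size=(n_glue, n_in))  # new trainable glue units
W_out = 0.1 * rng.normal(size=(n_out, 2 * n_hid + n_glue))  # new output layer

def forward(x):
    h = np.concatenate([sigmoid(W_bdg @ x),    # frozen hidden activations
                        sigmoid(W_ptk @ x),
                        sigmoid(W_glue @ x)])  # trainable glue activations
    return h, sigmoid(W_out @ h)

rate = 0.1
for _ in range(500):                     # toy training loop on random data
    x = rng.normal(size=n_in)
    target = np.zeros(n_out)
    target[rng.integers(n_out)] = 1.0
    h, y = forward(x)
    err = (y - target) * y * (1 - y)     # logistic output delta, squared error
    g = h[2 * n_hid:]                    # glue activations
    delta_g = (W_out[:, 2 * n_hid:].T @ err) * g * (1 - g)
    W_out -= rate * np.outer(err, h)     # only the NEW weights are updated;
    W_glue -= rate * np.outer(delta_g, x)  # the pretrained modules stay frozen

The point of the design is that the expensive, already-learned feature
detectors are reused as-is, so the added training cost scales with the
small number of new weights rather than with the whole network.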

------------------------------

Subject: Request for help with ART 2 implementation
From: <DBIGWOOD%UMDARS.BITNET@CUNYVM.CUNY.EDU> (Doug Bigwood)
Date: Fri, 23 Sep 88 12:04:00 -0400

I have implemented an artificial neural system based on Carpenter and
Grossberg's ART 2 (Adaptive Resonance Theory) architecture as described in
Applied Optics, Vol. 26, No. 23, Dec. 1, 1987.

I have two problems. The first is that I cannot reproduce the results of
one of their experiments, shown in figure 8 of the paper. Specifically, I
need to set the vigilance parameter very high, about .998, rather than .95,
in order to get the result in 8(a). The second problem is that I can't
derive equation (19), which describes changes in the bottom-up LTM traces.
The derivation of equation (18) for the top-down traces is straightforward.
The only way I can get (19) is by setting zJi equal to ziJ (and they are
not equal).

I would appreciate any help, advice, explanations, code examples, etc. for
either of these problems. I am most concerned about the vigilance problem
because the net I have now is too sensitive to changes in the vigilance
parameter.

Thanks in advance.

Doug Bigwood
Lockheed-EMSCO
dbigwood@umdars.umd.edu [Internet]
dbigwood@umdars [Bitnet]

P.S. The issue of Applied Optics cited above contains several excellent
papers on neural networks.

[[ I seem to remember a fellow last Spring who was working on ART 1; I've
heard rumours that Grossberg was going to distribute his implementation of
ART 2 along with his recent book, but I have no independent confirmation
and have not seen it yet. If Stephen or one of his colleagues is reading,
perhaps the questions above could be answered? -PM]]
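
For readers following this thread who have not worked with ART, here is a
toy illustration of how sensitive an ART-style vigilance test can be. It
deliberately uses a simplified binary match measure closer in spirit to
ART 1 than to the ART 2 equations under discussion; the patterns and the
match ratio are assumptions of this sketch, not the Applied Optics
formulation.

import numpy as np

def vigilance_accepts(x, w, rho):
    # Match ratio |x AND w| / |x|: the fraction of the input's active
    # features that are shared with the category prototype w.
    match = np.logical_and(x, w).sum() / max(x.sum(), 1)
    return match >= rho

x = np.array([1, 1, 0, 1, 0, 0])   # input pattern (3 active features)
w = np.array([1, 1, 0, 0, 0, 0])   # category prototype (shares 2 of them)
for rho in (0.5, 0.7, 0.95):
    print(f"rho={rho}: accept={vigilance_accepts(x, w, rho)}")
# The decision flips between rho=0.5 and rho=0.7 because the match ratio
# here is 2/3; a network whose match values cluster near the chosen rho
# will be similarly touchy about small changes in the vigilance parameter.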

------------------------------

Subject: proc. INNS
From: mike@bucasb.bu.edu (Michael Cohen)
Date: Sun, 25 Sep 88 12:23:26 -0400

I am confused. I thought proceedings of this year's first INNS conference
(Sept. 6-10, 1988, at the Park Plaza Hotel) were not available: the Society
published abstracts only, to save money. However, each talk is available on
audio cassette, and the plenary talks, tutorials, and perhaps selected
others are available on VCR from Commonwealth Video.
Michael Cohen ---- Center for Adaptive Systems
Boston University (617-353-7857)
Email: mike@bucasb.bu.edu
Smail: Michael Cohen
Center for Adaptive Systems
Department of Mathematics, Boston University
111 Cummington Street
Boston, Mass 02215

[[ My apologies. My citation in the last Digest was indeed for the IEEE
ICNN, not the INNS meeting in September. I guess I'll need to mind my P's
and Q's and RTFM, PDQ! An entry below and the next digest contain similar
useful corrections, with a few additional insights. -PM]]

------------------------------

Subject: Analog Vs. Digital Weights
From: borgstrm@icarus.eng.ohio-state.edu (Tom Borgstrom)
Date: Sun, 25 Sep 88 21:13:25 +0000

I am interested in finding performance/capacity comparisons between neural
networks that use discrete synaptic weights and those that use continuous
valued weights.

I have one reference: "The Capacity of the Hopfield Associative Memory", by
R.J. McEliece, E.C. Posner, et al.; IEEE Transactions on Information
Theory, Vol. IT-33, No. 4, July 1987. The authors claim to "only lose 19
percent of capacity by ... three level quantization."
Is this true? Has
anyone else done hardware/software simulations to verify this?

Please reply by e-mail; I will post a summary if there is a large enough
response.

-=-
Tom Borgstrom |borgstrm@icarus.eng.ohio-state.edu
The Ohio State University|...!osu-cis!tut!icarus.eng.ohio-state.edu!borgstrm
2015 Neil Avenue |
Columbus, Ohio 43210 |

[[ Obviously, many simulators are using discrete, though real-valued,
weights, since their platform is a digital computer. Your question is
intriguing, however, in cases of extreme values. What of weights which are
either zero or one (two levels)? At what point is discretization too much?
I suspect that biological verisimilitude requires far more gradations for
weights. I suspect also that the degree to which you can use a few discrete
values depends on the application and architecture. Comments, readers?
-PM]]
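
As a starting point for the kind of simulation asked about above, here is
a toy Python/NumPy sketch: it stores random patterns in a Hopfield net with
Hebbian outer-product weights, quantizes the weights to three levels by
taking their sign, and checks whether the stored patterns remain stable
under both weight sets. The network size, pattern count, and synchronous
update rule are illustrative assumptions, not the setup of McEliece et al.

import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 6                             # neurons, stored patterns (assumed)
patterns = rng.choice([-1, 1], size=(m, n))

W = (patterns.T @ patterns).astype(float) # Hebbian outer-product weights
np.fill_diagonal(W, 0)                    # no self-connections
W3 = np.sign(W)                           # three-level quantization: -1, 0, +1

def recall(weights, p, steps=10):
    s = p.copy()
    for _ in range(steps):                # synchronous threshold updates
        s = np.where(weights @ s >= 0, 1, -1)
    return s

for name, weights in (("continuous", W), ("three-level", W3)):
    stable = sum(np.array_equal(recall(weights, p), p) for p in patterns)
    print(f"{name}: {stable}/{m} stored patterns recalled exactly")

With only six patterns in a 100-unit net, both versions will typically
recall everything; the interesting comparison comes from raising the number
of stored patterns toward capacity and watching where each version begins
to fail.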

------------------------------

Subject: Re: temporal domain in vision
From: dmocsny@uceng.UC.EDU (daniel mocsny)
Date: 26 Sep 88 13:21:46 +0000

I have received some e-mail on the work of Richmond and Optican, including
a reference to one of their publications and requests for said reference.
My mailer could not reply to one of these requests (hello Toshitera Homma)
so I post it here.

] From edelman@wheaties.ai.mit.edu Fri Sep 16 13:43 EDT 1988
] You may want to read the following:
] Temporal encoding of two-dimensional patterns by single
] units in primate inferior temporal cortex,
] J. Neurophysiology, 57 (1), Jan. 1987
] Shimon Edelman
] Center for Biological Information Processing
] Dept. of Brain and Cognitive Sciences, MIT

Thanks, Shimon.

Dan Mocsny

------------------------------

Subject: Re: Neuron Digest V4 #6
From: pastor@bigburd.PRC.Unisys.COM (Jon Pastor)
Date: Mon, 26 Sep 88 19:43:33 +0000

I'm sure that this will be noticed by others, but there were two requests
posted for two different sets of proceedings. The information given was
correct for the proceedings of the ICNN conference in San Diego, but one
of the requests specifically asked about the INNS conference in Boston
(6-10 September).

I spent a good deal of time talking with representatives of INNS *and*
Pergamon Press (the publishers of the journal Neural Networks, including
the special issue containing the abstracts for INNS). There are no plans to
publish proceedings, and the reason is financial. INNS wished to keep the
cost of the conference down, so as to make it accessible to as many researchers
and students as possible. The INNS board decided that the inclusion of
proceedings would have increased the cost of conference registration by an
unacceptable amount (let's say, $90, based on the ICNN Proceedings costs).
While making the proceedings available at an additional charge would seem to
have been a viable alternative, the economics of publishing are such that this
was ruled out (INNS would have had to print some number of copies with no
guaranteed sales, and at a higher per-unit cost due to the smaller print run).
There was talk of publishing some of the papers in one or more special issues
of Neural Networks, but nothing definite.

I would like to see proceedings. However, unless INNS and Pergamon can be
convinced that neither of them will be left holding a lot of expensive
inventory, it is unlikely that either of them will be willing to incur the
production and editorial costs. If there are any members of the INNS board
reading this newsgroup, I would be interested in hearing what the break-even
level for printing proceedings would be, and in finding out whether a
sufficient number of *prepaid* orders would be a sufficient incentive for
pursuing the issue.

INNS is a young organization, and not yet a wealthy one. Attempting to place
the conference within the financial reach of people who are not on company
expense accounts is laudable (it so happens that I attended INNS on my own
funds this year...), but I am not convinced that it's worth the lack of
proceedings.

[[ Although I did not attend INNS, alas, I was also dismayed at the lack of
published proceedings. As a future entry will point out, proceedings are
often *the* way of getting information from important gatherings. Even as
it stood, the abstracts were apparently quite hefty. The mechanics of
journals and proceedings, as noted recently in one of the physics journals,
can threaten to undermine proper information dissemination. A reasonable
alternative to the "nice printed/bound" proceedings would be "on-demand"
printing; a printing house would create a cheaply bound photocopy *per
order* rather than warehousing in hopes of future sales. I'd bet that we'd
pay for slightly less glossy covers, if we could have the contents. -PM ]]

------------------------------

End of Neurons Digest
*********************
