Neuron Digest   Saturday, 12 Jan 1991                Volume 7 : Issue 4 

Today's Topics:
Regarding Brain Size and Sulci
neural nets for fingerprint recognition
Applications summary.
Summary of medical/astronomical imaging w/ANNs
Connectionist Simulators
Transputers for neural networks?
Kohonen's Network again
Call for Papers: Uncertainty in AI 91
Machine learning workshop: Addendum to call for papers
Call for papers


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: Regarding Brain Size and Sulci
From: UAP001%DDOHRZ11.BITNET@CUNYVM.CUNY.EDU
Date: Wed, 19 Dec 90 10:28:56 +0700


On the matter of brain size and number of sulci: exceptions (e.g.,
Gauss) have been reported, but there is no clear pattern. A century ago,
measuring the brains of famous savants was popular, but the practice
went out of fashion when it turned out that geniuses tended to fall
within the normal range.

Incidentally, although we don't know the surface area of their cortex,
Neanderthals probably had slightly larger brains than ours.

**************************************************************
* C.R. Cavonius                   BITNET:uap001@ddohrz11     *
* Inst. f. Arbeitsphysiologie     (Note: uap-zero-zero-one,  *
* an der Universitaet Dortmund    not uap-oh-oh-one)         *
* Ardeystr. 67                    Tel: +49 231 1084 261      *
* D-4600 Dortmund 1, F.R. Germany Fax: +49 231 1084 308      *
**************************************************************


------------------------------

Subject: neural nets for fingerprint recognition
From: oflazer%TRBILUN.BITNET@CUNYVM.CUNY.EDU
Date: Mon, 24 Dec 90 18:25:00 +0200

Hello,

I would appreciate any pointers to work on recognition of fingerprints
using a neural network approach.

Thanks

Kemal Oflazer

oflazer@trbilun.bitnet (BITNET)

oflazer%trbilun.bitnet@cunyvm.cuny.edu (INTERNET)

------------------------------

Subject: Applications summary.
From: yuhas@faline.bellcore.com (Ben Yuhas)
Date: Wed, 26 Dec 90 15:16:02 -0500

I am trying to gather a list of neural network applications that have
become, or are about to become, commercial products. At NIPS we heard
about such work from FORD and Sarnoff labs; I would appreciate any other
examples that any of you are aware of.

Ben

yuhas@bellcore.


------------------------------

Subject: Summary of medical/astronomical imaging w/ANNs
From: Denis Anthony <esrmm@cu.warwick.ac.uk>
Organization: Computing Services, Warwick University, UK
Date: Sat, 29 Dec 90 15:31:51 +0000

[[ Editor's Note: My thanks to Denis for providing this summary. My
frequent theme is for readers who request information to share the
results of their search with the Digest so all may benefit. Readers who
post what they're doing may find kindred spirits in unlikely places as
well! -PM ]]

Peter

You stated in the Digest that you would like a summary of my search on
medical/astronomical imaging using neural nets. I give below short notes
on the small number of replies I have had, and a short summary of what I
am doing.

I am not sure how useful the notes below are, but the names of
interested users may be of some interest.

N.B. I have only medical replies here.

Denis

1. Peter J.B. Hancock (pjh@cs.stir.ac.uk)

Starting to look at MRI brain data, with a view to describing the
lesions caused by accidents, and eventually trying to predict the
long-term recovery of the patient.


2. Paul Marquis (pmarquis@jade.tufts.edu (Internet) PTMARQUI@TUFTS (BITNET))

I'm a graduate student at Tufts University and am currently considering
using neural networks as an approach to Magnetic Resonance.

3. Henk D. Davids (hd%nl.philips.cft.philtis%uucp.phigate@nl.nluug.hp4nl)

We are working with medical images too. I would love to see more
application of nn technology to medical image data discussed here. It
seems to me that there are some problems specific to this kind of
processing: the large input space ... I think that there must be
sufficient interest to warrant discussion on Usenet rather than taking
this type of application into a private corner.

4. Nathan Combs. (ncombs%fandago.ctc.tasc.com%fandago@net.uu.uunet)
Steve Stone (STONES@edu.wsu.csc.wsuvm1)
Rajiv Sarathy ( INTERNET sarathy@gpu.utcs.utoronto.ca)

The above all expressed interest in ANNs and medical imaging.


5. Denis Anthony (esrmm@cu.warwick.ac.uk)

Previously working on classification of lung scintigrams using neural
networks.

Now working on the inverse problem of ultrasound tomography using nns.

References below may be emailed to any interested parties.


%Q Anthony D.M, Hines E.L, Taylor D, and Barham J
%D 1989
%T A Study of Data Compression using Neural Networks and Principal Component
Analysis
%J IEE Colloquium on Biomedical Applications of Digital Signal Processing
%P 2/1-2/4

%Q Anthony D.M, Hines E.L, Taylor D, and Barham J
%D 1990
%T The Use of Genetic Algorithms to Learn the Most Appropriate Inputs to a
Neural Network
%J IASTED Conf. Artificial Intelligence App. Neural Networks

%Q Anthony D.M, Hines E.L, Taylor D, and Barham J
%D 1990
%T The Use of Neural Networks to Classify Lung Scintigrams
%J IASTED Conference on Applied Informatics
%P 240-242

%Q Anthony D.M, Hines E.L, Taylor D, and Barham J
%D 1989
%T An Investigation into the Use of Neural Networks for an Expert System in
Nuclear Medicine Image Analysis
%J IEE Conference on Image Processing
%P 338-342

%Q Anthony D.M, Hines E.L, Taylor D, and Barham J
%D 1990
%T The Use of Neural Networks in Classifying Lung Scintigrams
%J INNS Int. Neural Networks Conference
%V 1
%P 71-74


%Q Anthony D.M
%D 1990
%T Reducing Connectivity in Compression Networks
%J Neural Network Review

%Q Chiu W.C, Anthony D.M, Hines E.L, Forno C, Hunt R, and Oldfield S
%D 1990
%T Selection of the Optimal MLP Net for Photogrammetric Target Processing
%J IASTED Conf. Artificial Intelligence App. Neural Networks

------------------------------

Subject: Connectionist Simulators
From: Kim Daugherty <kimd@gizmo.usc.edu>
Date: Mon, 07 Jan 91 17:12:13 -0800

[[ Editor's Note: Once again, I'm reminded of the need for a "canonical"
list of neural network simulators. I've been meaning to assemble one,
but doubt that I'll have enough free time to do so. Besides the list
below, there's also the PDP Volume 3 code, Mactivation, and many other
public domain or low-cost programs. In addition, there is a host of
commercial products. Does anyone keep such a "complete list" which they
would share with the Digest? -PM ]]

Last November, I posted a request for connectionist modeling simulators
to the mailing list. I would like to thank those who responded.
Following is a list and brief description of several simulators:

1. Genesis - An elaborate X Windows simulator that is particularly well
suited for modeling biological neural networks.

unix> telnet genesis.cns.caltech.edu (or 131.215.135.185)
Name: genesis

Follow directions there to get an ftp account from which you
can ftp 'genesis.tar.Z'. This contains the Genesis source and
several tutorial demos. NOTE: There is a fee to become a
registered user.

2. PlaNet (AKA SunNet) - A popular connectionist simulator created by
Yoshiro Miyata, with versions that run under SunTools, X Windows,
and non-graphics terminals. The SunTools version is not supported.

unix> ftp boulder.colorado.edu (128.138.240.1)
Name: anonymous
Password: ident
ftp> cd pub
ftp> binary
ftp> get PlaNet5.6.tar.Z
ftp> quit
unix> zcat PlaNet5.6.tar.Z | tar xf -

All you need to do to try it is to type:
unix> Tutorial

This will install a program appropriate for your environment
and start an on-line tutorial. If you don't need a tutorial,
just type 'Install' to install the system and then
'source RunNet' to start it. See the file README for more
details.

The 60-page User's Guide has been split into three separate
postscript files so that each can be printed from a printer
with limited memory. Print the files doc/PlaNet_doc{1,2,3}.ps
from your postscript printer. See the doc/README file for
printing the Reference Manual.

Enjoy!! And send any questions to miyata@boulder.colorado.edu.

3. CMU Connectionist Archive - There is a lisp backprop simulator in the
connectionist archive.

unix> ftp b.gp.cs.cmu.edu (or 128.2.242.8)
Name: ftpguest
Password: cmunix
ftp> cd connectionists/archives
ftp> get backprop.lisp
ftp> quit

4. Cascade Correlation Simulator - There are LISP and C versions of a
simulator based on Scott Fahlman's Cascade Correlation algorithm.
Fahlman wrote the LISP version; the C version was written by
Scott Crowder. (A sketch of the algorithm's candidate score
follows this list.)

unix> ftp pt.cs.cmu.edu (or 128.2.254.155)
Name: anonymous
Password: (none)
ftp> cd /afs/cs/project/connect/code
ftp> get cascor1.lisp
ftp> get cascor1.c
ftp> quit

A technical report describing the Cascade Correlation algorithm
may be obtained as follows:

unix> ftp cheops.cis.ohio-state.edu (or 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get fahlman.cascor-tr.ps.Z
ftp> quit
unix> uncompress fahlman.cascor-tr.ps.Z
unix> lpr fahlman.cascor-tr.ps

5. Quickprop - A variation of the back-propagation algorithm developed by
Scott Fahlman. LISP and C versions can be obtained from the same
directory as the cascade correlation simulator above. (A sketch
of the quickprop update also follows this list.)
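
For readers who want the flavor of cascade correlation before fetching
the report: the algorithm adds hidden units one at a time, and each
candidate unit is trained to maximize the magnitude of the covariance
between its output and the network's residual errors. Below is a minimal
Python sketch of that candidate score, reconstructed from the published
description; it is an illustration, not the distributed code.

import numpy as np

def candidate_score(v, e):
    """Cascade-correlation candidate score (sketch, not Fahlman's code).

    v : (n_patterns,) candidate unit's output for each training pattern
    e : (n_patterns, n_outputs) residual error at each output unit

    Returns S = sum over outputs o of
      |sum over patterns p of (v[p] - mean(v)) * (e[p,o] - mean(e[:,o]))|.
    Candidates are trained by gradient ascent on S, then frozen and
    installed as new hidden units.
    """
    return np.abs((v - v.mean()) @ (e - e.mean(axis=0))).sum()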
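
Quickprop, in outline, treats the error curve for each weight as an
independent parabola and jumps toward that parabola's minimum using the
current and previous slopes. Here is a simplified per-weight update,
again sketched from Fahlman's published description; the epsilon and mu
defaults are illustrative assumptions, not the distributed simulator's
settings.

def quickprop_step(slope, prev_slope, prev_step, epsilon=0.5, mu=1.75):
    """One quickprop update for a single weight (simplified sketch).

    slope, prev_slope : dE/dw at the current and previous epoch
    prev_step         : the weight change applied at the previous epoch
    epsilon           : plain gradient-descent rate
    mu                : maximum growth factor for any single step
    """
    if prev_step == 0.0:
        return -epsilon * slope           # no history yet: gradient step
    denom = prev_slope - slope
    if denom == 0.0 or abs(slope) > mu * abs(denom):
        step = mu * prev_step             # cap the jump at mu * prev_step
    else:
        step = slope / denom * prev_step  # jump to the parabola's minimum
    if slope * prev_slope > 0.0:
        step -= epsilon * slope           # gradient term while slopes agree
    return step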


Kim Daugherty
kimd@gizmo.usc.edu

------------------------------

Subject: Transputers for neural networks?
From: "
Tom Tollenaere " <ORBAN%BLEKUL13.BITNET@CUNYVM.CUNY.EDU>
Date: Wed, 09 Jan 91 12:25:45 +0000

[[ Editor's Note: As usual, I certainly hope that Tom summarizes the
results he gets and posts them to the Digest. Of course, readers with
relevant information are free to post directly to the Digest
(neuron-request@hplabs.hpl.hp.com) as well as sending a copy to Tom. -PM]]

Hi netters,

I am compiling a technical report on scientific/industrial use of
transputers and similar parallel computers (Intel iPSC, NCube, Cosmic
Cube, etc.) for neural network research and applications. If you happen
to be involved in both neural networks and parallel computers, please
get in touch with me.

I'm interested in:
* What machine do you use, and in what language do you program it?
* What kind of networks do you run on the machine? Feed-forward things
  like perceptrons and backprop, or more dynamic things like Hopfield
  and Kohonen networks?
* How did you do it? That is, how did you handle parallelism?
* Are you happy? That is, how well does the application run? Do you have
  any performance/efficiency measures?
* What do you use it for? That is, do you run simulations for basic
  network research? Are you involved with research on a particular
  application and do you use the parallel machine just because it's
  fast? Do you have an industrial application? If so, do you sell it?
  At what price? Happy clients?

The idea is to get an overview of what people are doing with neural nets
and parallel computers, how they are doing it, and how well it goes.
When the report is finished, I'll post a note on the net, so anyone
interested can get a copy.

I'm looking forward to a massive response, especially from the U.S. and
from industrial users. Please don't post to the bulletin board; contact
me directly.

Cheers,

tom


Tom TOLLENAERE
Laboratorium voor Neuro en Psychofysiologie
Katholieke Universiteit Leuven
Campus Gasthuisberg
Herestraat 49
B-3000 Leuven
BELGIUM - EUROPE

email : ORBAN at blekul13.bitnet or blekul13.earn
fax : 32 16 21 59 93
phone : 32 16 21 59 60
Acknowledge-To: <ORBAN@BLEKUL13>

------------------------------

Subject: Kohonen's Network again
From: JJ Merelo <jmerelo@ugr.es>
Date: 10 Jan 91 12:52:00 +0200

I am working on the Kohonen network, and I have had a lot of trouble
trying to find the correct parameters k1 and k2 for the learning
algorithm. Does anyone know how to find them?

Besides, convergence of the learning procedure is guaranteed by the
decreasing nature of the alpha gain factor. But is it guaranteed to
converge to the right vectors? Other clustering algorithms (e.g.,
k-means) do not stop until the cluster mean vectors have converged, and
I think this is more correct.
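
(For concreteness, a minimal Python sketch of the kind of update rule in
question, with one assumed form of the gain schedule; published accounts
parameterize the alpha decay in several ways, so the k1 and k2 below are
only illustrative.)

import numpy as np

def som_pass(weights, data, t, k1=0.9, k2=1000.0, radius=2):
    """One pass of a 1-D Kohonen map over the training data (sketch).

    weights : (n_units, dim) float array of reference vectors, updated
              in place
    t       : global step counter, incremented once per input vector
    k1, k2  : gain-schedule parameters; alpha(t) = k1 * exp(-t / k2) is
              an assumed form, not the only published one
    radius  : half-width of the (here fixed) neighborhood
    """
    for x in data:
        alpha = k1 * np.exp(-t / k2)   # decreasing gain factor
        win = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
        lo, hi = max(0, win - radius), min(len(weights), win + radius + 1)
        weights[lo:hi] += alpha * (x - weights[lo:hi])  # pull toward x
        t += 1
    return t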

By the way, is anybody working on Kohonen's network? I have seen it
quoted thousands of times, but the quotes are always of the same papers
by Kohonen himself. I don't know of anybody who has got a Kohonen net
working (maybe Aleksander, as he says in his book, but that is the only
one). I think it *must* work, but I have got mixed results. Besides, it
is boring to keep trying new parameters.

I hope to get some help,

JJ Merelo
Dpto. de Electronica y Sistemas Informaticos
Facultad de Ciencias
Campus Fuentenueva, s/n
18071 Granada (Spain)
e-mail JMERELO@UGR.ES

------------------------------

Subject: Call for Papers: Uncertainty in AI 91
From: dambrosi@kowa.CS.ORST.EDU
Date: Fri, 21 Dec 90 12:33:02 +0000

THE SEVENTH CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
UCLA, Los Angeles
July 13-15, 1991 (Preceding AAAI)


The seventh annual Conference on Uncertainty in AI is concerned with the
full gamut of approaches to automated and interactive reasoning and
decision making under uncertainty including both quantitative and
qualitative methods.

We invite original contributions on fundamental theoretical issues, on
the development of software tools embedding approximate reasoning
theories, and on the validation of such theories and technologies on
challenging applications. Topics of particular interest include:

- Foundations of uncertainty
- Semantics of qualitative and quantitative uncertainty representations
- The role of uncertainty in automated systems
- Control of reasoning; planning under uncertainty
- Comparison and integration of qualitative and quantitative schemes
- Knowledge engineering tools and techniques for building approximate
reasoning systems
- User Interface: explanation and summarization of uncertain information
- Applications of approximate reasoning techniques

Papers will be carefully refereed. All accepted papers will be included
in the proceedings, which will be available at the conference. Papers
may be accepted for presentation in plenary sessions or poster sessions.

Five copies of each paper should be sent to the Program Chair by March 4,
1991. Notification of acceptance will be sent by April 22, 1991. Final
camera-ready papers, incorporating reviewers' comments, will be due by
May 10, 1991. There will be an eight-page limit on the camera-ready copy
(with a few extra pages available for a nominal fee).

Program Co-Chairs:

Bruce D'Ambrosio                 Philippe Smets
Dept. of Computer Science        IRIDIA
303 Dearborn Hall                Universite Libre de Bruxelles
Oregon State University          50 av. Roosevelt, CP 194-6
Corvallis, OR 97331-3202 USA     1050 Brussels, Belgium
tel: 503-737-5563                tel: +322.642.27.29
fax: 503-737-3014                fax: +322.642.27.15
e-mail: dambrosio@CS.ORST.EDU    e-mail: R01501@BBRBFU01.BITNET


General Chair:

Piero Bonissone
General Electric
Corporate Research and Development
1 River Rd., Bldg. K1-5C32a, 4
Schenectady, NY 12301
tel: 518-387-5155
fax: 518-387-6845
e-mail: bonisson@crd.ge.com


Program Committee: Piero Bonissone, Peter Cheeseman, Max Henrion, Henry
Kyburg, John Lemmer, Tod Levitt, Ramesh Patil, Judea Pearl, Enrique
Ruspini, Ross Shachter, Glenn Shafer, Lotfi Zadeh.


------------------------------

Subject: Machine learning workshop: Addendum to call for papers
From: Lawrence Birnbaum <birnbaum@fido.ils.nwu.edu>
Date: Tue, 08 Jan 91 14:54:23 -0600


ADDENDUM TO CALL FOR PAPERS
EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING
NORTHWESTERN UNIVERSITY
EVANSTON, ILLINOIS
JUNE 27-29, 1991


We wish to clarify the position of ML91 with respect to the issue of multiple
publication. In accordance with the consensus expressed at the business
meeting at ML90 in Austin, ML91 is considered by its organizers to be a
specialized workshop, and thus papers published in its proceedings may overlap
substantially with papers published elsewhere, for instance at IJCAI or AAAI.
The sole exception is with regard to publication in future Machine Learning
Conferences. Authors who are concerned by this constraint will be given the
option of foregoing publication of their presentation in the ML91 Proceedings.

The call for papers contained information concerning seven of the eight
individual workshops that will make up ML91. Information concerning the final
workshop follows.

Larry Birnbaum
Gregg Collins

Northwestern University
The Institute for the Learning Sciences
1890 Maple Avenue
Evanston, IL 60201
(708) 491-3500

-------------------------------------------------------------------------------

COMPUTATIONAL MODELS OF HUMAN LEARNING

This workshop will foster interaction between researchers concerned with
psychological models of learning and those concerned with learning systems
developed from a machine learning perspective.

We see several ways in which simulations intended to model human learning and
algorithms intended to optimize machine learning may be mutually relevant.
For example, the way humans learn and the optimal method may turn out to be
the same for some tasks. On the other hand, the relation may be more
indirect: modeling human behavior may provide task definitions or constraints
that are helpful in developing machine learning algorithms; or machine
learning algorithms designed for efficiency may mimic human behavior in
interesting ways.

We invite papers that report on learning algorithms that model or are
motivated by learning in humans or animals. We encourage submissions that
address any of a variety of learning tasks, including category learning, skill
acquisition, learning to plan, and analogical reasoning. In addition, we hope
to draw work from a variety of theoretical approaches to learning, including
explanation-based learning, empirical learning, connectionist approaches, and
genetic algorithms.

In all cases, authors should explicitly identify 1) in what ways the system's
behavior models human (or animal) behavior, 2) what principles in the
algorithm are responsible for this, and 3) the methods for comparing the
system's behavior to human behavior and for evaluating the algorithm. A
variety of methods have been proposed for computational psychological models;
we hope the workshop will lead to a clearer understanding of their relative
merits. Progress reports on research projects still in development are
appropriate to submit, although more weight will be given to projects that
have been implemented and evaluated. Integrative papers providing an analysis
of multiple systems or several key issues are also invited.

WORKSHOP COMMITTEE

Dorrit Billman (Georgia Tech)
Randolph Jones (Univ. of Pittsburgh)
Michael Pazzani (Univ. of California, Irvine)
Jordan Pollack (Ohio State Univ.)
Paul Rosenbloom (USC/ISI)
Jeff Shrager (Xerox PARC)
Richard Sutton (GTE)

SUBMISSION DETAILS

Papers should be approximately 4000 words in length. Authors should submit
seven copies, by March 1, 1991, to:

Dorrit Billman
School of Psychology
Georgia Institute of Technology
Atlanta, GA 30332
phone (404) 894-2349

Formats and deadlines for camera-ready copy will be communicated upon
acceptance.



------------------------------

Subject: Call for papers
From: RAKESH@IBM.COM
Date: Wed, 09 Jan 91 11:34:17 -0500

CALL FOR PAPERS

Progress In Neural Networks
Special Volume on Neural Networks In Vision

Significant progress has been made recently in the application of neural
networks to computational vision. To showcase this research, Ablex Publishing
is planning a special volume on "Neural Networks in Vision", scheduled for
1992. This volume will be a part of "Progress in Neural Networks", an annual
book series reviewing research in the modelling, analysis, design and
application of neural networks.

Authors are invited to submit original manuscripts detailing recent progress
in neural networks for vision. Papers should be tutorial in nature,
self-contained, and preferably, but not necessarily, about fifty
double-spaced pages in length. An abstract and an outline are due by
January 31, 1991, and the full paper by February 28, 1991. Make
submissions to


Rakesh Mohan
Associate Volume Editor
IBM Thomas J. Watson Research Center
PO Box 704
Yorktown Heights, NY 10598

email: rakesh@ibm.com


------------------------------

End of Neuron Digest [Volume 7 Issue 4]
***************************************
