Neuron Digest	Tuesday, 18 Sep 1990		Volume 6 : Issue 54 

Today's Topics:
why models that have similar illusions are useful (re: S. Lehar)
Anyone using Edelman's theories?
INNS/SIG Washington Technical Meeting
request for data
Natural Language Parsing
Request for replies
ML91 -- THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING
NN For Knowledge Representation and Inference
TR - Acquiring Verb Morphology in Children and Connectionist Nets
Reinforcement Learning -- Special Issue of Machine Learning Journal
ANNA-91 Conference
Tech Report Available - ID3 vs. BackProp
NN AND VISION -IJPRAI-special issue


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
Use "ftp" to get old issues from hplpm.hpl.hp.com (15.255.176.205).

------------------------------------------------------------

Subject: why models that have similar illusions are useful (re: S. Lehar)
From: "David A. Honig" <honig@ICS.UCI.EDU>
Date: Mon, 10 Sep 90 15:21:18 -0700


Steve Lehar (Boston University, Boston, MA) wrote:

"The BCS/FCS is a very interesting model mostly because it does not
just try to perform image processing with neural techniques, but
actually attempts to duplicate the exact neural architecture used by
the brain. The model is based not only on neurophysiological
findings, but much of the model is directly based on visual illusions:
things that the human eye sees that aren't really there! The idea is
that if we can model the illusions as well as the vision, then we will
have a mechanism that not only does the same job as the eye, but does
it the same way as the eye does.

Imagine if you were given a primitive pocket calculator, and asked to
figure out how it works without taking it apart. Giving it
calculations like 1+1= will not make you any the wiser. When you ask
it to compute (1/3)*3=, however, you will learn not only how it works,
but also how it fails. The BCS/FCS is the only model that can explain
a wide range of psychophysical phenomena such as neon color spreading,
pre-attentive perceptual grouping, mach bands, brightness and color
illusions, illusory boundaries and illusory motions of various sorts."


Connectionists interested in this reasoning, and in the important
relationship between functionality, algorithm, and implementation, and
how these should be analyzed, might want to read David Marr's book,
_Vision_ (WH Freeman & Co, 1982).

------------------------------

Subject: Anyone using Edelman's theories?
From: anumolu@cis.uab.edu (Vivek Anumolu)
Date: Mon, 10 Sep 90 21:15:29 -0500

Hello fellow NN researchers,

There has been some recent excitement about Gerald Edelman's theory of
Neuronal Group Selection. If the theory holds so much promise, why don't
more researchers work on it? Specifically, I'm interested in
references to any type of analysis or application of this theory by
folks other than the Edelman group.

Thanks.
Vivek Anumolu
anumolu@cis.uab.edu /* INTERNET address */

------------------------------

Subject: INNS/SIG Washington Technical Meeting
From: Gary Fleming <72260.2544@compuserve.com>
Date: 11 Sep 90 09:30:49 -0400


INTERNATIONAL NEURAL NETWORK SOCIETY
GREATER WASHINGTON AREA SPECIAL INTEREST GROUP

The INNS/SIG Washington is pleased to announce a technical
meeting to be held at 7:00 pm on 19 September 1990 in the Lipsett
Amphitheater in the Clinical Center, National Institutes of Health. Dr.
Charles Wilson of the National Institute of Standards and Technology
(NIST, formerly the National Bureau of Standards) will speak on
"Self-Organizing Neural Network Character Recognition on Massively
Parallel Computers."

The NIH campus is accessible by MetroRail or by automobile.
If you arrive by Metro, the Clinical Center is the large 10-story
building to the east-northeast of the Metro station. If you
arrive by automobile, the NIH campus is approximately 1 mile south
of the Capital Beltway on Wisconsin Avenue (or Rockville Pike).
Upon entering the Clinical Center, bear left and progress towards
the Lipsett Amphitheater.

For further information, please call Gary Fleming at
(301) 459-4343.

ABSTRACT

Neural network based methods for image filtering and pattern
feature extraction are combined to develop font-independent character
recognition on a massively parallel array processor. Feature
localization and noise reduction are achieved using least-squares
optimized Gabor filtering. The filtered images are then presented to an
ART-1 based learning algorithm which produces the self-organizing sets of
neural network generated features used for character recognition.
Implementation of these algorithms on a highly parallel computer with
1024 processors allows high-speed character recognition to be achieved
at 3 ms/image, with greater than 99.9% accuracy on machine print
and 80% accuracy on unconstrained hand-printed characters.
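
As a rough illustration of the Gabor filtering stage described above
(an editorial sketch, not NIST's implementation; all parameter values
are assumptions), a 2-D Gabor kernel can be built as follows:

import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=4.0):
    # Real part of a 2-D Gabor filter: a sinusoidal grating under a
    # Gaussian envelope, tuned to one orientation and spatial frequency.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

# Convolving a character image with a bank of such kernels at several
# orientations yields localized, noise-reduced features of the kind
# that are then handed to the ART-1 learning stage.
print(gabor_kernel(theta=np.pi / 4).shape)        # (15, 15)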

To improve the accuracy of hand-printed recognition, a new
architecture was developed for multi-map, self-organizing pattern
recognition which allows concurrent, massively parallel learning of
features using different maps for each feature type. The method is
similar to the multiple maps known to exist in the vertebrate sensory
cortex. It consists of sets of associative memory locations, one for
each feature type, in which learning is symmetrically triggered by
logical combinations of the association strengths of the memory blocks.
Each map is independent of the others except for the connections used to
trigger learning. The learning used to update memory locations uses a
feed-forward mechanism and is self-organizing. The architecture is
described by the acronym FAUST (Feed-forward Association Using
Symmetrical Triggering). As a demonstration of the effectiveness of
FAUST, 99.9% accurate character recognition has been achieved on
medium-quality machine-printed digits at 2 ms/digit, and 90% recognition
with 2% substitution errors on hand-printed digits.


REFERENCES

[1] G. A. Carpenter, S. Grossberg and C. Mehanian, ``Invariant
Recognition of Cluttered Scenes by a Self-Organizing ART Architecture:
CORT-X Boundary Segmentation'', Neural Networks, 2, pp. 169-181, 1989.

[2] B. L. Pulito, T. R. Damarla, and S. Nariani, ``A Two-Dimensional
Shift Invariant Image Classification Neural Network Which Overcomes the
Stability/Plasticity Dilemma'', Proc. of the IJCNN, II, pp. 825-833, June
1990.

[3] C. L. Wilson, R. A. Wilkinson, and M. D. Garris, ``Self-Organizing
Neural Network Character Recognition on a Massively Parallel Computer'',
Proc. of the IJCNN, II, pp. 325-329, June 1990.

[4] G. A. Carpenter and S. Grossberg, ``A Massively Parallel Architecture
for a Self-Organizing Neural Pattern Recognition Machine'', Computer
Vision, Graphics, and Image Processing, 37, pp. 54-115, 1987.

[5] K. Fukushima, ``Neocognitron: A self-organizing neural network model
for a mechanism of pattern recognition unaffected by shifts in
position'', Biological Cybernetics, 36, pp. 193-202, 1980.

[6] J. G. Daugman, ``Complete Discrete 2-D Gabor Transform by
Neural Networks for Image Analysis and Compression'', IEEE Trans.
on Acoustics, Speech, and Signal Processing, 36, pp. 1169-1179, 1988.

[7] R. Linsker, ``Self-Organization in a Perceptual Network'', Computer,
21, pp. 105-117, 1988.


[8] J. Rubner and K. Schulten, ``Development of Feature Detectors by
Self-Organization'', Biological Cybernetics, 62, pp. 193-199, 1990.

[9] A. Rojer and E. Schwartz, ``Multi-Map Model for Pattern
Classification'', Neural Computation, 1, pp. 104-115, 1989.

[10] D. L. Alkon, K. T. Blackwell, G. S. Barbour, A. K. Rigler, and T. P.
Vogl, ``Pattern Recognition by an Artificial Network Derived from
Biological Neuronal Systems'', Biological Cybernetics, 62, pp. 363-376,
1990.

[11] P. M. Flanders, D. J. Hunt, S. F. Reddaway, and D. Parkinson,
``Efficient high speed computing with the distributed array processor'',
Proceedings of Symposium on High Speed Computer and Algorithm
Organization, U of Ill., pp. 113-128, 1977.


------------------------------

Subject: request for data
From: zamora@das.llnl.gov (John A. Zamora)
Date: Tue, 11 Sep 90 10:50:58 -0700


I am a graduate student conducting research in the area of neural
networks. Since this is a neural network newsgroup, I would like to ask
those of you in netland for some possible help. This research is looking
at a possible correlation between neural networks and other
organized systems, and so needs data for statistical
purposes.

At this point, backpropagation is the type of neural network being
considered. Specifically, the data requested would be as follows:

* The title of the problem being solved and your name.

* The number of nodes in the input, hidden, and output layer.

* Did you use the standard backpropagation formula?
Did you use a momentum term in your formula? (See the sketch after
this list for the update rule assumed here.)

* The weight values between nodes of a neural network after training.
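
For concreteness, here is a minimal sketch of the weight update usually
meant by "standard backpropagation with momentum" (an editorial
illustration; the learning rate and momentum values are assumptions):

import numpy as np

eta, alpha = 0.25, 0.9            # learning rate, momentum coefficient

def update_weights(w, grad, prev_delta):
    # delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1)
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

w = np.zeros(3)                      # toy weight vector
prev = np.zeros(3)
grad = np.array([0.1, -0.2, 0.05])   # stand-in gradient dE/dw
w, prev = update_weights(w, grad, prev)
print(w)                             # weights after one momentum-smoothed step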

Your time is greatly appreciated. If you have more than
one set of weights available (two or three sets), that would be of great
help.

If you have any questions, please do not hesitate to contact me.


Thank you for your time and effort.


John A. Zamora
zamora@das.llnl.gov
(415) 422-2008
"All necessary disclaimers apply"

------------------------------

Subject: Natural Language Parsing
From: E S Atwell <eric%ai.leeds.ac.uk@pucc.PRINCETON.EDU>
Date: Wed, 12 Sep 90 14:39:28 +0000

I'm looking for references on neural networks for Natural Language
parsing, including techniques for parsing language of at least Context Free
complexity, and/or using a recursive stack of NNs. I have a large collection
of parsed English sentences (words plus detailed syntax trees) that I'd love
to train a neural network stack with if only I knew how. Any leads will be
appreciated; I will summarise the replies and post the list back. Thanks.


Eric Steven Atwell
Centre for Computer Analysis of Language And Speech (CCALAS)
Artificial Intelligence Division, School of Computer Studies
Leeds University, Leeds LS2 9JT, England
phone: +44 532 335761
FAX: +44 532 335468
JANET: eric@uk.ac.leeds.ai
EARN/BITNET/ARPA: eric%leeds.ai@ac.uk

------------------------------

Subject: Request for replies
From: Ralph Cherubini <cherubini@austin.enet.dec.com>
Date: Thu, 13 Sep 90 06:49:44 -0700

I am interested in any information on ways to characterize the effects of
additive noise in back-propagation connection weights on network
performance.

More specifically:

Has anyone done any work on a characterization which expresses the
effects in terms of an analogy with optical focussing? I am not particularly
interested in ways of quantifying robustness of performance in the
presence of noise.
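
To illustrate the analogy itself (an editorial toy sketch, not a
reference to existing work; the tiny hand-built "network" and the noise
levels are assumptions): averaging a steep sigmoid over noisy copies of
its weights smears a sharp response edge, much as defocus smears a
sharp optical edge.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 20.0, 0.0                 # one steep unit: a sharp edge at x = 0
x = np.linspace(-1, 1, 9)

clean = sigmoid(w * x + b)
# Average response over many weight perturbations (additive zero-mean
# Gaussian noise on w and b): the edge blurs like a defocused image.
noisy = np.mean(
    [sigmoid((w + rng.normal(0, 5.0)) * x + (b + rng.normal(0, 5.0)))
     for _ in range(500)], axis=0)

for xi, c, n in zip(x, clean, noisy):
    print(f"x={xi:+.2f}  clean={c:.3f}  noisy avg={n:.3f}")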

Thank you in advance for any thoughts,

Ralph Cherubini
Digital Equipment Corporation

------------------------------

Subject: ML91 -- THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING
From: Lawrence Birnbaum <birnbaum@fido.ils.nwu.edu>
Date: Fri, 07 Sep 90 10:03:56 -0500


ML91 -- THE EIGHTH INTERNATIONAL WORKSHOP ON MACHINE LEARNING

CALL FOR WORKSHOP PROPOSALS AND PRELIMINARY CALL FOR PAPERS


On behalf of the organizing committee, we are pleased to solicit
proposals for the workshops that will constitute ML91, the Eighth
International Workshop on Machine Learning, to be held in late June,
1991, at Northwestern University, Evanston, Illinois, USA. We anticipate
choosing six workshops to be held in parallel over the three days of the
meeting. Our goal in evaluating workshop proposals is to ensure high
quality and broad coverage of work in machine learning. Workshop
committees -- which will operate for the most part independently in
selecting work to be presented at ML91 -- should include two to four
people, preferably at different institutions. The organizing committee
may select some workshops as proposed, or may suggest changes or
combinations of proposals in order to achieve the goals of quality and
balance.

Proposals are due October 10, 1990, preferably by email to:

ml91@ils.nwu.edu

although hardcopy may also be sent to the following address:

ML91
Northwestern University
The Institute for the Learning Sciences
1890 Maple Avenue
Evanston, IL 60201 USA

fax (708) 491-5258

Please include the following information:

1. Workshop topic

2. Names, addresses, and positions of workshop committee members

3. Brief description of topic

4. Workshop format

5. Justification for workshop, including assessment of breadth of
appeal

Workshop format is somewhat flexible, and may include invited talks,
panel discussions, short presentations, and even small working group
meetings. However, it is expected that the majority of time will be
devoted to technical presentations of 20 to 30 minutes in length, and we
encourage the inclusion of a poster session in each workshop. Each
workshop will be allocated approximately 100 pages in the Proceedings,
and papers to be published must have a maximum length of (most likely) 4
to 5 pages in double-column format. Workshop committee members should be
aware of these space limitations in designing their workshops.

We encourage proposals in all areas of machine learning, including
induction, explanation-based learning, connectionist and neural net
models, adaptive control, pattern recognition, computational models of
human learning, perceptual learning, genetic algorithms, computational
approaches to teaching informed by learning theories, scientific theory
formation, etc. Proposals centered around research problems that can
fruitfully be addressed from a variety of perspectives are particularly
welcome.

The workshops to be held at ML91 will be announced towards the end of
October. In the meantime, we would like to announce a preliminary call
for papers; the submission deadline is February 1, 1991. Authors should
bear in mind the space limitations described above.

On behalf of the organizing committee,

Larry Birnbaum
Gregg Collins

Program co-chairs, ML91

(This announcement is being sent/posted to ML-LIST, CONNECTIONISTS,
ALife, PSYCOLOQUY, NEWS.ANNOUNCE.CONFERENCES, COMP.AI, COMP.AI.EDU,
COMP.AI.NEURAL-NETS, COMP.ROBOTICS, and SCI.PSYCHOLOGY. We encourage
readers to forward it to any other relevant mailing list or bulletin
board.)



------------------------------

Subject: NN For Knowledge Representation and Inference
From: B344DSL@UTARLG.UTARL.EDU
Date: Sat, 08 Sep 90 18:41:00 -0500

Announcement


NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE
Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND)


October 4-6, 1990
IBM Westlake, TX
(near Dallas - Fort Worth Airport)


Conference Organizers:

Daniel Levine, University of Texas at Arlington (Mathematics)
Manuel Aparicio, IBM Application Solutions Division


Speakers will include:

James Anderson, Brown University (Psychology)
Jean-Paul Banquet, Hôpital de la Salpêtrière, Paris
John Barnden, New Mexico State University (Computer Science)
Claude Cruz, Plexus Systems Incorporated
Robert Dawes, Martingale Research Corporation
Richard Golden, University of Texas at Dallas (Human Development)
Janet Metcalfe, Dartmouth College (Psychology)
Jordan Pollack, Ohio State University (Computer Science)
Karl Pribram, Radford University (Brain Research Institute)
Lokendra Shastri, University of Pennsylvania (Computer Science)


Topics will include:

Connectionist models of semantic comprehension. Architectures for
evidential and case-based reasoning. Connectionist approaches to
symbolic problems in AI such as truth maintenance and dynamic binding.
Representations of logical primitives, data structures, and constitutive
relations. Biological mechanisms for knowledge representation and
knowledge-based planning.

We plan to follow the talks with a structured panel discussion on the
questions: Can neural networks do numbers? Will architectures for
pattern matching also be useful for precise reasoning, planning, and
inference?

Tutorial Session:

Robert Dawes, President of Martingale Research Corporation, will present
a three-hour tutorial on neurocomputing on the evening of October 3. This
preparation for the workshop will be free of charge to all
pre-registrants.



------------------------------------------------------------------------
Registration Form

NEURAL NETWORKS FOR KNOWLEDGE REPRESENTATION AND INFERENCE
Fourth Annual Workshop of the Metroplex Institute for Neural Dynamics (MIND)

Name: _____________________________________________________

Affiliation: ______________________________________________

Address: __________________________________________________
__________________________________________________
__________________________________________________
__________________________________________________

Telephone number: _________________________________________

Electronic mail: __________________________________________


Conference fee enclosed (please check appropriate line):

$50 for MIND members before September 30 ______
$60 for MIND members on/after September 30 ______
$60 for non-members before September 30 ______
$70 for non-members on/after September 30 ______
$10 for student MIND members any time ______
$20 for student non-members any time ______


Tutorial session (check if you plan to attend): ______
Note: This is free of charge to pre-registrants.


Suggested Hotels:

Solana Marriott Hotel. Next to IBM complex, with continuous shuttle
bus available to meeting site; ask for MIND conference rate of
$80/night. Call (817) 430-3848 or (800) 228-9290.

Campus Inn, Arlington. 30 minutes from conference, but rides are
available if needed; $39.55/night for a single. Call (817)
860-2323.

American Airlines. 40% off coach fares, or an additional 5% off Super Saver.
Call (800)-433-1790 for specific information and reservations,
under Star File #02oz76 for MIND Conference.

Conference programs, maps, and other information will be mailed to
pre-registrants in mid-September.

Please send this form with check or money order to:

Dr. Manuel Aparicio
IBM Mail Stop 03-04-40
5 West Kirkwood Blvd.
Roanoke, TX 76299-0001
(817) 962-5944


------------------------------

Subject: TR - Acquiring Verb Morphology in Children and Connectionist Nets
From: Kim Plunkett <plunkett@amos.ucsd.edu>
Date: Mon, 10 Sep 90 11:37:33 -0700


The following TR is now available:


From Rote Learning to System Building:
Acquiring Verb Morphology in Children and Connectionist Nets

Kim Plunkett
University of Aarhus
Denmark

Virginia Marchman
Center for Research in Language
University of California, San Diego

Abstract

The traditional account of the acquisition of English verb
morphology supposes that a dual-mechanism architecture underlies
the transition from early rote learning processes (in which past
tense forms of verbs are correctly produced) to the systematic
treatment of verbs (in which irregular verbs are prone to error).
A connectionist account supposes that this transition can occur in
a single mechanism (in the form of a neural network) driven by
gradual quantitative changes in the size of the training set to
which the network is exposed. In this paper, a series of
simulations is reported in which a multi-layered perceptron learns
to map verb stems to past tense forms analogous to the mappings
found in the English past tense system. By expanding the training
set in a gradual, incremental fashion and evaluating network
performance on both trained and novel verbs at successive points
in learning, we demonstrate that the network undergoes
reorganizations that result in a shift from a mode of rote
learning to a systematic treatment of verbs. Furthermore, we show
that this reorganizational transition is contingent upon a
critical mass in the training set and is sensitive to the
phonological sub-regularities characterizing the irregular verbs.
The improved levels of performance achieved in this series of
simulations, compared to previous work, derive from the
incremental training procedures exploited here. The pattern of
errors observed is compared to those of children acquiring the
English past tense, as well as to children's performance in
experimental studies with nonsense verbs. Incremental learning
procedures are discussed in light of theories of cognitive
development. It is concluded that a connectionist approach offers
a viable alternative account of the acquisition of English verb
morphology, given the current state of empirical evidence relating
to processes of acquisition in young children.
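
The incremental regime described in the abstract can be sketched in a
few lines. The toy below is an editorial illustration only: a trivial
stand-in learner replaces the multi-layered perceptron, and the verb
corpus, stage size, and "critical mass" threshold are all invented for
the example. It shows the expand-train-probe cycle: grow the training
set in stages, retrain, and evaluate on held-out novel verbs.

import random

# Toy corpus: regular verbs take "-ed"; a few irregulars do not.
REGULARS = {f"verb{i}": f"verb{i}ed" for i in range(80)}
IRREGULARS = {"go": "went", "sing": "sang", "take": "took", "come": "came"}
CORPUS = list({**REGULARS, **IRREGULARS}.items())

class RoteThenRuleLearner:
    # Stand-in for the MLP: memorizes pairs, and once the training set
    # passes a crude "critical mass" it also applies the majority rule.
    def __init__(self):
        self.memory = {}
    def train(self, pairs):
        self.memory.update(pairs)
    def predict(self, stem):
        if len(self.memory) >= 30:                 # critical-mass effect
            return self.memory.get(stem, stem + "ed")
        return self.memory.get(stem, "?")          # pure rote: no rule yet

random.seed(0)
random.shuffle(CORPUS)
novel = dict(CORPUS[-10:])        # held out to probe generalization
pool = CORPUS[:-10]

net = RoteThenRuleLearner()
training_set = []
for start in range(0, len(pool), 15):              # expand incrementally
    training_set += pool[start:start + 15]
    net.train(dict(training_set))
    hits = sum(net.predict(s) == p for s, p in novel.items())
    print(f"{len(training_set):3d} verbs trained: {hits}/10 novel correct")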


Copies of the TR can be obtained by contacting "staight@amos.ucsd.edu"
and requesting CRL TR #9020. Please remember to provide your
hardmail address. Alternatively, a compressed PostScript
file is available by anonymous ftp from "amos.ucsd.edu" (internet
address 128.54.16.43). The relevant file is "crl_tr9020.ps.Z" and is
in the directory "~ftp/pub".

Kim Plunkett

------------------------------

Subject: Reinforcement Learning -- Special Issue of Machine Learning Journal
From: Rich Sutton <rich@gte.com>
Date: Wed, 12 Sep 90 11:36:48 -0400


CALL FOR PAPERS

The journal Machine Learning will be publishing a special issue on
REINFORCEMENT LEARNING in 1991. By "reinforcement learning" I mean
trial-and-error learning from performance feedback without an explicit
teacher other than the external environment. Of particular interest is
the learning of mappings from situation to action in this way.
Reinforcement learning has most often been studied within connectionist
or classifier-system (genetic) paradigms, but it need not be.
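
As a concrete instance of the kind of learning meant above (an editorial
toy sketch, not anything prescribed by the call; the two-situation
environment and all parameter values are invented), an agent can learn a
situation-to-action mapping from scalar reward alone:

import random

random.seed(1)
SITUATIONS, ACTIONS = ("wet", "dry"), ("dig", "climb")
BEST = {"wet": "climb", "dry": "dig"}      # hidden from the learner

def reward(s, a):
    # Stochastic performance feedback: the right action usually pays off.
    return 1.0 if random.random() < (0.9 if a == BEST[s] else 0.2) else 0.0

value = {(s, a): 0.0 for s in SITUATIONS for a in ACTIONS}
count = {(s, a): 0 for s in SITUATIONS for a in ACTIONS}

for trial in range(2000):
    s = random.choice(SITUATIONS)
    if random.random() < 0.1:                  # occasionally explore
        a = random.choice(ACTIONS)
    else:                                      # otherwise exploit estimates
        a = max(ACTIONS, key=lambda act: value[(s, act)])
    r = reward(s, a)
    count[(s, a)] += 1
    value[(s, a)] += (r - value[(s, a)]) / count[(s, a)]  # running mean

# With enough trials, the learned policy should match BEST.
print({s: max(ACTIONS, key=lambda act: value[(s, act)]) for s in SITUATIONS})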

Manuscripts must be received by March 1, 1991, to assure full
consideration. One copy should be mailed to the editor:

Richard S. Sutton
GTE Laboratories, MS-44
40 Sylvan Road
Waltham, MA 02254
USA

In addition, four copies should be mailed to:

Karen Cullen
MACH Editorial Office
Kluwer Academic Publishers
101 Philip Drive
Assinippi Park
Norwell, MA 02061
USA

Papers will be subject to the standard review process.


------------------------------

Subject: ANNA-91 Conference
From: /PN=HARRY.ERWIN/O=TRW/ADMD=TELEMAIL/C=US/@sprint.com
Date: 13 Sep 90 17:41:00 +0000

ANNA-91: Analysis of Neural Net Applications Conference
May 29-31, 1991, George Mason University, Fairfax, VA

CALL FOR PARTICIPATION

The 1991 Analysis of Neural Net Applications Conference, ANNA-91,
will present a single-track technical program focusing on the
applications of neural net technology to real world problems.
The program will be structured around the problem-solving process:
. Domain Analysis
. Design Criteria
. Analytic Approaches to Network Definition
. Evaluation Mechanisms
. Lessons Learned, Feedback/Design Implications

Authors are invited to report concrete results and experiences
from the application of neural nets. Reports of important negative,
as well as successful, results are encouraged. Submissions should
address applications areas, such as:
. Biological applications
. Cognitive modeling
. Modeling of linear and non-linear systems
. Optimization and decision support
. Vision and imaging

Authors are responsible for any necessary clearances and/or approvals for
submitted abstracts and papers. ACM copyright releases will be required
for the final papers. Authors are expected to provide their full papers
to be included in the proceedings, which will be available at the
conference. Please submit 10 copies of an extended abstract of
sufficient length to support review--4-5 pages, including a brief
bibliography. Authors of accepted papers are expected to provide full
papers for the proceedings, not to exceed 25 pages, including
bibliography.

Tutorial sessions will be held on Wednesday, May 29, immediately prior to
the conference. If you are interested in teaching a tutorial or feel a
need for one in a particular related field, please submit a full-day
tutorial proposal. Instructors should include one copy of draft tutorial
notes and descriptions of any demonstrations with 10 copies of their
proposal. Instructors of accepted tutorials are expected to provide a
master set of tutorial notes, not to exceed 200 pages including
bibliography, for distribution to tutorial attendees.

Key Dates:
. Submission deadline: 11/15/90.
. Acceptance notices: 1/1/91.
. Final paper due: 2/28/91.
. Tutorial date: 5/29/91.
. Conference dates: 5/30-31/91.

All submissions should be sent to:
Robert Stites
ANNA-91 Program Chair
IKONIX
PO Box 565
Herndon, VA 22070-0565

For ANNA-91 information, contact:
Toni Shetler
ANNA-91 Conference Chair
TRW/Systems Division
FVA6/3444
PO Box 10400
Fairfax, VA 22031

Sponsors: ACM SIGArt, ACM SIGBDP
In cooperation with: International Neural Net Society
Washington Evolutionary Systems Society
Institutional support: George Mason University
National Institutes of Health
Industrial support: American Electronics Inc.
CTA Inc.
TRW/Systems Division


Harry Erwin
Telemail: HERWIN/TRW
Internet: /G=Harry/S=Erwin/O=TRW/ADMD=Telemail/C=US/@Sprint.com
Alternate Internet: herwin@pro-novapple.cts.com


------------------------------

Subject: Tech Report Available - ID3 vs. BackProp
From: Tom Dietterich <tgd@turing.CS.ORST.EDU>
Date: Thu, 13 Sep 90 14:08:26 -0700


The following tech report is available in compressed postscript format
from the neuroprose archive at Ohio State.


A Comparison of
ID3 and Backpropagation
for English Text-to-Speech Mapping

Thomas G. Dietterich
Hermann Hild
Ghulum Bakiri

Department of Computer Science
Oregon State University
Corvallis, OR 97331-3102

Abstract

The performance of the error backpropagation (BP) and decision tree (ID3)
learning algorithms was compared on the task of mapping English text to
phonemes and stresses. Under the distributed output code developed by
Sejnowski and Rosenberg, it is shown that BP consistently out-performs
ID3 on this task by several percentage points. Three hypotheses
explaining this difference were explored: (a) ID3 is overfitting the
training data, (b) BP is able to share hidden units across several output
units and hence can learn the output units better, and (c) BP captures
statistical information that ID3 does not. We conclude that only
hypothesis (c) is correct. By augmenting ID3 with a simple statistical
learning procedure, the performance of BP can be approached but not
matched. More complex statistical procedures can improve the performance
of both BP and ID3 substantially. A study of the residual errors
suggests that there is still substantial room for improvement in learning
methods for text-to-speech mapping.



This is an expanded version of a short paper that appeared at the Seventh
International Conference on Machine Learning in Austin, TX, in June.

To retrieve via FTP, use the following procedure:

unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62)
Name (cheops.cis.ohio-state.edu:): anonymous
Password (cheops.cis.ohio-state.edu:anonymous): <ret>
ftp> cd pub/neuroprose
ftp> binary
ftp> get
(remote-file) dietterich.comparison.ps.Z
(local-file) foo.ps.Z
ftp> quit
unix> uncompress foo.ps
unix> lpr -P(your_local_postscript_printer) foo.ps

------------------------------

Subject: NN AND VISION -IJPRAI-special issue
From: "Dr. Josef Skrzypek" <skrzypek@CS.UCLA.EDU>
Date: Thu, 13 Sep 90 15:25:48 -0700


Because of repeated enquiries about the special issue of IJPRAI (Intl. J.
of Pattern Recognition and AI), I am posting the announcement again.



IJPRAI CALL FOR PAPERS IJPRAI

We are organizing a special issue of IJPRAI (Intl. Journal of
Pattern Recognition and Artificial Intelligence) dedicated to the
subject of neural networks in vision and pattern recognition.
Papers will be refereed. The plan calls for the issue to be
published in the fall of 1991. I would like to invite your
participation.


DEADLINE FOR SUBMISSION: 10th of December, 1990

VOLUME TITLE: Neural Networks in Vision and Pattern Recognition

VOLUME GUEST EDITORS: Prof. Josef Skrzypek and Prof. Walter Karplus
Department of Computer Science, 3532 BH
UCLA
Los Angeles CA 90024-1596
Email: skrzypek@cs.ucla.edu or karplus@cs.ucla.edu
Tel: (213) 825 2381
Fax: (213) UCLA CSD


DESCRIPTION


The capabilities of neural architectures (supervised and
unsupervised learning, feature detection and analysis through
approximate pattern matching, categorization and self-organization,
adaptation, soft constraints, and signal based processing) suggest
new approaches to solving problems in vision, image processing and
pattern recognition as applied to visual stimuli. The purpose of
this special issue is to encourage further work and discussion in
this area.

The volume will include both invited and submitted peer-reviewed
articles. We are seeking submissions from researchers in relevant
fields, including natural and artificial vision, scientific
computing, artificial intelligence, psychology, image processing,
and pattern recognition. We encourage submission of: 1) detailed
presentations of models or supporting mechanisms, 2) formal
theoretical analyses, 3) empirical and methodological studies, and 4)
critical reviews of the applicability of neural networks to various
subfields of vision, image processing, and pattern recognition.

Submitted papers may be enthusiastic or critical about the
applicability of neural networks to the processing of visual
information. The IJPRAI journal would like to encourage
submissions both from researchers engaged in the analysis of
biological systems, such as those modeling
psychological/neurophysiological data using neural networks, and
from members of the engineering community who are synthesizing
neural network models. The number of papers that can be included
in this special issue will be limited. Therefore, some qualified
papers may be encouraged for submission to the regular issues of
IJPRAI.

SUBMISSION PROCEDURE

Submissions should be sent to Josef Skrzypek by December 10, 1990. The
suggested length is 20-22 double-spaced pages including figures,
references, abstract, and so on. Format details will be
supplied on request.

Authors are strongly encouraged to discuss ideas for possible
submissions with the editors.

The Journal is published by World Scientific and was
established in 1986.

Thank you for your consideration.



------------------------------

End of Neuron Digest [Volume 6 Issue 54]
****************************************
