VISION-LIST Digest Mon Feb 07 10:24:18 PDT 94 Volume 13 : Issue 6
- ***** The Vision List host is TELEOS.COM *****
- Send submissions to Vision-List@TELEOS.COM
- Vision List Digest available via COMP.AI.VISION newsgroup
- If you don't have access to COMP.AI.VISION, request list
membership to Vision-List-Request@TELEOS.COM
- Access Vision List Archives via anonymous ftp to FTP.TELEOS.COM
Today's Topics:
Human Visual System...
Mathematica for Computer Vision?
Request for Help
PhD studentship - Leeds
Postdoctoral fellowships and graduate assistantship
Launch of new Journal
CFP: ATRS and IUW
Helmholtz-Conference
CFP: IEEE Tools with AI '94
Technical report available
Papers on visual occlusion and neural networks
----------------------------------------------------------------------
Date: Mon, 7 Feb 1994 10:09:56 GMT
From: eepnkl@midge.bath.ac.uk (N K Laurance)
Department: Image and Parallel Processing Laboratory.
Organization: School of Electrical Engineering, University of Bath, UK
Subject: Human Visual System...
I was wondering if anyone could point me toward references/code/books
about an image degradation measure that incorporates perceived error.
Preferably NO references from SPIE (Society of Photo-Opt Instr Eng) as these
do not "exist" according to all the local libraries I have tried.
At present, our laboratory uses RMS error to compare an image after processing
to the original, but this does not allow for the non-linearities of the HVS.
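For concreteness, the measure we compute is essentially the following (a minimal
sketch in Python with NumPy, purely illustrative; the function and array names
are my own, not from any existing package):

    import numpy as np

    def rms_error(original, processed):
        """Root-mean-square pixel difference between two equal-size greyscale
        images. Every pixel error is weighted equally, with no account taken
        of how visible that error is to a human observer."""
        diff = original.astype(float) - processed.astype(float)
        return np.sqrt(np.mean(diff ** 2))

What I am after is a measure that replaces the uniform weighting above with
something that reflects perceptual visibility.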
Thanks In Advance...
* Neil Laurance *
* School Of Electronic Engineering *
* University of Bath, U.K. *
* tel: + 44 225 826066 *
* email: neil@ee.bath.ac.uk *
------------------------------
Date: 3 Feb 1994 09:51:26 -0000
From: Fergal Shevlin <fshevlin@maths.tcd.ie>
Organization: Dept. of Maths, Trinity College, Dublin, Ireland.
Subject: Mathematica for Computer Vision?
Hello,
Is anybody using the Mathematica symbolic math system for image
processing / computer vision? We are trying to do so and are
encountering big performance problems, particularly with respect to
stereo image correlation. Please get in touch if you have some
experience you'd like to share.
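For reference, the operation that dominates our running time is plain
window-based correlation along the scanline. A rough sketch of that computation
(written here in Python with NumPy purely for illustration; this is not our
Mathematica code, and the names and parameters are made up):

    import numpy as np

    def disparity_ssd(left, right, window=5, max_disp=32):
        """Brute-force stereo correlation: for each pixel in the left image,
        find the horizontal shift (disparity) into the right image that
        minimises the sum of squared differences over a small window."""
        h, w = left.shape
        half = window // 2
        disp = np.zeros((h, w), dtype=int)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1,
                             x - half:x + half + 1].astype(float)
                best_cost, best_d = None, 0
                for d in range(max_disp):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1].astype(float)
                    cost = np.sum((patch - cand) ** 2)
                    if best_cost is None or cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

Even this brute-force version is O(rows * cols * disparities * window^2), so it
is easy to see how an interpreted, symbolic implementation runs into trouble.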
Thanks,
Fergal.
Fergal Shevlin E-mail: Fergal.Shevlin@cs.tcd.ie
Dept. of Computer Science Phone: +353-1-7021209
Trinity College, Dublin 2, Ireland. Fax: +353-1-6772204
------------------------------
Date: Tue, 1 Feb 94 16:50:41 GMT
From: Simon Burt <simonb@charles-cross.plymouth.ac.uk>
Subject: Request for Help
I'm intending to carry out vision experiments involving computer graphics on
a PC at SVGA resolution. The experiments should take the following form:
Instantaneously display an image on the screen for x millisecs
Display a blank screen for y millisecs
Instantaneously display another image on the screen for z millisecs
Time the subject's response and store which key was pressed
(x, y & z being experiment variables)
If anyone knows of any existing software capable of doing this kind of thing
I should be very grateful if they would contact me.
Apologies if this is a FAQ - I'm a new subscriber!
Regards
Simon Burt
Centre For Intelligent Systems, University of Plymouth,
Charles Cross Centre, Constantine St, Charles Cross,
Plymouth, Devon, England.
Tel: +44 752-225885
------------------------------
Date: Thu, 3 Feb 94 10:01:42 GMT
From: Charles Taylor <charles@amsta.leeds.ac.uk>
Subject: PhD studentship - Leeds
Please bring this to the attention of any interested people.
PHD STUDENTSHIP AT THE DEPARTMENT OF STATISTICS, UNIVERSITY OF LEEDS
OPTICAL DETECTION OF WEEDS IN ROW CROPS
Funding is available for a PhD student to work on a topic related to
the above project. The project is funded by MAFF (Ministry of
Agriculture, Fisheries and Food) for three years to develop techniques
for the detection of weeds in arable and horticultural row crops, so
that precision chemical or mechanical weed control may be applied. In
particular we are interested in developing image analysis techniques
to discriminate between weed species and crops, particularly when weed
and crop plants are touching or overlapping. Methods to be explored
include examining the shape and texture of the leaves.
The student will be jointly supervised by Drs. Charles Taylor and Ian Dryden.
There are excellent computing facilities and a good research atmosphere in
the Department.
We would like to start on 1st August, 1994 or as soon as possible thereafter.
The studentship is currently 5,000 pounds per annum (tax free), and will rise by
10% per year, with the first rise on 1st October, 1994.
The student should have or expect to gain a First or 2.1 honours degree with a
strong background in Mathematics or Statistics.
If you have further enquiries please send them to
Charles Taylor,
Department of Statistics,
University of Leeds,
Leeds LS2 9JT
Please make enquiries by 15th March.
Phone: 0532 - 335168 Fax: 0532 - 335102
email: charles@amsta.leeds.ac.uk
------------------------------
Date: Fri, 4 Feb 94 10:53:01 EST
From: zhao@umdsun2.umd.umich.edu (dongming zhao)
Organization: University of Michigan -Dearborn
Subject: Postdoctoral fellowships and graduate assistantship
Two postdoctoral research positions are available in
the 3-D Imaging Laboratory, School of Engineering, University of
Michigan -Dearborn. The positions are available now and for
a 2-year period. The salary range is $22,000 - $28,000 per annum, plus health
insurance.
The research in the new 3-D Imaging Laboratory focuses on
fundamental studies of 3-D imaging and generic algorithm design
for machine vision applications, particularly in industrial automation.
The major objectives include feature extraction and object recognition
based on range data generated from laser-based cameras.
The research facilities include laser-based camera systems, Silicon
Graphics workstation, Sun workstations, an AIS vision computer (3500EX, 128
parallel processors) and a Datacube MaxVideo. The computing facilities
in the School of Engineering are also available for the research.
The research is funded by a federal agency and is conducted in collaboration
with an industrial company. The research will be performed on campus, and
interaction with research engineers at the company is expected.
Candidates for the positions should have a strong background in image analysis
and computer/machine vision research. Proficiency in programming in C or C++
is necessary. Experience with image processing software is preferred.
Applicants should send a resume and copies of key publications in the image
analysis and machine vision area (journal, conference, or technical report) to
Dr. Dongming Zhao
Dept. of Electrical and Computer Engineering
University of Michigan -Dearborn
Dearborn, MI 48128
(313) 593-5527
E-mail: zhao@umdsun2.umd.umich.edu
In addition to the postdoctoral positions, we have two graduate research
assistantships available for this project.
The assistantship includes tuition, health insurance, and a stipend of around
$10,000 per annum. The students are expected to carry out specified research
tasks and take the MS thesis option. The graduate students may choose to
continue their advanced study in the Doctor of Engineering program on this
campus (the program is expected to start in the near future). Good skills in
programming in C or C++ and knowledge of signal and image processing are
desired. Interested students should send a resume with transcript(s) and
two references to Dr. Zhao.
------------------------------
Date: Thu, 3 Feb 1994 10:17:14 +0000 (GMT)
Subject: Launch of new Journal
From: J.Illingworth@ee.surrey.ac.uk
As of 1 February 1994 there is a new journal of relevance to practitioners in
image processing, pattern recognition and computer vision.
CALL FOR PAPERS
+++++++++++++++++++++++++++++++++++++++++++++++++++++
NEW JOURNAL FOR IMAGE PROCESSING AND MACHINE VISION
+++++++++++++++++++++++++++++++++++++++++++++++++++++
The IEE (Institution of Electrical Engineers) in the U.K. has re-organised
its Proceedings topics and in February 1994 launched a new journal
titled
*************************
VISION, IMAGE AND
SIGNAL PROCESSING
*************************
The joint Honorary Editors will be:
Professor Peter Grant
Signal Processing Group
Dept of Electrical Engineering
University of Edinburgh
Edinburgh, Scotland

Dr John Illingworth
Vision, Speech and Signal Processing Group
Dept of Electronics and Electrical Eng.
University of Surrey
Guildford, U.K.
This journal encompasses image and signal processing in its widest sense.
Image processing techniques covering sampling, enhancement, restoration,
segmentation, texture, motion and shape analysis are appropriate for the
journal. It also covers source coding techniques which are used in image
coding; for example vector quantisation, transform and sub-band techniques,
motion compensation, standards and 3D-modelling for bit rate reduction in
single images or image sequences. Advances in the field of speech analysis,
coding, recognition and synthesis are appropriate for the journal. Signal
processing includes algorithm advances in single and multi-dimensional
recursive and non-recursive digital filters and multirate filterbanks; signal
transformation techniques; classical, parametric and higher order spectral
analysis; system modelling and adaptive identification techniques.
Papers on novel algorithms for image and signal theory and processing together
with review and tutorial papers on the above topics will be welcomed by the
Honorary Editors. Papers having practical relevance and dealing with the
application of these concepts are particularly encouraged.
To submit a paper send 5 copies of a manuscript of approximately 12 to 16
double spaced A4 pages (or 3000 words) plus 10 to 14 illustrations to:
The Managing Editor: IEE Proceedings,
Institution of Electrical Engineers
Michael Faraday House
Six Hills Way
Stevenage
Herts SG1 2AY
United Kingdom
Longer papers will be considered if they are of exceptional merit. The
journal aims to provide a rapid response to authors with 90% of all
manuscripts dealt with in less than 6 months. Fuller details of the
guide to authors can be found in the current IEE Proceedings parts.
Dr. J. Illingworth, | Phone: (0483) 259835
V.S.S.P. Group, | Fax : (0483) 34139
Dept of Electronic and Electrical Eng, | Email: J.Illingworth@ee.surrey.ac.uk
University of Surrey, |
Guildford, |
Surrey GU2 5XH |
United Kingdom |
------------------------------
Date: Thu, 3 Feb 94 09:25 EST
From: peters@nvl.army.mil (Richard Peters)
Subject: CFP: ATRS and IUW
ANNOUNCEMENT AND CALL FOR PAPERS
Joint
Automatic Target Recognizer
Systems and Technology Conference IV
and
Image Understanding Workshop
November 14-17, 1994 Monterey, CA
The Fourth Automatic Target Recognizer (ATR) Systems and Technology Conference
will be held concurrently with the Image Understanding (IU) Workshop at the
Naval Postgraduate School and Hyatt Regency Hotel in Monterey, Cal., November
14-17, 1994. Since target/object recognition is a key objective of both groups,
for military as well as commercial applications, this joint conference has a
theme of "Cooperative Development".
The Joint Conference will minimize overlap between the ATR and IU sessions to
facilitate participation in both conferences. Civilian and military leaders will
give keynote speeches about operational applications and emerging technologies of
interest to both groups. Some ATR sessions will be SECRET NOFORN.
CALL FOR PAPERS. The ATR Systems and Technology Conference Committee is
soliciting original unclassified and classified (through SECRET NOFORN) papers
in the following areas. (Papers to be published in the 1994 IU proceedings will
be solicited separately.)
* Target Recognition and Identification
* Target Detection and Tracking
* Moving, Time-critical, and Hidden Targets
* Data Synthesis and Characterization
* Phenomenology-based Algorithms
* ATR Systems
* Multisensor Database Management
* Evaluation Methodologies
* Man-Machine Interfaces
* Multisensor Processing
Submit an unclassified abstract that concisely describes your paper (300 words
or less); also include the classification level of the paper, lead author,
co-authors, industry and/or government organization, mailing address, e-mail
address, phone and FAX numbers. By submitting an abstract to this conference,
you are committing your paper exclusively to this conference, should it be
accepted.
Abstracts postmarked no later than March 15, 1994, should be sent to Annette
Bergman at this address: Commander, Code C2158 Attn: A. Bergman,
NAVAIRWARCENWPNDIV, 1 Administration Circle, China Lake, CA 93555-6001. The
lead author will be notified of acceptance or regret by May 20, 1994.
Camera-ready manuscripts of accepted papers must be postmarked not later than
August 20, 1994 and sent to Mr. Andrew Zembower, IIT Research Institute, 4140
Linden Ave., Suite 201, Dayton, OH 45432.
Sponsored by:
Advanced Research Projects Agency
Joint Directors of Laboratories
Army Night Vision and Electronic Sensors Directorate
Air Force Wright Laboratory
Naval Air Warfare Center, China Lake
ABSTRACTS DUE MARCH 15, 1994 MANUSCRIPTS DUE AUGUST 20, 1994
Mr. Mike Bazakos, ATR Co-Chair, Honeywell Technology Ctr.
Ms. Annette Bergman, ATR Co-Chair, Naval Air Warfare Center
Prof. Bir Bhanu, IU Liaison, UC Riverside
------------------------------
Date: Mon, 07 Feb 94 06:53:10 EST
From: gpo65@rz.uni-kiel.d400.de
Subject: Helmholtz-Conference
Announcement
****************************************************
From Codes To Cognition.
Foundational Aspects of Visual Information Processing
Centennial Conference in Honour of Hermann v.Helmholtz
****************************************************
University of Kiel, Germany, 17-21 July 1994
The conference will address general fundamental psychological problems
of visual perception in areas such as shape from shading, stereo vision,
colour and form perception and attention. A basic theme recurring
throughout the conference will be how perceptual achievements relate
to sensory input. Since Helmholtz and his notion of "unconscious
inferences", several theoretical intuitions (e.g. the concept
of "ill-posed problems", Barlow's statistical model for the discovery
of "independent coincidences", Ullman and Koenderink's discussion of
Gibson's ideas of "direct perception", Hoffman's "observer mechanics",
Shepard's ideas on resonance) concerning the principles of perception
have revolved around the attempt to bridge the gap between the (often 'meagre')
sensory input and the actual performance.
Attempts to theoretically understand the interaction of
- restrictions and invariants of the physical environment,
- theoretical limiting factors of the sensory system as well as
- restrictions on the categorization and interpretation of sensory
information that have been internalized in the
course of evolution
will be the main focus of this conference.
SPEAKERS:
S.ANSTIS (San Diego), L.AREND (Princeton), H.B.BARLOW (Cambridge),
H.BUELTHOFF (Tuebingen), M.FAHLE (Tuebingen), D.D.HOFFMAN (Irvine),
Chr.KOCH (Pasadena), J.KOENDERINK (Utrecht), D.MACLEOD (San Diego),
H.MALLOT (Tuebingen), O.NEUMANN (Bielefeld), R.NIEDEREE (Kiel),
Chr.NOTHDURFT (Goettingen), E.POEPPEL (Muenchen), W.PRINZ (Muenchen),
V.RAMACHANDRAN (San Diego), E.SCHEERER (Oldenburg), R.SHEPARD (Stanford),
G.SPERLING (Irvine), S.ULLMAN (Cambridge, Mass.), P.WHITTLE (Cambridge)
Address correspondence to:
Dieter Heyer & Rainer Mausfeld, Institute for Psychology, University of Kiel
24098 Kiel/Germany, Fax: +49-431-8802975, E-mail: gpo65@rz.uni-kiel.d400.de
------------------------------
Date: Tue, 1 Feb 94 13:27:02 +0100
From: brause@informatik.uni-frankfurt.de
Subject: CFP: IEEE Tools with AI '94
CALL FOR PAPERS
6th IEEE International Conference on Tools with Artificial Intelligence
November 6-9, 1994
Hotel Intercontinental
New Orleans, Louisiana
This conference is envisioned to foster the transfer of ideas relating
to artificial intelligence among academics, industry, and government
agencies. It focuses on methodologies which can aid the development
of AI, as well as the demanding issues involved in turning these
methodologies into practical tools. Thus, this conference encompasses
the technical aspects of specifying, developing, and evaluating
theoretical and applied mechanisms which can serve as tools for
developing intelligent systems and pursuing artificial intelligence
applications. Focal topics of interest include, but are not limited
to, the following:
* Machine Learning, Computational Learning
* Artificial Neural Networks
* Uncertainty Management, Fuzzy Logic
* Distributed and Cooperative AI, Information Agents
* Knowledge Based Systems, Intelligent Data Bases
* Intelligent Strategies for Scheduling and Planning
* AI Algorithms, Genetic Algorithms
* Expert Systems
* Natural Language Processing
* AI Applications (Vision, Robotics, Signal Processing, etc.)
* Information Modeling, Reasoning Techniques
* AI Languages, Software Engineering, Object-Oriented Systems
* Logic and Constraint Programming
* Strategies for AI development
* AI tools for Biotechnology
INFORMATION FOR AUTHORS
There will be both academic and industry tracks. A one day workshop
(November 6th) precedes the conference (November 7-9). Authors are
requested to submit original papers to the program chair by April 20,
1994. Five copies (in English) of a double-spaced typed manuscript
(maximum of 25 pages) with an abstract are required. Please attach a
cover letter indicating the conference track (academic/industry) and
areas (in order of preference) most relevant to the paper. Include
the contact author's postal address, e-mail address, and telephone
number. Submissions in other audio-visual forms are acceptable only
for the industry track, but they must focus on methodology and timely
results on AI technological applications and problems. Authors will
be notified of acceptance by July 15, 1994 and will be given instructions
for camera-ready papers at that time. The deadline for camera-ready
papers will be August 19, 1994. Outstanding papers will be eligible
for publication in the International Journal on Artificial
Intelligence Tools.
Submit papers and panel proposals by April 20, 1994 to the
Program Chair:
Cris Koutsougeras
Computer Science Department
Tulane University
New Orleans, LA 70118
Phone: (504) 865-5840
e-mail: ck@cs.tulane.edu
Potential panel organizers please submit a subject statement and a
list of panelists. Acceptances of panel proposals will be announced
by June 30, 1994.
A computer account (tai@cs.tulane.edu) is set up to provide automatic
information responses. From it you can obtain the electronic files for the
CFP, program, registration form, hotel reservation form, and general
conference information. For more information please contact:
Conference Chair:
Jeffrey J.P. Tsai
Dept. of EECS (M/C 154)
851 S. Morgan Street
University of Illinois
Chicago, IL 60607-7053
Tel: (312)996-9324
Fax: (312)413-0024
e-mail: tsai@bert.eecs.uic.edu

Steering Committee Chair:
Nikolaos G. Bourbakis
Dept. of Electrical Engineering
SUNY at Binghamton
Binghamton, NY 13902
Tel: (607)777-2165
e-mail: bourbaki@bingvaxu.cc.binghamton.edu
Program Chair : Cris Koutsougeras, Tulane University
Registration Chair : Takis Metaxas,
(617) 283-3054,
e-mail: takis@poobah.wellesley.edu
Local Arrangements Chair : Akhtar Jameel, e-mail: jameel@cs.tulane.edu
Workshop Organizing Chair : Mark Boddy, Honeywell
Industrial Track Vice Chairs : Steven Szygenda, Raymond Paul
Program Vice Chairs :
Machine Learning: E. Kounalis
Computational Learning: J. Vitter
Uncertainty Management, Fuzzy Logic: R. Goldman
Knowledge Based Systems, Intelligent Data Bases: M. Ozsoyoglu
AI Algorithms, Genetic Algorithms: P. Marquis
Natural Language Processing: B. Manaris
Information Modeling, Reasoning Techniques: D. Zhang
Logic and Constraint Programming: A. Bansal
AI Languages, Software Engineering, Object-Oriented Systems: B. Bryant
Artificial Neural Networks: P. Israel
Distributed and Cooperative AI, Information Agents: C. Tsatsoulis
Intelligent Strategies for Scheduling and Planning: L. Hoebel
Expert Systems: F. Bastani
AI Applications (Vision, Robotics, Signal Processing, etc.): C. T. Chen
AI tools for Biotechnology: M. Perlin
Strategies for AI development: U. Yalcinalp
Publicity Chairs : R. Brause, Germany
Mikio Aoyama, Japan
Benjamin Jang, Taiwan
Steering Committee :
Chair: Nikolaos G. Bourbakis, SUNY-Binghamton
John Mylopoulos, University of Toronto, Ontario, Canada
C. V. Ramamoorthy, University of California-Berkeley
Jeffrey J.P. Tsai, University of Illinois at Chicago
Wei-Tek Tsai, University of Minnesota
Benjamin W. Wah, University of Illinois at Urbana
------------------------------
Date: Thu, 3 Feb 1994 15:28:34 GMT
From: jml@eng.cam.ac.uk (Jonathan Lawn)
Organization: Cambridge University Engineering Department, UK
Subject: Technical report available
The following technical report is available by anonymous ftp from the
archive of the Speech, Vision and Robotics Group at the Cambridge
University Engineering Department.
Robust Egomotion Estimation
from Affine Motion Parallax
Jonathan M. Lawn and Roberto Cipolla
Technical Report CUED/F-INFENG/TR160
Cambridge University Engineering Department
Trumpington Street
Cambridge CB2 1PZ
England
Abstract
A method of determining the motion of a camera from its image
velocities is described that is insensitive to noise and the intrinsic
camera parameters and requires no special assumptions. This algorithm
is based on a novel extension of motion parallax which does not
require the instantaneous alignment of features, but uses sparse
visual motion estimates to extract the direction of translation of the
camera directly, after which determination of the camera rotation and
the depths of the image features follows easily. A method for
calculating the expected uncertainty in the estimates is also
described which allows optimal estimation and can also detect and
reject independent motion and false correspondences. Experiments using
small perturbation analysis show a favourable comparison with existing
methods, and specifically the Fundamental Matrix method.
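For background: the classical motion parallax result that this line of work
builds on (due to Longuet-Higgins and Prazdny; the notation below is generic
and is not taken from the report) states that, with image coordinates (x, y),
translational velocity T = (U, V, W), angular velocity Omega = (A, B, C) and
point depth Z, the image velocity splits into a depth-dependent translational
part and a depth-independent rotational part:

    \mathbf{v}(x,y) = \frac{1}{Z}\begin{pmatrix} -U + xW \\ -V + yW \end{pmatrix}
      + \begin{pmatrix} A x y - B(1+x^{2}) + C y \\ A(1+y^{2}) - B x y - C x \end{pmatrix}.

Hence the difference between the image velocities of two aligned (coincident)
image points at depths Z_1 and Z_2 cancels the rotational part,

    \Delta\mathbf{v} = \left(\frac{1}{Z_{1}} - \frac{1}{Z_{2}}\right)
      \begin{pmatrix} -U + xW \\ -V + yW \end{pmatrix},

and so constrains only the direction of translation. The report's contribution,
as stated in the abstract above, is an extension of this idea that does not
require the two features to be instantaneously aligned.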
***** How to obtain a copy ******
a) Via FTP:
unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (type your email address)
ftp> cd reports
ftp> binary
ftp> get lawn_tr160.ps.Z
ftp> quit
unix> uncompress lawn_tr160.ps.Z
unix> lpr lawn_tr160.ps (or however you print PostScript)
b) Via postal mail:
Request a hardcopy from
Jonathan Lawn,
Cambridge University Engineering Department,
Trumpington Street,
Cambridge CB2 1PZ,
England.
or email me: jml@eng.cam.ac.uk
------------------------------
Date: Wed, 2 Feb 94 12:48:33 -0500
From: Jonathan A. Marshall <marshall@cs.unc.edu>
Subject: Papers on visual occlusion and neural networks
Dear Colleagues,
Below I list two new papers that I have added to the Neuroprose archives
(thanks to Jordan Pollack!). In addition, I list two of my older papers
in Neuroprose. You can retrieve a copy of these papers -- follow the
instructions at the end of this message.
--Jonathan
*****
marshall.occlusion.ps.Z (5 pages)
A SELF-ORGANIZING NEURAL NETWORK THAT LEARNS TO
DETECT AND REPRESENT VISUAL DEPTH FROM OCCLUSION EVENTS
JONATHAN A. MARSHALL and RICHARD K. ALLEY
Department of Computer Science, CB 3175, Sitterson Hall
University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
marshall@cs.unc.edu, alley@cs.unc.edu
Visual occlusion events constitute a major source of depth information. We
have developed a neural network model that learns to detect and represent
depth relations, after a period of exposure to motion sequences containing
occlusion and disocclusion events. The network's learning is governed by a
new set of learning and activation rules. The network develops two parallel
opponent channels or "chains" of lateral excitatory connections for every
resolvable motion trajectory. One channel, the "On" chain or "visible"
chain, is activated when a moving stimulus is visible. The other channel,
the "Off" chain or "invisible" chain, is activated when a formerly visible
stimulus becomes invisible due to occlusion. The On chain carries a
predictive modal representation of the visible stimulus. The Off chain
carries a persistent, amodal representation that predicts the motion of the
invisible stimulus. The new learning rule uses disinhibitory signals
emitted from the On chain to trigger learning in the Off chain. The Off
chain neurons learn to interact reciprocally with other neurons that
indicate the presence of occluders. The interactions let the network
predict the disappearance and reappearance of stimuli moving behind
occluders, and they let the unexpected disappearance or appearance of
stimuli excite the representation of an inferred occluder at that location.
Two results that have emerged from this research suggest how visual systems
may learn to represent visual depth information. First, a visual system can
learn a nonmetric representation of the depth relations arising from
occlusion events. Second, parallel opponent On and Off channels that
represent both modal and amodal stimuli can also be learned through the same
process.
[In Bowyer KW & Hall L (Eds.), Proceedings of the AAAI Fall Symposium on
Machine Learning and Computer Vision, Research Triangle Park, NC, October
1993, 70-74.]
*****
marshall.context.ps.Z (46 pages)
ADAPTIVE PERCEPTUAL PATTERN RECOGNITION
BY SELF-ORGANIZING NEURAL NETWORKS:
CONTEXT, UNCERTAINTY, MULTIPLICITY, AND SCALE
JONATHAN A. MARSHALL
Department of Computer Science, CB 3175, Sitterson Hall
University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
marshall@cs.unc.edu
A new context-sensitive neural network, called an "EXIN" (excitatory+
inhibitory) network, is described. EXIN networks self-organize in complex
perceptual environments, in the presence of multiple superimposed patterns,
multiple scales, and uncertainty. The networks use a new inhibitory
learning rule, in addition to an excitatory learning rule, to allow
superposition of multiple simultaneous neural activations (multiple
winners), under strictly regulated circumstances, instead of forcing
winner-take-all pattern classifications. The multiple activations represent
uncertainty or multiplicity in perception and pattern recognition.
Perceptual scission (breaking of linkages) between independent category
groupings thus arises and allows effective global context-sensitive
segmentation and constraint satisfaction. A Weber Law neuron-growth rule
lets the network learn and classify input patterns despite variations in
their spatial scale. Applications of the new techniques include
segmentation of superimposed auditory or biosonar signals, segmentation of
visual regions, and representation of visual transparency.
[Submitted for publication.]
*****
marshall.steering.ps.Z (16 pages)
CHALLENGES OF VISION THEORY: SELF-ORGANIZATION OF NEURAL MECHANISMS
FOR STABLE STEERING OF OBJECT-GROUPING DATA IN VISUAL MOTION PERCEPTION
JONATHAN A. MARSHALL
[Invited paper, in Chen S-S (Ed.), Stochastic and Neural Methods in Signal
Processing, Image Processing, and Computer Vision, Proceedings of the SPIE
1569, San Diego, July 1991, 200-215.]
*****
martin.unsmearing.ps.Z (8 pages)
UNSMEARING VISUAL MOTION:
DEVELOPMENT OF LONG-RANGE HORIZONTAL INTRINSIC CONNECTIONS
KEVIN E. MARTIN and JONATHAN A. MARSHALL
[In Hanson SJ, Cowan JD, & Giles CL, Eds., Advances in Neural Information
Processing Systems, 5. San Mateo, CA: Morgan Kaufmann Publishers, 1993,
417-424.]
*****
RETRIEVAL INSTRUCTIONS
% ftp archive.cis.ohio-state.edu
Name (cheops.cis.ohio-state.edu:yourname): anonymous
Password: (use your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get marshall.occlusion.ps.Z
ftp> get marshall.context.ps.Z
ftp> get marshall.steering.ps.Z
ftp> get martin.unsmearing.ps.Z
ftp> quit
% uncompress marshall.occlusion.ps.Z ; lpr marshall.occlusion.ps
% uncompress marshall.context.ps.Z ; lpr marshall.context.ps
% uncompress marshall.steering.ps.Z ; lpr marshall.steering.ps
% uncompress martin.unsmearing.ps.Z ; lpr martin.unsmearing.ps
------------------------------
End of VISION-LIST digest 13.6
************************