VISION-LIST Digest Volume 12 Issue 40

VISION-LIST Digest    Tue Sep 07 16:21:57 PDT 93     Volume 12 : Issue 40 

- ***** The Vision List has changed hosts to TELEOS.COM *****
- Send submissions to Vision-List@TELEOS.COM
- Vision List Digest available via COMP.AI.VISION newsgroup
- If you don't have access to COMP.AI.VISION, request list
membership to Vision-List-Request@TELEOS.COM
- Access Vision List Archives via anonymous ftp to FTP.TELEOS.COM

Today's Topics:

Video-Based RS232 3-Axis (Focus-Zoom-Iris) Motorized Lens
Imaging Technology info
Wanted: some images of fossils, stone tools, etc.
Re: Interest Operators
Re: Interest Operators
Re: Distances and areas in 0/1 images
Recognition System using a single mechanism and analysis-by-synthesis
Help needed for finding proper color filters for B & W Camera
NEW JOURNAL (Call for Papers)
CFP: HUMAN VISION, VISUAL PROCESSING and DIGITAL DISPLAY V

----------------------------------------------------------------------

Date: Tue, 31 Aug 1993 15:53:44 -0700
From: Christian Fortunel <fortunel@char.vnet.net>
Subject: Video-Based RS232 3-Axis (Focus-Zoom-Iris) Motorized Lens

VIDEO-BASED RS232 3-AXIS (FOCUS-ZOOM-IRIS) MOTORIZED LENS

Fortunel Systems Inc. is introducing Servolens to the US market.
Servolens is a self-contained, 3-axis motorized lens that relieves
the application of low-level control tasks, allowing it to devote its
computational resources to higher-level strategies.
The characteristics of the lens are as follows:

MOUNTING
* It uses a standard C-mount adapter for mounting on video cameras.

INTERFACING
* It is connected to a computer through one RS232 serial port.
* The output from the video camera is used for the automatic modes
of operation (Auto option only).

CONTROL
* All functions of the lens are controlled through the serial port.
* Each control command is a 2-byte word followed by its argument.

MANUAL CONTROL
* Each axis can be manually set to a desired position.
* The lens uses feedback position encoders for accurate positioning.
* Status about each axis can be continually queried.

CALIBRATION
* Each axis is calibrated at manufacturing.
* Built-in calibration tables can be reprogrammed if necessary.
* Intuitive physical units can be used to control each axis (by
reprogramming the calibration tables):
* Zoom: field of view in degrees
* Focus: distance in meters
* Iris: percentage of maximum aperture
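
The command framing above (a 2-byte command word followed by its
argument) can be sketched as follows. The command codes, the argument
encoding, and the port settings here are illustrative assumptions
only, not the actual Servolens protocol, which is defined by the
vendor's documentation:

```python
import struct

def frame_command(command: bytes, value: float) -> bytes:
    """Pack a 2-byte command word plus its argument into one frame."""
    if len(command) != 2:
        raise ValueError("command word must be exactly 2 bytes")
    # Argument encoding is an assumption -- here a little-endian float32.
    return command + struct.pack("<f", value)

# The frame would then be written to the RS232 port, e.g. with pyserial:
#   serial.Serial("/dev/ttyS0", 9600).write(frame_command(b"FO", 2.5))
# b"FO" ("set focus, in meters") is an invented code for illustration.
frame = frame_command(b"FO", 2.5)
```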

AUTOMATIC CONTROL (Auto option only)
* Independent video-based controllers are used to adjust the field
of view, focusing distance and aperture.
* The lens internally digitizes the RS170 video signal coming from
the camera and uses that information to execute user commands.
* Each axis can be set to an automatic mode of operation.
* Auto focus: automatically focuses on a selected object.
* Auto focus: focuses on the next closest or furthest object.
* Auto Iris: maintains a constant brightness level.
* Auto Zoom: maintains a constant relative object size in the field
of view while you focus at various distances.

VIDEO PROCESSING (Auto option only)
* Integration windows, where measurements are taken to control each
degree of freedom, can be sized and positioned anywhere in the
field of view.

LENS TYPES
* x6: 12.5mm to 75mm - F/1.6 to closed - 0.75 kg
* x10: 10mm to 100mm - F/1.6 to closed - 1.10 kg

FURTHER INFORMATION
* Contact: C. Fortunel - Fortunel Systems, Inc. - Cary, NC - USA
Tel: 919-319-1624 - Fax: 919-319-1749

------------------------------

Date: Tue, 7 Sep 93 11:45:13 +0200
From: Mihran Tuceryan <Mihran.Tuceryan@ecrc.de>
Subject: Imaging Technology info

Hello everyone,

If anyone has been using the Imaging Technology, Inc. Series 150/40
with Sparcs, I would like to hear about your experiences. What do you
think of it? What about the software library that they have? I would
also like to hear about your particular configuration (e.g., what
cameras you have, etc.).

We are planning a stereo setup, and their boards (modular
architecture, etc., which can be expanded later) seem attractive.

Anything else that I forgot will also be appreciated.

Thanks in advance.

Mihran Tuceryan
Email: mihran@ecrc.de

------------------------------

Date: Tue, 7 Sep 93 16:15:01 GMT
From: jeonghee@CS.UCLA.EDU (Jeonghee Yi)
Organization: UCLA, Computer Science Department
Subject: Wanted: some images of fossils, stone tools, etc.

Hi,

I was wondering if anyone has any of the following pictures, or knows
of ftp sites where such images are publicly
available.
- animal bone (like fossils, ...)
- stone tools (both neolithic and paleolithic)
- ceramics
- character sets (alphabets, numerals)
- finger prints

I don't care what format they're in, as long as they are readable
on a Sun/X Window system.

Also, I would like to have a contour detection algorithm.

If any of these is available, please send me e-mail telling where
and how I can get them.

My internet address is jeonghee@cs.ucla.edu.

Any comments that could help me find these pictures and
algorithms will be appreciated.
Thanks !

Jeonghee Yi

------------------------------

Date: Wed, 1 Sep 93 15:54:18 -0600
From: thompson@whiterim.cs.utah.edu (William Thompson)
Subject: Re: Interest Operators

> Can anyone give me pointers to other interest operators that have been
> used or could be used for registering *any* pair of images.

Try using local extrema (maxima AND minima) of a difference of Gaussian
transformation. This works better than the Moravec operator, is
tunable for scale, and can be implemented efficiently if you are
reasonably clever about it. The points found will not be at the actual
position of corners, but they relate to corner position in a
predictable and repeatable manner.
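
A minimal sketch of the operator described above, assuming numpy; the
sigma values, the 3x3 extremum neighbourhood, and the threshold are
illustrative choices, not from the post:

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian smoothing via two 1-D convolutions."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kern = np.exp(-x**2 / (2.0 * sigma**2))
    kern /= kern.sum()
    tmp = np.apply_along_axis(np.convolve, 1, img, kern, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, kern, mode="same")

def dog_interest_points(image, sigma=1.0, k=1.6, thresh=1.0):
    """(row, col) arrays of local extrema of a difference of Gaussians."""
    img = image.astype(float)
    dog = _gaussian_blur(img, sigma) - _gaussian_blur(img, k * sigma)
    # A pixel is interesting if it is the max or min of its 3x3
    # neighbourhood and its magnitude exceeds the threshold.
    padded = np.pad(dog, 1, mode="constant")
    win = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    nmax = win.max(axis=(2, 3))
    nmin = win.min(axis=(2, 3))
    is_max = (dog == nmax) & (dog > thresh)
    is_min = (dog == nmin) & (dog < -thresh)
    return np.nonzero(is_max | is_min)

# Usage: extrema cluster near (not exactly at) the corners of a square.
img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0
rows, cols = dog_interest_points(img)
```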

- Bill

------------------------------

Date: Wed, 1 Sep 93 16:32:37 EDT
From: dulimart@cps.msu.edu
Subject: Re: Interest Operators

> Can anyone give me pointers to other interest operators that have
> been used or could be used for registering *any* pair of images.

It seems to me that you are interested in stereo. Have you looked at
Moravec's thesis?

Hans.

------------------------------

Date: Fri, 03 Sep 1993 22:46:54 -0400 (EDT)
From: "Joe Strout .. Miami University, Oxford OH"
Subject: Re: Distances and areas in 0/1 images

You write that you're doing two-dimensional visual gaging, but your
precision is limited by an aliasing problem: " ... if I threshold by
50% of intensity, I am classifying one pixel as from the object if it
occupies more than half the surface covered by the pixel. This is very
sensitive to small changes in the position of the object ..."

Minimizing this problem requires subpixel processing -- i.e.,
analyzing the data to infer the position of the edge in subpixel
units. For example, if you know the edge forms a straight line,
you can fit a line to your data rather than relying on the
pixels directly from the camera. I'm sure there are other
subpixel processing techniques as well (I'm rather new to the
field ... any suggestions out there?). The system will be much
more robust and tolerant of orientation, noise, and lighting if
you can use a gray-scale vision system instead of a binary one.
The trade-off is more complicated processing. For an excellent
overview, including systemic and statistical error, see "Gaging
with Machine Vision" by Stanley N. Lapidus in the May 1986
_Vision_.
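
As a toy illustration of the line-fitting idea (the synthetic edge
image, the 50% threshold, and the per-row interpolation scheme are
assumptions for the sketch), assuming numpy:

```python
import numpy as np

def subpixel_edge_per_row(row, thresh):
    """Column where the row first crosses thresh, by linear interpolation."""
    above = row >= thresh
    i = int(np.argmax(above))      # index of first pixel at/above thresh
    if i == 0:
        return float(i)
    lo, hi = row[i - 1], row[i]
    return (i - 1) + (thresh - lo) / (hi - lo)

def fit_edge_line(image, thresh):
    """Least-squares line col = m*row + b through per-row edge estimates."""
    rows = np.arange(image.shape[0])
    cols = np.array([subpixel_edge_per_row(r, thresh) for r in image])
    m, b = np.polyfit(rows, cols, 1)
    return m, b

# Synthetic vertical edge at column 10.3: each pixel holds the fraction
# of its area on the bright side of the edge.  The fitted intercept
# lands within a small fraction of a pixel of the true position.
img = np.clip(np.arange(32)[None, :] - 9.8, 0.0, 1.0) * np.ones((16, 1))
m, b = fit_edge_line(img, thresh=0.5)
```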

|o| ///// Joe Strout Department of Psychology |o|
|o| | @ @ Miami University |o|
|o| C _) Oxford, OH 45056 Text> JJSTROUT@MIAVX1.ACS.MUOHIO.EDU |o|
|o| \ o USA NeXT> JSTROUT@ROGUE.ACS.MUOHIO.EDU |o|

------------------------------

Date: Tue, 7 Sep 93 10:48:35 KDT
From: kylee@comsci.yonsei.ac.kr (Lee Kwan-Yong)
Subject: Recognition System using a single mechanism and analysis-by-synthesis

I've worked on a hybrid neural network model, inspired by the visual
system of animals, for visually recognizing characters such as
Hangul (Korean characters), English, Chinese characters, and numerals
entered through on-line input devices.

At present, I'm trying to improve and extend this model to recognize
static (off-line) characters as well as dynamic characters using a
single recognition mechanism. In the case of Hangul, contact
(touching) between the graphemes that make up a syllabic character
is a major source of confusion for the recognition system. To solve
the contact problem efficiently, we'll use an
"analysis-by-synthesis" method.

I really want to develop a model that is valid from the viewpoint of cognitive science.

I'd like to know whether any work has been done on recognition systems
using a single mechanism, and to get references on analysis-by-synthesis,
as well as your advice.


Thanks in advance.

A.I. LAB. Dept. of Computer Science,
Yonsei University SEOUL, KOREA
TEL) +82-2-361-2713 +82-2-393-2186
FAX) +82-2-365-2579
e-mail) kylee@comsci.yonsei.ac.kr

------------------------------

Date: 1 Sep 1993 16:46:12 GMT
From: dchung@ces.cwru.edu (Dukki Chung)
Organization: Computer Engineering and Science, Case Western Reserve University
Subject: Help needed for finding proper color filters for B & W Camera

Hi, I would appreciate it if someone could help me with the following problem.
Please send any replies to dchung@alpha.ces.cwru.edu or post them to the newsgroup.
Thanks in advance.

Problem:
I only have a B&W camera but would like to do color image acquisition. I recall
once reading that one could roughly approximate the color sensitivity of a
color camera by using color filters with a B&W camera. The filters were to
approximate the red, green and blue response of the color camera and had to
be changed to acquire each image. I am very interested in finding out what
these filters were.
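
For what it's worth, once suitable filters are found, the
capture-and-combine step itself is straightforward: grab three
grayscale frames of a static scene, one through each filter, and
stack them as the R, G and B planes. (The Wratten #25/#58/#47B
tricolor set is the traditional starting point for such separations,
though matching a particular camera's response would need checking.)
A minimal numpy sketch, with hypothetical flat test frames:

```python
import numpy as np

def combine_tricolor(red_frame, green_frame, blue_frame):
    """Stack three filtered grayscale captures into one RGB image."""
    return np.stack([red_frame, green_frame, blue_frame],
                    axis=-1).astype(np.uint8)

# Three hypothetical 4x4 captures of a static scene, one per filter:
h, w = 4, 4
r = np.full((h, w), 200)
g = np.full((h, w), 100)
b = np.full((h, w), 50)
rgb = combine_tricolor(r, g, b)      # an (h, w, 3) RGB image
```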

Dukki Chung
dchung@alpha.ces.cwru.edu

------------------------------

Date: Wed, 1 Sep 1993 16:16:56 +0100 (BST)
From: J.Illingworth@ee.surrey.ac.uk
Subject: NEW JOURNAL (Call for Papers)

CALL FOR PAPERS

+++++++++++++++++++++++++++++++++++++++++++++++++++++
NEW JOURNAL FOR IMAGE PROCESSING AND MACHINE VISION
+++++++++++++++++++++++++++++++++++++++++++++++++++++


The IEE (Institution of Electrical Engineers) in the U.K. has re-organised
its Proceedings topics and in February 1994 will be launching a new
journal entitled

*************************
VISION, IMAGE AND
SIGNAL PROCESSING
*************************

The joint Honorary Editors will be:

Professor Peter Grant
Signal Processing Group
Dept of Electrical Engineering
University of Edinburgh
Edinburgh, Scotland

Dr John Illingworth
Vision, Speech and Signal Processing Group
Dept of Electronics and Electrical Engineering
University of Surrey
Guildford, U.K.


This journal encompasses image and signal processing in its widest sense.
Image processing techniques covering sampling, enhancement, restoration,
segmentation, texture, motion and shape analysis are appropriate for the
journal. It also covers source coding techniques which are used in image
coding; for example vector quantisation, transform and sub-band techniques,
motion compensation, standards and 3D-modelling for bit rate reduction in
single images or image sequences. Advances in the field of speech analysis,
coding, recognition and synthesis are appropriate for the journal. Signal
processing includes algorithm advances in single and multi-dimensional
recursive and non-recursive digital filters and multirate filterbanks; signal
transformation techniques; classical, parametric and higher order spectral
analysis; system modelling and adaptive identification techniques.


Papers on novel algorithms for image and signal theory and processing together
with review and tutorial papers on the above topics will be welcomed by the
Honorary Editors. Papers having practical relevance and dealing with
applications of these concepts are particularly encouraged.


To submit a paper send 5 copies of a manuscript of approximately 12 to 16
double spaced A4 pages (or 3000 words) plus 10 to 14 illustrations to:


The Managing Editor: IEE Proceedings,
Institution of Electrical Engineers
Michael Faraday House
Six Hills Way
Stevenage
Herts SG1 2AY
United Kingdom


Longer papers will be considered if they
are of exceptional merit. The journal aims to provide a rapid response to
authors with 90% of all manuscripts dealt with in less than 6 months.
Fuller details of the guide to authors can be found in
the current IEE Proceedings parts.

Dr. J. Illingworth, | Phone: (0483) 509835
V.S.S.P. Group, | Fax : (0483) 34139
Dept of Electronic and Electrical Eng, | Email: J.Illingworth@ee.surrey.ac.uk
University of Surrey, |
Guildford, |
Surrey GU2 5XH |
United Kingdom |

------------------------------

Date: Fri, 03 Sep 93 21:28:50 EDT
From: rogowtz@watson.ibm.com
Subject: CFP: HUMAN VISION, VISUAL PROCESSING and DIGITAL DISPLAY V

***Extended Deadline: Abstracts accepted until September 15, 1993***

SPIE/IS&T Symposium on Electronic Imaging's Conference on:
HUMAN VISION, VISUAL PROCESSING and DIGITAL DISPLAY V
San Jose, California; February 6 - 10, 1994

The goal of this conference is to explore the role of human vision,
perception, and cognition in the design, analysis, and use of
imaging, display, multimedia, visualization and virtual reality systems.

Papers are invited in the following and related areas:

o Models of Human Vision, Perception, and Cognition
o Color Perception and its Applications
- Models of Color Vision
- Color Constancy
- Perceptual Approaches to Device Independent Color
- Color Coding, Color Selection, Color Artifacts
o Psychophysical Assessment of Image Quality
o Quantization and Halftoning
- Interaction of Space, Time, and Color
- Model-Based Algorithms
- Perception of Quantized and Halftoned Images
o Vision-Based Algorithms for Image Compression, Enhancement,
Restoration, and Reconstruction
o Higher-Level Processes
- Image Semantics, Segmentation, and Representation
- Applications for Attentive and Pre-Attentive Vision
- Task-Dependent Representations of Data
o Psychophysical Aspects of Multimedia and Image Management Systems
o Perception and Performance in Virtual Environments
o Interactive Visualization, Manipulation, and Exploration

Please send your 500-word abstract to: Jane Lybecker at SPIE:
abstract@mom.spie.org
(206) 647-1445 <fax>
For more information, contact the Conference Chairs:
Bernice E. Rogowitz
IBM T.J. Watson Research Center
rogowtz@watson.ibm.com

Jan Allebach
Purdue University
allebach@ee.ecn.purdue.edu

------------------------------

End of VISION-LIST digest 12.40
************************
