VISION-LIST Digest    Mon Apr 29 11:30:16 PDT 91     Volume 10 : Issue 20 

- Send submissions to Vision-List@ADS.COM
- Send requests for list membership to Vision-List-Request@ADS.COM
- Access Vision List Archives via anonymous ftp to ADS.COM

Today's Topics:

Image sequences wanted
Re: Realtime stereo machine?
Realtime stereo machine?
Re: Range Data + Model
Re: Range Data + Model
Real Time Stereo vision for Mobile Robots
CFP: Special Issue on Neural Networks
SPIE Symposium: Image Processing - Implementation and Systems
IEEE Workshop on Directions in Automated CAD-Based Vision
CVNet- Change of Editorship for Spatial Vision.

----------------------------------------------------------------------

Date: Fri, 26 Apr 91 15:23:26 +0200
From: Juhui.Wang@irisa.fr (juhui wang)
Subject: Image sequences wanted

I'm looking for calibrated image sequences of general human motion
(walking, golf, ...). Please contact me if you have such data or can
tell me where I could get it. Thanks.

Juhui WANG | wang@irisa.fr
IRISA/INRIA--Rennes | TEL: 33-99 36 20 00
Campus Universitaire de Beaulieu |
35042 RENNES Cedex, France |

------------------------------

From: Thierry Vieville <Thierry.Vieville@mirsa.inria.fr>
Subject: Re: Realtime stereo machine?
Date: Thu, 25 Apr 91 08:34:53 +0200

> Does anyone know of a high-performance, practical, realtime stereo system
> which uses only cameras?

The European DMA machine provides stereo and 3D reconstruction at more
than 1 Hz from 3 cameras, for an indoor scene from 0.5 to about 5
meters. With temporal data fusion and precise sensor calibration, the
precision is better than 1%.
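
As a rough sanity check on a figure like that, the standard pinhole
triangulation relation Z = f*B/d shows how disparity precision maps to
relative depth error. The Python sketch below is not the DMA machine's
algorithm; the focal length, baseline and disparity noise are assumed values
chosen purely for illustration.

# Back-of-the-envelope stereo depth precision from Z = f*B/d.
# All numeric values are illustrative assumptions, not DMA parameters.

def depth_from_disparity(f_pixels, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair."""
    return f_pixels * baseline_m / disparity_px

def relative_depth_error(f_pixels, baseline_m, depth_m, disparity_sigma_px):
    """|dZ|/Z = Z * sigma_d / (f * B), from differentiating Z = f*B/d."""
    return depth_m * disparity_sigma_px / (f_pixels * baseline_m)

if __name__ == "__main__":
    f, B, sigma_d = 800.0, 0.30, 0.1    # pixels, metres, pixels (assumed)
    for Z in (0.5, 1.0, 2.0, 5.0):
        print("Z = %.1f m -> relative depth error ~ %.2f%%"
              % (Z, 100.0 * relative_depth_error(f, B, Z, sigma_d)))

For a fixed disparity uncertainty the relative error grows linearly with
depth, which is why precise calibration and temporal fusion matter most at
the far end of the working range.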

A few prototypes are now under final check.

Thierry Vieville, INRIA, France.

------------------------------

Date: Thu, 25 Apr 91 16:52 PDT
From: "H. Keith Nishihara" <hkn@natasha.teleos.com>
Subject: Realtime stereo machine?
Phone: (415)328-8886
Us-Mail: Teleos Research, 576 Middlefield Road, Palo Alto, CA 94301

Date: 23 Apr 91 16:28:51 GMT
From: tnaka@ius3.ius.cs.cmu.edu (Tomoharu Nakahara)

Does anyone know of a high-performance, practical, realtime stereo system
which uses only cameras? I would also like to know the processing time and
the range resolution of the system.
Thanks in advance.

You might be interested in the PRISM-3 system that we have developed
at Teleos Research. It implements the sign-correlation algorithm I
developed at MIT some years ago for binocular stereo and optical
flow measurement. It is a self-contained system composed of an active
stereo camera head with pan, tilt, and vergence control and a desktop
VME box. The box includes digitizers, video-rate Laplacian of
Gaussian convolvers (with kernel sizes up to 60 by 60 pixels), a
high-speed correlator (36 parallel correlation measurements in 100
microseconds with thousand-point windows), a motor controller board,
and a dedicated 68030 control processor. The box can operate on its
own or as a measurement server for a workstation over Ethernet.

We recently completed the first prototype system, and it performs at a
rate of about 100 range or velocity measurements per video frame time
(30 ms). We expect this to improve substantially as we tune the control
software. We have been getting disparity resolutions between
1/3 and 1/10 pixel, depending on noise levels. I can send you more
details if this is of interest to you.
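
For anyone who wants to try the matching criterion itself, here is a rough
Python sketch of sign-of-Laplacian-of-Gaussian correlation stereo. It is not
the PRISM-3 implementation: the filter sigma, window size and disparity
search range are assumptions chosen for illustration, and in software it
runs nowhere near video rate.

import numpy as np
from scipy.ndimage import gaussian_laplace

def sign_of_log(image, sigma=4.0):
    """Binary sign map of the Laplacian-of-Gaussian filtered image."""
    return gaussian_laplace(image.astype(float), sigma) > 0.0

def disparity_at(left_sign, right_sign, row, col, half_win=16, max_disp=32):
    """Disparity that maximises sign-bit agreement in a square window
    centred at (row, col); the caller keeps the window inside the image."""
    win_l = left_sign[row - half_win:row + half_win + 1,
                      col - half_win:col + half_win + 1]
    best_d, best_score = 0, -1.0
    for d in range(max_disp + 1):
        lo = col - d - half_win
        if lo < 0:
            break
        win_r = right_sign[row - half_win:row + half_win + 1,
                           lo:lo + 2 * half_win + 1]
        score = np.mean(win_l == win_r)   # fraction of matching sign bits
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score

Sub-pixel figures like the 1/3 to 1/10 pixel resolutions mentioned above
would come from interpolating the correlation score around its peak; the
dedicated convolver and correlator boards are what make the same operations
feasible at frame rate.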

Keith

------------------------------

Date: Thu, 25 Apr 91 09:13:48 PDT
From: stein%iris.usc.edu@usc.edu (Fridtjof Stein)
Subject: Re: Range Data + Model

In comp.ai.vision you write:
>Is there anybody who has a range database suitable for object
>recognition research (i.e. range data of one or more objects of
>various types along with their "model" to be matched), or who knows someone
>who might have such a database? We would be grateful for your information.

My research interest is object recognition. I developed a method to match
general 3D free form surfaces (Stein & Medioni IUW90, Stein & Medioni CVPR91).
We have quite a few models (also different views). Most of them are busts
of composers. We obtained them with a liquid crystal range finder.

If you are interested please contact me by email: stein@iris.usc.edu

Fridtjof Stein                                  * Use email - save a tree! *
Institute for Robotics and Intelligent Systems  Email:  stein@iris.usc.edu
University of Southern California, PHE 233      Office: (213) 740 6435
Los Angeles, California 90089-0273              Fax:    (213) 744 0940
                                                Home:   (213) 836 7950

------------------------------

Date: Mon, 29 Apr 1991 02:55:52 GMT
From: ron@monu6.cc.monash.edu.au (Ron Van Schyndel)
Organization: Caulfield Campus, Monash University, Melb., Australia.
Subject: Re: Range Data + Model

Vision-List@ADS.COM writes:
>Is there anybody who has a range database suitable for object
>recognition research (i.e. range data of one or more objects of
>various types along with their "model" to be matched), or who knows someone
>who might have such a database? We would be grateful for your information.

Try using a ray-tracer to produce an image of a 3D environment, then work
backwards.

Advantages:
* Complete control of the 3D environment (and ACCURATE knowledge of object
positions).
* Complete knowledge of the 3D/2D conversion matrices.
* The ability to model your detection system's ruggedness amidst a variety of
noise models (which can be applied to ANY stage of the 3D/2D conversion).

Disadvantages:
* Reflection models (Lambertian, Phong, etc.) are empirically applied in
most of the ray tracers I know about -- they may not be sufficiently
accurate for your needs.
* The noise/sampling models used are again empirically applied, with the
criterion for correctness being "if it looks more 'REAL', it's better".
Again, this may not be acceptable to you.

An excellent book on reflection models is by Roy Hall, published last
year (I think; I can't remember the title, but it's listed in the FAQ in
comp.graphics).

Of course, I haven't really answered your question, since in the end your
system still has to work in the real world, but it's enough to get 3/4
of the bugs etc. out of the way -- and in a completely controlled environment.
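
To make the idea concrete, here is a minimal Python sketch that ray-casts a
single sphere from a pinhole camera to produce a synthetic range image with
exactly known geometry, then adds Gaussian range noise. The intrinsics,
sphere pose and noise level are illustrative assumptions; a real ray tracer
would handle arbitrary models and reflection effects.

import numpy as np

def sphere_range_image(width=128, height=128, f=150.0,
                       centre=(0.0, 0.0, 5.0), radius=1.0, noise_sigma=0.01):
    """Depth map (in scene units) of a sphere seen by a pinhole camera at
    the origin looking down +Z, with additive Gaussian range noise."""
    cx, cy = width / 2.0, height / 2.0
    u, v = np.meshgrid(np.arange(width) - cx, np.arange(height) - cy)
    # Unit ray direction through each pixel.
    dirs = np.dstack([u / f, v / f, np.ones_like(u, dtype=float)])
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)
    c = np.array(centre, dtype=float)
    # Ray-sphere intersection: ||t*d - c||^2 = r^2 is a quadratic in t.
    b = dirs @ c                                  # d . c  (||d|| = 1)
    disc = b * b - (c @ c - radius * radius)
    t = b - np.sqrt(np.maximum(disc, 0.0))        # nearest intersection
    depth = np.where(disc >= 0.0, t * dirs[..., 2], np.nan)
    return depth + np.random.normal(0.0, noise_sigma, depth.shape)

Since the sphere's centre and radius are the "model", this gives both halves
of the database the original question asked for, with the ground truth known
exactly.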

PS. There was a discussion on this subject a week or two ago in comp.graphics.

Hope this helps, RON

Ron van Schyndel ron@monu6.cc.monash.edu.au
Physics Department, Monash University ron%monu6.cc.monash.edu.au@uunet.UU.NET
CAULFIELD EAST, Victoria, AUSTRALIA {hplabs,mcvax,uunet,ukc}!munnari!monu6..
Location: 37 52 38.8S 145 02 42.0E Phone: +613-573-2567 Fax: +613-573-2358

------------------------------

From: na@bora.inria.fr (Nicholas Ayache)
Subject: Real Time Stereo vision for Mobile Robots
Date: Wed, 24 Apr 91 08:41:09 +0200

Those interested in real-time stereo vision and multisensory perception
can refer to:

1) N. Ayache, Artificial Vision for Mobile Robots, MIT Press, Cambridge, 1991.

2) N. Ayache, Vision stéréoscopique et Perception Multisensorielle,
Inter-Editions, Paris, 1989 (in French).

------------------------------

Date: Wed, 24 Apr 91 06:56:06 GMT
From: eba@computing-maths.cardiff.ac.uk (Eduardo Bayro)
Organization: Univ. of Wales Coll. of Cardiff, Dept. of Electronic & Systems Engineering
Subject: CFP: Special Issue on Neural Networks

Journal of Systems Engineering --- Springer Verlag

SPECIAL ISSUE ON NEURAL NETWORKS
CALL FOR PAPERS

A Special Issue of the Journal of Systems Engineering (Springer-
Verlag) will be devoted to Neural Networks and their applications in
systems engineering. Original contributions on theoretical and
practical aspects of neural computing are invited. Topics of interest
include: new network models and architectures; learning algorithms;
comparative studies of different networks; neural networks and fuzzy
theory; genetic algorithms for net optimisation; hybrid and
connectionist expert systems; applications (systems identification
and control, pattern recognition, signal processing, vision,
telecommunications, manufacturing, robotics etc.).

Please submit papers (3 copies) by November 1991, to the following address:

Prof. D.T. Pham (Editor)
Journal of Systems Engineering, Springer-Verlag
School of Electrical, Electronic and Systems Engineering,
University of Wales College of Cardiff,
P.O. Box 904,
Cardiff CF1 3YH,
United Kingdom.
Tel: ++44(0)222 874429
Fax: ++44(0)222 874192
Telex: ++44(0)222 497368
Internet: PhamDT%cardiff.ac.uk@nsf.ac.uk
UUCP: ...!ukc!cardiff!PhamDT
email:PhamDT@uk.ac.cardiff

------------------------------

Date: Mon, 29 Apr 91 10:27:10 -0800
From: FLICK@IBM.COM
Subject: SPIE Symposium: Image Processing - Implementation and Systems

Announcement and Call for Papers

IMAGE PROCESSING - IMPLEMENTATIONS AND SYSTEMS

Part of the SPIE/SPSE SYMPOSIUM ON ELECTRONIC IMAGING: SCIENCE AND TECHNOLOGY
9-14 February 1992 * San Jose, California

Conference Chairs: Ronald B. Arps, IBM Almaden Research Center;
William K. Pratt, Sun Microsystems, Inc.

Cochairs: Myron D. Flickner, IBM Almaden Research Center;
Robert M. Haralick, University of Washington

During the past few years, two important trends have emerged in the
implementation of image processing systems. The first is the creation of
a new generation of image processing chips, which offer an astonishing
degree of capability in small packages at relatively low cost. The second
is the development of standards for image processing and interchange.
These implementation trends are likely to profoundly affect the design of
the next generation of image processing systems.

This conference will seek to cover both hardware and software. In
the hardware area, surveys will be given on the capabilities and
functionalities of programmable and fixed function, VLSI image
processing chips that have recently been introduced. The theme of
this area will be exploration of how the availability of such chips
will affect the design of future image processing systems. In the
software area, surveys will be presented on the emerging de jure software
standards being developed by the International Standards Organization
and the X Consortium, as well as de facto standards such as TIFF for
image transport. The theme of this area will be how these software
standards will affect the design of image processing systems, with
emphasis on the development of associated software development tools
and graphical user interfaces.

The conference will bring together individuals who are concerned with
the implementation of image processing and interchange systems from
both hardware and software perspectives. Papers are solicited on the
following and related topics:

* programmable and fixed function image processing and interchange chips
* image processing hardware architectures and systems
* image processing and interchange standards
* image processing software architectures and systems
* image processing software tools and utilities (including compilers,
graphical user interfaces, and image data base systems)

------------------------------

Date: Mon, 29 Apr 91 00:55:46 EDT
From: Dr Kevin Bowyer <kwb@midgit.csee.usf.edu>
Subject: IEEE Workshop on Directions in Automated CAD-Based Vision

IEEE WORKSHOP ON DIRECTIONS IN AUTOMATED CAD-BASED VISION

Sunday-Monday, June 2-3, Maui Marriott (location of CVPR '91)

Revised Preliminary Program

General Chairman: Linda Shapiro, Univ. of Washington
Program Chairman: Kevin Bowyer, Univ. of South Florida

Program Committee
Avi Kak, Purdue University
Joe Mundy, General Electric C.R.&D.
Yoshiaki Shirai, Osaka University
George Stockman, Michigan State University
Jean Ponce, University of Illinois
Katsushi Ikeuchi, Carnegie-Mellon University
Tom Henderson, University of Utah
Horst Bunke, Universitat Berne
Prasanna Mulgaonkar, S.R.I. Int'l
Charles Dyer, University of Wisconsin


Sunday, June 2
8:00 - 10:00 Registration
9:00 - 9:15 Opening Remarks
9:15 - 10:50 Session Theme -- Use of Knowledge About Lighting
Discussion facilitator: Alex Pentland, M.I.T.
Model based recognition of specular objects using sensor models
Sato, Ikeuchi and Kanade
Premio: an overview
Camps, Shapiro and Haralick
Automatic camera and light-source placement using CAD models
Cowan and Bergman
10:50 - 11:05 Coffee Break
11:05 - 12:40 Session Theme -- Aspect Graph Variations
Discussion facilitator: Jitendra Malik, Univ. of California at Berkeley
On the characteristic views of quadric-surfaced solids
Chen and Freeman
Perspective projection aspect graphs of solids of revolution
Eggert and Bowyer
Viewpoint from occluding contour
Seales and Dyer
12:40 - 1:45 Lunch
1:45 - 3:45 Discussion on the beach
3:45 - 5:20 Session Theme -- Geometry and Matching
Discussion facilitator: Bir Bhanu, Univ. of California at Riverside
Computing stable poses of piecewise smooth objects
Kriegman
Implementation of geometric hashing on the connection machine
Rigoutsos and Hummel
From volumes to views: an approach to 3-D object recognition
Dickinson, Pentland and Rosenfeld
5:20 - 5:30 Coffee Break
5:30 - 7:00 Panel Theme: Why Aspect Graphs Are Not (Yet) Practical
Are Not: Olivier Faugeras, Joe Mundy, Narendra Ahuja
Are Not Yet: Ramesh Jain, Alex Pentland, Chuck Dyer
7:00 Reception

Monday, June 3
9:00 - 10:35 Session Theme -- Feature Utility for Object Recognition
Discussion facilitator: Tom Henderson, University of Utah
CAD-based feature-utility measures for automatic vision programming
Chen and Mulgaonkar
3D object recognition using invariant feature indexing of interpretation table
Flynn and Jain
Generating automatic recognition strategies using CAD models
Arman and Aggarwal
10:35 - 10:50 Coffee Break
10:50 - 12:25 Session Theme -- Considerations in Building Systems
Discussion facilitator: Kim Boyer, Ohio State University
On using CAD models to compute the pose of curved 3D objects
Ponce, Hoogs and Kriegman
CBCV: a CAD-based vision system
Henderson, Evans, Grayston, Sanderson, Stoller and Weitz
CAD based vision: using a vision cell demonstrator
West, Fernando and Dew
12:25 - 1:30 Lunch
1:30 - 3:45 Discussion on the beach
3:45 - 5:20 Session Theme -- Toward ``Generic'' Representation and Recognition
Discussion facilitator: Kevin Bowyer, Univ. of South Florida
A robot vision system for recognition of generic shaped objects
Vayda and Kak
A generic bridge finder
Vergnet, Saint-Marc and Jezouin
Context-constrained matching of hierarchical CAD-based models for
outdoor scenes
Kadono, Asada and Shirai
5:20 - 5:30 Coffee Break
5:30 - 7:00 Panel Theme: State-of-the-Art in CAD-Based Vision Systems
Session Organizer: Avi Kak
Participants: T. Henderson, K. Ikeuchi, A. Kak, J. Ponce, J. Mundy, A. Pentland
7:00 Closing Remarks

Registration Information for CAD-Based Vision Workshop

The workshop is on June 2-3, 1991, at the same location as CVPR '91.
(The CVPR tutorials are on June 3. CVPR itself is on June 4-6.)
The CVPR conference hotel is the MAUI MARRIOTT on KAANAPALI RESORT.
The CVPR conference rate is $110 for single or double occupancy,
with a $25 charge for each additional person. The rate is good
from MAY 30 until JUNE 10. Reservations should be made directly
with the hotel at (808) 667-1200. Please mention that you are
attending the IEEE CVPR-91 conference. A 1-night deposit is required
within 10 days of arrangement for a guaranteed reservation.

Registration fees:
Advance Registration (received BEFORE May 7)
IEEE Member ............................. $185
Non-member ............................. $230
IEEE member & full-time student ........ $100
Registration after May 7
IEEE Member ............................. $225
Non-member ............................. $280
IEEE member & full-time student ........ $125

The registration fee includes a copy of the proceedings, two lunches,
four breaks and the Sunday reception. Registration fee should be paid
by check or money order made out to ``CAD-Based Vision Workshop.''
(Sorry -- we are not able to take credit cards.)


(clip & mail) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


Name & mailing address:

_________________________________________

_________________________________________

_________________________________________

_________________________________________

_________________________________________

_________________________________________

Mail to:
Steve Graham
(CAD-Based Vision Workshop)
Electrical Engineering Department, FT-10
University of Washington
Seattle, Washington 98195

Phone: ___________________________ e-mail: ________________________________

IEEE membership number: ______________________ Registration fee: ___________

------------------------------

Date: Thu, 25 Apr 1991 20:14:46 -0400
From: cvnet%yorkvm1.bitnet@utcs.utoronto.ca
Subject: CVNet- Change of Editorship for Spatial Vision.

Spatial Vision has been in existence for six years, and is publishing its
fifth volume. I have decided it's time for a new Editor-in-Chief for the
Americas, and will resign at the end of this year. David Foster is still
Editor-in-Chief for Europe, etc., so continuity will be maintained.
I am happy to report that our recommendation to the Board and the Publisher
for a replacement has been accepted with enthusiasm. The new Editor for
the Americas will be Adam Reeves, of Northeastern University. Formally his
appointment will start with Volume six, but he will be ready to accept papers
for review from June 1st 1991.
Let me urge you to support Adam, and the Journal; the best way of doing so
is to submit papers at your earliest opportunity!
Peter Dodwell
Co-Editor-in-Chief
Spatial Vision
PS. If you want more information about Spatial Vision, let me know. I have
some informative brochures for distribution. PCD.

------------------------------

End of VISION-LIST digest 10.20
************************
