VISION-LIST Digest Volume 10 Issue 16

VISION-LIST Digest    Thu Apr 04 09:55:55 PDT 91     Volume 10 : Issue 16 

- Send submissions to Vision-List@ADS.COM
- Send requests for list membership to Vision-List-Request@ADS.COM
- Access Vision List Archives via anonymous ftp to ADS.COM

Today's Topics:

Re: GEOMED
Re: Lip reading
Re: stereo ground truth
Graphics ==> Modelling <== Vision
IEEE Workshop on Directions in Automated CAD-Based Vision
Tech reports available: Constraint Networks in Vision

----------------------------------------------------------------------

From: brooks@ai.mit.edu (Rodney A. Brooks)
Date: Tue, 2 Apr 91 00:40:10 EST
Subject: Re: GEOMED

That was the system implemented by Bruce Baumgart at the Stanford AI
Lab for his PhD thesis. It uses winged edge polyhedra. It is
described in great detail in Stanford AI Memo 249, 1974. It was
written in SAIL (an ALGOL-like language). Most of the code is
included in the thesis, although I recall that there was at least one
serious typo. I reimplemented this in Lisp for the MIT AI Lab CADRs
(the pre-commercial Lisp machines) in about 1981. The bits are old
and dusty on archive tapes somewhere and would take at least a week of
somebody's work just to get them online. Unless you have a really,
really, really good reason to want GEOMED I suggest you try a more
recent modelling system---99% of all work (at least quantity-wise) on
geometric modelling has been done since Bruce finished.
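[For readers who have not met the representation: a winged-edge structure stores, for each edge, its two endpoint vertices, the two faces it separates, and pointers to the neighbouring "wing" edges around each face, so face boundaries can be walked without searching. A minimal Python sketch; the names are illustrative, not GEOMED's or the thesis's.]

```python
from dataclasses import dataclass

# Minimal winged-edge record: each edge knows its two endpoint vertices,
# the two faces it separates, and its neighbouring "wing" edges around
# each face.  Names are illustrative, not GEOMED's.
@dataclass
class Edge:
    v_start: int
    v_end: int
    face_left: int = -1
    face_right: int = -1
    next_left: "Edge" = None     # next edge CCW around face_left
    prev_left: "Edge" = None
    next_right: "Edge" = None    # next edge around face_right
    prev_right: "Edge" = None

def face_loop(start, face):
    """Collect the edges bounding `face` by following the wing pointers."""
    e, loop = start, []
    while True:
        loop.append(e)
        e = e.next_left if e.face_left == face else e.next_right
        if e is start:
            return loop

# A single triangular face (face 0) bounded by three edges; the outer
# face is left unset (-1) for brevity.
e01, e12, e20 = Edge(0, 1), Edge(1, 2), Edge(2, 0)
for a, b in [(e01, e12), (e12, e20), (e20, e01)]:
    a.face_left = 0
    a.next_left, b.prev_left = b, a
triangle = face_loop(e01, 0)     # the three boundary edges, in order
```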

------------------------------

Date: Tue, 2 Apr 91 15:43:33 EST
From: Sandy (Alex) Pentland <sandy@westminster.media.mit.edu>
Subject: Lip reading

Other machine vision references on lip reading are:

Mase, K. and Pentland, A., (1990) Automatic Lipreading
by Computer, {\sl Trans. Inst. Elec. Info. and Comm. Eng.},
vol. J73-D-II, No. 6, pp. 796-803, June 1990.

Mase, K. and Pentland, A., (1989) Lip Reading: Automatic Visual
Recognition of Spoken Words, {\sl Optical Society of America Topical
Meeting on Machine Vision}, pp. 1565-1570, June 12-15, 1989, North
Falmouth, MA.

These are also available as technical reports from
Vision And Modeling Group, c/o Prof. Alex Pentland
E15-387, The Media Lab, M.I.T.
20 Ames St.,
Cambridge, MA 02139

E. Petajan of Bell Labs has also published on lip reading in IEEE
CVPR in '85 and ACM SIGCHI in '88.

------------------------------

Date: Tue, 2 Apr 91 18:20:18 EST
From: mccool@dgp.toronto.edu (Michael McCool)
Subject: Re: stereo ground truth

In comp.ai.vision you write:
>> Derek Tolley (tolley@eola.cs.ucf.edu) writes:
>> We are interested in finding disparity maps with ground truth
>> for some stereo pairs. We are trying to compare the accuracy of some
>> stereo algorithms, and would like to have a verified disparity map to
>> compare with the results. If anyone has the sites where these pairs
>> and their disparity maps can be found or other information, please
>> email me. Thank you.

>GOOD LUCK!!!! If you find any, let me know, as I've been looking for
>such data for YEARS!

Why can't artificial images be used? Good reflectance models are now
available in computer graphics. "Photorealistic" images, i.e. images
with a large amount of complexity, can now be generated. It might be
argued that such images would not be as good a test as real images,
yet random-dot stereograms are commonly used as test cases, and are
blatantly artificial. Use of computer-generated imagery would allow
precise control over image features and camera geometry, all without
building photography jigs. AND the ground truth would be trivially
available.
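[Even the random-dot case gives exact ground truth in a few lines. A toy numpy sketch, with my own variable names, that builds a stereo pair in which a square patch floats in front of the background by a known disparity:]

```python
import numpy as np

# Toy random-dot stereogram with exactly known ground-truth disparity:
# a square patch floats in front of the background by `shift` pixels.
rng = np.random.default_rng(42)
h, w, shift = 64, 64, 4
left = rng.integers(0, 2, size=(h, w)).astype(np.uint8)

disparity = np.zeros((h, w), dtype=int)
disparity[16:48, 16:48] = shift                 # the floating patch

right = left.copy()
# In the right image the patch appears `shift` pixels to the left...
right[16:48, 16 - shift:48 - shift] = left[16:48, 16:48]
# ...and the background it uncovers gets fresh random dots (those
# pixels are occluded in the left view, so any dots are consistent).
right[16:48, 48 - shift:48] = rng.integers(0, 2, size=(32, shift))
```

Any stereo algorithm's output on (left, right) can then be scored pixel-for-pixel against `disparity`.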

Michael McCool
Dynamic Graphics Project, University of Toronto

------------------------------

Date: Tue, 2 Apr 91 18:56:31 EST
From: mccool@dgp.toronto.edu (Michael McCool)
Subject: Graphics ==> Modelling <== Vision
Keywords: Stereo, Model-based Reconstruction

This posting has two parts: a plea for corrected images for a project
I am contemplating, and a flame-attractor general comment on computer
graphics and vision.

Plea for Data:
=============|
I am interested in reconstructing models for use in computer graphics
from multiple views of a single object, given a simple hypothesised model.
I need pictures of objects with a simple underlying shape, but complex
surface features (for example, buildings). I don't particularly want to
deal with having to correct for lens distortion, etc, which is why I am
posting here. If anyone has any precorrected images of such objects
from multiple angles, I would be very appreciative. Accurate positions
of cameras with respect to one another and possibly the object would also
be nice, etc. Does anyone have such data or know of a site from which
it can be ftp'd?

Graphics, Modelling, and Vision:
===============================|
On a more general vein, I have just started my Ph.D. here in the Graphics
Lab at the University of Toronto and am thrashing about, looking for
a good topic. Something along the above line comes to mind: useful
combinations of computer vision and graphics.

One example: currently, graphics people spend a lot of time constructing
complex models, which is just plain silly since computer vision, properly
applied, could eliminate most of this work if a real object is being modelled.
The questions are, how and when should these techniques be applied, what
particular features of the computer-graphics modelling goal can be exploited to
simplify the vision problem, and how do vision and graphics modelling
techniques dovetail? For example, what should the roles of texture and
displacement maps be? What should the role of human intervention be?

Jumping completely off the wharf, the concept of a "cosmic derivative" has
come up in talks I've had with Demetri Terzopoulos here at Toronto.
Compare a rendered picture with a picture of the real object, and then
optimize the parameters of the graphics model so that its rendered picture
is maximally similar to the real picture. The graphics model should now be an
accurate representation of the real object, at least given the information
in the picture. The question is, how should this operation be performed?
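One crude way to perform it, assuming the renderer is smooth in its parameters, is plain descent on the pixel-wise error between the rendered and the real picture. A toy numpy sketch of my own (the "renderer" here is just a Gaussian blob with an unknown centre, standing in for a full graphics model):

```python
import numpy as np

# Analysis-by-synthesis sketch: render a parametric model, compare with
# the "real" image, and descend on the pixel-wise squared error using a
# numerical gradient.  The renderer is a toy: a Gaussian blob whose
# centre (cx, cy) is the unknown model parameter vector.
xs, ys = np.meshgrid(np.arange(32), np.arange(32))

def render(cx, cy):
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 20.0)

target = render(20.0, 11.0)        # stands in for the real photograph
params = np.array([16.0, 14.0])    # initial guess for (cx, cy)

for _ in range(200):
    err0 = np.sum((render(*params) - target) ** 2)
    grad = np.zeros(2)
    for i in range(2):             # forward-difference gradient
        step = np.zeros(2)
        step[i] = 1e-3
        grad[i] = (np.sum((render(*(params + step)) - target) ** 2)
                   - err0) / 1e-3
    params -= 0.3 * grad

# params should now be close to the "real" centre (20, 11)
```

A real system would replace the blob with a full rendering pipeline and the numerical gradient with something smarter, but the loop is the same.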

I can imagine this topic has probably been addressed before. Can anyone point
me in the general direction of thought in this area? Can anyone relate
why such techniques are not in more general use? I'm currently grinding
my way through the computer vision literature, but helpful hints and
thought-provoking discussion are always welcome.

Michael McCool@dgp.toronto.edu

------------------------------

Date: Tue, 2 Apr 91 11:39:51 EST
From: Dr Kevin Bowyer <kwb@zoot.csee.usf.edu>
Subject: IEEE Workshop on Directions in Automated CAD-Based Vision

IEEE Workshop on Directions in Automated CAD-Based Vision
Preliminary Program

Sunday, June 2
8:00 - 10:00 Registration
9:15 - 9:30 Opening Remarks
9:30 - 10:50 Session Theme -- Use of Knowledge About Lighting
Model based recognition of specular objects using sensor models
Sato, Ikeuchi and Kanade
Premio: an overview
Camps, Shapiro and Haralick
Automatic camera and light-source placement using CAD models
Cowan and Bergman
10:50 - 11:05 Coffee Break
11:05 - 12:25 Session Theme -- Aspect Graph Variations
On the characteristic views of quadric-surfaced solids
Chen and Freeman
Perspective projection aspect graphs of solids of revolution
Eggert and Bowyer
Viewpoint from occluding contour
Seales and Dyer
12:25 - 1:30 Lunch
1:30 - 3:30 Discussion on the beach
3:30 - 5:15 Session Theme -- Geometry and Parallelism
Computing stable poses of piecewise smooth objects
Kriegman
Implementation of geometric hashing on the connection machine
Rigoutsos and Hummel
From volumes to views: an approach to 3-D object recognition
Dickinson, Pentland and Rosenfeld
5:15 - 5:30 Coffee Break
5:30 - 6:45 Panel Theme: Why Aspect Graphs Are Not (Yet) Practical
Are Not: Olivier Faugeras, Joe Mundy, Narendra Ahuja
Are Not Yet: Ramesh Jain, Alex Pentland, Chuck Dyer, Katsushi
Ikeuchi
7:00 Reception

Monday, June 3
9:30 - 10:50 Session Theme -- Feature Utility for Object Recognition
CAD-based feature-utility measures for automatic vision programming
Chen and Mulgaonkar
3D object recognition using invariant feature indexing of interpretation table
Flynn and Jain
Generating automatic recognition strategies using CAD models
Arman and Aggarwal
10:50 - 11:05 Coffee Break
11:05 - 12:25 Session Theme -- Considerations in Building Systems
On using CAD models to compute the pose of curved 3D objects
Ponce, Hoogs and Kriegman
CBCV: a CAD-based vision system
Henderson, Evans, Grayston, Sanderson, Stoller and Weitz
CAD based vision: using a vision cell demonstrator
West, Fernando and Dew
12:25 - 1:30 Lunch
1:30 - 3:30 Discussion on the beach
3:30 - 5:15 Session Theme -- Toward ``Generic'' Representation and Recognition
A robot vision system for recognition of generic shaped objects
Vayda and Kak
A generic bridge finder
Vergnet, Saint-Marc and Jezouin
Context-constrained matching of hierarchical CAD-based models for
outdoor scenes
Kadono, Asada and Shirai
5:15 - 5:30 Coffee Break
5:30 - 6:45 Panel Theme: State-of-the-Art in CAD-Based Vision Systems
Session Organizer: Avi Kak
Participants: to be announced
6:45 - 7:00 Closing Remarks

Registration Information for CAD-Based Vision Workshop

The workshop is on June 2-3, 1991, at the same location as CVPR '91.
(The CVPR tutorials are on June 3. CVPR itself is on June 4-6.)
The CVPR conference hotel is the MAUI MARRIOTT on KAANAPALI RESORT.
The CVPR conference rate is $110 for single or double occupancy,
with a $25 charge for each additional person. The rate is good
from MAY 30 until JUNE 10. Reservations should be made directly
with the hotel at (808) 667-1200. Please mention that you are
attending the IEEE CVPR-91 conference. A 1-night deposit is required
within 10 days of arrangement for a guaranteed reservation.

Registration fees:
Advance Registration (received BEFORE May 7)
IEEE Member ............................. $185
Non-member .............................. $230
IEEE member & full-time student ......... $100
Registration after May 7
IEEE Member ............................. $225
Non-member .............................. $280
IEEE member & full-time student ......... $125

The registration fee includes a copy of the proceedings, two lunches,
four breaks and the Sunday reception. The registration fee should be
paid by check or by money order made out to ``CAD-Based Vision
Workshop.'' (Sorry -- we are not able to take credit cards.)

(clip & mail) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Name & mailing address:

_________________________________________

_________________________________________

_________________________________________

_________________________________________

Phone: ___________________________  e-mail: ________________________________

IEEE membership number: ______________________  Registration fee: ___________

Mail to:
Steve Graham
(CAD-Based Vision Workshop)
Electrical Engineering Department, FT-10
University of Washington
Seattle, Washington 98195

------------------------------

Date: 4 Apr 91 04:37:23 GMT
From: suter@latcs1.oz.au (David Suter)
Organization: Comp Sci, La Trobe Uni, Australia
Subject: Tech reports available: Constraint Networks in Vision
Keywords: neural networks, finite elements, visual reconstruction

Technical Reports Available
Constraint Networks in Vision / Mixed Finite Elements

The author has investigated methods of using (Augmented) Lagrangian approaches
in various ways and in various problems in computer vision. These Augmented
Lagrangian approaches generalise the (Penalty) based approaches
used in most prior work. The major advances resulting from this approach are
1) The method can be used to reformulate visual reconstruction problems
in a manner that uses mixed finite elements and thus allows the use
of many computational schemes similar to those used in fluid mechanics.
The mixed finite elements are simpler than those finite elements used
previously.
2) The method lends itself to analog network solutions in a straightforward
way by employing the approach of Platt.
3) The realization of these analog networks in terms of transconductance
amplifiers (ala Mead) is simple - the networks produced by this method
are far simpler than those proposed by others (Harris).
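[To make point 1 concrete, here is a toy numpy sketch of my own, not taken from the reports: a mixed formulation of 1-D reconstruction in which an auxiliary variable p stands for the derivative of the solution, and the constraint p = Du is enforced with Lagrange multipliers plus a quadratic penalty, minimised alternately (ADMM-style).]

```python
import numpy as np

# 1-D "visual reconstruction": smooth noisy data d.  Rather than
# penalising the derivative of u directly, introduce an auxiliary
# variable p meant to equal the forward difference Du (the mixed
# formulation) and enforce p = Du with an augmented Lagrangian:
# multipliers lam plus a quadratic penalty with weight mu.
rng = np.random.default_rng(0)
n = 50
d = np.sin(np.linspace(0, np.pi, n)) + 0.1 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)      # (n-1) x n forward-difference matrix
alpha, mu = 1.0, 10.0               # smoothness weight, penalty weight
lam = np.zeros(n - 1)               # Lagrange multipliers
u, p = d.copy(), D @ d

for _ in range(100):
    # u-step: minimise ||u-d||^2 + lam.(Du-p) + (mu/2)||Du-p||^2
    A = 2 * np.eye(n) + mu * D.T @ D
    b = 2 * d - D.T @ lam + mu * D.T @ p
    u = np.linalg.solve(A, b)
    # p-step: minimise alpha||p||^2 - lam.p + (mu/2)||Du-p||^2 (closed form)
    p = (lam + mu * D @ u) / (2 * alpha + mu)
    # multiplier update (method of multipliers)
    lam += mu * (D @ u - p)
```

At convergence p matches Du exactly, which is the sense in which the constraint replaces a stiff penalty term.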
Thus the work extends from problem reformulation right down to practical
implementation using either more standard digital techniques
(mixed finite element) or more exciting new analog networks (right down to the
transistor level). Some aspects of the work will shortly appear at IJCNN-91
(Seattle, July) but more information can be obtained by requesting some
or all of the following technical reports:

1. D. Suter "Mixed Finite Element Formulation of Problems in Visual
Reconstruction"
La Trobe University Technical Report No. 2/90,
Jan. 1990
This outlines the basic reformulation in Augmented Lagrangian terms and
shows how this generalizes approaches such as Horn's recent shape-from-shading
approach. It also foreshadows the developments from here to analog networks
that generalize that of Harris.
2. D. Suter "Visual Reconstruction and Edge Detection Using Coupled
Derivatives"
La Trobe University Technical Report No. 7/90,
June 1990
This contains examples obtained using Platt's constraint networks.
3. D. Mansor and D. Suter "An Analogue Circuit for First Order Regularization"
La Trobe University Technical Report No. 2/91,
March 1991
This shows how to realize the networks using transconductance amps and how
to simulate using SPICE (3b1).

Copies of these reports can be obtained by email or by writing to the author:

David Suter ISD: +61 3 479-2393
Department of Computer Science
and Computer Engineering, STD: (03) 479-2393
La Trobe University, ACSnet: suter@latcs1.oz
Bundoora, CSnet: suter@latcs1.oz
Victoria, 3083, ARPA: suter%latcs1.oz@uunet.uu.net
Australia UUCP: ...!uunet!munnari!latcs1.oz!suter
TELEX: AA33143
FAX: 03 4785814

------------------------------

End of VISION-LIST digest 10.16
************************
