Vision-List Digest	Mon Nov 05 09:59:09 PDT 90 

- Send submissions to Vision-List@ADS.COM
- Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

RE: Optical Flow in Realtime
Standard (or at least famous) pictures - where to find them
SGI and ground-truth for shape from motion algorithm
Fingerprint ID
Abstract of Talk on Computer Vision and Humility

----------------------------------------------------------------------

Date: Fri, 2 Nov 90 23:33:13 GMT
From: Han.Wang@prg.oxford.ac.uk
Subject: RE: Optical Flow in Realtime

> (1) Is there any company or research lab which could
>compute on grey images (256x256) image flow in real time?

I have achieved a processing time of 2~4 seconds per frame for optic
flow on 128x128 images, using 8 T800 transputers. The flow is
computed only along edge contours (Canny).
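
[ For readers new to edge-based flow: from the brightness-constancy
constraint Ix*u + Iy*v + It = 0, a single edge point only determines
the component of motion along the image gradient (the aperture
problem). The sketch below is that generic normal-flow computation,
given for illustration only; it is not a description of Han's
transputer implementation.

/* Generic sketch: normal component of optic flow at an edge pixel.
 * Ix, Iy are the spatial derivatives and It the temporal derivative
 * measured at that pixel.  Returns 0 where the gradient is too weak
 * for the flow to be defined. */
int normal_flow(double Ix, double Iy, double It, double *un, double *vn)
{
    double g2 = Ix * Ix + Iy * Iy;      /* squared gradient magnitude */
    if (g2 < 1e-6)
        return 0;                       /* gradient too weak to use   */
    *un = -It * Ix / g2;                /* flow component along the   */
    *vn = -It * Iy / g2;                /* image gradient             */
    return 1;
}

Matching along the contour, or an added smoothness constraint, is then
needed to recover the full flow vector. ]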

>We at BMW are developing a laterally and longitudinally
>controlled car, which should (for experiments) drive
>automatically and which might in the future become an
>intelligent assistant to the driver, in whatever form.
>
>We will use (if available) these techniques to detect
>obstacles that are lying or driving on the street,

In Oxford, we are building a system with a hybrid architecture,
using a Sparc workstation, a transputer array and a Datacube, to run
the 3D vision system DROID (Roke Manor Research) in real time; it can
effectively detect obstacles in unconstrained 3D space. It is not
based on optic flow, however; it uses corner matching instead. So far
we have succeeded in testing many sequences, including one from a
video camera carried by a robot vehicle. This experiment will be
demonstrated in Brussels during the ESPRIT conference (9th - 16th
Nov. 1990).
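
[ The geometry behind depth from matched corners, in its simplest
form: with a known lateral baseline B between two views and focal
length f, depth follows from the horizontal disparity of a matched
corner. The sketch below is only this basic relation, with
illustrative parameter names; DROID itself tracks corners over whole
sequences and refines the 3D estimates over time.

#include <math.h>

/* Depth of a corner matched between two views separated by a purely
 * lateral baseline B (metres).  xL, xR are the corner's horizontal
 * image coordinates (pixels), f the focal length (pixels). */
double corner_depth(double xL, double xR, double f, double B)
{
    double d = xL - xR;                 /* horizontal disparity        */
    if (fabs(d) < 1e-9)
        return -1.0;                    /* point at (or near) infinity */
    return f * B / d;                   /* depth along the optic axis  */
}
]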

regards
Han

------------------------------

Date: Fri, 2 Nov 90 11:40:06 EST
From: John Robinson <john@watnow.waterloo.edu>
Subject: Standard (or at least famous) pictures - where to find them

We are searching for raster versions of "famous" monochrome and colour images.
Actually, any format will do if we can also get access to a format-to-raster
converter. We are particularly interested in getting hold of:

Girl and toys,
Boy and toys,
Gold hill (steep village street with countryside)
Boilerhouse (picture with lots of shadows),
Side view of man with camera on a tripod (actually there are at least two
pictures of that description around - we'd prefer the one with the overcoat),
The various portraits from the 60s of one or two people that are often used,
Any single frames taken from videoconference test sequences.

Anything else that fulfils the following would be appropriate:
Good dynamic range,
Low noise,
No restrictions on copyright,
Portraits completely devoid of sexist overtones (e.g. not Lena).

Is there an FTP site with a good selection of these?

Thanks in anticipation

John Robinson
john@watnow.UWaterloo.ca

[ The Vision List Archives are being built up. Of static imagery, they
currently contain Lenna (girl with hat) and the mandrill. A collection of
motion imagery built for the upcoming Motion Workshop (including densely
sampled and stereo-motion imagery) is also in the FTP-accessible archive.

If you have imagery which may be of interest and may be distributed to the
general vision community, please let me know at vision-list-request@ads.com.
phil... ]

------------------------------

Date: Thu, 01 Nov 90 19:38:35 IST
From: AER6101%TECHNION.BITNET@CUNYVM.CUNY.EDU
Organization: TECHNION - I.I.T., AEROSPACE ENG.
Subject: SGI and ground-truth for shape from motion algorithm

I am presently working on 3-D scene reconstruction from a sequence
of images. The method I am using is based on corner matching between a
pair of consecutive images. The output is the estimated depth at the
corner pixels. The images are rendered by a perspective projection of
3-D blocks whose vertices I supply as input to the program.
However, the detected corners are not necessarily close to those
vertices. In order to measure the accuracy of the algorithm I am
using, the actual depth at those pixels is needed, and I tried to
recover it from the z-buffer. I thought that (my station is a
Silicon Graphics 4D-GT) the z-buffer values (between 0 and 0x7fffff)
were linearly mapped to the world z-coordinates between the closest
and farthest planes used in the perspective projection procedure
available in Silicon Graphics' graphics library.

The results, however, do not match this hypothesis. I tested the
values of the z-buffer obtained when viewing a plane at a known depth,
and it was clear that the relation is not linear. Can someone
enlighten me about how the z-buffer values are related to actual
depth? I know there is a clipping transformation that maps the
perspective viewing pyramid into a -1<x,y,z<1 cube, but perhaps I am
missing something else. If anybody has an opinion or reference that
could help me, I would be very pleased to receive it by e-mail
(aer6101@technion.bitnet). I will summarize the answers received and
post a message with the conclusions.

Thanking you in advance, jacques-
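
[ A likely explanation, while replies come in: after a perspective
projection the stored depth is linear in 1/z, not in z, so most of the
buffer's resolution is spent near the front clipping plane and the
mapping to world depth is strongly non-linear. A minimal sketch of the
inverse mapping follows; the full-scale value ZBUF_MAX and the exact
scaling used by the 4D-GT hardware are assumptions, so the GL
documentation should be consulted for the precise convention.

#include <stdio.h>

#define ZBUF_MAX 0x7fffff   /* assumed full-scale z-buffer value */

/* Recover eye-space depth from a raw z-buffer value, assuming the
 * common convention zndc = (f+n)/(f-n) - 2*f*n / ((f-n)*z), with the
 * buffer holding (zndc+1)/2 scaled to [0, ZBUF_MAX].  n and f are the
 * near and far clipping distances. */
double depth_from_zbuffer(long zbuf, double n, double f)
{
    double zndc = 2.0 * (double)zbuf / (double)ZBUF_MAX - 1.0;
    return 2.0 * f * n / ((f + n) - zndc * (f - n));
}

int main(void)
{
    /* with n = 1, f = 1000, half the buffer range only reaches z ~ 2 */
    printf("%f\n", depth_from_zbuffer(0x400000, 1.0, 1000.0));
    return 0;
}

Testing against a plane at known depth, as described above, should
then reproduce a 1/z curve rather than a straight line. ]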

------------------------------

Date: Mon, 5 Nov 90 10:07:37 EST
From: perry@dewey.CSS.GOV (John Perry)
Subject: Fingerprint ID

Does anyone know of any review articles or recent
textbooks on fingerprint ID systems, research, etc.?

Thanks,
John

------------------------------

Date: 2 Nov 90 01:59:57 GMT
From: sher@wolf.cs.buffalo.edu (David Sher)
Subject: Abstract of Talk on Computer Vision and Humility
Organization: State University of New York at Buffalo/Comp Sci

I thought the people who read these lists may find this talk
abstract of interest:

Computer Vision Algorithms with Humility
David Sher
Nov 1, 1990

Computer vision algorithms are often designed in this way: a plausible
and mathematically convenient model of some visual phenomenon is
constructed which defines the relationship between an image and the
structure to be extracted. For example, the image is modeled as broken
up into regions of constant intensity degraded by noise, and the
region boundaries are defined to be the places in the undegraded image
with large gradients. An algorithm is then derived that generates
optimal or near-optimal estimates of the image structure according to
this model. This approach assumes that constructing a correct model of
our complex world is possible. That assumption is a kind of arrogance,
and it yields algorithms that are difficult to improve, since the
problems with such an algorithm result from inaccuracies in the model,
and it is often not obvious how to change an algorithm when its model
changes.
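
[ A concrete instance of the modelling style described above, not
taken from the talk: if the image is piecewise constant plus zero-mean
Gaussian noise of standard deviation sigma, a boundary detector
consistent with that model reduces to thresholding the gradient
magnitude at a level set by the noise. The threshold factor K below is
an illustrative assumption.

#include <math.h>

#define K 3.0   /* declare a boundary where the gradient exceeds K*sigma */

/* img: row-major image of width w and height h;
 * out: set to 1 at interior pixels judged to lie on a region boundary. */
void boundary_by_model(const double *img, unsigned char *out,
                       int w, int h, double sigma)
{
    int x, y;
    for (y = 1; y < h - 1; y++)
        for (x = 1; x < w - 1; x++) {
            double gx = img[y * w + x + 1] - img[y * w + x - 1];
            double gy = img[(y + 1) * w + x] - img[(y - 1) * w + x];
            out[y * w + x] = sqrt(gx * gx + gy * gy) > K * sigma ? 1 : 0;
        }
}

Where the real scene violates the piecewise-constant assumption, this
detector fails in ways the model itself gives no guidance on
repairing, which is exactly the difficulty the talk addresses. ]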

I propose another paradigm, one which follows the advice of Rabbi
Gamliel, "Provide thyself a teacher", and of Hillel, "He who does not
increase his knowledge decreases it." We are designing computer
algorithms that translate advice and correction into perceptual
strategies. Because these algorithms can incorporate a large variety
of statements about the world into their models, they can be easily
updated, and initial inaccuracies in their models can be automatically
corrected. I will illustrate this approach by discussing six results
in computer vision: three that directly translate human advice and
correction into computer vision algorithms, and three that use human
advice indirectly.

David Sher
ARPA: sher@cs.buffalo.edu BITNET: sher%cs.buffalo.edu@ubvm.bitnet UUCP:
{apple,cornell,decwrl,harvard,rutgers,talcott,ucbvax,uunet}!cs.buffalo.edu!sher

------------------------------

End of VISION-LIST
********************
