VISION-LIST Digest 1989 12 06
Vision-List Digest Wed Dec 06 10:34:59 PDT 89
- Send submissions to Vision-List@ADS.COM
- Send requests for list membership to Vision-List-Request@ADS.COM
Today's Topics:
We need "real world" images.
Program For Realistic Images Wanted
MATLAB
Accuracy measure
RE:Boundary tracking algorithms..
Computer Vision Position
Post-Doctoral Research Fellowship
New Journal: Journal of Visual Communication and Image Representation
Boundary Tracking -----> (collected Information).
Bibliography: 3D Scene Interp. Workshop (Austin, TX)
----------------------------------------------------------------------
Date: 27 Nov 89 19:48:16 GMT
From: rasure@borris.unm.edu (John Rasure)
Subject: We need "real world" images.
Organization: University of New Mexico, Albuquerque
We need images. Specifically, we need stereo pair images, fly-by image
sequences, LANDSAT images, medical images, industrial inspection images,
astronomy images, images from lasers, images from interferometers, etc.
The best images are those that correspond to "typical" image processing
problems of today. They need not be pleasing to look at, just representative
of the imaging technique and sensor that is being used.
Does anybody have such images that they can share?
John Rasure
rasure@bullwinkle.unm.edu
Dr. John Rasure
Department of EECE
University of New Mexico
Albuquerque, NM 87131
505-277-1351
NET-ADDRESS: rasure@bullwinkle.unm.edu
------------------------------
Date: Wed, 29 Nov 89 11:25:37 +0100
From: sro@steks.oulu.fi (Sakari Roininen)
Subject: Program For Realistic Images Wanted
We are preparing a research project in the field of visual
inspection. In our research work we want to compute and simulate
highly realistic images.
Key words are: Shading - Illumination
We are looking for a software package including the following
properties (modules):
- Geometric description of objects.
- Optical properties of the surfaces. Surfaces of interest are:
metal, wood, textile.
- Physical and geometrical description of the light sources.
- Physical and technological properties of the cameras.
The software should be written in C, and source code should be
available so that we can customize it to fit our
applications.
GOOD IDEAS ARE ALWAYS WELCOME !!!
Please, contact: Researcher Timo Piironen
Technical Research Centre of Finland
Electronics Laboratory
P.O.Box 200
SF-90571 OULU
Finland
tel. +358 81 509111
Internet: thp@vttko1.vtt.fi
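An illustrative sketch of the kind of shading/illumination computation such
a package would perform, not part of the request above: the C code below
evaluates the brightness of one surface point under a single point light
source, using a Lambertian diffuse term plus a Blinn-Phong specular term.
All material constants and vectors are illustrative assumptions.

/* Shade one surface point: Lambertian diffuse + Blinn-Phong specular.
   All constants and vectors below are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 unit(Vec3 v)
{
    double len = sqrt(dot(v, v));
    Vec3 r;
    r.x = v.x / len; r.y = v.y / len; r.z = v.z / len;
    return r;
}

/* n: surface normal, l: direction to the light, v: direction to the camera */
static double shade(Vec3 n, Vec3 l, Vec3 v,
                    double kd, double ks, double shininess)
{
    Vec3 h;
    double diff, spec;

    n = unit(n); l = unit(l); v = unit(v);

    diff = dot(n, l);                     /* Lambertian (diffuse) term */
    if (diff < 0.0) diff = 0.0;

    h.x = l.x + v.x; h.y = l.y + v.y; h.z = l.z + v.z;
    h = unit(h);                          /* halfway vector            */
    spec = dot(n, h);
    if (spec < 0.0) spec = 0.0;
    spec = pow(spec, shininess);          /* specular highlight        */

    return kd * diff + ks * spec;         /* relative intensity        */
}

int main(void)
{
    Vec3 n = { 0.0, 0.0, 1.0 };           /* surface normal            */
    Vec3 l = { 1.0, 1.0, 1.0 };           /* toward the light          */
    Vec3 v = { 0.0, 0.0, 1.0 };           /* toward the camera         */

    printf("intensity = %f\n", shade(n, l, v, 0.7, 0.3, 20.0));
    return 0;
}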
------------------------------
Date: 1 Dec 89 18:00:26 GMT
From: Adi Pranata <pranata@udel.edu>
Subject: MATLAB
Hi,
I'm not sure where to post this question. Does anyone have any
information on converting raster images/pictures to MATLAB matrix
format? I am interested in using the MATLAB software to manipulate
them; displaying the MATLAB file format with the imagetool software
will be no problem. Any info, including which other newsgroup would be
more appropriate to post to, is welcome. Thanks in advance.
You could reply to pranata@udel.edu
Sincerely,
Desiderius Adi Pranata
PS: Electromagnetic way
146.955 MHz -600 kHz
Oldfashioned way
(302)- 733 - 0990
(302)- 451 - 6992
[ This is definitely appropriate for the Vision List. Answers to the
List please.
phil...]
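An illustrative sketch, not an official answer: if the raster is a
headerless raw 8-bit grayscale image, the few lines of C below dump it as a
plain ASCII matrix that MATLAB's "load" command can read. The image
dimensions and file names are assumptions; adjust them to your raster
format.

/* Editorial sketch: convert a headerless raw 8-bit grayscale raster into
   a plain ASCII matrix readable by MATLAB's "load" command.  The image
   dimensions and file names are assumptions -- adjust to your format. */
#include <stdio.h>

#define WIDTH  256                 /* assumed raster dimensions */
#define HEIGHT 256

int main(int argc, char *argv[])
{
    FILE *in, *out;
    int row, col, pixel;

    if (argc != 3) {
        fprintf(stderr, "usage: %s rawimage matrix.dat\n", argv[0]);
        return 1;
    }
    in  = fopen(argv[1], "rb");
    out = fopen(argv[2], "w");
    if (in == NULL || out == NULL) {
        fprintf(stderr, "cannot open files\n");
        return 1;
    }

    /* one ASCII row per image row; "load matrix.dat" in MATLAB then
       yields a HEIGHT x WIDTH matrix of gray levels 0..255 */
    for (row = 0; row < HEIGHT; row++) {
        for (col = 0; col < WIDTH; col++) {
            pixel = getc(in);
            if (pixel == EOF) {
                fprintf(stderr, "input file too short\n");
                return 1;
            }
            fprintf(out, "%d ", pixel);
        }
        fprintf(out, "\n");
    }
    fclose(in);
    fclose(out);
    return 0;
}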
------------------------------
Date: 2 Dec 89 17:41:18 GMT
From: muttiah@cs.purdue.edu (Ranjan Samuel Muttiah)
Subject: Accuracy measure
Organization: Department of Computer Science, Purdue University
I am looking for the various ACCURACY measures that are used in the
vision field. If you have any information on this, could you
email/post please ?
Thank you.
------------------------------
Date: Wed, 29 Nov 89 18:47:24 EST
Subject: RE:Boundary tracking algorithms..
From: Sridhar Ramachandran <Sridhar@UC.EDU>
I have pseudo code for a Boundary Tracking Algorithm for Binary
Images that uses Direction codes and Containment codes to track
the boundary. It is pretty efficient and works fine.
If interested, please e-mail requests to sramacha@uceng.uc.edu
(OR) sridhar@uc.edu (OR) sramacha@uc.edu.
Sridhar Ramachandran.
------------------------------
Date: Tue, 5 Dec 89 14:08:30 EST
From: peleg@grumpy.sarnoff.com (Shmuel Peleg x 2284)
Subject: Computer Vision Position - David Sarnoff Research Center
The computer vision research group at David Sarnoff Research Center has an
opening for a specialist in image processing or computer vision who has an
interest in computer architecture and digital hardware. Master's level or
equivalent experience is preferred.
This is a key position in an established research team devoted to the
development of high performance, real-time vision systems. The group is active
at all levels of research and development from basic research to applications
and prototype implementation. Current programs include object recognition,
motion analysis, and advanced architecture.
Please send your vita to, or direct enquiries to, Peter Burt (Group Head), David Sarnoff
Research Center, Princeton, NJ 08543-5300; E-Mail: burt@vision.sarnoff.com.
------------------------------
Date: Wed, 29 Nov 89 19:11:55 WET DST
From: "D.H. Foster" <coa15%seq1.keele.ac.uk@NSFnet-Relay.AC.UK>
Subject: Post-Doctoral Research Fellowship
UNIVERSITY OF KEELE
Department of Communication & Neuroscience
POST-DOCTORAL RESEARCH FELLOWSHIP
Applications are invited for a 3-year appointment as a post-doctoral
research fellow to work in the Theoretical and Applied Vision Sciences
Group. The project will investigate early visual form processing, and
will entail a mixture of computational modelling and psychophysical
experiment. The project is part of an established programme of
research into visual information processing, involving a team of about
ten members working in several well-equipped laboratories with a range
of computing and very high resolution graphics display systems.
Candidates should preferably be experienced in computational vision
research, but those with training in computing science, physics,
experimental psychology, and allied disciplines are also encouraged to
apply. The appointment, beginning 1 January 1990, or soon thereafter,
will be on the Research IA scale, initially up to Point 5, depending
on age and experience.
Informal enquiries and applications with CV and the names of two
referees to Prof David H. Foster, Department of Communication &
Neuroscience, University of Keele, Keele, Staffordshire ST5 5BG,
England (Tel. 0782 621111, Ext 3247; e-mail D.H.Foster@uk.ac.keele).
------------------------------
Date: Thu, 23 Nov 89 16:48:13 EST
From: zeevi@caip.rutgers.edu (Joshua Y. Zeevi)
Subject: New Journal: Journal of Visual Communication and Image Representation
New Journal published by Academic Press
---------------------------------------
Dear Colleague,
The first issue of the Journal of Visual Communication and Image Representation
is scheduled to appear in September 1990. Since the journal will cover topics
in your area of expertise, your contribution will most likely have an impact on
future advancements in this rapidly developing field.
Should you have questions regarding the suitability of a specific paper or
topic, please get in touch with Russell Hsing or with me. The deadline for
submission of papers for the first issue is Feb. 15, and for the second issue
May 15, 1990.
For manuscript submission and/or subscription information, please write or call
Academic Press, Inc., 1250 6th Ave., San Diego, CA 92101; (619) 699-6742.
Enclosed please find the Aims & Scope (including the list of preferred topics) and
the list of members of the Editorial Board.
JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
--------------------------------------------------------
Dr. T. Russell Hsing, co-editor, Research Manager, Loop Transmission &
Application District, Bell Communications Research, 445 South Street,
Morristown, NJ 07960-1910 (trh@thumper.bellcore.com (201) 829-4950)
Professor Yehoshua Y. Zeevi*, co-editor, Barbara and Norman Seiden Chair,
Department of Electrical Engineering, Technion - Israel Institute of Technology,
Haifa, 32 000, Israel
* Present address: CAIP Center, Rutgers University, P. O. Box 1390,
Piscataway, NJ 08855-1390 (zeevi@caip.rutgers.edu (201) 932-5551)
AIMS & SCOPE
The Journal of Visual Communication and Image Representation is an archival
peer-reviewed technical journal, published quarterly. With the growing
availability of optical fiber links, advances in large-scale integration, new
telecommunication services, VLSI-based circuits and computational systems, as
well as the rapid advances in vision research and image understanding, the
field of visual communication and image representation will undoubtedly
continue to grow. The aim of this journal is to combine reports on the state-
of-the-art of visual communication and image representation with emphasis on
novel ideas and theoretical work in this multidisciplinary area of pure and
applied research. The journal consists of regular papers and research
reports describing either original research results or novel technologies. The
field of visual communication and image representation is considered in its
broadest sense and covers digital and analog aspects, as well as processing and
communication in biological visual systems.
Specific areas of interest include, but are not limited to, all aspects of:
* Image scanning, sampling and tessellation
* Image representation by partial information
* Local and global schemes of image representation
* Analog and digital image processing
* Fractals and mathematical morphology
* Image understanding and scene analysis
* Deterministic and stochastic image modelling
* Visual data reduction and compression
* Image coding and video communication
* Biological and medical imaging
* Early processing in biological visual systems
* Psychophysical analysis of visual perception
* Astronomical and geophysical imaging
* Visualization of nonlinear natural phenomena
Editorial Board
R. Ansari, Bell Communications Research, USA
I. Bar-David, Technion - Israel Institute of Technology, Israel
R. Bracewell, Stanford University, USA
R. Brammer, The Analytic Sciences Corporation, USA
J.-O. Eklundh, Royal Institute of Technology, Sweden
H. Freeman, Rutgers University, USA
D. Glaser, University of California at Berkeley, USA
B. Julesz, Caltech and Rutgers University, USA
B. Mandelbrot, IBM Thomas J. Watson Research Center, USA
P. Maragos, Harvard University, USA
H.-H. Nagel, Fraunhofer-Institut fur Informations- und Datenverarbeitung, FRG
A. Netravali, AT&T Bell Labs, USA
D. Pearson, University of Essex, England
A. Rosenfeld, University of Maryland, USA
Y. Sakai, Tokyo Institute of Technology, Japan
J. Sanz, IBM Almaden Research Center, USA
W. Schreiber, Massachusetts Institute of Technology, USA
J. Serra, Ecole Nationale Superieure des Mines de Paris, France
M. Takagi, University of Tokyo, Japan
M. Teich, Columbia University, USA
T. Tsuda, Fujitsu Laboratories Ltd., Japan
S. Ullman, Massachusetts Institute of Technology, USA
H. Yamamoto, KDD Ltd., Japan
Y. Yasuda, University of Tokyo, Japan
------------------------------
Date: Mon, 27 Nov 89 16:48 EDT
From: SRIDHAR BALAJI <GRX0446@uoft02.utoledo.edu>
Subject: Boundary Tracking -----> (collected Information).
X-Vms-To: IN%"Vision-List@ADS.COM"
/** These are some refs. and pseudocode for the boundary tracking I asked about.
** Thanks so much to all the contributors. Since so many wanted this,
** I thought it would be useful to send it to the group.
S. Balaji */
*******
From: IN%"marra@airob.mmc.COM" 14-NOV-1989 15:28:27.22
CC:
Subj: Re: Boundary tracking.
Here is Pascal-pseudo code for our binary boundary tracker. A minor mod will
extend it to handle multiple objects. Good luck.
Pseudo Code for the ALV border tracing module
Program pevdcrok ( input,output );
(* include csc.si TYPE and VAR declarations *)
(* this causes declaration of the following data elements:
dir_cue
version
V_IM
O_IM
PB
*)
(* include peve.si TYPE and VAR declarations
the following are defined in peve.si:
pevdcrok_static_block.dc
pevdcrok_static_block.dc
*)
(* ____________________FORWARD DECLARATIONS____________________ *)
(* ----------OURS---------- *)
(* ----------THEIRS-------- *)
procedure pevdcrok(road,obst:imagenum)
TYPE
border_type_type = (blob,bubble)
direction_type = (north,south,east,west)
VAR
inside_pix : d2_point; (* Col,row location of a pixel on
the inside of a border *)
outside_pix : d2_point; (* Col,row location of a pixel on
the outside of a border *)
next_pix : d2_point; (* Col,row location of the next
pixel to be encountered during
the tracing of the border *)
next_8_pix : d2_point; (* Col,row location of the next
eight-neighbor pixel to be
encountered during the tracing of
the border *)
westmost_pix : d2_point; (* col,row location of the westmost
pixel visited so far this image *)
eastmost_pix : d2_point; (* col,row location of the eastmost
pixel visited so far this image *)
northmost_pix : d2_point; (* col,row location of the northmost
pixel visited so far this image *)
southmost_pix : d2_point; (* col,row location of the southmost
pixel visited so far this image *)
border_type : border_type_type; (* a processing control flag
indicating the type of border
assumed to be following *)
direction : direction_type; (* directions being searched *)
start_time,
end_time,
print_time : integer; (* recorded times for time debug *)
procedure find_border(inside_pix,outside_pix,direction)
begin (* find_border *)
Set a starting point for finding blob in the middle
of the bottom of the image
if PB.D[pevdcrok,gra] then
mark the starting point
Search in direction looking for some blob, being sure you
don't go off the top of the road image
Search in direction looking for a blob/non-blob boundary,
being sure you don't go off the top of the blob image
if PB.D[pevdcrok,gra] then
mark the inside_pix and the outside_pix
end (* find_border *)
procedure trace_border(border_type,inside_pix,outside_pix,direction)
TYPE
dir_type = (0..7); (* 0 = east
1 = northeast
2 = north
3 = northwest
4 = west
5 = southwest
6 = south
7 = southeast *)
VAR
dir : dir_type; (* relative orientation of the inside_pix
outside_pix 2-tuple *)
begin (* trace_border *)
remember the starting inside and outside pix's for bubble detection
set dir according to direction
while we haven't found the end of this border do
begin (* follow border *)
next_pix.col := outside_pix.col + dc[dir];
next_pix.row := outside_pix.row + dr[dir];
while road.im_ptr^[next_pix.col,next_pix.row] = 0 do
begin (* move the outside pixel clockwise *)
outside_pix := next_pix;
advance the dir
check for bubbles; if a bubble then
begin (* bubble has been found *)
border_type := bubble
exit trace_border
end (* bubble has been found *)
next_pix.col := outside_pix.col + dc[dir];
next_pix.row := outside_pix.row + dr[dir];
end (* move the outside pixel clockwise *)
update the direction for moving inside_pix
next_pix.col := inside_pix.col + dc[dir];
next_pix.row := inside_pix.row + dr[dir];
next_8_pix.col := inside_pix.col + dc[dir];
next_8_pix.row := inside_pix.row + dr[dir];
while road.im_ptr^[next_pix.col,next_pix.row] = 0 or
(road.im_ptr^[next_8_pix.col,next_8_pix.row] = 0
and mod(dir,2) = 0) do
begin (* move the inside pixel counter-clockwise *)
inside_pix := next_pix;
advance the dir
if road.im_ptr^[inside_pix.col,inside_pix.row] <> 0 then
begin
inside_pix := next_8_pix;
advance the dir;
end;
check for bubbles; if a bubble then
begin (* bubble has been found *)
border_type := bubble
exit trace_border
end (* bubble has been found *)
next_pix.col := inside_pix.col + dc[dir];
next_pix.row := inside_pix.row + dr[dir];
end (* move the inside pixel counter-clockwise *)
update the direction for moving outside_pix
update values of westmost_pix,eastmost_pix,northmost_pix,
southmost_pix
if mod(num_border_points,crock_rec.boundary_skip) = 1 then
record column and row values in V_IM.edge_record
end (* follow border *)
border_type := blob
end (* trace_border *)
begin (* pevdcrok *)
if PB.D[pevdcrok,time] then
clock(start_time);
AND the road image with the border image, leaving the
result in road image
border_type := bubble
while border_type = bubble do
begin
if PB.D[pevdcrok,tty] then
writeln('PEVDCROK: Calling find_border');
find_border(inside_pix,outside_pix,west)
initialize IM edge_record; num_border_points := 0
if PB.D[pevdcrok,tty] then
writeln('PEVDCROK: Calling trace_border');
trace_border(border_type,inside_pix,outside_pix,west)
end
complete IM edge_record
if PB.D[pevdcrok,time] then
begin (* time debug *)
clock(end_time);
print_time := end_time - start_time;
writeln('PEVDCROK: elapsed time = ',print_time,' msec');
end; (* time debug *)
end (* pevdcrok *)
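An illustrative sketch, not part of the posting above: the C program below
traces the outer boundary of a single blob in a small binary image using
standard Moore-neighbour (8-connected) contour following, keeping a
boundary-pixel/background-pixel pair in the same spirit as the
inside/outside pair in the pseudocode. The test image, array sizes, and
single-blob assumption are illustrative only.

/* Moore-neighbour boundary tracing of one blob in a binary image.
   Everything here (image, sizes, single-blob assumption) is illustrative. */
#include <stdio.h>

#define W 8
#define H 8
#define MAXPTS 256

/* the 8 neighbour offsets listed clockwise (row index grows downward):
   0=E 1=SE 2=S 3=SW 4=W 5=NW 6=N 7=NE */
static const int dc[8] = { 1, 1, 0, -1, -1, -1,  0,  1 };
static const int dr[8] = { 0, 1, 1,  1,  0, -1, -1, -1 };

static int img[H][W] = {
    {0,0,0,0,0,0,0,0},
    {0,0,1,1,1,0,0,0},
    {0,1,1,1,1,1,0,0},
    {0,1,1,1,1,1,0,0},
    {0,0,1,1,1,1,0,0},
    {0,0,0,1,1,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0}
};

static int fg(int r, int c)        /* foreground test, background off-image */
{
    return r >= 0 && r < H && c >= 0 && c < W && img[r][c] != 0;
}

int main(void)
{
    int br = -1, bc = -1;          /* current boundary (inside) pixel    */
    int cr, cc;                    /* current background (outside) pixel */
    int br0, bc0, cr0, cc0;        /* starting state, for termination    */
    int outr[MAXPTS], outc[MAXPTS];
    int r, c, d, i, nd = 0, n = 0;

    /* start pixel: first foreground pixel in raster order; its west
       neighbour is then guaranteed to be background */
    for (r = 0; r < H && br < 0; r++)
        for (c = 0; c < W; c++)
            if (img[r][c]) { br = r; bc = c; break; }
    if (br < 0) { printf("empty image\n"); return 0; }

    cr = br; cc = bc - 1;          /* background point west of the start */
    br0 = br; bc0 = bc; cr0 = cr; cc0 = cc;

    do {
        outr[n] = br; outc[n] = bc; n++;

        /* direction from the boundary pixel to its background point */
        for (d = 0; d < 8; d++)
            if (br + dr[d] == cr && bc + dc[d] == cc) break;

        /* scan the 8 neighbours clockwise, starting at the background
           point, for the next boundary pixel */
        for (i = 1; i <= 8; i++) {
            nd = (d + i) % 8;
            if (fg(br + dr[nd], bc + dc[nd])) break;
        }
        if (i > 8) break;          /* isolated pixel: nothing to follow */

        /* new background point = last background neighbour examined,
           then step to the next boundary pixel */
        cr = br + dr[(d + i - 1) % 8];
        cc = bc + dc[(d + i - 1) % 8];
        br += dr[nd];
        bc += dc[nd];
    } while (n < MAXPTS &&
             !(br == br0 && bc == bc0 && cr == cr0 && cc == cc0));

    printf("%d boundary points:\n", n);
    for (i = 0; i < n; i++)
        printf("(%d,%d) ", outr[i], outc[i]);
    printf("\n");
    return 0;
}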
*******
From: IN%"mv10801@uc.msc.umn.edu" 14-NOV-1989 16:04:34.13
CC:
Subj: Re: Motion tracking
See:
J.A.Marshall, Self-Organizing Neural Networks for Perception of Visual Motion,
to appear in Neural Networks, January 1990.
*******
From: IN%"pell@isy.liu.se" "P{r Emanuelsson" 15-NOV-1989 13:24:37.17
CC:
Subj: Re: Boundary tracking.
I think you want to do chain-coding. The best method I know was invented
by my professor (of course...) and is called "crack coding". It uses
a two-bit code and avoids backtracking problems and such. It's quite
easy to implement, but I don't think I have any code handy.
The algorithm is, however, given as flow charts in the second reference:
"Encoding of binary images by raster-chain-coding of cracks", Per-Erik
Danielsson, Proc. of the 6th int. conf. on Pattern Recognition, Oct. -82.
"An improved segmentation and coding algorithm for binary and
nonbinary images", Per-Erik Danielsson, IBM Journal of research and
development, v. 26, n. 6, Nov -82.
If you are working on parallel computers, there are other more suitable
algorithms.
Please summarize your answers to the vision list.
Cheers,
/Pell
Dept. of Electrical Engineering pell@isy.liu.se
University of Linkoping, Sweden ...!uunet!isy.liu.se!pell
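An illustrative sketch of the crack-coding idea, not Danielsson's published
algorithm (see the two references above for that): the C program below walks
the "cracks" -- the unit edges separating a foreground pixel from a
background pixel -- around a single blob and emits one two-bit code per step
(0=E, 1=S, 2=W, 3=N). The test image and the single-blob assumption are
illustrative only.

/* Crack following around one blob in a binary image, two bits per step.
   Everything here (image, sizes, single-blob assumption) is illustrative. */
#include <stdio.h>

#define W 8
#define H 8
#define MAXCODE (4 * W * H)

static int img[H][W] = {
    {0,0,0,0,0,0,0,0},
    {0,0,1,1,1,0,0,0},
    {0,1,1,1,1,1,0,0},
    {0,1,1,1,1,1,0,0},
    {0,0,1,1,1,1,0,0},
    {0,0,0,1,1,0,0,0},
    {0,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0}
};

static int fg(int r, int c)            /* background outside the image */
{
    return r >= 0 && r < H && c >= 0 && c < W && img[r][c] != 0;
}

int main(void)
{
    /* corner-lattice steps for the codes 0=E, 1=S, 2=W, 3=N */
    static const int di[4] = { 0, 1,  0, -1 };
    static const int dj[4] = { 1, 0, -1,  0 };
    /* pixel just ahead and to the LEFT of travel direction d, relative to
       the corner we stand on; the ahead-RIGHT pixel is la[(d+1)%4] */
    static const int la[4][2] = { {-1, 0}, {0, 0}, {0, -1}, {-1, -1} };

    int code[MAXCODE];
    int r0 = -1, c0 = -1, r, c, i, j, d, n = 0;

    /* start at the first foreground pixel in raster order; the crack along
       its top edge is then guaranteed to be a boundary crack */
    for (r = 0; r < H && r0 < 0; r++)
        for (c = 0; c < W; c++)
            if (img[r][c]) { r0 = r; c0 = c; break; }
    if (r0 < 0) { printf("empty image\n"); return 0; }

    i = r0; j = c0; d = 0;             /* corner (i,j), travelling east */
    do {
        code[n++] = d;                 /* emit one two-bit code */
        i += di[d]; j += dj[d];        /* walk one crack */

        /* keep the blob on the right-hand side; prefer the left turn so
           that diagonally touching foreground pixels stay connected */
        if (fg(i + la[d][0], j + la[d][1]))
            d = (d + 3) % 4;           /* turn left  */
        else if (!fg(i + la[(d + 1) % 4][0], j + la[(d + 1) % 4][1]))
            d = (d + 1) % 4;           /* turn right */
        /* otherwise keep going straight ahead */
    } while (n < MAXCODE && !(i == r0 && j == c0));

    printf("start pixel (%d,%d), %d cracks:\n", r0, c0, n);
    for (d = 0; d < n; d++)
        printf("%d", code[d]);
    printf("\n");
    return 0;
}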
------------------------------
Date: Thu, 30 Nov 89 13:20:28 EST
From: flynn@pixel.cps.msu.edu
Subject: Bibliography: 3D Scene Interp. Workshop (Austin, TX)
Here's a list of papers in the proceedings of the IEEE Workshop on
Interpretation of 3D Scenes held in Austin, Texas on November 27-29.
STEREO
------
R.P. Wildes, An Analysis of Stereo Disparity for the Recovery of
Three-Dimensional Scene Geometry, pp. 2-8.
S. Das and N. Ahuja, Integrating Multiresolution Image Acquisition and
Coarse-to-Fine Surface Reconstruction from Stereo, pp. 9-15.
S.D. Cochran and G. Medioni, Accurate Surface Description from Binocular
Stereo, pp. 16-23.
SHAPE FROM X
------------
R. Vaillant and O.D. Faugeras, Using Occluding Contours for Recovering
Shape Properties of Objects, pp. 26-32.
P.K. Allen and P. Michelman, Acquisition and Interpretation of 3D Sensor
Data from Touch, pp. 33-40.
P. Belluta, G. Collini, A. Verri, and V. Torre, 3D Visual Information
from Vanishing Points, pp. 41-49.
RECOGNITION
-----------
R. Kumar and A. Hanson, Robust Estimation of Camera Location and
Orientation from Noisy Data Having Outliers, pp. 52-60.
J. Ponce and D.J. Kriegman, On Recognizing and Positioning Curved 3D
Objects from Image Contours, pp. 61-67.
R. Bergevin and M.D. Levine, Generic Object Recognition: Building
Coarse 3D Descriptions from Line Drawings, pp. 68-74.
S. Lee and H.S. Hahn, Object Recognition and Localization Using
Optical Proximity Sensor System: Polyhedral Case, pp. 75-81.
MOTION
------
Y.F. Wang and A. Pandey, Interpretation of 3D Structure and Motion Using
Structured Lighting, pp. 84-90.
M. Xie and P. Rives, Towards Dynamic Vision, pp. 91-99.
ASPECT GRAPHS
-------------
D. Eggert and K. Bowyer, Computing the Orthographic Projection Aspect
Graph of Solids of Revolution, pp. 102-108.
T. Sripradisvarakul and R. Jain, Generating Aspect Graphs for Curved
Objects, pp. 109-115.
D.J. Kriegman and J. Ponce, Computing Exact Aspect Graphs of Curved Objects:
Solids of Revolution, pp. 116-121.
SURFACE RECONSTRUCTION
----------------------
C.I. Connolly and J.R. Stenstrom, 3D Scene Reconstruction from Multiple
Intensity Images, pp. 124-130.
R.L. Stevenson and E.J. Delp, Invariant Reconstruction of Visual Surfaces,
pp. 131-137.
P.G. Mulgaonkar, C.K. Cowan, and J. DeCurtins, Scene Description Using Range
Data, pp. 138-144.
C. Brown, Kinematic and 3D Motion Prediction for Gaze Control, pp. 145-151.
3D SENSING
----------
M. Rioux, F. Blais, J.-A. Beraldin, and P. Boulanger, Range Imaging Sensors
Development at NRC Laboratories, pp. 154-159.
REPRESENTATIONS
---------------
A. Gupta, L. Bogoni, and R. Bajcsy, Quantitative and Qualitative Measures
for the Evaluation of the Superquadric Models, pp. 162-169.
F.P. Ferrie, J. Lagarde, and P. Whaite, Darboux Frames, Snakes, and
Super-Quadrics: Geometry from the Bottom-Up, pp. 170-176.
H. Lu, L.G. Shapiro, and O.I. Camps, A Relational Pyramid Approach to View
Class Determination, pp. 177-183.
APPLICATIONS
------------
I.J. Mulligan, A.K. Mackworth, and P.D. Lawrence, A Model-Based Vision System
for Manipulator Position Sensing, pp. 186-193.
J.Y. Cartoux, J.T. Lapreste, and M. Richetin, Face Authentification or
Recognition by Profile Extraction from Range Images, pp. 194-199.
J.J. Rodriguez and J.K. Aggarwal, Navigation Using Image Sequence Analysis
and 3-D Terrain Matching, pp. 200-207.
------------------------------
End of VISION-LIST
********************