Vision-List Digest	Fri Jan 19 13:19:01 PDT 90 

- Send submissions to Vision-List@ADS.COM
- Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

Learning in Computer Vision
Public domain camera calibration software
C source for algebraic and geometric operations
popi
Available images
Re: Vision-List delayed redistribution
Call for Participation: AAAI-90 Workshop on Qualitative Vision

----------------------------------------------------------------------

Date: Mon, 15 Jan 90 15:08:34 -0500
From: dmc@piccolo.ecn.purdue.edu (David M Chelberg)
Subject: Re: Learning in Computer Vision

I have a student who is interested in learning in computer vision. I
know of several approaches to this problem, but don't have any kind of
bibliography.

I am looking for any references and people doing work on learning in
computer vision. Please post your responses to the List.

Thanks,
Dave Chelberg (dmc@piccolo.ecn.purdue.edu)

------------------------------

Date: Tue, 16 Jan 90 08:53:22 EST
From: hwang@starbase.mitre.org (Vincent Hwang)
Subject: Public domain camera calibration software

Hi:

I am collecting any and all public domain software for camera
calibration and for object position computation given an approximate
initial position. I will make it available if anyone is interested.

I am currently working on a research project to measure the
robustness of vision algorithms for bin-packing-type applications
under adverse lighting conditions. The main objective is to test and
measure the performance of vision modules in situations where
external lighting may be excessive.

Vincent Hwang, (hwang@starbase.mitre.org)

[ Camera calibration is an important issue of general interest to the
readership. Please post all sources of information (papers,
techniques, and providers of code) to the List.
phil...
BTW: You may wish to check session 1.1 on Calibration from the 1988
CVPR conference held in Ann Arbor, MI. ]
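
For readers after a concrete starting point, below is a minimal C
sketch of the classical direct linear transform (DLT) formulation of
calibration: each known 3D point (X,Y,Z) with observed image position
(u,v) contributes two linear equations in the 11 free entries of the
3x4 projection matrix (the 12th entry is fixed to 1), and six or more
non-coplanar points give an overdetermined system solved by least
squares. The function names are illustrative only, not drawn from any
package mentioned above, and a careful implementation would use an
orthogonal (QR or SVD) solver rather than the normal equations.

/*
 * Minimal direct linear transform (DLT) calibration sketch.
 * Given n >= 6 correspondences between known 3D points and their
 * 2D image projections, recover the 3x4 projection matrix P
 * (row-major, with P[11] fixed to 1) by linear least squares.
 */
#include <math.h>

#define NP 11                          /* free unknowns P[0..10]    */

/* Solve the NP x NP system M x = r in place (Gaussian elimination
   with partial pivoting); returns 0 on success, -1 if singular.    */
static int solve(double M[NP][NP], double r[NP], double x[NP])
{
    int i, j, k, piv;
    double t;

    for (k = 0; k < NP; k++) {
        piv = k;
        for (i = k + 1; i < NP; i++)
            if (fabs(M[i][k]) > fabs(M[piv][k]))
                piv = i;
        if (fabs(M[piv][k]) < 1e-12)
            return -1;
        for (j = 0; j < NP; j++) {
            t = M[k][j]; M[k][j] = M[piv][j]; M[piv][j] = t;
        }
        t = r[k]; r[k] = r[piv]; r[piv] = t;
        for (i = k + 1; i < NP; i++) {
            t = M[i][k] / M[k][k];
            for (j = k; j < NP; j++)
                M[i][j] -= t * M[k][j];
            r[i] -= t * r[k];
        }
    }
    for (i = NP - 1; i >= 0; i--) {    /* back substitution         */
        t = r[i];
        for (j = i + 1; j < NP; j++)
            t -= M[i][j] * x[j];
        x[i] = t / M[i][i];
    }
    return 0;
}

/* world[i] = (X,Y,Z), image[i] = (u,v); P receives the projection
   matrix row-major with P[11] = 1. Returns 0 on success.           */
int dlt_calibrate(int n, double world[][3], double image[][2], double P[12])
{
    double M[NP][NP] = {{0}}, b[NP] = {0};
    int i, e, j, k;

    if (n < 6)                         /* need 2n >= 11 equations   */
        return -1;
    for (i = 0; i < n; i++) {
        double X = world[i][0], Y = world[i][1], Z = world[i][2];
        double u = image[i][0], v = image[i][1];
        /* the two design-matrix rows for this correspondence:
             [X Y Z 1 0 0 0 0 -uX -uY -uZ] . P = u
             [0 0 0 0 X Y Z 1 -vX -vY -vZ] . P = v                  */
        double r0[NP] = { X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z };
        double r1[NP] = { 0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z };
        double *rows[2] = { r0, r1 };
        double rhs[2] = { u, v };

        for (e = 0; e < 2; e++)        /* accumulate A'A and A'b    */
            for (j = 0; j < NP; j++) {
                for (k = 0; k < NP; k++)
                    M[j][k] += rows[e][j] * rows[e][k];
                b[j] += rows[e][j] * rhs[e];
            }
    }
    if (solve(M, b, P) != 0)
        return -1;
    P[11] = 1.0;
    return 0;
}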

------------------------------

Date: 16 Jan 90 20:34:41 GMT
From: shuang@caip.rutgers.edu (Shuang Chen)
Subject: C source for algebraic and geometric operations
Keywords: algebra, geometry, C source
Organization: Rutgers Univ., New Brunswick, N.J.

In my computer vision research, I often need to implement some basic
algebraic and geometric operations, such as solving 3rd or 4th order
(single variable) equations, normalization of quadric forms, finding
eigenvalues and eigenvectors of a real, symmetric matrix, and general
3D transform operations. Are there any books around which provide
such source code and standard algorithms?

I know there is a book which gives a lot of algorithms and source code
for numerical operations, but I wonder whether there is such a book
for algebraic and geometric operations. C code is preferred.

Thanks in advance.


Shuang Chen

Dept. of ECE
Rutgers - the State University
of New Jersey.
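
As one illustration of the kind of routine being asked about, here is
a minimal C sketch of the classical Jacobi rotation method for the
eigenvalues and eigenvectors of a real symmetric matrix: plane
rotations, each chosen to zero one off-diagonal element, are applied
until the matrix is numerically diagonal. This is a standard textbook
algorithm written out from scratch for illustration (with the matrix
size fixed by a compile-time constant); any collection of numerical
routines will contain a more careful version.

/*
 * Jacobi eigenvalue sketch for a real symmetric N x N matrix.
 * On return the diagonal of a[][] holds the eigenvalues and the
 * columns of v[][] the corresponding eigenvectors.
 */
#include <math.h>

#define N 3                            /* fixed size for the sketch */

void jacobi_eigen(double a[N][N], double v[N][N])
{
    int i, p, q, sweep;
    double off, theta, t, c, s;

    for (i = 0; i < N; i++)            /* v starts as the identity  */
        for (p = 0; p < N; p++)
            v[i][p] = (i == p) ? 1.0 : 0.0;

    for (sweep = 0; sweep < 50; sweep++) {   /* 50 sweeps is plenty */
        off = 0.0;                     /* sum of squared off-diagonals */
        for (p = 0; p < N; p++)
            for (q = p + 1; q < N; q++)
                off += a[p][q] * a[p][q];
        if (off < 1e-20)
            return;                    /* converged                 */

        for (p = 0; p < N; p++)
            for (q = p + 1; q < N; q++) {
                if (a[p][q] == 0.0)
                    continue;
                /* rotation angle chosen to zero a[p][q]            */
                theta = (a[q][q] - a[p][p]) / (2.0 * a[p][q]);
                t = (theta >= 0 ? 1.0 : -1.0) /
                    (fabs(theta) + sqrt(theta * theta + 1.0));
                c = 1.0 / sqrt(t * t + 1.0);
                s = t * c;
                for (i = 0; i < N; i++) {      /* a <- a J          */
                    double aip = a[i][p], aiq = a[i][q];
                    a[i][p] = c * aip - s * aiq;
                    a[i][q] = s * aip + c * aiq;
                }
                for (i = 0; i < N; i++) {      /* a <- J' a         */
                    double api = a[p][i], aqi = a[q][i];
                    a[p][i] = c * api - s * aqi;
                    a[q][i] = s * api + c * aqi;
                }
                for (i = 0; i < N; i++) {      /* v <- v J          */
                    double vip = v[i][p], viq = v[i][q];
                    v[i][p] = c * vip - s * viq;
                    v[i][q] = s * vip + c * viq;
                }
            }
    }
}

The 3rd and 4th order equations mentioned have closed-form (Cardano
and Ferrari) solutions that can be coded in a similar handful of lines.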

------------------------------

Date: Wed, 17 Jan 90 21:22:25 GMT
From: us214777@mmm.serc.3m.com (John C. Schultz)
Subject: popi
Organization: 3M - St. Paul, MN 55144-1000 US

Moderator:
Based on the number of queries I got for popi (about a dozen), perhaps
you could (re)post the popi source to this mailing list. My site does
not have ftp capabilities, and many people apparently missed
comp.sources.misc Articles 262-271.

John C. Schultz EMAIL: jcschultz@mmm.3m.com
3M Company WRK: +1 (612) 733 4047
3M Center, Building 518-01-1 St. Paul, MN 55144-1000
The opinions expressed above are my own and DO NOT reflect 3M's

[ Since the popi source is over 0.5 MB, I have put the code in the
Vision List ftp directory. Anonymous ftp to ads.com and cd to
/pub/VISION-LIST-ARCHIVE/SYSTEMS/POPI. This code is only available through
this ftp service.
phil... ]

------------------------------

Date: Thu, 18 Jan 90 15:31:16 GMT
From: us214777@mmm.serc.3m.com (John C. Schultz)
Subject: Available images
Organization: 3M - St. Paul, MN 55144-1000 US

I have received official permission to release a set of texture images
obtained from 3M non-woven material. These images have been graded for
product quality by 3M experts, although the expert grading is subject
to the usual quantitative variations. I have included the expert
rankings below. The physical sample size and the material used are
proprietary.

If you are interested in obtaining the image set (~10 MBytes), please
send a 1/4" cartridge tape suitable for use in a SUN /dev/rst device.
For example, I can write DC300, DC600, or DC6150 tapes. Tapes will be
returned in tar format with the default SUN blocking factor. FTP is
not available, and the amount of data precludes e-mail.

Send tape to
Attn. John C. Schultz
3M Company
Bldg 518, Dock 4
Woodbury, MN 55125

All images are 8 bits/pixel, stored as 512 x 512 pixels with an actual
image area of about 440 x 350 pixels. All images are in raw,
byte-oriented format, although I could convert them to TIFF format if
needed. The images were obtained by backlighting the non-woven
material and digitizing the output of a Sony XC77 TV camera using a
Datacube DIGIMAX/FRAMESTORE combination. The lighting is reasonably
uniform but not perfect.
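
Since the files are raw bytes with no header, loading a frame takes
only a few lines of C. The sketch below assumes one image per file;
the file name is whatever the tape's tar archive uses:

/* Load one raw 8-bit 512 x 512 image as described above; each    */
/* file is exactly 512*512 = 262144 bytes with no header.         */
#include <stdio.h>

#define DIM 512

static unsigned char image[DIM][DIM];

int load_raw(const char *path)
{
    FILE *fp = fopen(path, "rb");

    if (fp == NULL)
        return -1;
    if (fread(image, 1, (size_t)DIM * DIM, fp) != (size_t)DIM * DIM) {
        fclose(fp);
        return -1;                     /* short read: wrong file?  */
    }
    fclose(fp);
    return 0;
}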

The sample images have been divided into two classes based on the material
color - brown and green. The ranking for the brown samples is probably less
accurate than for the green samples because the experts only ranked hardcopy
pictures rather than the actual samples. Comments as to why some samples were
ranked in a particular fashion are included with the rankings. Two sets of
rankings are provided showing a modest range of expert opinions.


Rankings are:   1-2   excellent
                3-4   very good
                5-6   average to good
                7-8   fair
                9     poor
                10    scrap

SAMPLE   rank #1   rank #2   comments
b1       7         8         dark and light spots
b2       5         6         a few light and dark spots
b3       5         5
b4       6         6         dense spots, clumps
b5       9         10        holes, knots, clumps
b6       3         6         ????
b7       4         4
b8       9         10        some dense and light spots or holes
b9       4         4         a few dense areas
b10      3         3
b11      9-10      10        hole, otherwise OK
b12      7-8       8-9       thin sample, open sample
b13      6         7         small dense spots
b14      5         5
b15      5         7         small hole which makes fair

g1       2         2         a few open spots
g2       2         3         a few dense spots
g3       2         2         some small open spots
g4       4         3         a few more open spots
g5       5         5         small open and dense spots
g6       7         8         large open areas
g7       7         8         many light and dark spots
g8       5         5         some open areas
g9       4         4         the large light spot ??
g10      6         6         some dense spots
g11      5         4-5       a few dense spots
g12      6         6         big dense spot
g13      6         6         2 big dense spots
g14      8         9         light spots may be stretched material
g15      5         5         a few light spots
g16      4         2-3       a few dense spots
g17      2         1         good
g18      4         5         some dense spots
g19      3         2         dense spots
g20      9         9         dense spots
g21      6         6         light and dense spots

John C. Schultz EMAIL: jcschultz@mmm.3m.com
3M Company WRK: +1 (612) 733 4047
3M Center, Building 518-01-1 St. Paul, MN 55144-1000
The opinions expressed above are my own and DO NOT reflect 3M's

------------------------------

Date: Thu, 18 Jan 90 21:15:35 +1100
From: munnari!mimir.mlb.dmt.oz.au!pic@uunet.UU.NET
Subject: Re: Vision-List delayed redistribution
Organization: CSIRO Division of Manufacturing Technology, Melbourne, Australia

>Date: Thu, 11 Jan 90 14:16:52 EST
>From: palumbo@cs.Buffalo.EDU (Paul Palumbo)
>Subject: Connected Component Algorithm
>
>I was wondering if anybody out there in net-land knows an image analysis
>technique to locate connected components in digital images. In particular,
>I am looking for an algorithm that can be implemented in hardware that makes
>only one pass through the image in scan-line order and reports several simple
>component features such as component extent (Minimum and Maximum X and Y
>coordinates) and the number of foreground pixels in the component.

We developed such a hardware unit in 1983-4, and it has been on the market
since 1986 or so. The principal features are:

- one-pass connectivity algorithm for binary images
- VME bus boardset, 32 bit data paths
- accepts Datacube MAXBUS video input or image load via VME bus
- programmable processing region up to 512 x 512 pixels
- can operate at video frame rates (though not for a 512x512 image)
- for each connected region in the scene (black or white) it computes:
    - area (0th moment)
    - 1st moments (sumx, sumy)
    - 2nd moments (sumxx, sumyy, sumxy)
    - perimeter length
    - bounding box (minx, maxx, miny, maxy)
    - region color (black or white)
    - whether the region touches the edge of the processing window
    - a point on the perimeter (useful for subsequent processing)
- also computes region topology, i.e. which region is contained
  within which region
- all results are stored in dual-port memory for host access
- an interrupt or pollable status flag tells when each region is
  complete, i.e. you don't have to wait until the end of the frame to
  get all the data; it comes out as the scene is processed
- Unix device driver allows real-time operation on a SUN workstation
- software library compatible with Datacube's MAXWARE revision 3.x

>I know about the LSI Logic "Object Contour Tracer Chip" but this chip appears
>to be too powerful (and slow) for this application. I had found some papers
>by Gleason and Agin dated about 10 years ago but could not find the exact
>details of their algorithm.

Folks at JPL have done work on computing moments given a contour.
Gleason and Agin used a run-length encoding system and then performed
connectivity on the runs; this reduces the computational load, which
mattered since they did the connectivity in software on a low-power
CPU (LSI-11/??). Automatix used a similar scheme in their early
Autovision systems.
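
To make the run-based idea concrete, here is a minimal C sketch (my
own illustration, not the Gleason-Agin algorithm nor the method used
in the board described above): each scan line is reduced to runs of
foreground pixels, each run is connected to the overlapping runs of
the previous line with a union-find structure, and area and bounding
box are accumulated per region in a single raster-order pass. It uses
4-connectivity; relaxing the run-overlap test to allow diagonal
contact gives 8-connectivity.

/*
 * Run-based single-pass connected components sketch.
 * Computes area and bounding box per region; statistics are folded
 * together whenever two partial regions are found to be connected.
 */
#include <stdio.h>
#include <string.h>

#define W 512
#define H 512
#define MAXL (H * (W / 2 + 1))          /* one label per possible run */

static int parent[MAXL];                /* union-find forest          */
static long area[MAXL];                 /* per-region pixel count     */
static int minx[MAXL], maxx[MAXL], miny[MAXL], maxy[MAXL];
static int nlabels;

static int find(int x)                  /* root, with path halving    */
{
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

static void merge(int a, int b)         /* union, folding statistics  */
{
    a = find(a);
    b = find(b);
    if (a == b)
        return;
    parent[b] = a;
    area[a] += area[b];
    if (minx[b] < minx[a]) minx[a] = minx[b];
    if (maxx[b] > maxx[a]) maxx[a] = maxx[b];
    if (miny[b] < miny[a]) miny[a] = miny[b];
    if (maxy[b] > maxy[a]) maxy[a] = maxy[b];
}

static int new_label(void)
{
    int l = nlabels++;
    parent[l] = l;
    area[l] = 0;
    minx[l] = W; maxx[l] = -1; miny[l] = H; maxy[l] = -1;
    return l;
}

/* img[y][x] nonzero = foreground; prints one line per region. */
void run_cc(unsigned char img[H][W])
{
    static int ps[W], pe[W], pl[W];     /* previous row's runs [s,e)  */
    int np = 0, y, l;

    nlabels = 0;
    for (y = 0; y < H; y++) {
        int cs[W], ce[W], cl[W], nc = 0, x = 0, i, j;

        while (x < W) {                 /* run-length encode row y    */
            while (x < W && !img[y][x]) x++;
            if (x == W) break;
            cs[nc] = x;
            while (x < W && img[y][x]) x++;
            ce[nc] = x;
            cl[nc] = -1;
            nc++;
        }
        for (i = 0; i < nc; i++) {      /* connect to previous row    */
            for (j = 0; j < np; j++)
                if (ps[j] < ce[i] && cs[i] < pe[j]) {  /* 4-conn overlap */
                    if (cl[i] < 0) cl[i] = find(pl[j]);
                    else merge(cl[i], pl[j]);
                }
            if (cl[i] < 0)
                cl[i] = new_label();    /* run starts a new region    */
            l = find(cl[i]);
            area[l] += ce[i] - cs[i];   /* accumulate statistics      */
            if (cs[i] < minx[l]) minx[l] = cs[i];
            if (ce[i] - 1 > maxx[l]) maxx[l] = ce[i] - 1;
            if (y < miny[l]) miny[l] = y;
            if (y > maxy[l]) maxy[l] = y;
        }
        np = nc;                        /* current runs become previous */
        memcpy(ps, cs, nc * sizeof(int));
        memcpy(pe, ce, nc * sizeof(int));
        memcpy(pl, cl, nc * sizeof(int));
    }
    for (l = 0; l < nlabels; l++)       /* report each region root    */
        if (find(l) == l)
            printf("region %d: area %ld, bbox (%d,%d)-(%d,%d)\n",
                   l, area[l], minx[l], miny[l], maxx[l], maxy[l]);
}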

>Any help or pointers on locating such an algorithm would be appreciated.

I can't tell you the algorithm, but for more information contact

Tom Seitzler
VISION SECURITY INC.
San Jose, CA
Telephone: (408) 453-1966
Fax: (408) 453-1972

or send me email.

Peter Corke, PHONE: +61 3 487-9259
CSIRO Div. Manufacturing Technology pic@mimir.dmt.oz.au
Melbourne, Australia, 3072

------------------------------

Date: Tue, 16 Jan 90 21:17:33 EST
From: wlim@ai.mit.edu (William Y-P. Lim)
Subject: Call for Participation: AAAI-90 Workshop on Qualitative Vision

CALL FOR PARTICIPATION
======================


AAAI-90 WORKSHOP ON QUALITATIVE VISION

Sunday, July 29 1990 Boston, Massachusetts


Qualitative descriptions of the visual environment are receiving
greater interest in the computer vision community. This recent
increase in interest is partly due to the difficulties that often
arise in the practical application of more quantitative methods.
These quantitative approaches tend to be computationally expensive,
complex, and brittle, and they require constraints which limit their
generality. Moreover, the inaccuracies in the input data often do not
justify such precise methods. At the same time, physical constraints
imposed by application domains such as mobile robotics and real-time
visual perception have prompted the exploration of qualitative
mechanisms which require less computation, have better response time,
focus on salient and relevant aspects of the environment, and use
environmental constraints more effectively.

The one-day AAAI-90 Workshop on Qualitative Vision seeks to bring
together researchers from different disciplines for the active
discussion of the technical issues and problems related to the
development of qualitative vision techniques to support robust
intelligent systems. The Workshop will examine aspects of the
methodology, the description of qualitative vision techniques, the
application of qualitative techniques to visual domains and the role
of qualitative vision in the building of robust intelligent systems.
Topics to be discussed include:

o What is Qualitative Vision? (e.g., definitions, properties,
  biological/psychophysical models or bases)

o Qualitative Visual Features and their Extraction (e.g., 2D/3D
  shape, depth, motion)

o High-level Qualitative Vision (e.g., qualitative 2D/3D models,
  properties, representations)

o Qualitative Vision and Intelligent Behavior (e.g., navigation,
  active or directed perception, hand-eye coordination, automated
  model building)

Since the number of participants is limited to under 50, invitations
for participation will be based on the review of extended technical
abstracts by several members of the Qualitative Vision research
community. The extended abstract should address one of the above
topic areas, be 3 to 5 pages in length (including figures and
references), and begin with the title and author name(s) and
address(es). Extended abstracts (6 copies) should be sent, by
*April 15, 1990*, to:

William Lim
Grumman Corporation
Corporate Research Center
MS A01-26
Bethpage, NY 11714

Decisions on acceptance of abstracts will be made by May 15, 1990.


ORGANIZING COMMITTEE:

William Lim, Grumman Corporation, Corporate Research Center,
(516) 575-5638 or (516) 575-4909, wlim@ai.mit.edu (temporary) or
wlim@crc.grumman.com (soon)

Andrew Blake, Department of Engineering Science, Oxford University,
ab@robots.ox.ac.uk

Philip Kahn, Advanced Decision Systems, (415) 960-7457, pkahn@ads.com

Daphna Weinshall, Center for Biological Information Processing, MIT,
(617) 253-0546, daphna@ai.mit.edu

------------------------------

End of VISION-LIST
********************
