VISION-LIST Digest Fri Nov 12 12:17:48 PDT 93 Volume 12 : Issue 52
- ***** The Vision List host is TELEOS.COM *****
- Send submissions to Vision-List@TELEOS.COM
- Vision List Digest available via COMP.AI.VISION newsgroup
- If you don't have access to COMP.AI.VISION, request list
membership to Vision-List-Request@TELEOS.COM
- Access Vision List Archives via anonymous ftp to FTP.TELEOS.COM
Today's Topics:
Motorized Camera Lens
Creating synthetic stereo image pairs
ROBODOC...
Wanted: software for computational stereo
A new monograph of note
Digitizer code?
Lecturer in IP/RS wanted
AAAI Fall Symposium on Machine Learning in Computer Vision
TR by FTP: Analysis of Optical Flow Constraints
----------------------------------------------------------------------
Date: Thu, 11 Nov 93 15:01:52 CST
From: swain@vuse.vanderbilt.edu (Cassandra T. Swain)
Subject: Motorized Camera Lens
The computer vision research group at Vanderbilt University is looking for
information on a motorized camera lens with computer controlled aperture, zoom,
and focus for less than $4000. Can anyone recommend a specific lens, its
manufacturer, and a contact person?
Thank you for your assistance.
Cassandra T. Swain
Vanderbilt University
Nashville, TN 37235
(615) 322-7269
swain@vuse.vanderbilt.edu
------------------------------
Date: Thu, 11 Nov 1993 09:11:53 +0100
From: Takouhi Ozanian <Takouhi.Ozanian@itk.unit.no>
Subject: Creating synthetic stereo image pairs
I would like to create synthetic stereo image pairs from a given depth
image. I need them for testing a stereo image matching algorithm.
As far as I know, random dot stereograms can be used for such a purpose.
All I have found so far are algorithms for creating SINGLE-image
random dot stereograms (SIRDS), but not stereo pairs.
Any idea and/or code will be highly appreciated.
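One possible approach, sketched below as a starting point: render a
random-dot texture into a left/right pair by shifting each pixel
horizontally by the disparity d = f*b/Z implied by the depth image. The
focal length, baseline, and simple forward-warp scheme here are
illustrative assumptions, not a tested implementation:

```python
import numpy as np

def synth_stereo_pair(depth, f=500.0, baseline=0.1, seed=0):
    """Render a random-dot stereo pair from a depth map.

    Each texture pixel is shifted horizontally by disparity d = f*b/Z,
    so a matcher's recovered disparities can be checked against ground
    truth. f (pixels) and baseline (depth units) are illustrative.
    """
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    texture = (rng.random((h, w)) > 0.5).astype(np.uint8) * 255
    left = texture.copy()
    right = np.zeros_like(texture)
    disparity = f * baseline / depth          # per-pixel shift in pixels
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))
            if 0 <= xr < w:
                right[y, xr] = texture[y, x]  # forward warp; gaps stay 0
    return left, right, disparity
```

Occlusion gaps in the right image are left black here; filling them with
fresh random dots would make the pair harder (and more realistic) to match.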
Thanks,
Taki Ozanian
------------------------------
Date: Wed, 10 Nov 93 08:59:05 -0800
From: "Harpreet S. Sawhney" <sawhney@almaden.ibm.com>
Subject: ROBODOC...
Hi,
Regarding your question on pose estimation for hip surgery, there is
now a commercial system called ROBODOC that does that. Here are some
of the references. Hope you can find some answers.
Harpreet S. Sawhney " From the Land of the Himalayas "
" And now close to the Sierras ! "
Research Staff Member
IBM Almaden Research Center, Dept. K54
650 Harry Road
San Jose, CA 95120
sawhney@almaden.ibm.com
408-927-1799 (work)
408-927-9048 (home)
408-927-4090 (FAX)
------------------------------- Referenced Note ---------------------------
*** DATA BASE : INSP - PAGE 1
03 SEARCH ( TAYLOR ADJ ( ( R ADJ H ) OR ( R ADJ 'H.'$1) OR 'R.H' OR 'R.H.'$1 )).
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
*** *** DOCUMENT NO. INS933102981 *** ***
TITLE Taming the bull: safety in a precise surgical robot
DOCNO INS 933102981
AUTHOR Taylor, R.H. Paul, H.A. Kazanzides, P. Mittelstadt, B.D.
Hanson, W. Zuhars, J. Williamson, B. Musits, B. Glassman, E.
Bargar, W.L.
CORPAUTH IBM Thomas J. Watson Res. Center, Yorktown Heights, NY,
USA
PUBLISHR IEEE
PUBADDR New York, NY, USA
ORDER 1991 P865-70 vol.1
CONFRNCE Pisa, Italy 19-22 June 1991
ISBN 0 7803 0078 5
ABSTRACT The authors have developed an image directed robotic system for
orthopaedic bone machining applications, aimed initially at
cementless total hip replacement surgery. A clinical trial in dogs
needing such surgery has begun. The fact that the application
requires a robot to move a tool in contact with a patient has
motivated the authors to implement a number of redundant consistency
checking mechanisms. The paper provides a brief system overview and
outlines the requirements defined by the veterinary surgeon who uses
the system. It then describes the authors' approach to implementing
these requirements and concludes with a few remarks about their
experience so far and possible extensions of their work
CLASCODE B7520 C3385 C3390 C7330 C5260B
SUBJHEAD biomedical equipment medical image processing robots
surgery
ABSTNUM B9309-7520-005 C9309-3385-012
RECTYPE 06
TRMTCODE A
*** *** DOCUMENT NO. INS930903372 *** ***
TITLE A surgical robot for total hip replacement surgery
DOCNO INS 930903372
AUTHOR Paul, H.A. Mittelstadt, B. Bargar, W.L. Musits, B. Taylor,
R.H. Kazanzides, P. Zuhars, J. Williamson, B. Hanson, W.
CORPAUTH Integrated Surg. Syst. Inc., Sacramento, CA, USA
PUBLISHR IEEE Comput. Soc. Press
PUBADDR Los Alamitos, CA, USA
ORDER Proceedings. 1992 IEEE International Conference on Robotics
And Automation (Cat. No.92CH3140-1) 1992 P606-11 vol.1
CONFRNCE Nice, France 12-14 May 1992
ISBN 0 8186 2720 4
ABSTRACT The authors describe a robotic surgical system that has been
designed to create femoral cavities that are precisely shaped and
positioned for implantation of uncemented prostheses. This robotics
system creates cavities with a dimensional accuracy more than 50
times greater than broached cavities, exceeds the tolerances to which
implants are manufactured, and does not produce gaps that prevent
bone ingrowth. A canine study was undertaken to evaluate the
prosthesis fit and placement achieved by employing a surgical robot
to prepare the femur. This study compared the results achieved on 15
dogs undergoing total hip replacement with manual broaching
techniques and 25 dogs undergoing robotically assisted surgery.
Among the 25 dogs, which ranged in age from 2 1/2 to 11
years, there were no deaths, no infections, and no intraoperative
complications. Human applications of this technique are also
considered
CLASCODE C3385 C3390
SUBJHEAD robots surgery
ABSTNUM C9304-3385-013
RECTYPE 06
TRMTCODE P X
*** *** DOCUMENT NO. INS930903373 *** ***
TITLE Force sensing and control for a surgical robot
DOCNO INS 930903373
AUTHOR Kazanzides, P. Zuhars, J. Mittelstadt, B. Taylor, R.H.
CORPAUTH Integrated Surg. Syst., Sacramento, CA, USA
PUBLISHR IEEE Comput. Soc. Press
PUBADDR Los Alamitos, CA, USA
ORDER Proceedings. 1992 IEEE International Conference on Robotics
And Automation (Cat. No.92CH3140-1) 1992 P612-17 vol.1
CONFRNCE Nice, France 12-14 May 1992
ISBN 0 8186 2720 4
ABSTRACT The authors describe the use of force feedback in a surgical
robot system (ROBODOC). The application initially being addressed is
total hip replacement (THR) surgery, where the robot must prepare a
cavity in the femur for an artificial implant. In this system, force
feedback is used to provide safety, tactile search capabilities, and
an improved man machine interface. Output of the force sensor is
monitored by a safety processor, which initiates corrective action if
any of several application defined thresholds are exceeded. The
robot is able to locate objects using guarded moves and force control
(ball in cone strategy). In addition, the force control algorithm
provides an intuitive man machine interface which allows the surgeon
to guide the robot by leading its tool to the desired location. An
application of force control currently under development is
described, where the force feedback is used to modify the cutter feed
rate (force controlled velocity)
CLASCODE C3385 C3390 C3120F
SUBJHEAD feedback force control robots surgery
ABSTNUM C9304-3385-014
RECTYPE 06
TRMTCODE P
================================================================================
*** *** DOCUMENT NO. INS910310123 *** ***
TITLE Robotic total hip replacement surgery in dogs
DOCNO INS 910310123
AUTHOR Taylor, R.H. Paul, H.A. Mittelstadt, B.D. Glassman, E.
Musits, B.L. Bargar, W.L.
CORPAUTH IBM Thomas J. Watson Res. Center, Yorktown Heights, NY,
USA
PUBLISHR IEEE
PUBADDR New York, NY, USA
ORDER Images of the Twenty First Century. Proceedings of the Annual
International Conference of the IEEE Engineering in Medicine and
Biology Society (Cat. No.89CH2770-6) 1989 P887-9 vol.3
CONFRNCE Seattle, WA, USA 9-12 Nov. 1989
ABSTRACT Approximately half of over 120000 total hip replacement
operations performed annually in the United States use cementless
implants. The standard method for preparing the femoral cavity for
such implants involves the use of a mallet-driven handheld broach whose
shape matches that of the desired implant. In vitro experiments have
supported the possibility that more accurate (and efficacious)
results can be achieved by using a robot to machine the cavity. The
authors are developing a second generation system suitable for use in
an operating room, targeted at clinical trials on dogs needing hip
implants. A description is given of the background, objectives,
architecture, and surgical procedure for this system. Also provided
are brief descriptions of key results from earlier experiments and
planned future work
CLASCODE C3385C C3390 C7420 C7330
SUBJHEAD bone prosthetics robots surgery
ABSTNUM C91008948
RECTYPE 06
TRMTCODE X
================================================================================
*** *** DOCUMENT NO. INS902106965 *** ***
TITLE On homogeneous transforms, quaternions, and computational efficiency
DOCNO INS 902106965
AUTHOR Funda, J. Taylor, R.H. Paul, R.P.
CORPAUTH Dept. of Comput. & Inf. Sci., Pennsylvania Univ., Philadelphia,
PA, USA
CODEN IRAUEZ
ISSN 1042-296X
ORDER IEEE Trans. Robot. Autom. (USA) Vol.6, No.3 June 1990 P382-8
ABSTRACT Three dimensional modeling of rotations and translations in
robot kinematics is most commonly performed using homogeneous
transforms. An alternate approach, using quaternion vector pairs as
spatial operators, is compared with homogeneous transforms in terms
of computational efficiency and storage economy. The conclusion
drawn is that quaternion vector pairs are as efficient as, more
compact than, and more elegant than their matrix counterparts. A
robust algorithm for converting rotational matrices into equivalent
unit quaternions is described, and an efficient quaternion based
inverse kinematics solution for the Puma 560 robot arm is presented
CLASCODE C3390 C1390 C1130
SUBJHEAD kinematics robots transforms
ABSTNUM C90062730
RECTYPE 02
TRMTCODE T
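The matrix-to-quaternion conversion the last abstract mentions is commonly
implemented with a trace-based branch that selects the largest diagonal
element for numerical robustness. A sketch of that standard method (not the
paper's own algorithm), in Python for brevity:

```python
import math

def rot_to_quat(R):
    """Convert a 3x3 rotation matrix to a unit quaternion (w, x, y, z).

    Uses the standard trace test; when the trace is non-positive, the
    largest diagonal term is used instead to avoid cancellation. This is
    a generic sketch, not the algorithm from the cited paper.
    """
    t = R[0][0] + R[1][1] + R[2][2]
    if t > 0:
        s = math.sqrt(t + 1.0) * 2          # s = 4w
        return (s / 4,
                (R[2][1] - R[1][2]) / s,
                (R[0][2] - R[2][0]) / s,
                (R[1][0] - R[0][1]) / s)
    # pick the largest diagonal element to avoid cancellation
    i = max(range(3), key=lambda k: R[k][k])
    j, k = (i + 1) % 3, (i + 2) % 3
    s = math.sqrt(R[i][i] - R[j][j] - R[k][k] + 1.0) * 2
    q = [0.0, 0.0, 0.0, 0.0]
    q[0] = (R[k][j] - R[j][k]) / s          # w component
    q[i + 1] = s / 4
    q[j + 1] = (R[j][i] + R[i][j]) / s
    q[k + 1] = (R[k][i] + R[i][k]) / s
    return tuple(q)
```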
------------------------------
Date: 11 Nov 1993 13:35:44 -0600
From: KOTT@cgi.com
Organization: UTexas Mail-to-News Gateway
Subject: Wanted: software for computational stereo
[ This is a repost, but with more detail. phil... ]
I am looking for commercial software (or solid-quality academic
software) for COMPUTATIONAL STEREO. Any platform will do.
Let me give you some information about our application. We deal with
2 or 3 monochromatic images of about 200x300mm size. The stereo baseline is
comparable to the size of the original object. We can do scanning at no better
than 600 dpi. The image has two types of features:
1. Of primary interest: spots, highly contrasted, with well-defined edges,
typically of size 0.2-3.0 mm
2. Of secondary interest: streaks, with relatively poorly defined edges,
typically 0.5-1.0 mm wide and 10-30 mm long.
Our ultimate objective is to obtain a 3D, CAD-like reconstruction of the
object.
The software should be able to perform automatically all the conventional
computational stereo tasks: feature acquisition, matching, and depth
determination. The output does not have to be a stereo display. We can just as
well deal with a file that gives us a list of features with their
XYZ coordinates. We are willing and able to customize and integrate
the software if necessary.
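For context, the depth-determination step reduces, once features are
matched, to triangulation. A minimal sketch under an idealized rectified
parallel-camera model (a baseline comparable to the object, as described
above, would in practice require full camera calibration); the function
name and parameters are illustrative:

```python
def triangulate(xl, yl, xr, f, baseline):
    """Back-project a matched feature to XYZ (rectified stereo model).

    (xl, yl) and (xr, yl) are the feature's pixel coordinates in the
    left and right images (principal point at the origin), f is the
    focal length in pixels, and baseline is the camera separation.
    Uses the standard parallel-axis relation Z = f*b / (xl - xr).
    """
    d = xl - xr                   # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or geometry")
    Z = f * baseline / d
    return (xl * Z / f, yl * Z / f, Z)
```

Running this over every matched spot or streak yields exactly the kind of
feature list with XYZ coordinates described above.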
I would be delighted to obtain more information about your product and its
capabilities. You could e-mail it to me at kott@cgi.com, or
fax it to (412) 642-6906, or mail it to
Dr. Alexander Kott, PhD
Principal Engineer
Carnegie Group, Inc.
Pittsburgh, PA 15222
(412) 642-6900
Your replies and pointers to other parties will be greatly appreciated!
------------------------------
Date: Wed, 10 Nov 93 15:51:13 EST
From: ehling@ai.mit.edu (Teresa A. Ehling)
Subject: A new monograph of note
Just Released from The MIT Press...
THREE-DIMENSIONAL COMPUTER VISION
A Geometric Viewpoint
Olivier Faugeras
This monograph by one of the world's leading researchers provides
a thorough, mathematically rigorous exposition of a broad and
vital area in computer vision: the problems and techniques
related to three-dimensional vision and motion. The emphasis
is on using geometry to solve problems in stereo and motion,
with examples from navigation and object recognition.
Contents
1 Introduction
2 Projective Geometry
3 Modeling and Calibrating Cameras
4 Edge Detection
5 Representing Geometric Primitives and Their Uncertainty
6 Stereo Vision
7 Determining Discrete Motion from Points and Lines
8 Tracking Tokens over Time
9 Motion Fields of Curves
10 Interpolating and Approximating Three-Dimensional Data
11 Recognizing and Locating Objects and Places
12 Answers to Problems
A Constrained Optimization
B Some Results from Algebraic Geometry
C Differential Geometry
Bibliography
Index
663 pp. $65.00
ISBN 0-262-06158-9 FAUTH
The MIT Press
55 Hayward Street
Cambridge, MA 02142-1399
mitpress-orders@mit.edu
------------------------------
Date: Thu, 11 Nov 1993 00:08:31 GMT
From: David Wynter <100033.505@CompuServe.COM>
Subject: Digitizer code?
Hi,
I wonder if any of you know of public-domain code that would get us
started on a development. We have developed a database for PenPoint OS that
stores BLOBs. We are looking at storing vector graphics. What we want to do is
allow users to draw on the screen and 'tidy up' their scrawls (a shape
recognizer).
By this I mean when they draw a rough square we pick up the digitizer vectors,
apply some rules that determine that it is a square and rerender the input as a
square with straight sides. I am sure there must be someone who has
experimented with this problem, hopefully in 'C' code.
The only way I can think of doing it at this early stage is measuring relative
angle changes over the distance of the continuous input stroke. This combined
with an allowable width envelope for shapes with straight sides (rectangles,
triangles and straight lines) would be a good starting point. Circles would
require a closed path with the same average angle change relative to the
diameter. The way the Apple Newton's shape recognizer works is good enough.
Can someone point me to some code published in a journal for instance?
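The relative-angle-change idea above can be sketched roughly as follows;
`find_corners` and its threshold are hypothetical names and values, and this
is a starting point rather than a tested recognizer (the same logic ports
directly to 'C'):

```python
import math

def find_corners(stroke, angle_thresh=0.6):
    """Flag stroke samples where the pen direction turns sharply.

    stroke is a list of (x, y) digitizer samples. A sample counts as a
    corner when the direction change between its incoming and outgoing
    segments exceeds angle_thresh radians; a rough square should then
    yield corners to snap to straight sides. Threshold is illustrative.
    """
    corners = []
    for i in range(1, len(stroke) - 1):
        (x0, y0), (x1, y1), (x2, y2) = stroke[i - 1], stroke[i], stroke[i + 1]
        a_in = math.atan2(y1 - y0, x1 - x0)
        a_out = math.atan2(y2 - y1, x2 - x1)
        turn = abs(a_out - a_in)
        if turn > math.pi:                  # wrap angle to [0, pi]
            turn = 2 * math.pi - turn
        if turn > angle_thresh:
            corners.append(i)
    return corners
```

Real digitizer input is noisy, so in practice the incoming and outgoing
directions would be averaged over several samples rather than taken from
single segments.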
TIA
David Wynter Melbourne Australia <100033.505@CompuServe.COM>
------------------------------
Date: Wed, 10 Nov 1993 17:59:21 +1000
From: Mark Shortis <mark_shortis@muwayf.unimelb.edu.au>
Subject: Lecturer in IP/RS wanted
Lecturer in Image Processing and/or Remote Sensing
Three Year Contract
Department of Surveying and Land Information
The University of Melbourne, Australia
The University of Melbourne wishes to appoint a lecturer in image
processing and remote sensing in the Department of Surveying and Land
Information.
The Department offers a four-year degree in Surveying and two
five-year programs leading to combined degrees in Surveying and
Science (Computer Science or Environmental Science), and Surveying
and Arts (Geography). It also has a strong graduate program at
Masters and Ph.D. levels, and offers Graduate Diplomas in Surveying
Science and Geographic Information Systems (GIS). The Department has
10 academic staff, 150 undergraduate level students and more than 50
graduate level students.
Staff and graduate students have access to the latest hardware and
software for both remote sensing and image analysis with applications
operating within the UNIX and PC environments, including an extensive
SUN network and Intergraph workstations.
The appointee will be involved in the development and teaching of
courses in remote sensing and image processing, and may also be
called upon to teach subjects in closely related areas of surveying
science or GIS. A further aspect of the duties will be organising
and participating in continuing education programs. The appointee
will be expected to actively participate in research, and may
undertake research consultancies in accordance with university
policy.
Applicants should have an appropriate higher degree, preferably a
doctorate in an appropriate discipline, with a demonstrated ability
in teaching and research. They should have a good computing
background and an interest in remote sensing as it applies to the
mapping sciences.
Further enquiries may be directed to the Head of the Department,
Associate Professor C.S. Fraser on
+61 3 344 6806 or by electronic mail to
CLIVE_FRASER@MAC.UNIMELB.EDU.AU
Salary is in the range of A$41,000 - A$48,688 per annum and
appointment is for three years in the first instance.
Further information regarding details of application procedure and
conditions of appointment is available from Personnel Services on +61
3 344 6078.
Applications, including names and addresses of at least three
referees and quoting the relevant position number, should be sent to
The Director, Personnel Services, The University of Melbourne,
Parkville, Victoria, 3052.
Applications close: 10 December 1993.
An equal opportunity employer.
Dr. Mark R. Shortis, Mark_Shortis@mac.unimelb.edu.au
Senior Lecturer,
Dept. of Surveying and Land Information,
University of Melbourne, Telephone +613 344 6401
Parkville 3052, AUSTRALIA. Facsimile +613 347 2916
"I intend to live forever, or die in the attempt"
------------------------------
Date: Thu, 11 Nov 93 12:32:37 EST
From: Dr Kevin Bowyer <kwb@tortugas.csee.usf.edu>
Subject: AAAI Fall Symposium on Machine Learning in Computer Vision
AAAI Fall Symposium on Machine Learning in Computer Vision
Working notes are available from AAAI as Tech Report FSS-93-04.
Contact fss@aaai.org for details on ordering copies of the tech report.
Table of Contents:
Incremental Modelbase Updating: Learning New Model Sites
Kuntal Sengupta and Kim L. Boyer, The Ohio State University / 1
Learning Image to Symbol Conversion
Malini Bhandaru, Bruce Draper and Victor Lesser, University of
Massachusetts at Amherst / 6
Transformation-invariant Indexing and Machine Discovery for Computer Vision
Darrell Conklin, Queen's University / 10
Recognition and Learning of Unknown Objects in a Hierarchical Knowledge-base
L. Dey, P.P. Das, and S. Chaudhury, I.I.T., Delhi / 15
Unsupervised Learning of Object Models
C. K. I. Williams, R. S. Zemel, Univ. of Toronto; M. C. Mozer,
Univ. of Colorado / 20
Learning and Recognition of 3-D Objects from Brightness Images
Hiroshi Murase and Shree K. Nayar, Columbia University / 25
Adaptive Image Segmentation Using Multi-Objective Evaluation and
Hybrid Search Methods
Bir Bhanu, Sungkee Lee, Subhodev Das, University of California / 30
Learning 3D Object Recognition Models from 2D Images
Arthur R. Pope and David G. Lowe, University of British Columbia / 35
Matching and Clustering: Two Steps Towards Automatic Objective
Model Generation
Patric Gros, LIFIA, Grenoble, France / 40
Learning About A Scene Using an Active Vision System
P. Remagnino, M. Bober and J. Kittler, University of Surrey, UK / 45
Learning Indexing Functions for 3-D Model-Based Object Recognition
Jeffrey S. Beis and David G. Lowe, University of British Columbia / 50
Non-accidental Features in Learning
Richard Mann and Allan Jepson, University of Toronto / 55
Feature-Based Recognition of Objects
Paul A. Viola, Massachusetts Institute of Technology / 60
Learning Correspondences Between Visual Features and Functional Features
Hitoshi Matsubara, Katsuhiko Sakaue and Kazuhiko Yamamoto, ETL, Japan / 65
A Self-Organizing Neural Network that Learns to Detect and Represent
Visual Depth from Occlusion Events
Johnathon A. Marshall and Richard K. Alley, University of North Carolina / 70
Learning from the Schema Learning System
Bruce Draper, University of Massachusetts / 75
Learning Symbolic Names for Perceived Colors
J.M. Lammens and S.C. Shapiro, SUNY Buffalo / 80
Extracting a Domain Theory from Natural Language to Construct a Knowledge Base
for Visual Recognition
Lawrence Chachere and Thierry Pun, University of Geneva / 85
A Vision-Based Learning Method for Pushing Manipulation
Marcos Salganicoff, Univ. of Pennsylvania; Giorgio Metta, Andrea Oddera
and Giulio Sandini, University of Genoa. / 90
A Classifier System for Learning Spatial Representations Based
on a Morphological Wave Propagation Algorithm
Michael M. Skolnick, R.P.I. / 95
Evolvable Modeling: Structural Adaptation Through Hierarchical Evolution
for 3-D Model-based Vision
Thang C. Nguyen, David E. Goldberg, Thomas S. Huang, University of Illinois / 100
Developing Population Codes for Object Instantiation Parameters
Richard S. Zemel, Geoffrey E. Hinton, University of Toronto / 105
Integration of Machine Learning and Vision into an Active Agent Paradigm
Peter W. Pachowicz, George Mason University / 110
Assembly plan from observation
K. Ikeuchi and S.B. Kang, Carnegie-Mellon University / 115
Learning Shape Models for a Vision Based Human-Computer Interface
Jakub Segen, AT&T Bell Laboratories / 120
Learning Visual Speech
G. J. Wolff, K. V. Prasad, D. G. Stork & M. Hennecke,
Ricoh California Research Center / 125
Learning open loop control of complex motor tasks
Jeff Schneider, University of Rochester / 130
Issues in Learning from Noisy Sensory Data
J. Bala and P. Pachowicz, George Mason University / 135
Learning combination of evidence functions in object recognition
D. Cook, L. Hall, L. Stark and K. Bowyer, University of South Florida / 139
Learning to Eliminate Background Effects in Object Recognition
Robin R. Murphy, Colorado School of Mines / 144
The Prax Approach to Learning a Large Number of Texture Concepts
J. Bala, R. Michalski, and J. Wnek, George Mason University / 148
Non-Intrusive Gaze Tracking Using Artificial Neural Networks
Dean A. Pomerleau and Shumeet Baluja, Carnegie Mellon University / 153
Toward a General Solution to the Symbol Grounding Problem: Combining Learning
and Computer Vision
Paul Davidsson, Lund University / 157
Late Papers:
Symbolic and Subsymbolic Learning for Vision: Some Possibilities
Vasant Honavar, Iowa State University / 161
------------------------------
Date: Wed, 10 NOV 93 10:24 N
From: NESI@ingfi1.cineca.it
Subject: TR by FTP: Analysis of Optical Flow Constraints
The following Technical Reports are available from anonymous ftp.
Please post this in Vision-List.
"Analysis of Optical Flow Constraints"
Alberto Del Bimbo, Paolo Nesi, Jorge L. C. Sanz
Dipartimento di Sistemi e Informatica, Università di Firenze,
Facoltà di Ingegneria, via S. Marta 3, Firenze, Italy.
nesi@ingfi1.ing.unifi.it, tel.: +39-55-4796265, fax.: +39-55-4796363
Computer Science Department, IBM Almaden Research Center, CA, USA.
Computer Research and Advanced Applications Group, IBM Argentina.
ABSTRACT
The velocity field is the projection on the image plane of the 3-D real
velocity. The gradient-based approach for velocity field estimation was
defined on the basis of a partial differential equation (commonly referred
to as the Optical Flow Constraint -- OFC --) that models the changes in
image brightness. The fields estimated by the gradient-based techniques are
usually called optical flow and are only an approximation of the
velocity field. Several authors have investigated conditions for the
applicability of the OFC equation for modeling the velocity field. More
recently, a different constraint equation was introduced (that will be
referred to in the following as Extended Optical Flow Constraint -- EOFC --)
which is supposed to provide a more accurate approximation of velocity field
than the OFC. Many experimental results, mostly based on the
OFC, are available in the literature. However, few papers exist that show
conditions for the applicability of OFC and the relationships between the
OFC and EOFC for modeling the velocity field. In this paper, several
conditions for the applicability of the OFC in modeling the velocity field
are shown. Moreover, differences between the OFC and EOFC equations for
modeling the velocity field are analyzed.
Index terms: motion analysis, image flow, optical flow, constraint
equation analysis, velocity field, motion field.
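For readers unfamiliar with the notation, the two constraints are commonly
written as follows (standard forms from the optical-flow literature, not
quoted from the report): with image brightness I(x, y, t) and apparent
velocity v = (u, v), the OFC expresses brightness constancy along the
motion, while the EOFC takes a continuity-equation form with an added
divergence term.

```latex
% OFC: the total derivative of brightness along the motion vanishes
I_x u + I_y v + I_t = 0
% EOFC: continuity-equation form, with a divergence term
I_x u + I_y v + I\,(u_x + v_y) + I_t = 0
\quad\Longleftrightarrow\quad
\frac{\partial I}{\partial t} + \nabla \cdot (I\,\mathbf{v}) = 0
```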
"A Vision System for Estimating People Flow"
A. Del Bimbo, P. Nesi
Dipartimento di Sistemi e Informatica, Università di Firenze,
Facoltà di Ingegneria, via S. Marta 3, Firenze, Italy.
nesi@ingfi1.ing.unifi.it, tel.: +39-55-4796265, fax.: +39-55-4796363
Counting the number of people crossing a public area can be very useful for
properly scheduling the frequency of a service. Mechanical and
photosensitive systems, such as rotating tripod gates, short iron doors,
weight-sensitive boards, and photoelectric cells, have often been used for
such estimates. Since these methods are not efficient in critical
conditions, vision-based approaches have been proposed. Many of them
identify moving objects through a segmentation process. Once the objects
are identified, they are tracked in the sequence of images and counted.
These approaches have some drawbacks when they are used in critical
conditions such as for counting the people getting on and off a public bus.
In this paper, a new technique for counting passing people which is based on
motion estimation and spatio-temporal interpretation of the estimated motion
is proposed, with its implementation on a prototype DSP-based architecture.
Index terms: counting system, people's flow, feature flow field,
spatio-temporal optical flow interpretation.
*****
To get the PostScript files, connect to 150.217.11.13 by anonymous
ftp, leaving your bitnet address as the password. The compressed
files are "analysis.ps.Z" for the first paper and "mvaj.ps.Z" for the
second. They are located in the directory /motion/papers, so:
% ftp 150.217.11.13
Connected to aguirre
Name: anonymous
Password: <your complete bitnet address>
ftp> cd motion/papers
ftp> binary
ftp> get mvaj.ps.Z
ftp> get analysis.ps.Z
ftp> bye
% compress -d mvaj.ps.Z
% lpr -P<name of your printer> mvaj.ps
% compress -d analysis.ps.Z
% lpr -P<name of your printer> analysis.ps
*****
For additional information or comments do not hesitate to contact:
Dr. Paolo Nesi
Dept. of Systems and Informatics
University of Florence
Via S. Marta 3
50139 Firenze, ITALY
NESI@INGFI1.ING.UNIFI.IT
------------------------------
End of VISION-LIST digest 12.52
************************