VISION-LIST Digest    Thu Sep 17 12:22:48 PDT 92     Volume 11 : Issue 34 

- Send submissions to Vision-List@ADS.COM
- Vision List Digest available via COMP.AI.VISION newsgroup
- If you don't have access to COMP.AI.VISION, request list
membership to Vision-List-Request@ADS.COM
- Access Vision List Archives via anonymous ftp to FTP.ADS.COM

Today's Topics:

Vision Software Availability
Visually guided grasping, any?
Re: 3D Grasp Planning
Image Registration
Help on publishing an article
Text Book for Computer Vision
Machine vision vendors?
Express Eye Movements & Attention: BBS Call for Commentators

----------------------------------------------------------------------

From: misha@ai.mit.edu (Mike Bolotski)
Date: Wed, 16 Sep 92 11:31:25 EDT
Subject: Vision Software Availability

We're trying to determine whether any of the following types of
packages are in the public domain. The software should be reliable and
relatively bug-free; it need not be general purpose (a restricted set
of objects or environments is OK). It should run out of the box in
either C, C++, or Lisp. It doesn't need a fancy user interface.

Object Recognition:  Be able to recognize 3D polygonal objects from 2D
                     image data. A large number of models is not an
                     issue.

Motion Vision:       Extract egomotion from a sequence of images. Depth
                     maps aren't necessary, but they might be
                     interesting.

Image Processing/
Analysis:            Low-level signal processing and early vision
                     algorithms. We know about Khoros, but a simpler
                     system would be nice.

Motion Sequences:    Are there any sequences available other than the
                     ones from the various motion workshops?

Any and all information would be greatly appreciated.

Mike Bolotski          MIT Artificial Intelligence Laboratory
misha@ai.mit.edu       Cambridge, MA 02139   (617) 253-8170

------------------------------

Date: 13 Sep 92 14:24:30 GMT
From: edelman@wisdom.weizmann.ac.il (Edelman Shimon)
Organization: Weizmann Institute of Science, Dept. of Applied Math & CS
Subject: Visually guided grasping, any?
Keywords: vision, grasping

I am looking for information on recent works in visually guided
grasping (including the relevant aspects of visual processing,
learning, planning and robotics). I am aware of Bartlett Mel's work on
learning to reach, and of models of visual-motor coordination by
Kuperstein, and by G. Edelman's group (formerly of Rockefeller U.).
None of those works, however, seems to deal with grasping, or with
visual processing of shape for the purpose of grasping (as opposed to
mere sensing of the location of the end effector).

Please reply directly to "edelman@wisdom.weizmann.ac.il". If there are
enough responses, I'll post a summary.

Shimon

Dr. Shimon Edelman                Internet: edelman@wisdom.weizmann.ac.il
Dept. of Applied Mathematics and Computer Science
The Weizmann Institute of Science
Rehovot 76100, ISRAEL

------------------------------

Date: Sun, 13 Sep 92 12:13:46 CDT
From: krao@ra.csc.ti.com (Kashi Rao)
Subject: Re: 3D Grasp Planning

> Date: 20 Aug 92 17:02:58 GMT
> From: johnson@akula.llnl.gov ( Robert Johnson )
> I am looking for references (and/or software) on 3D representations
> and algorithms suitable for robot grasp planning from range (or stereo)
> images.

We did some work along these lines; the reference below describes it.

@ARTICLE(Rao-etal89,
AUTHOR = "K. Rao and G. Medioni and H. Liu and G. Bekey",
TITLE = "Shape Description and Grasping for Robot Hand-Eye Coordination",
JOURNAL = "IEEE Control Systems Magazine: Special Issue on Robotics and Automation",
YEAR = "1989",
MONTH = feb,
VOLUME = 9,
NUMBER = 2,
PAGES = "22-29"
)

Regards,

Kashi Rao
Image Understanding
Corporate Research, Texas Instruments
Dallas, Texas 75265
krao@csc.ti.com
214-995-0335

------------------------------

Date: Thu, 17 Sep 92 13:43:16 BST
From: cv_colem@hal.bristol-poly.ac.uk (CV Coleman)
Subject: Image Registration

I would be very grateful if anyone could point me toward any articles
on image registration (with ftp locations if possible!). I am
interested in registering images, using a genetic algorithm to find
suitable coefficients for an affine transform that matches a pair of
stereo images.
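
To make the idea concrete, here is a minimal sketch of the sort of
thing I mean (grey-scale numpy arrays, a sum-of-squared-differences
fitness, and a simple mutation-only GA; every name and parameter
choice here is illustrative, not taken from any published method):

    import numpy as np
    from scipy.ndimage import affine_transform

    def warp(image, p):
        # p = [a, b, ty, c, d, tx]: output pixel (y, x) is sampled
        # from input location [[a, b], [c, d]] @ (y, x) + (ty, tx)
        return affine_transform(image,
                                np.array([[p[0], p[1]], [p[3], p[4]]]),
                                offset=(p[2], p[5]), order=1)

    def fitness(p, left, right):
        # negative sum of squared grey-level differences (higher is better)
        return -np.sum((warp(right, p) - left) ** 2)

    def register(left, right, pop_size=50, generations=100, seed=0):
        rng = np.random.default_rng(seed)
        identity = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
        sigma = np.array([0.02, 0.02, 1.0, 0.02, 0.02, 1.0])  # per-gene step
        pop = identity + rng.normal(size=(pop_size, 6)) * sigma
        for _ in range(generations):
            scores = np.array([fitness(p, left, right) for p in pop])
            elite = pop[np.argsort(scores)[::-1][:pop_size // 5]]  # best 20%
            parents = elite[rng.integers(0, len(elite), pop_size)]
            pop = parents + rng.normal(size=(pop_size, 6)) * sigma * 0.2
            pop[0] = elite[0]                                      # elitism
        scores = np.array([fitness(p, left, right) for p in pop])
        return pop[np.argmax(scores)]

On real stereo pairs one would presumably want a coarse-to-fine search
and a fitness more robust than plain SSD, but that is the skeleton.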

I thank you.

This is Charlie Coleman signing off. Bye-bye.

------------------------------

Date: Thu, 17 Sep 92 11:51:26 PDT
From: Julian Sominka (CSM) <jsominka@camborne-school-of-mines.ac.uk>
Subject: Help on publishing an article

I would like to publish a paper on the project I am currently
working on and I would appreciate any advice I can get.

I have been working for some time now on an image analysis
project, trying to analyse machined metal surfaces (steels
and brasses that have been machined by turning and grinding).
To date, I have had some success using fairly standard Fourier
transform and texture-based techniques in classifying regions
in grey-scale images captured via an optical microscope and camera.
I feel that the techniques I have used are not particularly novel,
but their use in this application is.
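
For illustration, the flavour of the features I use is roughly as
below: ring and wedge energies of the power spectrum (rings pick up
the dominant spatial frequency of, say, grinding marks; wedges pick
up their orientation). The binning and the nearest-neighbour
classifier here are a simplified stand-in, not my exact code:

    import numpy as np

    def fourier_texture_features(region, n_rings=4, n_wedges=4):
        # power spectrum, with DC shifted to the centre
        power = np.abs(np.fft.fftshift(np.fft.fft2(region))) ** 2
        h, w = region.shape
        y, x = np.mgrid[0:h, 0:w]
        r = np.hypot(y - h / 2, x - w / 2)
        # opposite directions carry the same power, so fold onto [0, pi)
        theta = np.arctan2(y - h / 2, x - w / 2) % np.pi
        rmax = r.max() + 1e-9
        feats = []
        for i in range(n_rings):      # radial (frequency) bins
            m = (r >= rmax * i / n_rings) & (r < rmax * (i + 1) / n_rings)
            feats.append(power[m].sum())
        for j in range(n_wedges):     # angular (orientation) bins
            m = (theta >= np.pi * j / n_wedges) & \
                (theta < np.pi * (j + 1) / n_wedges)
            feats.append(power[m].sum())
        feats = np.array(feats)
        return feats / feats.sum()    # normalise out overall contrast

    def classify(region, examples):
        # examples: list of (label, feature_vector) from training regions
        f = fourier_texture_features(region)
        label, _ = min(examples, key=lambda e: np.sum((f - e[1]) ** 2))
        return label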

I have never tried to publish anything before, and my question
is: which journals might be interested in publishing an article
on this subject?

Any advice as well as names to contact, addresses etc. would be
greatly appreciated.

In addition, if anyone knows of any work done in a similar project
I would be interested in hearing about it.

Thanks,
Julian Sominka

jsominka@csm.ac.uk (Janet & Bitnet)
jsominka%csm.ac.uk@nsfnet-relay.ac.uk (Internet)

------------------------------

Date: Tue, 8 Sep 92 09:56:28 GMT
From: cpna@marlin.jcu.edu.au (Nizam Ahmed)
Organization: James Cook University
Subject: Text Book for Computer Vision
Keywords: computer vision

I am looking for a suitable text book and a reference book to teach a
semester course on computer vision in the final year of a four year
degree programme or the first year of a two year master's programme in
computer science.

So far I have tried a combination of Computer Vision by Ballard &
Brown and Vision by Marr, but I am not quite satisfied.

Thanking you in advance.

Nizam

nizam@coral.cs.jcu.edu.au

------------------------------

Date: Thu, 17 Sep 92 00:16:36 GMT
From: park@netcom.com (Bill Park)
Organization: Netcom - Online Communication Services (408 241-9760 guest)
Subject: Machine vision vendors?
Summary: Who sells intelligent machine vision systems these days?
Keywords: vendors, machine vision image processing image understanding

I'm trying to collect a list of current vendors of machine vision
systems that have some level of "intelligence" -- by that I mean that
they should be capable of identifying objects whose appearance varies
significantly. They might appear in different locations and
orientations, for example, or the lighting might vary, or the objects
themselves might even vary in shape. But I'm looking for
COMMERCIALIZED technology, red in tooth and claw, not wimpy lab stuff!

I have in mind products of companies such as Automatix and Machine
Intelligence Corporation (now sadly defunct). They did a
nearest-neighbor match in a space of shape features that are invariant
under displacement and/or rotation of the object (area, perimeter,
number of holes, various ratios of such numbers). There once were
about fifty companies selling machine vision systems like this, mostly
based on SRI International's public-domain "binary vision module"
(Duda, Agin). Did a shakeout occur in that industry like the one
that happened in robotics, and the one going on in expert systems now?
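
For those who never met one of those systems, the flavour was roughly
as in the sketch below (an illustrative reconstruction, not SRI's
actual module; the features and the crude scaling are my own choices):

    import numpy as np
    from scipy import ndimage

    def blob_features(blob):
        # blob: 2D boolean array containing one foreground object
        area = blob.sum()
        # perimeter: foreground pixels with a 4-connected background
        # neighbour
        perimeter = (blob & ~ndimage.binary_erosion(blob)).sum()
        # holes: background components that don't touch the border
        bg, n_bg = ndimage.label(~blob)
        exterior = (set(bg[0, :]) | set(bg[-1, :]) |
                    set(bg[:, 0]) | set(bg[:, -1])) - {0}
        holes = n_bg - len(exterior)
        # ratios like compactness are invariant to position and rotation
        compactness = perimeter ** 2 / max(area, 1)
        return np.array([area, perimeter, holes, compactness])

    def classify(blob, models):
        # models: list of (name, feature_vector) from training views;
        # nearest neighbour in crudely scaled feature space
        f = blob_features(blob)
        scale = np.maximum(np.abs(f), 1.0)
        name, _ = min(models, key=lambda m: np.sum(((f - m[1]) / scale) ** 2))
        return name

Training amounts to showing each part a few times and storing the
feature vectors; recognition is just the nearest stored vector.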

Other experimental systems (Bolles, Faugeras, Birk, etc.) were able to
recognize partially-visible objects (e.g., they could identify and
locate overlapping objects tumbled in a bin). DARPA's Autonomous Land
Vehicle program produced the Terregator vehicle that can see well
enough to drive on the streets of CMU (maybe Pittsburgh, too, while
the University's lawyers weren't looking?), while Professor Dickmanns's
vehicle drives itself on the Autobahn in Germany. Did algorithms
like these ever make it into commercial, industrial-strength products,
or are they still stuck in the lab?

To scope this request a bit more:

Systems that process 2D or 3D images or even moving images are OK (Ted
Turner's colorization process??). Binary, grey-scale, or color OK.
Preferably light-based, but IR, UV, radar, subatomic particles,
ultrasonics, etc. OK if non-trivial. Trainable systems are good. So
are systems that work off of geometric or CAD models. Morphological
analysis systems would qualify. Texture/fractal analysis systems
(Pentland) would qualify. Probably anything spun off from DARPA's
Image Understanding Program would qualify (though they are probably
mostly secret military reconnaissance imagery analysis programs and
not really on the open market). Bio-lab programs that read
electrophoresis gels (didn't Intelligenetics sell one of those?) or
karyotypes or pathology slides or fingerprints or yak butt hairs.
Vision systems that make esthetic judgements (evaluate a magazine
layout, rate your architectural color scheme, criticize your choice of
bow tie). I guess I should include robot cart vision systems like the
one on Joe Engelberger's candy-striper hospital robot (TRC, Inc.).
Algorithmic, expert systems, fuzzy logic, neural nets, genetic
algorithms, simulated annealing, omphaloskepsis -- I don't care what
reasoning method.

I'm not looking for: imaging systems per se (even CyberVision's),
optical character recognition, pen-input recognition,
template-matchers in image space, or inspection systems that merely
look at a few fixed regions of an image of an object in a reference
position and count the pixels whose brightness exceeds a threshold.
Cochlea's 3D acoustic inspection/recognition/tracking systems are kind
of on the borderline -- just far enough off the wall to be
interesting.

Reply to park@netcom.com and I'll post a summary. If there's already
a good recent survey in the literature, maybe that's all we need to
mention.

Thanks, and a tip of the hat to the great, sprawling Net Mind.

Bill Park

Grandpaw Bill's High Technology Consulting & Live Bait, Inc.

------------------------------

Date: Sat, 12 Sep 1992 21:38:47 GMT
From: harnad@phoenix.princeton.edu (Stevan Harnad)
Organization: Princeton University
Subject: Express Eye Movements & Attention: BBS Call for Commentators

Below is the abstract of a forthcoming target article by B. Fischer &
H. Weber on express saccadic eye movements and attention. It has been
accepted for publication in Behavioral and Brain Sciences (BBS), an
international, interdisciplinary journal that provides Open Peer
Commentary on important and controversial current research in the
biobehavioral and cognitive sciences. Commentators must be current BBS
Associates or nominated by a current BBS Associate. To be considered as
a commentator on this article, to suggest other appropriate
commentators, or for information about how to become a BBS Associate,
please send email to:

harnad@clarity.princeton.edu or harnad@pucc.bitnet or write to:
BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

To help us put together a balanced list of commentators, please give some
indication of the aspects of the topic on which you would bring your
areas of expertise to bear if you were selected as a commentator. An
electronic draft of the full text is available for inspection by anonymous
ftp according to the instructions that follow after the abstract.
____________________________________________________________________

EXPRESS SACCADES AND VISUAL ATTENTION

B. Fischer and H. Weber
Department of Neurophysiology
Hansastr. 9
D - 78 Freiburg
Germany
aiple@sun1.ruf.uni-freiburg.de (c/o Franz Aiple)

KEYWORDS: Eye movements, Saccade, Express Saccade, Vision, Fixation,
Attention, Cortex, Reaction Time, Dyslexia

ABSTRACT: One of the most intriguing and controversial observations in
oculomotor research in recent years is the phenomenon of express
saccades in man and monkey. These are saccades with such extremely
short reaction times (100 ms in man, 70 ms in monkey) that some experts on
eye movements still regard them as artifacts or anticipatory reactions
that do not need any further explanation. On the other hand, some
research groups consider them to be not only authentic but also a
valuable means of investigating the mechanisms of saccade generation,
the coordination of vision and eye movements, and the mechanisms of
visual attention.

This target article puts together pieces of experimental evidence in
oculomotor and related research - with special emphasis on the express
saccade - in order to enhance our present understanding of the
coordination of vision, visual attention, and eye movements necessary
for visual perception and cognition.

We hypothesize that an optomotor reflex is responsible for the
occurrence of express saccades, one that is controlled by higher brain
functions of disengaged visual attention and decision making. We
describe a neural network as a basis for more elaborate mathematical
models and computer simulations of the optomotor system in primates.

********************************************************************
To help you decide whether you would be an appropriate commentator for
this article, an electronic draft is retrievable by anonymous ftp from
princeton.edu according to the instructions below (the filename is
bbs.fischer). Please do not prepare a commentary on this draft. Just
let us know, after having inspected it, what relevant expertise you
feel you would bring to bear on what aspect of the article.
********************************************************************
To retrieve a file by ftp from a Unix/Internet site, type either:
ftp princeton.edu
or
ftp 128.112.128.1
When you are asked for your login, type:
anonymous
Enter password as per instructions (make sure to include the specified @),
and then change directories with:
cd /pub/harnad
To show the available files, type:
ls
Next, retrieve the file you want with (for example):
get bbs.fischer
When you have the file(s) you want, type:
quit

Certain non-Unix/Internet sites have a facility you can use that is
equivalent to the above. Sometimes the procedure for connecting to
princeton.edu will be a two step process such as:

ftp
followed at the prompt by:
open princeton.edu
or
open 128.112.128.1

In case of doubt or difficulty, consult your system manager.

*****

JANET users who do not have an ftp facility for interactive file
transfer (this requires a JIPS connection on your local machine -
consult your system manager if in doubt) can use a similar facility
available at JANET site UK.AC.NSF.SUN (numeric equivalent
000040010180), logging in using 'guestftp' as both login and
password. The online help information gives details of the transfer
procedure which is similar to the above. The file received on the
NSF.SUN machine needs to be transferred to your home machine to read
it, which can be done either using a 'push' command on the NSF.SUN
machine, or (usually faster) by initiating the file transfer from
your home machine. In the latter case the file on the NSF.SUN machine
must be referred to as directory-name/filename (the directory name to
use being that provided by you when you logged on to UK.AC.NSF.SUN).
To be sociable (since NSF.SUN is short of disc space), once you have
received the file on your own machine you should delete the file from
the UK.AC.NSF.SUN machine.

This facility is very often overloaded, and an off-line relay
facility at site UK.AC.FT-RELAY (which is simpler to use in any
case) can be used as an alternative. The process is almost identical
to file transfer within JANET, and the general method is illustrated
in the following example. With some machines, filenames and the
username need to be placed within quotes to prevent unacceptable
transposition to upper case (as may apply also to the transfer from
NSF.SUN described above).

transfer
Send or Fetch: f
*********

To Local Filename: bbs.fischer
Remote Sitename: uk.ac.ft-relay
Remote Username: anonymous
Remote Password: [enter your full email address including userid for
this, or it won't be accepted]
Queue this request? y


Or if you wish you can get a listing of the available files, by giving
the remote filename as:

princeton.edu:(D)/pub/harnad

Because of traffic delays through the FT-RELAY, still another method
can sometimes be recommended, which is to use the Princeton bitftp
fileserver described below. Typically, one sends a mail message of
the form:

FTP princeton.edu UUENCODE
USER anonymous
LS /pub/harnad
GET /pub/harnad/bbs.fischer
QUIT

(the line beginning LS is required only if you need a listing of
available files) to email address BITFTP@EARN.PUCC or to
BITFTP@EDU.PRINCETON, and receives the requested file in the form of
one or more email messages.

[Thanks to Brian Josephson (BDJ10@UK.AC.CAM.PHX) for the above
detailed UK/JANET instructions; similar special instructions for file
retrieval from other networks or countries would be appreciated and
will be included in updates of these instructions.]

*****

Where the above procedures are not available (e.g. from Bitnet or other
networks), there are two fileservers:
ftpmail@decwrl.dec.com
and
bitftp@pucc.bitnet
that will do the transfer for you. To one or the
other of them, send the following one line message:

help

for instructions (which will be similar to the above, but will be in
the form of a series of lines in an email message that ftpmail or
bitftp will then execute for you).

*****

Stevan Harnad Department of Psychology Princeton University
& Lab Cognition et Mouvement URA CNRS 1166 Universite d'Aix Marseille II
harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com
harnad@learning.siemens.com / harnad@gandalf.rutgers.edu / (609)-921-7771


------------------------------

End of VISION-LIST digest 11.34
************************
