Vision-List Digest	Mon Sep 26 09:55:18 PDT 1988 

- Send submissions to Vision-List@ADS.COM
- Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

Motion, correspondence, and aperture problem
Re: temporal domain in vision
Re: temporal domain in vision
Image Processing User's Group - Twin Cities
Re: How to connect SUN 3-160M and Imaging technology's series 151
Re: Real-time data acquisition
Info. request on computerised colorimetry/microscopy.
Faculty position at Stanford
Information needed on Intl. Wkshp. on Dynamic Image Analysis...
Workshop on VLSI implementation of Neural Nets
The 6th Scandinavian Conference on Image Analysis
Stanford Robotics Seminars

----------------------------------------------------------------------

Date: Wed, 21 Sep 88 16:13:49 PDT
From: stiber@CS.UCLA.EDU (Michael D Stiber)
Subject: Motion, correspondence, and aperture problem

I am currently working on a thesis involving shape from motion, and
would appreciate pointers to the literature that specifically address
my topic, as detailed in the "abstract" below. If the following rings
any bells, please send me the results of the bell-ringing. I
appreciate any efforts that you make with regard to this, even if it
is just to send me flames.

SHAPE FROM MOTION: ELIMINATING THE CORRESPONDENCE AND APERTURE PROBLEMS

The traditional approach to the task of shape from motion has been to
first apply spatial processing techniques to individual images in a
sequence (to localize features of interest), and then to apply other
algorithms to the images to determine how features moved. The
oft-mentioned "correspondence" and "aperture" problems arise here,
since one cannot be sure, from the information in the processed
frames, which features in one frame match which features in the
following frames. The methods designed to process visual motion
are actually confounded by that motion.
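
For readers unfamiliar with the aperture problem, the standard
brightness-constancy constraint makes the ambiguity explicit. The short
Python sketch below is illustrative only and is not part of the original
posting: from a single local measurement, only the flow component along
the image gradient (the "normal flow") is determined.

import numpy as np

def normal_flow(Ix, Iy, It):
    """Return the flow component recoverable from one local measurement.

    Brightness constancy: Ix*u + Iy*v + It = 0.  Only the projection of
    (u, v) onto the gradient direction is constrained; the tangential
    component is free -- that freedom is the aperture problem.
    """
    g = np.array([Ix, Iy], dtype=float)
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        raise ValueError("zero gradient: no motion information at all")
    return (-It / gnorm) * (g / gnorm)   # normal flow vector

# Example: a vertical edge (gradient purely in x) translating.
un = normal_flow(Ix=1.0, Iy=0.0, It=-2.0)
print("normal flow:", un)                # [2. 0.]
# Any vertical (tangential) motion added to it is indistinguishable:
for vy in (0.0, 1.0, 5.0):
    u, v = un[0], un[1] + vy
    print("residual for candidate flow", (u, v), "=", 1.0*u + 0.0*v + (-2.0))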

Alternative approaches to shape from motion perform temporal (rather
than spatial) processing first. These include work on optic flow.
When the causes of temporal variation are considered in this manner,
it becomes clear that different classes of variation are caused by
quite different types of changes in the "real world", and are best
accounted for at different levels of processing within a visual
system. Thus, overall brightness changes (such as when a cloud moves
in front of the sun) are "eliminated from the motion equation" at the
lowest levels, with lightness constancy processing. Changes due to
eye or camera motion, self motion, and object motion are likewise
identified at the appropriate stage of visual processing. This
strategy for processing motion uses temporal changes as a source of
much additional information, in contrast to the "spatial first"
approaches, which throw away that information. The "correspondence"
and "aperture" problems are eliminated by using this additional
information.
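
As a toy illustration of the lowest-level case (mine, not the author's):
if each frame is normalized by its own mean luminance, a global brightness
change such as a cloud moving in front of the sun produces no residual
"motion" signal at all.

import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 1.0, size=(64, 64))   # static scene reflectances
frame_t0 = 1.00 * scene                        # full sunlight
frame_t1 = 0.55 * scene                        # cloud in front of the sun

raw_change = np.abs(frame_t1 - frame_t0).mean()
norm_change = np.abs(frame_t1 / frame_t1.mean()
                     - frame_t0 / frame_t0.mean()).mean()

print(f"mean |dI/dt| before normalization: {raw_change:.3f}")   # large
print(f"mean |dI/dt| after  normalization: {norm_change:.3e}")  # essentially zero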

This thesis details an algorithm and connectionist architecture for
the processing of visual motion due to camera motion. It performs
this by converting images from a sensor-based coordinate system (in
which camera motion causes temporal variation of images) to a
body-centered coordinate system (in which the percept remains
constant, independent of camera movement). The "correspondence" and
"aperture" problems (as a result of camera motion) are eliminated by
this approach.
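
The coordinate-conversion idea can be illustrated geometrically. The
sketch below assumes a pure, known camera pan and is emphatically not the
thesis's connectionist architecture; it only shows why a body-centered
representation stays constant while the sensor-based one changes from
frame to frame.

import math

def sensor_to_body(bearing_in_image, camera_pan):
    """Body-centered bearing = image bearing + current camera pan angle."""
    return bearing_in_image + camera_pan

world_bearing = math.radians(30.0)          # fixed direction of a scene point
for pan_deg in (0.0, 10.0, 25.0):           # camera pans between frames
    pan = math.radians(pan_deg)
    image_bearing = world_bearing - pan     # what the sensor reports
    body_bearing = sensor_to_body(image_bearing, pan)
    print(f"pan={pan_deg:5.1f} deg  image={math.degrees(image_bearing):6.2f}"
          f"  body-centered={math.degrees(body_bearing):6.2f}")
# The body-centered bearing stays at 30 degrees in every frame, so there is
# nothing left to put into correspondence across frames.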


------------------------------

Date: 21 Sep 88 20:50:53 GMT
From: lag@cseg.uucp (L. Adrian Griffis)
Subject: Re: temporal domain in vision
Keywords: multiplex filter model
Organization: College of Engineering, University of Arkansas, Fayetteville


In article <233@uceng.UC.EDU>, dmocsny@uceng.UC.EDU (daniel mocsny) writes:
> In Science News, vol. 134, July 23, 1988, C. Vaughan reports on the
> work of B. Richmond of NIMH and L. Optican of the National Eye
> Institute on their multiplex filter model for encoding data on
> neural spike trains. The article implies that real neurons multiplex
> lots of data onto their spike trains, much more than the simple
> analog voltage in most neurocomputer models. I have not seen
> Richmond and Optican's papers and the Science News article was
> sufficiently watered down to be somewhat baffling. Has anyone
> seen the details of this work, and might it lead to a method to
> significantly increase the processing power of an artificial neural
> network?

My understanding is that neurons in the eye depart from a number of
general rules that neurons seem to follow elsewhere in the nervous system.
One such departure is that sections of a neuron can fire independently
of other sections. This allows the eye to behave as though it has a great
many logical neurons without having to use the space that the same number
of discrete cellular metabolic systems would require. I'm not an expert
in this field, but this suggests to me that many of the special tricks
that neurons of the eye employ may be attempts to overcome space limitations
rather than to make other processing schemes possible. Whether or not
this affects the applicability of such tricks to artificial neural networks
is another matter. After all, artificial neural networks have space
limitations of their own.

UseNet: lag@cseg L. Adrian Griffis
BITNET: AG27107@UAFSYSB

------------------------------

Date: 22 Sep 88 01:42:30 GMT
From: jwl@ernie.Berkeley.EDU (James Wilbur Lewis)
Subject: Re: temporal domain in vision
Keywords: multiplex filter model
Organization: University of California, Berkeley

In article <724@cseg.uucp> lag@cseg.uucp (L. Adrian Griffis) writes:
>In article <233@uceng.UC.EDU>, dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>> In Science News, vol. 134, July 23, 1988, C. Vaughan reports on the
>> work of B. Richmond of NIMH and L. Optican of the National Eye
>> Institute on their multiplex filter model for encoding data on
>> neural spike trains. The article implies that real neurons multiplex
>> lots of data onto their spike trains, much more than the simple
>> analog voltage in most neurocomputer models.
>
>My understanding is that neurons in the eye depart from a number of
>general rules that neurons seem to follow elsewhere in the nervous system.

I think Richmond and Optican were studying cortical neurons. Retinal
neurons encode information mainly by graded potentials, not spike
trains....another significant difference between retinal architecture
and most of the rest of the CNS.

I was somewhat baffled by the Science News article, too. For example,
it was noted that the information in the spike trains might be a
result of the cable properties of the axons involved, not necessarily
encoding any "real" information, but this possibility was dismissed with a
few handwaves.

Another disturbing loose end was the lack of discussion about how this
information might be propagated across synapses. Considering that
it generally takes input from several other neurons to trigger a
neural firing, and that the integration time necessary would tend
to smear out any such fine-tuned temporal information, I don't
see how it could be.
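
One way to make the smearing argument concrete is to treat postsynaptic
integration as a first-order low-pass filter with a roughly 10 ms time
constant (my assumption) and ask how much of the fine temporal structure
survives. A small back-of-the-envelope check, not from the article:

import math

tau = 0.010                                      # 10 ms integration time
def attenuation(freq_hz, tau=tau):
    """Gain of a first-order (exponential-kernel) integrator at freq_hz."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * freq_hz * tau) ** 2)

for f in (5, 20, 100, 500):                      # Hz
    print(f"{f:4d} Hz component passes with gain {attenuation(f):.3f}")
# Slow rate modulations (a few Hz) pass almost untouched, while the fine
# temporal structure a multiplexed code would need (hundreds of Hz) is
# attenuated by an order of magnitude or more.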

It's an interesting result, but I think they may have jumped the
gun with the conclusion they drew from it.

-- Jim Lewis
U.C. Berkeley

------------------------------

Date: 22 Sep 88 04:51:39 GMT
From: manning@mmm.serc.3m.com (Arthur T. Manning)
Subject: Image Processing User's Group - Twin Cities
Summary: Announcement of Group's Formation
Keywords: vision, image, datacube
Organization: 3M Company - Software and Electronics Resource Center (SERC); St. Paul, MN

Datacube Inc. (a high-speed image processing hardware manufacturer) is
initiating an image processing user's group in the Twin Cities through
their local sales rep:

Barb Baker
Micro Resources Corporation phone: (612) 830-1454
4640 W. 77th St, Suite 109 ITT Telex 499-6349
Edina, Minnesota 55435 FAX (612) 830-1380


Hopefully this group will be the basis of valuable cooperation between
various commercial vision groups (as well as others) in the Twin Cities.

The first meeting to formulate the purpose, governing body, etc. of this
image processing user's group was held September 22, 1988.

It looks like this newsgroup would be the best place to post further
developments.


Arthur T. Manning Phone: 612-733-4401
3M Center 518-1 FAX: 612-736-3122
St. Paul MN 55144-1000 U.S.A. Email: manning@mmm.uucp

------------------------------

Date: Thu, 22 Sep 88 00:01:33 CDT
From: schultz@mmm.3m.com (John C Schultz)
Subject: Re: How to connect SUN 3-160M and Imaging technology's series 151


While we do not have the exact same hardware, we are using Bit 3
VME-VME bus repeaters (not extenders) with our SUN 3/160 and
Datacube hardware. We had to add a small patch to the Datacube
driver to get it to work with the Bit 3 because of the strange Bit 3
handling of interrupt vectors, but now it is fine.

The Bit 3 might work for your application, as opposed to a bus
"extender", because this repeater keeps the VME backplanes logically
and electrically separate, although memory mapping is possible.

Hope this helps. We don't have IT boards to try it out with.
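
In case it is useful, here is a rough sketch of what "memory mapping is
possible" can look like from the host side. The device node, offset and
register layout below are entirely hypothetical; the real interface is
whatever the Bit 3 / Datacube driver actually exposes on your system.

import mmap
import os

VME_WINDOW_DEV = "/dev/vme_a24"      # hypothetical node created by the driver
WINDOW_OFFSET = 0x00F00000           # hypothetical base of the remote board
WINDOW_SIZE = 0x1000                 # one page of register space

fd = os.open(VME_WINDOW_DEV, os.O_RDWR | os.O_SYNC)
window = mmap.mmap(fd, WINDOW_SIZE, offset=WINDOW_OFFSET)

window[0:2] = (0x0001).to_bytes(2, "big")     # write a 16-bit control register
status = int.from_bytes(window[2:4], "big")   # read back a status register
print(f"status register: 0x{status:04x}")

window.close()
os.close(fd)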

------------------------------

Date: Thu, 22 Sep 88 00:10:49 CDT
From: schultz@mmm.3m.com (John C Schultz)
Subject: Re: Real-time data acquisition


>I was wondering if anybody out there knows of a system that will record
>rgb digital data in real-time plus some ancillary position information. An
>inexpensive medium is of course preferred since it would be useful to
>be able to share this data with other folks.

Two ways to go (as the moderator mentioned) are low-quality VHS-style
machines, which only record maybe 200 lines/image, or broadcast-quality
3/4 inch tape machines, which cost big bucks (and will still probably
lose some data from a 512 x 485 image).

Digital storage is limited to maybe 30 seconds. In addition to Gould,
I know of one vendor (Datacube) who supplies 1.2 GB of magnetic disk for
real-time image recording ($25-40K depending on size, I think). If the
speed requirements are bursty, these real-time disk systems could be
backed up to an Exabyte-style cartridge holding a couple of GB - or to
write-once optical media for faster recall, I suppose.
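
The arithmetic behind those figures is worth writing down. The numbers
below (512 x 485 pixels, 3 bytes per RGB pixel, 30 frames/s) are my
assumptions, not the original poster's; plug in your own camera format.

width, height, bytes_per_pixel, fps = 512, 485, 3, 30

bytes_per_frame = width * height * bytes_per_pixel
rate = bytes_per_frame * fps                        # bytes per second
disk = 1.2e9                                        # 1.2 GB real-time disk

print(f"data rate  : {rate / 1e6:.1f} MB/s")        # ~22 MB/s
print(f"1.2 GB buys: {disk / rate:.0f} s of RGB")   # ~54 s
print(f"30 s needs : {rate * 30 / 1e9:.2f} GB")     # ~0.67 GB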

As to location info, how about encoding it either during vertical
blanking or on the audio channel if you use video tape?

------------------------------

Date: 23-SEP-1988 17:02:02 GMT
From: ALLEN%BIOMED.ABDN.AC.UK@CUNYVM.CUNY.EDU
Subject: Info. request on computerised colorimetry/microscopy.

I wonder if anyone can help me with some background information.

We have a potential application which involves the measurement of the
colour of a surface in an industrial inspection task. It is possible that
this will be computerised (e.g. camera + framestore). I am trying to find
background information on the use of computers in colorimetry. Any books,
review articles, etc. which you can recommend, or experience with actual
systems - any information would be gratefully received.
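
For the camera + framestore route, the core computation is a calibrated
linear map from linearized camera RGB to CIE tristimulus values. A minimal
sketch follows; the 3x3 matrix is only a placeholder, and in a real system
it would have to come from calibrating the particular camera and
framestore against patches of known colour.

import numpy as np

RGB_TO_XYZ = np.array([            # placeholder calibration matrix
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def measure(rgb_linear):
    """Linear camera RGB (0..1) -> CIE XYZ and chromaticity (x, y)."""
    xyz = RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)
    x, y = xyz[0] / xyz.sum(), xyz[1] / xyz.sum()
    return xyz, (x, y)

sample = [0.62, 0.41, 0.30]                       # mean RGB over the surface
xyz, (x, y) = measure(sample)
print(f"XYZ = {np.round(xyz, 3)},  chromaticity x={x:.3f} y={y:.3f}")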

The second part of the project involves the measurement of the thickness
of a film (< 20/1000 inch), possibly by optical inspection. We don't
want to reinvent the wheel: has anyone been here before?

Alastair Allen
Dept of Physics
University of Aberdeen
UK.
ALLEN@UK.AC.ABDN


------------------------------

Date: 23 Sep 88 19:58:04 GMT
From: rit@coyote.stanford.edu (Jean-Francois Rit)
Subject: Faculty position at Stanford
Organization: Stanford University


STANFORD UNIVERSITY

Department of Computer Science

Faculty Position in Robotics


Qualified people are invited to submit applications for a tenure-track
faculty position in Robotics. The appointment may be made at either
the junior or senior level depending on the qualifications of the
applicants.

Applicants for a tenured position must have strong records of
achievements both in research and in teaching and have demonstrated
potential for research leadership and future accomplishments.

Applicants for a junior, tenure-track position must have a PhD in
Computer Science and have demonstrated competence in one or several
areas of Robotics research and must have demonstrated potential for
excellent teaching.

Outstanding candidates in all areas of Robotics will be considered,
with preference to those in Advanced Control or Computer Vision.

Depending on specific background and interests, there is a strong
possibility of joint appointments with the Mechanical Engineering or
Electrical Engineering Departments.

Please send applications with curriculum vitae and names of at
least four references to: Professor Jean-Claude Latombe,
Chairman of Robotics Search Committee, Computer Science Department,
Stanford University, Stanford, CA 94305.

Stanford University is an Equal Opportunity/Affirmative Action
employer and actively solicits applications from qualified women and
targeted minorities.

Jean-Francois Rit Tel: (415) 723 3796
CS Dept Robotics Laboratory e-mail: rit@coyote.stanford.edu
Cedar Hall B7
Stanford, CA 94305-4110

------------------------------

Date: Sat, 24 Sep 88 14:52 EDT
From: From the Land of the Himalayas <SAWHNEY@cs.umass.EDU>
Subject: Information needed on Intl. Wkshp. on Dynamic Image Analysis...

Hullo Folks,

Does anyone out there have information about THE 3RD INTERNATIONAL
WORKSHOP on TIME-VARYING ANALYSIS and MOVING OBJECT RECOGNITION, to
be held in Florence, Italy in May 1989?

I need information on deadlines, the program committee and the rest.

Harpreet

Univ. of Mass. at Amherst, COINS Dept.
sawhney@umass

------------------------------

Date: 22 Sep 88 00:43:58 GMT
From: munnari!extro.ucc.su.oz.au!marwan@uunet.UU.NET
From: marwan@extro.ucc.su.oz (Marwan Jabri)
Subject: Workshop on VLSI implementation of Neural Nets
Organization: University of Sydney Computing Service, Australia


Neural Networks -
Their Implementation in VLSI


A Two Day Workshop

5-6 November 1988

Sydney University Electrical
Engineering


Sponsored by

Electrical Engineering Foundation,
University of Sydney

Introduction
-------------

Research in artificial neural systems or, more commonly, artificial
neural networks (NNs) has gained new momentum following a decline in
the late 1960s. The renewed interest comes from problem-solving areas
where conventional digital computers, with processing elements switching
in nanoseconds, do not perform as well as ``biological'' neural systems
whose electrochemical devices respond in milliseconds.

These problem-solving areas share important attributes, in particular
that they must cope with noisy and distorted data. Vision, speech
recognition and combinatorial optimisation are examples of such
problems.

VLSI implementations of NN systems have begun to appear as a natural
solution to building large and fast computational systems. AT&T Bell
Labs and California Institute of Technology (Caltech) are two of the
leading research institutions where VLSI NN systems have been
developed recently. Successful development of VLSI NNs requires a
robust design methodology.

Objectives of the Workshop
--------------------------
The workshop is organised by the Systems Engineering and Design
Automation Laboratory (SEDAL), Sydney University Electrical
Engineering (SUEE) and is sponsored by the Electrical Engineering
Foundation. The workshop will present to academics, researchers and
engineers state-of-the-art methodologies for the implementation of
VLSI NN systems. The two guest lecturers introduced below will cover six
important lectures of the program.

Dr. Larry Jackel
----------------
Head, Device Structures Research Department, AT&T Bell Labs.

Dr. Larry Jackel is a world expert on VLSI implementation of
artificial NNs. He is the leader of a group working on the implementation
of VLSI chips with several hundred neurons for image classification,
pattern recognition and associative memories. Dr. Jackel has over 80
technical publications in professional journals and seven US patents.
He is a recipient of the 1985 IEEE Electron Device Society Paul Rappaport
Award for best paper. Dr. Jackel is the author or co-author of several
invited papers on NN design, most recently in the special issue on NNs
of IEEE Computer magazine (March 1988).

Ms. Mary Ann Maher
------------------
Member of the technical staff, Computer Science Department,
Caltech.

Ms. Mary Ann Maher is a member of the research group headed by Professor
Carver Mead, working on the simulation of VLSI implementations of NNs.
She has participated as an invited speaker at several conferences and
workshops on the VLSI implementation of NNs, including the 1988 IEEE
International Symposium on Circuits and Systems in Helsinki.

Invited Speakers
----------------
The seminar will also feature speakers from several Australian
research institutions with diverse backgrounds, who
will give the participants a broad overview of the subject.

Prof. Max Bennett

Prof. Rigby, University of New South Wales, will present an introduction
to important MOS building blocks used in the VLSI implementation of NNs.

Other lectures and tutorials will be presented by the following
speakers from Sydney University Electrical Engineering:

Peter Henderson, SEDAL

Marwan Jabri, SEDAL

Dr. Peter Nickolls, Laboratory for Imaging Science and Engineering

Clive Summerfield, Speech Technology Research


Venue
-----
The course will be held in Lecture Theatre 450,
Sydney University Electrical Engineering on November 5 and 6, 1988.

Registration
------------
The workshop registration cost is $400 for a private institution. For more information please contact:


Marwan Jabri,
SEDAL, Sydney University Electrical Engineering,
NSW 2006

or by:

Tel: 02 692 2240 Fax: 02 692 2012
ACSnet marwan@extro.ucc.su.oz


------------------------------

Date: Thu, 22 Sep 88 13:43:59 +0300
From: scia@stek5.oulu.fi (SCIA conference in OULU)


The 6th Scandinavian Conference on Image Analysis
=================================================

June 19 - 22, 1989
Oulu, Finland

Second Call for Papers



INVITATION TO 6TH SCIA

The 6th Scandinavian Conference on Image Analysis (6SCIA)
will be arranged by the Pattern Recognition Society of Finland
from June 19 to June 22, 1989. The conference is sponsored
by the International Association for Pattern Recognition.
The conference will be held at the University of Oulu.
Oulu is the major industrial city in North Finland, situated
not far from the Arctic Circle. The conference site is at
the Linnanmaa campus of the University, near downtown Oulu.

CONFERENCE COMMITTEE

Erkki Oja, Conference Chairman
Matti Pietikäinen, Program Chairman
Juha Röning, Local Organization Chairman
Hannu Hakalahti, Exhibition Chairman

Jan-Olof Eklundh, Sweden
Stein Grinaker, Norway
Teuvo Kohonen, Finland
L. F. Pau, Denmark

SCIENTIFIC PROGRAM

The program will consist of contributed papers, invited
talks and special panels. The contributed papers will cover:

* computer vision
* image processing
* pattern recognition
* perception
* parallel algorithms and architectures

as well as application areas including

* industry
* medicine and biology
* office automation
* remote sensing

There will be invited speakers on the following topics:

Industrial Machine Vision
(Dr. J. Sanz, IBM Almaden Research Center)

Vision and Robotics
(Prof. Y. Shirai, Osaka University)

Knowledge-Based Vision
(Prof. L. Davis, University of Maryland)

Parallel Architectures
(Prof. P. E. Danielsson, Linköping University)

Neural Networks in Vision
(to be announced)

Image Processing for HDTV
(Dr. G. Tonge, Independent Broadcasting Authority).

Panels will be organized on the following topics:

Visual Inspection in the Electronics Industry (moderator: Prof. L. F. Pau);
Medical Imaging (moderator: Prof. N. Saranummi);
Neural Networks and Conventional Architectures (moderator: Prof. E. Oja);
Image Processing Workstations (moderator: Dr. A. Kortekangas).

SUBMISSION OF PAPERS

Authors are invited to submit four copies of an extended
summary of at least 1000 words of each of their papers to:

Professor Matti Pietikäinen
6SCIA Program Chairman
Dept. of Electrical Engineering
University of Oulu
SF-90570 OULU, Finland

tel +358-81-352765
fax +358-81-561278
telex 32 375 oylin sf
net scia@steks.oulu.fi

The summary should contain sufficient detail, including a
clear description of the salient concepts and novel features
of the work. The deadline for submission of summaries is
December 1, 1988. Authors will be notified of acceptance by
January 31st, 1989, and final camera-ready papers will be
required by March 31st, 1989.

The length of the final paper must not exceed 8 pages.
Instructions for writing the final paper will be sent to the
authors.

EXHIBITION

An exhibition is planned. Companies and institutions involved
in image analysis and related fields are invited to exhibit
their products at demonstration stands, on posters or on video.
Please indicate your interest in taking part by contacting the
Exhibition Committee:

Matti Oikarinen
P.O. Box 181
SF-90101 OULU
Finland

tel. +358-81-346488
telex 32354 vttou sf
fax. +358-81-346211

SOCIAL PROGRAM

A social program will be arranged, including opportunities
to enjoy the location of the conference, the sea and the
midnight sun. There are excellent opportunities for
post-conference tours, e.g. to Lapland or to the lake
district of Finland.

The social program will consist of a get-together party on
Monday June 19th, a city reception on Tuesday June 20th, and
the conference Banquet on Wednesday June 21st. These are all
included in the registration fee. There is an extra fee for
accompanying persons.

REGISTRATION INFORMATION

The registration fee will be 1300 FIM before April 15th,
1989 and 1500 FIM afterwards. The fee for participants covers:
entrance to all sessions, panels and exhibition; proceedings;
get-together party, city reception, banquet and coffee breaks.

The fee is payable by
- check made out to 6th SCIA and mailed to the Conference Secretariat,
- bank transfer or bank draft, or
- all major credit cards.

Registration forms, hotel information and practical travel
information are available from the Conference Secretariat.
An information package will be sent to authors of accepted
papers by January 31st, 1989.

Secretariat:
Congress Team
P.O. Box 227
SF-00131 HELSINKI
Finland
tel. +358-0-176866
telex 122783 arcon sf
fax +358-0-1855245

There will be hotel rooms available for participants, with
prices ranging from 135 FIM (90 FIM) to 430 FIM (270 FIM)
per night for a single room (double room/person).

------------------------------

From: binford@Boa-Constrictor.Stanford.EDU.stanford.edu (Tom Binford)
Subject: ROBOTICS SEMINARS

Monday, Sept. 26, 4:15

Self-Calibrated Collinearity Detector.
Yael Neumann
Weizmann Institute, Israel.

Abstract:

The visual system can make highly precise spatial judgments. It has
been suggested that this high accuracy is maintained by an error
correction mechanism. According to this view, adaptation and
aftereffect phenomena can be explained as a by-product of an error
correction mechanism. This work describes an adaptive system for
collinearity and straightness detection. The system incorporates an
error correction mechanism and is, therefore, highly accurate. The
error correction is performed by a simple self-calibration process
named proportional multi-gain adjustment. The calibration process
adjusts the gain values of the system's input units. It compensates
for errors due to noise in the input units' receptive field locations
and response functions by ensuring that the average collinearity
offset (or curvature) detected by the system is zero
(straight).
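
For what it is worth, the calibration idea can be caricatured in a few
lines. The unit model and update rule below are my own guesses at a
"proportional multi-gain adjustment", not the speaker's algorithm: each
gain is nudged in proportion to its contribution to the detected offset,
under the assumption that contours in the world are straight on average.

import numpy as np

rng = np.random.default_rng(1)
true_gains = np.array([1.0, 1.0, 1.0])
unit_gains = np.array([0.8, 1.15, 0.95])          # miscalibrated input units
weights = np.array([-0.5, 1.0, -0.5])             # offset = centre - mean(ends)
eta = 0.02                                        # calibration rate

for step in range(2000):
    # A straight contour crossing the three receptive fields: equal samples.
    level = rng.uniform(0.5, 1.5)
    samples = level * true_gains                  # what the units should see
    responses = unit_gains * samples              # what they actually report
    offset = weights @ responses                  # detected collinearity offset
    # Proportional multi-gain adjustment: push the average offset toward zero.
    unit_gains -= eta * offset * weights * samples

print("calibrated gains:", np.round(unit_gains, 3))
print("residual offset on a straight contour:",
      round(float(weights @ (unit_gains * true_gains)), 6))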



Wednesday, September 28, 1988

Greg Hager
Computer and Information Science
University of Pennsylvania

ACTIVE REDUCTION OF UNCERTAINTY IN MULTI-SENSOR SYSTEMS
4:15 p.m.



Oct 3, 1988

Dr. Doug Smith
Kestrel Institute

KIDS - A Knowledge-Based Software Development System

Abstract:

KIDS (Kestrel Interactive Development System) is an experimental
knowledge-based software development system that integrates a number
of sources of programming knowledge. It is used to interactively
develop formal specifications into correct and efficient programs.
Tools for performing algorithm design, a generalized form of deductive
inference, program simplification, finite differencing optimizations,
and partial evaluation are available to the program developer. We
describe these tools and discuss the derivation of several programs
drawn from scheduling and pattern-recognition applications. All of
the KIDS tools are automatic except the algorithm design tactics which
require some interaction at present. Dozens of derivations have been
performed using the KIDS environment and we believe that it is close
to the point where it can be used for some routine programming.
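
As a concrete, hand-worked instance of the finite-differencing
optimization mentioned in the abstract (my example, not KIDS output), an
expression recomputed from scratch on each loop iteration is replaced by
an incrementally maintained value:

def prefix_sums_naive(xs):
    # Recomputes sum(xs[:i+1]) for every i: O(n^2) additions.
    return [sum(xs[:i + 1]) for i in range(len(xs))]

def prefix_sums_differenced(xs):
    # The same specification after finite differencing: carry the running
    # sum forward and update it with one addition per element: O(n).
    out, acc = [], 0
    for x in xs:
        acc += x
        out.append(acc)
    return out

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert prefix_sums_naive(data) == prefix_sums_differenced(data)
print(prefix_sums_differenced(data))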


------------------------------

End of VISION-LIST
********************
