VISION-LIST Digest Thu Dec 12 10:07:26 PDT 91 Volume 10 : Issue 52
- Send submissions to Vision-List@ADS.COM
- Vision List Digest available via COMP.AI.VISION newsgroup
- If you don't have access to COMP.AI.VISION, request list
membership to Vision-List-Request@ADS.COM
- Access Vision List Archives via anonymous ftp to ADS.COM
Today's Topics:
IU constructs and environments
Request for Realtime Image Processing Systems on Sun Stations
Invariant pattern recognition with ANNs
Hilbert transform
One pass bandreject algorithm wanted
Data compression hardware
Pseudocolor algorithms
Job Announcement
Promising Directions in Active Vision
OPTICOMP@TAUNIVM is now named OPT-PROC@TAUNIVM
Japanese Artificial Intelligence Industry
----------------------------------------------------------------------
Date: Thu, 12 Dec 91 13:11:33 -0800
From: pkahn@ads.com (Philip Kahn)
Subject: IU constructs and environments
I am compiling a list of IU constructs and environments that are
CURRENTLY AVAILABLE. Please respond to me (pkahn@ads.com) ASAP.
thanks,
phil...
------------------------------
Date: Wed, 11 Dec 91 08:58:36 +0100
From: ka%bsun7@ztivax.siemens.com (Ingolf Karls)
Subject: Request for Realtime Image Processing Systems on Sun Stations
Dear Colleagues:
We are planning the installation of a closed realtime image processing
system which should meet the following requirements:
1) connection to a Sun SPARC environment via VMEbus or, better, SBus
2) I/O capabilities such as a frame grabber for RGB, video input/output
3) possibility of adding processing boards to scale processing performance
4) fast internal bus system (video rate)
5) flexibility through high-level language (HLL) or assembly programming of the processing boards
6) integration into the SunOS 4.1 environment (X11R5 interface)
We have looked at systems such as DataCube and Wavetracer. Does anybody know of a multiprocessor system based on digital signal processors (for example the TMS320C30 or TMS320C40) that can be used as an image processing system?
I will summarize all incoming information, thanks in advance.
Ingolf Karls | Siemens AG
Image Processing Group | ZFE ST SN61
| Otto-Hahn-Ring 6
email: ka%bsun3@ztivax.siemens.com | 8000 Munich 83
phone: 089-636-49479 |
fax : 089-636-2393 |
------------------------------
Date: Thu, 05 Dec 1991 14:35:30 GMT
From: infko!evol@ads.com
Organization: University of Koblenz, Germany
Subject: Invariant pattern recognition with ANNs
I am interested in translation-, rotation- and scaling-invariant
pattern recognition with ANNs. I already know about Fukushima's
Neocognitron and Hubel & Wiesel's work on the human visual cortex. I
have also read some papers about complex logarithmic mapping and the Fast
Fourier Transform (H. Haken, H. J. Reitboeck & J. Altmann, H.
Wechsler & G. L. Zimmerman). Although this complex logarithmic
mapping seems to take place in the human visual cortex, I think it is
not sufficient for the task. Scaling-invariant recognition in particular
is still a very hard problem. Is there any additional related work
on scaling-invariant pattern recognition?
Thanks in advance
Randolf Werner
Randolf Werner FB Informatik, Uni Koblenz, evol@infko.UUCP
Rheinau 3-4, D-5400 Koblenz, Germany ...!uunet!mcsun!unido!infko!evol
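
[Background note on the complex logarithmic mapping mentioned above: resampling
an image onto a (log r, theta) grid about a fixed center turns scaling and
rotation about that center into shifts along the two axes, which is why it
helps with rotation and scale (though not translation) invariance. A minimal
Python/NumPy sketch follows; the function name and grid sizes are illustrative,
not from the posting.]

# Minimal sketch (illustrative, not from the posting): a complex-logarithmic
# (log-polar) resampling of an image about its center.  Rotation about the
# center becomes a cyclic shift along the theta axis, scaling becomes a shift
# along the log-r axis.
import numpy as np

def log_polar(image, n_r=64, n_theta=64):
    """Resample a 2-D image onto a (log r, theta) grid by nearest-neighbor lookup."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    log_r = np.linspace(0.0, np.log(r_max), n_r)       # radii from 1 pixel to the corner
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr = np.exp(log_r)[:, None]                        # (n_r, 1)
    y = cy + rr * np.sin(theta)[None, :]               # (n_r, n_theta)
    x = cx + rr * np.cos(theta)[None, :]
    yi = np.clip(np.round(y).astype(int), 0, h - 1)
    xi = np.clip(np.round(x).astype(int), 0, w - 1)
    return image[yi, xi]

# A scaled copy of the input differs from the original mainly by a shift along
# the first (log r) axis of this representation; a rotated copy by a cyclic
# shift along the second (theta) axis.
img = np.random.rand(128, 128)
print(log_polar(img).shape)                            # (64, 64)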
------------------------------
Date: Tue, 3 Dec 91 14:14 GMT
From: GLYNN PETER ROBINSON <GROBINSON@portia.umds.lon.ac.uk>
Subject: Hilbert transform
Can anyone help me? I am trying to work out the Hilbert transform
of the 2nd derivative of the Gaussian. I want a symbolic solution
not a numerical approximation.
The Hilbert transform of a function is defined to be the (principal-value)
convolution of the function with 1/(pi x).
If g(x;s) is the Gaussian distribution of width s then its 2nd derivative
is g''(x;s)=(x^2/s^2-1)g(x;s)/(s^2).
If anyone can solve this for me or knows where the solution can be found
then I would be very interested.
Yours,
Lewis Griffin
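
[One standard route to a symbolic answer, sketched here rather than taken from
the posting: the Hilbert transform commutes with differentiation, so
H[g''] = (H[g])''. With the convention H[f](x) = (1/pi) p.v. Integral f(t)/(x-t) dt,
the Hilbert transform of exp(-u^2) is (2/sqrt(pi)) D(u), where
D(u) = exp(-u^2) Integral_0^u exp(t^2) dt is Dawson's integral. Differentiating
twice, using D'(u) = 1 - 2uD(u), gives a closed form for H[g''] in terms of D.
The Python sketch below checks that form numerically against an FFT-based
discrete Hilbert transform; all of it is illustrative, not part of the original
question.]

# Sketch (illustrative): numerical check of a closed form for the Hilbert
# transform of the second derivative of a Gaussian, using
#     H[g''] = (H[g])''  and  H[exp(-u^2)](x) = (2/sqrt(pi)) * dawsn(x),
# with the convention H[f](x) = (1/pi) p.v. Integral f(t)/(x - t) dt.
import numpy as np
from scipy.signal import hilbert
from scipy.special import dawsn

s = 1.0
x = np.linspace(-30 * s, 30 * s, 1 << 14)

# Gaussian of width s and its second derivative (as in the posting)
g = np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
g2 = (x**2 / s**2 - 1) * g / s**2

# Candidate closed form: differentiate H[g](x) = (sqrt(2)/(pi*s)) * dawsn(x/(s*sqrt(2)))
# twice, using D'(u) = 1 - 2*u*D(u).
a = 1.0 / (s * np.sqrt(2))
closed = (np.sqrt(2) / (np.pi * s**3)) * ((x**2 / s**2 - 1) * dawsn(a * x) - a * x)

# Discrete Hilbert transform: imaginary part of the analytic signal
numeric = np.imag(hilbert(g2))

print("max |difference|:", np.max(np.abs(numeric - closed)))
# Agreement is good away from the ends of the grid (FFT wrap-around effects).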
------------------------------
Date: Wed, 4 Dec 91 00:59 GMT
From: Roy <SN=Roy%GI=Rajiv%C=US%TI@mcimail.com>
Subject: One pass bandreject algorithm wanted
I am looking for a 3x3 or 5x5 operator that can do a bandreject
in one pass over a frame. Would appreciate any pointers or the kernel
itself.
I mean "essentially subtracting a bandpass from the original signal".
At the end of one frame, after the image goes through a 3x3 convoluter, I would
like to have an image that retains the DC component and the high frequency
component. I suspect, from Gonzalez and Wintz Pg 188, that one could perform
a Moore-Penrose generalized inverse on a radially symmetric Butterworth
bandreject filter (Pg 200) and arrive at the kernel. I do not quite follow
the logic on Pg 188 to do it.
Thanks for the follow up.
Regards, Rajiv Rajiv.Roy%RROY@timsg.ti.com
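
[A sketch of the "identity minus bandpass" idea described above, not the
Gonzalez & Wintz Butterworth/Moore-Penrose construction: approximate a small
bandpass kernel by a difference of Gaussians, then subtract it from a unit
impulse. The result is a single 5x5 kernel whose frequency response keeps the
DC level, dips over a mid band, and returns towards unity at high frequencies.
The sigma values in the Python sketch below are illustrative.]

# Sketch (illustrative, not the Gonzalez & Wintz construction): a 5x5
# band-reject kernel built as "identity minus bandpass", with the bandpass
# approximated by a difference of Gaussians (DoG).  Sigma values are arbitrary.
import numpy as np

def gaussian_kernel(size, sigma):
    """Odd-sized 2-D Gaussian kernel, truncated and renormalized to sum to 1."""
    r = np.arange(size) - size // 2
    g1d = np.exp(-r**2 / (2.0 * sigma**2))
    k = np.outer(g1d, g1d)
    return k / k.sum()

size = 5
bandpass = gaussian_kernel(size, 0.8) - gaussian_kernel(size, 2.0)   # sums to 0

impulse = np.zeros((size, size))
impulse[size // 2, size // 2] = 1.0

bandreject = impulse - bandpass        # sums to 1, so the DC level is preserved

# Frequency response: gain 1 at DC, a dip over a mid band, and gain close to
# unity at the highest representable frequencies.
resp = np.abs(np.fft.fft2(bandreject, (64, 64)))
print("DC gain:", resp[0, 0], " minimum gain:", resp.min(), " Nyquist gain:", resp[32, 32])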
------------------------------
Date: 4 Dec 91 17:02:08 GMT
From: Expressio Veri <chettri@udel.edu>
Organization: University of Delaware
Subject: Data compression hardware
Keywords: data compression hardware, neural nets, conventional techniques
Hello fellow netters:
I am interested in getting information on data compression hardware
(company, university etc), platforms on which it works, algorithms
implemented etc.
In addition, I would like pointers to literature on
neural nets and data compression.
If you feel that this is not important enough to flood the net with,
please reply directly to me.
Samir
(chettri@huey.udel.edu)
------------------------------
Date: Tue, 10 Dec 91 19:29:45 GMT
From: mulberry%triton.unm.edu@lynx.unm.edu ()
Organization: University of New Mexico, Albuquerque
Subject: Pseudocolor algorithms
Does anyone know of algorithms and/or software that do pseudocoloring,
false coloring, color enhancement, color-to-black-and-white conversion, etc.?
I need this for an IBM PC 286-compatible system.
Thanks.
Bill (mulberry@triton.unm.edu)
College of Pharmacy
University of New Mexico
Albuquerque, New Mexico
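
[The usual software approach is intensity slicing through a lookup table: each
gray level indexes a table of RGB triples; on palette hardware such as VGA on a
286, the same table can instead be loaded into the palette registers. A small
illustrative Python/NumPy sketch, not tied to any particular package:]

# Sketch (illustrative): pseudocoloring a grayscale image with a 256-entry
# lookup table.  On palette hardware the same table would be loaded into the
# palette registers instead of remapping pixels in software.
import numpy as np

def make_lut():
    """Simple blue -> cyan -> green -> yellow -> red ramp, one RGB triple per gray level."""
    levels = np.arange(256)
    r = np.clip((levels - 128) * 2, 0, 255)
    g = np.clip(255 - np.abs(levels - 128) * 2, 0, 255)
    b = np.clip((127 - levels) * 2, 0, 255)
    return np.stack([r, g, b], axis=1).astype(np.uint8)    # shape (256, 3)

def pseudocolor(gray_image, lut):
    """Map an 8-bit grayscale image of shape (H, W) to an RGB image (H, W, 3)."""
    return lut[gray_image]

lut = make_lut()
ramp = np.arange(256, dtype=np.uint8)[None, :].repeat(32, axis=0)    # gray test ramp
print(pseudocolor(ramp, lut).shape)                                  # (32, 256, 3)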
------------------------------
Date: Wed 4 Dec 1991 15:16:33 EST
From: Scott Musman <musman@radar.nrl.navy.mil>
Subject: Job Announcement
Radar Division
Naval Research Laboratory
Washington, D.C.
The Radar Division of NRL now has a position available for
applied research into automatic recognition of radar imagery.
The work involves data driven and model based approaches to feature
extraction, classification and multi-frame image understanding.
Applicants with experience in C, Unix and X are preferred (experience
with Lisp is a plus but not required). The ability to obtain a
security clearance is required for eligibility.
Anyone interested in further details about the job opening, or who has
specific questions, should
contact Scott Musman, by phone at 202-767-3610, or by e-mail at
musman@radar.nrl.navy.mil.
------------------------------
Date: Thu, 05 Dec 91 17:16:42 -0600
From: swain@gargoyle.uchicago.edu
Subject: Promising Directions in Active Vision
This summer the National Science Foundation sponsored a workshop on the
subject of Active Vision, an approach to computer vision that stresses
the tight coupling of vision and action. At the workshop a group of
researchers within this emerging discipline wrote a document intended to help
guide research and funding entitled ``Promising Directions in Active
Vision''. Thanks to funding from the NSF, IBM and Hughes Research
Laboratories, copies are available free of charge. If you would like a
copy, please send e-mail to riss@cs.uchicago.edu, or write to:
Nerissa Walton
Department of Computer Science
University of Chicago
1100 E 58th St.
Chicago, IL 60637
A PostScript version of the document can be obtained via anonymous ftp
from gargoyle.uchicago.edu. It is the file active-vision.ps in the
pub/vision directory. There is also a compressed version of the
PostScript file in the same directory, named active-vision.ps.Z.
The abstract reads:
PROMISING DIRECTIONS IN ACTIVE VISION
Written by the attendees of the NSF Active Vision Workshop
University of Chicago, August 5-7, 1991
Edited by Michael J. Swain and Markus Stricker
Active vision systems have mechanisms that can actively control camera
parameters such as position, orientation, focus, zoom, aperture and
vergence (in a two camera system) in response to the requirements of
the task and external stimuli. They may also have features such as
spatially variant (foveal) sensors. More broadly, active vision
encompasses attention, selective sensing in space, resolution and time,
whether it is achieved by modifying physical camera parameters or the
way data is processed after leaving the camera.
In the active vision paradigm, the basic components of the visual
system are visual behaviors tightly integrated with the actions they
support; these behaviors may not require elaborate categorical
representations of the 3-D world. Because the cost of generating and
updating a complete, detailed model of our everyday environment is too
high, this approach to vision is vital for achieving robust, real-time
perception in the real world. In addition, active control of imaging
parameters has been shown to simplify scene interpretation by
eliminating the ambiguity present in single images.
This document describes promising directions for research in active
vision and possible applications of this research. It also discusses
progress in experimental equipment for supporting this research and
potential applications. Important research areas in active vision
include attention, foveal sensing, gaze control, eye-hand coordination,
and integration with robot architectures.
The contributors to the document were:
A. Lynn Abbott Virginia Polytechnic Institute
Narendra Ahuja University of Illinois
Peter K. Allen Columbia University
John (Yiannis) Aloimonos University of Maryland
Minoru Asada Osaka University
Ruzena Bajcsy University of Pennsylvania
Dana H. Ballard University of Rochester
Ben Bederson Vision Applications Inc.
Ruud M. Bolle IBM
Peter Bonasso The MITRE Corporation
Christopher M. Brown University of Rochester
Peter J. Burt SRI/David Sarnoff Research Center
David Chapman Teleos Research
James J. Clark Harvard University
Jonathon H. Connell IBM
Paul R. Cooper Northwestern University
Jill D. Crisman Northeastern University
James L. Crowley LIFIA (France)
Michael J. Daily Hughes Research Laboratories
Jan-Olaf Eklundh Royal Institute of Technology, Sweden
Frank P. Ferrie McGill University
R. James Firby University of Chicago
Martin Herman National Institute of Standards and Technology
Philip Kahn Advanced Decision Systems
Eric Krotkov Carnegie Mellon University
Niels da Vitoria Lobo University of Toronto
Howard Moraff National Science Foundation
Randal C. Nelson University of Rochester
H. Keith Nishihara Teleos Research
Thomas J. Olson University of Virginia
Daniel Raviv Florida Atlantic University
Giulio Sandini University of Genoa
Eric L. Schwartz New York University / Vision Applications Inc.
Markus Stricker University of Chicago
Michael J. Swain University of Chicago
John K. Tsotsos University of Toronto
Richard Wallace Vision Applications Inc.
------------------------------
Date: Mon, 09 Dec 91 17:13:37 IST
From: Shelly Glaser 972 3 545 0060 <GLAS@TAUNIVM.TAU.AC.IL>
Organization: TAU
Subject: OPTICOMP@TAUNIVM is now named OPT-PROC@TAUNIVM
The mailing list formerly known as OPTICOMP@TAUNIVM was renamed in order
to avoid confusion with OptiComp Corp., with which it has no connection.
The new name for this list is OPT-PROC.
The OPT-PROC mailing list is a moderated list covering
optical computing, optical information processing and holography.
To join OPT-PROC send the message
SUBSCRIBE OPT-PROC your-everyday-name
to LISTSERV@TAUNIVM.bitnet or listserv@vm.tau.ac.il (those are two forms
of the same address).
Yours
Shelly Glaser
Moderator, OPT-PROC
------------------------------
Date: Mon, 09 Dec 91 13:51:15 EST
From: "James W. Reese" <R505040@UNIVSCVM.CSD.SCAROLINA.EDU>
Subject: Japanese Artificial Intelligence Industry
I am the editor of AJBS-L@NCSUVM (The Association of Japanese
Business Studies List at North Carolina State University, USA).
During the next few months, we will be adding Japanese industry
analysis files to the existing AJBS-L datafiles.
We are seeking contributor(s) for a file called JAPAN AI which
will cover the nature and characteristics of the Japanese artificial
intelligence industry (e.g. major competitors, market shares, etc.).
If you have research which would be useful, please send it to:
James W. Reese
AJBS-L Editor
R505040@UNIVSCVM.CSD.SCAROLINA.EDU (Internet)
R505040@UNIVSCVM (Bitnet)
The address for AJBS-L is:
AJBS-L@NCSUVM.CC.NCSU.EDU (Internet)
AJBS-L@NCSUVM (Bitnet)
Thank you in advance for your assistance.
------------------------------
End of VISION-LIST digest 10.52
************************