Computer Science Colloquium
Thursday, August 29, 2002, 4:15pm
Upson Hall B17
University of Washington, Seattle
If you could capture any set of light rays, which ones would you choose? Most cameras are designed to capture perspective images, generated from light rays that converge at a single point in space. This design choice is very natural for creating images that humans can easily view and interpret, since our eyes are also perspective sensors. It is less clear, however, what advantage perspective imaging has for computer vision applications.

Non-perspective sensing opens up an exciting new realm of possibilities. This talk explores a range of strange, new, non-perspective images that are optimized for the purposes of 3D photography, i.e., computing 3D scene geometry from images. These non-perspective images may be generated by capturing a sequence of images from a standard camera with varying position or illumination to produce an XYT volume of image data, and slicing this image volume in special ways. I will present several examples of such images and their applications for 3D photography.
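To make the slicing idea concrete, here is a minimal sketch (not from the talk; the synthetic scene, dimensions, and NumPy indexing are my own illustrative assumptions). Stacking T frames of an H x W video into an XYT volume, a slice at fixed time t recovers an ordinary perspective frame, while a slice at a fixed scanline y yields a non-perspective x-t image, in which a feature seen from a laterally translating camera traces a line whose slope relates to its depth:

```python
import numpy as np

# Hypothetical XYT volume: T frames of H x W pixels stacked along time.
T, H, W = 10, 48, 64
volume = np.zeros((T, H, W), dtype=np.float32)

# Simulate a bright vertical stripe that shifts one pixel per frame,
# as a static scene feature would when the camera translates sideways.
for t in range(T):
    volume[t, :, 20 + t] = 1.0

# Standard perspective image: slice the volume at a fixed time t.
frame = volume[3]            # shape (H, W)

# Non-perspective image: slice at a fixed scanline y, giving an
# x-t image in which the stripe appears as a diagonal line.
epi = volume[:, H // 2, :]   # shape (T, W)
```

Here `frame` is what the camera saw at one instant, while `epi` gathers rays across all camera positions for one scanline; the diagonal structure in such slices is what makes them useful for recovering geometry.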