Computer Graphics Forum, Volume 29, Issue 2
https://diglib.eg.org:443/handle/10.2312/153
EG 2010 - Conference Issue
https://diglib.eg.org:443/handle/10.2312/CGF.v29i2pp763-772
Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography
Agrawal, Amit; Veeraraghavan, Ashok; Raskar, Ramesh
We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields into a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along the spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among these dimensions. This allows a user to reinterpret the single captured photo in post-processing as either a high spatial resolution image, a refocusable image stack or a video for different parts of the scene. A lightfield camera or a video camera forces an a priori choice of space-angle-time resolution. We demonstrate a single prototype that provides flexible post-capture abilities not possible using either a single-shot lightfield camera or a multi-frame video camera. We show several novel results, including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.
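The multiplexing idea can be illustrated with a toy simulation in Python/NumPy. This is a hedged sketch under strong simplifying assumptions: the frame and mask arrays below are made-up stand-ins for per-time-slice modulation, not the paper's actual dynamic-aperture-plus-static-sensor mask design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a 3-frame "scene" video, 8x8 pixels.
frames = rng.random((3, 8, 8))

# One binary mask per time slice (stand-in for a dynamic modulation code;
# the real design combines an aperture mask with a static sensor mask).
masks = rng.integers(0, 2, size=(3, 8, 8)).astype(float)

# The single multiplexed snapshot integrates all masked time slices.
snapshot = (frames * masks).sum(axis=0)

print(snapshot.shape)  # (8, 8): one photo encoding three time slices
```

Post-capture reinterpretation then amounts to inverting this masked sum differently for different parts of the scene, depending on which dimension (space, angle or time) is assumed redundant there.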
2010-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.2312/CGF.v29i2pp753-762
Fast High-Dimensional Filtering Using the Permutohedral Lattice
Adams, Andrew; Baek, Jongmin; Davis, Myers Abraham
Many useful algorithms for processing images and geometry fall under the general framework of high-dimensional Gaussian filtering. This family of algorithms includes bilateral filtering and non-local means. We propose a new way to perform such filters using the permutohedral lattice, which tessellates high-dimensional space with uniform simplices. Our algorithm is the first implementation of a high-dimensional Gaussian filter that is both linear in input size and polynomial in dimensionality. Furthermore, it is parameter-free apart from the filter size, and achieves a consistently high accuracy relative to ground truth (> 45 dB). We use this to demonstrate a number of interactive-rate applications of filters in up to eight dimensions.
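A brute-force reference implementation helps clarify what the permutohedral lattice accelerates. The sketch below (toy data, hypothetical per-axis scaling) evaluates the high-dimensional Gaussian filter directly in O(n²); the paper's contribution is computing the same quantity in time linear in the input size.

```python
import numpy as np

def gaussian_filter_nd(values, features, sigma):
    """Brute-force high-dimensional Gaussian filter (O(n^2)).

    Each output value is a Gaussian-weighted average of all input
    values, with weights measured in the feature space. With
    features = (x, y, intensity) this is a bilateral filter; the
    permutohedral lattice evaluates the same sum in linear time.
    """
    diffs = features[:, None, :] - features[None, :, :]
    w = np.exp(-0.5 * (diffs ** 2).sum(-1) / sigma ** 2)
    return (w @ values) / w.sum(axis=1)

# 1D toy signal: a step edge plus ripple, filtered on (position, value),
# so the edge is preserved while the ripple is smoothed.
x = np.linspace(0.0, 1.0, 50)
signal = (x > 0.5).astype(float) + 0.05 * np.sin(40 * x)
feats = np.stack([x / 0.1, signal / 0.2], axis=1)  # per-axis sigmas folded in
out = gaussian_filter_nd(signal, feats, sigma=1.0)
print(out.shape)
```

Because the weights depend on the value dimension as well as position, pixels across the step contribute little to each other, which is exactly the edge-preserving behavior of the bilateral filter.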
2010-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.2312/CGF.v29i2pp743-752
Two-Colored Pixels
Pavic, Darko; Kobbelt, Leif
In this paper we show how to use two-colored pixels as a generic tool for image processing. We apply two-colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two-colored pixel representation, we reduce the image resolution and replace blocks of N x N pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono-colored pixel images into two-colored pixel images can be computed efficiently by applying a hierarchical algorithm along with a CUDA-based implementation. Two-colored pixels overcome some of the limitations of classical pixel representations, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two-colored pixels as an interactive brush tool, achieving real-time performance for image abstraction and non-photorealistic filtering. Additionally, we propose a real-time solution for image retargeting, defined as a linear minimization problem on a regular or even adaptive two-colored pixel image. The concept of two-colored pixels can be easily extended to a video volume, and we demonstrate this for the example of video retargeting.
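The conversion of an N x N block into a two-colored pixel can be sketched by brute force: try a set of candidate split lines and keep the one that minimizes squared reconstruction error. This is only an illustrative reference version with made-up parameters (`n_angles`, `n_offsets`), not the hierarchical CUDA algorithm the paper describes.

```python
import numpy as np

def fit_two_colored_pixel(block, n_angles=16, n_offsets=9):
    """Fit one two-colored pixel to an NxN grayscale block (brute force).

    Candidate split lines are parameterized by angle and offset through
    the block center; each side is assigned its mean color, and the line
    with the lowest squared error wins.
    """
    n = block.shape[0]
    yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    best = None
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = np.cos(theta) * xx + np.sin(theta) * yy
        for off in np.linspace(proj.min(), proj.max(), n_offsets):
            side = proj > off
            if side.all() or (~side).all():
                continue  # degenerate split: one side empty
            c0, c1 = block[~side].mean(), block[side].mean()
            recon = np.where(side, c1, c0)
            err = ((block - recon) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, theta, off, c0, c1)
    return best

# A block with a vertical edge is reproduced with near-zero error.
block = np.zeros((8, 8))
block[:, 4:] = 1.0
err, *_ = fit_two_colored_pixel(block)
print(err)
```

The fitted feature line is exactly the "minimal geometric information" the abstract mentions: one angle, one offset and two colors per block, instead of N x N independent pixels.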
2010-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.2312/CGF.v29i2pp733-742
Motion Blur for EWA Surface Splatting
Heinzle, Simon; Wolf, Johanna; Kanamori, Yoshihiro; Weyrich, Tim; Nishita, Tomoyuki; Gross, Markus
This paper presents a novel framework for elliptical weighted average (EWA) surface splatting with time-varying scenes. We extend the theoretical basis of the original framework by replacing the 2D surface reconstruction filters with 3D kernels that unify the spatial and temporal components of moving objects. Based on the newly derived mathematical framework, we introduce a rendering algorithm that supports the generation of high-quality motion blur for point-based objects using a piecewise linear approximation of the motion. The rendering algorithm applies ellipsoids as rendering primitives, which are constructed by extending planar EWA surface splats into the temporal dimension along the instantaneous motion vector. Finally, we present an implementation of the proposed rendering algorithm with approximated occlusion handling using advanced features of modern GPUs, and show its capability of producing motion-blurred result images at interactive frame rates.
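The extension of a planar splat into a space-time ellipsoid along the motion vector can be sketched as a covariance construction. This is a simplified illustration under assumed linear motion over the shutter interval, not the paper's full derivation.

```python
import numpy as np

def motion_blurred_kernel(cov2d, velocity, dt=1.0):
    """Extend a planar EWA splat into a space-time ellipsoid (sketch).

    Embeds the 2D reconstruction kernel in (x, y, t) and shears it along
    the instantaneous motion vector, so integrating over t smears the
    splat along its linear motion path.
    """
    # Block 3x3 covariance: spatial part plus a time axis of extent dt.
    cov3d = np.zeros((3, 3))
    cov3d[:2, :2] = cov2d
    cov3d[2, 2] = dt ** 2
    # Shear: position couples to time via x(t) = x0 + v * t.
    S = np.eye(3)
    S[0, 2], S[1, 2] = velocity
    return S @ cov3d @ S.T

# An isotropic splat moving along x stretches in the x direction.
cov2d = np.diag([1.0, 1.0])
K = motion_blurred_kernel(cov2d, velocity=(2.0, 0.0))
print(np.diag(K))  # x-variance grows along the motion direction
```

Projecting this ellipsoid back to the image plane for a fixed exposure window yields the elongated screen-space footprint that produces the blur; the piecewise linear motion approximation in the paper chains several such sheared kernels per frame.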
2010-01-01T00:00:00Z