Volume 32, Issue 7
https://diglib.eg.org:443/handle/10.2312/185
Pacific Graphics 2013 - Special Issue
https://diglib.eg.org:443/handle/10.1111/v32i7pp421-430
Efficient Shadow Removal Using Subregion Matching Illumination Transfer
Xiao, Chunxia; Xiao, Donglin; Zhang, Ling; Chen, Lin
B. Levy, X. Tong, and K. Yin
This paper proposes a new shadow removal approach for a single natural input image using subregion matching illumination transfer. We first propose an effective, automatic shadow detection algorithm that combines a global successive thresholding scheme with local boundary refinement. We then present a novel shadow removal algorithm that performs illumination transfer on matched subregion pairs between the shadow and non-shadow regions; this method can process complex images with different kinds of shadowed texture regions and illumination conditions. In addition, we develop an efficient shadow boundary processing method using alpha matte interpolation, which produces a seamless transition between the shadow and non-shadow regions. Experimental results demonstrate the capabilities of our algorithm in both shadow removal quality and performance.
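The illumination-transfer step described above can be sketched as a per-channel statistics match between a shadow subregion and its matched non-shadow subregion. This is a minimal stand-in, not the paper's algorithm: the subregion matching, shadow detection, and alpha-matte boundary processing are assumed already done, and the function name and mean/std matching rule are illustrative assumptions.

```python
import numpy as np

def illumination_transfer(shadow_pixels, lit_pixels):
    """Shift the shadow subregion's per-channel mean and spread toward
    those of its matched non-shadow subregion (simplified sketch).
    Both inputs are (num_pixels, 3) RGB arrays."""
    s_mean, s_std = shadow_pixels.mean(axis=0), shadow_pixels.std(axis=0)
    l_mean, l_std = lit_pixels.mean(axis=0), lit_pixels.std(axis=0)
    # Guard against division by zero on flat (constant-color) regions.
    s_std = np.where(s_std < 1e-6, 1.0, s_std)
    return (shadow_pixels - s_mean) * (l_std / s_std) + l_mean

# Toy example: a dark subregion matched against a brighter one.
shadow = np.array([[0.10, 0.10, 0.10], [0.20, 0.15, 0.10]])
lit = np.array([[0.60, 0.50, 0.40], [0.80, 0.70, 0.50]])
relit = illumination_transfer(shadow, lit)
```

After the transfer, the relit pixels share the lit subregion's per-channel mean, which is the basic effect the matched-pair transfer relies on.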
2013-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.1111/v32i7pp411-420
EnvyDepth: An Interface for Recovering Local Natural Illumination from Environment Maps
Banterle, Francesco; Callieri, Marco; Dellepiane, Matteo; Corsini, Massimiliano; Pellacini, Fabio; Scopigno, Roberto
B. Levy, X. Tong, and K. Yin
In this paper, we present EnvyDepth, an interface for recovering local illumination from a single HDR environment map. In EnvyDepth, the user quickly draws strokes to mark regions of the environment map that should be grouped together into a single geometric primitive. From these annotated strokes, EnvyDepth uses edit propagation to create a detailed collection of virtual point lights that reproduce both the local and the distant lighting effects of the original scene. Compared to using distant illumination alone, the added spatial information better reproduces a variety of local effects such as shadows, highlights and caustics. Without the effort needed to create a precise scene reconstruction, EnvyDepth annotations take only tens of seconds to produce plausible lighting without visible artifacts. This is easy to obtain even for complex scenes, both indoors and outdoors. The generated lighting environments work well in a production pipeline, since they are efficient to use and able to produce accurate renderings.
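The step from annotated environment-map samples to virtual point lights can be sketched as follows, assuming the edit-propagation stage has already assigned a depth to each sample. The data layout and function name are assumptions for illustration, not EnvyDepth's actual interface: each sample carries a unit direction, a recovered depth, an RGB radiance, and the solid angle it covers.

```python
def make_vpls(samples):
    """samples: list of (direction, depth, radiance, solid_angle) tuples,
    where direction is a unit 3-vector. Returns (position, power) VPLs:
    the VPL sits at depth along its direction, and its per-channel power
    is radiance scaled by the sample's solid angle."""
    vpls = []
    for (dx, dy, dz), depth, radiance, omega in samples:
        position = (dx * depth, dy * depth, dz * depth)
        power = tuple(c * omega for c in radiance)  # flux per channel
        vpls.append((position, power))
    return vpls

# One sample straight up at 2 m, unit radiance, covering 0.01 sr.
vpls = make_vpls([((0.0, 0.0, 1.0), 2.0, (1.0, 1.0, 1.0), 0.01)])
```

Placing the light at a finite depth, rather than at infinity, is what lets the recovered lighting reproduce the local shadows and caustics the abstract mentions.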
2013-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.1111/v32i7pp391-400
An Efficient and Scalable Image Filtering Framework Using VIPS Fusion
Zhang, Jun; Chen, Xiuhong; Zhao, Yan; Li, H.
B. Levy, X. Tong, and K. Yin
Edge-preserving image filtering is a valuable tool for a variety of applications in image processing and computer vision. Motivated by a simple but effective local Laplacian filter, we propose a scalable and efficient image filtering framework that extends this edge-preserving filter and provides a uniform implementation in O(N) time. The proposed framework is built upon a practical global-to-local strategy. The input image is first remapped globally by a series of tentative remapping functions to generate a virtual candidate image sequence (Virtual Image Pyramid Sequence, VIPS). This sequence is then recombined locally into a single output image by a flexible edge-aware pixel-level fusion rule. To avoid halo artifacts, both the output image and the virtual candidate image sequence are transformed into multi-resolution pyramid representations. Four examples, single-image de-hazing, multi-exposure fusion, fast edge-preserving filtering and tone mapping, are presented as concrete applications of the proposed framework. Experiments on filtering quality and computational efficiency indicate that the proposed framework can build a wide range of fast image filters that yield visually compelling results.
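The global-to-local strategy can be sketched on a 1D signal: remap the input with several tentative remapping functions, then fuse per sample by picking the candidate whose reference level is nearest the local intensity. This collapses the multi-resolution pyramid machinery (which the paper uses to avoid halos) into a direct nearest-level selection, so it illustrates the remap-then-fuse idea only; `remap`, the detail factor `alpha`, and the level set are assumptions.

```python
def remap(x, sigma, alpha=0.5):
    # Tentative remapping function: attenuate detail around the
    # reference level sigma (alpha < 1 smooths, alpha > 1 enhances).
    return sigma + (x - sigma) * alpha

def vips_fuse(signal, levels, alpha=0.5):
    """Build one remapped candidate per reference level, then fuse:
    each sample takes its value from the candidate whose level is
    closest to the sample's own intensity (edge-aware selection)."""
    candidates = [[remap(x, s, alpha) for x in signal] for s in levels]
    out = []
    for i, x in enumerate(signal):
        j = min(range(len(levels)), key=lambda k: abs(levels[k] - x))
        out.append(candidates[j][i])
    return out

# Small detail around 0.0 and 1.0 is attenuated; the large edge
# between 0.1 and 0.9 is preserved because the two sides fuse from
# different candidates.
sig = [0.0, 0.1, 0.9, 1.0]
smooth = vips_fuse(sig, levels=[0.0, 0.5, 1.0], alpha=0.5)
```

Selecting per pixel among globally remapped candidates is what makes the scheme parallel-friendly: the expensive remapping is shared across all pixels instead of being recomputed per local window.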
2013-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.1111/v32i7pp401-410
Learning to Predict Localized Distortions in Rendered Images
Čadík, Martin; Herzog, Robert; Mantiuk, Rafal; Mantiuk, Radoslaw; Myszkowski, Karol; Seidel, Hans-Peter
B. Levy, X. Tong, and K. Yin
In this work, we present an analysis of feature descriptors for objective image quality assessment. We explore a large space of possible features, including components of existing image quality metrics as well as many traditional computer vision and statistical features. Additionally, we propose new features motivated by human perception, and we analyze visual saliency maps acquired with an eye tracker in our user experiments. The discriminative power of the features is assessed by means of a machine learning framework that reveals the importance of each feature for the image quality assessment task. Furthermore, we propose a new data-driven full-reference image quality metric which outperforms current state-of-the-art metrics. The metric was trained on subjective ground-truth data combining two publicly available datasets. For the sake of completeness, we create a new synthetic testing dataset including experimentally measured subjective distortion maps. Finally, using the same machine learning framework, we optimize the parameters of popular existing metrics.
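A data-driven full-reference metric of this kind can be sketched as a learned combination of feature differences fitted to subjective scores. The sketch below uses synthetic toy data and a plain least-squares fit; the paper's metric uses a much richer feature set and machine learning framework, so everything here (the three features, the weights, the training set) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each row is a vector of per-region feature
# differences between a reference and a distorted image; the
# "subjective" scores are generated from known weights purely so the
# sketch has a ground truth to recover.
X = rng.random((50, 3))
true_w = np.array([0.5, 0.3, 0.2])
y = X @ true_w

# Fit the metric's feature weights to the subjective scores.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def metric(feature_diffs, w=w):
    # Predicted local distortion = learned weighted sum of features.
    return feature_diffs @ w
```

On this noiseless toy data the fit recovers the generating weights exactly; with real subjective data the same pipeline would instead reveal which features carry discriminative power, which is the analysis the paper performs.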
2013-01-01T00:00:00Z