Title: Hardware-Accelerated Rendering of Photo Hulls
Authors: Li, Ming; Magnor, Marcus; Seidel, Hans-Peter
Year: 2004
ISSN: 1467-8659
DOI: https://doi.org/10.1111/j.1467-8659.2004.00795.x
Pages: 635-642
Record date: 2015-02-19

Abstract: This paper presents an efficient hardware-accelerated method for novel view synthesis from a set of images or videos. Our method is based on the photo hull representation, the maximal photo-consistent shape. We avoid the explicit reconstruction of photo hulls by adopting a view-dependent plane-sweeping strategy: from the target viewpoint, slicing planes are rendered with the reference views projected onto them. Graphics hardware is exploited to verify the photo-consistency of each rasterized fragment. Visibility with respect to the reference views is modeled properly, and only photo-consistent fragments are kept and colored in the target view. We present experiments with real images and animation sequences. Thanks to the more accurate shape of the photo hull representation, our method generates more realistic rendering results than methods based on visual hulls. Currently, we achieve rendering frame rates of 2-3 fps; compared to a pure software implementation, our hardware-accelerated method is approximately 7 times faster.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
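
To make the abstract's plane-sweeping idea concrete, the following is a minimal CPU-side sketch, not the paper's GPU implementation: it assumes a simple variance-based photo-consistency test, a near-to-far sweep in which the first consistent fragment along each target ray is kept, and it does not reproduce the paper's per-view visibility modeling. All function names, the camera/ray conventions, and the threshold parameter tau are hypothetical choices for illustration.

    # Hypothetical sketch of view-dependent plane sweeping with a
    # per-fragment photo-consistency test (variance threshold assumed).
    import numpy as np

    def project(P, X):
        """Project homogeneous world points X (N,4) with a 3x4 camera matrix P."""
        x = X @ P.T                        # (N,3) homogeneous image coordinates
        return x[:, :2] / x[:, 2:3]        # (N,2) pixel coordinates

    def sample(image, uv):
        """Nearest-neighbour colour lookup; NaN for points outside the image."""
        h, w, _ = image.shape
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors = np.full((uv.shape[0], 3), np.nan)
        colors[valid] = image[v[valid], u[valid]]
        return colors

    def plane_sweep(target_rays, depths, ref_images, ref_cams, tau=0.01):
        """Sweep slicing planes through the target view's frustum.

        target_rays: (N,3) ray directions from the target camera centre
        (assumed at the origin). Each ray/plane intersection ("fragment")
        is projected into all reference views; it is accepted as
        photo-consistent when the colour variance across views is below tau.
        """
        n = target_rays.shape[0]
        out_color = np.full((n, 3), np.nan)
        done = np.zeros(n, dtype=bool)
        for d in depths:                   # near-to-far sweep from the target view
            X = np.hstack([target_rays * d, np.ones((n, 1))])
            samples = np.stack([sample(img, project(P, X))
                                for img, P in zip(ref_images, ref_cams)])  # (V,N,3)
            var = np.nanvar(samples, axis=0).sum(axis=1)   # colour spread per fragment
            consistent = (~done) & ~np.isnan(var) & (var < tau)
            out_color[consistent] = np.nanmean(samples[:, consistent], axis=0)
            done |= consistent
        return out_color

The near-to-far sweep with a "first consistent fragment wins" rule is only a rough stand-in for the visibility reasoning described in the abstract; in the actual method, occlusion with respect to each reference view is handled on the graphics hardware during rasterization.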