3D Reconstruction and Rendering from High Resolution Light Fields

dc.contributor.author: Kim, Changil
dc.date.accessioned: 2015-11-26T15:22:57Z
dc.date.available: 2015-11-26T15:22:57Z
dc.date.issued: 2015
dc.description.abstract: This thesis presents a complete processing pipeline for densely sampled, high resolution light fields, from acquisition to rendering. The key components of the pipeline are 3D scene reconstruction, geometry-driven sampling analysis, and controllable multiscopic 3D rendering.

The thesis first addresses 3D geometry reconstruction from light fields. We show that the dense sampling of a scene attained in light fields allows for more robust and accurate depth estimation without resorting to patch matching or costly global optimization. Our algorithm estimates the depth of every light ray in the light field with high accuracy, and its pixel-wise depth computation yields particularly favorable quality around depth discontinuities. Most operations are localized to small portions of the light field, which is crucial for scalability to higher resolution input and also lends itself to efficient parallel implementations. The resulting reconstructions retain fine details of the scene and exhibit precise localization of object boundaries.

While dense sampling is the key to the success of our reconstruction algorithm, it complicates the acquisition and processing of light fields. This raises the question of the optimal sampling density required for faithful geometry reconstruction. Existing work focuses largely on alias-free rendering of light fields, while geometry-driven analysis has received much less attention. We propose an analysis model for determining sampling locations that are optimal in the sense of high quality geometry reconstruction. This is achieved by analyzing the visibility of scene points and the resolvability of depth, and by estimating the distribution of reliable depth estimates over potential sampling locations.

A light field with accurate depth information enables an entirely new approach to flexible and controllable 3D rendering. We develop a novel algorithm for multiscopic rendering of light fields that provides great control over the perceived depth conveyed in the output. The algorithm synthesizes a pair of stereoscopic images directly from the light field and allows stereoscopic and artistic constraints to be controlled on a per-pixel basis. It computes non-planar 2D cuts over the light field volume that best meet the prescribed constraints by minimizing an energy functional, and synthesizes the output images by sampling light rays on the cut surfaces. The algorithm generalizes to multiscopic 3D displays by computing multiple cuts.

The resulting algorithms are relevant to many application scenarios. They can readily be applied to 3D scene reconstruction and object scanning, depth-assisted segmentation, image-based rendering, and stereoscopic content creation and post-processing, and can also improve the quality of light field rendering techniques that require depth information, such as super-resolution and extended depth of field.
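As an illustration of the per-ray depth estimation described in the abstract, the following minimal sketch scores disparity hypotheses for a single ray of a 3D light field by the color consistency of the corresponding line in the epipolar plane image. The array layout (views stacked along a horizontal camera path), the function name estimate_ray_depth, and the variance-based consistency score are assumptions made for this sketch, not the exact formulation used in the thesis.

import numpy as np

def estimate_ray_depth(lf, s0, y, x0, disparities):
    """Score disparity hypotheses for one ray of a 3D light field.

    lf          -- array of shape (S, H, W, 3): views captured along a
                   horizontal camera path (assumed layout).
    (s0, y, x0) -- view index and pixel of the ray of interest.
    disparities -- candidate disparities in pixels per view step.

    Returns the disparity whose epipolar line through (s0, x0) is most
    color-consistent, together with its consistency score.
    """
    S, H, W = lf.shape[:3]
    s = np.arange(S)
    best_d, best_score = None, np.inf
    for d in disparities:
        # Hypothesized epipolar line: x(s) = x0 + d * (s - s0).
        xs = np.round(x0 + d * (s - s0)).astype(int)
        valid = (xs >= 0) & (xs < W)
        if valid.sum() < 2:
            continue
        samples = lf[s[valid], y, xs[valid]].astype(float)
        score = samples.var(axis=0).sum()  # low variance = consistent color
        if score < best_score:
            best_d, best_score = d, score
    return best_d, best_score

Because each ray is scored independently from a small slice of the light field, such a per-ray scheme stays localized and parallelizes naturally, which matches the scalability argument made in the abstract.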
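The multiscopic rendering component computes non-planar 2D cuts over the light field volume by minimizing an energy functional. The sketch below illustrates the underlying idea in a deliberately simplified 1D form: for one output scanline, it selects a source view per pixel by dynamic programming over a hypothetical per-pixel data cost plus a linear smoothness penalty. The cost construction, the function name scanline_cut, and the smoothness weight are illustrative assumptions; the thesis's algorithm operates on full 2D cut surfaces with richer constraints.

import numpy as np

def scanline_cut(cost, smooth_weight=1.0):
    """Minimum-cost 1D cut through a per-scanline cost volume.

    cost -- array of shape (W, S); cost[x, s] penalizes sampling output
            pixel x from view s (e.g. deviation of the resulting disparity
            from a per-pixel target -- a hypothetical data term).
    Returns an integer array of length W with the chosen view per pixel.
    """
    W, S = cost.shape
    dp = cost.astype(float)
    back = np.zeros((W, S), dtype=int)
    views = np.arange(S)
    for x in range(1, W):
        # Cost of switching from view sp at pixel x-1 to view s at pixel x.
        trans = dp[x - 1][None, :] + smooth_weight * np.abs(
            views[:, None] - views[None, :])
        back[x] = trans.argmin(axis=1)
        dp[x] += trans.min(axis=1)
    # Backtrack the optimal cut.
    s_of_x = np.zeros(W, dtype=int)
    s_of_x[-1] = int(dp[-1].argmin())
    for x in range(W - 2, -1, -1):
        s_of_x[x] = back[x + 1, s_of_x[x + 1]]
    return s_of_x

Sampling each output pixel from the view selected by the cut yields one multi-perspective image; repeating the process with different constraints would give the additional views needed for a stereoscopic pair or a multiscopic display.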
dc.identifier.citation: Kim, Changil. 2015. 3D Reconstruction and Rendering from High Resolution Light Fields. PhD dissertation, ETH Zurich.
dc.identifier.uri: https://diglib.eg.org/handle/10.2312/14428
dc.language.iso: en_US
dc.publisher: ETH Zurich
dc.relation.ispartofseries: Diss. ETH No. 22933
dc.subject: light field
dc.subject: 3D reconstruction
dc.subject: image-based rendering
dc.subject: multi-perspective imaging
dc.subject: stereoscopy
dc.subject: view sampling
dc.title: 3D Reconstruction and Rendering from High Resolution Light Fields
dc.type: Thesis
Files (Original bundle)
Name: dissertation_changil_kim_2015.pdf
Size: 240.87 MB
Format: Adobe Portable Document Format
Description: PhD Thesis