Title: Efficient Multi-image Correspondences for On-line Light Field Video Processing
Authors: Dąbała, Łukasz; Ziegler, Matthias; Didyk, Piotr; Zilly, Frederik; Keinert, Joachim; Myszkowski, Karol; Seidel, Hans-Peter; Rokita, Przemysław; Ritschel, Tobias
Editors: Grinspun, Eitan; Bickel, Bernd; Dobashi, Yoshinori
Date issued: 2016-10-11
Year: 2016
ISSN: 1467-8659
DOI: 10.1111/cgf.13037 (https://doi.org/10.1111/cgf.13037)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13037
Pages: 401-410
Keywords: I.4.8 [Image Processing and Computer Vision]: Scene Analysis; Shape

Abstract: Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing, and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm that converts the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. A special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently on massively parallel hardware, allowing for interactive computation. The resulting depth quality as well as the computational performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as to stereo matching techniques. Another outcome of this work is a data set of light field videos captured with multiple variants of sparse camera arrays.
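The abstract builds on the classic Lucas-Kanade correspondence algorithm. As a point of reference only (this is not the paper's multi-image method), the following minimal sketch estimates a single translational displacement between two grayscale frames by solving the Lucas-Kanade normal equations over the whole image; the function name `lucas_kanade_shift` and the global (non-windowed, single-level) formulation are illustrative assumptions. The paper generalizes this pairwise, multi-resolution scheme to an entire camera array with confidence consolidation across views.

```python
import numpy as np

def lucas_kanade_shift(I0, I1):
    """Illustrative sketch: estimate one global displacement (dx, dy)
    between grayscale images I0 and I1 (indexed [y, x]) by least squares
    on the brightness-constancy linearization Ix*dx + Iy*dy + It ~ 0.
    This is the classic pairwise building block only, not the paper's
    array-wide algorithm."""
    # Spatial gradients of the reference frame (central differences).
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    # Temporal difference between the two frames.
    It = I1 - I0
    # Normal equations of the least-squares problem, summed over all pixels.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (dx, dy)
```

In a multi-resolution variant, this estimate is computed on a coarse (downsampled) image pyramid first and then refined level by level, which is what lets the linearization handle displacements larger than a pixel or two.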