Dense and Scalable Reconstruction from Unstructured Videos with Occlusions

Abstract
Depth-map-based multi-view stereo algorithms typically recover textureless surfaces by assuming per-view smoothness, so they must process different views to resolve occlusions. Moreover, the highly redundant viewpoints of videos make exhaustive computation of depth maps infeasible for large scenes. This paper achieves dense and scalable reconstruction from videos by adaptively selecting a minimal subset of views from the unstructured camera paths that is most beneficial for incremental occlusion handling and coverage improvement. Furthermore, we simplify and optimize each set of locally consistent points, i.e., the points accumulated from a cluster of previously processed views. By combining content-aware view selection and clustering with cluster-wise point merging, our approach reduces both computational and memory costs while producing accurate, concise, and dense 3D points, even for homogeneous areas. The superior efficiency and point-level nature of our operations facilitate 3D modeling at large scales.
@inproceedings{10.2312:vmv.20171259,
  booktitle = {Vision, Modeling \& Visualization},
  editor    = {Matthias Hullin and Reinhard Klein and Thomas Schultz and Angela Yao},
  title     = {{Dense and Scalable Reconstruction from Unstructured Videos with Occlusions}},
  author    = {Wei, Jian and Resch, Benjamin and Lensch, Hendrik P. A.},
  year      = {2017},
  publisher = {The Eurographics Association},
  isbn      = {978-3-03868-049-9},
  doi       = {10.2312/vmv.20171259}
}