Title: Memory-Efficient Interactive Online Reconstruction From Depth Image Streams
Authors: Reichl, F.; Weiss, J.; Westermann, R.
Editors: Chen, Min; Zhang, Hao (Richard)
Date issued: 2016-12-08
Year: 2016
ISSN: 1467-8659
DOI: 10.1111/cgf.12779
DOI URL: https://doi.org/10.1111/cgf.12779
Handle: https://diglib.eg.org:443/handle/10.1111/cgf12779
Keywords: object scanning/acquisition; surface reconstruction; I.3.3 [Computer Graphics]: Picture/Image Generation—Digitizing and Scanning

Abstract: We describe how the pipeline for 3D online reconstruction using commodity depth and image scanning hardware can be made scalable for large spatial extents and high scanning resolutions. Our modified pipeline requires less than 10% of the memory that is required by previous approaches at similar speed and resolution. To achieve this, we avoid storing a 3D distance field and weight map during online scene reconstruction. Instead, surface samples are binned into a high-resolution binary voxel grid. This grid is used in combination with caching and deferred processing of depth images to reconstruct the scene geometry. For pose estimation, GPU ray-casting is performed on the binary voxel grid. A one-to-one comparison to level-set ray-casting in a distance volume indicates slightly lower pose accuracy. To enable unlimited spatial extents and store acquired samples at the appropriate level of detail, we combine a hash map with a hierarchical tree representation.
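
The abstract describes binning surface samples into a high-resolution binary voxel grid addressed through a hash map. The following C++ sketch illustrates that idea only in outline and is not the authors' implementation: the block size (kBlockDim), voxel size (kVoxelSize), and hash function are assumptions, and the hierarchical tree, depth-image caching, and GPU ray-casting from the paper are omitted.

    // Minimal sketch (assumptions noted above): a sparse binary voxel grid
    // stored as a hash map of fixed-size blocks, one occupancy bit per voxel.
    #include <bitset>
    #include <cmath>
    #include <cstddef>
    #include <unordered_map>

    constexpr int   kBlockDim  = 8;       // voxels per block edge (assumed)
    constexpr float kVoxelSize = 0.005f;  // 5 mm voxels (assumed)

    struct BlockKey {
        int x, y, z;
        bool operator==(const BlockKey& o) const { return x == o.x && y == o.y && z == o.z; }
    };

    struct BlockKeyHash {
        std::size_t operator()(const BlockKey& k) const {
            // Simple spatial hash; the paper's hashing scheme may differ.
            return (static_cast<std::size_t>(k.x) * 73856093u) ^
                   (static_cast<std::size_t>(k.y) * 19349669u) ^
                   (static_cast<std::size_t>(k.z) * 83492791u);
        }
    };

    // One bit per voxel (occupied / empty); no distance field or weight map is kept.
    struct BinaryBlock {
        std::bitset<kBlockDim * kBlockDim * kBlockDim> occupancy;
    };

    class SparseBinaryGrid {
    public:
        // Bin a world-space surface sample into its voxel, allocating the block lazily.
        void insertSample(float wx, float wy, float wz) {
            const int vx = static_cast<int>(std::floor(wx / kVoxelSize));
            const int vy = static_cast<int>(std::floor(wy / kVoxelSize));
            const int vz = static_cast<int>(std::floor(wz / kVoxelSize));
            BinaryBlock& block = blocks_[BlockKey{blockCoord(vx), blockCoord(vy), blockCoord(vz)}];
            block.occupancy.set(localIndex(vx, vy, vz));
        }

        // Occupancy query, as a ray caster stepping through the grid might use it.
        bool isOccupied(int vx, int vy, int vz) const {
            const auto it = blocks_.find(BlockKey{blockCoord(vx), blockCoord(vy), blockCoord(vz)});
            return it != blocks_.end() && it->second.occupancy.test(localIndex(vx, vy, vz));
        }

    private:
        static int blockCoord(int v) {
            // Floor division of a voxel coordinate by the block edge length.
            return (v >= 0) ? v / kBlockDim : -(((-v) + kBlockDim - 1) / kBlockDim);
        }
        static std::size_t localIndex(int vx, int vy, int vz) {
            const int lx = ((vx % kBlockDim) + kBlockDim) % kBlockDim;
            const int ly = ((vy % kBlockDim) + kBlockDim) % kBlockDim;
            const int lz = ((vz % kBlockDim) + kBlockDim) % kBlockDim;
            return static_cast<std::size_t>((lz * kBlockDim + ly) * kBlockDim + lx);
        }
        std::unordered_map<BlockKey, BinaryBlock, BlockKeyHash> blocks_;
    };

Storing a single occupancy bit per voxel, rather than a truncated signed distance value plus weight, is what the abstract credits for the large memory reduction; the hash-of-blocks layout keeps only allocated regions resident, which matches the stated goal of unlimited spatial extents.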