Title: Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks
Authors: Alexandre Boulch, Bertrand Le Saux, Nicolas Audebert
Editors: Ioannis Pratikakis, Florent Dupont, Maks Ovsjanikov
Venue: Eurographics Workshop on 3D Object Retrieval (3DOR), 2017
Published: 2017-04-22
ISBN: 978-3-03868-030-7
ISSN: 1997-0471
DOI: https://doi.org/10.2312/3dor.20171047
URI: https://diglib.eg.org:443/handle/10.2312/3dor20171047
Pages: 17-24

Abstract:
In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. Since efficiently applying deep Convolutional Neural Networks (CNNs) to 3D data remains an open problem, we propose a framework that applies CNNs to multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud, generating two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks; different architectures are tested to achieve an effective fusion of our heterogeneous inputs. (iii) Finally, we perform a fast back-projection of the label predictions into 3D space, using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds, such as Lidar or photogrammetric data.
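
The three steps of the pipeline lend themselves to short sketches. The following is a minimal illustration of step (i): rendering an RGB snapshot of a colored point cloud from a virtual camera with a simple z-buffer, while also keeping a per-pixel point-index map that step (iii) needs for back-projection. The pinhole camera model, image size, and function names here are assumptions for illustration, not the authors' implementation (which additionally renders a depth composite view encoding geometric features).

```python
import numpy as np

def render_snapshot(points, colors, cam_pos, cam_target, img_size=256, f=200.0):
    """Render an RGB snapshot of a point cloud by point splatting with a z-buffer.

    points : (N, 3) float array of 3D coordinates
    colors : (N, 3) uint8 array of RGB colors
    Returns the RGB image, the depth image, and a per-pixel point-index map
    (-1 where no point projects), used later for back-projection.
    """
    # Build a simple look-at camera basis (assumes the view axis is not vertical).
    forward = cam_target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    # Transform points into camera coordinates.
    rel = points - cam_pos
    x, y, z = rel @ right, rel @ up, rel @ forward  # z = depth along view axis

    # Keep points in front of the camera and project with a pinhole model.
    vis = z > 1e-6
    u = (f * x[vis] / z[vis] + img_size / 2).astype(int)
    v = (f * y[vis] / z[vis] + img_size / 2).astype(int)
    idx, depth = np.flatnonzero(vis), z[vis]

    inside = (u >= 0) & (u < img_size) & (v >= 0) & (v < img_size)
    u, v, idx, depth = u[inside], v[inside], idx[inside], depth[inside]

    rgb = np.zeros((img_size, img_size, 3), dtype=np.uint8)
    zbuf = np.full((img_size, img_size), np.inf)
    index_map = np.full((img_size, img_size), -1, dtype=np.int64)

    # Z-buffering via write order: draw far-to-near so near points overwrite far ones.
    order = np.argsort(-depth)
    rgb[v[order], u[order]] = colors[idx[order]]
    zbuf[v[order], u[order]] = depth[order]
    index_map[v[order], u[order]] = idx[order]
    return rgb, zbuf, index_map
```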
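Step (ii) tests several architectures for fusing the heterogeneous RGB and depth composite inputs. The abstract does not fix a particular network, so the sketch below is a hypothetical minimal two-stream fully convolutional network with late feature fusion, written in PyTorch purely for illustration; the paper's actual models are deeper segmentation networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamFCN(nn.Module):
    """Toy two-stream FCN: separate encoders for the RGB and depth composite
    snapshots, feature concatenation, then per-pixel class logits."""

    def __init__(self, num_classes):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                  # /2 resolution
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                  # /4 resolution
            )
        self.rgb_enc = encoder()    # processes the RGB view
        self.geo_enc = encoder()    # processes the depth composite view
        self.fuse = nn.Conv2d(128, 64, 1)         # 1x1 conv merges both streams
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, rgb, geo):
        feats = torch.cat([self.rgb_enc(rgb), self.geo_enc(geo)], dim=1)
        x = F.relu(self.fuse(feats))
        x = self.classifier(x)
        # Upsample logits back to input resolution for pixel-wise labeling.
        return F.interpolate(x, size=rgb.shape[-2:], mode="bilinear",
                             align_corners=False)

# Example: a batch of two 256x256 snapshot pairs, 8 semantic classes.
net = TwoStreamFCN(num_classes=8)
logits = net(torch.randn(2, 3, 256, 256), torch.randn(2, 3, 256, 256))
print(logits.shape)  # torch.Size([2, 8, 256, 256])
```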
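For step (iii), per-pixel predictions from many snapshots must be aggregated back onto the 3D points. Below is a minimal sketch assuming the per-pixel index maps produced during rendering (as in the first snippet) and per-pixel class scores from the network; summing scores per point and taking the argmax is one plausible reading of the back-projection, not necessarily the exact buffering scheme used in the paper.

```python
import numpy as np

def back_project(num_points, num_classes, views):
    """Accumulate per-pixel class scores onto 3D points across many snapshots.

    views: iterable of (index_map, scores) pairs, where index_map is (H, W)
           holding the index of the 3D point visible at each pixel (-1 = none)
           and scores is (H, W, num_classes) of per-pixel class probabilities.
    Returns one label per 3D point (-1 for points never seen in any view).
    """
    accum = np.zeros((num_points, num_classes))
    for index_map, scores in views:
        valid = index_map >= 0
        # np.add.at handles a point appearing in several pixels of one view.
        np.add.at(accum, index_map[valid], scores[valid])
    labels = accum.argmax(axis=1)
    labels[accum.sum(axis=1) == 0] = -1  # never observed from any snapshot
    return labels
```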