
dc.contributor.author: Boulch, Alexandre
dc.contributor.author: Le Saux, Bertrand
dc.contributor.author: Audebert, Nicolas
dc.contributor.editor: Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov
dc.date.accessioned: 2017-04-22T17:17:40Z
dc.date.available: 2017-04-22T17:17:40Z
dc.date.issued: 2017
dc.identifier.isbn: 978-3-03868-030-7
dc.identifier.issn: 1997-0471
dc.identifier.uri: http://dx.doi.org/10.2312/3dor.20171047
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/3dor20171047
dc.description.abstract: In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of efficiently using deep Convolutional Neural Networks (CNNs) on 3D data is still a pending issue, we propose a framework which applies CNNs to multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud. We generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform fast back-projection of the label predictions into 3D space using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds, such as Lidar or photogrammetric data.
dc.publisher: The Eurographics Association
dc.title: Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks
dc.description.seriesinformation: Eurographics Workshop on 3D Object Retrieval
dc.description.sectionheaders: Paper Session I
dc.identifier.doi: 10.2312/3dor.20171047
dc.identifier.pages: 17-24
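The snapshot-and-back-projection idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes orthographic, axis-aligned views instead of the paper's rendered snapshots, uses only a depth image rather than the RGB plus depth-composite pair, and replaces the fully convolutional network with a placeholder `predict_2d` callable. The function names `render_view` and `backproject_labels` are hypothetical.

```python
import numpy as np

def render_view(points, view, img_size=64):
    """Orthographic snapshot of the cloud from an axis-aligned direction.
    Returns a depth image and, per pixel, the index of the nearest point
    (a simple z-buffer). `view = (u_axis, v_axis, depth_axis)` is an
    illustrative simplification of the paper's view generation."""
    u_ax, v_ax, d_ax = view
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins + 1e-9
    norm = (points - mins) / spans                     # coords in [0, 1)
    u = np.clip((norm[:, u_ax] * img_size).astype(int), 0, img_size - 1)
    v = np.clip((norm[:, v_ax] * img_size).astype(int), 0, img_size - 1)
    depth_img = np.full((img_size, img_size), np.inf)
    index_img = np.full((img_size, img_size), -1, dtype=int)
    for i, d in enumerate(norm[:, d_ax]):
        if d < depth_img[v[i], u[i]]:                  # keep nearest point
            depth_img[v[i], u[i]] = d
            index_img[v[i], u[i]] = i
    return depth_img, index_img

def backproject_labels(points, views, predict_2d, n_classes):
    """Run the 2D labeler on each snapshot and vote the per-pixel
    predictions back onto the 3D points via the index images."""
    votes = np.zeros((len(points), n_classes), dtype=int)
    for view in views:
        depth_img, index_img = render_view(points, view)
        labels_2d = predict_2d(depth_img)              # stand-in for the FCN
        visible = index_img >= 0
        votes[index_img[visible], labels_2d[visible]] += 1
    return votes.argmax(axis=1)                        # majority vote per point
```

For example, with a dummy labeler `lambda img: (img < 0.5).astype(int)` and the three axis-aligned views `[(0, 1, 2), (0, 2, 1), (1, 2, 0)]`, every point receives a class from the votes of the pixels it was visible in; the index image plays the role of the paper's efficient buffering for back-projection.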

