dc.contributor.author  Mattausch, Oliver  en_US
dc.contributor.author  Panozzo, Daniele  en_US
dc.contributor.author  Mura, Claudio  en_US
dc.contributor.author  Sorkine-Hornung, Olga  en_US
dc.contributor.author  Pajarola, Renato  en_US
dc.contributor.editor  B. Levy and J. Kautz  en_US
dc.date.accessioned  2015-03-03T12:25:44Z
dc.date.available  2015-03-03T12:25:44Z
dc.date.issued  2014  en_US
dc.identifier.issn  1467-8659  en_US
dc.identifier.uri  http://dx.doi.org/10.1111/cgf.12286  en_US
dc.description.abstract  We present a method to automatically segment indoor scenes by detecting repeated objects. Our algorithm scales to datasets with 198 million points and does not require any training data. We propose a trivially parallelizable preprocessing step, which compresses a point cloud into a collection of nearly-planar patches related by geometric transformations. This representation enables us to robustly filter out noise and greatly reduces the computational cost and memory requirements of our method, enabling execution at interactive rates. We propose a patch similarity measure based on shape descriptors and spatial configurations of neighboring patches. The patches are clustered in a Euclidean embedding space based on the similarity matrix to yield the segmentation of the input point cloud. The generated segmentation can be used to compress the raw point cloud, create an object database, and increase the clarity of the point cloud visualization.  en_US
dc.publisher  The Eurographics Association and John Wiley and Sons Ltd.  en_US
dc.title  Object Detection and Classification from Large-Scale Cluttered Indoor Scans  en_US
dc.description.seriesinformation  Computer Graphics Forum  en_US
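
The abstract above describes a pipeline that compresses the scan into nearly-planar patches, computes a pairwise patch similarity matrix, embeds the patches in a Euclidean space, and clusters them to obtain the segmentation. The snippet below is a minimal, illustrative sketch of that embed-and-cluster stage only, assuming a precomputed similarity matrix S scaled to [0, 1]; classical MDS and k-means are used here as generic stand-ins and are not necessarily the exact formulation used in the paper.

```python
# Minimal sketch of the embed-and-cluster step outlined in the abstract.
# Assumptions (not taken from the paper): S is a symmetric n x n patch
# similarity matrix in [0, 1]; classical MDS and k-means stand in for the
# paper's embedding and clustering choices.
import numpy as np
from sklearn.cluster import KMeans


def embed_similarities(S: np.ndarray, dim: int = 8) -> np.ndarray:
    """Map an n x n patch similarity matrix to n points in R^dim via classical MDS."""
    D = 1.0 - S                        # turn similarities into dissimilarities
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J        # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]
    pos = np.clip(eigvals[order], 0.0, None)
    return eigvecs[:, order] * np.sqrt(pos)


def cluster_patches(S: np.ndarray, n_objects: int) -> np.ndarray:
    """Return one cluster label per patch; each cluster groups patches of one repeated object."""
    X = embed_similarities(S)
    return KMeans(n_clusters=n_objects, n_init=10, random_state=0).fit_predict(X)


if __name__ == "__main__":
    # Toy stand-in for a real patch similarity matrix: 3 repeated "objects", 30 patches.
    rng = np.random.default_rng(0)
    labels_true = np.repeat(np.arange(3), 10)
    S = (labels_true[:, None] == labels_true[None, :]).astype(float)
    S += 0.05 * rng.standard_normal(S.shape)
    S = np.clip((S + S.T) / 2.0, 0.0, 1.0)
    print(cluster_patches(S, n_objects=3))
```

In a real pipeline the similarity matrix would come from the paper's patch shape descriptors and neighboring-patch configurations rather than from a toy label construction as above.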

