Title: Scene Segmentation and Understanding for Context-Free Point Clouds
Authors: Spina, Sandro; Debattista, Kurt; Bugeja, Keith; Chalmers, Alan
Editors: John Keyser, Young J. Kim, and Peter Wonka
Date: 2014-12-16
Year: 2014
ISBN: 978-3-905674-73-6
DOI: https://doi.org/10.2312/pgs.20141244

Abstract: The continuous development of new commodity hardware for capturing the surface structure of objects is quickly making point cloud data ubiquitous. Scene understanding methods address the problem of determining which objects are present in a point cloud that, depending on sensor capabilities and object occlusions, is typically noisy and incomplete. In this paper, we propose a novel technique that enables automatic identification of semantically meaningful structures within point clouds acquired using different sensors on a variety of scenes. A representation model, the structure graph, whose nodes represent planar surface segments, is computed over these point clouds to aid the identification task. To accommodate more complex objects (e.g. chair, couch, cabinet, table), a training process determines and concisely describes, within each object's structure graph, its important shape characteristics. Results on a variety of point clouds show how our method can quickly discern certain object types.

Categories (ACM CCS):
I.3.0 [Computer Graphics]: General
I.3.5 [Computer Graphics]: Boundary Representation
I.3.8 [Computer Graphics]: Applications
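
To illustrate the representation named in the abstract, below is a minimal sketch of a structure graph whose nodes are planar surface segments and whose edges link adjacent segments. The class names, fields, and the distance-based adjacency test are illustrative assumptions for clarity; they are not the authors' implementation or training procedure.

```python
# Illustrative sketch only: a minimal "structure graph" in the spirit of the
# abstract -- nodes are planar surface segments, edges connect adjacent ones.
# Class names, fields, and thresholds are assumptions, not the paper's code.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class PlanarSegment:
    """A planar surface segment extracted from a point cloud (assumed fields)."""
    points: np.ndarray    # (N, 3) points belonging to the segment
    normal: np.ndarray    # unit normal of the fitted plane
    centroid: np.ndarray  # segment centroid

    @property
    def spread(self) -> float:
        # Rough size descriptor: mean distance of points from the centroid.
        return float(np.linalg.norm(self.points - self.centroid, axis=1).mean())


@dataclass
class StructureGraph:
    """Nodes are planar segments; edges connect segments judged adjacent."""
    nodes: list = field(default_factory=list)
    edges: set = field(default_factory=set)

    def add_segment(self, seg: PlanarSegment) -> int:
        self.nodes.append(seg)
        return len(self.nodes) - 1

    def connect_if_adjacent(self, i: int, j: int, max_gap: float = 0.05) -> None:
        # Naive adjacency test (assumed criterion): connect two segments whose
        # closest pair of points lies within max_gap of each other.
        a, b = self.nodes[i].points, self.nodes[j].points
        d = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2))
        if d < max_gap:
            self.edges.add((min(i, j), max(i, j)))
```

In the pipeline the abstract describes, a graph of this kind built over an input scene would then be matched against per-object shape characteristics learned during training; the matching and training steps themselves are not shown here.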