Title: Point Clouds as an Efficient Multiscale Layered Spatial Representation
Authors: Poux, Florent; Neuville, Romain; Hallot, Pierre; Billen, Roland
Editors: Vincent Tourre and Filip Biljecki
Issue date: 2016 (deposited 2016-12-07)
ISBN: 978-3-03868-013-0
ISSN: 2307-8251
DOI: https://doi.org/10.2312/udmv.20161417
URI: https://diglib.eg.org:443/handle/10.2312/udmv20161417
Pages: 31-36

Abstract: 3D point clouds describe urban shape at different scales, precisions and resolutions, depending on the underlying sensors and acquisition methodology. These factors influence both the quality of the data and its representativeness. In this paper, we propose a multi-scale workflow that yields a better description of the captured environment through a multi-scale representative point cloud, supporting unlimited depth and multi-sensor data fusion. Our method builds on a "smart point cloud" data structure and on data fusion principles that retain the higher description and precision in overlapping areas. The concept is illustrated through a use case on the castle of Jehay (Belgium), where aerial LiDAR data, a terrestrial laser scanner point cloud and a photogrammetry-based reconstruction are combined into a single multi-scale data structure.

Subjects: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding - Intensity, color, photometry, and thresholding; Representations, data structures, and transforms
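
The overlap rule stated in the abstract (where several sensors cover the same region, keep the data from the source with the finer precision) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function name fuse_by_precision, the voxel grid, and the per-source precision values are all assumptions made for the example.

import numpy as np

def fuse_by_precision(clouds, precisions, voxel_size=0.5):
    """Fuse point clouds (a list of Nx3 arrays), keeping in each voxel
    only the points from the most precise source covering it.

    precisions: one nominal accuracy per cloud (metres, smaller is better).
    All names and parameters here are illustrative assumptions."""
    keys, sources, points = [], [], []
    for i, pts in enumerate(clouds):
        # Assign every point a voxel key and remember its source index.
        keys.append(np.floor(pts / voxel_size).astype(np.int64))
        sources.append(np.full(len(pts), i))
        points.append(pts)
    keys = np.concatenate(keys)
    sources = np.concatenate(sources)
    points = np.concatenate(points)

    # Record the best (smallest) precision available in each voxel.
    best = {}
    for k, s in zip(map(tuple, keys), sources):
        if precisions[s] < best.get(k, np.inf):
            best[k] = precisions[s]

    # Keep a point only if its source matches its voxel's best precision.
    keep = np.array([precisions[s] <= best[tuple(k)]
                     for k, s in zip(keys, sources)])
    return points[keep]

# Toy usage: a coarse aerial-like cloud and a fine terrestrial-like cloud
# overlapping on one corner of the scene (synthetic data, not from the paper).
aerial = np.random.rand(1000, 3) * 20.0        # ~20 m tile, coarse
terrestrial = np.random.rand(2000, 3) * 5.0    # 5 m corner, fine
fused = fuse_by_precision([aerial, terrestrial],
                          precisions=[0.15, 0.005], voxel_size=1.0)
print(fused.shape)

In the overlapping corner the coarse aerial points are dropped in favour of the terrestrial ones, while elsewhere the aerial points survive, which mirrors the fusion principle the abstract describes at a very small scale.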