Fast Scalable k-NN Computation for Very Large Point Clouds

Authors: Sandro Spina, Kurt Debattista, Keith Bugeja, Alan Chalmers
Editors: Hamish Carr and Silvester Czanner
Year: 2012
Record date: 2013-11-08
ISBN: 978-3-905673-93-7
DOI: https://doi.org/10.2312/LocalChapterEvents/TPCG/TPCG12/085-092
Categories: I.3.6 [Computer Graphics]: Methodology and Techniques: Graphics data structures and data types

Abstract: The process of reconstructing virtual representations of large real-world sites is traditionally carried out using laser scanning technology. Recent advances in these technologies have led to improvements in precision and accuracy, as well as higher sampling rates. State-of-the-art laser scanners are capable of acquiring around a million points per second, generating enormous point cloud data sets. These data sets are usually cleaned through the application of numerous post-processing algorithms, such as normal determination, clustering and noise removal. A common factor in these algorithms is the recurring need to compute point neighbourhoods, usually by computing the k-nearest neighbours of each point. The majority of these algorithms assume that the data sets they operate on fit in main memory, while others take the size of the data sets into account and are thus designed to keep data on disk. We present a hybrid approach which exploits the spatial locality of point clusters in the point cloud and loads them into system memory on demand by taking advantage of paged virtual memory in modern operating systems. In this way, we maximize processor utilization while keeping I/O overheads to a minimum. We evaluate our approach on point clouds ranging in size from 50K to 333M points on machines with 1GB, 2GB, 4GB and 8GB of system memory.
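To make the on-demand paging idea concrete, below is a minimal C++ sketch, not the authors' implementation: it memory-maps a raw point file so the operating system faults 3D points into RAM only as they are touched, then runs a brute-force k-NN query over the mapped range. The file layout (tightly packed xyz floats) and all names here are illustrative assumptions; the paper's actual hybrid approach additionally exploits the spatial locality of point clusters, which this sketch only notes in comments.

#include <algorithm>
#include <cstdio>
#include <queue>
#include <vector>

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct Point { float x, y, z; };  // assumed packed xyz layout

static float dist2(const Point& a, const Point& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Brute-force k-NN over [first, last): a max-heap keeps the k closest
// candidates seen so far. A scalable system would restrict the scan to
// spatially local clusters rather than the whole range.
std::vector<Point> knn(const Point* first, const Point* last,
                       const Point& q, size_t k) {
    auto farther = [&](const Point& a, const Point& b) {
        return dist2(a, q) < dist2(b, q);  // heap top = farthest kept point
    };
    std::priority_queue<Point, std::vector<Point>, decltype(farther)> heap(farther);
    for (const Point* p = first; p != last; ++p) {
        if (heap.size() < k) heap.push(*p);
        else if (dist2(*p, q) < dist2(heap.top(), q)) { heap.pop(); heap.push(*p); }
    }
    std::vector<Point> out;
    while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
    std::reverse(out.begin(), out.end());  // nearest first
    return out;
}

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s points.bin\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) != 0) { std::perror("fstat"); return 1; }
    size_t n = st.st_size / sizeof(Point);

    // The mapping reserves address space only; physical pages are
    // faulted in as points are touched, so data sets larger than RAM
    // can be processed without explicit read() calls.
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { std::perror("mmap"); return 1; }
    const Point* pts = static_cast<const Point*>(base);

    Point q{0.f, 0.f, 0.f};  // hypothetical query point
    for (const Point& p : knn(pts, pts + n, q, 8))
        std::printf("%f %f %f\n", p.x, p.y, p.z);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}

On POSIX systems, madvise() hints such as MADV_WILLNEED or MADV_SEQUENTIAL can further steer the pager toward the access pattern of the current cluster, which is in the spirit of the paper's goal of keeping I/O overheads low while maximizing processor utilization.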