Search Results

  • Item
    Fast and Robust Semi-Automatic Registration of Photographs to 3D Geometry
    (The Eurographics Association, 2011) Pintus, Ruggero; Gobbetti, Enrico; Combet, Roberto (editors: Franco Niccolucci, Matteo Dellepiane, Sebastian Pena Serna, Holly Rushmeier, Luc Van Gool)
    We present a simple, fast, and robust technique for semi-automatic 2D-3D registration capable of aligning a large set of unordered images to a massive point cloud with minimal human effort. Our method converts the hard-to-solve image-to-geometry registration problem into a Structure-from-Motion (SfM) problem plus a 3D-3D registration problem. We exploit an SfM framework that, starting from just the unordered image collection, computes an estimate of the camera parameters and a sparse 3D geometry derived from matched image features. We then coarsely register this model to the given 3D geometry by estimating a global scale and absolute orientation with minimal manual intervention (a toy sketch of this coarse alignment step is given after the listing). A specialized sparse bundle adjustment (SBA) step, exploiting the correspondence between the model derived from image features and the fine input 3D geometry, is then used to refine the intrinsic and extrinsic parameters of each camera. The output is suitable for photo-blending frameworks that produce seamless colored models. The effectiveness of the method is demonstrated on a series of real-world 3D/2D Cultural Heritage datasets.
  • Item
    Real-time Rendering of Massive Unstructured Raw Point Clouds using Screen-space Operators
    (The Eurographics Association, 2011) Pintus, Ruggero; Gobbetti, Enrico; Agus, Marco (editors: Franco Niccolucci, Matteo Dellepiane, Sebastian Pena Serna, Holly Rushmeier, Luc Van Gool)
    Nowadays, 3D acquisition devices allow us to capture the geometry of huge Cultural Heritage (CH) sites, historical buildings, and urban environments. We present a scalable real-time method for rendering such models without requiring lengthy preprocessing. The method makes no assumptions about sampling density or the availability of per-point normal vectors. On a frame-by-frame basis, our GPU-accelerated renderer computes point cloud visibility, fills and filters the sparse depth map to generate a continuous surface representation of the point cloud, and provides a screen-space shading term to effectively convey shape features (a minimal screen-space sketch follows the listing). The technique is applicable to all rendering pipelines capable of projecting points to the frame buffer. To deal with extremely massive models, we integrate it within a multi-resolution out-of-core real-time rendering framework with small precomputation times. Its effectiveness is demonstrated on a series of massive unstructured real-world Cultural Heritage datasets. The small precomputation times and low memory requirements make the method suitable for quick on-site visualizations during scan campaigns.
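
The first abstract describes a coarse 3D-3D registration step that estimates a global scale and absolute orientation from a few manually supplied correspondences. The sketch below is not the authors' code; it uses the standard closed-form similarity-transform estimate (Umeyama's method) as one plausible way to realize that step, with made-up example points.

```python
# Hypothetical sketch: coarse alignment of an SfM model to a scanned point cloud
# by estimating scale, rotation, and translation from a few picked point pairs.
import numpy as np

def similarity_transform(src, dst):
    """Return scale s, rotation R, translation t with dst ~ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding points (N >= 3, non-degenerate).
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    x, y = src - mu_src, dst - mu_dst
    cov = y.T @ x / len(src)                 # cross-covariance of the pairs
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # keep R a proper rotation
    R = U @ S @ Vt
    var_src = (x ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src   # global scale
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Example: three manually picked correspondences between the sparse SfM model
# and the dense scan (coordinates are illustrative only).
sfm_pts  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.5]])
scan_pts = np.array([[2.0, 1.0, 0.0], [2.0, 3.0, 0.0], [0.0, 1.0, 1.0]])
s, R, t = similarity_transform(sfm_pts, scan_pts)
aligned = (s * (R @ sfm_pts.T)).T + t        # least-squares alignment of the picks
```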
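
The second abstract outlines a screen-space pipeline: project points to a sparse depth map, fill and filter it into a continuous surface, and derive a shading term. Below is a hypothetical, simplified CPU sketch of those three stages in NumPy; the paper's renderer is GPU-based, and the specific hole-filling filter and shading term here are assumptions, not the published operators.

```python
# Hypothetical CPU sketch of a screen-space point-cloud pipeline:
# splat -> fill/filter sparse depth -> shading term from depth gradients.
import numpy as np

def render_depth(points, K, width, height):
    """Splat camera-space 3D points into a z-buffer via pinhole intrinsics K."""
    depth = np.full((height, width), np.inf)
    pts = points[points[:, 2] > 0]               # keep points in front of the camera
    uvw = (K @ pts.T).T                          # project with intrinsics K
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for x, y, z in zip(u[ok], v[ok], pts[ok, 2]):
        depth[y, x] = min(depth[y, x], z)        # keep nearest sample (visibility)
    return depth

def fill_depth(depth, radius=2):
    """Fill empty pixels with the minimum finite depth in a small window."""
    filled = depth.copy()
    ys, xs = np.where(~np.isfinite(depth))
    for y, x in zip(ys, xs):
        win = depth[max(0, y - radius):y + radius + 1,
                    max(0, x - radius):x + radius + 1]
        finite = win[np.isfinite(win)]
        if finite.size:
            filled[y, x] = finite.min()
    return filled

def shade(depth):
    """Screen-space shading term from depth gradients (crude surface-slope proxy)."""
    gy, gx = np.gradient(np.where(np.isfinite(depth), depth, 0.0))
    return 1.0 / (1.0 + np.hypot(gx, gy))        # flatter regions appear brighter
```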