Search Results

Now showing 1 - 3 of 3
  • Item
    The Minimal Bounding Volume Hierarchy
    (The Eurographics Association, 2010) Bauszat, Pablo; Eisemann, Martin; Magnor, Marcus; edited by Reinhard Koch, Andreas Kolb, and Christof Rezk-Salama
    Bounding volume hierarchies (BVHs) are a commonly used method for speeding up ray tracing. Even though the memory footprint of a BVH is relatively low compared to other acceleration data structures, it can still consume a large amount of memory for complex scenes and exceed the memory limits of the host system. This can lead to a tremendous performance decrease of several orders of magnitude. In this paper we present a novel scheme for construction and storage of BVHs that can reduce the memory consumption to less than 1% of a standard BVH. We show that our representation, which uses only 2 bits per node, is the smallest possible representation on a per-node basis that does not produce empty-space deadlocks. Our data structure, called the Minimal Bounding Volume Hierarchy (MVH), reduces the memory requirements in two important ways: using implicit indexing and preset surface reduction factors. Obviously, this scheme has a non-negligible computational overhead, but this overhead can be compensated for to a large degree by shooting larger ray bundles instead of single rays, using a simpler intersection scheme, and employing a two-level representation of the hierarchy. These measures enable interactive ray tracing performance without having to rely on the out-of-core techniques that would be inevitable for a standard BVH.
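    The idea of implicit indexing with only 2 bits per node can be pictured with a small sketch. The C++ fragment below is purely illustrative and rests on assumptions not spelled out in the abstract: the hierarchy is taken to be a complete binary tree stored in breadth-first order so that child indices are computed rather than stored, the 2-bit code is taken to select a split axis, and the preset surface reduction factors are left out. It is not the authors' actual encoding.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Illustrative sketch only -- not the paper's exact encoding.
        // Assumptions: complete binary tree in breadth-first (heap) order,
        // 2 bits per node selecting a split axis, child boxes derived from
        // the parent box instead of being stored.
        struct MiniBVH {
            std::vector<std::uint8_t> bits;   // 4 nodes packed per byte, 2 bits each

            // Read the 2-bit code of node i (no pointers or boxes stored per node).
            unsigned nodeCode(std::size_t i) const {
                return (bits[i >> 2] >> ((i & 3) * 2)) & 3u;
            }

            // Implicit indexing: children are computed, never stored.
            static std::size_t leftChild(std::size_t i)  { return 2 * i + 1; }
            static std::size_t rightChild(std::size_t i) { return 2 * i + 2; }

            // Derive a child's box from the parent's box by halving it along the
            // axis encoded in the node. (The paper additionally tightens boxes with
            // preset surface reduction factors, which this sketch leaves out.)
            void childBounds(const float pMin[3], const float pMax[3],
                             std::size_t node, bool right,
                             float cMin[3], float cMax[3]) const {
                unsigned axis = nodeCode(node) % 3;   // 0 = x, 1 = y, 2 = z (assumed)
                for (int a = 0; a < 3; ++a) { cMin[a] = pMin[a]; cMax[a] = pMax[a]; }
                float mid = 0.5f * (pMin[axis] + pMax[axis]);
                if (right) cMin[axis] = mid; else cMax[axis] = mid;
            }
        };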
  • Item
    Reconstructing Shape and Motion from Asynchronous Cameras
    (The Eurographics Association, 2010) Klose, Felix; Lipski, Christian; Magnor, Marcus; edited by Reinhard Koch, Andreas Kolb, and Christof Rezk-Salama
    We present an algorithm for scene flow reconstruction from multi-view data. The main contribution is its ability to cope with asynchronously captured videos. Our holistic approach simultaneously estimates depth, orientation, and 3D motion; as a result, we obtain a quasi-dense surface patch representation of the dynamic scene. The reconstruction starts with the generation of a sparse set of patches from the input views, which are then iteratively expanded along the object surfaces. We show that the approach performs well for scenes ranging from single objects to cluttered real-world scenarios.
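    The seed-and-expand structure of the reconstruction can be sketched as follows. All type and function names in this C++ skeleton are hypothetical; it only mirrors the overall flow described in the abstract (a sparse set of seed patches iteratively expanded along the surfaces) and omits how depth, orientation, and 3D motion are actually estimated from the asynchronous views.

        #include <cstddef>
        #include <queue>
        #include <vector>

        // Structural sketch only -- names and fields are hypothetical, and the
        // real method additionally handles the asynchronous capture times.
        struct Patch {
            float position[3];   // 3D point on the object surface
            float normal[3];     // surface orientation
            float motion[3];     // 3D motion (scene flow) of the patch
        };

        // Hypothetical placeholder: photo-consistency of a candidate patch across
        // the input views (a real implementation would compare reprojected patches).
        float photoConsistency(const Patch&) { return 1.0f; }

        // Hypothetical placeholder: candidate patches adjacent to p on the surface.
        std::vector<Patch> proposeNeighbours(const Patch&) { return {}; }

        // Seed-and-expand skeleton: start from a sparse set of reliable patches and
        // iteratively grow a quasi-dense set along the object surfaces.
        std::vector<Patch> expandPatches(const std::vector<Patch>& seeds,
                                         float threshold, std::size_t maxPatches) {
            std::vector<Patch> accepted;
            std::queue<Patch> frontier;
            for (const Patch& s : seeds) frontier.push(s);

            while (!frontier.empty() && accepted.size() < maxPatches) {
                Patch p = frontier.front();
                frontier.pop();
                if (photoConsistency(p) < threshold) continue;   // reject weak candidates
                accepted.push_back(p);
                for (const Patch& n : proposeNeighbours(p))      // expand along the surface
                    frontier.push(n);
            }
            return accepted;
        }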
  • Item
    ZIPMAPS: Zoom-Into-Parts Texture Maps
    (The Eurographics Association, 2010) Eisemann, Martin; Magnor, Marcus; edited by Reinhard Koch, Andreas Kolb, and Christof Rezk-Salama
    In this paper, we propose a method for rendering highly detailed close-up views of arbitrary textured surfaces. Our hierarchical texture representation can easily be rendered in real time, enabling zooming into specific texture regions at almost arbitrary magnification. To augment the texture map locally with high-resolution information, we describe how to automatically and seamlessly merge unregistered images of different scales. Our method is useful wherever close-up renderings of specific regions are needed, without requiring excessively large texture maps.
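    The lookup side of such a hierarchical texture can be sketched as a base map plus local high-resolution patches that take over wherever they cover the queried coordinates. The C++ fragment below is an assumed, simplified illustration of that fallback scheme; it does not reproduce the paper's data structure or its automatic, seamless merging of unregistered images.

        #include <algorithm>
        #include <vector>

        // Illustrative sketch of a zoom-into-parts lookup -- not the paper's actual
        // representation. Assumption: a base texture plus a list of higher-resolution
        // patches, each covering a sub-rectangle of the base UV domain.
        struct Texture {
            int width = 0, height = 0;
            std::vector<float> rgb;                 // width * height * 3 values

            // Nearest-neighbour sample at normalized coordinates (u, v) in [0, 1].
            const float* sample(float u, float v) const {
                int x = std::min(static_cast<int>(u * width),  width  - 1);
                int y = std::min(static_cast<int>(v * height), height - 1);
                return &rgb[(y * width + x) * 3];
            }
        };

        struct DetailPatch {
            float u0, v0, u1, v1;   // covered region in base-texture UV space
            Texture highRes;        // close-up image, assumed already merged in
        };

        struct ZoomTexture {
            Texture base;
            std::vector<DetailPatch> patches;

            // Use a high-resolution patch if one covers (u, v); otherwise fall back
            // to the base texture.
            const float* sample(float u, float v) const {
                for (const DetailPatch& p : patches) {
                    if (u >= p.u0 && u < p.u1 && v >= p.v0 && v < p.v1) {
                        float lu = (u - p.u0) / (p.u1 - p.u0);   // local patch coords
                        float lv = (v - p.v0) / (p.v1 - p.v0);
                        return p.highRes.sample(lu, lv);
                    }
                }
                return base.sample(u, v);
            }
        };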