Search Results

  • Item
    Projection Mapping for In-Situ Surgery Planning by the Example of DIEP Flap Breast Reconstruction
    (The Eurographics Association, 2021) Martschinke, Jana; Klein, Vanessa; Kurth, Philipp; Engel, Klaus; Ludolph, Ingo; Hauck, Theresa; Horch, Raymund; Stamminger, Marc; Oeltze-Jafra, Steffen and Smit, Noeska N. and Sommer, Björn and Nieselt, Kay and Schultz, Thomas
    Nowadays, many surgical procedures require preoperative planning, mostly relying on data from 3D imaging techniques such as computed tomography or magnetic resonance imaging. However, preoperative assessment of this data is carried out on the PC (using classical CT/MR viewing software) and not on the patient's body itself. Surgeons therefore need to transfer both their overall understanding of the patient's individual anatomy and specific markers and labels for important points from the PC to the patient, relying only on mental visualization or approximate measurement. To close the gap between preoperative planning on the PC and surgery on the patient, we propose a system that directly projects preoperative knowledge onto the body surface by projection mapping. As a result, we are able to display both assigned labels and a view-dependent volumetric rendering of the 3D data in situ. Furthermore, we offer a method to interactively navigate through the data and add 3D markers directly in the projected volumetric view. We demonstrate the benefits of our approach using DIEP flap breast reconstruction as an example. By means of a small pilot study, we show that our method outperforms standard surgical planning in accuracy and can easily be understood and utilized even by persons without any medical knowledge.
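    To make the projection step concrete, here is a minimal sketch (in Python, with hypothetical names, not the authors' implementation) of mapping a 3D marker from patient/CT coordinates to projector pixel coordinates, assuming a 3x4 projector matrix obtained from a prior calibration:

      import numpy as np

      def project_marker(P, x_world):
          """Project a 3D point through the calibrated projector matrix P."""
          x_h = np.append(x_world, 1.0)     # homogeneous coordinates
          u, v, w = P @ x_h                 # 3x4 pinhole-style projection
          return np.array([u / w, v / w])   # projector pixel to draw the label at

      # Toy projector: 600 px focal length, principal point at (640, 360)
      P = np.array([[600.0, 0.0, 640.0, 0.0],
                    [0.0, 600.0, 360.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
      print(project_marker(P, np.array([0.1, -0.05, 1.2])))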
  • Item
    Path-Traced Motion Blur using Motion Trees
    (The Eurographics Association, 2020) Martinek, Magdalena; Thiemann, Philip; Stamminger, Marc; Biasotti, Silvia and Pintus, Ruggero and Berretti, Stefano
    Motion blur is an important effect in photo-realistic rendering. Distribution ray tracing can simulate motion blur very well by integrating light over both the spatial and the temporal domain. However, extending the problem by the temporal dimension entails many challenges, particularly in cinematic multi-bounce path tracing of complex scenes, where heavy-weight geometry with complex lighting and even off-screen elements contribute to the final image. In particular, for fast-moving objects, undersampling in the time domain results in severe artefacts. In this paper, we propose the Motion Tree, a novel level-of-detail data structure for the efficient handling of animated objects that filters in both the spatial and the temporal domain. The Motion Tree is a compact nesting of a temporal interval binary tree, which filters time-consecutive data, and a sparse voxel octree (SVO), which simplifies spatially nearby data. It is generated in a pre-process and fits nicely into any conventional physically based path tracer. When used in a production-scale environment, it significantly reduces memory requirements and speeds up rendering, with user control over the degree of impact on quality.
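    The abstract describes the Motion Tree as a temporal interval binary tree whose nodes reference spatially filtered geometry. A hedged structural sketch of the temporal half, with illustrative names rather than the paper's code:

      class MotionTreeNode:
          def __init__(self, t0, t1, svo, left=None, right=None):
              self.t0, self.t1 = t0, t1    # shutter sub-interval covered by this node
              self.svo = svo               # spatially filtered geometry (e.g., an SVO)
              self.left, self.right = left, right

      def select_interval(node, t, max_depth):
          """Descend to the finest temporal node containing sample time t."""
          depth = 0
          while node.left is not None and depth < max_depth:
              mid = 0.5 * (node.t0 + node.t1)
              node = node.left if t < mid else node.right
              depth += 1
          return node.svo   # traverse this SVO for a ray at time t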
  • Item
    Neural Volumetric Level of Detail for Path Tracing
    (The Eurographics Association, 2024) Stadter, Linda; Hofmann, Nikolai; Stamminger, Marc; Linsen, Lars; Thies, Justus
    We introduce a neural level-of-detail pipeline for use in a GPU path tracer, based on a sparse volumetric representation derived from neural radiance fields. We pre-compute lighting and occlusion to train a neural radiance field that faithfully captures appearance and shading via image-based optimization. By converting the resulting neural network into an efficiently rendered representation, we eliminate costly evaluations at runtime and keep performance competitive. When applying our representation to certain areas of the scene, we trade a slight bias from gradient-based optimization and lossy volumetric conversion for highly anti-aliased results at low sample counts. This enables virtually noise-free and temporally stable results at low computational cost and without any additional post-processing such as denoising. We demonstrate the applicability of our method to both individual objects and a challenging outdoor scene composed of highly detailed foliage.
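    Rendering such a baked volumetric level of detail typically comes down to ray marching a voxel grid of precomputed density and radiance with the standard emission-absorption model. A hedged sketch under that assumption (grid layout and names are illustrative, not the paper's code; the grid is dense here, sparse in practice):

      import numpy as np

      def march(density, radiance, origin, direction, step=0.01, n_steps=256):
          """density: res^3 array; radiance: res^3 x 3 array; ray in [0,1)^3."""
          color = np.zeros(3)
          transmittance = 1.0
          res = density.shape[0]
          for i in range(n_steps):
              p = origin + (i + 0.5) * step * direction
              idx = tuple(np.clip((p * res).astype(int), 0, res - 1))
              alpha = 1.0 - np.exp(-density[idx] * step)   # emission-absorption model
              color += transmittance * alpha * radiance[idx]
              transmittance *= 1.0 - alpha
              if transmittance < 1e-3:                     # early ray termination
                  break
          return color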
  • Item
    Professional Board Report
    (2024-04-18) Stamminger, Marc
  • Item
    SuBloNet: Sparse Super Block Networks for Large Scale Volumetric Fusion
    (The Eurographics Association, 2021) Rückert, Darius; Stamminger, Marc; Andres, Bjoern and Campen, Marcel and Sedlmair, Michael
    Training and inference of convolutional neural networks (CNNs) on truncated signed distance fields (TSDFs) is a challenging task. Large parts of the scene are usually empty, which makes dense implementations inefficient in terms of memory consumption and compute throughput. However, due to the truncation distance, non-zero values are grouped around the surface, creating small dense blocks inside the large empty space. We show that this structure can be exploited by storing the TSDF in a block-sparse tensor and then decomposing it into rectilinear super blocks. A super block is a dense 3D cuboid of variable size and can be processed by conventional CNNs. We analyze the rectilinear decomposition and present a formulation for computing the bandwidth-optimal solution given a specific network architecture. However, computing this solution is NP-complete; we therefore also present a heuristic approach for fast training and inference. We verify the effectiveness of SuBloNet and report a speedup of 4x over dense implementations and 1.7x over state-of-the-art sparse implementations. Using the super block architecture, we show that recurrent volumetric fusion is now possible on large-scale scenes. Such a system is able to reconstruct high-quality surfaces from few noisy depth images.
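    To illustrate the decomposition idea, here is a simple greedy variant that grows axis-aligned cuboids over the occupied blocks of a sparse volume so each cuboid can be fed to a dense CNN. This is our own illustration of the concept, not the paper's bandwidth-optimal solver or its heuristic:

      import numpy as np

      def greedy_super_blocks(occ):
          """occ: 3D boolean array of occupied blocks -> list of (lo, hi) cuboids."""
          occ = occ.copy()
          blocks = []
          while occ.any():
              lo = np.argwhere(occ)[0]          # seed: first occupied block
              hi = lo + 1
              for axis in range(3):             # try to extend along each axis
                  while hi[axis] < occ.shape[axis]:
                      lo2, hi2 = lo.copy(), hi.copy()
                      lo2[axis], hi2[axis] = hi[axis], hi[axis] + 1
                      slab = occ[lo2[0]:hi2[0], lo2[1]:hi2[1], lo2[2]:hi2[2]]
                      if not slab.all():        # stop when the next slab has holes
                          break
                      hi[axis] += 1
              occ[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = False
              blocks.append((lo, hi))
          return blocks

      occ = np.zeros((4, 4, 4), dtype=bool)
      occ[1:3, 0:4, 2:4] = True
      print(greedy_super_blocks(occ))           # -> a single cuboid covering the region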
  • Item
    TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Franke, Linus; Rückert, Darius; Fink, Laura; Stamminger, Marc; Bermano, Amit H.; Kalogerakis, Evangelos
    Point-based radiance field rendering has demonstrated impressive results for novel view synthesis, offering a compelling blend of rendering quality and computational efficiency. However, even the latest approaches in this domain are not without shortcomings. 3D Gaussian Splatting [KKLD23] struggles when tasked with rendering highly detailed scenes due to blurring and cloudy artifacts. ADOP [RFS22], on the other hand, produces crisper images, but its neural reconstruction network reduces performance, suffers from temporal instability, and cannot effectively fill large gaps in the point cloud. In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP. The fundamental concept behind our novel technique is to rasterize points into a screen-space image pyramid, with the pyramid layer selected according to the projected point size. This approach allows rendering arbitrarily large points using a single trilinear write. A lightweight neural network is then used to reconstruct a hole-free image including detail beyond the splat resolution. Importantly, our render pipeline is entirely differentiable, allowing for automatic optimization of both point sizes and positions. Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in rendering quality while maintaining a real-time frame rate of 60 frames per second on readily available hardware. This performance extends to challenging scenarios such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage. The project page is located at: https://lfranke.github.io/trips
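    A hedged sketch of what such a trilinear write could look like: the pyramid layer is chosen from the projected point radius, and the point's feature is written with bilinear weights into the two nearest layers. Names and the exact level/blend formulas are assumptions for exposition, not the TRIPS implementation:

      import math
      import numpy as np

      def splat(pyramid, x, y, radius_px, feature):
          """pyramid: list of HxWxC numpy buffers, level 0 = full resolution."""
          level = max(0.0, math.log2(max(radius_px, 1.0)))  # layer from point size
          l0 = min(int(level), len(pyramid) - 2)            # needs >= 2 layers
          t = min(level - l0, 1.0)                          # blend between two layers
          for l, w_level in ((l0, 1.0 - t), (l0 + 1, t)):
              scale = 0.5 ** l
              px, py = x * scale - 0.5, y * scale - 0.5     # position in layer l
              ix, iy = int(math.floor(px)), int(math.floor(py))
              fx, fy = px - ix, py - iy
              for dy in (0, 1):                             # bilinear footprint
                  for dx in (0, 1):
                      w = w_level * (fx if dx else 1 - fx) * (fy if dy else 1 - fy)
                      u, v = ix + dx, iy + dy
                      if 0 <= v < pyramid[l].shape[0] and 0 <= u < pyramid[l].shape[1]:
                          pyramid[l][v, u] += w * feature

      pyr = [np.zeros((256 >> l, 256 >> l, 3)) for l in range(3)]
      splat(pyr, x=100.0, y=80.0, radius_px=3.0, feature=np.array([1.0, 0.5, 0.2]))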
  • Item
    Time‐Warped Foveated Rendering for Virtual Reality Headsets
    (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021) Franke, Linus; Fink, Laura; Martschinke, Jana; Selgrad, Kai; Stamminger, Marc; Benes, Bedrich and Hauser, Helwig
    Rendering in real time for virtual reality headsets with high user immersion is challenging due to strict framerate constraints and a low tolerance for artefacts. Eye-tracking-based foveated rendering presents an opportunity to strongly increase performance without loss of perceived visual quality. To this end, we propose a novel foveated rendering method for virtual reality headsets with integrated eye-tracking hardware. Our method recycles pixels in the periphery by spatio-temporally reprojecting them from previous frames. Artefacts and disocclusions caused by this reprojection are detected and re-evaluated according to a confidence value determined by a newly introduced, formalized perception-based metric referred to as the confidence function. The foveal region, as well as areas with low confidence values, are redrawn efficiently, as the confidence value allows for fine-grained regulation of hierarchical geometry and pixel culling. Hence, the average primitive processing and shading costs are lowered dramatically. Evaluated against both regular rendering and established foveated rendering methods, our approach shows increased performance in both cases. Furthermore, our method is not restricted to static scenes and provides an acceleration structure for post-processing passes.
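    A sketch in the spirit of such a confidence test: a reprojected peripheral pixel is kept only if its confidence (falling with disocclusion error, rising with eccentricity from the gaze point) stays above a threshold. The weights and thresholds here are assumptions, not the paper's published confidence function:

      import math

      def confidence(pixel, gaze, depth_err, fovea_radius=0.15, k_depth=8.0):
          """pixel, gaze: normalized [0,1]^2 screen positions; depth_err: reprojection error."""
          ecc = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])  # eccentricity
          if ecc < fovea_radius:
              return 0.0                     # foveal region: always redraw
          peripheral = min(1.0, (ecc - fovea_radius) / (1.0 - fovea_radius))
          return peripheral * math.exp(-k_depth * depth_err)  # penalize disocclusion

      def reuse_pixel(pixel, gaze, depth_err, threshold=0.5):
          return confidence(pixel, gaze, depth_err) > threshold  # else re-render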
  • Item
    Neural Denoising for Path Tracing of Medical Volumetric Data
    (ACM, 2020) Hofmann, Nikolai; Martschinke, Jana; Engel, Klaus; Stamminger, Marc; Yuksel, Cem and Membarth, Richard and Zordan, Victor
    In this paper, we transfer machine learning techniques previously applied to denoising surface-only Monte Carlo renderings to path-traced visualizations of medical volumetric data. In the domain of medical imaging, path-traced videos have proven to be an efficient means to visualize and understand internal structures, in particular for less experienced viewers such as students or patients. However, the computational demands for rendering high-quality path-traced videos are very high due to the large number of samples necessary for each pixel. To accelerate the process, we present a learning-based technique for denoising path-traced videos of volumetric data by effectively increasing the sample count per pixel, both through spatial filtering (integrating neighboring samples) and temporal filtering (reusing samples over time). Our approach uses a set of additional features and a loss function, both specifically designed for the volumetric case. Furthermore, we present a novel network architecture tailored to our purpose and introduce reprojection of samples to improve temporal stability and reuse samples across frames. As a result, we achieve good image quality even from severely undersampled input images, as visible in the teaser image.
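    The temporal-reuse step typically reprojects a pixel's world position through the previous frame's view-projection matrix so last frame's radiance sample can be fetched and blended. A minimal sketch under that assumption (matrix conventions and names are ours, not the paper's code):

      import numpy as np

      def reproject(world_pos, prev_view_proj, width, height):
          """Previous-frame pixel coordinates of a world-space point, or None."""
          clip = prev_view_proj @ np.append(world_pos, 1.0)
          ndc = clip[:3] / clip[3]                    # perspective divide
          u = (ndc[0] * 0.5 + 0.5) * width
          v = (ndc[1] * 0.5 + 0.5) * height
          if 0.0 <= u < width and 0.0 <= v < height:
              return u, v                             # fetch history sample here
          return None                                 # disocclusion: no history

      def blend(history, current, alpha=0.2):
          """Exponential moving average over frames for temporal reuse."""
          return alpha * current + (1.0 - alpha) * history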
  • Item
    Visualization Aided Interface Reconstruction
    (The Eurographics Association, 2020) Penk, Dominik; Müller, Jonas; Felfer, Peter; Grosso, Roberto; Stamminger, Marc; Krüger, Jens and Niessner, Matthias and Stückler, Jörg
    Modern atom probe tomography measurements generate large point clouds of atomic locations in solids. A common analysis task in these datasets is to relate the locations of specific atom types to crystallographic features such as the interface between two crystals (grain boundaries). In cases where these features represent surfaces, their extraction is carried out manually in most cases. In this paper, we propose a method for semi-automatic extraction of such two-dimensional manifold and non-manifold surfaces from a given dataset. We first help the user filter the atom data by providing an interactive visualization of the dataset tailored to enhancing these interfaces. Once a desired set of points representing the interface is found, we provide an automatic surface extraction method to compute an explicit parametric representation of the visualized surface. In the case of non-manifold interface structures, this parametric representation is then used to calculate the intersections of the individual manifold parts of the interfaces.
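    One plausible flavor of the filtering step: score each atom by the local fraction of a chosen species among its k nearest neighbours, so atoms that segregate to grain boundaries stand out and can be thresholded before a surface is fitted. Brute-force neighbour search is used here for clarity; the criterion and names are illustrative assumptions, not the paper's method:

      import numpy as np

      def local_species_fraction(positions, species, target, k=16):
          """positions: Nx3 atom locations; species: length-N label array."""
          frac = np.empty(len(positions))
          for i in range(len(positions)):
              d2 = np.sum((positions - positions[i]) ** 2, axis=1)
              nn = np.argsort(d2)[1:k + 1]         # k nearest neighbours, skip self
              frac[i] = np.mean(species[nn] == target)
          return frac                              # threshold to isolate the interface

      # e.g. keep atoms whose neighbourhood is rich in the segregating species:
      # interface_points = positions[local_species_fraction(positions, species, "C") > 0.3]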
  • Item
    Professional Board Report - Update
    (2024-04-22) Stamminger, Marc