Search Results

Now showing 1–10 of 40
  • Item
    Reconstructing Lost Altarpieces: A Differentiable Rendering Approach
    (The Eurographics Association, 2024) Pagès-Vilà, Anna; Munoz-Pandiella, Imanol; Corsini, Massimiliano; Ferdani, Daniele; Kuijper, Arjan; Kutlu, Hasan
    Studying works that have completely or partially disappeared is always difficult due to the lack of information. In more fortunate scenarios where photographs were taken before the destruction, the study of the piece is limited by the viewpoints captured in the available photographs. In this interdisciplinary research, we present a new methodology for reconstructing lost altarpieces from a single historical image using differentiable rendering techniques. We test our methodology by reconstructing several reliefs from the altarpiece of Sant Joan Baptista (Valls, Spain), which was destroyed in 1936. These results are valuable for both experts and the public, as they facilitate a better understanding of the reliefs' volumes and their spatial relationships, representing a significant advancement in the virtual recovery of lost artifacts.
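    The concrete optimization pipeline is not reproduced here, but the pattern the abstract describes (adjust shape parameters by gradient descent until a differentiable rendering matches the historical photograph) can be sketched. Below is a minimal, self-contained toy in PyTorch that recovers a relief-like height field from a single shaded image; the Lambertian shader stands in for the paper's full differentiable renderer, and all names and parameters are illustrative assumptions, not the authors' code.

        import torch

        # Toy inverse rendering: recover a height field from one "photograph"
        # rendered under fixed directional lighting (Lambertian shading).
        H = W = 64
        light = torch.tensor([0.3, 0.5, 0.81])
        light = light / light.norm()

        def shade(height):
            # Finite-difference normals of the height field, then N·L shading.
            dx = (height[:, 1:] - height[:, :-1])[:-1, :]
            dy = (height[1:, :] - height[:-1, :])[:, :-1]
            n = torch.stack([-dx, -dy, torch.ones_like(dx)], dim=-1)
            n = n / n.norm(dim=-1, keepdim=True)
            return (n @ light).clamp(min=0.0)

        target = shade(torch.randn(H, W) * 0.05)   # stand-in for the photo
        height = torch.zeros(H, W, requires_grad=True)
        opt = torch.optim.Adam([height], lr=1e-2)
        for step in range(500):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(shade(height), target)
            loss.backward()
            opt.step()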
  • Item
    Patch Decomposition for Efficient Mesh Contours Extraction
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Tsiapkolis, Panagiotis; Bénard, Pierre; Garces, Elena; Haines, Eric
    Object-space occluding contours of triangular meshes (a.k.a. mesh contours) are at the core of many methods in computer graphics and computational geometry. A number of hierarchical data structures have been proposed to accelerate their computation on the CPU, but they do not map well to the GPU for real-time applications such as video games. We show that a simple, flat data structure composed of patches bounded by a normal cone and a bounding sphere can reach this goal, provided it is constructed to maximize the probability that a patch is culled over all viewpoints. We derive a heuristic metric to efficiently estimate this probability, and present a greedy, bottom-up algorithm that constructs patches by grouping mesh edges according to this metric. In addition, we propose an effective way of computing their bounding spheres. We demonstrate through extensive experiments that this data structure matches state-of-the-art CPU performance while also being perfectly adapted to the GPU, yielding speedups of up to 5×.
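    As a rough illustration of why a normal cone plus bounding sphere suffices per patch: a contour edge needs one front-facing and one back-facing adjacent triangle, so a patch whose normals are provably all front-facing (or all back-facing) from the current viewpoint cannot contain a contour and can be skipped. The sketch below shows the standard conservative test such a patch enables; it is not the paper's construction heuristic, and the field names are assumptions.

        import numpy as np

        def can_cull(eye, center, radius, cone_axis, cone_half_angle):
            # True if no mesh contour can cross this patch as seen from `eye`.
            to_eye = eye - center
            dist = np.linalg.norm(to_eye)
            if dist <= radius:                 # eye inside the sphere: keep
                return False
            # Directions from patch points to the eye deviate from the
            # center-to-eye direction by at most arcsin(radius / dist).
            view_half_angle = np.arcsin(radius / dist)
            theta = np.arccos(np.clip(np.dot(cone_axis, to_eye / dist), -1.0, 1.0))
            spread = cone_half_angle + view_half_angle
            all_front = theta + spread < np.pi / 2
            all_back = theta - spread > np.pi / 2
            return all_front or all_back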
  • Item
    Skipping Spheres: SDF Scaling & Early Ray Termination for Fast Sphere Tracing
    (The Eurographics Association, 2024) Polychronakis, Andreas; Koulieris, George Alex; Mania, Katerina; Hunter, David; Slingsby, Aidan
    This paper presents a rapid rendering pipeline for sphere tracing Signed Distance Functions (SDFs), showcasing a notable boost in performance compared to the current state of the art. Existing methods endeavor to reduce the ray step count by adjusting step size using heuristics or by rendering multiple intermediate lower-resolution buffers to pre-calculate non-salient pixels at reduced quality. However, the accelerated performance with low-resolution buffers often introduces artifacts compared to fully sphere-traced scenes, especially for smaller features, which might go unnoticed altogether. Our approach significantly reduces step counts compared to prior work while minimizing artifacts. We accomplish this based on two key observations and by employing a single low-resolution buffer. First, we perform SDF scaling in the low-resolution buffer, effectively enlarging the footprint of the implicit surfaces when rendered in low resolution and ensuring visibility of all SDFs. Second, leveraging the low-resolution buffer rendering, we detect when a ray converges to high-cost surface edges and can terminate sphere tracing earlier than usual, further reducing the step count. Our method achieves a substantial performance improvement (exceeding 3× in certain scenes) compared to previous approaches while minimizing artifacts, as demonstrated in our visual fidelity evaluation.
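    For context, the sphere-tracing loop being accelerated is short; one plausible reading of the abstract's SDF scaling is to dilate the distance field in the low-resolution pass so thin features cannot be stepped over. The sketch below is a generic tracer with such a dilation knob, not the authors' pipeline; the scene, constants, and dilation mechanism are illustrative assumptions.

        import numpy as np

        def sphere_sdf(p, center=np.array([0.0, 0.0, 3.0]), radius=1.0):
            return np.linalg.norm(p - center) - radius

        def sphere_trace(origin, direction, sdf, dilation=0.0,
                         eps=1e-4, t_max=100.0, max_steps=256):
            # March along the ray, stepping by the (optionally dilated) SDF.
            # dilation > 0 fattens the surface, so a coarse pass cannot miss
            # small features and its hit distances stay conservative.
            t, steps = 0.0, 0
            while t < t_max and steps < max_steps:
                d = sdf(origin + t * direction) - dilation
                if d < eps:
                    return t, steps            # hit (or stuck near an edge)
                t += d
                steps += 1
            return None, steps                 # miss

    A low-resolution pass with dilation > 0 can then seed the full-resolution pass with a safe starting t per pixel, and rays that take many steps without converging mark the expensive silhouette regions, in the spirit of the early termination the abstract describes.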
  • Item
    Efficient Construction of Out-of-Core Octrees for Managing Large Point Sets
    (The Eurographics Association, 2024) Fischer, Jonathan; Rosenthal, Paul; Linsen, Lars; Reina, Guido; Rizzi, Silvio
    Among the various space-partitioning approaches for managing point sets out-of-core, octrees are commonly used because they are simple and effective. An efficient and adaptive out-of-core octree construction method was proposed by Kontkanen et al. [KTO11], generating the octree data in a single sweep over the points sorted in Morton order, for a given maximum point count m per octree leaf. Their method keeps m+1 points in memory during the process, which may become an issue for large m. We present an extension to their algorithm that requires a minimum of two points to be held in memory in addition to a limited sequence of integers, thus adapting their method for use cases with large m. Moreover, we do not compute Morton codes explicitly but rather perform both the sorting and the octree generation directly on the point data, supporting coordinates of any finite precision.
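    The "no explicit Morton codes" part rests on a well-known primitive: two points with non-negative integer coordinates can be compared in Morton (Z-curve) order directly via the highest differing bit across dimensions. The sketch below shows that comparator and its use for sorting; the paper goes further, supporting coordinates of any finite precision and emitting the octree in the same sweep.

        import functools

        def _less_msb(x, y):
            # True if the most significant set bit of x is below that of y.
            return x < y and x < (x ^ y)

        def morton_less(a, b):
            # Morton-order comparison without interleaving bits into a code:
            # compare in the dimension whose XOR has the highest set bit.
            best_dim, best_xor = 0, 0
            for dim in range(len(a)):
                x = a[dim] ^ b[dim]
                if _less_msb(best_xor, x):
                    best_dim, best_xor = dim, x
            return a[best_dim] < b[best_dim]

        pts = [(5, 1, 3), (0, 2, 7), (4, 4, 4), (1, 1, 1)]
        pts.sort(key=functools.cmp_to_key(
            lambda a, b: -1 if morton_less(a, b) else
                          1 if morton_less(b, a) else 0))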
  • Item
    SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Wang, Yuze; Wang, Junyi; Wang, Chen; Duan, Wantong; Bao, Yongtang; Qi, Yue; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
    This paper introduces a novel continual learning framework for synthesising novel views of multiple scenes, learning multiple 3D scenes incrementally and updating the network parameters only with the training data of the upcoming new scene. We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function. While NeRF and its extensions have shown a powerful capability for rendering photo-realistic novel views of a single 3D scene, efficiently managing these growing 3D NeRF assets is a new scientific problem. Very few works focus on the efficient representation or continual learning capability of multiple scenes, which is crucial for practical applications of NeRF. To achieve these goals, our key idea is to represent multiple scenes as the linear combination of a cross-scene weight matrix and a set of scene-specific weight matrices generated from a global parameter generator. Furthermore, we propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model. Representing multiple 3D scenes with such weight matrices significantly reduces memory requirements. At the same time, the uncertain surface distillation strategy largely overcomes the catastrophic forgetting problem and maintains the photo-realistic rendering quality of previous scenes. Experiments show that the proposed approach achieves state-of-the-art rendering quality for continual-learning NeRF on the NeRF-Synthetic, LLFF, and TanksAndTemples datasets while incurring very low additional storage cost.
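    The memory argument follows directly from the parameterization: each new scene stores only a small set of combination coefficients (plus generator conditioning) rather than a full MLP. Below is a minimal numpy sketch of our reading of that weight decomposition, with all sizes and names as assumptions rather than the paper's architecture.

        import numpy as np

        rng = np.random.default_rng(0)
        d_in, d_out, n_basis = 256, 256, 8

        W_cross = rng.normal(size=(d_out, d_in))         # shared across scenes
        basis = rng.normal(size=(n_basis, d_out, d_in))  # from the generator

        def scene_weights(alpha):
            # Per-scene layer weights: shared matrix plus a scene-specific
            # linear combination of generated basis matrices.
            return W_cross + np.tensordot(alpha, basis, axes=1)

        alpha_scene3 = 0.1 * rng.normal(size=n_basis)    # tiny per-scene state
        W3 = scene_weights(alpha_scene3)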
  • Item
    Bridge Sampling for Connections via Multiple Scattering Events
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Schüßler, Vincent; Hanika, Johannes; Dachsbacher, Carsten; Garces, Elena; Haines, Eric
    Explicit sampling of, and connecting to, light sources is often essential for reducing variance in Monte Carlo rendering. In dense, forward-scattering participating media, its benefit declines, as significant transport happens over longer multiple-scattering paths around the straight connection to the light. Sampling these paths is challenging, as their contribution is shaped by the product of reciprocal squared distance terms and the phase functions. Previous work demonstrates that sampling several of these terms jointly is crucial. However, these methods are tied to low-order scattering or struggle with highly peaked phase functions. We present a method for sampling a bridge: a subpath of arbitrary vertex count connecting two vertices. Its probability density is proportional to all phase functions at inner vertices and all reciprocal squared distance terms. To achieve this, we importance sample the phase functions first and subsequently all distances at once. For the latter, we sample an independent, preliminary distance for each edge of the bridge and afterwards scale the bridge so that it matches the connection distance. The scale factor can be marginalized out analytically to obtain the probability density of the bridge. This approach leads to a simple algorithm and can construct bridges of any vertex count. For the case of one or two inserted vertices, we also show an alternative without scaling or marginalization. For practical path sampling, we present a method to sample the number of bridge vertices, whose distribution depends on the connection distance, the phase function, and the collision coefficient. While our importance sampling treats media as homogeneous, we demonstrate its effectiveness on heterogeneous media.
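    The scaling step is the heart of the construction and is easy to sketch for an isotropic phase function: sample preliminary free-flight lengths and directions independently, then rescale the whole chain so its end-to-end span equals the connection distance. The code below shows only that geometric step; PDF evaluation, the analytic marginalization of the scale factor, and the rotation onto the actual connection axis are omitted, and all names are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_bridge_edges(connection_dist, n_inner, sigma_t):
            # One edge more than inner vertices; isotropic phase function,
            # so directions are uniform on the sphere.
            n_edges = n_inner + 1
            dirs = rng.normal(size=(n_edges, 3))
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            lengths = rng.exponential(scale=1.0 / sigma_t, size=n_edges)
            edges = dirs * lengths[:, None]
            span = np.linalg.norm(edges.sum(axis=0))
            return edges * (connection_dist / span)   # bridge scaling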
  • Item
    BlendPCR: Seamless and Efficient Rendering of Dynamic Point Clouds captured by Multiple RGB-D Cameras
    (The Eurographics Association, 2024) Mühlenbrock, Andre; Weller, Rene; Zachmann, Gabriel; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
    Traditional techniques for rendering continuous surfaces from dynamic, noisy point clouds in multi-camera setups often suffer from disruptive artifacts, similar to z-fighting, in overlapping areas. We introduce BlendPCR, an advanced rendering technique that effectively addresses these artifacts through a dual approach of point cloud processing and screen-space blending. Additionally, we present a UV-coordinate encoding scheme that enables high-resolution texture mapping via standard camera SDKs. We demonstrate that our approach offers superior visual rendering quality over traditional splat- and mesh-based methods and exhibits none of the overlap artifacts that still occur in leading-edge NeRF- and Gaussian-Splatting-based approaches such as Pointersect and P2ENet. In practical tests with seven Microsoft Azure Kinects, processing, including uploading the point clouds to the GPU, requires only 13.8 ms (when using one color per point) or 29.2 ms (when using high-resolution color textures), and rendering at a resolution of 3580 × 2066 takes just 3.2 ms, demonstrating its suitability for real-time VR applications.
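    The screen-space blending half of the method can be pictured as normalized weighted compositing of the per-camera renderings, which removes the hard seams that per-pixel "winner takes all" selection produces in overlap regions. The snippet below is our placeholder illustration of that idea, not the paper's exact weighting scheme.

        import numpy as np

        def blend_cameras(colors, weights, eps=1e-6):
            # colors: (n_cams, H, W, 3); weights: (n_cams, H, W), larger
            # where a camera sees the surface head-on and with low noise.
            w = weights[..., None]
            return (colors * w).sum(axis=0) / (w.sum(axis=0) + eps)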
  • Item
    GSEditPro: 3D Gaussian Splatting Editing with Attention-based Progressive Localization
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Sun, Yanhao; Tian, Runze; Han, Xiao; Liu, Xinyao; Zhang, Yan; Xu, Kai; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
    With the emergence of large-scale Text-to-Image (T2I) models and implicit 3D representations like Neural Radiance Fields (NeRF), many text-driven generative editing methods based on NeRF have appeared. However, the implicit encoding of geometric and textural information poses challenges in accurately locating and controlling objects during editing. Recently, significant advancements have been made in editing methods for 3D Gaussian Splatting, a real-time rendering technology that relies on explicit representation. However, these methods still suffer from issues including inaccurate localization and limited control over editing. To tackle these challenges, we propose GSEditPro, a novel 3D scene editing framework that allows users to perform various creative and precise edits using text prompts only. Leveraging the explicit nature of the 3D Gaussian distribution, we introduce an attention-based progressive localization module that adds semantic labels to each Gaussian during rendering. This enables precise localization of editing areas by classifying Gaussians based on their relevance to the editing prompts, derived from the cross-attention layers of the T2I model. Furthermore, we present an innovative editing optimization method based on 3D Gaussian Splatting that obtains stable and refined editing results through the guidance of Score Distillation Sampling and pseudo ground truth. We prove the efficacy of our method through extensive experiments.
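    At its core, the localization module amounts to a per-Gaussian relevance score accumulated from the T2I cross-attention maps over rendered views, followed by a classification into editable and frozen Gaussians. A hedged sketch of that final step, with thresholds and array names as illustrative assumptions rather than the authors' implementation:

        import numpy as np

        def select_editable(relevance, counts, threshold=0.5, min_views=3):
            # relevance: attention mass accumulated per Gaussian across views;
            # counts: how many views each Gaussian was visible in.
            seen = counts >= min_views
            mean_rel = np.where(seen, relevance / np.maximum(counts, 1), 0.0)
            return (mean_rel > threshold) & seen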
  • Item
    Fast Approximation to Large-Kernel Edge-Preserving Filters by Recursive Reconstruction from Image Pyramids
    (The Eurographics Association, 2024) Xu, Tianchen; Yang, Jiale; Qin, Yiming; Sheng, Bin; Wu, Enhua; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
    Edge-preserving filters, also known as bilateral filters, are fundamental to graphics rendering techniques, providing greater generality and edge-preservation capability than pure convolution filters. However, sampling with a large kernel per pixel for these filters can be computationally intensive in real-time rendering. Existing acceleration methods for approximating edge-preserving filters still struggle to balance blur controllability, edge clarity, and runtime efficiency. In this paper, we propose a novel scheme for approximating edge-preserving filters with large anisotropic kernels by recursively reconstructing them from multi-image pyramid (MIP) layers that are filtered with weighting in a dual 3×3 kernel space. Our approach introduces a concise, unified processing pipeline independent of kernel size, which includes upsampling and downsampling of MIP layers and enables the integration of custom edge-stopping functions. We also derive the implicit relations of the sampling weights and formulate a weight template model for inference. Furthermore, we convert the pipeline into a lightweight neural network for numerical solution through data training. Consequently, our image post-processors achieve high-quality, high-performance edge-preserving filtering in real time, using the same control parameters as the original bilateral filters. These filters are applicable to depth of field, global illumination denoising, and screen-space particle rendering. The simplicity of the reconstruction process in our pipeline makes it user-friendly and cost-effective, saving both runtime and implementation costs.
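    To make the recursion concrete: each pyramid level is produced by a small-kernel filter whose weights combine a fixed spatial part with an edge-stopping range part, and the large-kernel result is reconstructed by blending levels back up. The sketch below shows a single edge-aware 2× downsample in that spirit; the paper's dual 3×3 kernels, upsampling path, and learned weight template are not reproduced, and the blockwise reference pixel is a simplification.

        import numpy as np

        def edge_stop(diff, sigma_r):
            return np.exp(-(diff * diff) / (2.0 * sigma_r * sigma_r))

        def edge_aware_downsample(img, guide, sigma_r):
            # 2x downsample: each coarse pixel averages its 2x2 block with
            # range weights taken against the block's top-left guide value.
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
            blk = img[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
            gid = guide[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
            wgt = edge_stop(gid - gid[..., :1, :1], sigma_r)
            return (blk * wgt).sum(axis=(-2, -1)) / wgt.sum(axis=(-2, -1))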
  • Item
    Real-Time Rendering of Glints in the Presence of Area Lights
    (The Eurographics Association, 2024) Kneiphof, Tom; Klein, Reinhard; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
    Many real-world materials are characterized by a glittery appearance. Reproducing this effect in physically based renderings is a challenging problem due to its discrete nature, especially in real-time applications which require a consistently low runtime. Recent work focuses on glittery appearance illuminated by infinitesimally small light sources only. For light sources like the sun this approximation is a reasonable choice. In the real world however, all light sources are fundamentally area light sources. In this paper, we derive an efficient method for rendering glints illuminated by spatially constant diffuse area lights in real time. To this end, we require an adequate estimate for the probability of a single microfacet to be correctly oriented for reflection from the source to the observer. A good estimate is achieved either using linearly transformed cosines (LTC) for large light sources, or a locally constant approximation of the normal distribution for small spherical caps of light directions. To compute the resulting number of reflecting microfacets, we employ a counting model based on the binomial distribution. In the evaluation, we demonstrate the visual accuracy of our approach, which is easily integrated into existing real-time rendering frameworks, especially if they already implement shading for area lights using LTCs and a counting model for glint shading under point and directional illumination. Besides the overhead of the preexisting constituents, our method adds little to no additional overhead.