Search Results

Now showing 1 - 10 of 51
  • Item
    Analytic Spectral Integration of Birefringence-Induced Iridescence
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Steinberg, Shlomi; Boubekeur, Tamy and Sen, Pradeep
    Optical phenomena that are only observable in optically anisotropic materials are generally ignored in computer graphics. However, such optical effects are not restricted to exotic materials and can also be observed in common translucent objects when optical anisotropy is induced, e.g. via mechanical stress. Furthermore, accurate prediction and reproduction of these optical effects has important practical applications. We provide a short but complete analysis of the relevant electromagnetic theory of light propagation in optically anisotropic media and derive the full set of formulations required to render birefringent materials. We then present a novel method for spectral integration of refraction and reflection in an anisotropic slab. Our approach allows fast and robust rendering of birefringence-induced iridescence in a physically faithful manner and is applicable to both real-time and offline rendering.
  • Item
    Reconstructing Lost Altarpieces: A Differentiable Rendering Approach
    (The Eurographics Association, 2024) Pagès-Vilà, Anna; Munoz-Pandiella, Imanol; Corsini, Massimiliano; Ferdani, Daniele; Kuijper, Arjan; Kutlu, Hasan
    Studying works that have completely or partially disappeared is always difficult due to the lack of information. In more fortunate scenarios where photographs were taken before the destruction, the study of the piece is limited by the viewpoints captured in the available photographs. In this interdisciplinary research, we present a new methodology for reconstructing lost altarpieces from a single historical image, utilizing differentiable rendering techniques. We test our methodology by reconstructing some reliefs from the altarpiece of Sant Joan Baptista (Valls, Spain), which was destroyed in 1936. These results are valuable for both experts and the public, as they facilitate a better understanding of the reliefs' volumetric forms and their spatial relationships, representing a significant advancement in the virtual recovery of lost artifacts.
  • Item
    Patch Decomposition for Efficient Mesh Contours Extraction
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Tsiapkolis, Panagiotis; Bénard, Pierre; Garces, Elena; Haines, Eric
    Object-space occluding contours of triangular meshes (a.k.a. mesh contours) are at the core of many methods in computer graphics and computational geometry. A number of hierarchical data-structures have been proposed to accelerate their computation on the CPU, but they do not map well to the GPU for real-time applications, such as video games. We show that a simple, flat data-structure composed of patches bounded by a normal cone and a bounding sphere may reach this goal, provided it is constructed to maximize the probability for a patch to be culled over all viewpoints. We derive a heuristic metric to efficiently estimate this probability, and present a greedy, bottom-up algorithm that constructs patches by grouping mesh edges according to this metric. In addition, we propose an effective way of computing their bounding sphere. We demonstrate through extensive experiments that this data-structure achieves performance similar to the state of the art on the CPU while also being perfectly adapted to the GPU, leading to speedups of up to 5×.
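The core culling idea — a patch bounded by a normal cone and a bounding sphere contains no contour if, from the current viewpoint, it is provably entirely front- or back-facing — can be sketched as a conservative angular test. This is a minimal, hypothetical version for illustration; the paper's actual test, metric, and data layout are more involved:

```python
import numpy as np

def patch_can_be_culled(viewpoint, cone_axis, cone_half_angle,
                        sphere_center, sphere_radius):
    """Conservative contour-culling test for one patch.

    A silhouette edge can only occur where a view direction is (near-)
    perpendicular to a surface normal. If the angle between every normal
    in the cone and every possible view direction to the sphere is bounded
    away from 90 degrees, the patch is entirely front- or back-facing and
    cannot contain a contour, so it can be culled.
    """
    to_center = sphere_center - viewpoint
    dist = np.linalg.norm(to_center)
    if dist <= sphere_radius:          # viewpoint inside bounding sphere
        return False                   # cannot decide conservatively
    view_dir = to_center / dist
    # Angular radius of the bounding sphere as seen from the viewpoint.
    sphere_half_angle = np.arcsin(sphere_radius / dist)
    # Angle between the mean view direction and the normal-cone axis.
    angle = np.arccos(np.clip(np.dot(view_dir, cone_axis), -1.0, 1.0))
    # Worst-case bounds over all (normal, view direction) pairs.
    upper = angle + cone_half_angle + sphere_half_angle
    lower = angle - cone_half_angle - sphere_half_angle
    # Strictly back-facing or strictly front-facing => cullable.
    return upper < np.pi / 2 or lower > np.pi / 2
```

A flat array of such (axis, half-angle, center, radius) records is what maps well to a GPU: each patch is tested independently, with no tree traversal.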
  • Item
    Color-mapped Noise Vector Fields for Generating Procedural Micro-patterns
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Grenier, Charline; Sauvage, Basile; Dischler, Jean-Michel; Thery, Sylvain; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Stochastic micro-patterns successfully enhance the realism of virtual scenes. Procedural models using noise combined with transfer functions are extremely efficient. However, most patterns produced today employ 1D transfer functions, which assign color, transparency, or other material attributes based solely on a single scalar noise value. Multi-dimensional transfer functions have received widespread attention in other fields, such as scientific volume rendering, but their potential has not yet been well explored for modeling micro-patterns in procedural texturing. We propose a new procedural model for stochastic patterns, defined as the composition of a bi-dimensional transfer function (a.k.a. color-map) with a stochastic vector field. Our model is versatile, as it encompasses several existing procedural noises, including Gaussian noise and phasor noise. It also generates a much larger gamut of patterns, including locally structured patterns which are notoriously difficult to reproduce. We leverage the Gaussian assumption and a tiling and blending algorithm to provide real-time generation and filtering. A key contribution is a real-time approximation of the second order statistics over an arbitrary pixel footprint, which enables, in addition, the filtering of procedural normal maps. We exhibit a wide variety of results, including Gaussian patterns, profiled waves, concentric and non-concentric patterns.
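The model's core composition — a 2D transfer function evaluated at the two channels of a stochastic vector field — reduces, in its simplest discrete form, to a table lookup. The sketch below is an illustrative reading only (nearest-neighbour lookup, arbitrary noise); the paper's model adds the Gaussian assumption, tiling/blending, and footprint filtering:

```python
import numpy as np

def color_mapped_pattern(noise_uv, color_map):
    """Evaluate a micro-pattern as color_map(n(x)).

    noise_uv  : (H, W, 2) array; each channel is a scalar noise in [0, 1)
                (the two components of the stochastic vector field).
    color_map : (U, V, 3) array; the bi-dimensional transfer function (RGB).
    Returns an (H, W, 3) image: the pattern.
    """
    U, V, _ = color_map.shape
    # Nearest-neighbour lookup of the 2D transfer function.
    u = np.clip((noise_uv[..., 0] * U).astype(int), 0, U - 1)
    v = np.clip((noise_uv[..., 1] * V).astype(int), 0, V - 1)
    return color_map[u, v]
```

With a 1D transfer function only the first noise channel would drive the lookup; the second dimension is what makes locally structured patterns (e.g. profiled waves) expressible.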
  • Item
    Skipping Spheres: SDF Scaling & Early Ray Termination for Fast Sphere Tracing
    (The Eurographics Association, 2024) Polychronakis, Andreas; Koulieris, George Alex; Mania, Katerina; Hunter, David; Slingsby, Aidan
    This paper presents a rapid rendering pipeline for sphere tracing Signed Distance Functions (SDFs), showcasing a notable boost in performance compared to the current state-of-the-art. Existing methods endeavor to reduce the ray step count by adjusting step size using heuristics or by rendering multiple intermediate lower-resolution buffers to pre-calculate non-salient pixels at reduced quality. However, the accelerated performance with low-resolution buffers often introduces artifacts compared to fully sphere-traced scenes, especially for smaller features, which might go unnoticed altogether. Our approach significantly reduces steps compared to prior work while minimizing artifacts. We accomplish this based on two key observations and by employing a single low-resolution buffer: Firstly, we perform SDF scaling in the low-resolution buffer, effectively enlarging the footprint of the implicit surfaces when rendered in low resolution, ensuring visibility of all SDFs. Secondly, leveraging the low-resolution buffer rendering, we detect when a ray converges to high-cost surface edges and can terminate sphere tracing earlier than usual, further reducing step count. Our method achieves a substantial performance improvement (exceeding 3× in certain scenes) compared to previous approaches, while minimizing artifacts, as demonstrated in our visual fidelity evaluation.
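A plain sphere tracer makes both ideas concrete: marching steps by the distance bound, and an optional surface dilation that enlarges the implicit surface's footprint so small features cannot be missed in a low-resolution pass. This is a minimal sketch under my own assumptions (the `dilation` parameter is a crude stand-in for the paper's SDF scaling, and the early-termination logic is omitted):

```python
import math

def sphere_trace(sdf, origin, direction, dilation=0.0,
                 max_steps=256, eps=1e-4, t_max=100.0):
    """March along a ray, stepping by the (dilated) distance bound.

    dilation > 0 enlarges the surface's effective footprint, ensuring
    thin features still register when rendered at low resolution.
    Returns (hit distance or None, number of steps taken).
    """
    t, steps = 0.0, 0
    while t < t_max and steps < max_steps:
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = sdf(p) - dilation
        if d < eps:
            return t, steps          # hit (possibly on the dilated surface)
        t += d                       # safe step: no surface closer than d
        steps += 1
    return None, steps               # miss, or step budget exhausted

# Unit sphere at the origin as a demo SDF.
unit_sphere = lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0
```

Rays that graze surfaces take many tiny steps — exactly the "high-cost surface edges" the paper detects in the low-resolution buffer to terminate full-resolution tracing early.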
  • Item
    Denoising Deep Monte Carlo Renderings
    (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019) Vicini, D.; Adler, D.; Novák, J.; Rousselle, F.; Burley, B.; Chen, Min and Benes, Bedrich
    We present a novel algorithm to denoise deep Monte Carlo renderings, in which pixels contain multiple colour values, each for a different range of depths. Deep images are a more expressive representation of the scene than conventional flat images. However, since each depth bin receives only a fraction of the flat pixel's samples, denoising the bins is harder due to the less accurate mean and variance estimates. Furthermore, deep images lack a regular structure in depth—the number of depth bins and their depth ranges vary across pixels. This prevents a straightforward application of patch‐based distance metrics frequently used to improve the robustness of existing denoising filters. We address these constraints by combining a flat image‐space non‐local means filter operating on pixel colours with a cross‐bilateral filter operating on auxiliary features (albedo, normal, etc.). Our approach significantly reduces noise in deep images while preserving their structure. To the best of our knowledge, our algorithm is the first to enable efficient deep‐compositing workflows with denoised Monte Carlo renderings. We demonstrate the performance of our filter on a range of scenes highlighting the challenges and advantages of denoising deep images.
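The cross-bilateral half of the combination — weighting neighbours by similarity of noise-free auxiliary features rather than by the noisy colours themselves — can be sketched in 1D. This is a minimal illustration, not the paper's deep-image filter (which also involves a non-local means term and per-bin statistics):

```python
import numpy as np

def cross_bilateral_1d(colors, features, radius=2, sigma=0.1):
    """Denoise `colors` by averaging neighbours whose auxiliary
    `features` (e.g. albedo or a normal component) are similar.

    Because the guide signal is (nearly) noise-free, edges in the
    features are preserved while noise in the colors is averaged out.
    """
    n = len(colors)
    out = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        # Gaussian feature-similarity weights over the window.
        w = np.exp(-((features[lo:hi] - features[i]) ** 2)
                   / (2.0 * sigma ** 2))
        out[i] = np.sum(w * colors[lo:hi]) / np.sum(w)
    return out
```

With a small `sigma`, a hard feature edge effectively splits the averaging window in two, which is why feature edges survive filtering.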
  • Item
    Relativistic Effects for Time‐Resolved Light Transport
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Jarabo, Adrian; Masia, Belen; Velten, Andreas; Barsi, Christopher; Raskar, Ramesh; Gutierrez, Diego; Deussen, Oliver and Zhang, Hao (Richard)
    We present a real‐time framework which allows interactive visualization of relativistic effects for time‐resolved light transport. We leverage data from two different sources: real‐world data acquired with an effective exposure time of less than 2 picoseconds using an ultra‐fast imaging technique, and a transient renderer based on ray‐tracing. We explore the effects of time dilation, light aberration, frequency shift and radiance accumulation by modifying existing models of these relativistic effects to take into account the time‐resolved nature of light propagation. Unlike previous works, we do not impose limiting constraints in the visualization, allowing the virtual camera to freely explore a reconstructed 3D scene depicting dynamic illumination. Moreover, we consider not only linear motion, but also acceleration and rotation of the camera. We further introduce, for the first time, a pinhole camera model into our relativistic rendering framework, and account for subsequent changes in focal length and field of view as the camera moves through the scene.
  • Item
    Efficient Construction of Out-of-Core Octrees for Managing Large Point Sets
    (The Eurographics Association, 2024) Fischer, Jonathan; Rosenthal, Paul; Linsen, Lars; Reina, Guido; Rizzi, Silvio
    Among various space partitioning approaches for managing point sets out-of-core, octrees are commonly used because they are simple and effective. An efficient and adaptive out-of-core octree construction method has been proposed by Kontkanen et al. [KTO11], generating the octree data in a single sweep over the points sorted in Morton order, for a given maximum point count m per octree leaf. Their method keeps m+1 points in memory during the process, which may become an issue for large m. We present an extension to their algorithm that requires a minimum of two points to be held in memory in addition to a limited sequence of integers, thus adapting their method for use cases with large m. Moreover, we do not compute Morton codes explicitly but rather perform both the sorting and the octree generation directly on the point data, supporting coordinates of any finite precision.
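The Morton (Z-order) sweep at the heart of the baseline method [KTO11] interleaves the bits of quantized coordinates so that sorting by the resulting key yields a depth-first octree traversal order. A minimal encoder clarifies the ordering (note the abstract's extension deliberately avoids computing such codes explicitly; this sketch only illustrates the baseline's key):

```python
def morton3d(x, y, z, bits=21):
    """Interleave the bits of non-negative integer coordinates into a
    3D Morton code. Points sorted by this key follow the Z-order curve,
    so all points of any octree node form one contiguous run."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)        # x bit -> position 3i
        code |= ((y >> i) & 1) << (3 * i + 1)    # y bit -> position 3i+1
        code |= ((z >> i) & 1) << (3 * i + 2)    # z bit -> position 3i+2
    return code
```

The single-sweep construction then only needs to detect where consecutive keys diverge in their high bits: each divergence closes a subtree, which is what allows emitting octree nodes in one pass over the sorted stream.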
  • Item
    Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wang, Ziyu; Deng, Yu; Yang, Jiaolong; Yu, Jingyi; Tong, Xin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    3D-aware generative models have demonstrated superb performance in generating 3D neural radiance fields (NeRF) from a collection of monocular 2D images, even for topology-varying object categories. However, these methods still lack the capability to separately control the shape and appearance of the objects in the generated radiance fields. In this paper, we propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations. Our method generates deformable radiance fields, which build dense correspondences between the density fields of the objects and encode their appearances in a shared template field. Our disentanglement is achieved in an unsupervised manner without introducing extra labels beyond those of previous 3D-aware GAN training. We also develop an effective image inversion scheme for reconstructing the radiance field of an object in a real monocular image and manipulating its shape and appearance. Experiments show that our method can successfully learn the generative model from unstructured monocular images and disentangle the shape and appearance well for objects (e.g., chairs) with large topological variance. The model trained on synthetic data can faithfully reconstruct the real object in a given single image and achieve high-quality texture and shape editing results.
  • Item
    SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Wang, Yuze; Wang, Junyi; Wang, Chen; Duan, Wantong; Bao, Yongtang; Qi, Yue; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
    This paper introduces a novel continual learning framework for synthesising novel views of multiple scenes, learning multiple 3D scenes incrementally, and updating the network parameters only with the training data of the upcoming new scene. We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function. While NeRF and its extensions have shown a powerful capability of rendering photo-realistic novel views of a single 3D scene, managing these growing 3D NeRF assets efficiently is a new scientific problem. Very few works focus on the efficient representation or continual learning capability of multiple scenes, which is crucial for the practical applications of NeRF. To achieve these goals, our key idea is to represent multiple scenes as the linear combination of a cross-scene weight matrix and a set of scene-specific weight matrices generated from a global parameter generator. Furthermore, we propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model. Representing multiple 3D scenes with such weight matrices significantly reduces memory requirements. At the same time, the uncertain surface distillation strategy greatly overcomes the catastrophic forgetting problem and maintains the photo-realistic rendering quality of previous scenes. Experiments show that the proposed approach achieves state-of-the-art rendering quality of continual learning NeRF on the NeRF-Synthetic, LLFF, and TanksAndTemples datasets while incurring extremely low storage cost.
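The memory saving comes from never storing a full MLP per scene: each scene's layer weights are mixed from a shared basis. The sketch below is one plausible reading of "linear combination of a cross-scene weight matrix and scene-specific weight matrices"; the basis-stack shape and coefficient form are my assumptions, not the paper's exact parameterization:

```python
import numpy as np

def scene_layer_weights(cross_scene_basis, scene_coeffs):
    """Mix shared weight matrices into one scene's MLP layer.

    cross_scene_basis : (K, out_dim, in_dim) stack of K weight matrices
                        shared across all scenes (hypothetical layout).
    scene_coeffs      : (K,) scene-specific mixing coefficients, e.g.
                        produced by a global parameter generator.
    Returns the (out_dim, in_dim) weight matrix for that scene's layer.
    """
    # Sum_k coeffs[k] * basis[k]: contraction over the basis axis K.
    return np.tensordot(scene_coeffs, cross_scene_basis, axes=1)
```

Storing K basis matrices plus K coefficients per scene grows by only K numbers per scene per layer, instead of a full `out_dim × in_dim` matrix — which is the continual-learning storage win the abstract describes.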