Search Results

Now showing 1 - 10 of 218
  • Item
    A Multifragment Renderer for Material Aging Visualization
    (The Eurographics Association, 2018) Adamopoulos, Georgios; Moutafidou, Anastasia; Drosou, Anastasios; Tzovaras, Dimitrios; Fudos, Ioannis; Jain, Eakta and Kosinka, Jiří
    People involved in curatorial work and in preservation/conservation tasks need to understand the exact nature of aging and to prevent it with minimal preservation work. In this scenario, it is extremely important to have tools to produce and visualize digital representations and models of visual surface appearance and material properties, to help scientists understand how these evolve over time and under particular environmental conditions. We report on the development of a multifragment renderer for visualizing and combining the results of simulated aging of artwork objects. Several natural aging processes manifest themselves through changes of color, fading, deformations, or cracks. Furthermore, changes in the materials underneath the visible layers may be detected or simulated.
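    A multifragment renderer keeps every fragment rasterized to a pixel rather than only the nearest one, so semi-transparent aging layers can be blended in depth order. The sketch below shows that per-pixel compositing step in Python, assuming fragment lists (depth, color, alpha) have already been collected; function names are illustrative, not the paper's API.

      # Per-pixel fragment compositing as used in multifragment (A-buffer
      # style) rendering: sort fragments front-to-back and blend in order.
      def composite_pixel(fragments, background=(0.0, 0.0, 0.0)):
          """fragments: list of (depth, (r, g, b), alpha) for one pixel."""
          ordered = sorted(fragments, key=lambda f: f[0])  # front-to-back
          color = [0.0, 0.0, 0.0]
          transmittance = 1.0       # fraction of light still passing through
          for depth, rgb, alpha in ordered:
              for c in range(3):
                  color[c] += transmittance * alpha * rgb[c]
              transmittance *= 1.0 - alpha
              if transmittance < 1e-4:  # early out once the pixel is opaque
                  break
          # Remaining transmittance shows the background.
          return tuple(color[c] + transmittance * background[c] for c in range(3))

      # Example: a translucent varnish layer over an opaque paint layer.
      pixel = [(0.2, (0.9, 0.8, 0.3), 0.4),   # varnish, closer to the eye
               (0.7, (0.5, 0.1, 0.1), 1.0)]   # paint, fully opaque
      print(composite_pixel(pixel))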
  • Item
    Edge-Friend: Fast and Deterministic Catmull-Clark Subdivision Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Kuth, Bastian; Oberberger, Max; Chajdas, MatthƤus; Meyer, Quirin; Bikker, Jacco; Gribble, Christiaan
    We present edge-friend, a data structure for quad meshes with access to the neighborhood information required for Catmull-Clark subdivision surface refinement. Edge-friend enables efficient real-time subdivision surface rendering. In particular, the resulting algorithm is deterministic, does not require hardware support for atomic floating-point arithmetic, and is optimized for efficient rendering on GPUs. Edge-friend exploits the fact that after one subdivision step, two edges can be uniquely and implicitly assigned to each quad. Additionally, edge-friend is a compact data structure, adding little overhead. Our algorithm is simple to implement in a single compute shader kernel and requires minimal synchronization, which makes it particularly suited for asynchronous execution. We easily extend our kernel to support relevant Catmull-Clark subdivision surface features, including semi-smooth creases, boundaries, animation, and attribute interpolation. In the case of topology changes, our data structure requires little preprocessing, making it amenable to a variety of applications, including real-time editing and animation. Our method can process and render billions of triangles per second on modern GPUs. For a sample mesh, our algorithm generates and renders 2.9 million triangles in 0.58ms on an AMD Radeon RX 7900 XTX GPU.
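    The paper's edge-friend structure targets GPU compute shaders; as a point of reference, the Python sketch below performs one Catmull-Clark step on a closed quad mesh (no boundaries or creases) using the classic face-point, edge-point, and vertex-point rules. It is a plain dictionary-based CPU reference, not the paper's data structure.

      # One Catmull-Clark subdivision step on a closed quad mesh (no
      # boundaries or creases): a CPU reference of the face-point,
      # edge-point and vertex-point rules.
      from collections import defaultdict

      def catmull_clark(verts, quads):
          """verts: list of (x, y, z); quads: lists of 4 vertex indices."""
          avg = lambda pts: tuple(sum(c) / len(pts) for c in zip(*pts))

          # Face points: centroid of each quad.
          face_pt = [avg([verts[i] for i in q]) for q in quads]

          # The two faces incident to each undirected edge.
          edge_faces = defaultdict(list)
          for f, q in enumerate(quads):
              for i in range(4):
                  edge_faces[tuple(sorted((q[i], q[(i + 1) % 4])))].append(f)

          # Edge points: average of endpoints and adjacent face points.
          edge_pt = {e: avg([verts[e[0]], verts[e[1]]] + [face_pt[f] for f in fs])
                     for e, fs in edge_faces.items()}

          # Vertex points: (Q + 2R + (n - 3)S) / n, with Q = avg adjacent
          # face points, R = avg incident edge midpoints, S = old position.
          v_faces, v_edges = defaultdict(set), defaultdict(set)
          for f, q in enumerate(quads):
              for i in range(4):
                  v_faces[q[i]].add(f)
                  v_edges[q[i]].add(tuple(sorted((q[i], q[(i + 1) % 4]))))
                  v_edges[q[i]].add(tuple(sorted((q[i], q[(i - 1) % 4]))))
          new_vert = []
          for v, p in enumerate(verts):
              n = len(v_faces[v])
              Q = avg([face_pt[f] for f in v_faces[v]])
              R = avg([avg([verts[a], verts[b]]) for a, b in v_edges[v]])
              new_vert.append(tuple((Q[c] + 2 * R[c] + (n - 3) * p[c]) / n
                                    for c in range(3)))

          # Output: refined vertices, then face points, then edge points.
          out_verts = new_vert + face_pt
          e_index = {e: len(out_verts) + k for k, e in enumerate(edge_pt)}
          out_verts += list(edge_pt.values())
          out_quads = [[q[i],
                        e_index[tuple(sorted((q[i], q[(i + 1) % 4])))],
                        len(new_vert) + f,
                        e_index[tuple(sorted((q[i], q[(i - 1) % 4])))]]
                       for f, q in enumerate(quads) for i in range(4)]
          return out_verts, out_quads

      # One step on a cube: 8 verts / 6 quads -> 26 verts / 24 quads.
      cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
      cube_q = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
                [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
      v2, q2 = catmull_clark(cube_v, cube_q)
      print(len(v2), len(q2))  # 26 24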
  • Item
    Towards a Neural Graphics Pipeline for Controllable Image Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Chen, Xuelin; Cohen-Or, Daniel; Chen, Baoquan; Mitra, Niloy J.; Mitra, Niloy and Viola, Ivan
    In this paper, we leverage advances in neural networks towards forming a neural rendering approach for controllable image generation, thereby bypassing the need for detailed modeling in a conventional graphics pipeline. To this end, we present Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models. NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation. To form an image, NGP generates coarse 3D models that are fed into neural rendering modules to produce view-specific interpretable 2D maps, which are then composited into the final output image using a traditional image formation model. Our approach offers control over image generation by providing direct handles controlling illumination and camera parameters, in addition to control over shape and appearance variations. The key challenge is to learn these controls through unsupervised training that links generated coarse 3D models with unpaired real images via neural and traditional (e.g., Blinn-Phong) rendering functions, without establishing an explicit correspondence between them. We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes. We evaluate our hybrid modeling framework, compare with neural-only generation methods (namely, DCGAN, LSGAN, WGAN-GP, VON, and SRNs), report improvement in FID scores against real images, and demonstrate that NGP supports direct controls common in traditional forward rendering. Code is available at http://geometry.cs.ucl.ac.uk/projects/2021/ngp.
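    The final image formation stage of NGP is a traditional shading model rather than a network. The numpy sketch below shows Blinn-Phong compositing from per-pixel maps of the kind the abstract describes; map names and dimensions are illustrative assumptions, not the paper's exact interface.

      # Traditional Blinn-Phong image formation from per-pixel feature maps,
      # i.e. the fixed-function compositing stage that follows the neural
      # stages. Maps are (H, W, 3) arrays.
      import numpy as np

      def blinn_phong_composite(albedo, normals, light_dir, view_dir,
                                ks=0.5, shininess=32.0, ambient=0.1):
          n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
          l = light_dir / np.linalg.norm(light_dir)
          v = view_dir / np.linalg.norm(view_dir)
          h = (l + v) / np.linalg.norm(l + v)            # half vector
          ndotl = np.clip((n * l).sum(-1, keepdims=True), 0.0, 1.0)
          ndoth = np.clip((n * h).sum(-1, keepdims=True), 0.0, 1.0)
          diffuse = albedo * ndotl
          specular = ks * ndoth ** shininess             # white specular lobe
          return np.clip(ambient * albedo + diffuse + specular, 0.0, 1.0)

      # Toy maps: constant red albedo and flat normals over a 4x4 image.
      albedo = np.full((4, 4, 3), (0.8, 0.2, 0.2))
      normals = np.full((4, 4, 3), (0.0, 0.0, 1.0))
      img = blinn_phong_composite(albedo, normals,
                                  light_dir=np.array([0.3, 0.3, 1.0]),
                                  view_dir=np.array([0.0, 0.0, 1.0]))
      print(img.shape, img.max())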
  • Item
    High Dynamic Range Point Clouds for Real-Time Relighting
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Sabbadin, Manuele; Palma, Gianpaolo; Banterle, Francesco; Boubekeur, Tamy; Cignoni, Paolo; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
    Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step relative to a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.
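    As a rough illustration of the gathering step, the sketch below computes diffuse irradiance at a receiver point from an HDR surfel cloud with a standard PBGI-style form factor. The paper evaluates this hierarchically on the GPU with mipmapped G-buffers; this naive CPU version sums over all points and ignores occlusion, and all names are illustrative.

      # Naive diffuse gather from an HDR surfel cloud, the idea behind
      # point-based global illumination (PBGI).
      import numpy as np

      def diffuse_gather(recv_pos, recv_nrm, pts, pt_nrm, pt_radiance, pt_area):
          """RGB irradiance at one receiver from an HDR point cloud."""
          d = pts - recv_pos                        # (N, 3) receiver-to-point
          dist2 = (d * d).sum(-1)
          w = d / np.sqrt(dist2)[:, None]           # unit directions
          cos_r = np.clip((w * recv_nrm).sum(-1), 0.0, None)   # receiver side
          cos_e = np.clip((-w * pt_nrm).sum(-1), 0.0, None)    # emitter side
          form = cos_r * cos_e * pt_area / (np.pi * dist2 + 1e-8)
          return (form[:, None] * pt_radiance).sum(0)

      rng = np.random.default_rng(0)
      pts = rng.uniform(-1, 1, (1000, 3)) + np.array([0.0, 0.0, 2.0])
      pt_nrm = np.tile([0.0, 0.0, -1.0], (1000, 1))  # surfels facing receiver
      radiance = rng.uniform(0.0, 4.0, (1000, 3))    # HDR values may exceed 1
      E = diffuse_gather(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                         pts, pt_nrm, radiance, pt_area=0.01)
      print(E)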
  • Item
    Dense 3D Gaussian Splatting Initialization for Sparse Image Data
    (The Eurographics Association, 2024) Seibt, Simon; Chang, Thomas Vincent Siu-Lung; von Rymon Lipinski, Bartosz; Latoschik, Marc Erich; Liu, Lingjie; Averkiou, Melinos
    This paper presents advancements in novel-view synthesis with 3D Gaussian Splatting (3DGS) using a dense and accurate SfM point cloud initialization approach. We address the challenge of achieving photorealistic renderings from sparse image data, where basic 3DGS training may result in suboptimal convergence, thus leading to visual artifacts. The proposed method enhances the precision and density of the initially reconstructed point clouds by refining 3D positions and extrapolating additional points, even for difficult image regions, e.g., those with repeating patterns and suboptimal visual coverage. Our contributions focus on improving ''Dense Feature Matching for Structure-from-Motion'' (DFM4SfM), based on a homographic decomposition of the image space, to support 3DGS training: First, a grid-based feature detection method is introduced for DFM4SfM to ensure a well-distributed 3D Gaussian initialization uniformly over all depth planes. Second, the SfM feature matching is complemented by a geometric plausibility check, priming the homography estimation and thereby improving the initial placement of 3D Gaussians. Experimental results on the NeRF-LLFF dataset demonstrate that this approach achieves superior qualitative and quantitative results, even with fewer views, and has the potential to significantly accelerate 3DGS training through faster convergence.
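    A grid-based detector in the spirit of the first contribution can be sketched as keeping the strongest feature response per image cell, which spreads detections (and hence the initial 3D Gaussians) across the image instead of letting them cluster on high-texture regions. This is a generic sketch, not the exact DFM4SfM detector.

      # Grid-based feature selection: keep the strongest detector response
      # per image cell so keypoints are spread over the whole image.
      import numpy as np

      def grid_filter_keypoints(xy, scores, cell=64):
          """xy: (N, 2) pixel coords; scores: (N,) detector responses."""
          best = {}
          for (x, y), s in zip(xy, scores):
              key = (int(x) // cell, int(y) // cell)
              if key not in best or s > best[key][0]:
                  best[key] = (s, (x, y))
          return np.array([p for _, p in best.values()])

      rng = np.random.default_rng(1)
      pts = rng.uniform(0, [640, 480], (5000, 2))
      scores = rng.random(5000)
      kept = grid_filter_keypoints(pts, scores)
      print(len(kept), "of", len(pts), "keypoints kept")  # <= 10 * 8 cells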
  • Item
    Single-Image SVBRDF Estimation with Learned Gradient Descent
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Luo, Xuejiao; Scandolo, Leonardo; Bousseau, Adrien; Eisemann, Elmar; Bermano, Amit H.; Kalogerakis, Evangelos
    Recovering spatially-varying materials from a single photograph of a surface is inherently ill-posed, making direct application of gradient descent to the reflectance parameters prone to poor minima. Recent methods leverage deep learning either by directly regressing reflectance parameters using feed-forward neural networks or by learning a latent space of SVBRDFs using encoder-decoder or generative adversarial networks followed by a gradient-based optimization in latent space. The former is fast but does not account for the likelihood of the prediction, i.e., how well the resulting reflectance explains the input image. The latter provides a strong prior on the space of spatially-varying materials, but this prior can hinder the reconstruction of images that are too different from the training data. Our method combines the strengths of both approaches. We optimize reflectance parameters to best reconstruct the input image using a recurrent neural network, which iteratively predicts how to update the reflectance parameters given the gradient of the reconstruction likelihood. By combining a learned prior with a likelihood measure, our approach provides a maximum a posteriori estimate of the SVBRDF. Our evaluation shows that this learned gradient-descent method achieves state-of-the-art performance for SVBRDF estimation on synthetic and real images.
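    The core loop of learned gradient descent replaces the fixed update theta <- theta - lr * grad with an update predicted by a recurrent network that sees the likelihood gradient. The PyTorch skeleton below shows only that loop structure with a toy differentiable "renderer"; the cell is untrained here, whereas the paper meta-trains it over many SVBRDFs, and all sizes are illustrative.

      # Skeleton of a learned gradient-descent loop: a recurrent cell sees
      # the gradient of the reconstruction loss and predicts the update.
      import torch, torch.nn as nn

      P = 16                                  # toy reflectance parameter count
      cell = nn.GRUCell(P, 32)                # recurrent state across steps
      head = nn.Linear(32, P)                 # hidden state -> update
      render = lambda theta: theta ** 2       # stand-in differentiable renderer
      target = torch.rand(P)                  # stand-in input photograph

      theta = torch.zeros(P, requires_grad=True)
      h = torch.zeros(1, 32)
      for step in range(10):
          loss = ((render(theta) - target) ** 2).mean()  # reconstruction term
          (grad,) = torch.autograd.grad(loss, theta)
          h = cell(grad.unsqueeze(0), h)                 # learned update rule
          delta = head(h).squeeze(0)
          theta = (theta + 0.01 * delta).detach().requires_grad_(True)
          print(step, float(loss))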
  • Item
    Sequences with Low-Discrepancy Blue-Noise 2-D Projections
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Perrier, Hélène; Coeurjolly, David; Xie, Feng; Pharr, Matt; Hanrahan, Pat; Ostromoukhov, Victor; Gutierrez, Diego and Sheffer, Alla
    Distributions of samples play a very important role in rendering, affecting variance, bias and aliasing in Monte Carlo and quasi-Monte Carlo evaluation of the rendering equation. In this paper, we propose an original sampler which inherits many important features of classical low-discrepancy sequences (LDS): a high degree of uniformity of the achieved distribution of samples, computational efficiency and progressive sampling capability. At the same time, we purposely tailor our sampler in order to improve its spectral characteristics, which in turn play a crucial role in variance reduction, anti-aliasing and improving the visual appearance of renderings. Our sampler can efficiently generate sequences of multidimensional points whose power spectra approach the so-called Blue-Noise (BN) spectral property while preserving low discrepancy (LD) in certain 2-D projections. In our tile-based approach, we perform permutations on subsets of the original Sobol LDS. From the large space of all possible permutations, we select those that best approach the target BN property, using pair-correlation statistics. We pre-calculate such ''good'' permutations for each possible Sobol pattern and store them in a lookup table efficiently accessible at runtime. We provide a complete and rigorous proof that such permutations preserve dyadic partitioning and thus the LDS properties of the point set in 2-D projections. Our construction is computationally efficient, has a relatively low memory footprint and supports adaptive sampling. We validate our method by performing spectral/discrepancy/aliasing analysis of the achieved distributions, and provide variance analysis for several target integrands of theoretical and practical interest.
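    The underlying 2-D Sobol points the method permutes can be generated with a few bit operations: dimension 0 is the base-2 radical inverse (van der Corput) and dimension 1 uses the classic Sobol direction-number recurrence. The sketch below generates these points; the permutation search and lookup table of ''good'' permutations, which are the paper's contribution, are omitted.

      # Generator for the classic 2-D Sobol low-discrepancy sequence.
      def van_der_corput(i):
          i = ((i << 16) | (i >> 16)) & 0xFFFFFFFF       # reverse 32 bits
          i = ((i & 0x00FF00FF) << 8) | ((i & 0xFF00FF00) >> 8)
          i = ((i & 0x0F0F0F0F) << 4) | ((i & 0xF0F0F0F0) >> 4)
          i = ((i & 0x33333333) << 2) | ((i & 0xCCCCCCCC) >> 2)
          i = ((i & 0x55555555) << 1) | ((i & 0xAAAAAAAA) >> 1)
          return i / 2**32

      def sobol2(i):
          r, v = 0, 1 << 31
          while i:
              if i & 1:
                  r ^= v
              i >>= 1
              v ^= v >> 1                        # next direction number
          return r / 2**32

      pts = [(van_der_corput(i), sobol2(i)) for i in range(8)]
      print(pts[:4])  # (0,0), (0.5,0.5), (0.25,0.75), (0.75,0.25)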
  • Item
    Video-Driven Animation of Neural Head Avatars
    (The Eurographics Association, 2023) Paier, Wolfgang; Hinzer, Paul; Hilsmann, Anna; Eisert, Peter; Guthe, Michael; Grosch, Thorsten
    We present a new approach for video-driven animation of high-quality neural 3D head models, addressing the challenge of person-independent animation from video input. Typically, high-quality generative models are learned for specific individuals from multi-view video footage, resulting in person-specific latent representations that drive the generation process. In order to achieve person-independent animation from video input, we introduce an LSTM-based animation network capable of translating person-independent expression features into personalized animation parameters of person-specific 3D head models. Our approach combines the advantages of personalized head models (high quality and realism) with the convenience of video-driven animation employing multi-person facial performance capture. We demonstrate the effectiveness of our approach with high-quality animations synthesized from different source videos, as well as an ablation study.
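    The animation network itself can be read as a sequence-to-sequence regressor. The PyTorch skeleton below maps per-frame, person-independent expression features to personalized animation parameters through an LSTM; layer sizes are illustrative assumptions, not the paper's configuration.

      # Skeleton of an LSTM-based animation network: expression features in,
      # animation parameters of a person-specific head model out.
      import torch, torch.nn as nn

      class AnimationNet(nn.Module):
          def __init__(self, feat_dim=128, hidden=256, param_dim=64):
              super().__init__()
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
              self.head = nn.Linear(hidden, param_dim)

          def forward(self, feats):            # feats: (B, T, feat_dim)
              h, _ = self.lstm(feats)          # temporal context across frames
              return self.head(h)              # (B, T, param_dim)

      net = AnimationNet()
      video_feats = torch.randn(1, 30, 128)    # 30 frames of expression codes
      print(net(video_feats).shape)            # torch.Size([1, 30, 64])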
  • Item
    Art-directing Appearance using an Environment Map Latent Space
    (The Eurographics Association, 2021) Petikam, Lohit; Chalmers, Andrew; Anjyo, Ken; Rhee, Taehyun; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
    In look development, environment maps (EMs) are used to verify 3D appearance under varied lighting (e.g., overcast, sunny, and indoor). Artists can only assign one fixed material, making it laborious to edit appearance individually for each EM. Artists can art-direct material and lighting in film post-production. However, this is impossible in dynamic real-time games and live augmented reality (AR), where environment lighting is unpredictable. We present a new workflow to customize appearance variation across a wide range of EM lighting for live applications. Appearance edits can be predefined and then automatically adapted to environment lighting changes. We achieve this by learning a novel 2D latent space of varied EM lighting. The latent space lets artists browse EMs in a semantically meaningful 2D view. For different EMs, artists can paint different material and lighting parameter values directly on the latent space. We robustly encode new EMs into the same space, for automatic look-up of the desired appearance. This solves the new problem of preserving art direction in live applications, without any artist intervention.
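    At runtime, the workflow reduces to encoding a new EM into the 2D latent space and looking up the artist-painted parameters at those coordinates. The sketch below implements the lookup as bilinear interpolation over a painted parameter grid; the learned encoder is stubbed out with fixed coordinates, and all names are hypothetical.

      # Runtime lookup: sample artist-painted appearance parameters at the
      # 2-D latent coordinates of a newly encoded environment map.
      import numpy as np

      def sample_painted_params(param_grid, uv):
          """param_grid: (H, W, C) values painted over the latent square;
          uv: latent coordinates in [0, 1]^2."""
          H, W, _ = param_grid.shape
          x, y = uv[0] * (W - 1), uv[1] * (H - 1)
          x0, y0 = int(np.floor(x)), int(np.floor(y))
          x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
          fx, fy = x - x0, y - y0
          top = (1 - fx) * param_grid[y0, x0] + fx * param_grid[y0, x1]
          bot = (1 - fx) * param_grid[y1, x0] + fx * param_grid[y1, x1]
          return (1 - fy) * top + fy * bot

      # 8x8 latent canvas with one painted channel (e.g., a roughness edit).
      canvas = np.linspace(0.0, 1.0, 64).reshape(8, 8, 1)
      uv_of_new_em = (0.37, 0.81)     # stand-in for the learned EM encoder
      print(sample_painted_params(canvas, uv_of_new_em))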
  • Item
    World-Space Spatiotemporal Path Resampling for Path Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhang, Hangyu; Wang, Beibei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
    With the advent of hardware-accelerated ray tracing, more and more real-time rendering applications tend to render images with ray-traced global illumination (GI). However, the low sample counts at real-time framerates bring enormous challenges to existing path sampling methods. Recent work (ReSTIR GI) samples indirect illumination effectively with a dramatic bias reduction. However, as a screen-space path resampling approach, it can only reuse paths at the first bounce and brings only subtle benefits for complex scenes. To this end, we propose a world-space spatiotemporal path resampling approach. Our approach caches more path samples in a world-space grid, which allows reusing sub-paths starting from non-primary path vertices. Furthermore, we introduce a practical normal-aware hash grid construction approach, providing more efficient candidate samples for path resampling. Overall, our method achieves improvements ranging from 16.6% to 41.9% in terms of mean squared error (MSE) compared against the previous method, with only 4.4% to 8.4% extra time cost.
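    Two ingredients named in the abstract can be sketched compactly: a normal-aware hash-grid key (quantized position plus quantized normal, so samples on opposite sides of a thin surface land in different cells) and the weighted reservoir update used for resampling. Constants and layout below are illustrative assumptions, not the paper's exact scheme.

      # Normal-aware hash-grid key plus weighted reservoir sampling, the two
      # building blocks of world-space path resampling sketched here.
      import math, random

      def grid_key(pos, nrm, cell=0.25):
          q = tuple(math.floor(p / cell) for p in pos)   # voxel index
          n = tuple(round(c) for c in nrm)               # coarse normal bins
          return hash((q, n)) & 0xFFFFFFFF

      class Reservoir:
          """Weighted reservoir sampling over candidate path samples."""
          def __init__(self):
              self.sample, self.w_sum, self.m = None, 0.0, 0

          def update(self, candidate, weight):
              self.w_sum += weight
              self.m += 1
              if self.w_sum > 0 and random.random() < weight / self.w_sum:
                  self.sample = candidate                # keep w.p. w / W

      cache = {}
      random.seed(0)
      for _ in range(1000):
          pos = [random.uniform(0.0, 4.0) for _ in range(3)]
          key = grid_key(pos, nrm=(0.0, 0.0, 1.0))
          cache.setdefault(key, Reservoir()).update(pos, random.random())
      print(len(cache), "occupied grid cells")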