Search Results
Now showing 1 - 9 of 9
Item: Deformed Tiling and Blending: Application to the Correction of Distortions Implied by Texture Mapping
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Wendling, Quentin; Ravaglia, Joris; Sauvage, Basile; Bousseau, Adrien; Day, Angela
The prevailing model in virtual 3D scenes is a 3D surface onto which a texture is mapped through a parameterization from the texture plane. We focus on accounting for the parameterization during the texture creation process, to control the deformations and remove the cuts induced by the mapping. We rely on tiling and blending, a real-time and parallel algorithm that generates an arbitrarily large texture from a small input example. Our first contribution is to enhance tiling and blending with a deformation field, which controls smooth spatial variations in the texture plane. Our second contribution is to derive, from a parameterized triangle mesh, a deformation field that compensates for texture distortions and controls the texture orientation. Our third contribution is a technique to enforce texture continuity across the cuts, thanks to a proper tile selection. This opens the door to interactive sessions with artistic control, and to real-time rendering with improved visual quality.

Item: Adaptive Multi-view Radiance Caching for Heterogeneous Participating Media
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Stadlbauer, Pascal; Tatzgern, Wolfgang; Mueller, Joerg H.; Winter, Martin; Stojanovic, Robert; Weinrauch, Alexander; Steinberger, Markus; Bousseau, Adrien; Day, Angela
Achieving lifelike atmospheric effects, such as fog, is essential in creating immersive environments and poses a formidable challenge in real-time rendering. Highly realistic rendering of complex lighting interacting with dynamic fog can be very resource-intensive, due to light bouncing through a complex participating medium multiple times.
We propose an approach that uses a multi-layered spherical-harmonics probe grid to share computations temporally. In addition, this world-space storage enables the sharing of radiance data between multiple viewers. In the context of cloud rendering, this means faster rendering and a significant enhancement in overall rendering quality with efficient resource utilization.

Item: Learning Image Fractals Using Chaotic Differentiable Point Splatting
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Djeacoumar, Adarsh; Mujkanovic, Felix; Seidel, Hans-Peter; Leimkühler, Thomas; Bousseau, Adrien; Day, Angela
Fractal geometry, defined by self-similar patterns across scales, is crucial for understanding natural structures. This work addresses the fractal inverse problem, which involves extracting fractal codes from images to explain these patterns and synthesize them at arbitrarily fine scales. We introduce a novel algorithm that optimizes Iterated Function System parameters using a custom fractal generator combined with differentiable point splatting. By integrating both stochastic and gradient-based optimization techniques, our approach effectively navigates the complex energy landscapes typical of fractal inversion, ensuring robust performance and the ability to escape local minima. We demonstrate the method's effectiveness through comparisons with various fractal inversion techniques, highlighting its ability to recover high-quality fractal codes and perform extensive zoom-ins to reveal intricate patterns from just a single image.

Item: 4-LEGS: 4D Language Embedded Gaussian Splatting
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Fiebelman, Gal; Cohen, Tamir; Morgenstern, Ayellet; Hedman, Peter; Averbuch-Elor, Hadar; Bousseau, Adrien; Day, Angela
The emergence of neural representations has revolutionized our means of digitally viewing a wide range of 3D scenes, enabling the synthesis of photorealistic images rendered from novel views.
Recently, several techniques have been proposed for connecting these low-level representations with the high-level semantic understanding embodied within the scene. These methods elevate the rich semantic understanding from 2D imagery to 3D representations, distilling high-dimensional spatial features into 3D space. In our work, we are interested in connecting language with a dynamic modeling of the world. We show how to lift spatio-temporal features to a 4D representation based on 3D Gaussian Splatting. This enables an interactive interface where the user can spatiotemporally localize events in the video from text prompts. We demonstrate our system on public 3D video datasets of people and animals performing various actions.

Item: All-frequency Full-body Human Image Relighting
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Tajima, Daichi; Kanamori, Yoshihiro; Endo, Yuki; Bousseau, Adrien; Day, Angela
Relighting of human images enables post-photography editing of lighting effects in portraits. The current mainstream approach uses neural networks to approximate lighting effects without explicitly accounting for the principles of physical shading. As a result, it often has difficulty representing high-frequency shadows and shading. In this paper, we propose a two-stage relighting method that can reproduce physically based shadows and shading from low to high frequencies. The key idea is to approximate an environment light source with a set of a fixed number of area light sources. The first stage employs supervised inverse rendering from a single image using neural networks and calculates physically based shading. The second stage then calculates the shadow for each area light and sums them up to render the final image. We propose to make soft shadow mapping differentiable for the area-light approximation of environment lighting.
We demonstrate that our method can plausibly reproduce all-frequency shadows and shading caused by environment illumination, which have been difficult to reproduce with existing methods.

Item: Image-Based Spatio-Temporal Upsampling for Split Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Steiner, Michael; Köhler, Thomas; Radl, Lukas; Budge, Brian; Steinberger, Markus; Knoll, Aaron; Peters, Christoph
Low-powered devices, such as small-form-factor head-mounted displays (HMDs), struggle to deliver a smooth and high-quality viewing experience due to their limited power and rendering capabilities. Cloud rendering attempts to solve the quality issue, but leads to prohibitive latency and bandwidth requirements, hindering use with HMDs over mobile connections or even over Wi-Fi. One solution, split rendering, where frames are partially rendered on the client device, often either requires geometry and rendering hardware, or struggles to generate frames faithfully under viewpoint changes and object motion. Our method enables spatio-temporal interpolation via bidirectional reprojection to efficiently generate intermediate frames in a split-rendering setting, while limiting the communication cost and relying purely on image-based rendering. Furthermore, our method is robust to modest connectivity issues and handles effects such as dynamic smooth shadows.

Item: Perceived Quality of BRDF Models
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Kavoosighafi, Behnaz; Mantiuk, Rafal K.; Hajisharif, Saghi; Miandji, Ehsan; Unger, Jonas; Wang, Beibei; Wilkie, Alexander
Material appearance is commonly modeled with Bidirectional Reflectance Distribution Functions (BRDFs), which need to trade accuracy for complexity and storage cost.
To investigate the current practices of BRDF modeling, we collect the first high-dynamic-range stereoscopic video dataset that captures the perceived quality degradation with respect to a number of parametric and non-parametric BRDF models. Our dataset shows that the current loss functions used to fit BRDF models, such as the mean-squared error of logarithmic reflectance values, correlate poorly with the perceived quality of materials in rendered videos. We further show that quality metrics that compare rendered material samples give a significantly higher correlation with subjective quality judgments, and that a simple Euclidean distance in the ITP color space (ΔE_ITP) shows the highest correlation. Additionally, we investigate the use of different BRDF-space metrics as loss functions for fitting BRDF models and find that logarithmic mapping is the most effective approach for BRDF-space loss functions.

Item: MatSwap: Light-aware Material Transfers in Images
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Lopes, Ivan; Deschaintre, Valentin; Hold-Geoffroy, Yannick; Charette, Raoul de; Wang, Beibei; Wilkie, Alexander
We present MatSwap, a method to realistically transfer materials to designated surfaces in an image. Such a task is non-trivial due to the large entanglement of material appearance, geometry, and lighting in a photograph. In the literature, material editing methods typically rely on either cumbersome text engineering or extensive manual annotations requiring artist knowledge and 3D scene properties that are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material, as observed on a flat surface, and its appearance within the scene, without the need for explicit UV mapping. To achieve this, we rely on a custom light- and geometry-aware diffusion model.
We fine-tune a large-scale pre-trained text-to-image model for material transfer using our synthetic dataset, preserving its strong priors to ensure effective generalization to real images. As a result, our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene. MatSwap is evaluated on synthetic and real images, showing that it compares favorably to recent works. Our code and data are made publicly available at https://github.com/astra-vision/MatSwap

Item: GreenCloud: Volumetric Gradient Filtering via Regularized Green's Functions
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Tojo, Kenji; Umetani, Nobuyuki; Attene, Marco; Sellán, Silvia
Gradient-based optimization is a fundamental tool in geometry processing, but it is often hampered by geometric distortion arising from noisy or sparse gradients. Existing methods mitigate these issues by filtering (i.e., diffusing) gradients over a surface mesh, but they require explicit mesh connectivity and solving large linear systems, making them unsuitable for point-based representations. In this work, we introduce a gradient filtering method tailored for point-based geometry. Our method bypasses explicit connectivity by leveraging regularized Green's functions to directly compute the filtered gradient field from discrete spatial points. Additionally, our approach incorporates elastic deformation based on the Green's function of linear elasticity (known as Kelvinlets), reproducing various elastic behaviors such as smoothness and volume preservation while improving robustness under affine transformations. We further accelerate computation using a hierarchical Barnes-Hut-style approximation, enabling scalable optimization of one million points. Our method significantly improves convergence across a wide range of applications, including reconstruction, editing, stylization, and simplified optimization experiments with Gaussian splatting.
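To give a flavor of the connectivity-free filtering idea behind the last item, here is a minimal sketch in NumPy. It assumes a regularized 3D Laplacian Green's function G_eps(r) = 1 / (4π √(r² + ε²)) and brute-force O(N²) summation; the paper's actual kernels (including the elastic Kelvinlets), normalization, and Barnes-Hut acceleration are not reproduced, and the function name and `eps` parameter are illustrative only.

```python
import numpy as np

def filter_gradients(points, grads, eps=0.05):
    """Smooth per-point gradients without mesh connectivity.

    points: (N, 3) point positions
    grads:  (N, 3) noisy per-point gradients
    eps:    regularization radius controlling the filter's support
    """
    # Pairwise difference vectors and squared distances, shape (N, N).
    d = points[:, None, :] - points[None, :, :]
    r2 = np.sum(d * d, axis=-1)
    # Regularized free-space Green's function of the 3D Laplacian:
    # finite at r = 0 thanks to the eps term.
    G = 1.0 / (4.0 * np.pi * np.sqrt(r2 + eps * eps))
    # Normalize rows so a constant gradient field passes through unchanged.
    W = G / G.sum(axis=1, keepdims=True)
    # Each filtered gradient is a kernel-weighted average of all gradients.
    return W @ grads
```

The row normalization makes the filter a weighted average, so spatially coherent gradients survive while uncorrelated noise is averaged out; a smaller `eps` concentrates the kernel and preserves more detail.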