EG2025
Browsing EG2025 by Issue Date
Now showing 1 - 20 of 141
Item: SOBB: Skewed Oriented Bounding Boxes for Ray Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kácerik, Martin; Bittner, Jirí; Bousseau, Adrien; Day, Angela
We propose skewed oriented bounding boxes (SOBB) as a novel bounding primitive for accelerating the calculation of ray-scene intersections. SOBBs have the same memory footprint as the well-known oriented bounding boxes (OBB) and can be used with a similar ray intersection algorithm. We propose an efficient algorithm for constructing a BVH with SOBBs, using a transformation from a standard BVH built for axis-aligned bounding boxes (AABB). We use discrete orientation polytopes as a temporary bounding representation to find tightly fitting SOBBs. Additionally, we propose a compression scheme for SOBBs that makes their memory requirements comparable to those of AABBs. For secondary rays, the SOBB BVH provides a ray tracing speedup of 1.0-11.0x over the AABB BVH and is 1.1x faster than the OBB BVH on average. The transformation of an AABB BVH to a SOBB BVH is, on average, 2.6x faster than the ditetrahedron-based AABB BVH to OBB BVH transformation.

Item: Infusion: Internal Diffusion for Inpainting of Dynamic Textures and Complex Motion (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Cherel, Nicolas; Almansa, Andrés; Gousseau, Yann; Newson, Alasdair; Bousseau, Adrien; Day, Angela
Video inpainting is the task of filling a region in a video in a visually convincing manner. It is very challenging due to the high dimensionality of the data and the temporal consistency required to obtain convincing results. Recently, diffusion models have shown impressive results in modeling complex data distributions, including images and videos. Such models nonetheless remain very expensive to train and to run, which strongly limits their applicability to video and leads to unreasonable computational loads. We show that, in the case of video inpainting, the highly self-similar nature of videos allows the training data of a diffusion model to be restricted to the input video while still producing very satisfying results. With this internal learning approach, where the training data is limited to a single video, our lightweight models perform very well with only half a million parameters, in contrast to the very large networks with billions of parameters typically found in the literature. We also introduce a new method for efficient training and inference of diffusion models in the context of internal learning, by splitting the diffusion process into different learning intervals corresponding to different noise levels. We show qualitative and quantitative results demonstrating that our method reaches or exceeds state-of-the-art performance on dynamic textures and complex dynamic backgrounds.
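The SOBB entry above notes that its intersection routine is similar to the OBB one. As a rough sketch of that general pattern, not the paper's actual algorithm: transform the ray into the box's local frame with a precomputed inverse affine matrix (the world_to_box parameter below is a hypothetical name), where the skewed box becomes the canonical box [0,1]^3, then run a standard slab test.

```python
import numpy as np

def ray_sobb_intersect(origin, direction, world_to_box, t_min=0.0, t_max=np.inf):
    """Slab test for a skewed box: mapping the ray by the box's inverse
    affine frame turns the SOBB into the canonical box [0, 1]^3."""
    o = world_to_box[:3, :3] @ origin + world_to_box[:3, 3]
    d = world_to_box[:3, :3] @ direction
    inv_d = 1.0 / d                       # production code guards zero components
    t0 = (0.0 - o) * inv_d                # entry/exit distances per axis pair
    t1 = (1.0 - o) * inv_d
    t_near = max(np.minimum(t0, t1).max(), t_min)
    t_far = min(np.maximum(t0, t1).min(), t_max)
    return t_near <= t_far
```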
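The Infusion entry splits the diffusion process into learning intervals, with one lightweight model per noise-level range. A minimal sketch of what routing a denoising step to the right interval model could look like; the four-interval split over 1000 timesteps is an assumption for illustration, not the paper's configuration.

```python
# Hypothetical interval split: model i handles diffusion timesteps in [lo, hi).
INTERVALS = [(0, 250), (250, 500), (500, 750), (750, 1000)]

def model_for_timestep(models, t):
    """Route a denoising step to the lightweight network trained for the
    noise-level interval containing timestep t."""
    for model, (lo, hi) in zip(models, INTERVALS):
        if lo <= t < hi:
            return model
    raise ValueError(f"timestep {t} outside all intervals")
```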
Item: Double QuickCurve: revisiting 3-axis non-planar 3D printing (The Eurographics Association, 2025)
Ottonello, Emilio; Hugron, Pierre-Alexandre; Parmiggiani, Alberto; Lefebvre, Sylvain; Ceylan, Duygu; Li, Tzu-Mao
Additive manufacturing builds physical objects by accumulating layers of solidified material, typically planar layers. Fused filament printers, however, can extrude material along 3D curves, leading to the idea of depositing in a non-planar fashion. In this paper we introduce a novel algorithm for this purpose, targeting simplicity, robustness, and efficiency. Our method interpolates curved slicing surfaces between a top and a bottom slicing surface, optimized to align with the object's curvatures. These slicing surfaces are intersected with the input model to extract non-planar layers and curved deposition trajectories. We further orient trajectories according to the object's curvatures, improving deposition.

Item: Learning Fast 3D Gaussian Splatting Rendering using Continuous Level of Detail (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Milef, Nicholas; Seyb, Dario; Keeler, Todd; Nguyen-Phuoc, Thu; Bozic, Aljaz; Kondguli, Sushant; Marshall, Carl; Bousseau, Adrien; Day, Angela
3D Gaussian splatting (3DGS) has shown potential for rendering photorealistic 3D scenes in real time. Unfortunately, rendering these scenes on less powerful hardware is still a challenge, especially with high-resolution displays. We introduce a continuous level of detail (CLOD) algorithm and demonstrate how our method can improve performance while preserving as much quality as possible. Our approach learns to order splats by importance and optimizes them such that a representative and realistic scene can be rendered for an arbitrary splat count. Our method requires no additional memory or rendering overhead and works with existing 3DGS renderers. We also demonstrate the flexibility of our CLOD method by extending it with distance-based LOD selection, foveated rendering, and budget-based rendering.

Item: Single-Shot Facial Appearance Acquisition without Statistical Appearance Priors (The Eurographics Association, 2025)
Soh, Guan Yu; Ghosh, Abhijeet; Ceylan, Duygu; Li, Tzu-Mao
Single-shot in-the-wild facial reflectance acquisition has been a long-standing challenge in computer graphics and computer vision. Current state-of-the-art methods are typically learning-based, pre-trained on a dataset of facial reflectance data. However, due to the high cost and time-consuming nature of gathering such datasets, they are usually limited in the number of subjects covered and hence prone to dataset bias. To this end, we propose a novel multi-stage guided optimization with differentiable rendering that tackles this problem without statistical facial appearance priors. This makes our method immune to these biases, and we demonstrate its advantage with qualitative and quantitative evaluations against current state-of-the-art methods.
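For the Double QuickCurve entry, the core interpolation step can be sketched directly, assuming the two slicing surfaces are given as height fields over the build plate; the curvature-aligned optimization of the surfaces described in the abstract is omitted.

```python
import numpy as np

def slicing_surfaces(bottom_z, top_z, n_layers):
    """Linearly interpolate curved slicing surfaces (height fields over the
    build plate) between the bottom (t=0) and top (t=1) surfaces. Each
    returned surface is later intersected with the model to get one layer."""
    ts = np.linspace(0.0, 1.0, n_layers)
    return [(1.0 - t) * bottom_z + t * top_z for t in ts]
```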
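The CLOD entry above learns an importance ordering so that any prefix of the splat list yields a coherent scene. Under that assumption, budget-based selection reduces to picking a prefix length; the budget model below (pixel_budget, cost_per_splat) is hypothetical.

```python
def visible_splats(splats_by_importance, pixel_budget, cost_per_splat):
    """Continuous LOD: splats are pre-ordered by learned importance, so any
    prefix is a self-consistent scene; keep the largest prefix that fits
    the rendering budget."""
    count = min(len(splats_by_importance), int(pixel_budget / cost_per_splat))
    return splats_by_importance[:count]
```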
Item: Towards Scaling-Invariant Projections for Data Visualization (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Dierkes, Joel; Stelter, Daniel; Rössl, Christian; Theisel, Holger; Bousseau, Adrien; Day, Angela
Finding projections of multidimensional data domains to the 2D screen space is a well-known problem. Multidimensional data often comes with the property that the dimensions are measured in different physical units, which renders the ratio between dimensions, i.e., their scale, arbitrary. The result of common projections, like PCA, t-SNE, or MDS, depends on this ratio, i.e., these projections are variant to scaling. This results in an undesired subjective view of the data and, thus, of its projection. Simple solutions like normalizing each dimension are widely used but do not always give high-quality results. We propose to visually analyze the space of all scalings and to find optimal scalings with respect to the quality of the visualization. For this, we evaluate different quality criteria on scatter plots. Given a quality criterion, our approach uses numerical optimization to find scalings that yield good visualizations with little to no user input. At the same time, our method yields a scaling-invariant projection, providing an objective view of the projected data. We show for several examples that such an optimal scaling can significantly improve the visualization quality.

Item: Demystifying noise: The role of randomness in generative AI (The Eurographics Association, 2025)
Singh, Gurprit; Huang, Xingchang; Vandersanden, Jente; Oztireli, Cengiz; Mitra, Niloy; Mantiuk, Rafal; Hildebrandt, Klaus
This tutorial offers a thorough exploration of the role of randomness in generative AI, leveraging foundational knowledge from statistical physics, stochastic differential equations, and computer graphics. By connecting these disciplines, the tutorial aims to give participants a deep understanding of how noise impacts generative modeling and to introduce state-of-the-art techniques and applications of noise in AI. First, we revisit the mathematical concepts essential for understanding diffusion and the integral role of noise in diffusion-based generative modeling. In the second part, we introduce the various types of noise studied within the computer graphics community and present their impact on rendering, texture synthesis, and content creation. In the last part, we look at how different noise correlations and noise schedulers affect the expressive power of image and video generation models. By the end of the tutorial, participants will have gained an in-depth understanding of the mathematical constructs behind diffusion models and of how noise correlations can enhance the diversity and expressiveness of these models. The audience will also learn to code the noises developed in the graphics literature and explore their impact on generative modeling. The tutorial is aimed at students, researchers, and practitioners, with our panel members bringing insights from industry. All materials related to the tutorial will be available at diffusion-noise.mpi-inf.mpg.de.

Item: EUROGRAPHICS 2025: CGF 44-2 Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Dai, Angela; Bousseau, Adrien
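The scaling-invariant projection entry searches the space of per-dimension scalings for one that maximizes a visualization quality criterion. Below is a bare-bones analogue using a grid search rather than the paper's numerical optimization; project and quality are placeholders for any projection method (e.g. PCA) and any scatter-plot quality measure.

```python
import numpy as np

def best_scaling(X, project, quality, candidates):
    """Search candidate per-dimension scale factors and keep the one whose
    2D projection scores highest under the chosen quality criterion.

    X          : (n, d) data matrix
    project    : callable, (n, d) -> (n, 2)
    quality    : callable, (n, 2) -> float (higher is better)
    candidates : iterable of (d,) scale vectors
    """
    best, best_q = None, -np.inf
    for s in candidates:
        q = quality(project(X * s))       # broadcasts s over the columns of X
        if q > best_q:
            best, best_q = s, q
    return best, best_q
```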
Item: Mesh Compression with Quantized Neural Displacement Fields (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Pentapati, Sai Karthikey; Phillips, Gregoire; Bovik, Alan C.; Bousseau, Adrien; Day, Angela
Implicit neural representations (INRs) have been successfully used to compress a variety of 3D surface representations, such as signed distance functions (SDFs) and voxel grids, as well as other structured data such as images, videos, and audio. However, these methods have been limited in their application to unstructured data such as 3D meshes and point clouds. This work presents a simple yet effective method that extends INRs to compress 3D triangle meshes. Our method uses a small neural network to encode a displacement field that refines a coarse version of the 3D mesh surface being compressed. Once trained, the network weights occupy much less memory than the displacement field or the original surface. We show that our method preserves intricate geometric textures and achieves state-of-the-art performance for compression ratios ranging from 4x to 380x (see Figure 1 for an example).

Item: VisibleUS: From Cryosectional Images to Real-Time Ultrasound Simulation (The Eurographics Association, 2025)
Casanova-Salas, Pablo; Gimeno, Jesus; Blasco-Serra, Arantxa; González-Soler, Eva María; Escamilla-Muñoz, Laura; Valverde-Navarro, Alfonso Amador; Fernández, Marcos; Portalés, Cristina; Günther, Tobias; Montazeri, Zahra
The VisibleUS project aims to generate synthetic ultrasound images from cryosection images, focusing on the musculoskeletal system. Cryosection images provide a highly accurate representation of real tissue structures without artifacts. Using this rich anatomical data, we developed a ray-tracing-based simulation algorithm that models ultrasound wave propagation, scattering, and attenuation. This results in highly realistic ultrasound images that accurately depict fine anatomical details, such as muscle fibers and connective tissues. The simulation tool has various applications, including generating datasets for training neural networks and developing interactive training tools for ultrasound specialists. Its ability to produce realistic ultrasound images in real time enhances medical education and research, improving both the understanding and interpretation of ultrasound imaging.

Item: NePHIM: A Neural Physics-Based Head-Hand Interaction Model (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wagner, Nicolas; Schwanecke, Ulrich; Botsch, Mario; Bousseau, Adrien; Day, Angela
Due to the increasing use of virtual avatars, the animation of head-hand interactions has recently gained attention. To this end, we present a novel volumetric and physics-based interaction simulation. In contrast to previous work, our simulation incorporates temporal effects such as collision paths, respects anatomical constraints, and can detect and simulate skin pulling. As a result, we achieve more natural-looking interaction animations and take a step towards greater realism. However, like most complex and computationally expensive simulations, ours is not real-time capable, even on high-end machines. Therefore, we train small and efficient neural networks as accurate approximations that achieve about 200 FPS on consumer GPUs and about 50 FPS on CPUs, and that are learned in less than four hours for one person. In general, our focus is not to generalize the approximation networks to low-resolution head models but to adapt them to more detailed personalized avatars. Nevertheless, we show that these networks can learn to approximate our head-hand interaction model for multiple identities while maintaining computational efficiency. Since the quality of the simulations can only be judged subjectively, we conducted a comprehensive user study which confirms the improved realism of our approach. In addition, we provide extensive visual results and inspect the neural approximations quantitatively. All data used in this work has been recorded with a multi-view camera rig. Code and data are available at https://gitlab.cs.hs-rm.de/cvmr_releases/HeadHand.
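For the mesh compression entry, the decoding side can be sketched as a tiny MLP displacement field applied to a coarse mesh; the layer structure, the quantization step, and all names below are illustrative, not the paper's architecture.

```python
import numpy as np

def displacement(params, x):
    """Tiny MLP displacement field: maps a point on the coarse surface (R^3)
    to an offset (R^3). `params` is a list of (W, b) layer weights;
    quantizing these weights yields the compressed representation."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b, 0.0)    # ReLU hidden layers
    W, b = params[-1]
    return W @ h + b

def refine(params, coarse_vertices):
    # Decoded surface = coarse vertex + predicted displacement.
    return [v + displacement(params, v) for v in coarse_vertices]
```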
Item: Material Transforms from Disentangled NeRF Representations (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Lopes, Ivan; Lalonde, Jean-François; Charette, Raoul de; Bousseau, Adrien; Day, Angela
In this paper, we propose a novel method for transferring material transformations across different scenes. Building on disentangled Neural Radiance Field (NeRF) representations, our approach learns to map Bidirectional Reflectance Distribution Functions (BRDFs) from pairs of scenes observed in varying conditions, such as dry and wet. The learned transformations can then be applied to unseen scenes with similar materials, effectively rendering the learned transformation at an arbitrary level of intensity. Extensive experiments on synthetic scenes and real-world objects validate the effectiveness of our approach, showing that it can learn various transformations such as wetness, painting, and coating. Our results highlight not only the versatility of our method but also its potential for practical applications in computer graphics. We publish our method implementation, along with our synthetic/real datasets, at https://github.com/astra-vision/BRDFTransform

Item: Generative Motion Infilling from Imprecisely Timed Keyframes (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Goel, Purvi; Zhang, Haotian; Liu, C. Karen; Fatahalian, Kayvon; Bousseau, Adrien; Day, Angela
Keyframes are a standard representation for kinematic motion specification. Recent learned motion-inbetweening methods use keyframes to control generative motion models and are trained to generate life-like motion that matches the exact poses and timings of input keyframes. However, the quality of generated motion may degrade if the timing of these constraints is not perfectly consistent with the desired motion. Unfortunately, correctly specifying keyframe timings is a tedious and challenging task in practice. Our goal is to create a system that synthesizes high-quality motion from keyframes even if the keyframes are imprecisely timed. We present a method that allows constraints to be retimed as part of the generation process. Specifically, we introduce a novel model architecture that explicitly outputs a time-warping function to correct mistimed keyframes, along with spatial residuals that add pose details. We demonstrate how our method can automatically turn approximately timed keyframe constraints into diverse, realistic motions with plausible timing and detailed submovements.
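The motion infilling entry's key idea is a model that outputs a time-warping function to correct mistimed keyframes. A minimal sketch of applying such a warp, with the network's output stood in by an arbitrary monotone callable:

```python
import numpy as np

def retime_keyframes(key_times, warp):
    """Apply a predicted monotone time-warp to imprecisely timed keyframes.
    `warp` stands in for the model's time-warping output; it must be
    increasing so that keyframe order is preserved."""
    corrected = np.array([warp(t) for t in key_times])
    assert np.all(np.diff(corrected) > 0), "time-warp must be monotone"
    return corrected

# Usage sketch: nudge all keyframes toward slightly later, slower timing.
print(retime_keyframes([0.0, 0.5, 1.0], lambda t: 1.1 * t + 0.02 * t * t))
```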
Item: Differential Diffusion: Giving Each Pixel Its Strength (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Levin, Eran; Fried, Ohad; Bousseau, Adrien; Day, Angela
Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting: the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with current open state-of-the-art models and validate it via quantitative and qualitative comparisons and a user study. Our code is published and integrated into several platforms.

Item: Advancing XR Education: Towards a Multimodal Human-Machine Interaction Course for Doctoral Students in Computer Science (The Eurographics Association, 2025)
Silva, Samuel; Marques, Bernardo; Mendes, Daniel; Rodrigues, Rui; Kuffner dos Anjos, Rafael; Rodriguez Echavarria, Karina
Nowadays, eXtended Reality (XR) has matured to the point where it seamlessly integrates various input and output modalities, enhancing the way users interact with digital environments. From traditional controllers and hand tracking to voice commands, eye tracking, and even biometric sensors, XR systems now offer more natural interactions. Similarly, output modalities have expanded beyond visual displays to include haptic feedback, spatial audio, and others, enriching the overall user experience. As the field of XR becomes increasingly multimodal, the education process must also evolve to reflect these advancements. There is a growing need to incorporate additional modalities into the curriculum, helping students understand their relevance and practical applications. By exposing students to a diverse range of interaction techniques, they can better assess which modalities are most suitable for different contexts, enabling them to design more effective and human-centered solutions. This work describes an Advanced Human-Machine Interaction (HMI) course aimed at doctoral students in Computer Science. The primary objective is to provide students with the necessary knowledge in HMI by enabling them to articulate the fundamental concepts of the field, recognize and analyze the role of human factors, identify modern interaction methods and technologies, apply human-centered design (HCD) principles to interactive system design and development, and implement appropriate methods for assessing interaction experiences across advanced HMI topics. The course structure, the range of topics covered, assessment strategies, and the hardware and infrastructure employed are presented. The paper also highlights mini-projects, including the flexibility for students to integrate their own projects, fostering personalized and project-driven learning. The discussion reflects on the challenges inherent in keeping pace with this rapidly evolving field and emphasizes the importance of adapting to emerging trends. Finally, the paper outlines future directions and potential enhancements for the course.
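The Differential Diffusion entry gives each pixel its own change strength. One plausible way to realize this during sampling, sketched under the assumption of a simple linear threshold schedule (the paper's actual rule may differ): at each step, pixels whose strength is below the current progress are snapped back to a re-noised copy of the original image.

```python
import numpy as np

def masked_denoise_step(x_denoised, x_orig_renoised, strength_map, t, T):
    """Per-pixel edit strength: at step t of T, pixels whose strength is
    below the remaining-noise fraction revert to the re-noised original,
    so low-strength regions change less. The thresholding rule here is
    illustrative, not the paper's exact schedule."""
    keep_edit = strength_map >= (t / T)   # strong-edit pixels stay editable longer
    return np.where(keep_edit, x_denoised, x_orig_renoised)
```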
Item: Lightweight Morphology-Aware Encoding for Motion Learning (The Eurographics Association, 2025)
Wu, Ziyu; Michel, Thomas; Rohmer, Damien; Ceylan, Duygu; Li, Tzu-Mao
We present a lightweight method for encoding, learning, and predicting 3D rigged character motion sequences that considers both the character's pose and morphology. Specifically, we introduce an enhanced skeletal embedding that extends the standard skeletal representation by incorporating the radius of proxy cylinders, which conveys geometric information about the character's morphology at each joint. This additional geometric data is represented using compact tokens designed to work seamlessly with transformer architectures. This simple yet effective representation, demonstrated through three distinct tokenization strategies, maintains the efficiency of skeletal-based representations while improving the accuracy of motion sequence predictions across diverse morphologies. Notably, our method achieves these results despite being trained on a limited dataset, showcasing its potential for applications with scarce animation data.

Item: ReConForM: Real-time Contact-aware Motion Retargeting for more Diverse Character Morphologies (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Cheynel, Théo; Rossi, Thomas; Bellot-Gurlet, Baptiste; Rohmer, Damien; Cani, Marie-Paule; Bousseau, Adrien; Day, Angela
Preserving semantics, in particular in terms of contacts, is a key challenge when retargeting motion between characters of different morphologies. Our solution relies on a low-dimensional embedding of the character's mesh, based on rigged key vertices that are automatically transferred from the source to the target. Motion descriptors are extracted from the trajectories of these key vertices, providing an embedding that contains combined semantic information about both shape and pose. A novel, adaptive algorithm is then used to automatically select and weight the most relevant features over time, enabling us to efficiently optimize the target motion until it conforms to these constraints, thereby preserving the semantics of the source motion. Our solution allows extensions to several novel use cases where morphology and mesh contacts were previously overlooked, such as multi-character retargeting and motion transfer on uneven terrain. As our results show, our method achieves real-time retargeting onto a wide variety of characters. Extensive experiments and comparisons with state-of-the-art methods using several relevant metrics demonstrate improved results, both in terms of motion smoothness and contact accuracy.
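The morphology-aware encoding entry augments each joint's pose token with the radius of a proxy cylinder. One possible token layout, purely illustrative of the idea (the paper evaluates three distinct tokenization strategies, none of which is reproduced here):

```python
import numpy as np

def joint_token(position, rotation6d, radius):
    """Pack one joint into a transformer token: pose (3D position plus a
    6D rotation representation) and the proxy-cylinder radius carrying
    local morphology. Layout and sizes are hypothetical."""
    return np.concatenate([position, rotation6d, [radius]])   # length 3 + 6 + 1
```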
Item: Smaller than Pixels: Rendering Millions of Stars in Real-Time (The Eurographics Association, 2025)
Schneegans, Simon; Kreskowski, Adrian; Gerndt, Andreas; Ceylan, Duygu; Li, Tzu-Mao
Many applications need to display realistic stars. However, rendering stars with their correct luminance is surprisingly difficult: usually, stars are so far away from the observer that they appear smaller than a single pixel. As one cannot visualize objects smaller than a pixel, one has to either distribute a star's luminance over an entire pixel or draw some kind of proxy geometry for the star. We also have to consider that pixels at the edge of the screen cover a smaller portion of the observer's field of view than pixels in the centre. Hence, single-pixel stars at the edge of the screen have to be drawn proportionally brighter than those in the centre. This is especially important for virtual-reality or dome renderings, where the field of view is large. In this paper, we compare different rendering techniques for stars and show how to compute their luminance based on the solid angle covered by their geometric proxies. This includes point-based stars and various types of camera-aligned billboards. In addition, we present a software rasterizer which outperforms these classic rendering techniques in almost all cases. Furthermore, we show how a perception-based glare filter can be used to efficiently distribute a star's luminance to neighbouring pixels. Our implementation is part of the open-source space-visualization software CosmoScout VR.

Item: Non-linear, Team-based VR Training for Cardiac Arrest Care with enhanced CRM Toolkit (The Eurographics Association, 2025)
Kentros, Mike; Kamarianakis, Manos; Cole, Michael; Popov, Vitaliy; Protopsaltis, Antonis; Papagiannakis, George; Ceylan, Duygu; Li, Tzu-Mao
This paper introduces iREACT, a novel VR simulation addressing key limitations of traditional cardiac arrest (CA) training. Conventional methods struggle to replicate the dynamic nature of real CA events, hindering Crew Resource Management (CRM) skill development. iREACT provides a non-linear, collaborative environment where teams respond to changing patient states, mirroring the complexities of real CA events. By capturing multi-modal data (user actions, cognitive load, visual gaze) and offering real-time and post-session feedback, iREACT enhances CRM assessment beyond traditional methods. A formative evaluation with medical experts underscores its usability and educational value, with potential applications in other high-stakes training scenarios to improve teamwork, communication, and decision-making.

Item: Many-Light Rendering Using ReSTIR-Sampled Shadow Maps (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Zhang, Song; Lin, Daqi; Wyman, Chris; Yuksel, Cem; Bousseau, Adrien; Day, Angela
We present a practical method targeting dynamic shadow maps for many light sources in real-time rendering. We compute full-resolution shadow maps for a subset of lights, which we select with spatiotemporal reservoir resampling (ReSTIR). Our selection strategy automatically regenerates shadow maps for the lights with the strongest contributions to pixels in the current camera view. The remaining lights are handled using imperfect shadow maps, which provide a low-resolution shadow approximation. We significantly reduce computation and storage compared to using full-resolution shadow maps for all lights, and substantially improve shadow quality compared to handling all lights with imperfect shadow maps.
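The stars entry computes luminance from the solid angle covered by a star's proxy geometry and brightens edge-of-screen pixels, which subtend a smaller solid angle. A sketch of the underlying pinhole-camera geometry using the standard cos³ falloff; the paper's exact mapping from flux to pixel values is not reproduced here.

```python
import numpy as np

def pixel_solid_angle(x, y, width, height, fov_y):
    """Solid angle (steradians) subtended by pixel (x, y) of a pinhole
    camera. With focal length f and off-axis angle theta, a unit pixel
    covers cos(theta)^3 / f^2 sr, so edge pixels cover less than centre
    ones and sub-pixel stars drawn there must be proportionally brighter."""
    f = 0.5 * height / np.tan(0.5 * fov_y)   # focal length in pixel units
    dx = x + 0.5 - 0.5 * width               # offset of pixel centre from axis
    dy = y + 0.5 - 0.5 * height
    cos_theta = f / np.sqrt(f * f + dx * dx + dy * dy)
    return cos_theta ** 3 / (f * f)

def star_pixel_value(flux, x, y, width, height, fov_y):
    # Brightness so the star's perceived flux is independent of screen position.
    return flux / pixel_solid_angle(x, y, width, height, fov_y)
```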
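The many-light entry selects which lights receive full-resolution shadow maps via spatiotemporal reservoir resampling. The streaming single-sample weighted reservoir at the heart of ReSTIR-style selection can be sketched as follows; the weight function (an estimate of a light's contribution to the current view) is a placeholder, and the spatial/temporal reuse passes of ReSTIR are omitted.

```python
import random

def reservoir_select(lights, weight, rng=random):
    """Single-sample weighted reservoir sampling: stream candidate lights
    and keep each with probability proportional to its weight, using O(1)
    memory. Returns the chosen light and the total weight seen."""
    chosen, w_sum = None, 0.0
    for light in lights:
        w = weight(light)                 # e.g. estimated view contribution
        w_sum += w
        if w_sum > 0.0 and rng.random() < w / w_sum:
            chosen = light
    return chosen, w_sum
```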