EG 2025 - Short Papers
Browsing EG 2025 - Short Papers by Issue Date
Now showing 1 - 20 of 27
Item: Double QuickCurve: revisiting 3-axis non-planar 3D printing (The Eurographics Association, 2025)
Ottonello, Emilio; Hugron, Pierre-Alexandre; Parmiggiani, Alberto; Lefebvre, Sylvain; Ceylan, Duygu; Li, Tzu-Mao
Additive manufacturing builds physical objects by accumulating layers of solidified material, typically planar layers. Fused filament printers, however, can extrude material along 3D curves, leading to the idea of depositing in a non-planar fashion. In this paper we introduce a novel algorithm for this purpose, targeting simplicity, robustness, and efficiency. Our method interpolates curved slicing surfaces between a top and a bottom slicing surface, optimized to align with the object's curvatures. These slicing surfaces are intersected with the input model to extract non-planar layers and curved deposition trajectories. We further orient trajectories according to the object's curvatures, improving deposition.

Item: Single-Shot Facial Appearance Acquisition without Statistical Appearance Priors (The Eurographics Association, 2025)
Soh, Guan Yu; Ghosh, Abhijeet; Ceylan, Duygu; Li, Tzu-Mao
Single-shot in-the-wild facial reflectance acquisition has been a long-standing challenge in computer graphics and computer vision. Current state-of-the-art methods are typically learning-based, pre-trained on a dataset of facial reflectance data. However, because gathering such datasets is costly and time-consuming, they are usually limited in the number of subjects covered and hence prone to dataset bias. To this end, we propose a novel multi-stage guided optimization with differentiable rendering to tackle this problem without the use of statistical facial appearance priors. This makes our method immune to these biases, and we demonstrate the advantage with qualitative and quantitative evaluations against current state-of-the-art methods.

Item: Lightweight Morphology-Aware Encoding for Motion Learning (The Eurographics Association, 2025)
Wu, Ziyu; Michel, Thomas; Rohmer, Damien; Ceylan, Duygu; Li, Tzu-Mao
We present a lightweight method for encoding, learning, and predicting 3D rigged character motion sequences that considers both the character's pose and morphology. Specifically, we introduce an enhanced skeletal embedding that extends the standard skeletal representation by incorporating the radius of proxy cylinders, which conveys geometric information about the character's morphology at each joint. This additional geometric data is represented using compact tokens designed to work seamlessly with transformer architectures. This simple yet effective representation, demonstrated through three distinct tokenization strategies, maintains the efficiency of skeletal-based representations while enhancing the accuracy of motion sequence predictions across diverse morphologies. Notably, our method achieves these results despite being trained on a limited dataset, showcasing its potential for applications with scarce animation data.

Item: Smaller than Pixels: Rendering Millions of Stars in Real-Time (The Eurographics Association, 2025)
Schneegans, Simon; Kreskowski, Adrian; Gerndt, Andreas; Ceylan, Duygu; Li, Tzu-Mao
Many applications need to display realistic stars. However, rendering stars with their correct luminance is surprisingly difficult: usually, stars are so far away from the observer that they appear smaller than a single pixel. As one cannot visualize objects smaller than a pixel, one has to either distribute a star's luminance over an entire pixel or draw some kind of proxy geometry for the star. We also have to consider that pixels at the edge of the screen cover a smaller portion of the observer's field of view than pixels in the centre. Hence, single-pixel stars at the edge of the screen have to be drawn proportionally brighter than those in the centre. This is especially important for virtual-reality or dome renderings, where the field of view is large. In this paper, we compare different rendering techniques for stars and show how to compute their luminance based on the solid angle covered by their geometric proxies. This includes point-based stars and various types of camera-aligned billboards. In addition, we present a software rasterizer which outperforms these classic rendering techniques in almost all cases. Furthermore, we show how a perception-based glare filter can be used to efficiently distribute a star's luminance to neighbouring pixels. Our implementation is part of the open-source space-visualization software CosmoScout VR.
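The edge-brightening effect described in the stars paper above follows from pinhole-camera geometry: a pixel at offset r from the image centre subtends a solid angle that falls off as cos³θ, so a single-pixel star must be brightened by the inverse ratio. The sketch below illustrates only this geometric relationship, not the paper's renderer; function names and parameters are my own.

```python
import numpy as np

def pixel_solid_angle(px, py, width, height, fov_x):
    """Approximate solid angle (steradians) covered by the pixel at
    (px, py) for a pinhole camera with horizontal field of view
    fov_x (radians). Treats the pixel as a small planar patch."""
    # Focal length in pixel units.
    f = (width / 2.0) / np.tan(fov_x / 2.0)
    # Offset of the pixel centre from the principal point.
    dx = px + 0.5 - width / 2.0
    dy = py + 0.5 - height / 2.0
    # Squared distance from the centre of projection to the pixel.
    d2 = f * f + dx * dx + dy * dy
    # omega = A * cos(theta) / d^2 with pixel area A = 1 and
    # cos(theta) = f / sqrt(d2), giving omega = f / d2^(3/2).
    return f / d2**1.5

def star_brightness_scale(px, py, width, height, fov_x):
    """Factor by which a single-pixel star at (px, py) must be
    brightened so its flux matches a star drawn at the centre."""
    centre = pixel_solid_angle(width // 2, height // 2, width, height, fov_x)
    return centre / pixel_solid_angle(px, py, width, height, fov_x)

# A corner pixel of a wide-angle (100 degree) view must be drawn
# noticeably brighter than a centre pixel:
print(star_brightness_scale(0, 0, 1920, 1080, np.radians(100.0)))
```

The scale factor equals 1/cos³θ, which is why the correction matters most for the wide fields of view of VR and dome displays.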
Item: Non-linear, Team-based VR Training for Cardiac Arrest Care with enhanced CRM Toolkit (The Eurographics Association, 2025)
Kentros, Mike; Kamarianakis, Manos; Cole, Michael; Popov, Vitaliy; Protopsaltis, Antonis; Papagiannakis, George; Ceylan, Duygu; Li, Tzu-Mao
This paper introduces iREACT, a novel VR simulation addressing key limitations in traditional cardiac arrest (CA) training. Conventional methods struggle to replicate the dynamic nature of real CA events, hindering Crew Resource Management (CRM) skill development. iREACT provides a non-linear, collaborative environment where teams respond to changing patient states, mirroring real CA complexities. By capturing multi-modal data (user actions, cognitive load, visual gaze) and offering real-time and post-session feedback, iREACT enhances CRM assessment beyond traditional methods. A formative evaluation with medical experts underscores its usability and educational value, with potential applications in other high-stakes training scenarios to improve teamwork, communication, and decision-making.

Item: Approximate and Exact Buoyancy Calculation for Real-time Floating Simulation of Meshes (The Eurographics Association, 2025)
Fábián, Gábor; Ceylan, Duygu; Li, Tzu-Mao
In this paper, we present methods for simulating the flotation of bodies represented by triangular meshes. The primary challenge in creating such a simulation is determining the buoyant force and its reference point. We propose five algorithms, three approximate and two exact, that enable the real-time calculation of buoyant forces. Each algorithm is based on rigorous physical and mathematical principles, performing calculations directly on the triangular mesh rather than on an approximation of it. Finally, we test the accuracy and efficiency of these algorithms on simple examples.
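As background for the buoyancy paper above: for a closed, fully submerged mesh, the buoyant force and its reference point (the centre of buoyancy) follow from the displaced volume and its centroid, both computable exactly from signed tetrahedra via the divergence theorem. This is a minimal sketch of that textbook computation, not the paper's five algorithms, which also handle partially submerged bodies; names and the water-density constant are illustrative.

```python
import numpy as np

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def buoyancy(vertices, triangles):
    """Buoyant force (N) and centre of buoyancy for a closed, fully
    submerged triangle mesh with outward-facing normals.

    vertices:  (n, 3) float array
    triangles: (m, 3) int array of vertex indices
    """
    a = vertices[triangles[:, 0]]
    b = vertices[triangles[:, 1]]
    c = vertices[triangles[:, 2]]

    # Signed volume of the tetrahedron (origin, a, b, c) per triangle.
    signed_vol = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0
    volume = signed_vol.sum()

    # Each tetrahedron's centroid is the mean of its four corners (the
    # origin contributes zero); the volume-weighted sum is the centroid
    # of the displaced water, i.e. the centre of buoyancy.
    tet_centroids = (a + b + c) / 4.0
    centre = (signed_vol[:, None] * tet_centroids).sum(axis=0) / volume

    force = RHO_WATER * G * volume  # acts upward through `centre`
    return force, centre

# Unit cube: displaces 1 m^3, centre of buoyancy at (0.5, 0.5, 0.5).
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
tris = np.array([[0,1,3],[0,3,2],[4,6,7],[4,7,5],[0,4,5],[0,5,1],
                 [2,3,7],[2,7,6],[0,2,6],[0,6,4],[1,5,7],[1,7,3]])
print(buoyancy(verts, tris))
```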
Item: Light the Sprite: Pixel Art Dynamic Light Map Generation (The Eurographics Association, 2025)
Nikolov, Ivan; Ceylan, Duygu; Li, Tzu-Mao
Correct lighting and shading are vital for pixel art design. Automating texture generation, such as normal, depth, and occlusion maps, has been a long-standing focus. We extend this by proposing a deep learning model that generates point and directional light maps from RGB pixel art sprites and specified light vectors. Our approach modifies a UNet architecture with conditional instance normalization (CIN) layers to incorporate positional and directional information, using ZoeDepth to produce depth data for training. Testing on a popular pixel art dataset shows that the generated light maps closely match those derived from depth or normal maps, as well as those created manually. The model effectively relights complex sprites across styles and functions in real time, enhancing artist workflows. The code and dataset are available at https://github.com/IvanNik17/light-sprite.

Item: LabanLab: An Interactive Choreographical System with Labanotation-Motion Preview (The Eurographics Association, 2025)
Yan, Zhe; Yu, Borou; Wang, Zeyu; Ceylan, Duygu; Li, Tzu-Mao
This paper introduces LabanLab, a novel choreography system that facilitates the creation of dance notation with motion preview. LabanLab features an interactive interface for creating Labanotation staff coupled with visualization of the corresponding movements. Leveraging large language models (LLMs) and text-to-motion frameworks, LabanLab translates symbolic notation into natural language descriptions to generate lifelike character animations. As the first web-based Labanotation editor with motion synthesis capabilities, LabanLab makes Labanotation an input modality for multitrack human motion generation, empowering choreographers with practical tools and inviting novices to explore dance notation interactively.

Item: Parallel Dense-Geometry-Format Topology Decompression (The Eurographics Association, 2025)
Meyer, Quirin; Barczak, Joshua; Reitter, Sander; Benthin, Carsten; Ceylan, Duygu; Li, Tzu-Mao
Dense Geometry Format (DGF) [BBM24] is a hardware-friendly representation for compressed triangle meshes, specifically designed to support GPU hardware ray tracing. It decomposes a mesh into meshlets, i.e., small meshes with up to 64 positions, triangles, primitive indices, and opacity values, stored in a 128-byte block. However, accessing a triangle requires a slow sequential decompression algorithm with O(T) steps, where T is the number of triangles in a DGF block. We propose a novel parallel algorithm with O(log T) steps for arbitrary T. For DGF, where T ≤ 64, we transform our algorithm to allow O(1) access. We believe that our algorithm is suitable for hardware implementations. With our algorithm, a custom intersection shader outperforms the existing serial decompression method. Further, our mesh shader implementation achieves rasterization performance competitive with the vertex pipeline. Finally, we show how our method may parallelize other topology decompression schemes.
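The DGF abstract does not spell out its parallel algorithm, but reducing a length-T sequential dependency to O(log T) data-parallel steps is classically done with a parallel prefix scan. Below is a minimal Hillis-Steele-style scan in Python; the outer loop mirrors the O(log T) parallel steps a GPU or hardware unit would take. The toy "variable bit count" use case and all names are illustrative assumptions, not the paper's encoding.

```python
import numpy as np

def inclusive_scan(x):
    """Hillis-Steele inclusive prefix sum in O(log n) parallel steps.
    Each outer-loop iteration is one parallel step; the vectorized
    numpy add stands in for per-lane work done simultaneously."""
    x = x.copy()
    n = len(x)
    offset = 1
    while offset < n:
        # Every element i >= offset adds the value `offset` slots back.
        x[offset:] = x[offset:] + x[:-offset]
        offset *= 2
    return x

# Toy use: suppose each triangle record consumes a variable number of
# bits. A scan over the per-triangle bit counts yields every triangle's
# starting bit offset at once, so all T triangles can be decoded in
# parallel instead of walking the block sequentially.
bit_counts = np.array([6, 2, 2, 6, 4, 2, 6, 2])
offsets = inclusive_scan(bit_counts) - bit_counts  # exclusive scan
print(offsets)  # -> [ 0  6  8 10 16 20 22 28]
```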
Item: Cardioid Caustics Generation with Conditional Diffusion Models (The Eurographics Association, 2025)
Uss, Wojciech; Kaliński, Wojciech; Kuznetsov, Alexandr; Anand, Harish; Kim, Sungye; Ceylan, Duygu; Li, Tzu-Mao
Despite the latest advances in generative neural techniques for producing photorealistic images, they lack the ability to generate multi-bounce, high-frequency lighting effects such as caustics. In this work, we tackle the problem of generating cardioid-shaped reflective caustics using diffusion-based generative models. We approach this as conditional image generation, using a diffusion-based model conditioned on multiple images carrying geometric, material, and illumination information as well as light properties. We introduce a framework to fine-tune a pre-trained diffusion model and present results with visually plausible caustics.

Item: Pixels2Points: Fusing 2D and 3D Features for Facial Skin Segmentation (The Eurographics Association, 2025)
Chen, Victoria Yue; Wang, Daoye; Garbin, Stephan; Bednarik, Jan; Winberg, Sebastian; Bolkart, Timo; Beeler, Thabo; Ceylan, Duygu; Li, Tzu-Mao
Face registration deforms a template mesh to closely fit a 3D face scan. Registration quality commonly degrades in non-skin regions (e.g., hair, beard, accessories), because the optimized template-to-scan distance pulls the template mesh towards the noisy scan surface. Improving registration quality therefore requires a clean separation of skin and non-skin regions on the scan mesh. However, existing image-based (2D) and scan-based (3D) segmentation methods perform poorly: image-based segmentation produces multi-view inconsistent masks and cannot account for scan inaccuracies or scan-image misalignment, while scan-based methods suffer from lower spatial resolution than images. In this work, we introduce a novel method that accurately separates skin from non-skin geometry on 3D human head scans. For this, our method extracts features from multi-view images using a frozen image foundation model and aggregates these features in 3D. These lifted 2D features are then fused with 3D geometric features extracted from the scan mesh to predict a segmentation mask directly on the scan mesh. We show that our segmentations improve registration accuracy over pure 2D and pure 3D segmentation methods by 8.89% and 14.3%, respectively. Although trained only on synthetic data, our model generalizes well to real data.
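The lifting step in Pixels2Points (project each scan vertex into every camera, sample the frozen foundation model's feature maps there, and aggregate across views before fusing with 3D features) can be sketched compactly. This is a schematic reconstruction from the abstract under assumed pinhole cameras and nearest-neighbour sampling; the actual projection model, sampling scheme, and fusion network are not specified there.

```python
import numpy as np

def lift_2d_features(verts, feat_maps, intrinsics, extrinsics):
    """Average per-vertex 2D features across views (no occlusion
    test in this sketch).

    verts:      (n, 3) scan vertices in world space
    feat_maps:  list of (H, W, C) feature maps, one per view
    intrinsics: list of (3, 3) camera matrices K
    extrinsics: list of (3, 4) world-to-camera matrices [R|t]
    Returns an (n, C) array of lifted features.
    """
    n, c = len(verts), feat_maps[0].shape[2]
    acc = np.zeros((n, c))
    hits = np.zeros((n, 1))
    verts_h = np.hstack([verts, np.ones((n, 1))])  # homogeneous coords

    for fmap, K, Rt in zip(feat_maps, intrinsics, extrinsics):
        cam = verts_h @ Rt.T            # world -> camera space
        pix = cam @ K.T                 # camera -> image plane
        z = pix[:, 2:3]
        uv = pix[:, :2] / np.clip(z, 1e-6, None)
        h, w = fmap.shape[:2]
        # Keep vertices in front of the camera and inside the image.
        ok = (z[:, 0] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
             & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        ui = uv[ok].astype(int)
        acc[ok] += fmap[ui[:, 1], ui[:, 0]]  # nearest-neighbour sample
        hits[ok] += 1

    return acc / np.clip(hits, 1, None)

# The lifted features would then be concatenated with per-vertex
# geometric features and fed to a segmentation head, e.g.:
#   logits = mlp(np.concatenate([lifted, geom_feats], axis=1))
```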
Item: EUROGRAPHICS 2025: Short Papers Frontmatter (Eurographics Association, 2025)
Ceylan, Duygu; Li, Tzu-Mao

Item: Implicit Shape Avatar Generalization across Pose and Identity (The Eurographics Association, 2025)
Loranchet, Guillaume; Hellier, Pierre; Schnitzler, Francois; Boukhayma, Adnane; Regateiro, Joao; Multon, Franck; Ceylan, Duygu; Li, Tzu-Mao
The creation of realistic animated avatars has become a hot topic in both academia and the creative industry. Recent advancements in deep learning and implicit representations have opened new research avenues, particularly for enhancing avatar details with lightweight models. This paper introduces an improvement over the state-of-the-art implicit Fast-SNARF method that permits generalization to novel motions and shape identities. Fast-SNARF trains two networks: an occupancy network to predict the shape of a character in canonical space, and a Linear Blend Skinning network to deform it into arbitrary poses. However, it requires a separate model for each subject. We extend this work by conditioning both networks on an identity parameter, enabling a single model to generalize across multiple identities without increasing the model's size compared to Fast-SNARF.

Item: Automated Skeleton Transformations on 3D Tree Models Captured from an RGB Video (The Eurographics Association, 2025)
Michels, Joren; Moonen, Steven; Güney, Enes; Temsamani, Abdellatif Bey; Michiels, Nick; Ceylan, Duygu; Li, Tzu-Mao
Much work has been done on generating realistic-looking 3D models of trees. In most cases, L-systems are used to create variations of specific trees from a set of rules. While these techniques achieve good results, they require knowledge of the tree's structure to construct the generative rules. We propose a system that can create variations of trees captured in a single RGB video. Using our method, plausible variations can be created without prior knowledge of the specific type of tree, resulting in a fast and cost-efficient way to generate trees that resemble their real-life counterparts.

Item: Neural Facial Deformation Transfer (The Eurographics Association, 2025)
Chandran, Prashanth; Ciccone, Loïc; Zoss, Gaspard; Bradley, Derek; Ceylan, Duygu; Li, Tzu-Mao
We address the practical problem of generating facial blendshapes and reference animations for a new 3D character in production environments where blendshape expressions and reference animations are readily available on a pre-defined template character. We propose Neural Facial Deformation Transfer (NFDT), a data-driven approach that transfers facial expressions from such a template character to new target characters given only the target's neutral shape. To accomplish this, we first present a simple data generation strategy to automatically create a large training dataset consisting of pairs of template and target character shapes in the same expression. We then leverage this dataset through a decoder-only transformer that transfers facial expressions from the template character to a target character with high fidelity. Through quantitative evaluations and a user study, we demonstrate that NFDT surpasses the previous state of the art in facial expression transfer. NFDT provides good results across varying mesh topologies, generalizes to humanoid creatures, and can save time and cost in facial animation workflows.

Item: Multi-Objective Packing of 3D Objects into Arbitrary Containers (The Eurographics Association, 2025)
Meißenhelter, Hermann; Weller, Rene; Zachmann, Gabriel; Ceylan, Duygu; Li, Tzu-Mao
Packing problems arise in numerous real-world applications and often take diverse forms. We focus on the relatively underexplored task of packing a set of arbitrary 3D objects, drawn from a predefined distribution, into a single arbitrary 3D container. We simultaneously optimize two potentially conflicting objectives: maximizing the packed volume and maintaining sufficient spacing among objects of the same type to prevent clustering. We present an algorithm that computes solutions to this challenging problem heuristically. Our approach is a flexible two-tier pipeline that computes and then refines an initial arrangement. Our results confirm that this approach achieves dense packings across various object and container shapes.
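The two competing objectives in the packing paper above (packed volume versus spacing between same-type objects) can be made concrete with a small scoring function of the kind a heuristic refinement loop might maximize. The weighting, the minimum-distance form of the spacing term, and all names below are assumptions for illustration; the paper's actual objective is not given in the abstract.

```python
import numpy as np
from itertools import combinations

def packing_score(placed, container_volume, w_spacing=0.5):
    """Score an arrangement under two competing objectives.

    placed: list of (type_id, volume, centroid) tuples for objects
            currently inside the container.
    Returns fill ratio plus a reward for the smallest centroid
    distance between any two objects of the same type.
    """
    fill = sum(v for _, v, _ in placed) / container_volume

    # Spacing term: the smallest same-type pairwise distance.
    # Rewarding it discourages clustering of identical objects.
    same_type = [
        np.linalg.norm(ca - cb)
        for (ta, _, ca), (tb, _, cb) in combinations(placed, 2)
        if ta == tb
    ]
    spacing = min(same_type) if same_type else 0.0

    return fill + w_spacing * spacing

# Two arrangements of three objects (two of type 0): in `b` the
# identical objects are spread apart, so `b` scores higher.
a = [(0, 1.0, np.array([0., 0, 0])), (0, 1.0, np.array([0.5, 0, 0])),
     (1, 1.0, np.array([2., 0, 0]))]
b = [(0, 1.0, np.array([0., 0, 0])), (0, 1.0, np.array([2., 0, 0])),
     (1, 1.0, np.array([1., 0, 0]))]
print(packing_score(a, 10.0), packing_score(b, 10.0))
```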
Item: Personalized Visual Dubbing through Virtual Dubber and Full Head Reenactment (The Eurographics Association, 2025)
Jeon, Bobae; Paquette, Eric; Mudur, Sudhir; Popa, Tiberiu; Ceylan, Duygu; Li, Tzu-Mao
Visual dubbing aims to modify facial expressions to "lip-sync" a new audio track. While person-generic talking-head generation methods achieve expressive lip synchronization across arbitrary identities, they usually lack person-specific details and fail to generate high-quality results. Conversely, person-specific methods require extensive training. Our method combines the strengths of both by incorporating a virtual dubber, a person-generic talking head, as an intermediate representation. We then employ an autoencoder-based person-specific identity-swapping network to transfer the actor's identity, enabling full-head reenactment that includes hair, face, ears, and neck. This eliminates artifacts while ensuring temporal consistency. Our quantitative and qualitative evaluations demonstrate that our method achieves a superior balance between lip-sync accuracy and realistic facial reenactment.

Item: 3D Gabor Splatting: Reconstruction of High-frequency Surface Texture using Gabor Noise (The Eurographics Association, 2025)
Watanabe, Haato; Tojo, Kenji; Umetani, Nobuyuki; Ceylan, Duygu; Li, Tzu-Mao
3D Gaussian splatting has seen explosive popularity in the past few years in the field of novel view synthesis. Its lightweight, differentiable representation of the radiance field enables rapid, high-quality reconstruction and fast rendering. However, reconstructing objects with high-frequency surface textures (e.g., fine stripes) requires many skinny Gaussian kernels, because each Gaussian represents only one color when viewed from a given direction; reconstructing a stripe pattern, for example, requires at least as many Gaussians as there are stripes. We present 3D Gabor splatting, which augments the Gaussian kernel to represent spatially high-frequency signals using Gabor noise. The Gabor kernel is a combination of a Gaussian term and spatially fluctuating wave functions, making it suitable for representing high-frequency spatial texture. We demonstrate that our 3D Gabor splatting can reconstruct various high-frequency textures on objects.
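The kernel at the heart of 3D Gabor splatting, as described in the abstract, combines a Gaussian envelope with an oscillating wave. A minimal evaluation of such a kernel looks like the sketch below; the exact parameterization used by the paper (number of wave components, phase handling, anisotropy) is an assumption here.

```python
import numpy as np

def gabor_kernel(x, mu, inv_cov, freq, phase):
    """Evaluate a 3D Gabor kernel at points x.

    x:       (n, 3) query points
    mu:      (3,)   kernel centre
    inv_cov: (3, 3) inverse covariance of the Gaussian envelope
    freq:    (3,)   wave vector (direction * frequency, cycles/unit)
    phase:   scalar phase offset
    """
    d = x - mu
    # Gaussian envelope, as in standard Gaussian splatting.
    envelope = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, inv_cov, d))
    # Spatially fluctuating wave term; this is what lets one kernel
    # represent several stripes instead of a single flat color.
    wave = np.cos(2.0 * np.pi * (d @ freq) + phase)
    return envelope * wave

# One kernel oscillating 4 times per unit along x:
pts = np.stack([np.linspace(-1, 1, 9), np.zeros(9), np.zeros(9)], axis=1)
vals = gabor_kernel(pts, mu=np.zeros(3), inv_cov=np.eye(3),
                    freq=np.array([4.0, 0.0, 0.0]), phase=0.0)
print(np.round(vals, 3))
```

Setting freq to zero recovers a plain Gaussian splat, which is consistent with the paper framing the Gabor kernel as an augmentation of the Gaussian kernel.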
Item: TemPCC: Completing Temporal Occlusions in Large Dynamic Point Clouds Captured by Multiple RGB-D Cameras (The Eurographics Association, 2025)
Mühlenbrock, Andre; Weller, Rene; Zachmann, Gabriel; Ceylan, Duygu; Li, Tzu-Mao
We present TemPCC, an approach for completing temporal occlusions in large dynamic point clouds. Our method manages a point set over time, integrates new observations into this set, and predicts the motion of occluded points based on the flow of the surrounding visible ones. Unlike existing methods, our approach efficiently handles arbitrarily large point sets with linear complexity, does not reconstruct a canonical representation, and considers only local features. Our tests, performed on an Nvidia GeForce RTX 4090, demonstrate that our approach can complete a frame with 30,000 points in under 30 ms while, in general, being able to handle point sets exceeding 1,000,000 points. This scalability enables the mitigation of temporal occlusions across entire scenes captured by multi-RGB-D camera setups. Our initial results demonstrate that self-occlusions are effectively completed and that the method generalizes to unknown scenes despite limited training data.

Item: Controlled Image Variability via Diffusion Processes (The Eurographics Association, 2025)
Zhu, Yueze; Mitra, Niloy J.; Ceylan, Duygu; Li, Tzu-Mao
Diffusion models have shown remarkable abilities in generating realistic images. Unfortunately, diffusion processes do not directly produce diverse samples. Recent work has addressed this problem by applying a joint-particle, time-evolving potential force that encourages varied and distinct generations. However, such methods focus on improving diversity across a batch of generations rather than on producing variations of a specific sample. In this paper, we propose a method for creating subtle variations of a single (generated) image: Single Sample Refinement, a simple and training-free method for improving the diversity of one specific sample at different levels of variability. This mode is useful for creative content generation, allowing users to explore controlled variations without sacrificing the identity of the main objects.
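The abstract does not detail Single Sample Refinement, but a standard training-free way to obtain controlled variations of one generated image is to partially renoise it to an intermediate diffusion timestep and re-denoise with different seeds; smaller renoising depths give subtler variations. The sketch below illustrates that generic SDEdit-style pattern with a hypothetical denoise_from callback standing in for whatever sampler is used; it is not the paper's method.

```python
import numpy as np

def vary_sample(image, denoise_from, t_frac=0.3, n_variants=4, rng=None):
    """Generate controlled variations of one image by partial
    renoising (a generic pattern, not necessarily the paper's
    Single Sample Refinement).

    image:        (H, W, C) float array in [-1, 1]
    denoise_from: callable (noisy_image, t_frac) -> image; any
                  diffusion sampler started at timestep t_frac
    t_frac:       renoising depth in (0, 1); lower = subtler changes
    """
    rng = rng or np.random.default_rng()
    # Variance-preserving forward step: interpolate towards Gaussian
    # noise by an amount controlled by t_frac.
    alpha = 1.0 - t_frac
    variants = []
    for _ in range(n_variants):
        noise = rng.standard_normal(image.shape)
        noisy = np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise
        variants.append(denoise_from(noisy, t_frac))
    return variants

# With a real sampler plugged in as `denoise_from`, sweeping t_frac
# (e.g. 0.2, 0.4, 0.6) yields variations at increasing levels, while
# low values preserve the identity of the main objects.
```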