Eurographics Digital Library

This is the DSpace 7 platform of the Eurographics Digital Library.
  • The contents of the Eurographics Digital Library Archive are freely accessible. Only access to the full-text documents of the journal Computer Graphics Forum (joint property of Wiley and Eurographics) is restricted to Eurographics members, members of institutions that hold an Institutional Membership with Eurographics, and users of the TIB Hannover. On the item pages you will find purchase links to the TIB Hannover.
  • As a Eurographics member, you can log in with your email address and password from https://services.eg.org. If you belong to an institutional member and are using a computer within an IP range registered with Eurographics, you can access the full texts directly.
  • From 2022 onward, all new publications by Eurographics are licensed under Creative Commons. Publishing with Eurographics is Plan-S compliant. Please visit the Eurographics Licensing and Open Access Policy page for more details.
 

Recent Submissions

Item
CANRIG: Cross-Attention Neural Face Rigging with Variable Local Control
(The Eurographics Association and John Wiley & Sons Ltd., 2026) Mohammadi, Arad; Weiss, Sebastian; Buhmann, Jakob; Ciccone, Loic; Sumner, Robert W.; Bradley, Derek; Guay, Martin; Masia, Belen; Thies, Justus
Facial animation is one of the most labor-intensive aspects of animation and visual effects, as traditional rigging consumes weeks of expert time and forces animators to spend countless hours manipulating hundreds of controls to achieve varied expressions. This technical complexity creates a barrier between artistic vision and execution, limiting creative exploration and iteration. In this paper, we introduce CANRig, a fully automated neural facial rigging approach that simplifies the process of creating and editing facial poses by leveraging global correlations learned from data. Unlike existing neural face models that either sacrifice local control or demand extensive manual region setup, our method introduces continuous local control through a novel conditioning mechanism that operates on a variable region. By modeling deformation as cross-attention between control handles and mesh vertices, modulated by a user-defined region, we enable seamless transitions from precise local adjustments to broad global changes. We further extend our method with a shape-preserving workflow that enables iterative edits, guaranteeing that earlier edits remain intact even as controls are reconfigured. Our method delivers the best of both worlds: the automation and naturalness of neural methods with the granular control that professional animators demand, and we demonstrate its effectiveness across multiple applications in both animation and high-end visual effects pipelines.
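
The abstract describes deformation as cross-attention from control handles to mesh vertices, gated by a user-defined region. The minimal PyTorch sketch below illustrates that general idea only; the layer sizes, the soft per-vertex region weights, and the displacement-based values are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionModulatedCrossAttention(nn.Module):
    """Toy cross-attention from control handles (keys/values) to mesh vertices
    (queries), with per-vertex region weights gating the predicted displacement."""
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(3, dim)    # vertex positions -> queries
        self.k = nn.Linear(3, dim)    # handle positions -> keys
        self.v = nn.Linear(3, dim)    # handle displacements -> values
        self.out = nn.Linear(dim, 3)  # per-vertex displacement

    def forward(self, verts, handles, handle_disp, region):
        # verts: (V, 3), handles: (H, 3), handle_disp: (H, 3), region: (V,) in [0, 1]
        q, k, v = self.q(verts), self.k(handles), self.v(handle_disp)
        attn = F.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)  # (V, H)
        disp = self.out(attn @ v)                                  # (V, 3)
        return verts + region.unsqueeze(-1) * disp                 # region gates locality

# usage with random stand-in data
verts = torch.rand(1000, 3)
handles = torch.rand(20, 3)
handle_disp = 0.1 * torch.randn(20, 3)
region = torch.rand(1000)             # near 1 inside the edited region, near 0 elsewhere
deformed = RegionModulatedCrossAttention()(verts, handles, handle_disp, region)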
Item
Generative Cutout Animation
(The Eurographics Association and John Wiley & Sons Ltd., 2026) Puhachov, Ivan; Aigerman, Noam; Groueix, Thibault; Bessmeltsev, Mikhail; Masia, Belen; Thies, Justus
Cutout animation is one of the earliest forms of animation, and to this day remains a popular technique featured in numerous productions, including the Monty Python films and the South Park series. Most computer animation systems, however, focus on different styles, such as cel animation, making cutout animation somewhat underexplored. As creating cutouts by hand is meticulous work, we propose a novel generative cutout animation system. Taking a skeletal animation and a text prompt as input, we automatically generate a 2.5D cutout rig ready for production in films and games. Our system optimizes cutout images in multiple target poses with an SDS (Score Distillation Sampling) loss and a LoRA (Low-Rank Adaptation) prior. Naïvely optimizing an SDS loss, however, would lead to inconsistent target pose images and, as a result, blurry or transparent cutouts. To address this, we introduce a novel optimization with techniques targeting pose and noise consistency, resulting in coherent target images and sharp cutouts. We validate our system with a gallery of results, comparisons with previous work, ablations, and other analyses. Once generated, our cutout rigs can be used for the given input animation, repurposed for other animations, or edited as independent assets.
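
The core optimization named in the abstract, Score Distillation Sampling against a diffusion prior, can be sketched generically as follows. The weighting w(t), the dummy denoiser, and the single descent step are illustrative assumptions; the paper's pose- and noise-consistency techniques are not reproduced here.

import torch

def sds_grad(image, eps_model, alphas_cumprod, t):
    """One generic Score Distillation Sampling step (sketch, not the paper's exact loss).
    image: (1, C, H, W) render of the cutout; eps_model(noisy, t) predicts noise."""
    a = alphas_cumprod[t]
    noise = torch.randn_like(image)
    noisy = a.sqrt() * image + (1 - a).sqrt() * noise     # forward diffusion at step t
    with torch.no_grad():
        eps_pred = eps_model(noisy, t)                    # frozen (LoRA-adapted) prior
    w = 1 - a                                             # a common weighting choice
    return w * (eps_pred - noise)                         # gradient pushed back to the image

# usage with a dummy denoiser standing in for a LoRA-adapted diffusion model
img = torch.rand(1, 3, 64, 64, requires_grad=True)
alphas = torch.linspace(0.9999, 0.01, 1000)
dummy_eps = lambda x, t: torch.zeros_like(x)
with torch.no_grad():
    img -= 0.1 * sds_grad(img, dummy_eps, alphas, t=500)  # one SDS descent step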
Item
Embedding Optimization of Layouts via Distortion Minimization
(The Eurographics Association and John Wiley & Sons Ltd., 2026) Heuschling, Alexandra; Lim, Isaak; Kobbelt, Leif; Masia, Belen; Thies, Justus
Given an embedding of a layout in the surface of a target mesh, we consider the problem of optimizing the embedding geometrically. Layout embeddings partition the surface into multiple disk-like patches, making them particularly useful for parametrization and remeshing tasks such as quad-remeshing, since these problems can then be solved on simpler subdomains. Existing methods either cannot guarantee that patch connectivity is maintained, which limits downstream applications, or are specialized for quad layout optimization and rely on principal curvature information. We propose a framework that balances per-patch distortion minimization with strict connectivity control through an explicit representation. By inserting additional nodes along layout arcs, we can embed the arcs as piecewise geodesic curves on the surface. This sampling of arcs provides additional flexibility where required, enabling joint optimization of both node positions and arc embeddings. Our representation naturally supports a multi-resolution workflow: optimization on coarse meshes can be prolongated to high-resolution inputs. We demonstrate its effectiveness in applications requiring connectivity-preserving, low-distortion surface layouts.
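
The abstract mentions inserting extra nodes along layout arcs and embedding them as piecewise geodesic curves. A toy sketch of that idea, using Laplacian relaxation with re-projection onto a stand-in surface (a unit sphere), is given below; the actual method optimizes distortion on a triangle mesh and is not reproduced here.

import numpy as np

def straighten_arc_on_surface(arc, project, iters=200, step=0.5):
    """Relax interior samples of a layout arc toward a piecewise-geodesic embedding
    (toy sketch: local averaging plus projection back onto the surface; endpoints fixed)."""
    arc = arc.copy()
    for _ in range(iters):
        mid = 0.5 * (arc[:-2] + arc[2:])           # Laplacian smoothing target
        arc[1:-1] += step * (mid - arc[1:-1])      # shorten the polyline
        arc[1:-1] = project(arc[1:-1])             # keep samples on the surface
    return arc

# usage: a noisy arc constrained to the unit sphere as a stand-in target surface
project_to_sphere = lambda p: p / np.linalg.norm(p, axis=-1, keepdims=True)
t = np.linspace(0, np.pi / 2, 12)[:, None]
arc = project_to_sphere(np.c_[np.cos(t), np.sin(t), 0.2 * np.random.rand(12, 1)])
smoothed = straighten_arc_on_surface(arc, project_to_sphere)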
Item
SAGE: Structure-Aware Generative Video Transitions between Diverse Clips
(The Eurographics Association and John Wiley & Sons Ltd., 2026) Kan, Mia; Liu, Yilin; Mitra, Niloy J.; Masia, Belen; Thies, Justus
Video transitions aim to synthesize intermediate frames between two clips, but naïve approaches such as linear blending introduce artifacts that limit professional use or break temporal coherence. Traditional techniques (cross-fades, morphing, frame interpolation) and recent generative inbetweening methods can produce high-quality plausible intermediates, but they struggle to bridge diverse clips involving large temporal gaps or significant semantic differences, leaving a need for content-aware and visually coherent transitions. We address this challenge by drawing on artistic workflows, distilling strategies such as aligning silhouettes and interpolating salient features to preserve structure and perceptual continuity. Building on these strategies, we propose SAGE (Structure-Aware Generative vidEo transitions) as a simple yet effective zero-shot approach that combines structural guidance, provided via line maps and motion flow, with generative synthesis, enabling smooth, motion-consistent transitions without fine-tuning. Extensive experiments and comparisons with current alternatives demonstrate that SAGE outperforms both classical and the latest generative baselines on quantitative metrics and user studies for producing transitions between diverse clips.
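
As a rough illustration of structural guidance for a transition, the sketch below interpolates line maps between the two clips' boundary frames while shifting one map along a given motion flow. The forward-splat warp, the flow input, and the linear blend are simplifying assumptions; SAGE's generative synthesis stage is not shown.

import numpy as np

def transition_guidance(lines_a, lines_b, flow_ab, n_frames=8):
    """Toy structural guidance for a transition: interpolate line maps between the two
    clips' boundary frames, shifting map A along a given motion flow before blending.
    lines_a, lines_b: (H, W) line maps; flow_ab: (H, W, 2) pixel displacements."""
    h, w = lines_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    frames = []
    for i in range(n_frames):
        t = (i + 1) / (n_frames + 1)
        # warp line map A part of the way along the flow toward clip B
        yw = np.clip((ys + t * flow_ab[..., 1]).round().astype(int), 0, h - 1)
        xw = np.clip((xs + t * flow_ab[..., 0]).round().astype(int), 0, w - 1)
        warped_a = np.zeros_like(lines_a)
        warped_a[yw, xw] = lines_a             # nearest-neighbour forward splat
        frames.append((1 - t) * warped_a + t * lines_b)
    return frames                              # per-frame guidance for a generative inbetweener

# usage with random stand-in data
la, lb = np.random.rand(64, 64), np.random.rand(64, 64)
flow = np.random.randn(64, 64, 2)
guidance = transition_guidance(la, lb, flow)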
Item
ZeroScene: A Zero-Shot Framework for 3D Scene Generation from a Single Image and Controllable Texture Editing
(The Eurographics Association and John Wiley & Sons Ltd., 2026) Tang, Xiang; Li, Ruotong; Fan, Xiaopeng; Masia, Belen; Thies, Justus
In the field of 3D content generation, single image scene reconstruction methods still struggle to simultaneously ensure the quality of individual assets and the coherence of the overall scene in complex environments, while texture editing techniques often fail to maintain both local continuity and multi-view consistency. In this paper, we propose ZeroScene, a novel system that leverages the prior knowledge of large vision models to accomplish both single image-to-3D scene reconstruction and texture editing in a zero-shot manner. ZeroScene extracts object-level 2D segmentation and depth information from input images to infer spatial relationships within the scene. It then jointly optimizes 3D and 2D projection losses of the point cloud to update object poses for precise scene alignment, ultimately constructing a coherent and complete 3D scene that encompasses both foreground and background. Moreover, ZeroScene supports texture editing of objects in the scene. By imposing constraints on the diffusion model and introducing a mask-guided progressive image generation strategy, we effectively maintain texture consistency across multiple viewpoints and further enhance the realism of rendered results through Physically Based Rendering (PBR) material estimation. Experimental results demonstrate that our framework not only ensures the geometric and appearance accuracy of generated assets, but also faithfully reconstructs scene layouts and produces highly detailed textures that closely align with text prompts. Leveraging generative artificial intelligence, ZeroScene can transform 2D images into diversified 3D worlds with various styles, showing broad application potential in virtual content creation such as digital twins and immersive game production, while also effectively supporting real-to-sim transfer in robotics through the generation of highly realistic simulation environments.
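
One step the abstract describes, lifting per-object depth and segmentation into 3D to reason about scene layout, can be sketched with a pinhole back-projection as below. The camera intrinsics and synthetic data are illustrative assumptions; ZeroScene's pose optimization and texture editing stages are not reproduced.

import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Lift an object's masked depth pixels to a 3D point cloud with a pinhole camera
    (sketch of the depth/segmentation lifting step, not ZeroScene's full pipeline)."""
    ys, xs = np.nonzero(mask)
    z = depth[ys, xs]
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=-1)    # (N, 3) points in camera space

# usage with synthetic depth and a circular object mask
h, w = 120, 160
depth = 2.0 + 0.1 * np.random.rand(h, w)
yy, xx = np.mgrid[0:h, 0:w]
mask = (yy - 60) ** 2 + (xx - 80) ** 2 < 30 ** 2
cloud = backproject(depth, mask, fx=150.0, fy=150.0, cx=80.0, cy=60.0)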