Eurographics Digital Library

This is the DSpace 7 platform of the Eurographics Digital Library.
  • The contents of the Eurographics Digital Library Archive are freely accessible. Only access to the full-text documents of the journal Computer Graphics Forum (joint property of Wiley and Eurographics) is restricted to Eurographics members, members of institutions holding an Institutional Membership with Eurographics, and users of the TIB Hannover. The item pages provide purchase links to the TIB Hannover.
  • As a Eurographics member, you can log in with your email address and password from https://services.eg.org. If you belong to an institutional member and are using a computer within an IP range registered with Eurographics, you have immediate access.
  • Since 2022, all new publications by Eurographics have been licensed under Creative Commons. Publishing with Eurographics is Plan-S compliant. Please see the Eurographics Licensing and Open Access Policy for more details.
 

Recent Submissions

Item
Robust Diffusion-based Motion In-betweening
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Qin, Jia; Yan, Peng; An, Bo; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
The emergence of learning-based motion in-betweening techniques offers animators a more efficient way to animate characters. However, existing non-generative methods either struggle to support long transition generation or produce results that lack diversity. Meanwhile, diffusion models have shown promising results in synthesizing diverse and high-quality motions driven by text and keyframes. However, in these methods, keyframes often serve as a guide rather than a strict constraint and can sometimes be ignored when keyframes are sparse. To address these issues, we propose a lightweight yet effective diffusion-based motion in-betweening framework that generates animations conforming to keyframe constraints. We incorporate keyframe constraints into the training phase to enhance robustness in handling various constraint densities. Moreover, we employ relative positional encoding to improve the model's generalization on long-range in-betweening tasks. This approach enables the model to learn from short animations while generating realistic in-betweening motions spanning thousands of frames. We conduct extensive experiments to validate our framework using the newly proposed metrics K-FID, K-Diversity, and K-Error, designed to evaluate generative in-betweening methods. Results demonstrate that our method outperforms existing diffusion-based methods across various lengths and keyframe densities. We also show that our method can be applied to text-driven motion synthesis, offering fine-grained control over the generated results.
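
The keyframe-as-hard-constraint idea can be pictured with a short sketch. Below is a minimal, hypothetical example of inpainting-style diffusion sampling in which the known keyframes are re-imposed at every denoising step; note the paper itself builds the constraints into training, and all names here (denoiser, mask, schedule) are illustrative assumptions, not the authors' code.

```python
import torch

def inbetween_sample(denoiser, keyframes, keyframe_mask, T=1000):
    """Inpainting-style constrained sampling: keyframes are re-imposed at
    every denoising step so they act as hard constraints.

    denoiser(x_t, t) -> predicted noise; keyframes: (frames, dof);
    keyframe_mask: (frames, 1), 1.0 where a frame is a keyframe.
    """
    betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(keyframes)                # start from pure noise
    for t in reversed(range(T)):
        # Noise the keyframes to the current level and overwrite those frames.
        noisy_key = (alpha_bar[t].sqrt() * keyframes
                     + (1 - alpha_bar[t]).sqrt() * torch.randn_like(keyframes))
        x = keyframe_mask * noisy_key + (1 - keyframe_mask) * x

        eps = denoiser(x, t)                        # predict the added noise
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean

    # Final hard projection onto the keyframe constraints.
    return keyframe_mask * keyframes + (1 - keyframe_mask) * x
```

Re-imposing the noised keyframes at every step keeps the constraint hard throughout sampling, in contrast to guidance terms that can be overridden when keyframes are sparse.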
Item
G-Style: Stylized Gaussian Splatting
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as, compared to other approaches based on Neural Radiance Fields, it provides fast scene renderings and user control over the scene. Recent pre-prints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm aims to address these limitations by following a three-step process: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes. Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image, while maintaining as much as possible the integrity of the original scene content. During the stylization process, and following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.
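
As a rough illustration of the pre-processing step described above, the sketch below filters out Gaussians with large footprints or highly elongated shapes. It uses world-space extents as a proxy for projected area; thresholds, array layouts, and names are hypothetical and do not reflect the authors' implementation.

```python
import numpy as np

def prefilter_gaussians(log_scales, max_area=0.01, max_elongation=8.0):
    """Keep only Gaussians that are neither too large nor too elongated.

    log_scales: (N, 3) per-axis log scales, as typical 3DGS codebases
    store them. Thresholds are hypothetical and would be tuned per scene.
    """
    s = np.sort(np.exp(log_scales), axis=1)           # ascending extents
    elongation = s[:, 2] / np.maximum(s[:, 0], 1e-8)  # longest / shortest axis
    area = np.pi * s[:, 2] * s[:, 1]                  # ellipse area, two largest axes
    return (area < max_area) & (elongation < max_elongation)  # keep-mask
```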
Item
Palette-Based Recolouring of Gradient Meshes
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Houssaije, Willard A. Verschoore de la; Echevarria, Jose; Kosinka, Jirí; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Gradient meshes are a vector graphics primitive formed by a regular grid of bicubic quad patches. They allow for the creation of complex geometries and colour gradients, with recent extensions supporting features such as local refinement and sharp colour transitions. While many methods exist for recolouring raster images, often achieved by modifying an automatically detected palette of the image, gradient meshes have not received the same amount of attention when it comes to global colour editing. We present a novel method that allows for real-time palette-based recolouring of gradient meshes, including gradient meshes constructed using local refinement and containing sharp colour transitions. We demonstrate the utility of our method on synthetic illustrative examples as well as on complex gradient meshes.
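
Palette-based recolouring is commonly realised by expressing each colour as a convex combination of palette colours and re-mixing those weights with the edited palette; the sketch below applies this generic idea to per-control-point colours. It is a simplified stand-in (unconstrained least squares with clipping), not the paper's method.

```python
import numpy as np

def palette_weights(colours, palette):
    """Approximate each colour as a convex combination of palette colours.

    colours: (N, 3) RGB control-point colours; palette: (K, 3).
    Simplification: unconstrained least squares, then clip and renormalise;
    real recolouring methods solve a properly constrained problem.
    """
    w, *_ = np.linalg.lstsq(palette.T, colours.T, rcond=None)  # (K, N)
    w = np.clip(w.T, 0.0, None)                                # (N, K), nonnegative
    return w / np.maximum(w.sum(axis=1, keepdims=True), 1e-8)  # rows sum to 1

def recolour(colours, palette, edited_palette):
    """Re-mix control-point colours with the edited palette."""
    return palette_weights(colours, palette) @ edited_palette
```

Because a gradient mesh interpolates control-point colours bicubically across each patch, recolouring the control points recolours the entire mesh.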
Item
LGSur-Net: A Local Gaussian Surface Representation Network for Upsampling Highly Sparse Point Cloud
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Xiao, Zijian; Zhou, Tianchen; Yao, Li; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We introduce LGSur-Net, an end-to-end deep learning architecture engineered for the upsampling of sparse point clouds. LGSur-Net harnesses a trainable Gaussian local representation by positioning a series of Gaussian functions on an oriented plane, complemented by the optimization of individual covariance matrices. The integration of parametric factors allows for the encoding of the plane's rotational dynamics and Gaussian weightings into a linear transformation matrix. We then extract feature maps from the point cloud and its adjoining edges and learn the local Gaussian representations to accurately model the shape's local geometry through an attention-based network. The Gaussian representation's inherent high-order continuity endows LGSur-Net with the natural ability to predict surface normals and support upsampling to any specified resolution. Comprehensive experiments validate that LGSur-Net efficiently learns from sparse data inputs, surpassing the performance of existing state-of-the-art upsampling methods. Our code is publicly available at https://github.com/Rangiant5b72/LGSur-Net.
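
The local representation described above can be pictured as a height field of weighted 2D Gaussians over an oriented tangent plane; because the field is continuous, it can be sampled at any density. The following sketch evaluates such a field and lifts samples to 3D points; all names and shapes are illustrative assumptions, not the released code.

```python
import numpy as np

def sample_gaussian_patch(centers, weights, inv_covs, n=4096, extent=1.0, seed=0):
    """Sample an upsampled point set from a local Gaussian height field.

    The patch is modelled as h(u, v) = sum_k w_k * exp(-0.5 * d_k^T C_k^{-1} d_k)
    over an oriented tangent plane. centers: (K, 2), weights: (K,),
    inv_covs: (K, 2, 2); all shapes and names are illustrative.
    """
    rng = np.random.default_rng(seed)
    uv = rng.uniform(-extent, extent, size=(n, 2))    # plane coordinates
    d = uv[:, None, :] - centers[None, :, :]          # (n, K, 2) offsets
    q = np.einsum('nki,kij,nkj->nk', d, inv_covs, d)  # squared Mahalanobis distances
    h = np.exp(-0.5 * q) @ weights                    # (n,) heights above the plane
    return np.concatenate([uv, h[:, None]], axis=1)   # (n, 3) points on the patch
```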
Item
GauLoc: 3D Gaussian Splatting-based Camera Relocalization
(The Eurographics Association and John Wiley & Sons Ltd., 2024) Xin, Zhe; Dai, Chengkai; Li, Ying; Wu, Chenming; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
3D Gaussian Splatting (3DGS) has emerged as a promising representation for scene reconstruction and novel view synthesis owing to its explicit representation and real-time capabilities. This technique thus holds immense potential for use in mapping applications. Consequently, there is a growing need for an efficient and effective camera relocalization method to complement the advantages of 3DGS. This paper presents a camera relocalization method, namely GauLoc, in a scene represented by 3DGS. Unlike previous methods that rely on pose regression or photometric alignment, our proposed method leverages the differentiable rendering capability provided by 3DGS. The key insight of our work is the proposed implicit feature-metric alignment, which effectively optimizes the alignment between rendered keyframes and the query frames, and leverages epipolar geometry to facilitate the convergence of camera poses conditioned on the explicit 3DGS representation. The proposed method significantly improves relocalization accuracy even in complex scenarios with large initial camera rotation and translation deviations. Extensive experiments validate the effectiveness of our proposed method, showcasing its potential to be applied in many real-world applications. Source code will be released at https://github.com/xinzhe11/GauLoc.
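
Feature-metric alignment of this kind can be illustrated as gradient descent on a 6-DoF pose update that minimises the difference between features rendered from the scene and features of the query frame. The example below is a generic sketch assuming a differentiable render_features function; it omits the epipolar-geometry component the abstract mentions, and every name is hypothetical.

```python
import torch

def hat(w):
    """so(3) hat operator: skew-symmetric matrix of a 3-vector."""
    z = torch.zeros((), dtype=w.dtype)
    return torch.stack([
        torch.stack([z, -w[2], w[1]]),
        torch.stack([w[2], z, -w[0]]),
        torch.stack([-w[1], w[0], z]),
    ])

def apply_delta(pose, delta):
    """Left-compose a small pose update (first-order se(3) approximation)."""
    R = torch.eye(3, dtype=delta.dtype) + hat(delta[:3])  # small-angle rotation
    top = torch.cat([R, delta[3:, None]], dim=1)          # (3, 4)
    bottom = torch.tensor([[0.0, 0.0, 0.0, 1.0]], dtype=delta.dtype)
    return torch.cat([top, bottom], dim=0) @ pose         # pose: (4, 4) matrix

def refine_pose(render_features, query_feats, pose_init, iters=200, lr=1e-2):
    """Minimise a feature-space rendering loss w.r.t. a 6-DoF pose update."""
    delta = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(iters):
        loss = (render_features(apply_delta(pose_init, delta)) - query_feats).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return apply_delta(pose_init, delta.detach())
```

In practice the initial pose would come from a coarse retrieval step, and the feature loss would be complemented by the epipolar term to improve convergence under large initial deviations.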