43-Issue 7
Browsing 43-Issue 7 by Issue Date
Now showing 1 - 20 of 57
Item LightUrban: Similarity Based Fine-grained Instancing for Lightweighting Complex Urban Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lu, Zi Ang; Xiong, Wei Dan; Ren, Peng; Jia, Jin Yuan; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Large-scale urban point clouds play a vital role in various applications, while rendering and transmitting such data remains challenging due to its large volume, complicated structure, and significant redundancy. In this paper, we present LightUrban, the first point cloud instancing framework for efficient rendering and transmission of fine-grained complex urban scenes. We first introduce a segmentation method to organize the point clouds into individual building and vegetation instances from coarse to fine. Next, we propose an unsupervised similarity detection approach to accurately group instances with similar shapes. Furthermore, a fast pose and size estimation component is applied to calculate the transformations between the representative instance and the corresponding similar instances in each group. By replacing individual instances with their group's representative instance, the data volume and redundancy can be dramatically reduced. Experimental results on large-scale urban scenes demonstrate the effectiveness of our algorithm. To sum up, our method not only structures the urban point clouds but also significantly reduces data volume and redundancy, filling the gap in lightweighting urban landscapes through instancing.

Item Controllable Anime Image Editing Based on the Probability of Attribute Tags (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Song, Zhenghao; Mo, Haoran; Gao, Chengying; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Editing anime images via probabilities of attribute tags allows controlling the degree of the manipulation in an intuitive and convenient manner.
Existing methods fall short in progressive modification and in preserving unintended regions of the input image. We propose a controllable anime image editing framework based on adjusting tag probabilities, in which a probability encoding network (PEN) encodes the probabilities into features that capture their continuous characteristics. The encoded features are thus able to direct the generative process of a pre-trained diffusion model and facilitate linear manipulation. We also introduce a local editing module that automatically identifies the intended regions and constrains the edits to those regions only, preserving the others unchanged. Comprehensive comparisons with existing methods indicate the effectiveness of our framework in both one-shot and linear editing modes. Results on additional applications further demonstrate the generalization ability of our approach.

Item Disentangled Lifespan Synthesis via Transformer-Based Nonlinear Regression (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Mingyuan; Guo, Yingchun; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Lifespan face age transformation aims to generate facial images that accurately depict an individual's appearance at different age stages. This task is highly challenging due to the need for reasonable changes in facial features while preserving identity. Existing methods tend to synthesize unsatisfactory results, such as entangled facial attributes and low identity preservation, especially when dealing with large age gaps. Furthermore, over-manipulating the style vector may push it out of the latent space and damage image quality. To address these issues, this paper introduces a novel nonlinear regression model, Disentangled Lifespan face Aging (DL-Aging), to achieve high-quality age-transformed images.
Specifically, we propose an age modulation encoder to extract age-related multi-scale facial features as key and value, and use the reconstructed style vector of the image as the query. Multi-head cross-attention in the W+ space is utilized to iteratively update the query for aging image reconstruction. This nonlinear transformation enables the model to learn a more disentangled mode of transformation, which is crucial for alleviating facial attribute entanglement. Additionally, we introduce a W+ space age regularization term to prevent excessive manipulation of the style vector and ensure it remains within the W+ space during transformation, thereby improving generation quality and aging accuracy. Extensive qualitative and quantitative experiments demonstrate that the proposed DL-Aging outperforms state-of-the-art methods regarding aging accuracy, image quality, attribute disentanglement, and identity preservation, especially for large age gaps.

Item G-Style: Stylized Gaussian Splatting (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as, compared to other approaches based on Neural Radiance Fields, it provides fast scene rendering and user control over the scene. Recent pre-prints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm aims to address these limitations by following a three-step process: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes.
Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image, while maintaining as much as possible the integrity of the original scene content. During the stylization process, following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.

Item Robust Diffusion-based Motion In-betweening (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Qin, Jia; Yan, Peng; An, Bo; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
The emergence of learning-based motion in-betweening techniques offers animators a more efficient way to animate characters. However, existing non-generative methods either struggle to support long transition generation or produce results that lack diversity. Meanwhile, diffusion models have shown promising results in synthesizing diverse and high-quality motions driven by text and keyframes. However, in these methods, keyframes often serve as a guide rather than a strict constraint and can sometimes be ignored when they are sparse. To address these issues, we propose a lightweight yet effective diffusion-based motion in-betweening framework that generates animations conforming to keyframe constraints. We incorporate keyframe constraints into the training phase to enhance robustness in handling various constraint densities. Moreover, we employ relative positional encoding to improve the model's generalization on long-range in-betweening tasks. This approach enables the model to learn from short animations while generating realistic in-betweening motions spanning thousands of frames.
We conduct extensive experiments to validate our framework using the newly proposed metrics K-FID, K-Diversity, and K-Error, designed to evaluate generative in-betweening methods. Results demonstrate that our method outperforms existing diffusion-based methods across various lengths and keyframe densities. We also show that our method can be applied to text-driven motion synthesis, offering fine-grained control over the generated results.

Item P-Hologen: An End-to-End Generative Framework for Phase-Only Holograms (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Park, JooHyun; Jeon, YuJin; Kim, HuiYong; Baek, SeungHwan; Kang, HyeongYeop; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Holography stands at the forefront of visual technology, offering immersive, three-dimensional visualizations through the manipulation of light wave amplitude and phase. Although generative models have been extensively explored in the image domain, their application to holograms remains relatively underexplored due to the inherent complexity of phase learning. Exploiting generative models for holograms offers exciting opportunities for advancing innovation and creativity, such as semantic-aware hologram generation and editing. Currently, the most viable approach for utilizing generative models in the hologram domain involves integrating an image-based generative model with an image-to-hologram conversion model, which comes at the cost of increased computational complexity and inefficiency. To tackle this problem, we introduce P-Hologen, the first end-to-end generative framework designed for phase-only holograms (POHs). P-Hologen employs vector quantized variational autoencoders to capture the complex distributions of POHs. It also integrates the angular spectrum method into the training process, constructing latent spaces for complex phase data using strategies from the image processing domain.
Extensive experiments demonstrate that P-Hologen achieves superior quality and computational efficiency compared to existing methods. Furthermore, our model generates high-quality, diverse, unseen holographic content from its learned latent space without requiring pre-existing images. Our work paves the way for new applications and methodologies in holographic content creation, opening a new era in generative holographic content. The code for our paper is publicly available at https://github.com/james0223/P-Hologen.

Item Seamless and Aligned Texture Optimization for 3D Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wang, Lei; Ge, Linlin; Zhang, Qitong; Feng, Jieqing; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Restoring the appearance of a model is a crucial step in achieving realistic 3D reconstruction, and high-fidelity textures can also conceal some geometric defects. Since the estimated camera parameters and reconstructed geometry usually contain errors, subsequent texture mapping often suffers from undesirable visual artifacts such as blurring, ghosting, and visible seams. In particular, significant misalignment between the reconstructed model and the registered images leads to texturing the mesh with inconsistent image regions. Eliminating such artifacts to generate high-quality textures remains a challenge. In this paper, we address this issue by designing a texture optimization method that generates seamless and aligned textures for 3D reconstruction. The main idea is to detect misaligned regions between images and geometry and exclude them from texture mapping. To handle the texture holes caused by these excluded regions, a cross-patch texture hole-filling method is proposed, which can also synthesize plausible textures for invisible faces.
Moreover, for better stitching of textures from different views, an improved camera pose optimization is presented, introducing color adjustment and boundary point sampling. Experimental results show that the proposed method robustly eliminates artifacts caused by inaccurate input data and produces high-quality texture results compared with state-of-the-art methods.

Item GLTScene: Global-to-Local Transformers for Indoor Scene Synthesis with General Room Boundaries (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Yijie; Xu, Pengfei; Ren, Junquan; Shao, Zefan; Huang, Hui; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We present GLTScene, a novel data-driven method for high-quality furniture layout synthesis conditioned on general room boundaries. This task is challenging since existing indoor scene datasets do not cover the variety of general room boundaries. We incorporate interior design principles into learning techniques and adopt a global-to-local strategy for this task. Globally, we learn the placement of furniture objects from the datasets without considering their alignment. Locally, we learn the alignment of furniture objects relative to their nearest walls, according to the alignment principle of interior design. The global placement and local alignment of furniture objects are achieved by two transformers, respectively. We compare our method with several baselines on furniture layout synthesis with general room boundaries as conditions; our method outperforms these baselines both quantitatively and qualitatively. We also demonstrate that our method can handle other conditional layout synthesis tasks, including object-level and attribute-level conditional generation.
The code is publicly available at https://github.com/WWalter-Lee/GLTScene.

Item FastFlow: GPU Acceleration of Flow and Depression Routing for Landscape Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jain, Aryamaan; Kerbl, Bernhard; Gain, James; Finley, Brandon; Cordonnier, Guillaume; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Terrain analysis plays an important role in computer graphics, hydrology, and geomorphology. In particular, analyzing the path of material flow over a terrain with consideration of local depressions is a precursor to many further tasks in erosion, river formation, and plant ecosystem simulation. For example, fluvial erosion simulation used in terrain modeling computes water discharge to repeatedly locate erosion channels for soil removal and transport. Despite its significance, traditional methods face performance constraints, limiting their broader applicability. In this paper, we propose a novel GPU flow routing algorithm that computes the water discharge in O(log n) iterations for a terrain with n vertices (assuming n processors). We also provide a depression routing algorithm to route the water out of local minima formed by depressions in the terrain, which converges in O(log² n) iterations. Our implementation of these algorithms leads to a 5× speedup for flow routing and 34× to 52× speedups for depression routing compared to previous work on a 1024² terrain, enabling interactive control of terrain simulation.

Item Spatially and Temporally Optimized Audio-Driven Talking Face Generation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Dong, Biao; Ma, Bo-Yao; Zhang, Lei; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Audio-driven talking face generation is essentially a cross-modal mapping from audio to video frames. The main challenge lies in the intricate one-to-many mapping, which affects lip sync accuracy.
In addition, the loss of facial details during image reconstruction often results in visual artifacts in the generated video. To overcome these challenges, this paper proposes to enhance the quality of generated talking faces with a new form of spatio-temporal consistency. Specifically, temporal consistency is achieved through the consecutive frames of each phoneme, which form temporal modules exhibiting similar changes in lip appearance. This allows adaptive adjustment of lip movement for accurate sync. Spatial consistency pertains to the uniform distribution of textures within local regions, which form spatial modules and regulate the texture distribution in the generator. This yields fine details in the reconstructed facial images. Extensive experiments show that our method generates more natural talking faces than previous state-of-the-art methods, with both accurate lip sync and realistic facial details.

Item A TransISP Based Image Enhancement Method for Visual Disbalance in Low-light Images (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wu, Jiaqi; Guo, Jing; Jing, Rui; Zhang, Shihao; Tian, Zijian; Chen, Wei; Wang, Zehua; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Existing image enhancement algorithms often fail to effectively address issues of visual disbalance, such as uneven brightness and color distortion, in low-light images. To overcome these challenges, we propose a TransISP-based image enhancement method specifically designed for low-light images. To mitigate color distortion, we design dual encoders based on decoupled representation learning, which enable complete decoupling of the reflection and illumination components, thereby preventing mutual interference during the image enhancement process. To address uneven brightness, we introduce CNNformer, a hybrid model combining a CNN and a Transformer.
This model efficiently captures local details and long-distance dependencies between pixels, contributing to the enhancement of brightness features across various local regions. Additionally, we integrate traditional image signal processing algorithms to achieve efficient color correction and denoising of the reflection component. Furthermore, we employ a generative adversarial network (GAN) as the overarching framework to facilitate unsupervised learning. The experimental results show that, compared with six state-of-the-art image enhancement algorithms, our method obtains significant improvements in evaluation indices (e.g., on LOL, PSNR: 15.59%, SSIM: 9.77%, VIF: 9.65%), and it can reduce visual disbalance defects in low-light images captured in real-world underground coal mine scenarios.

Item Frequency-Aware Facial Image Shadow Removal through Skin Color and Texture Learning (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Ling; Xie, Wenyang; Xiao, Chunxia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Existing facial image shadow removal methods rely predominantly on pre-extracted facial features. However, these methods often fail to capitalize on the full potential of these features, resorting to simplified utilization. Furthermore, they tend to overlook the importance of low-frequency information during prior feature extraction, which can easily be compromised by noise. In our work, we propose a frequency-aware shadow removal network (FSRNet) for facial images, which utilizes the skin color and texture information in the face to help recover illumination in shadow regions. Our FSRNet uses a frequency-domain image decomposition network to extract the low-frequency skin color map and high-frequency texture map from face images, and applies a color-texture guided shadow removal network to produce the final shadow removal result.
Concretely, the designed Fourier sparse attention block (FSABlock) transforms images from the spatial domain to the frequency domain and helps the network focus on key information. We also introduce a skin color fusion module (CFModule) and a texture fusion module (TFModule) to enhance the understanding and utilization of color and texture features, promoting high-quality results without color distortion or detail blurring. Extensive experiments demonstrate the superiority of the proposed method. The code is available at https://github.com/laoxie521/FSRNet.

Item iShapEditing: Intelligent Shape Editing with Diffusion Models (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Jing; Zhang, Juyong; Chen, Falai; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Recent advancements in generative models have made image editing highly effective, with impressive results. Extending this progress to 3D geometry models, we introduce iShapEditing, a novel framework for 3D shape editing that is applicable to both generated and real shapes. Users manipulate shapes by dragging handle points toward corresponding targets, offering an intuitive and intelligent editing interface. Leveraging the Triplane Diffusion model and robust intermediate feature correspondence, our framework utilizes classifier guidance to adjust noise representations during the sampling process, ensuring alignment with user expectations while preserving plausibility. For real shapes, we employ shape predictions at each time step alongside a DDPM-based inversion algorithm to derive their latent codes, facilitating seamless editing. iShapEditing provides effective and intelligent control over shapes without the need for additional model training or fine-tuning.
Experimental examples demonstrate the effectiveness and superiority of our method in terms of editing accuracy and plausibility.

Item Inverse Garment and Pattern Modeling with a Differentiable Simulator (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yu, Boyang; Cordier, Frederic; Seo, Hyewon; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
The capability to generate simulation-ready garment models from 3D shapes of clothed people will significantly enhance the interpretability of captured geometry of real garments, as well as their faithful reproduction in the digital world. This will have notable impact on fields like shape capture in social VR and virtual try-on in the fashion industry. To align with the garment modeling process standardized by the fashion industry and cloth simulation software, it is required to recover 2D patterns, which are then placed around the wearer's body model and seamed prior to the draping simulation. This involves an inverse garment design problem, which is the focus of our work: starting with an arbitrary target garment geometry, our system estimates its animatable replica along with its corresponding 2D pattern. Built upon a differentiable cloth simulator, it runs an optimization process directed toward minimizing the deviation of the simulated garment shape from the target geometry, while maintaining desirable properties such as left-to-right symmetry.
Experimental results on various real-world and synthetic data show that our method outperforms state-of-the-art methods in producing both high-quality garment models and accurate 2D patterns.

Item FSH3D: 3D Representation via Fibonacci Spherical Harmonics (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Zikuan; Huang, Anyi; Jia, Wenru; Wu, Qiaoyun; Wei, Mingqiang; Wang, Jun; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Spherical harmonics are a favorable technique for 3D representation, employing a frequency-based approach through the spherical harmonic transform (SHT). Typically, the SHT is performed on equiangular sampling grids. However, these grids are non-uniform on spherical surfaces and exhibit local anisotropy, a common limitation of existing spherical harmonic decomposition methods. This paper proposes a 3D representation method using Fibonacci Spherical Harmonics (FSH3D). We introduce a spherical Fibonacci grid (SFG), which is more uniform than equiangular grids for the SHT in the frequency domain. Our method employs analytical weights for the SHT on the SFG, effectively assigning sampling errors to spherical harmonic degrees higher than the recovered band-limited function. This provides a novel solution for spherical harmonic transformation on non-equiangular grids. The key advantages of our FSH3D method are: (1) with the same number of sampling points, the SFG captures more features, without bias, than equiangular grids; (2) the root mean square error of 32-degree spherical harmonic coefficients is reduced by approximately 34.6% on the SFG compared to equiangular grids; and (3) FSH3D offers more stable frequency-domain representations under rotational transformations, especially for rotating functions. Its application to 3D shape reconstruction and 3D shape classification yields more accurate and robust representations.
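As a quick illustration of the spherical Fibonacci grid underlying FSH3D, the classic golden-angle construction places n near-uniform points on the unit sphere. This is a generic sketch of that construction only; the paper's analytical SHT weights and exact sample ordering are not reproduced here.

```python
import math

def spherical_fibonacci_grid(n):
    """Generate n near-uniform points on the unit sphere via the
    Fibonacci (golden-angle) spiral. Illustrative sketch only."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad
    points = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n         # symmetric offsets in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - z * z))  # radius of the latitude circle
        phi = golden_angle * i                # longitude advances by the golden angle
        points.append((r * math.cos(phi), r * math.sin(phi), z))
    return points

pts = spherical_fibonacci_grid(256)
```

Each sample takes an equal slice of the z-axis, so every point covers roughly the same surface area, which is what makes such grids more uniform than equiangular latitude-longitude sampling.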
Our code is publicly available at https://github.com/Miraclelzk/Fibonacci-Spherical-Harmonics.

Item Evolutive 3D Urban Data Representation through Timeline Design Space (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Gautier, Corentin Le Bihan; Delanoy, Johanna; Gesquière, Gilles; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Cities are constantly changing to adapt to new societal and environmental challenges. Understanding their evolution is thus essential to making informed decisions about their future. To capture these changes, cities are increasingly offering digital 3D snapshots of their territory over time. However, existing tools for visualizing these data typically represent the city at a single point in time, limiting a comprehensive analysis of its evolution. In this paper, we propose a new method for simultaneously visualizing different versions of the city in a 3D space. We integrate the different versions of the city along a 3D timeline that can take different shapes depending on the needs of the user and the dataset being visualized. We propose four timeline shapes and three ways to place the versions along them. Our method varies the parameters of the timelines so that the versions do not visually overlap, and it offers options to ease understanding of the scene by changing the orientation or scale of the versions.
We evaluate our method on different datasets to demonstrate the advantages and limitations of the different timeline shapes, and provide recommendations as to which shape to choose.

Item VRTree: Example-Based 3D Interactive Tree Modeling in Virtual Reality (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wu, Di; Yang, Mingxin; Liu, Zhihao; Tu, Fangyuan; Liu, Fang; Cheng, Zhanglin; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We present VRTree, an example-based interactive virtual reality (VR) system designed to efficiently create diverse 3D tree models while faithfully preserving the botanical characteristics of real-world references. Our method employs a novel representation called Hierarchical Branch Lobe (HBL), which captures the hierarchical features of trees and serves as a versatile intermediary for intuitive VR interaction. The HBL representation decomposes a 3D tree into a series of concise examples, each consisting of a small set of main branches, secondary branches, and lobe-bounded twigs. The core of our system involves two key components: (1) an automatic algorithm extracts an initial library of HBL examples from real tree point clouds, which can optionally be refined to user intentions through interactive editing; (2) users interact with the extracted HBL examples to assemble new tree structures, ensuring the local features align with the target tree species. A shape-guided procedural growth algorithm then transforms these assembled HBL structures into highly realistic, fine-grained 3D tree models.
Extensive experiments and user studies demonstrate that VRTree outperforms current state-of-the-art approaches, offering a highly effective and easy-to-use VR tool for tree modeling.

Item Distinguishing Structures from Textures by Patch-based Contrasts around Pixels for High-quality and Efficient Texture Filtering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wang, Shengchun; Xu, Panpan; Hou, Fei; Wang, Wencheng; Zhao, Chong; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
It remains challenging for existing methods to distinguish structures from texture details, which is needed to prevent structures from being filtered away. Observing that the textures on the two sides of a structural edge always differ markedly in appearance, we determine whether a pixel lies on a structural edge by exploiting the appearance contrast between patches around the pixel, and we further propose an efficient implementation. We demonstrate that our method distinguishes structures from texture details more effectively than existing methods, and that the patches required for texture measurement can be at least half the size of those used in existing methods. Thus, we improve texture filtering in both quality and efficiency, as shown by the experimental results; e.g., we can handle textured images at a resolution of 800 × 600 pixels in real time. (The code is available at https://github.com/hefengxiyulu/MLPC)

Item Multiscale Spectral Manifold Wavelet Regularizer for Unsupervised Deep Functional Maps (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wang, Haibo; Meng, Jing; Li, Qinsong; Hu, Ling; Guo, Yueyu; Liu, Xinru; Yang, Xiaoxia; Liu, Shengjun; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
In deep functional maps, the regularizer used to compute the functional map is especially crucial for ensuring the global consistency of the computed pointwise map.
Since regularizers integrated into deep learning must be differentiable, it is not trivial to incorporate informative axiomatic structural constraints, such as the orientation-preserving term, into a deep functional map. Commonly used regularizers include the Laplacian commutativity term and the resolvent Laplacian commutativity term, but these are limited to single-scale analysis of geometric information. To this end, we propose a novel and theoretically well-justified regularizer that commutes the functional map with a multiscale spectral manifold wavelet operator. This regularizer strengthens the isometry constraints on the functional map and endows it with better structural properties through multiscale analysis. Furthermore, we design an unsupervised deep functional map with this regularizer in a fully differentiable way. Quantitative and qualitative comparisons with several existing techniques on (near-)isometric and non-isometric datasets show our method's superior accuracy and generalization capabilities. Additionally, we show that our regularizer can easily be inserted into other functional map methods to improve their accuracy.

Item Ray Tracing Animated Displaced Micro-Meshes (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Gruen, Holger; Benthin, Carsten; Kensler, Andrew; Barczak, Joshua; McAllister, David; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
We present a new method that enables efficient ray tracing of virtually artefact-free animated displaced micro-meshes (DMMs) [MMT23] while preserving their low memory footprint and low BVH build and update cost. DMMs allow for compact representation of micro-triangle geometry through hierarchical encoding of displacements. Displacements are computed with respect to a coarse base mesh and are used to displace new vertices introduced during 1:4 subdivision of the base mesh.
Applying non-rigid transformations to the base mesh can result in silhouette and normal artefacts (see Figure 1) during animation. We propose an approach that prevents these artefacts by interpolating transformation matrices before applying them to the DMM representation. Our interpolation-based algorithm does not change DMM data structures, and it allows for efficient bounding of animated micro-triangle geometry, which is essential for fast tessellation-free ray tracing of animated DMMs.
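The idea behind this last entry, interpolating transformations rather than blending matrix entries, can be illustrated with a 2D toy example. This is only a sketch of the general principle, not the authors' algorithm: naive per-entry blending of two rotations shrinks geometry (determinant drops below 1), which is one way such silhouette and normal artefacts arise.

```python
import math

def rot2(theta):
    """2D rotation matrix as a nested list."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def blend_entries(a_mat, b_mat, t):
    """Naive per-entry linear blend of two 2x2 matrices."""
    return [[(1 - t) * a_mat[i][j] + t * b_mat[i][j] for j in range(2)]
            for i in range(2)]

# Halfway between a 0-degree and a 90-degree rotation:
naive = blend_entries(rot2(0.0), rot2(math.pi / 2), 0.5)
interp = rot2(0.5 * (0.0 + math.pi / 2))  # interpolate the transform parameter instead

det_naive = det2(naive)    # the per-entry blend shrinks geometry (det < 1)
det_interp = det2(interp)  # a proper rotation preserves area (det = 1)
```

The per-entry blend yields determinant 0.5 at the midpoint, while interpolating the rotation itself keeps the determinant at 1, so displaced geometry is not squashed mid-animation.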