Search Results

Now showing 1 - 10 of 999
  • Item
    Harmonics Virtual Lights: Fast Projection of Luminance Field on Spherical Harmonics for Efficient Rendering
    (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Mézières, Pierre; Desrichard, François; Vanderhaeghe, David; Paulin, Mathias; Hauser, Helwig and Alliez, Pierre
In this paper, we introduce harmonics virtual lights (HVL) to model indirect light sources for interactive global illumination of dynamic 3D scenes. Virtual point lights (VPL) are an efficient approach to defining indirect light sources and evaluating the resulting indirect lighting. Nonetheless, VPL suffer from disturbing artefacts, especially with high‐frequency materials. Virtual spherical lights (VSL) avoid these artefacts by considering spheres instead of points, but estimate the lighting integral using Monte Carlo sampling, which results in noise in the final image. We define HVL as an extension of VSL in a spherical harmonics (SH) framework, yielding a closed form of the lighting integral evaluation. We propose an efficient SH projection of a spherical light's contribution that is faster than existing methods. The cost of computing the outgoing luminance grows with the number of SH bands, and is lower for materials with circularly symmetric lobes than in the general case. HVL can be used with either parametric or measured BRDFs without extra cost and offer control over the trade-off between rendering time and image quality through the band limit used for SH projection. Our approach is particularly well suited to rendering medium‐frequency one‐bounce global illumination with arbitrary BRDFs at interactive frame rates.
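A minimal, generic illustration of projecting a spherical function onto spherical harmonics, assuming a plain Monte Carlo estimator with SciPy's sph_harm; the paper's faster closed-form projection of spherical light contributions is not reproduced here.

```python
# Generic Monte Carlo SH projection sketch (not the HVL closed-form method).
import numpy as np
from scipy.special import sph_harm

def project_onto_sh(f, bands, n_samples=20000, seed=0):
    """Estimate complex SH coefficients of f(theta, phi) up to `bands` bands.

    f takes arrays (theta, phi): theta = polar angle in [0, pi],
    phi = azimuth in [0, 2*pi).
    """
    rng = np.random.default_rng(seed)
    z = rng.uniform(-1.0, 1.0, n_samples)          # uniform samples on the sphere
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    theta = np.arccos(z)
    values = f(theta, phi)
    coeffs = {}
    for l in range(bands):
        for m in range(-l, l + 1):
            # SciPy argument order: sph_harm(order m, degree l, azimuth, polar).
            y = sph_harm(m, l, phi, theta)
            coeffs[(l, m)] = 4.0 * np.pi * np.mean(values * np.conj(y))
    return coeffs

# Example: project a cosine lobe centred on +z (a crude stand-in for a spherical light).
coeffs = project_onto_sh(lambda th, ph: np.maximum(np.cos(th), 0.0), bands=4)
```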
  • Item
    Corrigendum to “Making Procedural Water Waves Boundary‐aware”, “Primal/Dual Descent Methods for Dynamics”, and “Detailed Rigid Body Simulation with Extended Position Based Dynamics”
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Hauser, Helwig and Alliez, Pierre
  • Item
    Level of Detail Exploration of Electronic Transition Ensembles using Hierarchical Clustering
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Sidwall Thygesen, Signe; Masood, Talha Bin; Linares, Mathieu; Natarajan, Vijay; Hotz, Ingrid; Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
    We present a pipeline for the interactive visual analysis and exploration of molecular electronic transition ensembles. Each ensemble member is specified by a molecular configuration, the charge transfer between two molecular states, and a set of physical properties. The pipeline is targeted towards theoretical chemists, supporting them in comparing and characterizing electronic transitions by combining automatic and interactive visual analysis. A quantitative feature vector characterizing the electron charge transfer serves as the basis for hierarchical clustering as well as for the visual representations. The interface for the visual exploration consists of four components. A dendrogram provides an overview of the ensemble. It is augmented with a level of detail glyph for each cluster. A scatterplot using dimensionality reduction provides a second visualization, highlighting ensemble outliers. Parallel coordinates show the correlation with physical parameters. A spatial representation of selected ensemble members supports an in-depth inspection of transitions in a form that is familiar to chemists. All views are linked and can be used to filter and select ensemble members. The usefulness of the pipeline is shown in three different case studies.
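A minimal sketch of the clustering step described above, assuming generic per-member feature vectors and SciPy's hierarchical clustering; the feature values below are random placeholders, not the paper's charge-transfer descriptors.

```python
# Hierarchical clustering of ensemble feature vectors plus a dendrogram overview.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

rng = np.random.default_rng(0)
features = rng.normal(size=(40, 16))              # 40 ensemble members, 16-D features

Z = linkage(features, method="ward")              # agglomerative clustering
labels = fcluster(Z, t=4, criterion="maxclust")   # cut the tree into 4 clusters

dendrogram(Z)                                     # overview of the ensemble hierarchy
plt.xlabel("ensemble member")
plt.ylabel("linkage distance")
plt.show()
```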
  • Item
    Seamless and Aligned Texture Optimization for 3D Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Wang, Lei; Ge, Linlin; Zhang, Qitong; Feng, Jieqing; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Restoring the appearance of the reconstructed model is a crucial step toward realistic 3D reconstruction, and high-fidelity textures can also conceal some geometric defects. Since the estimated camera parameters and reconstructed geometry usually contain errors, subsequent texture mapping often suffers from undesirable visual artifacts such as blurring, ghosting, and visible seams. In particular, significant misalignment between the reconstructed model and the registered images leads to texturing the mesh with inconsistent image regions. Eliminating these artifacts to generate high-quality textures remains a challenge. In this paper, we address this issue by designing a texture optimization method that generates seamless and aligned textures for 3D reconstruction. The main idea is to detect misaligned regions between images and geometry and exclude them from texture mapping. To handle the texture holes caused by these excluded regions, a cross-patch texture hole-filling method is proposed, which can also synthesize plausible textures for invisible faces. Moreover, for better stitching of the textures from different views, an improved camera pose optimization is presented, introducing color adjustment and boundary point sampling. Experimental results show that the proposed method robustly eliminates the artifacts caused by inaccurate input data and produces high-quality texture results compared with state-of-the-art methods.
  • Item
    Polygon Laplacian Made Robust
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
  • Item
    ETBHD‐HMF: A Hierarchical Multimodal Fusion Architecture for Enhanced Text‐Based Hair Design
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) He, Rong; Jiao, Ge; Li, Chen; Alliez, Pierre; Wimmer, Michael
Text‐based hair design (TBHD) is an approach that uses text instructions to craft hairstyle and colour, valued for its flexibility and scalability. However, improving the generation quality and editing accuracy of TBHD algorithms remains a research challenge, in large part because existing models fall short in their alignment and fusion designs. Therefore, we propose a new layered multimodal fusion network called ETBHD‐HMF, which decouples the input image and hair text information into layered hair colour and hairstyle representations. Within this network, a channel enhancement separation (CES) module is proposed to enhance important signals and suppress noise in the text representation obtained from CLIP, thus improving generation quality. Based on this, we develop the weighted mapping fusion (WMF) sub‐networks for hair colour and hairstyle. Each sub‐network applies mapper operations to the input image and text representations to acquire joint information. The WMF then selectively merges the image representation and the joint information from various style layers using weighted operations, ultimately achieving fine‐grained hairstyle designs. Additionally, to enhance editing accuracy and quality, we design a modality alignment loss that refines and optimizes the information transmission and integration of the network. Experimental results on the CelebA‐HQ dataset demonstrate that our proposed model exhibits superior overall performance in terms of generation quality, visual realism, and editing accuracy. ETBHD‐HMF (27.8 PSNR, 0.864 IDS) outperformed HairCLIP (26.9 PSNR, 0.828 IDS), with a 3% higher PSNR and a 4% higher IDS.
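A hypothetical sketch of weighted fusion of an image representation with joint image-text information, loosely in the spirit of the WMF description above; the layer sizes, gating form, and module name are assumptions, not the paper's architecture.

```python
# Assumed gated fusion module: blends image features with a mapped joint
# image-text representation using learned per-channel weights.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mapper = nn.Sequential(nn.Linear(2 * dim, dim), nn.LeakyReLU(),
                                    nn.Linear(dim, dim))
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, image_feat, text_feat):
        joint = self.mapper(torch.cat([image_feat, text_feat], dim=-1))
        w = self.gate(torch.cat([image_feat, joint], dim=-1))   # per-channel weight
        return w * joint + (1.0 - w) * image_feat

fused = WeightedFusion(512)(torch.randn(1, 512), torch.randn(1, 512))
```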
  • Item
    EHR STAR: The State‐Of‐the‐Art in Interactive EHR Visualization
    (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022) Wang, Q.; Laramee, R.S.; Hauser, Helwig and Alliez, Pierre
Since the inception of electronic health records (EHR) and population health records (PopHR), the volume of archived digital health records has been growing rapidly. Large volumes of heterogeneous health records require advanced visualization and visual analytics systems to uncover valuable insight buried in complex databases. Within this vibrant sub‐field of information visualization and visual analytics, many interactive EHR and PopHR visualization (EHR Vis) systems have been proposed, developed, and evaluated by clinicians to support effective clinical analysis and decision making. We present the state‐of‐the‐art (STAR) of EHR Vis literature and open access healthcare data sources and provide an up‐to‐date overview of this important topic. We identify trends and challenges in the field, introduce novel literature and data classifications, and incorporate a popular medical terminology standard called the Unified Medical Language System (UMLS). We provide a curated list of electronic and population healthcare data sources and open access datasets as a resource for potential researchers, in order to address one of the main challenges in this field. We classify the literature based on multidisciplinary research themes stemming from recurring topics. The survey provides a valuable overview of EHR Vis, revealing both mature areas and potential future multidisciplinary research directions.
  • Item
    Curved Three-Director Cosserat Shells with Strong Coupling
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Löschner, Fabian; Fernández-Fernández, José Antonio; Jeske, Stefan Rhys; Bender, Jan; Skouras, Melina; Wang, He
Continuum-based shell models are an established approach for the simulation of thin deformables in computer graphics. However, existing research in physically-based animation is mostly focused on shear-rigid Kirchhoff-Love shells. In this work we explore three-director Cosserat (micropolar) shells which introduce additional rotational degrees of freedom. This microrotation field models transverse shearing and in-plane drilling rotations. We propose an incremental potential formulation of the Cosserat shell dynamics which allows for strong coupling with frictional contact and other physical systems. We evaluate a corresponding finite element discretization for non-planar shells using second-order elements, which alleviates shear-locking and permits simulation of curved geometries. Our formulation and the discretization, in particular of the rotational degrees of freedom, are designed to integrate well with typical simulation approaches in physically-based animation. While the discretization of the rotations requires some care, we demonstrate that they do not pose significant numerical challenges in Newton's method. In our experiments we also show that the codimensional shell model is consistent with the respective three-dimensional model. We qualitatively compare our formulation with Kirchhoff-Love shells and demonstrate intriguing use cases for the additional modes of control over dynamic deformations offered by the Cosserat model, such as directly prescribing rotations or angular velocities and influencing the shell's curvature.
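For reference, a generic incremental potential of the kind referred to above (the optimization form of one implicit-Euler step, as commonly used in physically-based animation); the notation is ours and not taken verbatim from the paper.

```latex
% One implicit-Euler step of size h as a minimization: inertia term plus
% scaled elastic/contact energy E, with M the mass matrix.
\[
  x_{n+1} \;=\; \arg\min_{x}\;
    \tfrac{1}{2}\,(x - \tilde{x})^{\top} M\, (x - \tilde{x}) \;+\; h^{2}\, E(x),
  \qquad
  \tilde{x} \;=\; x_{n} + h\, v_{n} + h^{2} M^{-1} f_{\mathrm{ext}}.
\]
```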
  • Item
    3D Generative Model Latent Disentanglement via Local Eigenprojection
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Foti, Simone; Koo, Bongjin; Stoyanov, Danail; Clarkson, Matthew J.; Hauser, Helwig and Alliez, Pierre
Designing realistic digital humans is extremely complex. Most data‐driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural‐network‐based generative models of 3D head and body meshes. By encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state of the art, but also maintain good generation capabilities, with training times comparable to the vanilla implementations of the models. Our code and pre‐trained models are available at .
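A small illustration of the spectral machinery such a loss builds on, assuming a uniform graph Laplacian and a plain projection of per-vertex displacements onto its low-frequency eigenvectors; this is not the paper's exact loss.

```python
# Project per-vertex displacements onto the first k eigenvectors of a uniform
# graph Laplacian (low-frequency mesh modes). Simplified illustration only.
import numpy as np

def uniform_laplacian(n_vertices, edges):
    """Dense graph Laplacian L = D - A from an undirected edge list."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def eigenprojection(displacements, L, k):
    """Coefficients of per-vertex displacements in the first k Laplacian modes."""
    _, modes = np.linalg.eigh(L)            # eigenvalues ascending: low frequencies first
    return modes[:, :k].T @ displacements   # (k, 3) spectral coefficients

# Toy usage on a tetrahedron graph (hypothetical data).
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)]
L = uniform_laplacian(4, edges)
disp = np.random.default_rng(0).normal(size=(4, 3))
coeffs = eigenprojection(disp, L, k=3)
```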
  • Item
    Directional Texture Editing for 3D Models
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Liu, Shengqi; Chen, Zhuo; Gao, Jingnan; Yan, Yichao; Zhu, Wenhan; Lyu, Jiangjing; Yang, Xiaokang; Alliez, Pierre; Wimmer, Michael
Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguity of text descriptions make this task challenging. To tackle this challenge, we propose ITEM3D, a Texture Editing Model designed for automatic object editing according to the text Instructions. Leveraging diffusion models and differentiable rendering, ITEM3D takes rendered images as the bridge between text and the 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted an absolute editing direction, namely score distillation sampling (SDS), as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies. To solve the problem caused by ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to resolve the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address unexpected deviation in the texture domain. Qualitative and quantitative experiments show that ITEM3D outperforms state‐of‐the‐art methods on various 3D objects. We also perform text‐guided relighting to show explicit control over lighting. Our project page: .
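A sketch of the relative editing direction idea described above: rather than the raw noise prediction for the target text (plain SDS), take the difference between noise predictions conditioned on the target and the source texts. Here predict_noise is a hypothetical stand-in for a pretrained diffusion model's epsilon prediction, not ITEM3D's actual interface.

```python
# Relative editing direction = noise(target text) - noise(source text).
import torch

def relative_direction(predict_noise, noisy_render, t,
                       source_text_emb, target_text_emb):
    """Gradient direction driven by the target/source noise difference."""
    with torch.no_grad():
        eps_target = predict_noise(noisy_render, t, target_text_emb)
        eps_source = predict_noise(noisy_render, t, source_text_emb)
    return eps_target - eps_source   # pushes the render away from the source, toward the target
```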