Search Results

Now showing 1 - 5 of 5
  • Item
    Neural Face Skinning for Mesh-agnostic Facial Expression Cloning
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Cha, Sihun; Yoon, Serin; Seo, Kwanggyoon; Noh, Junyong; Bousseau, Adrien; Day, Angela
    Accurately retargeting facial expressions to a face mesh while enabling manipulation is a key challenge in facial animation retargeting. Recent deep-learning methods address this by encoding facial expressions into a global latent code, but they often fail to capture fine-grained details in local regions. While some methods improve local accuracy by transferring deformations locally, this often complicates overall control of the facial expression. To address this, we propose a method that combines the strengths of both global and local deformation models. Our approach enables intuitive control and detailed expression cloning across diverse face meshes, regardless of their underlying structures. The core idea is to localize the influence of the global latent code on the target mesh. Our model learns to predict skinning weights for each vertex of the target face mesh through indirect supervision from predefined segmentation labels. These predicted weights localize the global latent code, enabling precise and region-specific deformations even for meshes with unseen shapes. We supervise the latent code using Facial Action Coding System (FACS)-based blendshapes to ensure interpretability and allow straightforward editing of the generated animation. Through extensive experiments, we demonstrate improved performance over state-of-the-art methods in terms of expression fidelity, deformation transfer accuracy, and adaptability across diverse mesh structures.
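    A minimal sketch of the localized-latent-code idea described in this abstract, assuming a hypothetical PyTorch module: per-vertex skinning weights predicted from the rest pose gate a set of region decoders driven by one global expression code. Dimensions, layer sizes, and names are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LocalizedDeformer(nn.Module):
    """Hypothetical sketch: skinning weights localize a global expression code."""

    def __init__(self, n_regions=16, code_dim=64, hidden=128):
        super().__init__()
        # Soft region assignment (skinning weights) per vertex, predicted
        # from the rest-pose vertex position.
        self.weight_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, n_regions),
        )
        # One small decoder per region maps (vertex, global code) to a
        # displacement; the weights blend these regional deformations.
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 3))
            for _ in range(n_regions)
        )

    def forward(self, verts, code):
        # verts: (V, 3) rest-pose vertices, code: (code_dim,) expression code
        weights = torch.softmax(self.weight_net(verts), dim=-1)         # (V, K)
        cond = torch.cat([verts, code.expand(verts.shape[0], -1)], dim=-1)
        offsets = torch.stack([d(cond) for d in self.decoders], dim=1)  # (V, K, 3)
        # The skinning weights localize the globally-coded deformation.
        return verts + (weights.unsqueeze(-1) * offsets).sum(dim=1)
```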
  • Item
    Optimizing Free-Form Grid Shells with Reclaimed Elements under Inventory Constraints
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Favilli, Andrea; Laccone, Francesco; Cignoni, Paolo; Malomo, Luigi; Giorgi, Daniela; Bousseau, Adrien; Day, Angela
    We propose a method for designing 3D architectural free-form surfaces, represented as grid shells with beams sourced from inventories of reclaimed elements from dismantled buildings. In inventory-constrained design, the reused elements must be paired with elements in the target design. Traditional solutions to this assignment problem often result in cuts and material waste or geometric distortions that affect the surface aesthetics and buildability. Our method for inventory-constrained assisted design blends the traditional assignment problem with differentiable geometry optimization to reduce cut-off waste while preserving the design intent. Additionally, we extend our approach to incorporate strain energy minimization for structural efficiency. We design differentiable losses that account for inventory, geometry, and structural constraints, and streamline them into a complete pipeline, demonstrated through several case studies. Our approach enables the reuse of existing elements for new designs, reducing the need for sourcing new materials and disposing of waste. Consequently, it can serve as an initial step towards mitigating the significant environmental impact of the construction sector.
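    A small illustrative sketch of blending the assignment problem with differentiable geometry optimization, as described in this abstract. It alternates a linear assignment of reclaimed stock to beams (SciPy) with gradient steps on node positions (PyTorch) that trade cut-off waste against deviation from the design intent; the stock model, losses, and weights are assumptions, not the paper's formulation.

```python
import numpy as np
import torch
from scipy.optimize import linear_sum_assignment

def optimize_grid(nodes, edges, stock, n_iters=200, lr=1e-2, w_design=1.0):
    """Alternate stock-to-beam assignment and differentiable node updates.

    nodes: (N, 3) array, edges: (E, 2) int array, stock: (S,) lengths, S >= E.
    """
    stock = np.asarray(stock, dtype=float)
    nodes = torch.tensor(nodes, dtype=torch.float64, requires_grad=True)
    edges = torch.as_tensor(edges, dtype=torch.long)
    stock_t = torch.as_tensor(stock)
    target = nodes.detach().clone()                 # original design intent
    opt = torch.optim.Adam([nodes], lr=lr)

    for _ in range(n_iters):
        lengths = (nodes[edges[:, 0]] - nodes[edges[:, 1]]).norm(dim=1)
        # Assignment step: pair each beam with one reclaimed element,
        # penalizing cut-off and forbidding stock shorter than the beam.
        cut = stock[None, :] - lengths.detach().numpy()[:, None]
        rows, cols = linear_sum_assignment(np.where(cut < 0, 1e6, cut))
        assigned = stock_t[torch.as_tensor(cols)]

        # Geometry step: differentiable cut-off waste + design-intent loss.
        waste = torch.relu(assigned - lengths[torch.as_tensor(rows)]).sum()
        design = ((nodes - target) ** 2).sum()
        loss = waste + w_design * design
        opt.zero_grad(); loss.backward(); opt.step()

    return nodes.detach().numpy(), cols             # final geometry + pairing
```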
  • Item
    Learning Image Fractals Using Chaotic Differentiable Point Splatting
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Djeacoumar, Adarsh; Mujkanovic, Felix; Seidel, Hans-Peter; Leimkühler, Thomas; Bousseau, Adrien; Day, Angela
    Fractal geometry, defined by self-similar patterns across scales, is crucial for understanding natural structures. This work addresses the fractal inverse problem, which involves extracting fractal codes from images to explain these patterns and synthesize them at arbitrary finer scales. We introduce a novel algorithm that optimizes Iterated Function System parameters using a custom fractal generator combined with differentiable point splatting. By integrating both stochastic and gradient-based optimization techniques, our approach effectively navigates the complex energy landscapes typical of fractal inversion, ensuring robust performance and the ability to escape local minima. We demonstrate the method's effectiveness through comparisons with various fractal inversion techniques, highlighting its ability to recover high-quality fractal codes and perform extensive zoom-ins to reveal intricate patterns from just a single image.
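    A toy sketch of the ingredients named in this abstract: a chaos-game Iterated Function System generator whose points are splatted as Gaussians onto an image, so the affine parameters can be fit to a target picture by gradient descent. The parameterization, splat kernel, and optimizer settings are simplifying assumptions, not the paper's generator or inversion procedure.

```python
import torch

def render_ifs(A, b, n_pts=1024, res=64, sigma=1.5, seed=0):
    """A: (K, 2, 2) affine matrices, b: (K, 2) translations -> (res, res) image."""
    g = torch.Generator().manual_seed(seed)
    choices = torch.randint(A.shape[0], (n_pts,), generator=g)  # fixed map picks
    x, pts = torch.zeros(2), []
    for k in choices:                     # chaos game iteration
        x = (A[k] @ x + b[k]).clamp(-5, 5)  # clamp guards against divergence
        pts.append(x)
    pts = torch.stack(pts[100:])          # drop burn-in
    # Splat each point as an isotropic Gaussian onto a [-1, 1]^2 pixel grid.
    grid = torch.stack(torch.meshgrid(
        torch.linspace(-1, 1, res), torch.linspace(-1, 1, res),
        indexing="ij"), dim=-1)           # (res, res, 2)
    d2 = ((grid[None] - pts[:, None, None]) ** 2).sum(-1)
    img = torch.exp(-d2 / (2 * (2 * sigma / res) ** 2)).sum(0)
    return img / img.max()

def fit(target, K=3, steps=300, lr=1e-2):
    """Gradient-based refinement of IFS parameters toward a (res, res) target."""
    A = torch.randn(K, 2, 2) * 0.3        # small init keeps maps roughly contractive
    b = torch.randn(K, 2) * 0.3
    A.requires_grad_(True); b.requires_grad_(True)
    opt = torch.optim.Adam([A, b], lr=lr)
    for _ in range(steps):
        loss = ((render_ifs(A, b) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return A.detach(), b.detach()
```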
  • Item
    Multi-Modal Instrument Performances (MMIP): A Musical Database
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kyriakou, Theodoros; Aristidou, Andreas; Charalambous, Panayiotis; Bousseau, Adrien; Day, Angela
    Musical instrument performances are multimodal creative art forms that integrate audiovisual elements, resulting from musicians' interactions with instruments through body movements, finger actions, and facial expressions. Digitizing such performances for archiving, streaming, analysis, or synthesis requires capturing every element that shapes the overall experience, which is crucial for preserving the performance's essence. In this work, following current trends in large-scale dataset development for deep learning analysis and generative models, we introduce the Multi-Modal Instrument Performances (MMIP) database (https://mmip.cs.ucy.ac.cy). This is the first dataset to incorporate synchronized high-quality 3D motion capture data for the body, fingers, facial expressions, and instruments, along with audio, multi-angle videos, and MIDI data. The database currently includes 3.5 hours of performances featuring three instruments: guitar, piano, and drums. Additionally, we discuss the challenges of acquiring these multi-modal data, detailing our approach to data collection, signal synchronization, annotation, and metadata management. Our data formats align with industry standards for ease of use, and we have developed an open-access online repository that offers a user-friendly environment for data exploration, supporting data organization, search capabilities, and custom visualization tools. Notable features include a MIDI-to-instrument animation project for visualizing the instruments and a script for playing back FBX files with synchronized audio in a web environment.
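    A brief sketch of the kind of MIDI-to-animation mapping mentioned in this abstract, using the mido library to turn note-on events into keyframes. The keyframe layout and frame rate are assumptions for illustration, not MMIP's actual tooling.

```python
import mido

def midi_to_keyframes(path, fps=30):
    """Convert note-on events in a MIDI file into simple animation keyframes."""
    mid = mido.MidiFile(path)
    keyframes, t = [], 0.0
    for msg in mid:                      # iteration yields delta times in seconds
        t += msg.time
        if msg.type == "note_on" and msg.velocity > 0:
            keyframes.append({
                "frame": round(t * fps),
                "note": msg.note,        # e.g. which piano key or drum pad
                "strength": msg.velocity / 127.0,
            })
    return keyframes
```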
  • Item
    FlairGPT: Repurposing LLMs for Interior Designs
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Littlefair, Gabrielle; Dutt, Niladri Shekhar; Mitra, Niloy J.; Bousseau, Adrien; Day, Angela
    Interior design involves the careful selection and arrangement of objects to create an aesthetically pleasing, functional, and harmonized space that aligns with the client's design brief. This task is particularly challenging, as a successful design must not only incorporate all the necessary objects in a cohesive style, but also ensure they are arranged in a way that maximizes accessibility, while adhering to a variety of affordability and usage considerations. Data-driven solutions have been proposed, but these are typically room- or domain-specific and lack explainability in the design considerations used to produce the final layout. In this paper, we investigate if large language models (LLMs) can be directly utilized for interior design. While we find that LLMs are not yet capable of generating complete layouts, they can be effectively leveraged in a structured manner, inspired by the workflow of interior designers. By systematically probing LLMs, we can reliably generate a list of objects along with relevant constraints that guide their placement. We translate this information into a design layout graph, which is then solved using an off-the-shelf constrained optimization setup to generate the final layouts. We benchmark our algorithm in various design configurations against existing LLM-based methods and human designs, and evaluate the results using a variety of quantitative and qualitative metrics along with user studies. In summary, we demonstrate that LLMs, when used in a structured manner, can effectively generate diverse high-quality layouts, making them a viable solution for creating large-scale virtual scenes. Code is available via the project webpage.
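    An illustrative sketch of the last step described in this abstract: LLM-derived objects and placement constraints form a small layout graph whose positions are solved with an off-the-shelf optimizer (SciPy). The objects, constraints, and penalty weights here are toy assumptions, not FlairGPT's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

ROOM = (5.0, 4.0)                       # room width, depth in metres (assumed)
objects = ["bed", "desk", "wardrobe"]   # each gets an (x, y) position
near = [("bed", "desk")]                # toy LLM-derived adjacency constraint
min_gap = 1.0                           # minimum clearance between objects

def idx(name):
    return objects.index(name) * 2

def energy(p):
    cost = 0.0
    # Keep every object inside the room.
    for i in range(len(objects)):
        x, y = p[2 * i], p[2 * i + 1]
        cost += max(0, -x) + max(0, x - ROOM[0])
        cost += max(0, -y) + max(0, y - ROOM[1])
    # Pull "near" pairs together.
    for a, b in near:
        cost += np.hypot(p[idx(a)] - p[idx(b)], p[idx(a) + 1] - p[idx(b) + 1])
    # Push all pairs apart to at least min_gap.
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            d = np.hypot(p[2*i] - p[2*j], p[2*i+1] - p[2*j+1])
            cost += 10 * max(0, min_gap - d) ** 2
    return cost

x0 = np.random.default_rng(0).uniform(0, min(ROOM), size=2 * len(objects))
result = minimize(energy, x0, method="Nelder-Mead")
layout = result.x.reshape(-1, 2)        # one (x, y) per object
```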