Search Results

Now showing 1–10 of 19
  • ACM/EG Expressive Symposium 2025: Frontmatter
    (The Eurographics Association, 2025) Catalano, Chiara Eva; Parakkat, Amal Dev
  • ACM/EG Expressive Symposium 2025 Posters and Demos: Frontmatter
    (The Eurographics Association, 2025) Berio, Daniel; Bruckert, Alexandre
  • Revisiting Analog Stereoscopic Film
    (The Eurographics Association, 2025) Freude, Christian; Jauernik, Christina; Lurf, Johann; Suppin, Rüdiger; Wimmer, Michael
    We present approaches for the simulation of an analog autostereoscopic (glasses-free) display and the visualization of analog color film at micro scales. These techniques were developed during an artistic research project and the creation of an accompanying art installation, which exhibits an analog stereo short film projected on a re-creation of a cyclostéréoscope, a historic device developed around 1952. We describe how computer graphics helped us understand the cyclostéréoscope, supported its physical re-creation, and enabled the visualization of the projection and material structure of analog film using physically based Monte Carlo light simulation.
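For readers unfamiliar with the technique the abstract names, here is a minimal, self-contained sketch of a physically based Monte Carlo estimator, in this case for the irradiance at a diffuse surface point. It illustrates the general method only, not the authors' film-visualization renderer; all names are illustrative.

```python
# Estimate irradiance E = integral of L(w) * cos(theta) over the hemisphere
# by uniformly sampling directions around the surface normal.
import math
import random

def sample_hemisphere(normal):
    """Uniformly sample a unit direction on the hemisphere around `normal`."""
    while True:
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in d))
        if n > 1e-9:
            d = [c / n for c in d]
            break
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-c for c in d]  # flip into the normal's hemisphere
    return d

def estimate_irradiance(normal, incoming_radiance, samples=1024):
    pdf = 1.0 / (2.0 * math.pi)  # uniform hemisphere pdf
    total = 0.0
    for _ in range(samples):
        w = sample_hemisphere(normal)
        cos_theta = sum(a * b for a, b in zip(w, normal))
        total += incoming_radiance(w) * cos_theta / pdf
    return total / samples

# Constant sky of radiance 1 everywhere: the exact answer is pi.
print(estimate_irradiance((0.0, 0.0, 1.0), lambda w: 1.0))
```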
  • DepthLight: a Single Image Lighting Pipeline for Seamless Integration of Virtual Objects into Real Scenes
    (The Eurographics Association, 2025) Manus, Raphael; Christie, Marc; Boivin, Samuel; Guehl, Pascal
    We present DepthLight, a method to estimate spatial lighting for photorealistic Visual Effects (VFX) using a single image as input. Previous techniques rely either on estimated or captured light representations that fail to account for localized lighting effects, or on simplified lights that do not fully capture the complexity of the illumination process. DepthLight addresses these limitations by using a single LDR image with a limited field of view (LFOV) to compute an emissive textured mesh around the image (a mesh that generates spatial lighting in the scene), producing a simple and lightweight 3D representation for photorealistic object relighting. First, an LDR panorama is generated around the input image using a photorealistic diffusion-based inpainting technique, conditioned on the input image. An LDR-to-HDR network then reconstructs the full HDR panorama, while an off-the-shelf depth estimation technique generates a mesh representation, yielding a 3D emissive mesh. This emissive mesh approximates the bidirectional light interactions between the scene and the virtual objects and is used to relight virtual objects placed in the scene. We also exploit this mesh to cast shadows from the virtual objects onto the emissive mesh, and add these shadows to the original LDR image. This flexible pipeline can be easily integrated into different VFX production workflows. In our experiments, DepthLight shows that virtual objects are seamlessly integrated into real scenes with a visually plausible estimation of the lighting. We compared our results to ground-truth lighting using Unreal Engine, as well as to state-of-the-art approaches that use pure HDRi lighting techniques (see Figure 1). Finally, we validated our approach by conducting a user evaluation with 52 participants as well as a comparison to existing techniques.
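One step in a DepthLight-style pipeline that is easy to show concretely is lifting an estimated depth map into 3D points that could seed the emissive mesh. The sketch below assumes a standard pinhole camera model with invented intrinsics; the paper's actual mesh construction and texturing are not reproduced here.

```python
# Unproject a per-pixel depth map into camera-space 3D points.
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift an HxW depth map to HxWx3 camera-space points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy 4x4 depth map, 2 m everywhere, with made-up intrinsics.
depth = np.full((4, 4), 2.0)
points = unproject_depth(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)  # (4, 4, 3)
```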
  • PerceptualLift: Using hatches to infer a 3D organic shape from a sketch
    (The Eurographics Association, 2025) Butler, Tara; Guehl, Pascal; Parakkat, Amal Dev; Cani, Marie-Paule
    In this work, we investigate whether artistic hatching, popular in pen-and-ink sketches, can be consistently perceived as a depth cue. We illustrate our results by presenting PerceptualLift, a modeling system that exploits hatching to create curved 3D shapes from a single sketch. We first describe a perceptual user study conducted across a diverse group of participants, which confirms the relevance of hatches as consistent cues for inferring curvature in the depth direction from a sketch. This enables us to extract geometric rules that link 2D hatch characteristics, such as their direction, frequency, and magnitude, to changes of depth in the depicted 3D shape. Building on these rules, we introduce PerceptualLift, a flexible tool to model 3D organic shapes by simply hatching over 2D hand-drawn contour sketches.
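To make the idea of such geometric rules concrete, below is a purely hypothetical example of a rule that maps local hatch measurements to a depth offset. The formula and the HatchPatch fields are invented for illustration; the paper's calibrated rules, derived from its user study, will differ.

```python
# Hypothetical hatch-to-depth rule: depth grows with hatch density and
# stroke length, pushed along the normal of the hatch direction.
from dataclasses import dataclass
import math

@dataclass
class HatchPatch:
    direction: float  # stroke angle in radians, in image space
    frequency: float  # strokes per unit length (denser = stronger cue)
    magnitude: float  # average stroke length

def depth_offset(patch, gain=1.0):
    strength = gain * patch.magnitude * (1.0 - math.exp(-patch.frequency))
    normal = (math.cos(patch.direction + math.pi / 2),
              math.sin(patch.direction + math.pi / 2))
    return strength, normal

strength, normal = depth_offset(HatchPatch(direction=0.3, frequency=4.0, magnitude=0.8))
print(round(strength, 3), normal)
```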
  • Modeling Crochet Patterns with a Force-directed Graph Layout
    (The Eurographics Association, 2025) Greer, Émile; Mould, David
    Designing crochet patterns is a difficult, time-consuming task. Typically, an initial pattern is created and crocheted; after seeing how the object comes out, the pattern is modified and some number of stitches are undone and remade, over several iterations. This process involves a lot of guesswork and the manual labor of physically crocheting. In this paper, we present a way of creating a 3D representation of a crochet pattern using a written pattern as input: we translate the written pattern into a graph and obtain a force-directed graph layout. The result is a 3D model that matches the hand-crocheted pattern in shape and size, with the advantage that the designer does not need to physically crochet the pattern and can make adjustments based on the digital model. Our intended audience includes both professional designers and beginners, helping designers visualize their crochet patterns before investing the time and effort to physically make them. While our application is oriented towards amigurumi, it could be extended to work with clothing or other similar styles of crochet.
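The core idea translates directly into a few lines of code. The toy sketch below builds the stitch graph for a small crocheted tube (rounds of single crochet, each stitch linked to its neighbor in the round and to the stitch it is worked into) and relaxes it with networkx's force-directed spring layout in 3D. Parsing of real written patterns is omitted.

```python
# Crochet pattern as a graph, embedded in 3D by a force-directed layout.
import networkx as nx

rounds, stitches = 5, 6
G = nx.Graph()
for r in range(rounds):
    for s in range(stitches):
        node = (r, s)
        G.add_edge(node, (r, (s + 1) % stitches))  # neighbor in the round
        if r > 0:
            G.add_edge(node, (r - 1, s))  # stitch worked into the row below

pos = nx.spring_layout(G, dim=3, seed=42)  # force-directed 3D embedding
print(len(pos), "stitch positions, e.g.", pos[(0, 0)])
```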
  • MVAE: Motion-conditioned Variational Auto-Encoder for tailoring character animations
    (The Eurographics Association, 2025) Bordier, Jean-Baptiste; Christie, Marc
    The design of character animations with sufficient diversity is a time-consuming task in many productions, such as video games or animated films, and drives the need for simpler and more effective authoring systems. This paper introduces a novel approach: a motion-conditioned variational autoencoder (VAE) that uses virtual reality as a motion capture device. Our model generates diverse humanoid character animations based only on a gesture captured from two virtual reality controllers, allowing for precise control of motion characteristics such as rhythm, speed, and amplitude, and providing variability through noise sampling. From a dataset comprising paired controller-character motions, we design and train our VAE to (i) identify global motion characteristics from the movement, in order to discern the type of animation desired by the user, and (ii) identify local motion characteristics, including rhythm, velocity, and amplitude, to adapt the animation to these characteristics. Unlike many text-to-motion approaches, our method faces the challenge of interpreting high-dimensional, non-discrete user inputs. Our model maps these inputs into the higher-dimensional space of character animation while leveraging motion characteristics (such as height, speed, walking step frequency, and amplitude) to fine-tune the generated motion. We demonstrate the relevance of the approach on a number of examples and illustrate how changes in the rhythm and amplitude of the input motions are transferred to coherent changes in the animated character, while offering a diversity of results using different noise samples.
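For readers who want the architecture in code form, here is a hedged PyTorch skeleton of a motion-conditioned VAE in the spirit of the abstract: the condition is a controller-gesture feature vector and the output a character pose vector. All dimensions and layer sizes are invented; this is not the authors' trained model.

```python
# Minimal conditional VAE: encode motion given the control signal,
# sample a latent via the reparameterization trick, decode with the
# control signal re-attached.
import torch
import torch.nn as nn

class MotionCVAE(nn.Module):
    def __init__(self, ctrl_dim=32, motion_dim=128, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(motion_dim + ctrl_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + ctrl_dim, 256), nn.ReLU(),
            nn.Linear(256, motion_dim))

    def forward(self, motion, ctrl):
        h = self.enc(torch.cat([motion, ctrl], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(torch.cat([z, ctrl], dim=-1)), mu, logvar

model = MotionCVAE()
motion, ctrl = torch.randn(4, 128), torch.randn(4, 32)
recon, mu, logvar = model(motion, ctrl)
print(recon.shape)  # torch.Size([4, 128])
```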
  • View-Dependent Deformation Fields for 2D Editing of 3D Models
    (The Eurographics Association, 2025) Mqirmi, Martin El; Aigerman, Noam
    We propose a method for authoring non-realistic 3D objects (represented as either 3D Gaussian Splats or meshes) that comply with 2D edits from specific viewpoints. Namely, given a 3D object, a user chooses different viewpoints and interactively deforms the object in the 2D image plane of each view. The method then produces a "deformation field": a smooth interpolation between those 2D deformations as the viewpoint changes. Our core observation is that the 2D deformations need neither be tied to an underlying object nor share the same deformation space. We use this observation to devise a method for authoring view-dependent deformations, with several technical contributions: first, a novel way to compositionally blend the 2D deformations after lifting them to 3D, which enables the user to "stack" the deformations like layers in editing software, each deformation operating on the results of the previous; second, a novel method to apply the 3D deformation to 3D Gaussian Splats; third, an approach to author the 2D deformations by deforming a 2D mesh encapsulating a rendered image of the object. We show the versatility and efficacy of our method by adding cartoonish effects to objects, providing means to modify human characters, fitting 3D models to given 2D sketches and caricatures, resolving occlusions, and recreating classic non-realistic paintings as 3D models.
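Here is a minimal numeric sketch of the view-dependent interpolation idea, assuming a simple additive blend whose weights fall off as the camera turns away from each authored viewpoint. The Gaussian falloff and the weighted sum are illustrative choices; the paper's compositional "stacking" of deformations is more elaborate.

```python
# Blend per-view displacement fields with weights from view-angle proximity.
import numpy as np

def view_weights(current_dir, authored_dirs, sharpness=4.0):
    """Smooth weights from angular proximity of unit view directions."""
    cos = np.clip(authored_dirs @ current_dir, -1.0, 1.0)
    w = np.exp(sharpness * (cos - 1.0))  # 1 at the authored view, decays smoothly
    return w / w.sum()

# Two authored views (front and side) with per-vertex 3D displacements.
authored_dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
displacements = np.random.randn(2, 100, 3)  # 2 views x 100 vertices x xyz

current = np.array([0.6, 0.0, 0.8])
current /= np.linalg.norm(current)
w = view_weights(current, authored_dirs)
blended = np.tensordot(w, displacements, axes=1)  # weighted sum over views
print(w, blended.shape)
```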
  • Sketching Interactive Experiences: Can Co-creation with Artificial Generative Systems Enhance the Communication of Cultural Heritage?
    (The Eurographics Association, 2025) Veggi, Manuele; Catalano, Chiara Eva; Pescarin, Sofia
    Generative AI has opened up new and largely unexplored opportunities to address the challenges of communicating cultural heritage through interactive experiences. This study explores the potential of text-to-image models to support the early stages of interactive media design for cultural heritage applications. We conducted a qualitative survey using four generative AI services (Stable Diffusion, Adobe Firefly, MidJourney, and DALL-E) to address a real design challenge in the cultural domain. While the study does not cover the full scope of traditional interactive media design workflows or provide a comprehensive performance evaluation, it highlights key benefits of generative AI in early design phases. The survey reveals that these systems boost creativity by introducing unexpected elements, help consolidate initial ideas and communicate them effectively to colleagues or stakeholders, and even help less experienced designers understand design requirements.
  • Robotic Painting using Semantic Image Abstraction
    (The Eurographics Association, 2025) Stroh, Michael; Paetzold, Patrick; Berio, Daniel; Leymarie, Frederic Fol; Kehlbeck, Rebecca; Deussen, Oliver
    We present a novel image segmentation and abstraction pipeline tailored to robotic painting applications. We address the unique challenges of realizing digital abstractions as physical artistic renderings. Our approach generates adaptive, semantics-based abstractions that balance aesthetic appeal, structural coherence, and practical constraints inherent to robotic systems. By integrating panoptic segmentation with color-based over-segmentation, we partition images into meaningful regions corresponding to semantic objects while providing customizable abstraction levels that we optimize for robotic realization. We employ saliency maps and color-difference metrics for automatic parameter selection, guiding a merging process that detects and preserves critical object boundaries while simplifying less salient areas. Graph-based community detection further refines the abstraction by grouping regions based on local connectivity and semantic coherence. These abstractions enable robotic systems to create paintings on real canvases with a controlled level of detail and abstraction.
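The graph-based merging step can be illustrated at toy scale: regions from an over-segmentation become nodes of an adjacency graph, edges are weighted by color similarity, and community detection groups regions into larger paintable areas. The region data and the weighting below are invented, and the paper's saliency and boundary-preservation terms are omitted.

```python
# Group over-segmented regions via modularity-based community detection.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# (region_a, region_b, mean color difference) for adjacent regions.
adjacency = [(0, 1, 5.0), (1, 2, 8.0), (2, 3, 40.0), (3, 4, 6.0), (4, 5, 7.0)]

G = nx.Graph()
for a, b, dcolor in adjacency:
    G.add_edge(a, b, weight=1.0 / (1.0 + dcolor))  # similar colors -> strong edge

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])  # regions grouped into paintable areas
```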