Search Results

  • Item
    DepthLight: a Single Image Lighting Pipeline for Seamless Integration of Virtual Objects into Real Scenes
    (The Eurographics Association, 2025) Manus, Raphael; Christie, Marc; Boivin, Samuel; Guehl, Pascal; Catalano, Chiara Eva; Parakkat, Amal Dev
    We present DepthLight, a method to estimate spatial lighting for photorealistic Visual Effects (VFX) using a single image as input. Previous techniques rely either on estimated or captured light representations that fail to account for localized lighting effects, or on simplified lights that do not fully capture the complexity of the illumination process. DepthLight addresses these limitations by using a single LDR image with a limited field of view (LFOV) as input to compute an emissive texture mesh around the image (a mesh which generates spatial lighting in the scene), producing a simple and lightweight 3D representation for photorealistic object relighting. First, an LDR panorama is generated around the input image using a photorealistic diffusion-based inpainting technique, conditioned on the input image. An LDR-to-HDR network then reconstructs the full HDR panorama, while an off-the-shelf depth estimation technique generates a mesh representation, which is finally used to build a 3D emissive mesh. This emissive mesh approximates the bidirectional light interactions between the scene and the virtual objects, and is used to relight virtual objects placed in the scene. We also exploit this mesh to cast shadows from the virtual objects onto the emissive mesh, and add these shadows to the original LDR image. This flexible pipeline can be easily integrated into different VFX production workflows. In our experiments, DepthLight shows that virtual objects are seamlessly integrated into real scenes with a visually plausible estimation of the lighting. We compared our results to the ground-truth lighting using Unreal Engine, as well as to state-of-the-art approaches that use pure HDRi lighting techniques (see Figure 1). Finally, we validated our approach by conducting a user evaluation with 52 participants, as well as a comparison to existing techniques.
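    The stages outlined above (inpaint a panorama around the LFOV input, lift it from LDR to HDR, estimate depth, and back-project the panorama into an emissive mesh) can be summarised in a short sketch. The Python outline below is purely illustrative: every function name, the array shapes, and the naive inverse-gamma and constant-depth stand-ins are assumptions for clarity, not the authors' code or any particular library's API.

```python
# Hypothetical outline of the DepthLight stages described in the abstract.
# All names, shapes, and stand-in computations are illustrative placeholders.
import numpy as np

def inpaint_panorama(ldr_lfov: np.ndarray) -> np.ndarray:
    """Stage 1 (placeholder): diffusion-based inpainting would extend the
    limited-field-of-view LDR image into a full equirectangular LDR panorama."""
    h = ldr_lfov.shape[0]
    return np.zeros((h, 2 * h, 3), dtype=np.float32)  # dummy 2:1 panorama

def ldr_to_hdr(ldr_pano: np.ndarray) -> np.ndarray:
    """Stage 2 (placeholder): an LDR-to-HDR network would recover radiance
    beyond the [0, 1] range; a crude inverse-gamma curve stands in here."""
    return ldr_pano.astype(np.float32) ** 2.2

def estimate_depth(ldr_pano: np.ndarray) -> np.ndarray:
    """Stage 3 (placeholder): an off-the-shelf monocular depth estimator
    would give per-pixel distances used to lift the panorama to a mesh."""
    return np.ones(ldr_pano.shape[:2], dtype=np.float32)

def build_emissive_mesh(hdr_pano: np.ndarray, depth: np.ndarray) -> dict:
    """Stage 4 (placeholder): back-project each panorama pixel along its view
    ray by its depth, keeping the HDR value as the vertex emission."""
    h, w = depth.shape
    phi = np.linspace(np.pi / 2, -np.pi / 2, h)[:, None]    # latitude
    theta = np.linspace(-np.pi, np.pi, w)[None, :]          # longitude
    dirs = np.stack([np.cos(phi) * np.sin(theta),
                     np.sin(phi) * np.ones_like(theta),
                     np.cos(phi) * np.cos(theta)], axis=-1)
    vertices = dirs * depth[..., None]
    return {"vertices": vertices.reshape(-1, 3),
            "emission": hdr_pano.reshape(-1, 3)}

if __name__ == "__main__":
    ldr_image = np.random.rand(256, 256, 3).astype(np.float32)
    pano = inpaint_panorama(ldr_image)
    mesh = build_emissive_mesh(ldr_to_hdr(pano), estimate_depth(pano))
    print(mesh["vertices"].shape, mesh["emission"].shape)
```

    In an actual implementation each placeholder would be replaced by the corresponding pretrained model, and the resulting emissive mesh would be imported into a renderer (the abstract mentions Unreal Engine) to relight the virtual objects and cast their shadows back onto the mesh.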
  • Item
    MVAE: Motion-conditioned Variational Auto-Encoder for tailoring character animations
    (The Eurographics Association, 2025) Bordier, Jean-Baptiste; Christie, Marc; Catalano, Chiara Eva; Parakkat, Amal Dev
    The design of character animations with enough diversity is a time-consuming task in many productions such as video games or animated films, and drives the need for simpler and more effective authoring systems. This paper introduces a novel approach: a motion-conditioned variational autoencoder (VAE) that uses virtual reality (VR) as a motion capture device. Our model generates diverse humanoid character animations based only on a gesture captured from two VR controllers, allowing for precise control of motion characteristics such as rhythm, speed, and amplitude, and providing variability through noise sampling. From a dataset comprising paired controller-character motions, we design and train our VAE to (i) identify global motion characteristics from the movement, in order to discern the type of animation desired by the user, and (ii) identify local motion characteristics including rhythm, velocity, and amplitude to adapt the animation to these characteristics. Unlike many text-to-motion approaches, our method faces the challenge of interpreting high-dimensional, non-discrete user inputs. Our model maps these inputs into the higher-dimensional space of character animation while leveraging motion characteristics (such as height, speed, walking step frequency, and amplitude) to fine-tune the generated motion. We demonstrate the relevance of the approach on a number of examples and illustrate how changes in the rhythm and amplitude of the input motions are transferred to coherent changes in the animated character, while offering a diversity of results using different noise samples.
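    The mechanism described above (encode paired controller and character motions into a latent Gaussian, then decode new animations by sampling noise conditioned on the controller gesture) follows the general conditional-VAE recipe. The PyTorch sketch below is a hedged illustration of that structure only: the MLP encoder and decoder, the layer sizes, and the 64-dimensional controller feature vector are assumptions made for clarity, not the paper's actual architecture.

```python
# Minimal conditional-VAE sketch; all sizes and layer choices are assumptions.
import torch
import torch.nn as nn

class MotionConditionedVAE(nn.Module):
    def __init__(self, ctrl_dim=64, pose_dim=132, latent_dim=32):
        super().__init__()
        # Encoder: character pose + controller gesture -> latent Gaussian params
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim + ctrl_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim))
        # Decoder: latent sample + controller gesture -> character pose
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + ctrl_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim))
        self.latent_dim = latent_dim

    def forward(self, pose, ctrl):
        mu, logvar = self.encoder(torch.cat([pose, ctrl], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decoder(torch.cat([z, ctrl], -1)), mu, logvar

    def generate(self, ctrl, n_samples=4):
        # Diversity through noise sampling: different z, same controller gesture.
        z = torch.randn(n_samples, self.latent_dim)
        return self.decoder(torch.cat([z, ctrl.expand(n_samples, -1)], -1))

if __name__ == "__main__":
    model = MotionConditionedVAE()
    gesture = torch.randn(1, 64)      # features extracted from two VR controllers
    poses = model.generate(gesture)   # several distinct poses for one gesture
    print(poses.shape)                # torch.Size([4, 132])
```

    Sampling several latent vectors for the same controller gesture is what provides the diversity of results the abstract describes, while the conditioning input carries the rhythm, speed, and amplitude cues that the decoder must respect.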