Search Results

  • 2D Neural Fields with Learned Discontinuities
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Liu, Chenxi; Wang, Siqi; Fisher, Matthew; Aneja, Deepali; Jacobson, Alec; Bousseau, Adrien; Day, Angela
    Effective representation of 2D images is fundamental in digital image processing, where traditional methods like raster and vector graphics struggle with sharpness and textural complexity, respectively. Current neural fields offer high fidelity and resolution independence but require predefined meshes with known discontinuities, restricting their utility. We observe that by treating all mesh edges as potential discontinuities, we can represent the discontinuity magnitudes as continuous variables and optimize them. We further introduce a novel discontinuous neural field model that jointly approximates the target image and recovers discontinuities. Through systematic evaluations, our neural field outperforms other methods that fit unknown discontinuities with discontinuous representations, exceeding Field of Junctions and Boundary Attention by over 11 dB on both denoising and super-resolution tasks and achieving 3.5× smaller Chamfer distances than Mumford-Shah-based methods. It also surpasses InstantNGP with improvements of more than 5 dB (denoising) and 10 dB (super-resolution). Additionally, our approach shows remarkable capability in approximating complex artistic and natural images and cleaning up diffusion-generated depth maps.
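    A minimal sketch of the core idea in a 1D analogue, assuming PyTorch: every grid boundary is treated as a potential discontinuity whose jump magnitude is a continuous variable, optimized jointly with a smooth part under an L1 sparsity penalty. This is an illustrative toy, not the paper's mesh-based 2D model; all variable names and loss weights below are assumptions.

```python
# Toy 1D analogue of "learned discontinuities" (illustrative, not the
# paper's method): every boundary between samples carries a continuous,
# optimizable jump magnitude; an L1 penalty drives most jumps to zero.
import torch

torch.manual_seed(0)
n = 200
x = torch.linspace(0.0, 1.0, n)
# Toy target: a smooth ramp with one genuine step at x = 0.5.
target = 0.3 * x + (x > 0.5).float() * 0.8

smooth = torch.zeros(n, requires_grad=True)     # continuous part of the field
jumps = torch.zeros(n - 1, requires_grad=True)  # candidate jump magnitudes

opt = torch.optim.Adam([smooth, jumps], lr=0.05)
for step in range(2000):
    opt.zero_grad()
    # Reconstruction = smooth part + accumulated jumps crossed so far.
    recon = smooth + torch.cat([torch.zeros(1), torch.cumsum(jumps, 0)])
    data_loss = ((recon - target) ** 2).mean()
    smooth_loss = (smooth.diff() ** 2).mean()   # keep the smooth part smooth
    sparsity = jumps.abs().mean()               # most boundaries: no jump
    (data_loss + 10.0 * smooth_loss + 0.01 * sparsity).backward()
    opt.step()

# The surviving large-magnitude entry marks the recovered discontinuity.
print(int(jumps.abs().argmax()), float(jumps.abs().max()))
```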
  • Image Vectorization via Gradient Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Chakraborty, Souymodip; Batra, Vineet; Phogat, Ankit; Jain, Vishwas; Ranawat, Jaswant Singh; Dhingra, Sumit; Wampler, Kevin; Fisher, Matthew; Lukáč, Michal; Bousseau, Adrien; Day, Angela
    We present a fully automated technique that segments raster images into smooth shaded regions and reconstructs them using an optimal mix of solid fills, linear gradients, and radial gradients. Our method leverages a novel discontinuity-aware segmentation strategy and gradient reconstruction algorithm to accurately capture intricate shading details and produce compact Bézier curve representations. Extensive evaluations on both designer-created art and generated images demonstrate that our approach achieves high visual fidelity with minimal geometric complexity and fast processing times. This work offers a robust and versatile solution for converting detailed raster images into scalable vector graphics, addressing the evolving needs of modern design workflows.
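    As a rough sketch of what per-region gradient reconstruction can look like (a simplified stand-in, not the authors' algorithm): fit color ≈ c0 + cx·x + cy·y to one segmented region by least squares, and fall back to a solid fill when the slope is negligible. The function name and threshold below are assumptions.

```python
# Illustrative least-squares fit of a linear color gradient to one
# segmented region; thresholds and names are assumptions, not the
# paper's algorithm.
import numpy as np

def fit_region(xy, colors, flat_tol=1e-3):
    """xy: (N, 2) pixel coords; colors: (N, 3) RGB in [0, 1].
    Returns ('solid', mean rgb) or ('linear', per-channel [c0, cx, cy])."""
    A = np.column_stack([np.ones(len(xy)), xy])          # model: c0 + cx*x + cy*y
    coeffs, *_ = np.linalg.lstsq(A, colors, rcond=None)  # (3, 3): rows c0, cx, cy
    if np.abs(coeffs[1:]).max() < flat_tol:              # negligible slope
        return "solid", colors.mean(axis=0)
    return "linear", coeffs

# Toy region: a horizontal black-to-red ramp, recovered as a linear gradient.
xs, ys = np.meshgrid(np.arange(32), np.arange(32))
xy = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
ramp = np.stack([xs.ravel() / 31.0, 0 * xs.ravel(), 0 * xs.ravel()], axis=1)
kind, params = fit_region(xy, ramp)
print(kind)  # 'linear'
```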
  • How to Train Your Dragon: Automatic Diffusion-Based Rigging for Characters with Diverse Topologies
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Gu, Zeqi; Liu, Difan; Langlois, Timothy; Fisher, Matthew; Davis, Abe; Bousseau, Adrien; Day, Angela
    Recent diffusion-based methods have achieved impressive results in animating images of human subjects. However, most of that success has been built on human-specific body pose representations and extensive training with labeled real videos. In this work, we extend the ability of such models to animate images of characters with more diverse skeletal topologies. Given a small number (3-5) of example frames showing the character in different poses with corresponding skeletal information, our model quickly infers a rig for that character, which can then generate images corresponding to new skeleton poses. We propose a procedural data generation pipeline that efficiently samples training data with diverse topologies on the fly. We use it, along with a novel skeleton representation, to train our model on articulated shapes spanning a large space of textures and topologies. During fine-tuning, our model then rapidly adapts to unseen target characters and generalizes well to rendering new poses, for both realistic and more stylized cartoon appearances. To better evaluate performance on this novel and challenging task, we create the first 2D video dataset that contains both humanoid and non-humanoid subjects with per-frame keypoint annotations. With extensive experiments, we demonstrate the superior quality of our results.
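    To make the "diverse topologies" idea concrete, here is a hedged guess at the flavor of a procedural topology sampler (not the paper's actual pipeline): each training skeleton is drawn as a random tree of bones, so humanoid-like and many-limbed characters both appear in the training distribution. All names and parameters below are hypothetical.

```python
# Hypothetical sampler of random skeletal topologies: each skeleton is
# a random tree of (bone_id, parent_id) pairs. Bone lengths, joint
# limits, and textures are omitted; only the topology is drawn here.
import random

def sample_skeleton(max_bones=12, branch_p=0.35):
    """Return a list of (bone_id, parent_id) pairs forming a random tree."""
    bones = [(0, None)]  # root bone
    frontier = [0]
    while frontier and len(bones) < max_bones:
        parent = frontier.pop(random.randrange(len(frontier)))
        n_children = 1 + (random.random() < branch_p)  # 1 or 2 children
        for _ in range(n_children):
            if len(bones) >= max_bones:
                break
            bone_id = len(bones)
            bones.append((bone_id, parent))
            frontier.append(bone_id)
    return bones

random.seed(7)
for skeleton in (sample_skeleton() for _ in range(2)):
    print(skeleton)  # e.g. a chain-like vs. a heavily branched topology
```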