Authors: Lefèvre, Jeanne-Emma; Cheynel, Théo; El Khalifi, Omar; Daniel, Thomas; Bellot-Gurlet, Baptiste; Musialski, Przemyslaw; Lim, Isaak
Title: ARTIST: Adaptive Humanoid Rigging by Transferring Individual Style with Optimal Transport
Date: 2026-04-20
ISBN: 978-3-03868-299-8
ISSN: 2309-5059
URL: https://diglib.eg.org/handle/10.2312/egs20261001
DOI: 10.2312/egs.20261001 (https://doi.org/10.2312/egs.20261001)
License: CC-BY-4.0
Keywords: Rigging; Computer Graphics; Geometric Deep Learning; Optimal Transport; Volumetric Shape Matching
Pages: 4

Abstract: Automatic rigging transforms static meshes into articulated characters by predicting skeletal structure. However, rigging is inherently subjective: artists develop personal preferences for joint placement. Current approaches omit this aspect, learning only the average "style" of their training data. We quantify inter-artist variance through a user study and dataset analysis, demonstrating this notion of "rigging style". We propose a voxel-based model leveraging pretrained 3D backbones that outperforms state-of-the-art methods. We also introduce a one-shot style adaptation method based on volumetric optimal transport: given a single artist-rigged example, we transfer its stylistic joint placements to any new character. This improves any rigging model and supports different bone counts or hierarchies, reconciling automatic rigging with artistic variability.
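
The abstract describes transferring stylistic joint placements via optimal transport between skeletons that may differ in bone count. The paper's volumetric formulation is not reproduced here, but the core idea of an OT coupling between two joint sets, followed by a barycentric transfer, can be sketched minimally with entropic (Sinkhorn) optimal transport on toy 3D joint positions; all data and parameter values below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=200):
    """Entropic OT: coupling P with row marginals a and (approx.) column marginals b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy skeletons (hypothetical data): joints of an artist-rigged style example
# and joints predicted for a new character; counts may differ in general.
src = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
dst = np.array([[0.0, 0.1, 0.0], [0.0, 1.2, 0.0], [0.0, 2.1, 0.0]])

# Squared-Euclidean cost between every style joint and every predicted joint.
C = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
n, m = len(src), len(dst)
P = sinkhorn(np.full(n, 1.0 / n), np.full(m, 1.0 / m), C)

# Barycentric map: pull each predicted joint toward the style joints it couples to.
transferred = (P.T @ src) / P.sum(0)[:, None]
```

Because the coupling is a soft matching rather than a one-to-one assignment, this formulation tolerates differing joint counts between the two skeletons, which is consistent with the abstract's claim of supporting different bone counts or hierarchies.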