44-Issue 6
Browsing 44-Issue 6 by Subject "Computing methodologies→Motion capture"
Item
LEAD: Latent Realignment for Human Motion Diffusion
(The Eurographics Association and John Wiley & Sons Ltd., 2025)
Andreou, Nefeli; Wang, Xi; Fernández Abrevaya, Victoria; Cani, Marie-Paule; Chrysanthou, Yiorgos; Kalogeiton, Vicky; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
Our goal is to generate realistic human motion from natural language. Modern methods often face a trade-off between model expressiveness and text-to-motion (T2M) alignment: some align text and motion latent spaces but sacrifice expressiveness, while others rely on diffusion models that produce impressive motions but lack semantic meaning in their latent space, which may compromise realism, diversity and applicability. Here, we address this by combining latent diffusion with a realignment mechanism, producing a novel, semantically structured space that encodes the semantics of language. Leveraging this capability, we introduce the task of textual motion inversion to capture novel motion concepts from a few examples. For motion synthesis, we evaluate LEAD on HumanML3D and KIT-ML and show performance comparable to the state of the art in terms of realism, diversity and text-motion consistency. Our qualitative analysis and user study reveal that our synthesised motions are sharper, more human-like and comply better with the text than those of modern methods. For motion textual inversion (MTI), our method demonstrates improvements in capturing out-of-distribution characteristics compared to traditional VAEs.

Item
Real-Time and Controllable Reactive Motion Synthesis via Intention Guidance
(The Eurographics Association and John Wiley & Sons Ltd., 2025)
Zhang, Xiaotang; Chang, Ziyi; Men, Qianhui; Shum, Hubert P. H.; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
We propose a real-time method for reactive motion synthesis based on the known trajectory of an input character, predicting instant reactions using only historical, user-controlled motions. Our method handles the uncertainty of future movements by introducing an intention predictor, which forecasts key joint intentions from the historical interaction to make pose prediction more deterministic. The intention is then encoded into the latent space of its reactive motion and matched against a codebook that represents mappings between input and output; the model samples from the resulting categorical distribution for pose generation, and adversarial training strengthens its robustness. Unlike previous offline approaches, the system can recursively generate intentions and reactive motions using feedback from earlier steps, enabling real-time, long-term realistic interactive synthesis. Both quantitative and qualitative experiments show that our approach outperforms other matching-based motion synthesis approaches, delivering superior stability and generalisability. The user can also actively influence the outcome by controlling the moving directions, creating a personalised interaction path that deviates from predefined trajectories.
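The codebook-matching step in the second abstract can be illustrated with a minimal sketch: an encoded intention vector is compared against a learned codebook, and a code index is sampled from the categorical distribution induced by the distances. All names, shapes, and the random codebook below are invented for illustration; in the actual method the codebook and encoder would be learned jointly with the motion decoder.

```python
import numpy as np

def sample_code(latent, codebook, temperature=1.0, rng=None):
    """Match a latent intention vector against a codebook and sample a
    code index from the induced categorical distribution (toy sketch)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Negative squared distances act as logits: closer codes are likelier.
    d2 = ((codebook - latent) ** 2).sum(axis=1)
    logits = -d2 / temperature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    idx = rng.choice(len(codebook), p=probs)
    return idx, probs

# Toy example: 8 codes in a 4-D latent space (stand-in data).
rng = np.random.default_rng(42)
codebook = rng.normal(size=(8, 4))
intention = rng.normal(size=4)
idx, probs = sample_code(intention, codebook, rng=rng)
```

Sampling from the distribution (rather than always taking the nearest code) is one plausible way to keep the generated reactions diverse while still biasing toward the best-matching mapping; lowering `temperature` concentrates probability on the nearest code.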