Title: Real-Time and Controllable Reactive Motion Synthesis via Intention Guidance
Authors: Zhang, Xiaotang; Chang, Ziyi; Men, Qianhui; Shum, Hubert P. H.
Editors: Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
Date: 2025-11-07
Year: 2025
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.70222 (10.1111/cgf.70222)
Handle: https://diglib.eg.org/handle/10.1111/cgf70222
Pages: 12 pages
License: Attribution 4.0 International License
Keywords: animation systems; human simulation
CCS Concepts: Computing methodologies → Motion capture; Motion processing

Abstract: We propose a real-time method for reactive motion synthesis based on the known trajectory of an input character, predicting instant reactions using only historical, user-controlled motions. Our method handles the uncertainty of future movements by introducing an intention predictor, which forecasts key joint intentions from the historical interaction to make pose prediction more deterministic. The intention is then encoded into the latent space of its reactive motion and matched against a codebook that represents mappings between input and output. The model samples from the resulting categorical distribution for pose generation and strengthens robustness through adversarial training. Unlike previous offline approaches, the system can recursively generate intentions and reactive motions using feedback from earlier steps, enabling real-time, long-term, realistic interactive synthesis. Both quantitative and qualitative experiments show that our approach outperforms other matching-based motion synthesis approaches, delivering superior stability and generalisability. In our method, the user can also actively influence the outcome by controlling the moving directions, creating a personalised interaction path that deviates from predefined trajectories.
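
The abstract outlines a recursive, real-time generation loop: an intention predictor forecasts key-joint intentions from historical motion, the intention is encoded into a latent space, matched against a codebook via categorical sampling, and decoded into a reactive pose that is fed back as history for the next step. The Python sketch below illustrates only the control flow of such a loop; all module names, dimensions, random stand-in weights, and the distance-based categorical sampling over the codebook are assumptions made for illustration, not the paper's implementation.

# Illustrative sketch only: names, shapes, and the codebook lookup are
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

HIST_LEN = 30       # frames of historical motion kept as context (assumed)
POSE_DIM = 69       # flattened joint features per frame (assumed)
INTENT_DIM = 16     # key-joint intention feature size (assumed)
CODEBOOK_SIZE = 256

# Stand-in learned components (random here; trained networks in practice).
W_intent = rng.normal(size=(HIST_LEN * 2 * POSE_DIM, INTENT_DIM)) * 0.01
W_encode = rng.normal(size=(INTENT_DIM, INTENT_DIM)) * 0.1
codebook = rng.normal(size=(CODEBOOK_SIZE, INTENT_DIM))
W_decode = rng.normal(size=(INTENT_DIM, POSE_DIM)) * 0.1

def predict_intention(input_hist, react_hist):
    """Forecast key-joint intentions from historical input/reactive motion."""
    feats = np.concatenate([input_hist, react_hist]).ravel()
    return np.tanh(feats @ W_intent)

def match_codebook(latent, temperature=1.0):
    """Sample a codebook entry from a categorical distribution over codes."""
    logits = -np.sum((codebook - latent) ** 2, axis=1) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(CODEBOOK_SIZE, p=probs)
    return codebook[idx]

def generate_step(input_hist, react_hist):
    """One recursive step: intention -> latent -> codebook -> reactive pose."""
    intention = predict_intention(input_hist, react_hist)
    latent = np.tanh(intention @ W_encode)
    code = match_codebook(latent)
    return code @ W_decode

# Recursive real-time loop: each predicted pose is fed back as history.
input_hist = np.zeros((HIST_LEN, POSE_DIM))   # user-controlled character
react_hist = np.zeros((HIST_LEN, POSE_DIM))   # synthesised reactive character
for frame in range(120):
    new_input_pose = rng.normal(size=POSE_DIM) * 0.1  # stand-in for live input
    input_hist = np.vstack([input_hist[1:], new_input_pose])
    reactive_pose = generate_step(input_hist, react_hist)
    react_hist = np.vstack([react_hist[1:], reactive_pose])

In a trained system, the random matrices would be replaced by the learned intention predictor, encoder, codebook, and decoder, and the adversarial training mentioned in the abstract would shape those components during training rather than appearing in this inference loop.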