Volume 44 (2025)
Browsing Volume 44 (2025) by Subject "Activity recognition and understanding"
Item
4-LEGS: 4D Language Embedded Gaussian Splatting
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Fiebelman, Gal; Cohen, Tamir; Morgenstern, Ayellet; Hedman, Peter; Averbuch-Elor, Hadar; Bousseau, Adrien; Day, Angela
The emergence of neural representations has revolutionized our means of digitally viewing a wide range of 3D scenes, enabling the synthesis of photorealistic images rendered from novel views. Recently, several techniques have been proposed for connecting these low-level representations with the high-level semantic understanding embodied within the scene. These methods elevate the rich semantic understanding from 2D imagery to 3D representations, distilling high-dimensional spatial features into 3D space. In our work, we are interested in connecting language with a dynamic modeling of the world. We show how to lift spatio-temporal features to a 4D representation based on 3D Gaussian Splatting. This enables an interactive interface where the user can spatiotemporally localize events in the video from text prompts. We demonstrate our system on public 3D video datasets of people and animals performing various actions.

Item
LEAD: Latent Realignment for Human Motion Diffusion
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Andreou, Nefeli; Wang, Xi; Fernández Abrevaya, Victoria; Cani, Marie-Paule; Chrysanthou, Yiorgos; Kalogeiton, Vicky; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger
Our goal is to generate realistic human motion from natural language. Modern methods often face a trade-off between model expressiveness and text-to-motion (T2M) alignment. Some align text and motion latent spaces but sacrifice expressiveness; others rely on diffusion models that produce impressive motions but lack semantic meaning in their latent space. This may compromise realism, diversity and applicability.
Here, we address this by combining latent diffusion with a realignment mechanism, producing a novel, semantically structured space that encodes the semantics of language. Leveraging this capability, we introduce the task of textual motion inversion to capture novel motion concepts from a few examples. For motion synthesis, we evaluate LEAD on HumanML3D and KIT-ML and show performance comparable to the state of the art in terms of realism, diversity and text-motion consistency. Our qualitative analysis and user study reveal that our synthesised motions are sharper, more human-like and comply better with the text compared to modern methods. For motion textual inversion (MTI), our method demonstrates improvements in capturing out-of-distribution characteristics in comparison to traditional VAEs.