Search Results
Showing 1 - 4 of 4
Item: EUROGRAPHICS 2022: Short Papers Frontmatter
(The Eurographics Association, 2022)
Authors: Pelechano, Nuria; Vanderhaeghe, David
Editors: Pelechano, Nuria; Vanderhaeghe, David

Item: Authoring Virtual Crowds: A Survey
(The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Lemonari, Marilena; Blanco, Rafael; Charalambous, Panayiotis; Pelechano, Nuria; Avraamides, Marios; Pettré, Julien; Chrysanthou, Yiorgos
Editors: Meneveaux, Daniel; Patanè, Giuseppe

Recent advancements in crowd simulation unlock a wide range of functionalities for virtual agents, delivering highly realistic, natural virtual crowds. Such systems are of particular importance to a variety of applications in fields such as entertainment (e.g., movies, computer games), architectural and urban planning, and simulations for sports and training. However, providing their capabilities to untrained users necessitates the development of authoring frameworks. Authoring virtual crowds is a complex and multi-level task, ranging from assuming control and assisting users in realising their creative intent, to delivering intuitive and easy-to-use interfaces that facilitate such control. In this paper, we present a categorisation of the authorable crowd simulation components, ranging from high-level behaviours and path-planning to local movements, as well as animation and visualisation. We provide a review of the most relevant methods in each area, emphasising the amount and nature of influence that users have over the final result. Moreover, we discuss the currently available authoring tools (e.g., graphical user interfaces, drag-and-drop), identifying the trends of early and recent work. Finally, we suggest promising directions for future research that mainly stem from the rise of learning-based methods and the need for a unified authoring framework.

Item: AvatarGo: Plug and Play self-avatars for VR
(The Eurographics Association, 2022)
Authors: Ponton, Jose Luis; Monclús, Eva; Pelechano, Nuria
Editors: Pelechano, Nuria; Vanderhaeghe, David

The use of self-avatars in a VR application can enhance presence and embodiment, which leads to a better user experience. In collaborative VR it also facilitates non-verbal communication. Currently it is possible to track a few body parts with cheap trackers and then apply IK methods to animate a character. However, the correspondence between trackers and avatar joints is typically fixed ad hoc, which is enough to animate the avatar but causes noticeable mismatches between the user's body pose and the avatar. In this paper we present a fast and easy-to-set-up system to compute exact offset values, unique for each user, which leads to improvements in avatar movement. Our user study shows that the Sense of Embodiment increased significantly when using exact offsets as opposed to fixed ones. We also allowed users to see a semitransparent avatar overlaid on their real body to objectively evaluate the quality of the avatar movement achieved with our technique.
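To make the offset idea concrete, the following is a minimal sketch (not the paper's implementation) of how per-user tracker-to-joint offsets could be computed from a single calibration pose and then applied at runtime. All names and keys are hypothetical, and the sketch simplifies to world-space positional offsets; a complete system would also account for rotations and express each offset in its tracker's local frame.

```python
import numpy as np

def calibration_offsets(tracker_pos, avatar_joint_pos):
    """Compute per-user offsets from one calibration pose (sketch).

    tracker_pos / avatar_joint_pos: dicts mapping a body-part name
    (illustrative keys, e.g. "left_foot") to a 3D world position,
    captured while the user stands aligned with the avatar.
    """
    return {part: avatar_joint_pos[part] - tracker_pos[part]
            for part in tracker_pos}

def ik_targets(tracker_pos, offsets):
    """At runtime, shift each tracker by its stored offset to obtain
    the IK target for the corresponding avatar joint."""
    return {part: tracker_pos[part] + offsets[part]
            for part in tracker_pos}

# Example: one tracker strapped slightly outside the ankle joint.
trackers = {"left_foot": np.array([0.12, 0.09, 0.0])}
joints = {"left_foot": np.array([0.10, 0.07, 0.0])}
offsets = calibration_offsets(trackers, joints)
print(ik_targets({"left_foot": np.array([0.50, 0.09, 0.3])}, offsets))
```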
Item: DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization
(The Eurographics Association and John Wiley & Sons Ltd., 2025)
Authors: Ponton, Jose Luis; Pujol, Eduard; Aristidou, Andreas; Andujar, Carlos; Pelechano, Nuria
Editors: Bousseau, Adrien; Day, Angela

High-quality motion reconstruction that follows the user's movements can be achieved by high-end mocap systems with many sensors. However, obtaining such animation quality with fewer input devices is gaining popularity, as it brings mocap closer to the general public. The main challenges include the loss of end-effector accuracy in learning-based approaches and the lack of naturalness and smoothness in IK-based solutions. In addition, such systems are often finely tuned to a specific number of trackers and are highly sensitive to missing data, e.g., in scenarios where a sensor is occluded or malfunctions. In response to these challenges, we introduce DragPoser, a novel deep-learning-based motion reconstruction system that accurately represents hard and dynamic constraints, attaining high end-effector position accuracy in real time. This is achieved through a pose optimization process within a structured latent space. Our system requires only one-time training on a large human motion dataset; constraints can then be dynamically defined as losses, while the pose is iteratively refined by computing the gradients of these losses within the latent space. To further enhance our approach, we incorporate a Temporal Predictor network, which employs a Transformer architecture to directly encode temporality within the latent space. This network ensures the pose optimization is confined to the manifold of valid poses, and it also leverages past pose data to predict temporally coherent poses. Results demonstrate that DragPoser surpasses both IK-based and the latest data-driven methods in achieving precise end-effector positioning, while producing natural poses and temporally coherent motion. In addition, our system showcases robustness against on-the-fly constraint modifications and exhibits adaptability to various input configurations and changes. The complete source code, trained model, animation databases, and supplementary material used in this paper can be found at https://upc-virvig.github.io/DragPoser
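The core loop the abstract describes (constraints defined as losses, gradients taken in the latent space, a temporal prediction keeping poses on the manifold) can be sketched conceptually as follows. This is a sketch under stated assumptions, not the paper's implementation: it assumes pretrained decoder, temporal_predictor, and forward_kinematics modules as stand-ins for the paper's networks, and all names, shapes, and hyperparameters are hypothetical.

```python
import torch

def reconstruct_frame(decoder, temporal_predictor, forward_kinematics,
                      past_latents, tracker_targets, tracked_joints,
                      steps=10, lr=0.05, w_temporal=0.1):
    """One frame of latent-space pose optimization (conceptual sketch)."""
    # Start from the temporal prediction so the search stays near the
    # manifold of plausible, temporally coherent poses.
    z_pred = temporal_predictor(past_latents).detach()
    z = z_pred.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        pose = decoder(z)                      # latent -> joint rotations
        positions = forward_kinematics(pose)   # rotations -> 3D positions
        # Constraints defined on the fly as losses: here, match whatever
        # tracker positions are currently available (any subset works).
        loss = ((positions[tracked_joints] - tracker_targets) ** 2).mean()
        # Soft prior pulling the latent toward the temporal prediction.
        loss = loss + w_temporal * ((z - z_pred) ** 2).mean()
        loss.backward()   # gradients flow through the decoder into z
        optimizer.step()

    return decoder(z).detach(), z.detach()
```

Because only the loss terms change between frames, a missing or newly added tracker simply alters which constraints are active, which mirrors the robustness to variable input configurations that the abstract claims.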