Title: AvatarGo: Plug and Play self-avatars for VR
Authors: Ponton, Jose Luis; Monclús, Eva; Pelechano, Nuria
Editors: Pelechano, Nuria; Vanderhaeghe, David
Date issued: 2022 (available 2022-04-22)
ISBN: 978-3-03868-169-4
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20221037
URI: https://diglib.eg.org:443/handle/10.2312/egs20221037
Pages: 77-80 (4 pages)
License: Attribution 4.0 International License
CCS Concepts: Human-centered computing --> User models; Virtual reality
Keywords: Human centered computing; User models; Virtual reality

Abstract: The use of self-avatars in a VR application can enhance presence and embodiment, which leads to a better user experience. In collaborative VR it also facilitates non-verbal communication. Currently it is possible to track a few body parts with cheap trackers and then apply IK methods to animate a character. However, the correspondence between trackers and avatar joints is typically fixed ad hoc, which is enough to animate the avatar but causes noticeable mismatches between the user's body pose and the avatar's. In this paper we present a fast and easy-to-set-up system to compute exact offset values, unique for each user, which leads to improvements in avatar movement. Our user study shows that the Sense of Embodiment increased significantly when using exact offsets as opposed to fixed ones. We also allowed users to see a semitransparent avatar overlaid with their real body to objectively evaluate the quality of the avatar movement with our technique.
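
The core idea named in the abstract, replacing a fixed tracker-to-joint mapping with a per-user offset captured once during calibration and reused every frame as the IK target, can be sketched as follows. This is a minimal illustrative sketch only, not the paper's implementation; the Transform class, the calibration pose, and the function names calibrate_offset and joint_target are assumptions introduced here for clarity.

```python
# Illustrative sketch (not the paper's code): per-user tracker-to-joint offsets.
# During a one-time calibration pose, the constant offset between each tracked
# device and its corresponding avatar joint is captured; at runtime the joint
# target fed to the IK solver is the live tracker pose composed with that
# offset, instead of a fixed ad-hoc correspondence shared by all users.
import numpy as np

class Transform:
    """Rigid transform: 3x3 rotation matrix R and translation vector t."""
    def __init__(self, R, t):
        self.R = np.asarray(R, dtype=float)
        self.t = np.asarray(t, dtype=float)

    def inverse(self):
        Rinv = self.R.T
        return Transform(Rinv, -Rinv @ self.t)

    def compose(self, other):
        # self * other: apply `other` first, then `self`.
        return Transform(self.R @ other.R, self.R @ other.t + self.t)

def calibrate_offset(tracker_world, joint_world):
    """Offset such that tracker_world * offset == joint_world (captured once per user)."""
    return tracker_world.inverse().compose(joint_world)

def joint_target(tracker_world, offset):
    """Per-frame IK target for the joint, from the live tracker pose and the stored offset."""
    return tracker_world.compose(offset)

if __name__ == "__main__":
    # Hypothetical calibration frame: a tracker strapped near the ankle,
    # with the avatar's ankle joint a few centimetres away from it.
    tracker = Transform(np.eye(3), [0.10, 0.12, 0.00])
    ankle   = Transform(np.eye(3), [0.10, 0.09, 0.03])
    offset  = calibrate_offset(tracker, ankle)
    # Later frames reuse the same per-user offset with the moving tracker pose.
    print(joint_target(tracker, offset).t)  # -> [0.10 0.09 0.03]
```

In this sketch the only per-user data stored is one offset transform per tracker, which is what allows the setup to remain quick while still matching each user's body proportions better than a single fixed offset.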