Title: Remote and Deviceless Manipulation of Virtual Objects in Mixed Reality
Authors: Caputo, Ariel; Bartolomioli, Riccardo; Giachetti, Andrea
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Published: 2023-11-12
ISBN: 978-3-03868-235-6
ISSN: 2617-4855
DOI: https://doi.org/10.2312/stag.20231290
Handle: https://diglib.eg.org:443/handle/10.2312/stag20231290
Pages: 1-11 (11 pages)
License: Attribution 4.0 International License
CCS Concepts: Human-centered computing -> Gestural input; Interaction devices
Keywords: Human centered computing; Gestural input; Interaction devices

Abstract: Deviceless manipulation of virtual objects in mixed reality (MR) environments is technically achievable with the current generation of Head-Mounted Displays (HMDs), as they track finger movements and allow users to control object transformations with gestures. However, when the object manipulation is performed at a distance, and when the transformation includes scaling, it is not obvious how to remap hand motions onto the degrees of freedom of the object. Different solutions have been implemented in software toolkits, but usability issues remain and clear guidelines for interaction design are lacking. We present a user study evaluating three solutions for the remote translation, rotation, and scaling of virtual objects in the real environment without using handheld devices. We analyze their usability on the practical task of docking virtual cubes on a tangible shelf from varying distances. The outcomes of our study show that the usability of the methods is strongly affected by the use of separate or integrated control of the degrees of freedom, by the use of the hands in a symmetric or specialized way, by the visual feedback, and by the users' previous experience.