Search Results

Now showing 1 - 5 of 5
  • Item
    User Interaction Feedback in a Hand-Controlled Interface for Robot Team Tele-operation Using Wearable Augmented Reality
    (The Eurographics Association, 2017) Cannavò, Alberto; Lamberti, Fabrizio; editors: Andrea Giachetti, Paolo Pingi, Filippo Stanco
    Continuous advancements in the field of robotics and its increasing spread across heterogeneous application scenarios make the development of ever more effective user interfaces for human-robot interaction (HRI) an extremely relevant research topic. In particular, Natural User Interfaces (NUIs), e.g., based on hand and body gestures, have proved to be an interesting technology for designing intuitive interaction paradigms in the field of HRI. However, the more sophisticated HRI interfaces become, the more important it is to provide users with accurate feedback about the state of the robot as well as of the interface itself. In this work, an Augmented Reality (AR)-based interface is deployed on a head-mounted display to enable tele-operation of a remote robot team using hand movements and gestures. A user study is performed to assess the advantages of wearable AR compared to desktop-based AR in the execution of specific tasks.
  • Item
    A 3 Cent Recognizer: Simple and Effective Retrieval and Classification of Mid-air Gestures from Single 3D Traces
    (The Eurographics Association, 2017) Caputo, Fabio Marco; Prebianca, Pietro; Carcangiu, Alessandro; Spano, Lucio D.; Giachetti, Andrea; editors: Andrea Giachetti, Paolo Pingi, Filippo Stanco
    In this paper we present a simple 3D gesture recognizer based on trajectory matching, showing its good performance in the classification and retrieval of command gestures based on single hand trajectories. We demonstrate that further simplifications in porting the classic "1 dollar" algorithm approach from the 2D to the 3D gesture recognition and retrieval problems can result in very high classification accuracy and retrieval scores, even on datasets with a large number of different gestures executed by different users. Furthermore, recognition remains good even with heavily subsampled path traces and with incomplete gestures.
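    The trajectory-matching idea behind this family of recognizers (resample the trace to a fixed number of points, normalize position and scale, then pick the nearest labeled template) can be sketched roughly as below. This is a minimal illustration under assumed choices, not the paper's actual implementation: the resampling count of 16 points and all function names are assumptions.

```python
import math

def resample(points, n=16):
    """Resample a 3D trace to n points equally spaced along the path."""
    dists = [0.0]  # cumulative arc length at each input point
    for a, b in zip(points, points[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1]
    if total == 0:
        return [points[0]] * n
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1
        # linear interpolation inside the segment containing the target length
        t = (target - dists[j - 1]) / ((dists[j] - dists[j - 1]) or 1.0)
        a, b = points[j - 1], points[j]
        out.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return out

def normalize(points):
    """Translate the centroid to the origin and scale to unit size."""
    n = len(points)
    cx, cy, cz = (sum(p[k] for p in points) / n for k in range(3))
    centered = [(p[0] - cx, p[1] - cy, p[2] - cz) for p in points]
    scale = max(math.dist(p, (0.0, 0.0, 0.0)) for p in centered) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in centered]

def distance(a, b):
    """Average point-wise Euclidean distance between two traces."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(trace, templates):
    """Return the label of the nearest template trace."""
    candidate = normalize(resample(trace))
    return min(templates,
               key=lambda lbl: distance(candidate,
                                        normalize(resample(templates[lbl]))))
```

    A template set here is just a dictionary of labeled example traces; classification picks the nearest one after resampling and normalization, which is what keeps this style of recognizer cheap enough to run directly on raw 3D hand trajectories.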
  • Item
    STRONGER: Simple TRajectory-based ONline GEsture Recognizer
    (The Eurographics Association, 2021) Emporio, Marco; Caputo, Ariel; Giachetti, Andrea; editors: Patrizio Frosini, Daniela Giorgi, Simone Melzi, Emanuele Rodolà
    In this paper, we present STRONGER, a client-server solution for online gesture recognition from captured sequences of hand joints. The system leverages a CNN-based recognizer that improves on current state-of-the-art solutions for segmented gesture classification, trained and tested for the online gesture recognition task on a recent benchmark including heterogeneous gestures. The recognizer provides good classification accuracy and a limited number of false positives on most of the gesture classes of the benchmark, and has been used to create a demo application in a Mixed Reality scenario using a HoloLens 2 optical see-through Head-Mounted Display with hand-tracking capability.
  • Item
    Remote and Deviceless Manipulation of Virtual Objects in Mixed Reality
    (The Eurographics Association, 2023) Caputo, Ariel; Bartolomioli, Riccardo; Giachetti, Andrea; editors: Francesco Banterle, Giuseppe Caggianese, Nicola Capece, Ugo Erra, Katia Lupinetti, Gilda Manfredi
    Deviceless manipulation of virtual objects in mixed reality (MR) environments is technically achievable with the current generation of Head-Mounted Displays (HMDs), as they track finger movements and allow gestures to be used to control the transformation. However, when the object manipulation is performed at some distance, and when the transform includes scaling, it is not obvious how to remap the hand motions onto the degrees of freedom of the object. Different solutions have been implemented in software toolkits, but there are still usability issues and a lack of clear guidelines for the interaction design. We present a user study evaluating three solutions for the remote translation, rotation, and scaling of virtual objects in the real environment without using handheld devices. We analyze their usability on the practical task of docking virtual cubes on a tangible shelf from varying distances. The outcomes of our study show that the usability of the methods is strongly affected by the use of separate or integrated control of the degrees of freedom, by the use of the hands in a symmetric or specialized way, by the visual feedback, and by the users' previous experience.
  • Item
    Single-Handed vs. Two Handed Manipulation in Virtual Reality: A Novel Metaphor and Experimental Comparisons
    (The Eurographics Association, 2017) Caputo, Fabio Marco; Emporio, Marco; Giachetti, Andrea; editors: Andrea Giachetti, Paolo Pingi, Filippo Stanco
    In this paper we present a novel solution for single-handed, deviceless object manipulation (e.g., picking/translating, rotating, and scaling) in immersive visualization environments. The new method is based on degree-of-freedom (DOF) separation and on the idea of activating unambiguous gesture recognition when the hand is close to the object, giving visual feedback about gesture realization and the available transitions. Furthermore, it introduces a novel metaphor, the "knob", to map hand rotation onto object rotation around selected axes. The solution was tested with users on a classical visualization task related to finding a point of interest in a 3D object, and compared with the well-known "Handlebar" metaphor. The metaphor shows reasonable usability, even if not comparable with that of the bimanual solution, which is particularly suited to the tested task. However, given the relevant improvements over task repetitions and the technical issues that can be solved by improving performance, the method seems to be a viable solution for deviceless single-hand manipulation.
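    The core of a "knob"-style mapping, turning the tracked hand's direction into an incremental rotation about a chosen axis, could be sketched as follows. This is an illustrative assumption about one way to realize such a metaphor, not the paper's implementation; all function names are made up for the example.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _norm(a):
    length = math.sqrt(_dot(a, a)) or 1.0
    return tuple(x / length for x in a)

def knob_angle(hand_dir, axis):
    """Angle of the hand direction, measured in the plane perpendicular to axis."""
    axis = _norm(axis)
    # build an orthonormal basis (u, v) of the plane perpendicular to axis
    ref = (1.0, 0.0, 0.0) if abs(axis[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = _norm(_cross(axis, ref))
    v = _cross(axis, u)
    return math.atan2(_dot(hand_dir, v), _dot(hand_dir, u))

def knob_step(prev_dir, cur_dir, axis):
    """Incremental object rotation (radians) about axis for one tracking frame."""
    d = knob_angle(cur_dir, axis) - knob_angle(prev_dir, axis)
    # wrap the delta to [-pi, pi) so the object follows the shortest turn
    return (d + math.pi) % (2 * math.pi) - math.pi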