Search Results
Now showing 1 - 10 of 80
Item: User Interaction Feedback in a Hand-Controlled Interface for Robot Team Tele-operation Using Wearable Augmented Reality (The Eurographics Association, 2017)
Authors: Cannavò, Alberto; Lamberti, Fabrizio
Editors: Andrea Giachetti, Paolo Pingi, Filippo Stanco
Abstract: Continuous advancements in the field of robotics and its increasing spread across heterogeneous application scenarios make the development of ever more effective user interfaces for human-robot interaction (HRI) an extremely relevant research topic. In particular, Natural User Interfaces (NUIs), e.g., based on hand and body gestures, have proved to be an interesting technology to exploit for designing intuitive interaction paradigms in the field of HRI. However, the more sophisticated HRI interfaces become, the more important it is to provide users with accurate feedback about the state of the robot as well as of the interface itself. In this work, an Augmented Reality (AR)-based interface is deployed on a head-mounted display to enable tele-operation of a remote robot team using hand movements and gestures. A user study is performed to assess the advantages of wearable AR over desktop-based AR in the execution of specific tasks.

Item: Tablet Fish Tank Virtual Reality: a Usability Study (The Eurographics Association, 2017)
Authors: Kongsilp, Sirisilp; Ruensuk, Mintra; Dailey, Matthew N.; Komuro, Takashi
Editors: Tony Huang, Arindam Dey
Abstract: In this paper, we describe the development of a tablet FTVR prototype that incorporates both motion parallax and stereo cues using easy-to-find hardware. We also present the findings of a usability study based on the prototype.

Item: Evaluating and Comparing Game-controller based Virtual Locomotion Techniques (The Eurographics Association, 2017)
Authors: Sarupuri, Bhuvaneswari; Hoermann, Simon; Whitton, Mary C.; Lindeman, Robert W.
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: The incremental hardware costs of virtual locomotion are minimized when the technique uses interaction capabilities available in controllers and devices that are already part of the VE system, e.g., gamepads, keyboards, and multi-function controllers. We used a different locomotion technique for each of three such devices: gamepad thumb-stick (joystick walking), a customized hybrid gaming keyboard (speedpad walking), and an innovative technique that uses the orientation and triggers of the HTC Vive controllers (TriggerWalking). We explored the efficacy of locomotion techniques using these three devices in a hide-and-seek task in an indoor environment. We measured task performance, simulator sickness, system usability, perceived workload, and preference. We found that users had a strong preference for TriggerWalking, which also had the smallest increase in simulator sickness, the highest performance score, and the highest perceived usability. However, participants using TriggerWalking also had the most object and wall collisions. Overall, we found that TriggerWalking is an effective locomotion technique and that it has significant and important benefits. Future research will explore whether TriggerWalking can be used with equal benefit in other virtual environments, on different tasks, and for other types of movement.

Item: Assessing the Relevance of Eye Gaze Patterns During Collision Avoidance in Virtual Reality (The Eurographics Association, 2017)
Authors: Varma, Kamala; Guy, Stephen J.; Interrante, Victoria
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: Increasing presence in virtual reality environments requires a meticulous imitation of human behavior in virtual agents. In the specific case of collision avoidance, agents' interaction will feel more natural if they are able to both display and respond to non-verbal cues. This study informs their behavior by analyzing participants' reactions to non-verbal cues.
Its aim is to confirm previous work showing head orientation to be a primary factor in collision avoidance negotiation, and to extend it by investigating the additional contribution of eye gaze direction as a cue. Fifteen participants were directed to walk towards an oncoming agent in a virtual hallway, who exhibited various combinations of head orientation and eye gaze direction cues. Shortly before the potential collision the display turned black, and the participant had to move to avoid the agent as if she were still present. Meanwhile, the participant's own eye gaze was tracked to identify where their focus was directed and how it related to their response. Results show that the natural tendency was to avoid the agent by moving right. However, participants showed a greater compulsion to move leftward if the agent cued her own movement to the participant's right, whether through head orientation cues (consistent with previous work) or through eye gaze direction cues (extending previous work). The implications of these findings are discussed.

Item: VibVid: VIBration Estimation from VIDeo by using Neural Network (The Eurographics Association, 2017)
Authors: Yoshida, Kentaro; Inoue, Seki; Makino, Yasutoshi; Shinoda, Hiroyuki
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: Along with advances in video technology in recent years, there is an increasing need to add tactile sensation to video. Many models for estimating appropriate tactile information from the images and sounds contained in videos have been reported. In this paper, we propose a method named VibVid that uses machine learning to estimate the tactile signal from video with audio, and that can deal with the kind of video where video and tactile information are not obviously related. As an example, we evaluated it by estimating and imparting the vibration transmitted to a tennis racket from first-person-view video of tennis.
As a result, the waveform generated by VibVid was closely in line with the actual vibration waveform. We then conducted a subject experiment with 20 participants, which showed good results on four evaluation criteria: harmony, fun, immersiveness, and realism.

Item: 3D Ground Reaction Force Visualization onto Training Video for Sprint Training Support System (The Eurographics Association, 2017)
Authors: Taketomi, Takafumi; Yoshitake, Yasuhide; Yamamoto, Goshiro; Sandor, Christian; Kato, Hirokazu
Editors: Tony Huang, Arindam Dey
Abstract: We propose a method for visualizing 3D ground reaction forces for sprint training. Currently, sprinters can check their 3D ground reaction force data using a 2D graph representation. To check the relationship between the 3D ground reaction force and their sprint form, they must inspect the 2D graph and a training video repeatedly. To allow simultaneous observation of the two, we use mixed reality technology to overlay the 3D ground reaction force onto the training video. In this study, we focus on 2D-3D registration between the image sequence and the 3D ground reaction force data, which we achieve using a constrained bundle adjustment approach. In the experiment, we apply our method to training videos. The results confirm that our method can correctly overlay the 3D ground reaction force onto the videos.

Item: User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors (The Eurographics Association, 2017)
Authors: Lee, Gun A.; Rudhru, Omprakash; Park, Hye Sun; Kim, Ho Won; Billinghurst, Mark
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: This research investigates the use of user interface (UI) agents for guiding gesture-based interaction with Augmented Virtual Mirrors. In contrast to prior work on gesture interaction, where graphical symbols are used to guide user interaction, we propose using UI agents.
We explore two approaches to using UI agents: 1) as a delayed cursor, and 2) as an interactive button. We conducted two user studies to evaluate the proposed designs. The results show that UI agents are effective at guiding user interactions, much as a traditional graphical user interface provides visual cues, while also being useful for emotionally engaging users.

Item: Visual Navigation Support for Liver Applicator Placement using Interactive Map Displays (The Eurographics Association, 2017)
Authors: Hettig, Julian; Mistelbauer, Gabriel; Rieder, Christian; Lawonn, Kai; Hansen, Christian
Editors: Stefan Bruckner, Anja Hennemuth, Bernhard Kainz, Ingrid Hotz, Dorit Merhof, Christian Rieder
Abstract: Navigated placement of an ablation applicator in liver surgery would benefit from an effective intraoperative visualization of delicate 3D anatomical structures. In this paper, we propose an approach that facilitates surgery with an interactive and animated map display to support navigated applicator placement in the liver. By reducing the visual complexity of the 3D anatomical structures, we provide only the most important information on and around a planned applicator path. By employing different illustrative visualization techniques, the applicator path and its surrounding critical structures, such as blood vessels, are clearly conveyed in an unobstructed way. To retain contextual information around the applicator path and its tip, we desaturate these structures with increasing distance. To avoid time-consuming and tedious interaction during surgery, our visualization is controlled solely by the position and orientation of a tracked applicator. This enables direct interaction with the map display without interrupting the intervention.
Based on our requirement analysis, we conducted a pilot study with eleven participants and an interactive user study with six domain experts to assess task completion time, error rate, visual parameters, and the usefulness of the animation. The pilot study shows that our map display enables significantly faster decision making (11.8 s vs. 40.9 s) and significantly fewer false assessments of structures at risk (7.4 % vs. 10.3 %) compared to a currently employed 3D visualization. Furthermore, the animation supports timely perception of the course and depth of upcoming blood vessels, and helps to detect possible areas at risk along the path in advance. The obtained results demonstrate that our interactive map displays have the potential to improve the outcome of navigated liver interventions.

Item: Estimation of 3D Finger Postures with wearable device measuring Skin Deformation on Back of Hand (The Eurographics Association, 2017)
Authors: Kuno, Wakaba; Sugiura, Yuta; Asano, Nao; Kawai, Wataru; Sugimoto, Maki
Editors: Tony Huang, Arindam Dey
Abstract: We propose a method for reconstructing hand posture by measuring the deformation of the back of the hand with a wearable device. Our method constructs a regression model from data on hand posture captured by a depth camera and data on the skin deformation of the back of the hand captured by several photo-reflective sensors attached to the wearable device. Using this regression model, the posture of the hand is reconstructed from the photo-reflective sensor data in real time. Finger posture can be estimated without hindering the natural movement of the fingers, since the deformation of the back of the hand is measured without directly measuring the position of the fingers.
In our demonstration, users can see their own finger posture reflected in a virtual environment.

Item: Fast and Accurate Simulation of Gravitational Field of Irregular-shaped Bodies using Polydisperse Sphere Packings (The Eurographics Association, 2017)
Authors: Srinivas, Abhishek; Weller, Rene; Zachmann, Gabriel
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: Interest in space missions to small bodies (e.g., asteroids) is currently increasing, both scientifically and commercially. One important aspect of these missions is testing the navigation, guidance, and control algorithms. The most cost- and time-efficient way to do this is to simulate the missions in virtual testbeds, which requires a physically-based simulation of the small bodies' physical properties. One of the most important such properties, especially for landing operations, is the gravitational field, which can be quite irregular depending on the shape and mass distribution of the body. In this paper, we present a novel algorithm to simulate gravitational fields for small bodies like asteroids. The main idea is to represent the small body's mass by a polydisperse sphere packing, which allows for easy and efficient parallelization. Our GPU-based implementation outperforms traditional methods by more than two orders of magnitude while achieving similar accuracy.
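The sphere-packing idea in the last abstract lends itself to a compact sketch: outside a homogeneous sphere, its gravity equals that of a point mass at its center (shell theorem), so the field of the whole packing is an independent sum over spheres, and it is this independent-sum structure that parallelizes so well on a GPU. A minimal NumPy version, assuming a uniform density; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_from_packing(centers, radii, density, query_points):
    """Gravitational acceleration at each query point due to a
    polydisperse sphere packing of uniform density.

    centers:      (n, 3) sphere centers
    radii:        (n,)   sphere radii
    query_points: (m, 3) evaluation points (outside the spheres)
    returns:      (m, 3) acceleration vectors
    """
    # Each sphere acts as a point mass at its center (shell theorem).
    masses = density * (4.0 / 3.0) * np.pi * radii**3          # (n,)
    # Pairwise offsets from every query point to every sphere center.
    d = centers[None, :, :] - query_points[:, None, :]         # (m, n, 3)
    r = np.linalg.norm(d, axis=2)                              # (m, n)
    # Newtonian point-mass field G*m*d/|d|^3, summed over spheres.
    acc = G * masses[None, :, None] * d / (r**3)[:, :, None]   # (m, n, 3)
    return acc.sum(axis=1)                                     # (m, 3)
```

For a single unit-mass sphere at the origin, the result reduces to the familiar GM/r^2 pointed at the body, which makes a convenient sanity check; a GPU version would simply assign the per-sphere terms of the sum to parallel threads before reducing.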