Search Results — showing 1–10 of 11
1. Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360° Movie (The Eurographics Association, 2017)
Authors: Khan, Humayun; Lee, Gun A.; Hoermann, Simon; Clifford, Rory M. S.; Billinghurst, Mark; Lindeman, Robert W.
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: Head-mounted displays are becoming increasingly popular as home entertainment devices for viewing 360° movies. This paper explores the effects of adding gesture interaction with virtual content, and of two different hand-visualisation modes, on the 360° movie watching experience. The system in the study comprises a Leap Motion sensor to track the user's hand and finger motions, in combination with a SoftKinetic RGB-D camera to capture the texture of the hands and arms. A 360° panoramic movie with embedded virtual objects was used as content. Four conditions, displaying either a point cloud of the real hand or a rigged computer-generated hand, with and without interaction, were evaluated. Presence, agency, embodiment, and ownership, as well as overall participant preference, were measured. Results showed that participants had a strong preference for the conditions with interactive virtual content, and that they felt stronger embodiment and ownership. The comparison of the two hand visualisations showed that the display of the real hand elicited stronger ownership. There was no overall difference in presence between the four conditions. These findings suggest that adding interaction with virtual content could be beneficial to the overall user experience, and that interaction should be performed using the real-hand visualisation instead of the virtual hand if higher ownership is desired.

2. User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors (The Eurographics Association, 2017)
Authors: Lee, Gun A.; Rudhru, Omprakash; Park, Hye Sun; Kim, Ho Won; Billinghurst, Mark
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: This research investigates using user interface (UI) agents to guide gesture-based interaction with Augmented Virtual Mirrors. In contrast to prior work on gesture interaction, where graphical symbols are used to guide user interaction, we propose using UI agents. We explore two approaches: 1) using a UI agent as a delayed cursor, and 2) using a UI agent as an interactive button. We conducted two user studies to evaluate the proposed designs. The results show that UI agents are as effective for guiding user interactions as a traditional graphical user interface providing visual cues, while also being useful for emotionally engaging users.

3. Collaborative View Configurations for Multi-user Interaction with a Wall-size Display (The Eurographics Association, 2017)
Authors: Kim, Hyungon; Kim, Yeongmi; Lee, Gun A.; Billinghurst, Mark; Bartneck, Christoph
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: This paper explores the effects of different collaborative view configurations on face-to-face collaboration using a wall-size display, and the relationship between view configuration and multi-user interaction. Three view configurations (shared view, split screen, and split screen with navigation information) for multi-user collaboration with a wall-size display were introduced and evaluated in a user study. From the experimental results, several insights for designing a virtual environment with a wall-size display are discussed. The shared view configuration does not disturb collaboration despite control conflicts and can support effective collaboration. The split-screen view configuration enables independent collaboration but can divide users' attention. The navigation information can reduce the interaction required for navigational tasks, although overall interaction performance may not increase.

4. WeightSync: Proprioceptive and Haptic Stimulation for Virtual Physical Perception (The Eurographics Association, 2020)
Authors: Teo, Theophilus; Nakamura, Fumihiko; Sugimoto, Maki; Verhulst, Adrien; Lee, Gun A.; Billinghurst, Mark; Adcock, Matt
Editors: Ferran Argelaguet, Ryan McMahan, Maki Sugimoto
Abstract: In virtual environments, we can experience augmented embodiment through various virtual avatars. In physical environments, we can extend the embodiment experience by attaching Supernumerary Robotic Limbs (SRLs) to the body of a person. It is also important to consider feedback to the operator who controls the avatar (virtual) or the SRLs (physical). In this work, we use a servo motor and Galvanic Vestibular Stimulation to provide feedback from a virtual interaction that simulates remotely controlling SRLs. Our technique transforms information about the virtual objects into haptic and proprioceptive feedback that provides different sensations to an operator.

5. A Gaze-depth Estimation Technique with an Implicit and Continuous Data Acquisition for OST-HMDs (The Eurographics Association, 2017)
Authors: Lee, Youngho; Piumsomboon, Thammathip; Ens, Barrett; Lee, Gun A.; Dey, Arindam; Billinghurst, Mark
Editors: Tony Huang, Arindam Dey
Abstract: The rapid development of machine learning algorithms can be leveraged for potential software solutions in many domains, including techniques for depth estimation of human eye gaze. In this paper, we propose an implicit and continuous data acquisition method for 3D gaze-depth estimation for an optical see-through head-mounted display (OST-HMD) equipped with an eye tracker. Our method constantly monitors and generates user gaze data for training our machine learning algorithm. The gaze data acquired through the eye tracker include the inter-pupillary distance (IPD) and the gaze distance to the real and virtual target for each eye.

6. Virtual See-through Displays: Interactive Visualization Method in Ubiquitous Computing Environments (The Eurographics Association, 2006)
Authors: Shin, Seonhyung; Lee, Gun A.; Yang, Ungyeon; Son, Wookho
Editors: Dieter Fellner, Charles Hansen
Abstract: Cooperation between multiple information devices is necessary in ubiquitous computing environments. Consequently, visual display interfaces also need to cooperate with each other to help users understand virtual information in a consistent way. From this point of view, we propose a concept of 'virtual see-through' visualization, which supports not only context consistency between multiple visual displays, but also personalized and quality-supplemented visualizations. We also describe various interaction methods available with virtual see-through displays. By adding information and participant management, we expect 'virtual see-through' to become one of the popular visualization methods for visual display interfaces in ubiquitous computing environments.

7. Real-time Visual Representations for Mixed Reality Remote Collaboration (The Eurographics Association, 2017)
Authors: Gao, Lei; Bai, Huidong; Piumsomboon, Thammathip; Lee, Gun A.; Lindeman, Robert W.; Billinghurst, Mark
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: We present a prototype Mixed Reality (MR) system with a hybrid interface to support remote collaboration between a local worker and a remote expert in a large-scale workspace. By combining a low-resolution 3D point cloud of the environment surrounding the local worker with a high-resolution real-time view of small focused details, the remote expert can see a virtual copy of the local workspace with independent viewpoint control. Meanwhile, the expert can also check the current actions of the local worker through a real-time feedback view. We conducted a pilot study to evaluate the usability of our system by comparing the performance of three different interface designs (showing the real-time view as a 2D first-person view, a 2D third-person view, or a 3D point-cloud view). We found no difference in average task completion time between the three interfaces, but there was a difference in user preference.

8. The Effect of User Embodiment in AV Cinematic Experience (The Eurographics Association, 2017)
Authors: Chen, Joshua; Lee, Gun A.; Billinghurst, Mark; Lindeman, Robert W.; Bartneck, Christoph
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: Virtual Reality (VR) is becoming a popular medium for viewing immersive cinematic experiences using 360° panoramic movies and head-mounted displays. There is previous research on user embodiment in real-time rendered VR, but not in relation to cinematic VR based on 360° panoramic video. In this paper we explore the effects of introducing the user's real body into cinematic VR experiences. We conducted a study evaluating how the type of movie and user embodiment affect the sense of presence and user engagement. We found that when participants were able to see their own body in the VR movie, there was a significant increase in the sense of presence, yet user engagement was not significantly affected. We discuss the implications of the results and how this work can be expanded in the future.

9. Social Dining Experience using Mixed Reality for Older Adults (The Eurographics Association, 2017)
Authors: Hart, Jonathon Derek; Lee, Gun A.; Smith, Ashleigh E.; Hull, Melissa; Haren, Matthew T.; Paquet, Catherine; Hill, Julie-Ann; Lomax, Zack; Ashworth, Travis; Smith, Ross T.
Editors: Tony Huang, Arindam Dey
Abstract: This project investigates a novel method of engaging older adults in meaningful mealtime social interactions through the use of Mixed Reality technology. We propose a novel dining system that aims to facilitate interpersonal interactions and enhance meal consumption among socially isolated older adults. We created a prototype that allowed the target audience to test and discuss our concept, so that we can iteratively improve a user-oriented design.

10. Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze (The Eurographics Association, 2017)
Authors: Lee, Gun A.; Kim, Seungwon; Lee, Youngho; Dey, Arindam; Piumsomboon, Thammathip; Norman, Mitchell; Billinghurst, Mark
Editors: Robert W. Lindeman, Gerd Bruder, Daisuke Iwai
Abstract: To improve remote collaboration in video conferencing systems, researchers have been investigating augmenting visual cues onto a shared live video stream. In such systems, a person wearing a head-mounted display (HMD) and camera can share her view of the surrounding real world with a remote collaborator to receive assistance on a real-world task. While this concept of augmented video conferencing (AVC) has been actively investigated, there has been little research on how sharing gaze cues might affect collaboration in video conferencing. This paper investigates how sharing gaze in both directions between a local worker and a remote helper in an AVC system affects collaboration and communication. Using a prototype AVC system that shares the eye gaze of both users, we conducted a user study comparing four conditions with different combinations of eye-gaze sharing between the two users. The results showed that sharing each other's gaze significantly improved collaboration and communication.