Search Results
Now showing 1 - 10 of 17
Item: Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360° Movie (The Eurographics Association, 2017)
Authors: Khan, Humayun; Lee, Gun A.; Hoermann, Simon; Clifford, Rory M. S.; Billinghurst, Mark; Lindeman, Robert W.
Editors: Robert W. Lindeman, Gerd Bruder, and Daisuke Iwai
Head-mounted displays are becoming increasingly popular as home entertainment devices for viewing 360° movies. This paper explores the effects of adding gesture interaction with virtual content, and of two different hand-visualisation modes, on the 360° movie watching experience. The system in the study comprises a Leap Motion sensor to track the user's hand and finger motions, combined with a SoftKinetic RGB-D camera to capture the texture of the hands and arms. A 360° panoramic movie with embedded virtual objects was used as content. Four conditions, displaying either a point cloud of the real hand or a rigged computer-generated hand, with and without interaction, were evaluated. Presence, agency, embodiment, and ownership, as well as overall participant preference, were measured. Results showed that participants had a strong preference for the conditions with interactive virtual content, and felt stronger embodiment and ownership in them. The comparison of the two hand visualisations showed that displaying the real hand elicited stronger ownership. There was no overall difference in presence between the four conditions. These findings suggest that adding interaction with virtual content can benefit the overall user experience, and that interaction should be performed using the real-hand visualisation rather than the virtual hand if higher ownership is desired.

Item: User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors (The Eurographics Association, 2017)
Authors: Lee, Gun A.; Rudhru, Omprakash; Park, Hye Sun; Kim, Ho Won; Billinghurst, Mark
Editors: Robert W. Lindeman, Gerd Bruder, and Daisuke Iwai
This research investigates using user interface (UI) agents to guide gesture-based interaction with Augmented Virtual Mirrors. In contrast to prior work on gesture interaction, where graphical symbols guide user interaction, we propose using UI agents. We explore two approaches: 1) using a UI agent as a delayed cursor, and 2) using a UI agent as an interactive button. We conducted two user studies to evaluate the proposed designs. The results show that UI agents are as effective at guiding user interactions as a traditional graphical user interface providing visual cues, while also being useful for emotionally engaging users.

Item: Holo Worlds Infinite: Procedural Spatial Aware AR Content (The Eurographics Association, 2017)
Authors: Lawrence, Louise M.; Hart, Jonathon Derek; Billinghurst, Mark
Editors: Tony Huang and Arindam Dey
We developed an Augmented Reality (AR) application that procedurally generates content and programmatically places it on the floor, using awareness of its spatial surroundings to generate and position virtual content. We created a prototype that can serve as the basis of a city-simulation game playable on the floor of any room, but the approach could also be used for many other applications.

Item: Comparative Evaluation of Sensor Devices for Micro-Gestures (The Eurographics Association, 2017)
Authors: Simmons, H.; Devi, R.; Ens, Barrett; Billinghurst, Mark
Editors: Tony Huang and Arindam Dey
This paper presents a comparative evaluation of two gesture recognition sensors and their ability to detect small movements known as micro-gestures. In this work we explore the capabilities of these devices by testing whether users can reliably use the sensors to select a target using a simple 1D user interface element.
We implemented three distinct gestures: a large gesture of moving the whole hand up and down; a smaller gesture of moving a finger up and down; and a small movement of the thumb against the forefinger, representing a virtual slider. Demo participants will be able to experience these three gestures with two sensing devices, a Leap Motion and a Google Soli.

Item: Comparative Evaluation of Sensor Devices for Micro-Gestures (The Eurographics Association, 2017)
Authors: Simmons, H.; Devi, R.; Ens, Barrett; Billinghurst, Mark
Editors: Tony Huang and Arindam Dey
This paper presents a comparative evaluation of two hand gesture recognition sensors and their ability to detect small, sub-millimeter movements. We explore the capabilities of these devices by testing whether users can reliably use the sensors to select a simple user interface element in 1D space using three distinct gestures: a small movement of the thumb and forefinger representing a slider; the slightly larger movement of moving a finger up and down; and a large gesture of moving the whole hand up and down. Results of our preliminary study reveal that the palm provides the fastest and most reliable input. While not conclusive, data from our initial study indicate that the Leap sensor produces lower error, difficulty, and fatigue than the Soli sensor with our test gesture set.

Item: Collaborative View Configurations for Multi-user Interaction with a Wall-size Display (The Eurographics Association, 2017)
Authors: Kim, Hyungon; Kim, Yeongmi; Lee, Gun A.; Billinghurst, Mark; Bartneck, Christoph
Editors: Robert W. Lindeman, Gerd Bruder, and Daisuke Iwai
This paper explores the effects of different collaborative view configurations on face-to-face collaboration using a wall-size display, and the relationship between view configuration and multi-user interaction.
Three different view configurations (shared view, split screen, and split screen with navigation information) for multi-user collaboration with a wall-size display were introduced and evaluated in a user study. From the experimental results, several insights for designing a virtual environment with a wall-size display are discussed. The shared view configuration does not disturb collaboration despite control conflicts and can support effective collaboration. The split-screen configuration supports independent collaboration, although it can divide users' attention. The navigation information can reduce the interaction required for the navigational task, although overall interaction performance may not increase.

Item: Exploring Pupil Dilation in Emotional Virtual Reality Environments (The Eurographics Association, 2017)
Authors: Chen, Hao; Dey, Arindam; Billinghurst, Mark; Lindeman, Robert W.
Editors: Robert W. Lindeman, Gerd Bruder, and Daisuke Iwai
Previous investigations have shown that pupil dilation can be affected by emotive pictures, audio clips, and videos. In this paper, we explore how emotive Virtual Reality (VR) content can also cause pupil dilation. VR has been shown to evoke negative and positive arousal in users immersed in different virtual scenes. In our research, VR scenes were used as emotional triggers. Five emotional VR scenes were designed for our study, and each scene had five emotion segments: happiness, fear, anxiety, sadness, and disgust. While participants experienced the VR scenes, their pupil dilation and the brightness in the headset were captured. We found that both the negative and positive emotion segments produced pupil dilation in the VR environments. We also explored the effect of showing heart-beat cues to users, and whether this could cause differences in pupil dilation. In our study, three different heart-beat cues were shown to users using a combination of three channels: haptic, audio, and visual.
The results showed that the haptic-visual cue caused the most significant pupil dilation change from the baseline.

Item: A Gaze-depth Estimation Technique with an Implicit and Continuous Data Acquisition for OST-HMDs (The Eurographics Association, 2017)
Authors: Lee, Youngho; Piumsomboon, Thammathip; Ens, Barrett; Lee, Gun A.; Dey, Arindam; Billinghurst, Mark
Editors: Tony Huang and Arindam Dey
The rapid development of machine learning algorithms can be leveraged for potential software solutions in many domains, including techniques for depth estimation of human eye gaze. In this paper, we propose an implicit and continuous data acquisition method for 3D gaze-depth estimation on an optical see-through head-mounted display (OST-HMD) equipped with an eye tracker. Our method constantly monitors and generates user gaze data for training our machine learning algorithm. The gaze data acquired through the eye tracker include the inter-pupillary distance (IPD) and the gaze distance to the real and virtual targets for each eye.

Item: An AR Network Cabling Tutoring System for Wiring a Rack (The Eurographics Association, 2017)
Authors: Herbert, B. M.; Weerasinghe, A.; Ens, Barrett; Billinghurst, Mark; Wigley, G.
Editors: Tony Huang and Arindam Dey
We present a network cabling tutoring system that guides learners through cabling a network topology by overlaying virtual icons and arrows on the ports. The system determines the network state by parsing switch output and does not depend on network protocols being functional. A server provides a web-based user interface and communicates with an external intelligent tutoring system, the Generalized Intelligent Framework for Tutoring.
Users view AR annotations on a tablet, though support for the HoloLens HMD will be added soon.

Item: An Augmented Reality and Virtual Reality Pillar for Exhibitions: A Subjective Exploration (The Eurographics Association, 2017)
Authors: See, Zi Siang; Sunar, Mohd Shahrizal; Billinghurst, Mark; Dey, Arindam; Santano, Delas; Esmaeili, Human; Thwaites, Harold
Editors: Robert W. Lindeman, Gerd Bruder, and Daisuke Iwai
This paper presents the development of an Augmented Reality (AR) and Virtual Reality (VR) pillar, a novel approach for showing AR and VR content in a public setting. A pillar in a public exhibition venue was converted into a four-sided AR and VR showcase presenting a cultural heritage exhibit, "Boatbuilders of Pangkor". Multimedia tablets and mobile AR head-mounted displays (HMDs) were provided for visitors to experience multisensory AR and VR content demonstrated on the pillar. The content included AR-based videos, maps, images, and text, as well as VR experiences that allowed visitors to view reconstructed 3D subjects and remote locations in a 360° virtual environment. In this paper, we describe the prototype system, a user evaluation study, and directions for future work.
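The gaze-depth estimation item above uses the inter-pupillary distance (IPD) and per-eye gaze distances as features for a learned depth estimator. As a rough illustration of why those features carry depth information, here is a minimal sketch of the purely geometric symmetric-vergence model, an assumption for illustration only and not the paper's trained algorithm: the vergence angle θ of eyes separated by the IPD fixating a point at depth d satisfies θ = 2·atan(IPD / 2d), which can be inverted to recover d.

```python
import math

def vergence_angle(ipd_m: float, depth_m: float) -> float:
    """Vergence angle (radians) for eyes separated by ipd_m fixating a
    point at depth_m straight ahead (symmetric-vergence assumption)."""
    return 2.0 * math.atan(ipd_m / (2.0 * depth_m))

def depth_from_vergence(ipd_m: float, angle_rad: float) -> float:
    """Invert the vergence model to recover the fixation depth in metres."""
    return ipd_m / (2.0 * math.tan(angle_rad / 2.0))

if __name__ == "__main__":
    ipd = 0.064  # 64 mm, a typical adult inter-pupillary distance
    for true_depth in (0.5, 1.0, 2.0):
        angle = vergence_angle(ipd, true_depth)
        print(f"depth {true_depth:.1f} m -> vergence "
              f"{math.degrees(angle):.2f} deg -> recovered "
              f"{depth_from_vergence(ipd, angle):.2f} m")
```

Note that vergence shrinks rapidly with distance, which is one reason a data-driven estimator trained on continuously acquired gaze samples, as the paper proposes, is attractive over a closed-form model at larger depths.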