Search Results
Showing 1-10 of 118
Item Velocity-Based LOD Reduction in Virtual Reality: A Psychophysical Approach (The Eurographics Association, 2023)
Petrescu, David; Warren, Paul A.; Montazeri, Zahra; Pettifer, Steve; Babaei, Vahid; Skouras, Melina
Virtual Reality headsets enable users to explore the environment by performing self-induced movements. The retinal velocity produced by such motion reduces the visual system's ability to resolve fine detail. We measured the impact of self-induced head rotations on the ability to detect quality changes of a realistic 3D model in an immersive virtual reality environment. We varied the Level of Detail (LOD) as a function of rotational head velocity with different degrees of severity. Using a psychophysical method, we asked 17 participants to identify which of the two presented intervals contained the higher quality model under two different maximum velocity conditions. After fitting psychometric functions to data relating the percentage of correct responses to the aggressiveness of LOD manipulations, we identified the threshold severity at which participants could reliably (75%) detect the lower-LOD model. Participants accepted an approximately four-fold LOD reduction even in the low maximum velocity condition without a significant impact on perceived quality, suggesting that there is considerable potential for optimisation when users are moving (increased range of perceptual uncertainty). Moreover, LOD could be degraded significantly more (around 84%) in the maximum head velocity condition, suggesting these effects are indeed speed-dependent.

Item LoBSTr: Real-time Lower-body Pose Prediction from Sparse Upper-body Tracking Signals (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Yang, Dongseok; Kim, Doyeon; Lee, Sung-Hee; Mitra, Niloy and Viola, Ivan
With the popularization of games and VR/AR devices, there is a growing need for capturing human motion with a sparse set of tracking data.
In this paper, we introduce a deep neural network (DNN)-based method for real-time prediction of the lower-body pose from only the tracking signals of the upper-body joints. Specifically, our Gated Recurrent Unit (GRU)-based recurrent architecture predicts the lower-body pose and feet contact states from a past sequence of tracking signals of the head, hands, and pelvis. A major feature of our method is that the input signal is represented by the velocity of the tracking signals. We show that the velocity representation better models the correlation between upper-body and lower-body motions and is more robust to diverse user body scales and proportions than position-orientation representations. In addition, to remove foot-skating and floating artifacts, our network predicts the feet contact state, which is used to post-process the lower-body pose with inverse kinematics to preserve the contact. Our network is lightweight enough to run in real-time applications. We show the effectiveness of our method through several quantitative evaluations against other architectures and input representations on in-the-wild tracking data obtained from commercial VR devices.

Item Experiencing High-Speed Slash Action in Virtual Reality Environment (The Eurographics Association, 2022)
Yamamoto, Toranosuke; Fukuchi, Kentaro; Theophilus Teo; Ryota Kondo
When a user uses a hand controller to swing a virtual sword in a virtual space, the sword movement seems slow if its trajectory reflects the input directly. We hypothesize that this is because we are accustomed to seeing fast and instantaneous motion in movies and animations, and thus perceive the faithfully reproduced motion as relatively slow. To address this issue, we propose a novel method of displaying exaggerated sword motions that allows a virtual reality user to enjoy a fast slash action.
This method displays an arc-shaped motion blur effect along the predicted motion from the moment the system detects the start of the slashing motion until the hand controller stops. Graphics of the sword are not displayed during this time. Therefore, the user is unaware of the actual trajectory of their input and how far it differs from the exaggerated motion blur effect.

Item Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Milef, Nicholas; Sueda, Shinjiro; Kalantari, Nima Khademi; Myszkowski, Karol; Niessner, Matthias
We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper-body tracking data obtained from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions should adhere to the constraints of the virtual environment.
Our system is lightweight, operates in real-time, and is able to produce temporally coherent and realistic motions.

Item Immersive Analytics of Heterogeneous Biological Data Informed through Need-finding Interviews (The Eurographics Association, 2021)
Ripken, Christine; Tusk, Sebastian; Tominski, Christian; Vrotsou, Katerina and Bernard, Jürgen
The goal of this work is to improve existing biological analysis processes by means of immersive analytics. In a first step, we conducted need-finding interviews with 12 expert biologists to understand the limits of current practices and identify the requirements for an enhanced immersive analysis. Based on the gained insights, a novel immersive analytics solution is being developed that enables biologists to explore highly interrelated biological data, including genomes, transcriptomes, and phenomes. We use an abstract tabular representation of heterogeneous data projected onto a curved virtual wall. Several visual and interactive mechanisms are offered to allow biologists to get an overview of large data, to access details and additional information on the fly, to compare selected parts of the data, and to navigate up to about 5 million data values in real-time. Although a formal user evaluation is still pending, initial feedback indicates that our solution can be useful to expert biologists.

Item Effect of Avatar Anthropomorphism on Body Ownership, Attractiveness and Collaboration in Immersive Virtual Environments (The Eurographics Association, 2020)
Gorisse, Geoffrey; Dubosc, Charlotte; Christmann, Olivier; Fleury, Sylvain; Poinsot, Killian; Richir, Simon; Argelaguet, Ferran and McMahan, Ryan and Sugimoto, Maki
Effective collaboration in immersive virtual environments requires the ability to communicate flawlessly using both verbal and non-verbal communication.
We present an experiment investigating the impact of anthropomorphism on the sense of body ownership, avatar attractiveness, and performance in an asymmetric collaborative task. Using three avatars with different facial properties, participants had to solve a construction game according to their partner's instructions. Results reveal no significant difference in terms of body ownership, but demonstrate significant differences in attractiveness and completion duration of the collaborative task. However, the relative verbal interaction duration was not affected by the anthropomorphism level of the characters, meaning that participants were able to interact verbally regardless of how their character physically expressed their words in the virtual environment. Unexpectedly, correlation analyses also reveal a link between attractiveness and performance: the more attractive the avatar, the shorter the completion duration of the game. One could argue that, in the context of this experiment, avatar attractiveness led to an improvement in non-verbal communication, as users may have been more prone to observe their partner, which translates into better performance in collaborative tasks. Further experiments must be conducted using gaze tracking to test this new hypothesis.

Item Immersive WebXR Data Visualisation Tool (The Eurographics Association, 2023)
Ogbonda, Ebube Glory; Vangorp, Peter; Hunter, David
This paper presents a study of a WebXR data visualisation tool designed for the immersive exploration of complex datasets in a 3D environment. The application, developed using A-Frame, D3.js, and JavaScript, provides an interactive, device-agnostic platform compatible with various devices and systems. A user study is proposed to assess the tool's usability, user experience, and mental workload using the NASA Task Load Index (NASA TLX).
The evaluation is planned to employ questionnaires, task completion times, and open-ended questions to gather feedback and insights. The anticipated results aim to show how effective the application is in supporting users in understanding and extracting insights from complex data while delivering an engaging and intuitive experience. Future work will refine and expand the tool's capabilities by exploring interaction guidance, visualisation layout optimisation, and long-term user experience assessment. This research contributes to the growing field of immersive data visualisation and informs future tool design.

Item Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning (The Eurographics Association, 2022)
Khokhar, Adil; Borst, Christoph W.; Hideaki Uchiyama; Jean-Marie Normand
Distractions can cause students to miss out on critical information in educational Virtual Reality (VR) environments. Our work uses generalized features (angular velocities, positional velocities, pupil diameter, and eye openness) extracted from VR headset sensor data (head-tracking, hand-tracking, and eye-tracking) to train a deep CNN-LSTM classifier to detect distractors in our educational VR environment. We present preliminary results demonstrating 94.93% accuracy for our classifier, an improvement in both the accuracy and generality of features used over two recent approaches.
We believe that our work can be used to improve educational VR by providing a more accurate and generalizable approach to distractor detection.

Item Investigating Students' Motivation and Cultural Heritage Learning in a Gamified Versus Non-gamified VR Environment (The Eurographics Association, 2023)
Souropetsis, Markos; Kyza, Eleni A.; Nisiotis, Louis; Georgiou, Yiannis; Giorgalla, Varnavia; Pelechano, Nuria; Liarokapis, Fotis; Rohmer, Damien; Asadipour, Ali
This empirical study investigated how the use of a gamified versus a non-gamified Virtual Reality (VR) learning environment affected student motivation and learning outcomes in the context of a virtual visit to a cultural heritage site. For this purpose, we adopted an experimental research design to analyse the experience of 46 undergraduate university students: 23 of them used a gamified version of the VR learning environment, while the other 23 used the same VR environment without the gamification elements. Data were collected using pre- and post-learning assessments, motivation questionnaires, and individual semi-structured interviews. The data analyses showed that students who experienced the gamified VR learning environment had greater learning gains and perceived competence compared to their counterparts who used the VR environment without the gamification elements. The findings of this research contribute to the principled design of VR environments to optimize students' knowledge acquisition and learning experience.

Item Empathy with Human's and Robot's Embarrassments in Virtual Environments (The Eurographics Association, 2020)
Sugiura, Maruta; Higashihata, Kento; Sato, Atsushi; Itakura, Shoji; Kitazaki, Michiteru; Kulik, Alexander and Sra, Misha and Kim, Kangsoo and Seo, Byung-Kuk
We feel embarrassed not only when we ourselves are embarrassed but also when we watch others being embarrassed. Humans show empathy for pain not only toward other humans but also toward robots.
However, it has not been investigated whether humans show empathy for a robot's embarrassment. Thus, we aimed to test whether humans can empathize with a robot's embarrassment in virtual environments. Four situations, each with both non-embarrassing and embarrassing stimuli, were presented on an HMD, and participants were asked to rate their own feeling of embarrassment and the actor's feeling of embarrassment. We found that participants' own feeling of embarrassment was higher for human than for robot actors, and higher in embarrassing than in non-embarrassing conditions. The actor's feeling of embarrassment was rated higher in embarrassing than in non-embarrassing conditions, and the effect was much larger for human than for robot actors. These results suggest that participants showed empathy with both humans and robots in the embarrassing situations, but inferred that the robot feels less embarrassed than humans.
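The velocity-dependent scheme in the first result above (Velocity-Based LOD Reduction in Virtual Reality) amounts to a mapping from rotational head velocity to a detail budget. A minimal sketch of such a mapping follows; all numeric values (`v_max_deg_s`, `min_scale`) are illustrative assumptions, not the paper's fitted psychometric thresholds:

```python
def lod_scale(head_velocity_deg_s, v_max_deg_s=360.0, min_scale=0.16, max_scale=1.0):
    """Map rotational head velocity (deg/s) to a level-of-detail scale factor.

    Interpolates linearly from full detail at rest down to `min_scale`
    (~84% reduction, roughly the degradation the study found tolerable at
    maximum head velocity) as velocity approaches `v_max_deg_s`. The linear
    ramp and all constants are placeholders for a psychometrically fitted curve.
    """
    # Normalise velocity to [0, 1], clamping out-of-range inputs.
    t = min(max(head_velocity_deg_s / v_max_deg_s, 0.0), 1.0)
    # Blend from max_scale (stationary) toward min_scale (fast rotation).
    return max_scale + t * (min_scale - max_scale)
```

A renderer could multiply its per-object triangle budget by this factor each frame, using the headset's reported angular velocity.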
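LoBSTr's central design choice, feeding the network velocities of the tracking signals rather than absolute positions and orientations, can be illustrated with a finite-difference conversion. This sketch is an assumption about how such features could be derived from raw tracker samples, not the authors' actual pipeline:

```python
def velocity_features(positions, dt):
    """Convert a sequence of 3D tracker positions into per-frame velocities.

    `positions` is a list of (x, y, z) tuples sampled at interval `dt`
    seconds. Velocities are scale-relative: two users with different limb
    lengths producing proportionally similar motion yield similar features,
    which is the robustness argument made for the velocity representation.
    """
    return [
        tuple((b - a) / dt for a, b in zip(p0, p1))
        for p0, p1 in zip(positions, positions[1:])
    ]
```

In practice the same differencing would be applied to head, hand, and pelvis signals before they enter the recurrent network.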
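The detection step in "Experiencing High-Speed Slash Action" (the blur is shown from the detected start of a slash until the controller stops) could be prototyped with simple speed thresholds on the controller trace. Both `start_thresh` and `stop_thresh` are hypothetical values, not taken from the paper:

```python
def detect_slash(speeds, start_thresh=3.0, stop_thresh=0.2):
    """Find a slash within a per-frame controller-speed trace (m/s).

    Returns (start_index, stop_index) of the interval during which the
    proposed method would hide the sword and draw the arc-shaped motion
    blur instead, or None if no slash is detected. A slash starts when
    speed first exceeds `start_thresh` and ends when it falls below
    `stop_thresh` (or at the end of the trace).
    """
    start = next((i for i, s in enumerate(speeds) if s > start_thresh), None)
    if start is None:
        return None
    stop = next(
        (i for i in range(start + 1, len(speeds)) if speeds[i] < stop_thresh),
        len(speeds) - 1,
    )
    return start, stop
```

A real implementation would run this incrementally per frame rather than over a recorded trace, but the thresholding logic is the same.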
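The dynamic sample selection in "Variational Pose Prediction with Dynamic Sample Selection" monitors generated-pose quality and smoothly transitions to a better sample when quality crosses a statistically defined threshold. A minimal sketch, assuming a scalar quality score and a k-sigma rule (both assumptions; the paper's anomaly detector and pose representation are more involved):

```python
def gate_sample(score, history, k=2.0):
    """Flag a generated-pose quality score as anomalous.

    Returns True when `score` lies more than `k` standard deviations above
    the mean of recent scores in `history` (higher score = worse pose here).
    A simple stand-in for a statistically defined anomaly threshold.
    """
    n = len(history)
    mean = sum(history) / n
    var = sum((s - mean) ** 2 for s in history) / n
    return score > mean + k * var ** 0.5

def blend_pose(bad, good, t):
    """Linearly interpolate flattened joint values from a flagged sample
    toward a better one, giving a smooth transition instead of a pop."""
    return [a + t * (b - a) for a, b in zip(bad, good)]
```

Ramping `t` from 0 to 1 over a few frames when `gate_sample` fires would approximate the smooth transition the abstract describes.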