Search Results

Now showing 1 - 10 of 33
  • Item
    Experiencing High-Speed Slash Action in Virtual Reality Environment
    (The Eurographics Association, 2022) Yamamoto, Toranosuke; Fukuchi, Kentaro; Theophilus Teo; Ryota Kondo
    When a user uses a hand controller to swing a virtual sword in a virtual space, the sword movement seems slow if its trajectory reflects the input directly. We hypothesize that this is because we are accustomed to seeing fast, instantaneous motion in movies and animations, and thus perceive the actual motion as relatively slow. To address this issue, we propose a novel method of displaying exaggerated sword motions that allows a virtual reality user to enjoy a fast slash action. The method displays an arc-shaped motion blur effect along the predicted motion from the moment the system detects the start of the slashing motion until the hand controller stops moving. Graphics of the sword are not displayed during this time, so the user is unaware of the actual trajectory of their input and how far it differs from the exaggerated motion blur effect.
  • Item
    Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning
    (The Eurographics Association, 2022) Khokhar, Adil; Borst, Christoph W.; Hideaki Uchiyama; Jean-Marie Normand
    Distractions can cause students to miss critical information in educational Virtual Reality (VR) environments. Our work uses generalized features (angular velocities, positional velocities, pupil diameter, and eye openness) extracted from VR headset sensor data (head-tracking, hand-tracking, and eye-tracking) to train a deep CNN-LSTM classifier to detect distractors in our educational VR environment. We present preliminary results demonstrating 94.93% accuracy for our classifier, improving on two recent approaches in both accuracy and the generality of the features used. We believe that our work can be used to improve educational VR by providing a more accurate and generalizable approach to distractor detection.
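    As a rough illustration of the kind of generalized features described above, the sketch below derives positional and angular velocity magnitudes from tracked samples by finite differences and stacks them with pupil diameter and eye openness. The function name, the fixed sampling interval, and the Euler-angle-difference approximation of angular velocity are our assumptions, not the authors' code.

```python
import numpy as np

def extract_features(positions, yaw_pitch_roll, pupil_diameter, eye_openness, dt):
    """Per-frame generalized features: positional velocity magnitude,
    angular velocity magnitude, pupil diameter, and eye openness.

    positions:      (T, 3) head positions in meters
    yaw_pitch_roll: (T, 3) head orientation in radians (Euler angles;
                    differencing them is a small-rotation approximation)
    dt:             sampling interval in seconds
    Returns a (T-1, 4) feature matrix, one row per frame transition.
    """
    pos_vel = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    ang_vel = np.linalg.norm(np.diff(yaw_pitch_roll, axis=0), axis=1) / dt
    return np.stack([pos_vel, ang_vel,
                     pupil_diameter[1:], eye_openness[1:]], axis=1)
```

    A sequence of such rows would then be fed to the CNN-LSTM as a time series.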
  • Item
    Effect of Spatial and Temporal Dilation of a Brand Logo Printed on a VR Shopping Bag
    (The Eurographics Association, 2022) Horii, Moeki; Yamazaki, Azusa; Kuratomo, Noko; Zempo, Keiichi; Theophilus Teo; Ryota Kondo
    Interaction opportunities in virtual environments (VEs), such as stores and events using virtual reality (VR), are increasing, and advertisements for brand recognition may be introduced into VEs in the future. The objective of this research is to find a way of displaying advertisements in a VE that is more likely to attract consumers' attention. Additionally, this study investigates the differences in consumer reactions depending on the advertisement's display method. In the experiment, VR customer avatars holding shopping bags walked through a store, with a fictitious logo attached to each shopping bag. Each type of logo was presented in four different ways. The questionnaire results show that the display method that attracted participants' eyes using temporal dilation, such as footprints that disappear over time, was significantly more memorable than the methods that did not use it. In addition, for the 3D + footprint display, the mean impression value of the blue logo, which is close to the color of the floor, was smaller than that of the other colors' logos at the one percent significance level. This also revealed that it is necessary to use a conspicuous color that will not be buried in background colors, such as the floor color.
  • Item
    Interactive Segmentation of Textured Point Clouds
    (The Eurographics Association, 2022) Schmitz, Patric; Suder, Sebastian; Schuster, Kersten; Kobbelt, Leif; Bender, Jan; Botsch, Mario; Keim, Daniel A.
    We present a method for the interactive segmentation of textured 3D point clouds. The problem is formulated as a minimum graph cut on a k-nearest-neighbor graph and leverages the rich information contained in high-resolution photographs as the discriminative feature. We demonstrate that the achievable segmentation accuracy is significantly improved compared to using an average color per point, as in prior work. The method is designed to work efficiently on large datasets and yields results at interactive rates. This way, an interactive workflow can be realized in an immersive virtual environment, which supports the segmentation task through improved depth perception and the use of tracked 3D input devices. Our method makes it possible to create high-quality segmentations of textured point clouds quickly and conveniently.
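    As a minimal sketch of the min-cut formulation (not the paper's implementation), the following hand-rolled Edmonds-Karp max-flow segments a toy four-point "cloud" whose neighbor-edge weights decay with color difference; the exponential weight function, the seed handling via large-capacity terminal links, and all names are illustrative assumptions.

```python
from collections import defaultdict, deque
import math

def max_flow_min_cut(cap, source, sink):
    """Edmonds-Karp max-flow on a capacity map cap[u][v]; returns the
    set of nodes on the source side of the minimum cut."""
    flow = defaultdict(float)
    def bfs():
        parent = {source: None}
        q = deque([source])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    q.append(v)
        return None
    while True:
        parent = bfs()
        if parent is None:
            break
        # find the bottleneck capacity along the augmenting path
        v, bottleneck = sink, math.inf
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[(u, v)])
            v = u
        # push the bottleneck flow (negative reverse flow = residual edge)
        v = sink
        while parent[v] is not None:
            u = parent[v]
            flow[(u, v)] += bottleneck
            flow[(v, u)] -= bottleneck
            v = u
    # nodes still reachable from the source in the residual graph
    reach, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in reach and cap[u][v] - flow[(u, v)] > 0:
                reach.add(v)
                q.append(v)
    return reach

# toy "point cloud": chain 0-1-2-3 with per-point colors (two red, two blue)
colors = {0: (1, 0, 0), 1: (0.9, 0.1, 0), 2: (0, 0, 1), 3: (0.1, 0, 0.9)}
cap = defaultdict(lambda: defaultdict(float))
def n_link(u, v):
    d2 = sum((a - b) ** 2 for a, b in zip(colors[u], colors[v]))
    w = math.exp(-d2 / 0.5)          # similar colors -> strong link
    cap[u][v] += w; cap[v][u] += w
for u, v in [(0, 1), (1, 2), (2, 3)]:
    n_link(u, v)
cap['S'][0] = cap[0]['S'] = 100.0    # foreground seed
cap[3]['T'] = cap['T'][3] = 100.0    # background seed
fg = max_flow_min_cut(cap, 'S', 'T') - {'S'}
```

    The cut falls on the weak edge between the red and blue points, so `fg` contains points 0 and 1. In the paper's setting the per-point color term is replaced by features from high-resolution photographs, and the graph is a k-NN graph over millions of points rather than a chain.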
  • Item
    From Capture to Immersive Viewing of 3D HDR Point Clouds
    (The Eurographics Association, 2022) Loscos, Celine; Souchet, Philippe; Barrios, Théo; Valenzise, Giuseppe; Cozot, Rémi; Hahmann, Stefanie; Patow, Gustavo A.
    The collaborators of the ReVeRY project address the design of a specific camera grid, a cost-efficient system that acquires several viewpoints at once, possibly under several exposures, and the conversion of the multiview, multi-exposed video stream into a high-quality 3D HDR point cloud. In the last two decades, industry and researchers have made significant advances in media content acquisition in three main directions: increased resolution and image quality with the new ultra-high-definition (UHD) standard; stereo capture for 3D content; and high-dynamic-range (HDR) imaging. Compression, representation, and interoperability of these new media are active research fields aiming to reduce data size while remaining perceptually accurate. The originality of the project is to address both HDR and depth throughout the entire pipeline. Creativity is supported by several tools that answer challenges at the different stages of the pipeline: camera setup, data processing, capture visualisation, virtual camera control, compression, and perceptually guided immersive visualisation. This tutorial presents the experience acquired by the researchers of the project.
  • Item
    Could you Relax in an Artistic Co-creative Virtual Reality Experience?
    (The Eurographics Association, 2022) Lomet, Julien; Gaugne, Ronan; Gouranton, Valérie; Hideaki Uchiyama; Jean-Marie Normand
    Our work contributes to the design and study of artistic collaborative virtual environments through the presentation of an immersive and interactive digital artwork installation and the evaluation of the experience's impact on visitors' emotional state. The experience is centered on a dance performance, involves collaborative spectators who are engaged in the experience through full-body movements, and is structured in three phases: a time of relaxation and discovery of the universe, a time of co-creation, and a time of co-active contemplation. The collaborative artwork ''Creative Harmony'' was designed by a multidisciplinary team of artists, researchers, and computer scientists from different laboratories. The aesthetic of the artistic environment is inspired by 19th-century German Romantic painting. To foster co-presence, each participant in the experience is associated with an avatar that represents both their body and their movements. The music is an original composition designed to give the universe of ''Creative Harmony'' a peaceful and meditative ambiance. The evaluation of the impact on visitors' mood is based on the "Brief Mood Introspection Scale" (BMIS), a standard tool widely used in psychological and medical contexts. We also present an assessment of the experience through the analysis of questionnaires filled in by the visitors. We observed an increase in the Positive-Tired indicator and a decrease in the Negative-Relaxed indicator, demonstrating the relaxing capabilities of the immersive virtual environment.
  • Item
    Geometric Deformation for Reducing Optic Flow and Cybersickness Dose Value in VR
    (The Eurographics Association, 2022) Lou, Ruding; So, Richard H. Y.; Bechmann, Dominique; Sauvage, Basile; Hasic-Telalovic, Jasminka
    Today virtual reality technologies are becoming more and more widespread and have found strong applications in various domains. However, the fear of experiencing motion sickness is still an important barrier for VR users. Instead of moving physically, VR users experience virtual locomotion, but their vestibular systems do not sense the self-motion that is visually induced by immersive displays. This mismatch between the visual and vestibular senses causes sickness. Previous solutions actively reduce the user's field of view or alter their navigation. In this paper we propose a passive approach that temporarily deforms the geometry of the virtual environment according to the user's navigation. Two deformation methods have been prototyped and tested. The first reduces the perceived optic flow, which is the main cause of visually induced motion sickness. The second encourages users to adopt smoother trajectories, reducing the cybersickness dose value. Both methods have the potential to be applied generically.
  • Item
    AmplifiedCoaster: Amplifying the Perception of Ascent and Descent in Virtual-Reality-Equipped Electric Wheelchair in an Electric Wheeled Ramp
    (The Eurographics Association, 2022) Ito, Shunta; Nakanishi, Yasuto; Theophilus Teo; Ryota Kondo
    We introduce a novel virtual reality (VR) ride system consisting of a head-mounted display (HMD), an electric wheelchair, and an electric wheeled ramp, intended as an amusement park attraction. The electric wheeled ramp, a customized electric wheelchair, can carry a user wearing an HMD and can control its speed, acceleration, and orientation. The system conveys the sense of continuous movement on a slope, and of movement on slopes with varying curvature, in ascending and descending experiences, thus amplifying the perception of virtual ascent and descent.
  • Item
    NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field
    (The Eurographics Association, 2022) Li, Zhong; Song, Liangchen; Liu, Celong; Yuan, Junsong; Xu, Yi; Ghosh, Abhijeet; Wei, Li-Yi
    In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, in which each ray is characterized by a 4D parameter. We then formulate the light field as a function that maps rays to their corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Unlike previous light field approaches, which require dense view sampling to reliably render novel views, our method renders novel views by sampling rays and querying the color of each ray directly from the network, enabling high-quality light field rendering from a sparser set of training images. Per-ray depth can optionally be predicted by the network, enabling applications such as auto-refocus. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
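    The two-plane parameterization mentioned above can be sketched as follows: a ray is reduced to the 4D coordinate (u, v, s, t) of its intersections with two parallel planes. The plane positions and the function name here are illustrative assumptions, and the network that maps (u, v, s, t) to a color is omitted.

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to the 4D two-plane light-field coordinate (u, v, s, t):
    (u, v) is its intersection with the plane z = z_uv, and (s, t) its
    intersection with z = z_st. Assumes the ray is not parallel to the
    planes (direction[2] != 0)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t_uv = (z_uv - o[2]) / d[2]      # ray parameter at the first plane
    t_st = (z_st - o[2]) / d[2]      # ray parameter at the second plane
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return np.array([u, v, s, t])
```

    A fully connected network would then be trained to map such (u, v, s, t) vectors to RGB values, so that novel views are rendered by querying one coordinate per pixel.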
  • Item
    NeodiagVR: Virtual Reality Apgar test environment
    (The Eurographics Association, 2022) Ferry, Lucas; Gimeno, Jesús; Estañ, Francisco Javier; Núñez, Francisco; Balaguer, Evelin; Fernández, Marcos; Portalés, Cristina; Posada, Jorge; Serrano, Ana
    Owing to the limited accessibility of postpartum rooms, which are needed to teach medical students the correct assessment of newborn health status, virtual reality and simulation are increasingly used for teaching and assessing the visual perception tests that evaluate the condition of the newborn. This paper explains the operation of an Apgar test evaluation simulator in a virtual reality environment. The virtual environment can be manipulated externally from a web browser to visualize and control the course of the simulation in real time. In addition, an offline version would allow initialization and visualization of the Apgar test parameters without the need for synchronization with the virtual environment.