JVRC 13: Joint Virtual Reality Conference of EGVE - EuroVR

Evaluation of Surround-View and Self-Rotation in the OCTAVIS VR-System

Dyck, Eugen
Pfeiffer, Thies
Botsch, Mario

Visual Attention to Wayfinding Aids in Virtual Environments

Bertrand, Jeffrey
Ebrahimi, Elham
Wachter, Aliceann
Luo, Jun
Babu, Sabarish V.
Duchowski, Andrew T.
Meehan, Nancy
Gramopadhye, Anand K.

LightSkin: Real-Time Global Illumination for Virtual and Mixed Reality

Lensing, Philipp
Broll, Wolfgang

Ray-Traced Collision Detection: Interpenetration Control and Multi-GPU Performance

Lehericey, Francois
Gouranton, Valérie
Arnaldi, Bruno

Personalized Animatable Avatars from Depth Data

Mashalkar, Jai
Bagwe, Niket
Chaudhuri, Parag

Semantic Modelling of Interactive 3D Content

Flotynski, Jakub
Walczak, Krzysztof

Exploring Distant Objects with Augmented Reality

Tatzgern, Markus
Grasset, Raphael
Veas, Eduardo
Kalkofen, Denis
Seichter, Hartmut
Schmalstieg, Dieter

Background Motion, Clutter, and the Impact on Virtual Object Motion Perception in Augmented Reality

Ferrer, Vicente
Yang, Yifan
Perdomo, Alex
Quarles, John

Methodology for Immersive Emotional Assessment of Virtual Product Design by Customers

Katicic, Jurica
Häfner, Polina
Ovtcharova, Jivka

The Impact of Altered Gravitation on Performance and Workload of Augmented Reality Hand-Eye-Coordination: Inside vs. Outside of Human Body Frame of Reference

Markov-Vetter, Daniela
Zander, Vanja
Latsch, Joachim
Staadt, Oliver

Free-form Implicit Haptic Rendering

Moustakas, Konstantinos

Towards Enabling More Effective Locomotion in VR Using a Wheelchair-based Motion Platform

Fiore, Loren Puchalla
Coben, Ella
Merritt, Samantha
Liu, Peng
Interrante, Victoria

"It"+"I": Virtual Embodiments as Hybrid Experiences

Giraud, Tom
Paljic, Alexis
Leroy, Laure


BibTeX (JVRC 13: Joint Virtual Reality Conference of EGVE - EuroVR)
@inproceedings{10.2312/EGVE.JVRC13.001-008,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Evaluation of Surround-View and Self-Rotation in the OCTAVIS VR-System}},
  author = {Dyck, Eugen and Pfeiffer, Thies and Botsch, Mario},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.001-008}
}
@inproceedings{10.2312/EGVE.JVRC13.009-016,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Visual Attention to Wayfinding Aids in Virtual Environments}},
  author = {Bertrand, Jeffrey and Ebrahimi, Elham and Wachter, Aliceann and Luo, Jun and Babu, Sabarish V. and Duchowski, Andrew T. and Meehan, Nancy and Gramopadhye, Anand K.},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.009-016}
}
@inproceedings{10.2312/EGVE.JVRC13.017-024,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{LightSkin: Real-Time Global Illumination for Virtual and Mixed Reality}},
  author = {Lensing, Philipp and Broll, Wolfgang},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.017-024}
}
@inproceedings{10.2312/EGVE.JVRC13.033-040,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Ray-Traced Collision Detection: Interpenetration Control and Multi-GPU Performance}},
  author = {Lehericey, Francois and Gouranton, Valérie and Arnaldi, Bruno},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.033-040}
}
@inproceedings{10.2312/EGVE.JVRC13.025-032,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Personalized Animatable Avatars from Depth Data}},
  author = {Mashalkar, Jai and Bagwe, Niket and Chaudhuri, Parag},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.025-032}
}
@inproceedings{10.2312/EGVE.JVRC13.041-048,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Semantic Modelling of Interactive 3D Content}},
  author = {Flotynski, Jakub and Walczak, Krzysztof},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.041-048}
}
@inproceedings{10.2312/EGVE.JVRC13.049-056,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Exploring Distant Objects with Augmented Reality}},
  author = {Tatzgern, Markus and Grasset, Raphael and Veas, Eduardo and Kalkofen, Denis and Seichter, Hartmut and Schmalstieg, Dieter},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.049-056}
}
@inproceedings{10.2312/EGVE.JVRC13.057-064,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Background Motion, Clutter, and the Impact on Virtual Object Motion Perception in Augmented Reality}},
  author = {Ferrer, Vicente and Yang, Yifan and Perdomo, Alex and Quarles, John},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.057-064}
}
@inproceedings{10.2312/EGVE.JVRC13.077-082,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Methodology for Immersive Emotional Assessment of Virtual Product Design by Customers}},
  author = {Katicic, Jurica and Häfner, Polina and Ovtcharova, Jivka},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.077-082}
}
@inproceedings{10.2312/EGVE.JVRC13.065-072,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{The Impact of Altered Gravitation on Performance and Workload of Augmented Reality Hand-Eye-Coordination: Inside vs. Outside of Human Body Frame of Reference}},
  author = {Markov-Vetter, Daniela and Zander, Vanja and Latsch, Joachim and Staadt, Oliver},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.065-072}
}
@inproceedings{10.2312/EGVE.JVRC13.073-076,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Free-form Implicit Haptic Rendering}},
  author = {Moustakas, Konstantinos},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.073-076}
}
@inproceedings{10.2312/EGVE.JVRC13.083-090,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{Towards Enabling More Effective Locomotion in VR Using a Wheelchair-based Motion Platform}},
  author = {Fiore, Loren Puchalla and Coben, Ella and Merritt, Samantha and Liu, Peng and Interrante, Victoria},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.083-090}
}
@inproceedings{10.2312/EGVE.JVRC13.091-094,
  booktitle = {Joint Virtual Reality Conference of EGVE - EuroVR},
  editor = {Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt},
  title = {{"It"+"I": Virtual Embodiments as Hybrid Experiences}},
  author = {Giraud, Tom and Paljic, Alexis and Leroy, Laure},
  year = {2013},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-905674-47-7},
  DOI = {10.2312/EGVE.JVRC13.091-094}
}

Recent Submissions

  • Item
    Evaluation of Surround-View and Self-Rotation in the OCTAVIS VR-System
    (The Eurographics Association, 2013) Dyck, Eugen; Pfeiffer, Thies; Botsch, Mario; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    In this paper we evaluate spatial presence and orientation in the OCTAVIS system, a novel virtual reality platform aimed at training and rehabilitation of visual-spatial cognitive abilities. It consists of eight touch-screen displays surrounding the user, thereby providing a 360° horizontal panoramic view. A rotating office chair and a joystick in the armrest serve as input devices to easily navigate through the virtual environment. We conducted a two-step experiment to investigate spatial orientation capabilities with our device. First, we examined whether the extension of the horizontal field of view from 135° (three displays) to 360° (eight displays) has an effect on spatial presence and on the accuracy in a pointing task. Second, driving the full eight screens, we explored the effect of embodied self-rotation using the same measures. In particular, we compare navigation by rotating the world around a stationary user with navigation through a stable world by a self-rotating user.
  • Item
    Visual Attention to Wayfinding Aids in Virtual Environments
    (The Eurographics Association, 2013) Bertrand, Jeffrey; Ebrahimi, Elham; Wachter, Aliceann; Luo, Jun; Babu, Sabarish V.; Duchowski, Andrew T.; Meehan, Nancy; Gramopadhye, Anand K.; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    In an empirical evaluation, we examined participants' visual attention allocation to a dynamic wayfinding map in a complex simulation meant to educate medical practitioners in a hand hygiene protocol. Complex virtual environments (VEs) are novel types of virtual worlds that embody large spaces, interactive virtual humans, static and dynamic virtual entities, and intricate tasks that simulate real-world settings. Previous investigations of wayfinding aids have focused on the evaluation of spatial orientation, knowledge acquisition, and usage. We employed an eye tracker and created visualization tools to quantitatively and qualitatively analyze participants' visual attention to the wayfinding aid in our simulation. Results suggest that the proportion of gaze time, total gaze count, and gaze transitions between various elements of the VE are altered by the use of the wayfinding aid. Participants also tend to employ innovative visual strategies in order to efficiently plan routes and accomplish tasks in the VE.
  • Item
    LightSkin: Real-Time Global Illumination for Virtual and Mixed Reality
    (The Eurographics Association, 2013) Lensing, Philipp; Broll, Wolfgang; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    Synthesizing global illumination effects is a vast field of research for both offline and real-time rendering. While the most important goals for offline rendering are realism and physical correctness, real-time rendering approaches additionally need to be sufficiently fast. In this paper we present a fast and novel global illumination approach capable of realizing indirect illumination for diffuse and glossy surfaces based on thousands of virtual area lights, even for dynamic scenes. To achieve real-time performance we calculate indirect light influence only on sparse scene points in model-space and interpolate the results for the entire visible scene. A novel shading technique is proposed to support high-frequency indirect lighting effects such as view-dependent glossy reflections without introducing temporal incoherence in dynamic scenes. Since our approach does not require any pre-computation, it may be applied to Mixed Reality applications, improving the visual integration of virtual content.
  • Item
    Ray-Traced Collision Detection: Interpenetration Control and Multi-GPU Performance
    (The Eurographics Association, 2013) Lehericey, Francois; Gouranton, Valérie; Arnaldi, Bruno; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    We proposed in [LGA13] an iterative ray-traced collision detection algorithm (IRTCD) that exploits spatial and temporal coherency and proved to be computationally efficient, but at the price of some geometrical approximations that allow more interpenetration than needed. In this paper, we present two methods to efficiently control and reduce the interpenetration without noticeable computation overhead. The first method predicts the next potentially colliding vertices. These predictions are used to make our IRTCD algorithm more robust to the above-mentioned approximations, thereby reducing the errors by up to 91%. We also present a ray re-projection algorithm that improves the physical response of the ray-traced collision detection algorithm. This algorithm also reduces the interpenetration between objects in a virtual environment by up to 52%. Our last contribution shows that our algorithm, when implemented on multi-GPU architectures, is far faster.
  • Item
    Personalized Animatable Avatars from Depth Data
    (The Eurographics Association, 2013) Mashalkar, Jai; Bagwe, Niket; Chaudhuri, Parag; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    We present a method to create virtual character models of real users from noisy depth data. We use a combination of four depth sensors to capture a point cloud model of the person. Direct meshing of this data often creates meshes with topology that is unsuitable for proper character animation. We develop our mesh model by fitting a single template mesh to the point cloud in a two-stage process. The first stage fitting involves piecewise smooth deformation of the mesh, whereas the second stage does a finer fit using an iterative Laplacian framework. We complete the model by adding properly aligned and blended textures to the final mesh and show that it can be easily animated using motion data from a single depth camera. Our process maintains the topology of the original mesh, and the proportions of the final mesh match those of the actual user, validating the accuracy of the process. Other than the depth sensor, the process does not require any specialized hardware for creating the mesh. It is efficient, robust, and mostly automatic.
  • Item
    Semantic Modelling of Interactive 3D Content
    (The Eurographics Association, 2013) Flotynski, Jakub; Walczak, Krzysztof; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    Interactive three-dimensional content is the primary element of virtual reality (VR) and augmented reality (AR) systems. The increasing complexity and use of VR/AR systems in various application domains require efficient methods of creating, searching and combining interactive 3D content that can be used by people with different specialities who are not required to be IT experts. The Semantic Web approach enables the description of web resources with common semantic concepts. However, the use of semantic concepts may also facilitate the creation of 3D content. The main contribution of this paper is a method of semantic modelling of interactive 3D content. The method leverages semantic constraints between different components of 3D content as well as representations of 3D content at different levels of abstraction. It can be used with a multitude of domain-specific ontologies and knowledge bases to simplify the creation and searching of reusable semantic 3D content components and the assembly of complex 3D scenes from independent distributed elements.
  • Item
    Exploring Distant Objects with Augmented Reality
    (The Eurographics Association, 2013) Tatzgern, Markus; Grasset, Raphael; Veas, Eduardo; Kalkofen, Denis; Seichter, Hartmut; Schmalstieg, Dieter; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    Augmented reality (AR) enables users to retrieve additional information about real-world objects and locations. Exploring such location-based information in AR requires physical movement to different viewpoints, which may be tiring and even infeasible when viewpoints are out of reach. In this paper, we present object-centric exploration techniques for handheld AR that allow users to access information freely, using a virtual copy metaphor to explore large real-world objects. We evaluated our interfaces in controlled conditions and collected first experiences in a real-world pilot study. Based on our findings, we put forward design recommendations that should be considered by future generations of location-based AR browsers, 3D tourist guides, and situated urban planning applications.
  • Item
    Background Motion, Clutter, and the Impact on Virtual Object Motion Perception in Augmented Reality
    (The Eurographics Association, 2013) Ferrer, Vicente; Yang, Yifan; Perdomo, Alex; Quarles, John; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    Background motion and visual clutter are present in almost all augmented reality applications. However, there is minimal prior work that has investigated the effects that background motion and clutter (e.g., a busy city street) can have on the perception of virtual object motion in augmented reality. To investigate these issues, we conducted an experiment in which participants' perceptions of changes in overlaid virtual object velocity were tested with several levels of background motion, background clutter, virtual object motion, and virtual object clutter. Our experiment offers a novel approach to assessing virtual object motion perception and gives new insights into the impact that background clutter and motion have on perception in augmented reality.
  • Item
    Methodology for Immersive Emotional Assessment of Virtual Product Design by Customers
    (The Eurographics Association, 2013) Katicic, Jurica; Häfner, Polina; Ovtcharova, Jivka; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    This paper presents a novel, integrated methodology for customer-centered emotional assessment of future products during the early conceptual design stages. The methodology integrates the technologies of Virtual Reality and emotion recognition in the simultaneous, interconnected processes of product development and market research. Its goal is to provide relevant emotional customer feedback during the interactive experience of conceptual product designs that exist only virtually at early development stages. In this way, the company can identify which product designs would be suitable for future products. The novel aspect of the methodology lies in the structured integration of experts from various disciplines with specific roles. It enables an often-neglected holistic approach to the task: each participant can identify the best solutions to problems in their area of expertise and contribute to solving interface problems in a synergetic manner. The presented validation study proved the coherence of the methodology and showed clear preferences for concrete technological solutions regarding the state of the art and future potential.
  • Item
    The Impact of Altered Gravitation on Performance and Workload of Augmented Reality Hand-Eye-Coordination: Inside vs. Outside of Human Body Frame of Reference
    (The Eurographics Association, 2013) Markov-Vetter, Daniela; Zander, Vanja; Latsch, Joachim; Staadt, Oliver; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    We investigated new interface technologies to ease astronauts' work under altered gravity. By bridging the gap between physical reality and digital information, Augmented Reality keeps the focus on the task to be fulfilled. It is important that the operation of such Augmented Reality supported assistant systems is adequately preserved in weightlessness. Distinguishing between interface alignment inside and outside of the human body frame of reference, this paper presents a user study conducted to quantify and qualify the impact of altered gravity on sensorimotor hand-eye coordination. Taking advantage of parabolic flights, we compared the performance of these alignment methods under normal and altered gravity. Besides confirming effects of altered gravity on aimed pointing movements, the study showed higher efficiency and decreased workload for the body-aligned condition.
  • Item
    Free-form Implicit Haptic Rendering
    (The Eurographics Association, 2013) Moustakas, Konstantinos; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    This paper presents a framework for haptic rendering in virtual environments based on distance maps over implicit support plane mappings. Initially, a rigid 3D object is modelled using support plane mappings so as to efficiently perform collision detection. Then, after the collision queries are resolved, the surface of the 3D object can be directly reconstructed in constant time using the equations of the support planes and the discrete distance map that encodes the distance of the object surface from the support plane. As a result, analytical formulae can be extracted that provide the force feedback only as a function of the 3D object's spatial transformation and the position of the haptic probe. Experimental evaluation and computational complexity analysis demonstrate that the proposed approach can significantly reduce the computational cost compared to existing collision detection and haptic rendering methods.
  • Item
    Towards Enabling More Effective Locomotion in VR Using a Wheelchair-based Motion Platform
    (The Eurographics Association, 2013) Fiore, Loren Puchalla; Coben, Ella; Merritt, Samantha; Liu, Peng; Interrante, Victoria; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    This paper addresses two questions relevant to the design of effective locomotion methods for VR using a novel wheelchair motion simulation interface. First, we investigate the extent to which people's ability to keep track of where they are in an immersive virtual environment can be facilitated by actual physical movement (rotation and translation) in the context of vehicular travel. Second, we quantitatively analyze various characteristics of the travel paths produced by different types of locomotion control systems to gain insight into the aspects of control that can evoke or impede natural patterns of movement through a virtual environment. In a within-subjects experiment, we asked 35 volunteers to virtually search through 16 identical-looking boxes randomly placed within a realistically rendered, circularly symmetric virtual room to find 8 hidden targets. Participants performed this task under four different conditions of integrated visual and physical movement, controlled via a joystick interface attached to a motorized wheelchair. In all four cases, participants 'drove' their virtual viewpoint using the joystick, but the nature of the accompanying physical movement varied between the conditions. The four conditions were: no physical movement; full physical rotation only; full physical translation and rotation; and "partial" physical translation and rotation, wherein the extent of the actual physical movement was proportionally reduced relative to the visually indicated movement. Analysis of the search results did not find a statistically significant main effect of the physical movement condition on total distance traveled or total number of revisits to previously searched locations. However, we did see a trend towards greater search accuracy in the full physical motion condition, with a greater proportion of perfect trials, a smaller proportion of failed searches, fewer boxes revisited on average, and more novel boxes searched before the first revisit in that condition than in the others. Analyzing the paths traveled, we found that the velocity and curvature profiles of the virtual motion paths enabled by our novel joystick-controlled wheelchair motion simulation interface were more qualitatively similar to those produced by natural walking than were travel paths we had previously observed when more basic joystick locomotion control methods were used. This suggests potential benefits in adopting a vehicle-simulation movement control method for joystick locomotion control in VR.
  • Item
    "It"+"I": Virtual Embodiments as Hybrid Experiences
    (The Eurographics Association, 2013) Giraud, Tom; Paljic, Alexis; Leroy, Laure; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
    A dichotomy exists in the way virtual embodiments are currently studied: conversational approaches consider embodied entities as other selves, whereas avatar approaches study them as users' hosts. Virtual reality applications such as our case study often propose a different, in-between embodiment experience. In the context of a virtual visit to a house for sale, this paper examines the user's self-reported embodiment perception resulting from such a hybrid experience. To induce variability in this embodiment experience, we manipulated avatar representations (high versus low anthropomorphism) and frame of reference (egocentric versus exocentric). Results show the importance of the entity's humanness in fostering both experiences. When controlling for humanness, having a conversational experience appears uncorrelated with having an avatar experience. This highlights the need to study these hybrid experiences as a combination of both approaches.