ICAT-EGVE2019


The University of Tokyo, Hongo Campus, Japan, September 11–13, 2019
Sensing and Interaction
Random-Forest-Based Initializer for Real-time Optimization-based 3D Motion Tracking Problems
Jiawei Huang, Ryo Sugawara, Taku Komura, and Yoshifumi Kitamura
Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display
Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto
FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms
Masaaki Fukuoka, Adrien Verhulst, Fumihiko Nakamura, Ryo Takizawa, Katsutoshi Masai, and Maki Sugimoto
Tracking and Positioning
Evaluation of Embodied Agent Positioning and Moving Interfaces for an AR Virtual Guide
Nattaon Techasarntikul, Photchara Ratsamee, Jason Orlosky, Tomohiro Mashita, Yuki Uranishi, Kiyoshi Kiyokawa, and Haruo Takemura
Evaluation of Virtual Reality Tracking Systems Underwater
Raphael Costa, Rongkai Guo, and John Quarles
Evaluation of Proxemics in Dynamic Interaction with a Mixed Reality Avatar Robot
Jingxin Zhang, Omar Janeh, Nikolaos Katzakis, Dennis Krupke, and Frank Steinicke
Perception and Human Augmentation
Rendering of Walking Sensation for a Sitting User by Lower Limb Motion Display
Kentaro Yamaoka, Ren Koide, Tomohiro Amemiya, Michiteru Kitazaki, Vibol Yem, and Yasushi Ikei
Visuo-Haptic Interface to Augment Player's Perception in Multiplayer Ball Game
Yuji Sano, Koya Sato, Ryoichiro Shiraishi, Mai Otsuki, and Koichi Mizutani
Real Time Remapping of a Third Arm in Virtual Reality
Adam Drogemuller, Adrien Verhulst, Benjamin Volmer, Bruce H. Thomas, Masahiko Inami, and Maki Sugimoto
Simulation and Visualization
Virtual Ability Simulation: Applying Rotational Gain to the Leg to Increase Confidence During Physical Rehabilitation
Tanvir Irfan Chowdhury, Sharif Mohammad Shahnewaz Ferdous, Tabitha Peck, and John Quarles
Interactive and Immersive Tools for Point Clouds in Archaeology
Ronan Gaugne, Quentin Petit, Jean-Baptiste Barreau, and Valérie Gouranton
Evaluation of a Mixed Reality based Method for Archaeological Excavation Support
Ronan Gaugne, Théophane Nicolas, Quentin Petit, Mai Otsuki, and Valérie Gouranton
Design and Programming
Authoring AR Interaction by AR
Flavien Lécuyer, Valérie Gouranton, Adrien Reuzeau, Ronan Gaugne, and Bruno Arnaldi
Model and Tools for Integrating IoT into Mixed Reality Environments: Towards a Virtual-Real Seamless Continuum
Jérémy Lacoche, Morgan Le Chénéchal, Eric Villain, and Anthony Foulonneau
ReallifeEngine: A Mixed Reality-Based Visual Programming System for SmartHomes
Ryohei Suzuki, Katsutoshi Masai, and Maki Sugimoto

BibTeX (ICAT-EGVE2019)
@inproceedings{10.2312:egve.20191273,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Random-Forest-Based Initializer for Real-time Optimization-based 3D Motion Tracking Problems}},
  author = {Huang, Jiawei and Sugawara, Ryo and Komura, Taku and Kitamura, Yoshifumi},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191273}
}

@inproceedings{10.2312:egve.20191274,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display}},
  author = {Nakamura, Fumihiko and Suzuki, Katsuhiro and Masai, Katsutoshi and Itoh, Yuta and Sugiura, Yuta and Sugimoto, Maki},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191274}
}

@inproceedings{10.2312:egve.20191277,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Evaluation of Virtual Reality Tracking Systems Underwater}},
  author = {Costa, Raphael and Guo, Rongkai and Quarles, John},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191277}
}

@inproceedings{10.2312:egve.20191278,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Evaluation of Proxemics in Dynamic Interaction with a Mixed Reality Avatar Robot}},
  author = {Zhang, Jingxin and Janeh, Omar and Katzakis, Nikolaos and Krupke, Dennis and Steinicke, Frank},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191278}
}

@inproceedings{10.2312:egve.20191275,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms}},
  author = {Fukuoka, Masaaki and Verhulst, Adrien and Nakamura, Fumihiko and Takizawa, Ryo and Masai, Katsutoshi and Sugimoto, Maki},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191275}
}

@inproceedings{10.2312:egve.20191276,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Evaluation of Embodied Agent Positioning and Moving Interfaces for an AR Virtual Guide}},
  author = {Techasarntikul, Nattaon and Ratsamee, Photchara and Orlosky, Jason and Mashita, Tomohiro and Uranishi, Yuki and Kiyokawa, Kiyoshi and Takemura, Haruo},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191276}
}

@inproceedings{10.2312:egve.20191280,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Visuo-Haptic Interface to Augment Player's Perception in Multiplayer Ball Game}},
  author = {Sano, Yuji and Sato, Koya and Shiraishi, Ryoichiro and Otsuki, Mai and Mizutani, Koichi},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191280}
}

@inproceedings{10.2312:egve.20191279,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Rendering of Walking Sensation for a Sitting User by Lower Limb Motion Display}},
  author = {Yamaoka, Kentaro and Koide, Ren and Amemiya, Tomohiro and Kitazaki, Michiteru and Yem, Vibol and Ikei, Yasushi},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191279}
}

@inproceedings{10.2312:egve.20191281,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Real Time Remapping of a Third Arm in Virtual Reality}},
  author = {Drogemuller, Adam and Verhulst, Adrien and Volmer, Benjamin and Thomas, Bruce H. and Inami, Masahiko and Sugimoto, Maki},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191281}
}

@inproceedings{10.2312:egve.20191282,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Virtual Ability Simulation: Applying Rotational Gain to the Leg to Increase Confidence During Physical Rehabilitation}},
  author = {Chowdhury, Tanvir Irfan and Shahnewaz Ferdous, Sharif Mohammad and Peck, Tabitha and Quarles, John},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191282}
}

@inproceedings{10.2312:egve.20191283,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Interactive and Immersive Tools for Point Clouds in Archaeology}},
  author = {Gaugne, Ronan and Petit, Quentin and Barreau, Jean-Baptiste and Gouranton, Valérie},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191283}
}

@inproceedings{10.2312:egve.20191287,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{ReallifeEngine: A Mixed Reality-Based Visual Programming System for SmartHomes}},
  author = {Suzuki, Ryohei and Masai, Katsutoshi and Sugimoto, Maki},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191287}
}

@inproceedings{10.2312:egve.20191285,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Authoring AR Interaction by AR}},
  author = {Lécuyer, Flavien and Gouranton, Valérie and Reuzeau, Adrien and Gaugne, Ronan and Arnaldi, Bruno},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191285}
}

@inproceedings{10.2312:egve.20191286,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Model and Tools for Integrating IoT into Mixed Reality Environments: Towards a Virtual-Real Seamless Continuum}},
  author = {Lacoche, Jérémy and Le Chénéchal, Morgan and Villain, Eric and Foulonneau, Anthony},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191286}
}

@inproceedings{10.2312:egve.20191284,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Evaluation of a Mixed Reality based Method for Archaeological Excavation Support}},
  author = {Gaugne, Ronan and Petit, Quentin and Otsuki, Mai and Gouranton, Valérie and Nicolas, Théophane},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20191284}
}

@inproceedings{10.2312:egve.20192023,
  booktitle = {ICAT-EGVE 2019 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
  editor = {Kakehi, Yasuaki and Hiyama, Atsushi},
  title = {{Forty Years of Telexistence —From Concept to TELESAR VI (Invited Talk)}},
  author = {Tachi, Susumu},
  year = {2019},
  publisher = {The Eurographics Association},
  ISSN = {1727-530X},
  ISBN = {978-3-03868-083-3},
  DOI = {10.2312/egve.20192023}
}


Recent Submissions

  • ICAT-EGVE 2019: Frontmatter
    (Eurographics Association, 2019) Kakehi, Yasuaki; Hiyama, Atsushi
  • Random-Forest-Based Initializer for Real-time Optimization-based 3D Motion Tracking Problems
    (The Eurographics Association, 2019) Huang, Jiawei; Sugawara, Ryo; Komura, Taku; Kitamura, Yoshifumi
    Many motion tracking systems require solving an inverse problem to compute the tracking result from raw sensor measurements, such as images from cameras and signals from receivers. For real-time motion tracking, typical solutions such as the Gauss-Newton method need an initial value to optimize the cost function through iterations. A powerful initializer is crucial to generate a proper initial value for every time instance, both for achieving continuous, accurate tracking and for rapid recovery when tracking is temporarily interrupted. An improper initial value easily causes optimization divergence and cannot always lead to reasonable solutions. Therefore, we propose a new random-forest-based initializer to obtain proper initial values for efficient real-time inverse problem computation. Our method trains a random-forest model with massive, varied inputs and corresponding outputs and uses it as an initializer for runtime optimization. As an instance, we apply our initializer to IM3D, a real-time magnetic 3D motion tracking system with multiple tiny, identifiable, wireless, occlusion-free passive markers (LC coils). During runtime, a proper initial value is obtained from the initializer based on sensor measurements, and the system computes each actual marker's position and pose by solving the inverse problem through an optimization process in real time. We conduct four experiments to evaluate the reliability and performance of the initializer. Compared with traditional or naive initializers (i.e., using a static value or random values), our results show that the proposed method provides recovery from tracking loss over a wider range of the tracking space, and the entire process (initialization and optimization) runs in real time.
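To make the idea concrete, here is a minimal sketch (not the authors' IM3D code) of a random-forest initializer feeding a least-squares refinement. The forward model simulate_sensors, the pose parameterization, and all dimensions are invented stand-ins.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import least_squares

def simulate_sensors(pose):
    """Hypothetical forward model: pose (x, y, theta) -> 5 sensor readings."""
    x, y, theta = pose
    return np.array([x + np.cos(theta), y + np.sin(theta),
                     x * y, x - y, np.hypot(x, y)])

# Offline: train the initializer on massive simulated input/output pairs.
rng = np.random.default_rng(0)
poses = rng.uniform(-1.0, 1.0, size=(5000, 3))
signals = np.array([simulate_sensors(p) for p in poses])
initializer = RandomForestRegressor(n_estimators=50).fit(signals, poses)

# Runtime: predict an initial pose from a measurement, then refine it by
# minimizing the residual between measured and modeled sensor values.
true_pose = np.array([0.3, -0.2, 0.8])
measurement = simulate_sensors(true_pose)
pose0 = initializer.predict(measurement[None, :])[0]
result = least_squares(lambda p: simulate_sensors(p) - measurement, pose0)
print("initial guess:", pose0, "refined:", result.x)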
  • Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display
    (The Eurographics Association, 2019) Nakamura, Fumihiko; Suzuki, Katsuhiro; Masai, Katsutoshi; Itoh, Yuta; Sugiura, Yuta; Sugimoto, Maki
    Facial expressions enrich communication via avatars. However, in common immersive virtual reality (VR) systems, facial occlusion by the head-mounted display (HMD) makes it difficult to capture users' faces. The mouth plays a particularly important role in facial expressions because it is essential for rich interaction. In this paper, we propose a technique that classifies mouth shapes into six classes using optical sensors embedded in an HMD and automatically labels the training dataset by vowel recognition. We experiment with five subjects to compare the recognition rates of machine learning under manual and automated labeling conditions. Results show that our method achieves an average classification accuracy of 99.9% under manual labeling and 96.3% under automated labeling. These findings indicate that automated labeling is competitive with manual labeling, although the latter's classification accuracy is slightly higher. Furthermore, we develop an application that reflects the mouth shape on avatars, blending the six mouth shapes and applying the result to the avatar.
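A hedged sketch of the automatic-labeling idea on synthetic data: a stubbed vowel recognizer supplies the label for each optical sensor frame, and a classifier is trained on those auto-labeled frames. The sensor layout, recognizer, and features are assumptions, not the paper's pipeline.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

CLASSES = ["a", "i", "u", "e", "o", "neutral"]  # six mouth-shape classes

def recognize_vowel(audio_frame):
    """Hypothetical stand-in for a vowel recognizer from a speech toolkit."""
    return CLASSES[int(audio_frame[0]) % len(CLASSES)]

# Fake paired recordings: 16 optical sensor channels plus an audio feature
# that (by construction) encodes the spoken vowel.
rng = np.random.default_rng(1)
labels = rng.integers(0, len(CLASSES), size=2000)
audio = labels[:, None].astype(float)
optical = rng.normal(0, 0.1, (2000, 16)) + labels[:, None]

# Automatic labeling: the recognizer, not a human, tags every optical frame.
y = np.array([CLASSES.index(recognize_vowel(a)) for a in audio])
X_tr, X_te, y_tr, y_te = train_test_split(optical, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy on held-out frames:", clf.score(X_te, y_te))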
  • Evaluation of Virtual Reality Tracking Systems Underwater
    (The Eurographics Association, 2019) Costa, Raphael; Guo, Rongkai; Quarles, John
    The objective of this research is to compare the effectiveness of various virtual reality tracking systems underwater. There have been few works in aquatic virtual reality (VR), i.e., VR systems that can be used in a real underwater environment, and those that exist have noted limitations on tracking accuracy. Our initial test results suggest that inertial measurement units work well underwater for orientation tracking, but a different approach is needed for position tracking. Toward this goal, we waterproofed and evaluated several consumer tracking systems intended for gaming to determine the most effective approaches. First, we informally tested infrared and fiducial-marker-based systems, which demonstrated significant limitations of optical approaches. Next, we quantitatively compared inertial measurement units (IMUs) and a magnetic tracking system both above water (as a baseline) and underwater. By comparing the devices' rotation data, we found that the magnetic tracking system implemented by the Razer Hydra is approximately as accurate underwater as a phone-based IMU. This suggests that magnetic tracking systems should be further explored as a possibility for underwater VR applications.
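One plausible way to score two trackers' agreement from their rotation streams, as the comparison above does, is the angle of the relative quaternion. This sketch uses synthetic streams and is not the study's analysis code.

import numpy as np

def quat_angle(q1, q2):
    """Angular difference in radians between two unit quaternions (w,x,y,z)."""
    dot = abs(np.clip(np.dot(q1, q2), -1.0, 1.0))  # abs handles double cover
    return 2.0 * np.arccos(dot)

rng = np.random.default_rng(2)
stream_imu = rng.normal(size=(100, 4))
stream_imu /= np.linalg.norm(stream_imu, axis=1, keepdims=True)
stream_magnetic = stream_imu + rng.normal(scale=0.01, size=(100, 4))
stream_magnetic /= np.linalg.norm(stream_magnetic, axis=1, keepdims=True)

errors = [quat_angle(a, b) for a, b in zip(stream_imu, stream_magnetic)]
print("mean disagreement (deg):", np.degrees(np.mean(errors)))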
  • Evaluation of Proxemics in Dynamic Interaction with a Mixed Reality Avatar Robot
    (The Eurographics Association, 2019) Zhang, Jingxin; Janeh, Omar; Katzakis, Nikolaos; Krupke, Dennis; Steinicke, Frank
    We present a mixed-reality avatar arm-swing technique to subtly communicate the velocity of the robot it is attached to. We designed and performed a series of studies to investigate the effectiveness of this method and the proxemics when humans interact dynamically with the avatar robot. Our results suggest that robot moving speed has a significant effect on the proxemics between the human and the mixed-reality avatar robot. Attaching an avatar to the robot did not significantly influence proxemics compared to a baseline situation (robot only). Participants reported that the method helped them perceive and predict the robot's state, and they commented favourably on potential applications such as noticing a tiny ground robot. Our work offers reference points and guidelines for the external expression of robot state with mixed reality.
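A plausible sketch of the arm-swing mapping: swing amplitude and frequency grow with robot speed, mimicking human gait cues. The constants and functional form are invented; the paper does not specify this exact mapping.

import math

def arm_swing_angle(robot_speed_mps: float, t: float) -> float:
    """Avatar arm swing (degrees) at time t for a given robot speed."""
    amplitude = min(40.0, 60.0 * robot_speed_mps)   # saturating amplitude
    frequency = 0.5 + 1.5 * robot_speed_mps         # Hz, faster when quicker
    return amplitude * math.sin(2 * math.pi * frequency * t)

print(arm_swing_angle(0.5, t=0.25))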
  • FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms
    (The Eurographics Association, 2019) Fukuoka, Masaaki; Verhulst, Adrien; Nakamura, Fumihiko; Takizawa, Ryo; Masai, Katsutoshi; Sugimoto, Maki
    Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but they require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator's intentions. One way to predict the operator's intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arm (SRA) commands (e.g., grab, release). To measure FEs, we used an optical-sensor-based approach (here, sensors inside an HMD). The sensor data are fed to an SVM able to predict FEs. The SRAs can then carry out commands by predicting the operator's FEs (and, arguably, the operator's intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Reality Environment (VE). We researched the mapping patterns by (1) performing an object reaching - grasping - releasing task using ''any'' FEs; (2) analyzing sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; and (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups and SRA commands by recording task completion time. We found that the optimal combinations are (i) Eyes + Mouth for grabbing/releasing and (ii) Mouth for extending/contracting the arms (i.e., along the forward axis).
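The command side of such a system might reduce to a lookup from predicted expression groups to arm commands. The sketch below follows the groupings reported above; the label names and dispatch function are illustrative assumptions, not the paper's implementation.

from enum import Enum, auto

class Command(Enum):
    GRAB = auto()
    RELEASE = auto()
    EXTEND = auto()
    CONTRACT = auto()
    IDLE = auto()

# Eyes + Mouth expressions drive grab/release; Mouth-only expressions
# drive extend/contract, per the combinations found in the studies.
FE_TO_COMMAND = {
    "eyes_wide_mouth_open": Command.GRAB,
    "eyes_closed_mouth_open": Command.RELEASE,
    "mouth_pucker": Command.EXTEND,
    "mouth_stretch": Command.CONTRACT,
    "neutral": Command.IDLE,
}

def dispatch(predicted_fe: str) -> Command:
    """Map a classifier's facial-expression label to an SRA command."""
    return FE_TO_COMMAND.get(predicted_fe, Command.IDLE)

print(dispatch("mouth_pucker"))  # Command.EXTEND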
  • Evaluation of Embodied Agent Positioning and Moving Interfaces for an AR Virtual Guide
    (The Eurographics Association, 2019) Techasarntikul, Nattaon; Ratsamee, Photchara; Orlosky, Jason; Mashita, Tomohiro; Uranishi, Yuki; Kiyokawa, Kiyoshi; Takemura, Haruo
    Augmented Reality (AR) has become a popular technology in museums, and many venues now provide AR applications inside gallery spaces. To improve museum tour experiences, we have developed an embodied-agent AR guide system that explains multi-section detailed information hidden in a painting. In this paper, we investigate the effect of different guiding interfaces that use this type of embodied agent when explaining large-scale artwork. Our interfaces combine two guiding positions, inside and outside the artwork area, with two agent movements, teleporting and flying. We conducted a within-subjects experiment with 28 participants to test Inside-Teleport, Inside-Flying, Outside-Teleport, and Outside-Flying. Results indicated that although the Inside-Flying interface often obstructed the painting, most participants preferred it, since it was perceived as natural and helped users find corresponding art details more easily.
  • Visuo-Haptic Interface to Augment Player's Perception in Multiplayer Ball Game
    (The Eurographics Association, 2019) Sano, Yuji; Sato, Koya; Shiraishi, Ryoichiro; Otsuki, Mai; Mizutani, Koichi
    We developed a system that augments the player's perception and supports situation awareness, motivating players to participate in multiplayer sports, in the context of a soccer game. The positional relationship of the opponents was provided using visual feedback, and the position of an opponent beyond the field of view was provided using haptic feedback. Through an experiment, we confirmed that combined visuo-haptic feedback, as well as visual and haptic feedback used independently, could improve the player's ball control skill. We also found, based on NASA-TLX, that visual and haptic feedback used independently can reduce mental workload compared with no feedback.
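A minimal sketch of the feedback policy described above, under an assumed 90-degree horizontal field of view: opponents inside the field of view get a visual marker, while those outside it trigger a haptic cue on the corresponding side. All names and thresholds are illustrative.

FOV_DEG = 90.0  # assumed horizontal field of view

def feedback_for(opponent_bearing_deg: float) -> str:
    """Return the cue for an opponent at a bearing (0 = straight ahead)."""
    bearing = (opponent_bearing_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(bearing) <= FOV_DEG / 2:
        return f"visual marker at {bearing:.0f} deg"
    side = "left" if bearing < 0 else "right"
    return f"haptic pulse on {side} side"

for b in (10, -70, 120, -150):
    print(b, "->", feedback_for(b))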
  • Rendering of Walking Sensation for a Sitting User by Lower Limb Motion Display
    (The Eurographics Association, 2019) Yamaoka, Kentaro; Koide, Ren; Amemiya, Tomohiro; Kitazaki, Michiteru; Yem, Vibol; Ikei, Yasushi
    This paper describes the presentation characteristics of a lower limb motion display designed to create a walking sensation for a sitting user. The display lifts and translates each leg independently, generating a walking sensation by moving the feet alternately as in real walking. According to the experimental results, our system can render a walking sensation by drawing a trajectory with an amplitude of about 10% of real walking. Although the backward amplitude is larger than the forward amplitude in real walking, our system created a better walking sensation for the sitting user when the forward amplitude was larger than the backward one, the opposite of real walking.
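A minimal sketch of such an alternating foot trajectory, scaled to roughly 10% of a real stride per the finding above. The waveform, stride length, and cadence are assumptions for illustration only.

import math

REAL_STRIDE_M = 0.6          # assumed real-walk foot excursion
SCALE = 0.10                 # display renders ~10% of real amplitude

def foot_offsets(t: float, cadence_hz: float = 0.8):
    """Forward/back offsets (m) of the left and right foot at time t (s)."""
    phase = 2 * math.pi * cadence_hz * t
    left = SCALE * REAL_STRIDE_M * math.sin(phase)
    right = -left                     # legs move in anti-phase
    return left, right

for t in (0.0, 0.3, 0.6):
    print(t, foot_offsets(t))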
  • Real Time Remapping of a Third Arm in Virtual Reality
    (The Eurographics Association, 2019) Drogemuller, Adam; Verhulst, Adrien; Volmer, Benjamin; Thomas, Bruce H.; Inami, Masahiko; Sugimoto, Maki
    We present an initial study investigating the usability of a system that lets users use their own limbs (here the left arm, right arm, left leg, right leg, and head) to remap and control a virtual third arm. The remapping was done by pre-selecting a limb by gazing at it, then selecting it by voice activation (here we asked participants to say ''switch''). The system was evaluated in Virtual Reality (VR), where we recorded the performance of participants (N=12, within-group design) in two box-collection tasks. We found that participants self-reported (i) significantly less body ownership when switching limbs than when not switching, and (ii) less effort when switching limbs than when not switching. In addition, we found that limb dominance did not significantly affect remap decisions in controlling the third arm.
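The remapping flow above (gaze to pre-select, the spoken word ''switch'' to commit) can be sketched as a small state holder; the event hooks and limb names here are hypothetical, not the study's code.

LIMBS = {"left arm", "right arm", "left leg", "right leg", "head"}

class ThirdArmRemapper:
    def __init__(self):
        self.gazed = None               # limb currently under the user's gaze
        self.controller = "right arm"   # limb driving the third arm

    def on_gaze(self, limb: str) -> None:
        if limb in LIMBS:
            self.gazed = limb           # pre-selection by gaze

    def on_voice(self, word: str) -> None:
        if word == "switch" and self.gazed:
            self.controller = self.gazed  # commit the remap

remap = ThirdArmRemapper()
remap.on_gaze("left leg")
remap.on_voice("switch")
print(remap.controller)  # the left leg now drives the third arm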
  • Virtual Ability Simulation: Applying Rotational Gain to the Leg to Increase Confidence During Physical Rehabilitation
    (The Eurographics Association, 2019) Chowdhury, Tanvir Irfan; Shahnewaz Ferdous, Sharif Mohammad; Peck, Tabitha; Quarles, John
    This paper investigates a concept called Virtual Ability Simulation (VAS) for people with a disability due to Multiple Sclerosis (MS) in a virtual reality (VR) environment. In a VAS, people with a disability perform tasks that are made easier in the virtual environment (VE) than in the real world. We hypothesized that putting people with disabilities in a VAS would increase confidence and enable more efficient task completion. To investigate this hypothesis, we conducted a within-subjects experiment in which participants performed a virtual ''kick the ball'' task in two conditions: a no-gain condition (i.e., the same difficulty as in the real world) and a rotational-gain condition (i.e., physically easier than the real world but visually the same). The results from our study suggest that VAS increased participants' confidence, which in turn enabled them to perceive the same task as easier.
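Rotational gain itself is a one-line transform; this sketch shows the assumed form, where the rendered leg angle is the tracked physical angle scaled by a gain, so the gain condition is physically easier while looking unchanged. The gain value is an illustrative assumption.

def virtual_leg_angle(physical_angle_deg: float, gain: float = 1.5) -> float:
    """Apply rotational gain to the tracked leg angle before rendering."""
    return physical_angle_deg * gain

# With gain 1.5, a 30-degree physical swing is rendered as a 45-degree one,
# so the participant reaches the ball with less physical effort.
print(virtual_leg_angle(30.0))          # 45.0 (rotational-gain condition)
print(virtual_leg_angle(30.0, gain=1))  # 30.0 (no-gain condition)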
  • Interactive and Immersive Tools for Point Clouds in Archaeology
    (The Eurographics Association, 2019) Gaugne, Ronan; Petit, Quentin; Barreau, Jean-Baptiste; Gouranton, Valérie
    In this article, we present a framework for immersive and interactive 3D manipulation of large point clouds in the context of an archaeological study. The framework was designed in an interdisciplinary collaboration with archaeologists. We first applied it to the study of a 17th-century Real Tennis court building. We propose a display infrastructure associated with a set of tools that allows archaeologists to interact directly with the point cloud within their study process. The resulting framework allows immersive navigation at 1:1 scale in a dense point cloud, the production and manipulation of cut plans and cross sections, and the positioning and visualisation of photographic views. We also applied the same framework, with different purposes, to three other archaeological contexts: a 13th-century ruined chapel, a 19th-century wreck, and an Iron Age cremation urn.
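A cut plan or cross section can be sketched as a slab filter over the cloud: keep the points within half a slab thickness of a cutting plane. This minimal numpy version uses a synthetic cloud and illustrates only the geometric core of such a tool, not the paper's implementation.

import numpy as np

def cross_section(points, plane_point, plane_normal, thickness=0.02):
    """Return the points lying within `thickness` of the given plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    dist = np.abs((points - plane_point) @ n)  # absolute distance to plane
    return points[dist <= thickness / 2]

cloud = np.random.default_rng(3).uniform(0, 10, size=(100_000, 3))
section = cross_section(cloud, plane_point=np.array([5.0, 0.0, 0.0]),
                        plane_normal=np.array([1.0, 0.0, 0.0]))
print(f"{len(section)} of {len(cloud)} points fall in the cut plan")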
  • ReallifeEngine: A Mixed Reality-Based Visual Programming System for SmartHomes
    (The Eurographics Association, 2019) Suzuki, Ryohei; Masai, Katsutoshi; Sugimoto, Maki
    The conveniences experienced by society have improved tremendously with the development of the Internet of Things (IoT). Among the affordances stemming from this innovation is the SmartHome, an IoT concept that is already spreading to general households. Despite this proliferation, however, ordinary users find it difficult to perform complex control and automation of IoT devices, which keeps them from fully exploiting IoT benefits. These problems highlight the need for a system that enables general users to easily manipulate IoT devices. Correspondingly, this study constructed a visual programming system that facilitates IoT device operation. The system, built on data obtained from various sensors in a SmartHome, employs mixed reality (MR) to enhance the visualization of the data, ease the understanding of the positional relationships among devices, and simplify the checking of execution results. We conducted an evaluation experiment in which eight users tested the proposed system, and we verified its usefulness based on the time participants took to program diverse IoT devices and on a questionnaire capturing their subjective assessments. The results indicate that the proposed system makes it easy to understand the correspondence between real-world devices and nodes in the MR environment, as well as the connections between sensors and home appliances. On the other hand, its operability was evaluated negatively.
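A visual program of this kind might compile down to a small dataflow of sensor, condition, and appliance nodes; the nodes below are invented stand-ins for illustration, not ReallifeEngine's actual node set.

def temperature_sensor():                  # source node (stub reading)
    return 29.5                            # degrees C

def above_threshold(value, limit=28.0):    # condition node
    return value > limit

def air_conditioner(turn_on):              # sink node (stubbed actuator)
    print("AC on" if turn_on else "AC off")

# One "wire" of the visual program: sensor -> condition -> appliance.
graph = [(temperature_sensor, above_threshold, air_conditioner)]
for source, condition, sink in graph:
    sink(condition(source()))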
  • Authoring AR Interaction by AR
    (The Eurographics Association, 2019) Lécuyer, Flavien; Gouranton, Valérie; Reuzeau, Adrien; Gaugne, Ronan; Arnaldi, Bruno
    The demand for augmented reality applications is growing rapidly. In many domains, we observe new interest in this technology, stressing the need for more efficient ways of producing augmented content. As in virtual reality, interactive objects in augmented reality are a powerful means to improve the experience. While interactivity is now well democratized in virtual reality, it is still finding its way into augmented reality. To open the way to this interactive augmented reality, we designed a new methodology for managing interactions in augmented reality, supported by an authoring tool for use by designers and domain experts. This tool makes the production of interactive augmented content faster while remaining scalable to the needs of each application. Usually, a large amount of application-creation time is spent in discussions between the designer (or domain expert), who carries the needs of the application, and the developer, who holds the knowledge to create it. Our tool reduces this time by allowing the designer to create an interactive application without writing a single line of code.
  • Model and Tools for Integrating IoT into Mixed Reality Environments: Towards a Virtual-Real Seamless Continuum
    (The Eurographics Association, 2019) Lacoche, Jérémy; Le Chénéchal, Morgan; Villain, Eric; Foulonneau, Anthony
    This paper introduces a new software model and new tools for managing indoor smart environments (smart homes, smart buildings, smart factories, etc.) with MR technologies. Our fully integrated solution is based on a software model of connected objects that manages them independently of their actual nature: these objects can be simulated or real. Based on this model, our goal is to create a continuum between a real smart environment and its 3D digital twin in order to simulate and manipulate it. Two kinds of tools leverage this model. First, we introduce two complementary tools, one AR and one VR, for creating the digital twin of a given smart environment. Second, we propose 3D interactions and dedicated metaphors for creating automation scenarios in the same VR application. These scenarios are then converted into a Petri-net-based model that expert users can edit later. Adjusting the parameters of our model allows navigating the continuum in order to use the digital twin for simulation, deployment, and real/virtual synchronization purposes. These contributions and their benefits are illustrated through the automation configuration of a room in our lab.
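The real/simulated independence described above suggests a single interface implemented by both a digital-twin object and a wrapper for the physical device, so the same automation scenario can drive either end of the continuum. The class and method names in this sketch are invented for illustration.

from abc import ABC, abstractmethod

class ConnectedLight(ABC):
    @abstractmethod
    def set_on(self, on: bool) -> None: ...

class SimulatedLight(ConnectedLight):
    """Digital-twin light living in the 3D scene."""
    def set_on(self, on: bool) -> None:
        print("twin light rendered", "on" if on else "off")

class RealLight(ConnectedLight):
    """Wrapper that would call the device's actual API (stubbed here)."""
    def set_on(self, on: bool) -> None:
        print("device request: power =", on)

def motion_scenario(light: ConnectedLight, motion_detected: bool) -> None:
    """A tiny automation rule, oblivious to real vs. simulated."""
    light.set_on(motion_detected)

motion_scenario(SimulatedLight(), True)   # simulate in the digital twin
motion_scenario(RealLight(), True)        # deploy on the real home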
  • Evaluation of a Mixed Reality based Method for Archaeological Excavation Support
    (The Eurographics Association, 2019) Gaugne, Ronan; Petit, Quentin; Otsuki, Mai; Gouranton, Valérie; Nicolas, Théophane
    In the context of archaeology, micro-excavation for the study of finds (metal, ceramics...) or of an archaeological context (cremation, bulk sampling) is usually performed without complete knowledge of the internal content, with the risk of damaging nested artefacts during the process. The use of medical imaging coupled with digital 3D technologies has led to significant breakthroughs by refining the reading of complex artifacts. However, archaeologists may have difficulty constructing a 3D mental image from the axial and longitudinal sections obtained with medical imaging, may likewise struggle to visualize and manipulate a complex 3D object on screen, and cannot simultaneously manipulate and analyze a 3D image and a real object. Thus, although digital technologies allow 3D visualization (stereoscopic screens, VR headsets...), they limit the archaeologist's natural, intuitive, direct 3D perception of the material or context being studied. We therefore propose a visualization system based on optical see-through augmented reality that associates the real view of the archaeological material with data from medical imaging. This is a relevant approach for composite or corroded objects and for contexts associating several objects, such as cremations. The results presented in the paper identify visualization modalities that allow the archaeologist to estimate, with acceptable error, the position of an internal element in a particular archaeological material: an Iron Age cremation block inside an urn.
  • Forty Years of Telexistence —From Concept to TELESAR VI (Invited Talk)
    (The Eurographics Association, 2019) Tachi, Susumu
    Telexistence is a human-empowerment concept that enables a human in one location to virtually exist in another location and to act freely there. The term also refers to the system of science and technology that enables realization of the concept. The concept was originally proposed by the author in 1980, and its feasibility has been demonstrated through the construction of alter-ego robot systems such as TELESAR, TELESAR V, and TELESAR VI, which were developed under the national research and development projects “MITI Advanced Robot Technology in Hazardous Environments,” the “CREST Haptic Telexistence Project,” and the “ACCEL Embodied Media Project,” respectively. Mutual telexistence systems, such as TELESAR II & IV, capable of generating the sensation of being in a remote place using a combination of alter-ego robotics and retro-reflective projection technology (RPT), have been developed, and the feasibility of mutual telexistence has been demonstrated. Forty years of telexistence development are historically reviewed in this keynote paper.