Title: Ground Navigation in 3D Scenes using Simple Body Motions
Authors: Punpongsanon, Parinya; Guy, Emilie; Boubekeur, Tamy; Iwai, Daisuke; Sato, Kosuke
Editors: Yuki Hashimoto, Torsten Kuhlen, Ferran Argelaguet, Takayuki Hoshi, and Marc Erich Latoschik
Date: 2014-12-17
Year: 2014
ISBN: 978-3-905674-77-4
DOI: https://doi.org/10.2312/ve.20141377
Classification: Information Interfaces and Presentation (e.g., HCI) [H.5.2]: User Interfaces

Abstract: With the growing interest in virtual reality, mid-air ground navigation is becoming a fundamental interaction for a large collection of application scenarios. While classical input devices (e.g., mouse/keyboard, gamepad, touchscreen) have their own ground navigation standards, mid-air techniques still lack natural mechanisms for travelling in the scene. In particular, for most applications, the user should be able to navigate in the scene while still interacting with its content using her hands, and observing the displayed content by moving her eyes and locally rotating her head. Since most ground navigation scenarios require only two degrees of freedom (moving forward/backward and rotating the view to the left or right), we propose a mid-air ground navigation control model which leaves the user's hands, eyes, and local head orientation completely free, making use of the remaining tracked body elements to tailor the navigation. We also study its desired properties, such as being easy to discover, easy to control, socially acceptable, accurate, and not tiring.
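
The abstract's core idea (two navigation degrees of freedom driven by tracked body elements other than the hands, eyes, and head) can be sketched concretely. The Python snippet below is a minimal illustration under assumed parameters, not the paper's actual control model: it maps a hypothetical torso-lean angle to forward/backward speed and a shoulder-line yaw angle to view rotation, with deadzones so that resting posture does not trigger motion. All signal names, thresholds, and gains are assumptions made for this sketch.

```python
import math

# Hypothetical sketch of a 2-DOF mid-air ground navigation mapping:
# hands, eyes, and local head orientation stay free; navigation is
# driven by other tracked body elements (here: torso lean and
# shoulder-line yaw from a skeleton tracker). The thresholds and
# gains below are illustrative assumptions, not values from the paper.

LEAN_DEADZONE_DEG = 5.0    # ignore small postural sway (assumed)
YAW_DEADZONE_DEG = 10.0    # ignore small shoulder rotations (assumed)
MAX_SPEED = 1.5            # forward/backward speed cap, m/s (assumed)
MAX_TURN_RATE = 45.0       # view rotation cap, deg/s (assumed)

def navigation_command(torso_lean_deg: float, shoulder_yaw_deg: float):
    """Map two tracked body signals to the two navigation DOFs.

    torso_lean_deg   -- forward (+) / backward (-) lean of the torso
    shoulder_yaw_deg -- rotation of the shoulder line relative to a
                        calibrated rest pose, left (-) / right (+)
    Returns (speed_m_per_s, turn_rate_deg_per_s).
    """
    # Forward/backward translation from torso lean, with a deadzone.
    if abs(torso_lean_deg) < LEAN_DEADZONE_DEG:
        speed = 0.0
    else:
        excess = abs(torso_lean_deg) - LEAN_DEADZONE_DEG
        speed = math.copysign(min(excess * 0.1, MAX_SPEED), torso_lean_deg)

    # Left/right view rotation from shoulder yaw, with a deadzone.
    if abs(shoulder_yaw_deg) < YAW_DEADZONE_DEG:
        turn = 0.0
    else:
        excess = abs(shoulder_yaw_deg) - YAW_DEADZONE_DEG
        turn = math.copysign(min(excess * 2.0, MAX_TURN_RATE), shoulder_yaw_deg)

    return speed, turn

# Example: a 12-degree forward lean with shoulders at rest moves the
# viewpoint forward without rotating it, leaving hands and gaze free.
print(navigation_command(12.0, 0.0))   # -> approx. (0.7, 0.0)
```

The deadzones reflect the desired properties listed in the abstract: keeping small, unintentional postural movements below threshold is one plausible way to make such a technique accurate and not tiring to hold in the neutral pose.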