Title: Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment
Authors: Krum, David M.; Omoteso, Olugbenga; Ribarsky, William; Starner, Thad; Hodges, Larry F.
Editors: D. Ebert, P. Brunet, I. Navazo
Year: 2002
Date available: 2014-01-30
ISBN: 1-58113-536-X
ISSN: 1727-5296
DOI: https://doi.org/10.2312/VisSym/VisSym02/195-200

Abstract: A growing body of research shows several advantages of multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface, one that integrates speech and hand gestures. The interface has the additional property of operating relative to the user, so it can be used while the user is in motion or standing at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole Earth 3D visualization, which presents navigation challenges due to the large range of scales and the extended space available. The characteristics of the multimodal interface are examined, including speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and establishes some parameters for the design and use of such interfaces.