Title: Using a Visual Attention Model to Improve Gaze Tracking Systems in Interactive 3D Applications
Authors: Hillaire, S.; Breton, G.; Ouarti, N.; Cozot, R.; Lecuyer, A.
Year: 2010
ISSN: 1467-8659
DOI: 10.1111/j.1467-8659.2010.01651.x (https://doi.org/10.1111/j.1467-8659.2010.01651.x)
Pages: 1830-1841
Date deposited: 2015-02-23

Abstract:
This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention component of the human visual system. In a bottom-up approach, for instance, a saliency map is computed for the image, assigning an attention weight to every pixel as a function of its colour, edges, or intensity.

Our algorithm uses an uncertainty window, defined by the accuracy of the gaze tracker and located around the gaze point it reports. Using a visual attention model, the algorithm then searches for the most salient points, or objects, located inside this uncertainty window and determines a new, and potentially more accurate, gaze point. This combination of a gaze tracker with a visual attention model is the main contribution of the paper.

We demonstrate the promising results of our method through two experiments conducted in different contexts: (1) free exploration of a visually rich 3D virtual environment without a specific task, and (2) a video game based on gaze tracking and involving a selection task.

Our approach can be used to improve real-time gaze tracking systems in many interactive 3D applications, such as video games or virtual reality applications. The approach can be combined with any gaze tracker, and the visual attention model can be adapted to the application in which it is used.
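
The core idea described in the abstract (snapping a noisy tracked gaze point to the most salient location inside an uncertainty window) can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the authors' implementation: the saliency map is assumed to be precomputed by some bottom-up saliency model, and the function and parameter names are hypothetical.

    # Minimal sketch: refine a tracked gaze point using a precomputed
    # saliency map and an uncertainty window around the tracker estimate.
    # Illustrative only; not the paper's actual algorithm or code.
    import numpy as np

    def refine_gaze_point(saliency: np.ndarray, gaze_xy: tuple, radius: int) -> tuple:
        """Return the (x, y) of the most salient pixel within `radius`
        pixels of the tracker's gaze estimate `gaze_xy`."""
        h, w = saliency.shape
        gx, gy = gaze_xy
        # Clip the uncertainty window to the image bounds.
        x0, x1 = max(gx - radius, 0), min(gx + radius + 1, w)
        y0, y1 = max(gy - radius, 0), min(gy + radius + 1, h)
        window = saliency[y0:y1, x0:x1]
        # Index of the maximum saliency value inside the window.
        dy, dx = np.unravel_index(np.argmax(window), window.shape)
        return (x0 + dx, y0 + dy)

    # Example usage: a placeholder saliency map, a tracked gaze point, and
    # an uncertainty radius that would in practice come from the tracker's
    # stated accuracy.
    saliency_map = np.random.rand(720, 1280)
    corrected = refine_gaze_point(saliency_map, gaze_xy=(640, 360), radius=40)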