Title: Gradient-based Steering for Vision-based Crowd Simulation Algorithms
Authors: Dutra, Teofilo B.; Marques, Ricardo; Cavalcante-Neto, Joaquim Bento; Vidal, Creto A.; Pettré, Julien
Editors: Loic Barthe and Bedrich Benes
Date: 2017-04-22
Year: 2017
ISSN: 1467-8659
DOI: 10.1111/cgf.13130 (https://doi.org/10.1111/cgf.13130)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13130
Pages: 337-348
Keywords: I.3.7 [Computer Graphics]: Three Dimensional Graphics and Realism - Animation; I.6.5 [Simulation and Modeling]: Types of Simulation - Animation

Abstract: Most recent crowd simulation algorithms equip agents with a synthetic vision component for steering. They offer promising perspectives through a more realistic simulation of the way humans navigate according to their perception of the surrounding environment. In this paper, we propose a new perception/motion loop for steering agents along collision-free trajectories that significantly improves the quality of vision-based crowd simulators. In contrast with solutions where agents avoid collisions in a purely reactive (binary) way, we suggest exploring the full range of possible adaptations and retaining the locally optimal one. To this end, we introduce a cost function, based on perceptual variables, which estimates an agent's situation considering both the risk of future collisions and a desired destination. We then compute the partial derivatives of that function with respect to all possible motion adaptations. The agent then adapts its motion by following the gradient. This paper thus has two main contributions: the definition of a general-purpose control scheme for steering synthetic-vision-based agents, and the proposal of cost functions for evaluating the perceived danger of the current situation. We demonstrate improvements in several cases.
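As a rough illustration of the control scheme outlined in the abstract (a cost defined over perceptual variables, partial derivatives with respect to the possible motion adaptations, and a gradient-following update), the Python sketch below runs such a perception/motion step for a single agent. The specific cost terms (closest-approach collision risk, goal-direction and comfort-speed penalties), their weights, and the finite-difference gradient are illustrative assumptions, not the paper's actual formulation; the obstacle list stands in for the perceptual variables that the paper extracts from the agents' synthetic vision.

```python
# Minimal sketch of gradient-based steering (illustrative only, not the
# authors' exact cost function or derivatives).
import math

def collision_risk(speed, theta, obstacles, horizon=5.0):
    """Crude risk term: penalize obstacles whose predicted closest approach
    (within `horizon` seconds) comes near the agent."""
    vx, vy = speed * math.cos(theta), speed * math.sin(theta)
    risk = 0.0
    for (ox, oy, ovx, ovy) in obstacles:          # relative position, obstacle velocity
        rvx, rvy = ovx - vx, ovy - vy             # relative velocity
        rv2 = rvx * rvx + rvy * rvy
        ttc = 0.0 if rv2 < 1e-9 else max(0.0, -(ox * rvx + oy * rvy) / rv2)
        ttc = min(ttc, horizon)
        dx, dy = ox + rvx * ttc, oy + rvy * ttc   # predicted closest approach
        risk += math.exp(-math.hypot(dx, dy))     # closer approach -> higher cost
    return risk

def cost(speed, theta, goal_dir, comfort_speed, obstacles):
    """Illustrative cost: collision risk + deviation from the goal direction
    + deviation from a comfortable walking speed (weights are assumptions)."""
    return (collision_risk(speed, theta, obstacles)
            + 1.0 * (1.0 - math.cos(theta - goal_dir))
            + 0.5 * (speed - comfort_speed) ** 2)

def steer(speed, theta, goal_dir, comfort_speed, obstacles, step=0.1, eps=1e-3):
    """One perception/motion iteration: finite-difference partial derivatives
    of the cost with respect to the two motion adaptations (speed, orientation),
    then a small step along the negative gradient."""
    c0 = cost(speed, theta, goal_dir, comfort_speed, obstacles)
    dc_dv = (cost(speed + eps, theta, goal_dir, comfort_speed, obstacles) - c0) / eps
    dc_dt = (cost(speed, theta + eps, goal_dir, comfort_speed, obstacles) - c0) / eps
    return max(0.0, speed - step * dc_dv), theta - step * dc_dt

if __name__ == "__main__":
    # One agent walking toward +x, with one obstacle ahead moving toward it.
    # The perceived situation is kept frozen while the gradient step is iterated,
    # purely to show the adaptation converging on this single situation.
    v, th = 1.2, 0.0
    obstacles = [(4.0, 0.2, -1.0, 0.0)]          # (rel x, rel y, vel x, vel y)
    for _ in range(20):
        v, th = steer(v, th, goal_dir=0.0, comfort_speed=1.3, obstacles=obstacles)
    print(f"adapted speed = {v:.2f} m/s, orientation = {math.degrees(th):.1f} deg")
```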