Learning Camera Control in Dynamic Scenes from Limited Demonstrations

Authors: Hanocka, R.; Assa, J.; Cohen-Or, D.; Giryes, R.
Editors: Hauser, Helwig; Alliez, Pierre
Date: 2022 (available 2022-03-25)
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14444
URI: https://diglib.eg.org:443/handle/10.1111/cgf14444
Pages: 427-437
Keywords: motion planning; animation; control

Abstract: In this work, we present our strategy for camera control in dynamic scenes with multiple people (sports teams). We learn a generic model of the player dynamics offline in simulation, and use only a few sparse demonstrations of a user's camera control policy to learn a reward function that drives camera motion in an ongoing dynamic scene. Key to our approach is a low-dimensional representation of the scene dynamics that is independent of the environment's actions and rewards, which enables learning the reward function from only a small number of examples. We cast the user-specific control objective as an inverse reinforcement learning problem, aiming to learn an expert's intention from a small number of demonstrations. The learned reward function is used in combination with a visual model predictive controller (MPC). Because the scene dynamics model is agnostic to the user-specific reward, the same dynamics model can be reused for different camera control policies. We demonstrate the effectiveness of our method on simulated and real soccer matches.
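
To make the pipeline in the abstract concrete, the sketch below illustrates the general shape of the control loop it describes: a reward-agnostic dynamics model rolled forward under sampled camera actions, scored by a reward learned from demonstrations, inside a receding-horizon (MPC) loop. This is a minimal, hypothetical illustration, not the authors' implementation: the paper uses a learned visual dynamics model and an IRL-recovered reward, whereas here both are toy linear stand-ins, and all names and dimensions (dynamics_model, learned_reward, STATE_DIM, etc.) are assumptions made for the example.

import numpy as np

STATE_DIM = 8       # size of the low-dimensional scene representation (assumed)
ACTION_DIM = 3      # e.g. camera pan/tilt/zoom velocities (assumed)
HORIZON = 10        # MPC planning horizon
N_CANDIDATES = 256  # sampled action sequences evaluated per control step

rng = np.random.default_rng(seed=0)

# Stand-ins for the two learned components the abstract separates:
# a reward-agnostic dynamics model (trained once, offline) and a
# user-specific reward recovered from a few demonstrations via IRL.
A = 0.95 * np.eye(STATE_DIM)                          # toy linear dynamics
B = 0.1 * rng.standard_normal((STATE_DIM, ACTION_DIM))
w = rng.standard_normal(STATE_DIM)                    # toy reward weights

def dynamics_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Predict the next low-dimensional scene state (a learned model in the paper)."""
    return A @ state + B @ action

def learned_reward(state: np.ndarray) -> float:
    """Per-state reward; in the paper this is learned from sparse demonstrations."""
    return float(w @ state)

def mpc_step(state: np.ndarray) -> np.ndarray:
    """Random-shooting MPC: sample action sequences, roll the dynamics model
    forward, score each rollout with the learned reward, and return the first
    action of the best sequence (receding horizon)."""
    candidates = rng.uniform(-1.0, 1.0, size=(N_CANDIDATES, HORIZON, ACTION_DIM))
    best_return, best_action = -np.inf, np.zeros(ACTION_DIM)
    for seq in candidates:
        s, total = state, 0.0
        for a in seq:
            s = dynamics_model(s, a)
            total += learned_reward(s)
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action

# Receding-horizon control loop over a short episode: apply only the
# first action of the best plan, then re-plan from the new state.
state = rng.standard_normal(STATE_DIM)
for t in range(5):
    action = mpc_step(state)
    state = dynamics_model(state, action)
    print(f"t={t} reward={learned_reward(state):+.3f}")

Because the dynamics stand-in never touches the reward, swapping in a different learned_reward (a different user's policy) requires no retraining of the dynamics, mirroring the reuse property the abstract emphasizes.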