Learning Camera Control in Dynamic Scenes from Limited Demonstrations

Date
2022
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd
Abstract
In this work, we present a strategy for camera control in dynamic scenes containing multiple people (sports teams). We learn a generic model of the player dynamics offline in simulation, and use only a few sparse demonstrations of a user's camera control policy to learn a reward function that drives camera motion in an ongoing dynamic scene. Key to our approach is a low-dimensional representation of the scene dynamics that is independent of the environment's actions and rewards, which enables learning the reward function from only a small number of examples. We cast the user-specific control objective as an inverse reinforcement learning (IRL) problem, aiming to recover an expert's intention from a small number of demonstrations. The learned reward function is then used in combination with a visual model predictive controller (MPC). Because the scene dynamics model is agnostic to the user-specific reward, the same dynamics model can be reused for different camera control policies. We demonstrate the effectiveness of our method on simulated and real soccer matches.
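The pipeline the abstract describes — a reward-agnostic dynamics model combined with a reward learned from demonstrations, queried by a sampling-based MPC — can be sketched as follows. This is a hedged toy illustration, not the paper's implementation: `dynamics` and `reward` below are hypothetical 1-D stand-ins for the learned scene-dynamics model and the IRL-recovered reward, and `mpc_step` is plain random-shooting MPC.

```python
import random

# Toy stand-in for the learned, reward-agnostic scene-dynamics model:
# the (1-D) camera state moves halfway toward the commanded action.
def dynamics(state, action):
    return state + 0.5 * (action - state)

# Toy stand-in for the reward learned from a user's demonstrations:
# prefer camera states near a "demonstrated" target position.
def reward(state, target=1.0):
    return -abs(state - target)

def mpc_step(state, horizon=5, n_samples=256, seed=0):
    """Random-shooting MPC: sample candidate action sequences, roll each
    through the dynamics model, score with the learned reward, and return
    the first action of the best-scoring plan."""
    rng = random.Random(seed)
    best_return, best_first_action = float("-inf"), 0.0
    for _ in range(n_samples):
        plan = [rng.uniform(-2.0, 2.0) for _ in range(horizon)]
        s, total = state, 0.0
        for a in plan:
            s = dynamics(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_first_action = total, plan[0]
    return best_first_action

# Closed-loop control: replan at every step, execute only the first action.
state = 0.0
for _ in range(10):
    state = dynamics(state, mpc_step(state))
```

The separation the abstract emphasizes is visible here: `dynamics` never sees the reward, so swapping in a different learned reward yields a different camera policy without retraining the dynamics model.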

        
@article{10.1111:cgf.14444,
  journal   = {Computer Graphics Forum},
  title     = {{Learning Camera Control in Dynamic Scenes from Limited Demonstrations}},
  author    = {Hanocka, R. and Assa, J. and Cohen-Or, D. and Giryes, R.},
  year      = {2022},
  publisher = {© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14444}
}