Authors: Ma, Li-Ke; Yang, Zeshi; Tong, Xin; Guo, Baining; Yin, KangKang
Editors: Mitra, Niloy; Viola, Ivan
Date issued: 2021-04-09
Year: 2021
ISSN: 1467-8659
DOI: 10.1111/cgf.142630 (https://doi.org/10.1111/cgf.142630)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf142630
Title: Learning and Exploring Motor Skills with Spacetime Bounds
Pages: 251-263
Abstract: Equipping characters with diverse motor skills is the current bottleneck of physics-based character animation. We propose a Deep Reinforcement Learning (DRL) framework that enables physics-based characters to learn and explore motor skills from reference motions. The key insight is to use loose space-time constraints, termed spacetime bounds, to limit the search space in an early-termination fashion. As we rely on the reference only to specify loose spacetime bounds, our learning is more robust with respect to low-quality references. Moreover, spacetime bounds are hard constraints that improve learning of challenging motion segments, which can be ignored by imitation-only learning. We compare our method with state-of-the-art tracking-based DRL methods. We also show how to guide style exploration within the proposed framework.
CCS Concepts: Computing methodologies → Animation; Computing methodologies → Physical simulation; Theory of computation → Reinforcement learning
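The abstract describes limiting the policy search space via early termination whenever the simulated character leaves a loose bound around the reference motion. A minimal sketch of that idea is shown below; the function names, the box-shaped bound, and the `env`/`policy` interface are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def within_spacetime_bounds(sim_pose, ref_pose, bound):
    """Hypothetical check: every coordinate of the simulated pose must stay
    inside a loose box of half-width `bound` around the reference pose."""
    return bool(np.all(np.abs(np.asarray(sim_pose) - np.asarray(ref_pose)) <= bound))

def rollout(env, policy, ref_motion, bound):
    """Roll out one episode, terminating early when spacetime bounds are
    violated, so low-reward regions of the search space are pruned."""
    state = env.reset()
    total_reward = 0.0
    for ref_pose in ref_motion:
        state, reward, done = env.step(policy(state))
        if not within_spacetime_bounds(state, ref_pose, bound):
            break  # early termination: outside the spacetime bound
        total_reward += reward
        if done:
            break
    return total_reward
```

Because the bound is loose, the character may deviate stylistically from the reference while still being hard-constrained away from clearly wrong states, which is what enables the style exploration mentioned in the abstract.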