Title: Physically Based Video Editing
Authors: Bazin, Jean-Charles; Plüss (Kuster), Claudia; Yu, Guo; Martin, Tobias; Jacobson, Alec; Gross, Markus
Editors: Eitan Grinspun, Bernd Bickel, Yoshinori Dobashi
Date issued: 2016-10-11
Year: 2016
ISSN: 1467-8659
DOI: 10.1111/cgf.13039 (https://doi.org/10.1111/cgf.13039)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13039
Pages: 421-429
CCS: I.3.3 [Computer Graphics]: Picture/Image Generation - Line and curve generation

Abstract: Convincing manipulation of objects in live action videos is a difficult and often tedious task. Skilled video editors achieve this with the help of modern professional tools, but complex motions may still lack physical realism because existing tools do not consider the laws of physics. On the other hand, physically based simulation promises a high degree of realism, but it typically produces a virtual 3D scene animation rather than an edited version of an input live action video. We propose a framework that combines video editing and physics-based simulation. Our tool assists unskilled users in editing an input image or video while respecting the laws of physics and leveraging the image content. We first fit a physically based simulation that approximates the object's motion in the input video. We then allow the user to edit the physical parameters of the object, generating a new physical behavior for it. The core of our work is the formulation of an image-aware constraint within physics simulations. This constraint manifests as external control forces that guide the object so as to encourage proper texturing at every frame while producing physically plausible motions. We demonstrate the generality of our method on a variety of physical interactions: rigid motion, multi-body collisions, cloth, and elastic bodies.