Title: ModalNeRF: Neural Modal Analysis and Synthesis for Free-Viewpoint Navigation in Dynamically Vibrating Scenes
Authors: Petitjean, Automne; Poirier-Ginter, Yohan; Tewari, Ayush; Cordonnier, Guillaume; Drettakis, George
Editors: Ritschel, Tobias; Weidlich, Andrea
Date: 2023-06-27
Year: 2023
ISSN: 1467-8659
DOI: 10.1111/cgf.14888 (https://doi.org/10.1111/cgf.14888)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14888
Pages: 13

Abstract: Recent advances in Neural Radiance Fields enable the capture of scenes with motion. Editing that motion, however, remains hard: no existing method allows editing beyond the space of motion present in the original video, nor editing based on physics. We present the first approach that allows physically-based editing of motion in a scene captured with a single hand-held video camera, containing vibrating or periodic motion. We first introduce a Lagrangian representation that describes motion as the displacement of particles, learned while training a radiance field. These particles provide a continuous representation of motion over the sequence, on which we perform modal analysis via a Fourier transform of the particle displacements over time. The extracted modes enable motion synthesis and easy editing of the motion, while inheriting the radiance field's ability for free-viewpoint synthesis of the captured 3D scene. We demonstrate our new method on synthetic and real captured scenes.
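The abstract's core pipeline (track particle displacements, Fourier-transform them over time, treat dominant frequencies as modes, re-synthesize with edited modal amplitudes) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names (`extract_modes`, `synthesize`), the per-frequency energy ranking, and the assumption that all particles share the same dominant frequency bins are illustrative choices, not details from the paper.

```python
import numpy as np

def extract_modes(displacements, num_modes=3):
    """Hypothetical helper: pick dominant temporal frequencies as 'modes'.

    displacements: (T, N, 3) array of N particle displacements over T frames.
    Returns the selected frequency bins and their complex Fourier
    coefficients of shape (num_modes, N, 3).
    """
    spectrum = np.fft.rfft(displacements, axis=0)  # (T//2+1, N, 3), complex
    power = np.abs(spectrum).sum(axis=(1, 2))      # total energy per frequency bin
    power[0] = 0.0                                 # drop the DC (static) component
    top = np.argsort(power)[::-1][:num_modes]      # dominant frequency bins
    return top, spectrum[top]

def synthesize(freq_bins, modes, T, gains=None):
    """Re-synthesize displacements from the selected modes, optionally
    scaling each mode by a user-chosen gain (the motion edit)."""
    if gains is None:
        gains = np.ones(len(freq_bins))
    t = np.arange(T)[:, None, None]                # (T, 1, 1) time index
    out = np.zeros((T,) + modes.shape[1:])
    for k, g, m in zip(freq_bins, gains, modes):
        # Real-signal inverse-DFT contribution of bin k (and its conjugate
        # mirror bin); a Nyquist bin, if selected, is slightly over-scaled,
        # which is fine for a sketch.
        phase = 2.0 * np.pi * k * t / T
        out += g * (2.0 / T) * (m.real * np.cos(phase) - m.imag * np.sin(phase))
    return out

# Usage with tracked displacements of shape (T, N, 3):
# bins, modes = extract_modes(displacements, num_modes=3)
# edited = synthesize(bins, modes, T=displacements.shape[0], gains=[4.0, 1.0, 1.0])
```

Amplifying a single mode's gain corresponds to the kind of motion exaggeration the abstract describes; in the actual method, the modes drive the displacement of particles in the learned radiance field rather than a raw array, which is what enables free-viewpoint rendering of the edited motion.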