Title: Adaptive Spatio-Temporal 3D Gaussian Splatting for Scenes with Oscillatory Motion
Authors: Tzathas, Petros; Hu, Jeffrey; Meuleman, Andreas; Cordonnier, Guillaume; Drettakis, George; Masia, Belen; Thies, Justus
Date: 2026-04-21 (issued 2026)
ISSN: 1467-8659
Handle: https://diglib.eg.org/handle/10.1111/cgf70410
DOI: https://doi.org/10.1111/cgf.70410
License: CC-BY-4.0
Keywords: 3D Gaussian Splatting; Dynamic scenes; Computer graphics
Pages: 15

Abstract: Our goal is to reconstruct scenes with stochastic, incoherent motion, such as leaves moving in the wind. Previous dynamic 3D Gaussian Splatting solutions either represent motion implicitly with neural networks, achieving good quality but lower framerates, or explicitly with functions, which often require longer training and produce lower quality. We propose an explicit method that introduces adaptive space-time densification and smoother optimization. Our densification strategy relies on error moments to guide primitive splitting, while keyframes are refined based on the variance of the error. To improve optimization from monocular video, we introduce a weighted Adam approach based on primitive visibility. Finally, we introduce an image-driven as-rigid-as-possible regularization to handle independent motion of similar-looking objects. Our method achieves higher quality than previous explicit approaches and a significantly higher rendering framerate.
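
Note: the abstract does not detail how the visibility-weighted Adam update works. The sketch below is only an illustrative assumption, not the authors' implementation: it scales each primitive's Adam step by a per-primitive visibility weight (here imagined as the fraction of recent training views in which the primitive was rasterized). All names (WeightedAdam, visibility) are hypothetical.

    # Minimal sketch of a visibility-weighted Adam step (assumed interpretation).
    import numpy as np

    class WeightedAdam:
        """Plain Adam whose per-parameter step is scaled by a visibility weight."""

        def __init__(self, n_params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
            self.lr, self.b1, self.b2, self.eps = lr, betas[0], betas[1], eps
            self.m = np.zeros(n_params)   # first-moment estimate
            self.v = np.zeros(n_params)   # second-moment estimate
            self.t = 0                    # step counter

        def step(self, params, grads, visibility):
            """visibility: per-primitive weights in [0, 1], e.g. how often the
            primitive was visible in recent monocular frames (assumption)."""
            self.t += 1
            self.m = self.b1 * self.m + (1 - self.b1) * grads
            self.v = self.b2 * self.v + (1 - self.b2) * grads ** 2
            m_hat = self.m / (1 - self.b1 ** self.t)   # bias correction
            v_hat = self.v / (1 - self.b2 ** self.t)
            # Scale the step by visibility so rarely observed primitives,
            # whose gradients come from few views, move more cautiously.
            return params - self.lr * visibility * m_hat / (np.sqrt(v_hat) + self.eps)

Usage would mirror a standard Adam loop, with the extra visibility array passed per step; the only design choice illustrated is weighting the update magnitude rather than the gradient itself.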