Deformation-Aware Neural Intersection Function

Authors: Kao, Chih-Chen; Makowski, Grzegorz; Fujieda, Shin; Harada, Takahiro
Date: 2026-04-20
ISBN: 978-3-03868-299-8
ISSN: 2309-5059
Handle: https://diglib.eg.org/handle/10.2312/egs20261026
DOI: https://doi.org/10.2312/egs.20261026
License: CC-BY-4.0
Keywords: Ray tracing; Neural networks; Voxel
Pages: 4

Abstract: We extend the Locally-Subdivided Neural Intersection Function (LSNIF) to support parameterized deformable and animated geometry. Our approach introduces a rest-space and deformed-space formulation inspired by meshless rendering, allowing ray samples to be mapped back to a canonical space where a single neural network represents geometry consistently across poses without retraining. To maintain accuracy under deformation-aware training, we incorporate scale-invariant distance regression, uncertainty-weighted multi-task learning, and a hybrid positional-grid encoding. The resulting method preserves the compactness and efficiency of LSNIF while enabling robust neural intersection prediction for dynamic geometry.
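The abstract mentions uncertainty-weighted multi-task learning for combining the network's objectives (e.g. distance regression alongside other prediction heads). The paper's exact loss is not given in this record, so the sketch below shows only one common formulation of the general technique — learned homoscedastic uncertainty weighting in the style of Kendall et al. — with hypothetical task names:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses via learned log-variance weights.

    task_losses: list of scalar losses, one per task (e.g. a distance
                 regression loss and a hit/miss classification loss --
                 hypothetical names, not taken from the paper).
    log_vars:    learned log-variance parameters, one per task; each
                 task is down-weighted by its estimated uncertainty.
    """
    total = 0.0
    for loss, log_var in zip(task_losses, log_vars):
        precision = math.exp(-log_var)       # 1 / sigma^2
        total += precision * loss + log_var  # weighted loss + regularizer
    return total

# With zero log-variance, each task contributes its raw loss:
# uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0]) -> 3.0
```

The `log_var` terms act as a regularizer: a task can lower its weight only by paying the `log_var` penalty, so the optimizer balances tasks automatically instead of relying on hand-tuned loss weights.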