Neural Volumetric Level of Detail for Path Tracing

Authors: Stadter, Linda; Hofmann, Nikolai; Stamminger, Marc; Linsen, Lars; Thies, Justus
Date: 2024-09-09
ISBN: 978-3-03868-247-9
DOI: https://doi.org/10.2312/vmv.20241197
Handle: https://diglib.eg.org/handle/10.2312/vmv20241197
Pages: 10
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Volumetric models; Neural networks; Antialiasing; Ray tracing

Abstract: We introduce a neural level-of-detail pipeline for use in a GPU path tracer, based on a sparse volumetric representation derived from neural radiance fields. We pre-compute lighting and occlusion to train a neural radiance field that faithfully captures appearance and shading via image-based optimization. By converting the resulting neural network into an efficiently rendered representation, we eliminate costly network evaluations at runtime and keep performance competitive. When applying our representation to certain areas of the scene, we trade a slight bias, incurred by gradient-based optimization and lossy volumetric conversion, for highly anti-aliased results at low sample counts. This enables virtually noise-free and temporally stable results at low computational cost and without any additional post-processing, such as denoising. We demonstrate the applicability of our method to both individual objects and a challenging outdoor scene composed of highly detailed foliage.
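The abstract describes baking a trained neural radiance field into a sparse volumetric representation that the path tracer can sample directly, avoiding network evaluations at render time. As a minimal sketch of how such a baked proxy might be traversed, and explicitly not the authors' implementation, the C++ below marches a ray through a sparse brick grid of pre-shaded radiance and extinction. The 8³ brick size, 0.05 cell size, hash-map brick storage, nearest-voxel lookup, fixed step size, and the per-voxel (sigma, RGB radiance) payload are all illustrative assumptions; the paper's actual representation may differ.

```cpp
// A minimal sketch, NOT the paper's implementation: ray marching a baked
// sparse emission/extinction volume used as a level-of-detail proxy.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct Vec3 { float x = 0, y = 0, z = 0; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

constexpr int B = 8;  // voxels per brick edge (assumed)

struct Brick {
    float sigma[B * B * B] = {};    // extinction per voxel (assumed payload)
    Vec3 radiance[B * B * B] = {};  // pre-shaded outgoing radiance per voxel
};

struct SparseVolume {
    float cell = 0.05f;                          // world-space voxel edge (assumed)
    std::unordered_map<uint64_t, Brick> bricks;  // only occupied bricks are stored

    static uint64_t key(int bx, int by, int bz) {
        auto u = [](int v) { return uint64_t(uint32_t(v)) & 0x1FFFFF; };
        return (u(bx) << 42) | (u(by) << 21) | u(bz);  // 21 bits per axis
    }

    // Nearest-voxel lookup; empty space simply has no brick stored.
    bool sample(Vec3 p, float& s, Vec3& L) const {
        auto fdiv = [](int v) { return v >= 0 ? v / B : (v - B + 1) / B; };  // floor division
        auto fmod = [](int v) { return ((v % B) + B) % B; };                 // floor modulo
        int vx = int(std::floor(p.x / cell));
        int vy = int(std::floor(p.y / cell));
        int vz = int(std::floor(p.z / cell));
        auto it = bricks.find(key(fdiv(vx), fdiv(vy), fdiv(vz)));
        if (it == bricks.end()) return false;
        int i = (fmod(vx) * B + fmod(vy)) * B + fmod(vz);
        s = it->second.sigma[i];
        L = it->second.radiance[i];
        return true;
    }
};

// Emission/absorption ray march. Because lighting is pre-baked into the
// voxels, the result is deterministic per ray: no stochastic in-scattering
// samples are needed, which is what makes such a proxy noise-free at low
// sample counts.
Vec3 march(const SparseVolume& vol, Vec3 o, Vec3 d, float tMax, float dt) {
    Vec3 L;
    float T = 1.0f;  // transmittance accumulated so far
    for (float t = 0.5f * dt; t < tMax && T > 1e-3f; t += dt) {
        float s;
        Vec3 Le;
        if (!vol.sample(add(o, mul(d, t)), s, Le)) continue;  // skip empty space
        float a = 1.0f - std::exp(-s * dt);                   // opacity of this segment
        L = add(L, mul(Le, T * a));
        T *= 1.0f - a;
    }
    return L;  // a full integrator would also composite T over the background
}

int main() {
    SparseVolume vol;
    Brick& b = vol.bricks[SparseVolume::key(0, 0, 0)];  // one occupied brick
    for (int i = 0; i < B * B * B; ++i) {
        b.sigma[i] = 20.0f;
        b.radiance[i] = {0.2f, 0.5f, 0.3f};
    }
    Vec3 c = march(vol, {0.2f, 0.2f, -0.5f}, {0.0f, 0.0f, 1.0f}, 2.0f, 0.01f);
    std::printf("baked radiance along ray: %.3f %.3f %.3f\n", c.x, c.y, c.z);
    return 0;
}
```

The hash-map brick grid above merely stands in for whatever sparse GPU structure the paper actually uses; likewise, a production integrator would add empty-space skipping and composite the remaining transmittance over the rest of the path-traced scene.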