Title: Data Parallel Ray Tracing of Massive Scenes based on Neural Proxy
Authors: Xu, Shunkang; Xu, Xiang; Xu, Yanning; Wang, Lu
Editors: Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Date issued: 2024-10-13
ISBN: 978-3-03868-250-9
DOI: https://doi.org/10.2312/pg.20241287
Handle: https://diglib.eg.org/handle/10.2312/pg20241287
Pages: 13
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Computer graphics; Ray tracing; Neural networks

Abstract: Data-parallel ray tracing is an important method for rendering massive scenes that exceed local memory. However, its performance depends heavily on bandwidth, owing to the substantial ray data transferred during rendering. In this paper, we advance the use of neural geometry representations in data-parallel rendering to reduce ray-forwarding and intersection overheads. To this end, we introduce a lightweight geometric neural representation, which we call a "neural proxy." Using our neural proxies, we propose an efficient data-parallel ray tracing framework that significantly reduces ray transmission and intersection overheads. Compared to state-of-the-art approaches, our method achieves a 2.29∼3.36× speedup with almost imperceptible image quality loss.
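To make the abstract's idea concrete, the following is a minimal illustrative sketch (not the paper's actual architecture) of how a "neural proxy" could be used in data-parallel ray tracing: a tiny MLP predicts, per ray, the probability of intersecting a remote node's sub-scene, and only rays predicted to hit are forwarded over the network. The class name `NeuralProxy`, the 6-float ray encoding, and all layer sizes are hypothetical choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralProxy:
    """Hypothetical stand-in for a lightweight geometric neural
    representation: a two-layer MLP mapping a ray (origin, direction)
    to the probability that it hits a remote sub-scene."""

    def __init__(self, in_dim=6, hidden=32):
        # Illustrative random weights; a real proxy would be trained
        # against the remote sub-scene's geometry.
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, 1)) * 0.1
        self.b2 = np.zeros(1)

    def hit_probability(self, rays):
        # rays: (N, 6) array of [origin_xyz, direction_xyz] per ray
        h = np.maximum(rays @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        logits = h @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-logits[:, 0]))     # sigmoid -> [0, 1]

def rays_to_forward(proxy, rays, threshold=0.5):
    """Forward only rays the proxy predicts are likely to intersect the
    remote sub-scene; predicted misses are resolved locally, saving the
    transfer and remote intersection cost the abstract describes."""
    p = proxy.hit_probability(rays)
    return rays[p >= threshold]

proxy = NeuralProxy()
rays = rng.standard_normal((128, 6))
forwarded = rays_to_forward(proxy, rays)
```

The design point this sketches is the bandwidth saving: a node pays a cheap local MLP evaluation per ray instead of unconditionally shipping every ray to the node that owns the geometry.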