Title: Efficient Rendering of Participating Media for Multiple Viewpoints
Authors: Stojanovic, Robert; Weinrauch, Alexander; Tatzgern, Wolfgang; Kurz, Andreas; Steinberger, Markus
Editors: Bikker, Jacco; Gribble, Christiaan
Date issued: 2023 (available 2023-06-25)
ISBN: 978-3-03868-229-5
ISSN: 2079-8687
DOI: https://doi.org/10.2312/hpg.20231136
Handle: https://diglib.eg.org:443/handle/10.2312/hpg20231136
Pages: 55-64 (10 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Rendering; Ray tracing
Keywords: Computing methodologies; Rendering; Ray tracing

Abstract: Achieving realism in modern games requires the integration of participating media effects such as fog, dust, and smoke. However, due to the complex nature of scattering and partial occlusions within these media, real-time rendering of high-quality participating media remains a computational challenge. To address this challenge, traditional approaches to real-time participating media rendering store temporary results in a view-aligned grid before ray marching through these cached values. In this paper, we investigate alternative hybrid world- and view-aligned caching methods that allow intermediate computations to be shared across cameras in a scene. This approach is particularly relevant for multi-camera setups, such as stereo rendering for VR and AR, local split-screen games, or cloud-based rendering for game streaming, where a large number of players may be in the same location. Our approach relies on a view-aligned grid for near-field computations, which enables us to capture high-frequency shadows in front of a viewer. Additionally, we use a world-space caching structure that selectively activates distant computations based on each viewer's visibility, allowing computations to be shared while maintaining high visual quality. Our evaluation demonstrates computational savings of 50% or more without compromising visual quality.
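
The abstract outlines a hybrid caching scheme: a per-camera view-aligned grid for the near field and a shared world-space cache for distant media. The following is a minimal illustrative sketch of that idea, not the authors' implementation; all names (WorldCache, marchRay, the near-field callback) and the specific accumulation details are assumptions made for illustration only.

```cpp
// Illustrative sketch (not the paper's implementation) of hybrid
// participating-media caching: each camera ray marches a private
// view-aligned near-field grid, then switches to a world-space cache
// shared by all cameras. All names and data layouts are hypothetical.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
inline Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Shared world-space cache: a coarse voxel grid of cached in-scattering and
// extinction. A cell is only evaluated (activated) if some camera can see it;
// afterwards every camera reuses the cached result.
struct WorldCacheCell { float inscatter = 0.0f; float extinction = 0.0f; bool active = false; };

struct WorldCache {
    int res;                          // cells per axis (assumed cubic grid)
    float cellSize;                   // world-space size of one cell
    Vec3 origin;                      // world-space corner of the grid
    std::vector<WorldCacheCell> cells;

    WorldCacheCell& cellAt(Vec3 p) {
        auto clampIdx = [&](float v) {
            int i = static_cast<int>(v / cellSize);
            return i < 0 ? 0 : (i >= res ? res - 1 : i);
        };
        int ix = clampIdx(p.x - origin.x);
        int iy = clampIdx(p.y - origin.y);
        int iz = clampIdx(p.z - origin.z);
        return cells[(iz * res + iy) * res + ix];
    }
};

// Per-camera view-aligned near-field grid, stood in for by a callback that
// returns (inscatter, extinction) at a given distance along the ray.
struct NearFieldSample { float inscatter; float extinction; };
using NearFieldGrid = NearFieldSample (*)(float distanceAlongRay);

// March one primary ray: the near field uses the camera's own grid, the far
// field samples the shared world cache. Returns accumulated in-scattering.
float marchRay(Vec3 origin, Vec3 dir, float maxDist, float nearFieldDist,
               float step, NearFieldGrid nearField, WorldCache& cache) {
    float radiance = 0.0f;
    float transmittance = 1.0f;
    for (float t = 0.5f * step; t < maxDist && transmittance > 1e-3f; t += step) {
        float inscatter, extinction;
        if (t < nearFieldDist) {
            NearFieldSample s = nearField(t);        // per-camera, view-aligned
            inscatter = s.inscatter;
            extinction = s.extinction;
        } else {
            WorldCacheCell& c = cache.cellAt(add(origin, mul(dir, t)));
            if (!c.active) { c.active = true; /* schedule cell for evaluation */ }
            inscatter = c.inscatter;                 // shared across cameras
            extinction = c.extinction;
        }
        float segTransmittance = std::exp(-extinction * step);
        // Energy-conserving accumulation of in-scattering over the segment.
        radiance += transmittance * inscatter * (1.0f - segTransmittance);
        transmittance *= segTransmittance;
    }
    return radiance;
}
```

The point the sketch tries to convey is that far-field cells are evaluated at most once and then reused by every camera whose rays reach them, which is where the sharing of computation described in the abstract comes from, while the near-field grid remains per camera to preserve high-frequency shadows close to each viewer.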