Search Results

Now showing 1 - 10 of 24
  • Item
    Guiding Light Trees for Many-Light Direct Illumination
    (The Eurographics Association, 2023) Hamann, Eric; Jung, Alisa; Dachsbacher, Carsten; Babaei, Vahid; Skouras, Melina
    Path guiding techniques reduce the variance in path tracing by reusing knowledge from previous samples to build adaptive sampling distributions. The Practical Path Guiding (PPG) approach stores and iteratively refines an approximation of the incident radiance field in a spatio-directional data structure that allows sampling the incident radiance. However, due to the limited resolution in both spatial and directional dimensions, this discrete approximation is not able to accurately capture a large number of very small lights. We present an emitter sampling technique that integrates into the PPG framework and guides next event estimation (NEE) with a global light tree and adaptive tree cuts. In scenes with many lights, our technique significantly reduces the RMSE compared to PPG with uniform NEE, while adding close to no overhead in scenes with few light sources. The results show that our technique can also aid the incident radiance learning of PPG in scenes with difficult visibility.
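    A minimal sketch of the light-tree sampling idea behind such guided NEE, assuming a plain binary tree over point lights and a simple intensity-over-squared-distance importance heuristic; the paper's adaptive tree cuts and the coupling with the PPG data structure are not reproduced here:
      import random

      class LightNode:
          def __init__(self, lights=None, left=None, right=None):
              self.left, self.right = left, right
              if lights is not None:                  # leaf: store the lights directly
                  self.lights = lights
                  self.intensity = sum(l["intensity"] for l in lights)
                  self.position = lights[0]["position"]
              else:                                   # inner node: aggregate the children
                  self.lights = None
                  self.intensity = left.intensity + right.intensity
                  self.position = tuple((a + b) / 2 for a, b in
                                        zip(left.position, right.position))

      def importance(node, shading_point):
          # crude cluster importance: total intensity over squared distance
          d2 = sum((p - q) ** 2 for p, q in zip(node.position, shading_point)) + 1e-6
          return node.intensity / d2

      def sample_light(node, shading_point, pdf=1.0):
          """Traverse the tree, picking children proportionally to their importance."""
          while node.lights is None:
              wl = importance(node.left, shading_point)
              wr = importance(node.right, shading_point)
              pl = wl / (wl + wr)
              if random.random() < pl:
                  node, pdf = node.left, pdf * pl
              else:
                  node, pdf = node.right, pdf * (1.0 - pl)
          light = random.choice(node.lights)          # uniform within the leaf
          return light, pdf / len(node.lights)

      # usage: two single-light leaves under one root
      lights = [{"position": (0.0, 2.0, 0.0), "intensity": 5.0},
                {"position": (3.0, 1.0, 0.0), "intensity": 1.0}]
      root = LightNode(left=LightNode(lights=[lights[0]]),
                       right=LightNode(lights=[lights[1]]))
      print(sample_light(root, (0.0, 0.0, 0.0)))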
  • Item
    Rendering 2020 DL Track: Frontmatter
    (The Eurographics Association, 2020) Dachsbacher, Carsten; Pharr, Matt; Dachsbacher, Carsten and Pharr, Matt
  • Item
    Perceptually Guided Automatic Parameter Optimization for Interactive Visualization
    (The Eurographics Association, 2023) Opitz, Daniel; Zirr, Tobias; Dachsbacher, Carsten; Tessari, Lorenzo; Guthe, Michael; Grosch, Thorsten
    We propose a new reference-free method for automatically optimizing the parameters of visualization techniques such that the perception of visual structures is improved. Manual tuning may require not only domain knowledge in the field of the analyzed data, but also deep knowledge of the visualization techniques, and thus often becomes challenging as the number of parameters that impact the result grows. To avoid this laborious and difficult task, we first derive an image metric that models the loss of perceived information in the processing of a displayed image by a human observer; good visualization parameters minimize this metric. Our model is loosely based on quantitative studies in the fields of perception and biology covering visual masking, photoreceptor sensitivity, and local adaptation. We then pair our metric with a generic parameter tuning algorithm to arrive at an automatic optimization method that is oblivious to the concrete relationship between parameter sets and visualization. We demonstrate our method for several volume visualization techniques, where visual clutter, visibility of features, and illumination are often hard to balance. Since the metric can be efficiently computed using image transformations, it can be applied to many visualization techniques and problem settings in a unified manner, including continuous optimization during interactive visual exploration. We also evaluate the effectiveness of our approach in a user study that validates the improved perception of visual features in results optimized using our model of perception.
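    A minimal sketch of such a tuning loop, with plain random search standing in for the generic parameter tuner and made-up stand-in functions for the renderer and the metric; the authors' perception-based metric (visual masking, photoreceptor sensitivity, local adaptation) is not reproduced:
      import random

      def autotune(render, metric, param_ranges, iterations=200, seed=0):
          """Black-box search: sample parameter sets, keep the one minimizing the metric."""
          rng = random.Random(seed)
          best_params, best_loss = None, float("inf")
          for _ in range(iterations):
              params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
              loss = metric(render(params))
              if loss < best_loss:
                  best_params, best_loss = params, loss
          return best_params, best_loss

      # hypothetical stand-ins; a real pipeline would plug in the volume renderer
      # and the perceptual image metric here
      def render(params):
          return [params["opacity"] * 0.5, params["ambient"]]      # fake "image"

      def metric(image):
          return abs(image[0] - 0.3) + abs(image[1] - 0.2)          # fake perceptual loss

      print(autotune(render, metric, {"opacity": (0.0, 1.0), "ambient": (0.0, 1.0)}))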
  • Item
    Temporal Sample Reuse for Next Event Estimation and Path Guiding for Real-Time Path Tracing
    (The Eurographics Association, 2020) Dittebrandt, Addis; Hanika, Johannes; Dachsbacher, Carsten; Dachsbacher, Carsten and Pharr, Matt
    Good importance sampling is crucial for real-time path tracing where only low sample budgets are possible. We present two efficient sampling techniques tailored for massively parallel GPU path tracing which improve next event estimation (NEE) for rendering with many light sources and sampling of indirect illumination. As sampling densities need to vary spatially, we use an octree structure in world space and introduce algorithms to continuously adapt the partitioning and distribution of the sampling budget. Both sampling techniques exploit temporal coherence by reusing samples from the previous frame: For NEE we collect sampled, unoccluded light sources and show how to deduplicate, but also diffuse this information to efficiently sample light sources in the subsequent frame. For sampling indirect illumination, we present a compressed directional quadtree structure which is iteratively adapted towards high-energy directions using samples from the previous frame. The updates and rebuilding of all data structures take about 1 ms in our test scenes, and add about 6 ms at 1080p to the path tracing time compared to using state-of-the-art light hierarchies and BRDF sampling. We show that this additional effort reduces noise in terms of mean squared error by at least one order of magnitude in many situations.
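    A minimal sketch of the temporal light-sample reuse idea for NEE, assuming a uniform grid instead of the paper's adaptive octree and a fixed reuse probability; the deduplication/diffusion of samples and the directional quadtree for indirect illumination are not reproduced:
      import random

      def cell_key(p, cell_size=1.0):
          # hash a world-space position into a uniform grid cell (stand-in for the octree)
          return tuple(int(c // cell_size) for c in p)

      class TemporalLightCache:
          def __init__(self, num_lights):
              self.num_lights = num_lights
              self.prev = {}   # cell -> light ids found unoccluded in the previous frame
              self.curr = {}

          def sample(self, p, rng, reuse_prob=0.75):
              """Prefer lights that were visible from this cell in the previous frame."""
              cached = self.prev.get(cell_key(p))
              if cached and rng.random() < reuse_prob:
                  light = rng.choice(sorted(cached))
                  pdf = reuse_prob / len(cached)      # a full estimator would mix both pdfs
              else:
                  light = rng.randrange(self.num_lights)
                  pdf = (1.0 - reuse_prob) / self.num_lights
              return light, pdf

          def record_unoccluded(self, p, light):
              self.curr.setdefault(cell_key(p), set()).add(light)

          def next_frame(self):
              self.prev, self.curr = self.curr, {}

      # usage: a light found visible in frame N is preferred in frame N+1
      rng = random.Random(0)
      cache = TemporalLightCache(num_lights=1000)
      cache.record_unoccluded((0.3, 0.5, 0.2), light=42)
      cache.next_frame()
      print(cache.sample((0.35, 0.55, 0.25), rng))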
  • Item
    Minimal Convolutional Neural Networks for Temporal Anti Aliasing
    (The Eurographics Association, 2023) Herveau, Killian; Piochowiak, Max; Dachsbacher, Carsten; Bikker, Jacco; Gribble, Christiaan
    Existing deep learning methods for performing temporal anti-aliasing (TAA) in rendering are either closed source or rely on upsampling networks with a large operation count that are expensive to evaluate. We propose a simple deep learning architecture for TAA combining only a few common primitives, easy to assemble and to change for application needs. We use a fully convolutional neural network architecture with recurrent temporal feedback, motion vectors and depth values as input and show that a simple network can produce satisfactory results. Our architecture template, for which we provide code, introduces a method that adapts to different temporal subpixel offsets for accumulation without increasing the operation count. To this end, convolutional layers cycle through a set of different weights per temporal subpixel offset while their operations remain fixed. We analyze the effect of this method on image quality and present different tradeoffs for adapting the architecture. We show that our simple network performs remarkably better than variance clipping TAA, eliminating both flickering and ghosting without performing upsampling.
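    A minimal sketch of the weight-cycling idea with a single toy 3x3 convolution in NumPy, assuming an 8-frame jitter sequence; this only illustrates selecting one weight set per temporal subpixel offset at constant operation count, it is not the authors' network:
      import numpy as np

      class CycledConv2D:
          """3x3 convolution + ReLU that cycles through one weight set per temporal
          subpixel offset; the offset selects the weights but not the operation count."""
          def __init__(self, num_offsets, in_ch, out_ch, rng):
              self.weights = rng.standard_normal((num_offsets, out_ch, in_ch, 3, 3)) * 0.1

          def __call__(self, x, offset_index):
              w = self.weights[offset_index % len(self.weights)]
              pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))
              out = np.zeros((w.shape[0], x.shape[1], x.shape[2]))
              for o in range(w.shape[0]):
                  for i in range(x.shape[0]):
                      for dy in range(3):
                          for dx in range(3):
                              out[o] += w[o, i, dy, dx] * pad[i, dy:dy + x.shape[1],
                                                              dx:dx + x.shape[2]]
              return np.maximum(out, 0.0)

      # usage: 4 input channels (e.g. color + depth), weights chosen by the jitter index
      rng = np.random.default_rng(0)
      conv = CycledConv2D(num_offsets=8, in_ch=4, out_ch=4, rng=rng)
      frame = rng.standard_normal((4, 32, 32))
      print(conv(frame, offset_index=3).shape)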
  • Item
    Path Guiding with Vertex Triplet Distributions
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Schüßler, Vincent; Hanika, Johannes; Jung, Alisa; Dachsbacher, Carsten; Ghosh, Abhijeet; Wei, Li-Yi
    Good importance sampling strategies are decisive for the quality and robustness of photorealistic image synthesis with Monte Carlo integration. Path guiding approaches use transport paths sampled by an existing base sampler to build and refine a guiding distribution. This distribution then guides subsequent paths in regions that are otherwise hard to sample. We observe that all terms in the measurement contribution function sampled during path construction depend on at most three consecutive path vertices. We thus propose to build a 9D guiding distribution over vertex triplets that adapts to the full measurement contribution with a 9D Gaussian mixture model (GMM). For incremental path sampling, we query the model for the last two vertices of a path prefix, resulting in a 3D conditional distribution with which we sample the next vertex along the path. To make this approach scalable, we partition the scene with an octree and learn a local GMM for each leaf separately. In a learning phase, we sample paths using the current guiding distribution and collect triplets of path vertices. We resample these triplets online and keep only a fixed-size subset in reservoirs. After each progression, we obtain new GMMs from triplet samples by an initial hard clustering followed by expectation maximization. Since we model 3D vertex positions, our guiding distribution naturally extends to participating media. In addition, the symmetry in the GMM allows us to query it for paths constructed by a light tracer. Therefore, our method can guide both a path tracer and a light tracer from a jointly learned guiding distribution.
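    A minimal sketch of querying such a triplet mixture for the next vertex, assuming the 9D Gaussian components are already learned; the octree partitioning, reservoir resampling and EM fitting are not reproduced. Conditioning each component on the first six dimensions (the last two vertices of the path prefix) yields the 3D distribution from which the next vertex is drawn:
      import numpy as np

      def conditional_gaussian(mean, cov, x_given, d_given=6):
          """Condition a 9D Gaussian (vertex triplet) on its first 6 dimensions."""
          mu_a, mu_b = mean[:d_given], mean[d_given:]
          S_aa, S_ab = cov[:d_given, :d_given], cov[:d_given, d_given:]
          S_ba, S_bb = cov[d_given:, :d_given], cov[d_given:, d_given:]
          mu_c = mu_b + S_ba @ np.linalg.solve(S_aa, x_given - mu_a)
          S_c = S_bb - S_ba @ np.linalg.solve(S_aa, S_ab)
          return mu_c, S_c

      def sample_next_vertex(weights, means, covs, prev_two, rng):
          """Reweight components by the 6D prefix density, then sample the 3D conditional."""
          post = []
          for w, m, C in zip(weights, means, covs):
              S_aa, diff = C[:6, :6], prev_two - m[:6]
              quad = diff @ np.linalg.solve(S_aa, diff)
              post.append(w * np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(S_aa)))
          post = np.array(post) / np.sum(post)
          k = rng.choice(len(weights), p=post)
          mu_c, S_c = conditional_gaussian(means[k], covs[k], prev_two)
          return rng.multivariate_normal(mu_c, S_c)

      # usage with a toy two-component mixture (identity covariances)
      rng = np.random.default_rng(1)
      means, covs = [np.zeros(9), np.ones(9)], [np.eye(9), np.eye(9)]
      print(sample_next_vertex([0.5, 0.5], means, covs, np.zeros(6), rng))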
  • Item
    Bridge Sampling for Connections via Multiple Scattering Events
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Schüßler, Vincent; Hanika, Johannes; Dachsbacher, Carsten; Garces, Elena; Haines, Eric
    Explicit sampling of and connecting to light sources is often essential for reducing variance in Monte Carlo rendering. In dense, forward-scattering participating media, its benefit declines, as significant transport happens over longer multiple-scattering paths around the straight connection to the light. Sampling these paths is challenging, as their contribution is shaped by the product of reciprocal squared distance terms and the phase functions. Previous work demonstrates that sampling several of these terms jointly is crucial. However, these methods are tied to low-order scattering or struggle with highly peaked phase functions. We present a method for sampling a bridge: a subpath of arbitrary vertex count connecting two vertices. Its probability density is proportional to all phase functions at inner vertices and reciprocal squared distance terms. To achieve this, we importance sample the phase functions first, and subsequently all distances at once. For the latter, we sample an independent, preliminary distance for each edge of the bridge, and afterwards scale the bridge such that it matches the connection distance. The scale factor can be marginalized out analytically to obtain the probability density of the bridge. This approach leads to a simple algorithm and can construct bridges of any vertex count. For the case of one or two inserted vertices, we also show an alternative without scaling or marginalization. For practical path sampling, we present a method to sample the number of bridge vertices whose distribution depends on the connection distance, the phase function, and the collision coefficient. While our importance sampling treats media as homogeneous, we demonstrate its effectiveness on heterogeneous media.
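    A minimal sketch of constructing one such bridge, assuming an isotropic phase function (standing in for the real, possibly peaked one) and a homogeneous collision coefficient; the preliminary chain is rotated and scaled onto the connection segment, but the bridge pdf with the scale factor marginalized out analytically is not computed here:
      import numpy as np

      def rotation_between(a, b):
          """Rotation matrix mapping unit vector a onto unit vector b."""
          v, c = np.cross(a, b), float(np.dot(a, b))
          if c < -0.999999:                     # nearly opposite: rotate pi about a perpendicular axis
              e = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
              n = np.cross(a, e)
              n /= np.linalg.norm(n)
              return 2.0 * np.outer(n, n) - np.eye(3)
          V = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
          return np.eye(3) + V + V @ V / (1.0 + c)

      def bridge_sample(x0, x1, n_inner, sigma_t, rng):
          """Sample n_inner inner vertices between x0 and x1: isotropic directions,
          preliminary exponential distances, then rotate and scale the whole chain
          so its endpoint lands exactly on x1."""
          dirs = rng.standard_normal((n_inner + 1, 3))
          dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
          dists = rng.exponential(1.0 / sigma_t, n_inner + 1)    # preliminary free-flight distances
          steps = dirs * dists[:, None]
          end = steps.sum(axis=0)                                # preliminary endpoint relative to x0
          conn = x1 - x0
          R = rotation_between(end / np.linalg.norm(end), conn / np.linalg.norm(conn))
          s = np.linalg.norm(conn) / np.linalg.norm(end)         # common scale factor
          return x0 + s * np.cumsum(steps[:-1] @ R.T, axis=0)    # inner vertices only

      # usage: a bridge with three inner vertices over a connection of length 5
      rng = np.random.default_rng(2)
      print(bridge_sample(np.zeros(3), np.array([0.0, 0.0, 5.0]), n_inner=3, sigma_t=1.0, rng=rng))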
  • Item
    Procedural Physically based BRDF for Real-Time Rendering of Glints
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Chermain, Xavier; Sauvage, Basile; Dischler, Jean-Michel; Dachsbacher, Carsten; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
    Physically based rendering of glittering surfaces is a challenging problem in computer graphics. Several off-line solutions have been proposed, but none is dedicated to high-performance graphics. In this work, we propose a novel physically based BRDF for real-time rendering of glints. Our model can reproduce the appearance of sparkling materials (rocks, rough plastics, glitter fabrics, etc.). Compared to the previous real-time method [ZK16], which is not physically based, our BRDF uses normalized NDFs and converges to the standard microfacet BRDF [CT82] for a large number of microfacets. Our method procedurally computes NDFs with hundreds of sharp lobes. It relies on a dictionary of 1D marginal distributions: at each location two of them are randomly picked and multiplied (to obtain an NDF), rotated (to increase the variety), and scaled (to control standard deviation/roughness). The dictionary is multiscale, does not depend on roughness, and has a low memory footprint (less than 1 MiB).
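    A minimal sketch of the dictionary idea, assuming a toy set of piecewise-constant 1D marginals and hash-based per-cell selection; the paper's multiscale dictionary construction and its exact normalization are not reproduced:
      import math, random

      NUM_DISTS, BINS = 16, 64
      random.seed(0)
      # toy dictionary of spiky 1D marginal distributions over slopes in [-1, 1]
      dictionary = []
      for _ in range(NUM_DISTS):
          d = [0.05 + (2.0 if random.random() < 0.1 else 0.0) * random.random()
               for _ in range(BINS)]
          norm = sum(d) * (2.0 / BINS)               # normalize to integrate to one
          dictionary.append([v / norm for v in d])

      def eval_marginal(dist, x):
          if abs(x) >= 1.0:
              return 0.0
          return dist[min(int((x + 1.0) * 0.5 * BINS), BINS - 1)]

      def glint_ndf(cell, slope_x, slope_y, roughness=0.3):
          """Per-cell NDF: hash the cell to pick two 1D marginals and a rotation,
          then evaluate their product at the rotated, roughness-scaled slope."""
          h = hash(cell)
          p1 = dictionary[h % NUM_DISTS]
          p2 = dictionary[(h // NUM_DISTS) % NUM_DISTS]
          theta = (h % 360) * math.pi / 180.0
          u = (math.cos(theta) * slope_x - math.sin(theta) * slope_y) / roughness
          v = (math.sin(theta) * slope_x + math.cos(theta) * slope_y) / roughness
          return eval_marginal(p1, u) * eval_marginal(p2, v) / (roughness * roughness)

      print(glint_ndf(cell=(12, 7), slope_x=0.1, slope_y=-0.05))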
  • Item
    SVDAG Compression for Segmentation Volume Path Tracing
    (The Eurographics Association, 2024) Werner, Mirco; Piochowiak, Max; Dachsbacher, Carsten; Linsen, Lars; Thies, Justus
    Many visualization techniques exist for interactive exploration of segmentation volumes; however, photorealistic renderings are usually computed using slow offline techniques. We present a novel compression technique for segmentation volumes which enables interactive path tracing-based visualization for datasets up to hundreds of gigabytes: For every label, we create a grid of fixed-size axis aligned bounding boxes (AABBs) which covers the occupied voxels. For each AABB we first construct a sparse voxel octree (SVO) representing the contained voxels of the respective label, and then build a sparse voxel directed acyclic graph (SVDAG) identifying identical sub-trees across all SVOs; the lowest tree levels are stored as an occupancy bit-field. As a last step, we build a bounding volume hierarchy for the AABBs as a spatial indexing structure. Our representation solves a compression rate limitation of related SVDAG works as labels only need to be stored along with each AABB and not in the graph encoding of their shape. Our compression is GPU-friendly as hardware raytracing efficiently finds AABB intersections which we then traverse using a custom accelerated SVDAG traversal. Our method is able to path-trace a 113 GB volume on a consumer-grade GPU with 1 sample per pixel and up to 32 bounces at 108 FPS in a lossless representation, or at up to 1017 FPS when using dynamic level of detail.
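    A minimal sketch of the subtree-merging step, assuming an in-memory set of occupied voxels for one label; the fixed-size AABB grid, the occupancy bit-fields at the lowest levels and the BVH over the AABBs are not reproduced. Identical subtrees are merged by interning each node on the tuple of its child ids:
      def build_svdag(voxels, size):
          """Build an octree over occupied voxels and merge identical subtrees (hash-consing)."""
          nodes, index = [], {}           # nodes[i]: child tuple or "LEAF"; index: key -> id

          def intern(key):
              if key not in index:
                  index[key] = len(nodes)
                  nodes.append(key)
              return index[key]

          def build(x, y, z, s):
              if s == 1:
                  return intern("LEAF") if (x, y, z) in voxels else None
              h = s // 2
              children = tuple(build(x + dx * h, y + dy * h, z + dz * h, h)
                               for dx in (0, 1) for dy in (0, 1) for dz in (0, 1))
              return None if all(c is None for c in children) else intern(children)

          return build(0, 0, 0, size), nodes

      # usage: two identical 2-voxel blobs in an 8^3 volume end up sharing one subtree
      voxels = {(0, 0, 0), (1, 0, 0), (4, 4, 4), (5, 4, 4)}
      root, nodes = build_svdag(voxels, 8)
      print(len(nodes), "unique nodes for", len(voxels), "voxels")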
  • Item
    Out-of-the-loop Autotuning of Metropolis Light Transport with Reciprocal Probability Binning
    (The Eurographics Association, 2023) Herveau, Killian; Otsu, Hisanari; Dachsbacher, Carsten; Babaei, Vahid; Skouras, Melina
    The performance of Markov Chain Monte Carlo (MCMC) rendering methods depends heavily on the mutation strategies and their parameters. We treat the underlying mutation strategies as black boxes and focus on their parameters. This avoids the need for tedious manual parameter tuning and enables automatic adaptation to the actual scene. We propose a framework for out-of-the-loop autotuning of these parameters. As a pilot example, we demonstrate our tuning strategy for small-step mutations in Primary Sample Space Metropolis Light Transport. Our σ-binning strategy introduces a set of mutation parameters chosen by a heuristic: the reciprocal probability of the local direction sampling, which captures some characteristics of the local sampling. We show that our approach can successfully control the parameters and achieve better performance compared to non-adaptive mutation strategies.
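    A minimal sketch of reciprocal-probability binning for the small-step mutation size, with placeholder per-bin step sizes; in the paper these values are tuned out of the loop rather than fixed by hand:
      import math, random

      NUM_BINS = 8
      sigmas = [0.002 * 2.0 ** b for b in range(NUM_BINS)]    # placeholder per-bin step sizes

      def bin_index(direction_pdf):
          """Bin a primary-sample dimension by the reciprocal of the pdf of the
          local direction sampling it drove; a broad lobe (low pdf) maps to a larger bin."""
          r = 1.0 / max(direction_pdf, 1e-8)
          return min(max(int(math.log2(r)) + NUM_BINS // 2, 0), NUM_BINS - 1)

      def small_step(u, direction_pdf, rng):
          """Gaussian perturbation in primary sample space, wrapped to [0, 1),
          with the step size chosen by the reciprocal-probability bin."""
          sigma = sigmas[bin_index(direction_pdf)]
          return (u + rng.gauss(0.0, sigma)) % 1.0

      rng = random.Random(3)
      print(small_step(0.42, direction_pdf=0.15, rng=rng))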