Search Results
Now showing 1 - 10 of 22
Line Integration for Rendering Heterogeneous Emissive Volumes (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Simon, Florian; Hanika, Johannes; Zirr, Tobias; Dachsbacher, Carsten
Editors: Zwicker, Matthias; Sander, Pedro
Emissive media are often challenging to render: in thin regions where only a few scattering events occur, the emission is poorly sampled, while sampling events for emission can be disadvantageous due to absorption in dense regions. We extend the standard path space measurement contribution to also collect emission along path segments, not only at vertices. We apply this extension to two estimators: extending paths via scattering and distance sampling, and next event estimation. In order to do so, we unify the two approaches and derive the corresponding Monte Carlo estimators to interpret next event estimation as a solid angle sampling technique. We avoid connecting paths to vertices hidden behind dense absorbing layers of smoke by also including transmittance sampling in next event estimation. We demonstrate the advantages of our line integration approach, which generates estimators with lower variance since entire segments are accounted for. Also, our novel forward next event estimation technique yields faster run times compared to previous next event estimation, as it penetrates less deeply into dense volumes.

Improved Half Vector Space Light Transport (The Eurographics Association and John Wiley & Sons Ltd., 2015)
Authors: Hanika, Johannes; Kaplanyan, Anton; Dachsbacher, Carsten
Editors: Lehtinen, Jaakko; Nowrouzezahrai, Derek
In this paper, we present improvements to half vector space light transport (HSLT) [KHD14] which make this approach more practical, robust for difficult input geometry, and faster. Our first contribution is the computation of half vector space ray differentials in a different domain than the original work. This enables a more uniform stratification over the image plane during Markov chain exploration. Furthermore, we introduce a new multi-chain perturbation in half vector space which, if combined appropriately with the half vector perturbation, makes the mutation strategy both more robust to geometric configurations with fine displacements and faster due to a reduced number of ray casts. We provide and analyze the results of improved HSLT and discuss possible applications of our new half vector ray differentials.

Sparse High-degree Polynomials for Wide-angle Lenses (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Schrade, Emanuel; Hanika, Johannes; Dachsbacher, Carsten
Editors: Eisemann, Elmar; Fiume, Eugene
Rendering with accurate camera models greatly increases realism and improves the match of synthetic imagery to real-life footage. Photographic lenses can be simulated by ray tracing, but the performance depends on the complexity of the lens system, and some operations required for modern algorithms, such as deterministic connections, can be difficult to achieve. We generalise the approach of polynomial optics, i.e. expressing the light field transformation from the sensor to the outer pupil using a polynomial, to work with extreme wide-angle (fisheye) lenses and aspherical elements. We also show how sparse polynomials can be constructed from the large space of high-degree terms (we tested up to degree 15). We achieve this using a variant of orthogonal matching pursuit instead of a Taylor series when computing the polynomials. We show two applications: photorealistic rendering using Monte Carlo methods, where we introduce a new aperture sampling technique that is suitable for light tracing, and an interactive preview method suitable for rendering with deep images.
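To illustrate the fitting step in the abstract above: the sketch below is a minimal orthogonal matching pursuit over a monomial dictionary, not the paper's implementation. All names (`omp_fit`, the synthetic target) are hypothetical, and a toy 2D function stands in for ray-traced lens samples.

```python
# Sketch: selecting a sparse high-degree polynomial with orthogonal
# matching pursuit (OMP). Hypothetical stand-in for fitting one output
# coordinate of a sensor-to-outer-pupil light field mapping from
# ray-traced samples; not the paper's actual code.
import itertools
import numpy as np

def candidate_exponents(dims, max_degree):
    """All monomial exponent tuples with total degree <= max_degree."""
    return [e for e in itertools.product(range(max_degree + 1), repeat=dims)
            if sum(e) <= max_degree]

def monomial_column(X, expo):
    """Evaluate one monomial x1^e1 * ... * xd^ed for all sample rows."""
    return np.prod(X ** np.asarray(expo), axis=1)

def omp_fit(X, y, max_degree=15, n_terms=30):
    """Greedy OMP: repeatedly add the monomial most correlated with the
    residual, then re-fit all chosen coefficients by least squares."""
    expos = candidate_exponents(X.shape[1], max_degree)
    cols = np.stack([monomial_column(X, e) for e in expos], axis=1)
    norms = np.linalg.norm(cols, axis=0) + 1e-12
    chosen, residual = [], y.copy()
    for _ in range(n_terms):
        corr = np.abs(cols.T @ residual) / norms
        corr[chosen] = 0.0                      # never pick a term twice
        chosen.append(int(np.argmax(corr)))
        A = cols[:, chosen]
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coeffs
    return [expos[i] for i in chosen], coeffs

# Usage with synthetic data standing in for ray-traced lens samples:
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (2000, 2))                        # e.g. sensor (x, y)
y = 0.7 * X[:, 0] ** 3 - 0.2 * X[:, 0] * X[:, 1] ** 4    # toy target
terms, coeffs = omp_fit(X, y, max_degree=8, n_terms=5)
```

Because the residual is re-fit after every selection, the greedy pass tends to recover exactly the few monomials that matter, which is the point of preferring OMP over a dense Taylor expansion here.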
Improving the Dwivedi Sampling Scheme (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Meng, Johannes; Hanika, Johannes; Dachsbacher, Carsten
Editors: Eisemann, Elmar; Fiume, Eugene
Despite recent advances in Monte Carlo rendering techniques, dense, high-albedo participating media such as wax or skin still remain a difficult problem. In such media, random walks tend to become very long, but may still lead to a large contribution to the image. The Dwivedi sampling scheme, which is based on zero-variance random walks, biases the sampling probability distributions so that walks exit the medium as quickly as possible. This can reduce variance considerably under the assumption of a locally homogeneous medium with a constant phase function. Prior work uses the normal at the point of entry as the bias direction. We demonstrate that this technique can fail in common scenarios such as thin geometry with a strong backlight. We propose two new biasing strategies, Closest Point and Incident Illumination biasing, and show that these techniques can speed up convergence by up to an order of magnitude. Additionally, we propose a heuristic approach for combining biased and classical sampling techniques using Multiple Importance Sampling.
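The classic Dwivedi scheme referenced above has closed-form sampling formulas from zero-variance random-walk theory, sketched below for a homogeneous medium. The paper's contributions (Closest Point and Incident Illumination biasing, plus the MIS combination) only change how `bias_dir` is chosen and how this estimator is mixed with classical sampling; none of that is reproduced here.

```python
# Sketch: Dwivedi direction/distance sampling in a dense homogeneous
# medium, biased towards a given unit direction. Formulas follow the
# classic zero-variance derivation; this is not the authors' code.
import numpy as np

def dwivedi_nu0(albedo, iters=100):
    """Root nu0 > 1 of (albedo*nu/2)*ln((nu+1)/(nu-1)) = 1 by bisection.
    Assumes a reasonably high albedo (the scheme's target regime)."""
    lo, hi = 1.0 + 1e-12, 1e9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        f = 0.5 * albedo * mid * np.log((mid + 1.0) / (mid - 1.0)) - 1.0
        lo, hi = (mid, hi) if f > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def sample_direction(nu0, bias_dir, rng):
    """Sample cos(theta) w.r.t. bias_dir from p(mu) ∝ 1/(nu0 - mu)."""
    xi = rng.random()
    mu = nu0 - (nu0 + 1.0) * ((nu0 - 1.0) / (nu0 + 1.0)) ** xi
    pdf_mu = 1.0 / (np.log((nu0 + 1.0) / (nu0 - 1.0)) * (nu0 - mu))
    # build a world-space direction with cosine mu around bias_dir
    phi = 2.0 * np.pi * rng.random()
    s = np.sqrt(max(0.0, 1.0 - mu * mu))
    a = [0.0, 0.0, 1.0] if abs(bias_dir[2]) < 0.9 else [1.0, 0.0, 0.0]
    t1 = np.cross(bias_dir, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(bias_dir, t1)
    w = mu * bias_dir + s * (np.cos(phi) * t1 + np.sin(phi) * t2)
    return w, mu, pdf_mu / (2.0 * np.pi)        # solid-angle pdf

def sample_distance(sigma_t, nu0, mu, rng):
    """Stretched free path: p(t) ∝ exp(-sigma_t*(1 - mu/nu0)*t), which
    lengthens steps that head towards the exit (mu close to 1)."""
    sigma_eff = sigma_t * (1.0 - mu / nu0)
    t = -np.log(1.0 - rng.random()) / sigma_eff
    return t, sigma_eff * np.exp(-sigma_eff * t)
```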
Path Guiding with Vertex Triplet Distributions (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Schüßler, Vincent; Hanika, Johannes; Jung, Alisa; Dachsbacher, Carsten
Editors: Ghosh, Abhijeet; Wei, Li-Yi
Good importance sampling strategies are decisive for the quality and robustness of photorealistic image synthesis with Monte Carlo integration. Path guiding approaches use transport paths sampled by an existing base sampler to build and refine a guiding distribution. This distribution then guides subsequent paths in regions that are otherwise hard to sample. We observe that all terms in the measurement contribution function sampled during path construction depend on at most three consecutive path vertices. We thus propose to build a 9D guiding distribution over vertex triplets that adapts to the full measurement contribution with a 9D Gaussian mixture model (GMM). For incremental path sampling, we query the model for the last two vertices of a path prefix, resulting in a 3D conditional distribution with which we sample the next vertex along the path. To make this approach scalable, we partition the scene with an octree and learn a local GMM for each leaf separately. In a learning phase, we sample paths using the current guiding distribution and collect triplets of path vertices. We resample these triplets online and keep only a fixed-size subset in reservoirs. After each progression, we obtain new GMMs from the triplet samples by an initial hard clustering followed by expectation maximization. Since we model 3D vertex positions, our guiding distribution naturally extends to participating media. In addition, the symmetry in the GMM allows us to query it for paths constructed by a light tracer. Therefore, our method can guide both a path tracer and a light tracer from a jointly learned guiding distribution.
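The 3D conditional query described above is standard Gaussian mixture conditioning. Below is a sketch assuming a plain numpy/scipy GMM representation; the paper's octree partitioning, reservoir resampling, and EM fitting are omitted, and the known dimensions are assumed to be the first six (the two prefix vertices).

```python
# Sketch: conditioning a 9D GMM over vertex triplets (x_{i-1}, x_i,
# x_{i+1}) on the two known prefix vertices, yielding a 3D mixture for
# the next vertex. Textbook Gaussian conditioning, not the paper's code.
import numpy as np
from scipy.stats import multivariate_normal

def condition_gmm(weights, means, covs, prefix6):
    """weights: (K,), means: (K,9), covs: (K,9,9), prefix6: (6,) known.
    Returns the conditional 3D mixture (weights, means, covs)."""
    b, a = slice(0, 6), slice(6, 9)             # known / unknown dims
    new_w, new_mu, new_cov = [], [], []
    for pi, mu, S in zip(weights, means, covs):
        gain = S[a, b] @ np.linalg.inv(S[b, b])  # Sigma_ab Sigma_bb^-1
        new_mu.append(mu[a] + gain @ (prefix6 - mu[b]))
        new_cov.append(S[a, a] - gain @ S[b, a])
        # re-weight by how well this component explains the prefix
        new_w.append(pi * multivariate_normal.pdf(prefix6, mu[b], S[b, b]))
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), np.asarray(new_mu), np.asarray(new_cov)

def sample_next_vertex(weights, means, covs, rng):
    """Draw the next 3D vertex position from the conditional mixture."""
    k = rng.choice(len(weights), p=weights)
    return rng.multivariate_normal(means[k], covs[k])
```

Because the joint model is symmetric in the triplet, swapping which six dimensions are conditioned on is what lets the same GMM serve a light tracer as well as a path tracer.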
Physically Meaningful Rendering using Tristimulus Colours (The Eurographics Association and John Wiley & Sons Ltd., 2015)
Authors: Meng, Johannes; Simon, Florian; Hanika, Johannes; Dachsbacher, Carsten
Editors: Lehtinen, Jaakko; Nowrouzezahrai, Derek
In photorealistic image synthesis the radiative transfer equation is often not solved by simulating every wavelength of light, but instead by computing tristimulus transport, for instance using sRGB primaries as a basis. This choice is convenient because input texture data is usually stored in RGB colour spaces. However, there are problems with this approach which are often overlooked or ignored. By comparing to spectral reference renderings, we show how rendering in tristimulus colour spaces introduces colour shifts in indirect light, violations of energy conservation, and unexpected behaviour in participating media. Furthermore, we introduce a fast method to compute spectra from almost any given XYZ input colour. It creates spectra that match the input colour precisely. Additionally, as in natural reflectance spectra, their energy is smoothly distributed over wide wavelength bands. This method is useful both to upsample RGB input data when spectral transport is used and as an intermediate step for corrected tristimulus-based transport. Finally, we show how energy conservation can be enforced in RGB by mapping colours to valid reflectances.

Bridge Sampling for Connections via Multiple Scattering Events (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Authors: Schüßler, Vincent; Hanika, Johannes; Dachsbacher, Carsten
Editors: Garces, Elena; Haines, Eric
Explicit sampling of and connecting to light sources is often essential for reducing variance in Monte Carlo rendering. In dense, forward-scattering participating media its benefit declines, as significant transport happens over longer multiple-scattering paths around the straight connection to the light. Sampling these paths is challenging, as their contribution is shaped by the product of reciprocal squared distance terms and the phase functions. Previous work demonstrates that sampling several of these terms jointly is crucial. However, these methods are tied to low-order scattering or struggle with highly peaked phase functions. We present a method for sampling a bridge: a subpath of arbitrary vertex count connecting two vertices. Its probability density is proportional to all phase functions at inner vertices and all reciprocal squared distance terms. To achieve this, we importance sample the phase functions first, and subsequently all distances at once. For the latter, we sample an independent, preliminary distance for each edge of the bridge, and afterwards scale the bridge such that it matches the connection distance. The scale factor can be marginalized out analytically to obtain the probability density of the bridge. This approach leads to a simple algorithm and can construct bridges of any vertex count. For the case of one or two inserted vertices, we also show an alternative without scaling or marginalization. For practical path sampling, we present a method to sample the number of bridge vertices, whose distribution depends on the connection distance, the phase function, and the collision coefficient. While our importance sampling treats media as homogeneous, we demonstrate its effectiveness on heterogeneous media.
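The two-stage construction in the abstract above (phase-function directions first, then distances, then a global fit to the connection) can be sketched as follows. Henyey-Greenstein is assumed as the phase function, the initial orientation handling is simplified, and the paper's analytic pdf with the marginalized scale factor is not reproduced.

```python
# Sketch: building one "bridge" between vertices x0 and x1 with n_inner
# scattering vertices. Directions are chained via Henyey-Greenstein,
# preliminary distances are independent exponentials, and the bridge is
# rotated and uniformly scaled so its endpoints match the connection.
import numpy as np

def sample_hg_cos(g, xi):
    """Henyey-Greenstein cosine sampling (isotropic fallback near g=0)."""
    if abs(g) < 1e-4:
        return 1.0 - 2.0 * xi
    sq = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - sq * sq) / (2.0 * g)

def frame(w):
    """Two tangents orthogonal to unit vector w."""
    a = np.array([0.0, 0.0, 1.0]) if abs(w[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    t1 = np.cross(w, a); t1 /= np.linalg.norm(t1)
    return t1, np.cross(w, t1)

def rotation_between(u, v):
    """Rodrigues rotation taking unit u to unit v (antipodal case crude)."""
    c, axis = float(u @ v), np.cross(u, v)
    s = np.linalg.norm(axis)
    if s < 1e-9:
        return np.eye(3) if c > 0.0 else -np.eye(3)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]]) / s
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def sample_bridge(x0, x1, n_inner, sigma_t, g, rng):
    # 1) chain directions: each sampled from the phase function around
    #    the previous one; the starting axis is arbitrary here.
    d, dirs = np.array([0.0, 0.0, 1.0]), []
    for _ in range(n_inner + 1):
        mu, phi = sample_hg_cos(g, rng.random()), 2.0 * np.pi * rng.random()
        t1, t2 = frame(d)
        s = np.sqrt(max(0.0, 1.0 - mu * mu))
        d = mu * d + s * (np.cos(phi) * t1 + np.sin(phi) * t2)
        dirs.append(d)
    # 2) independent preliminary distance for every edge
    dists = -np.log(1.0 - rng.random(n_inner + 1)) / sigma_t
    end = np.sum(dists[:, None] * np.asarray(dirs), axis=0)
    # 3) rotate so the end-to-end direction matches x1 - x0, then scale
    #    all edges by one common factor to hit x1 exactly.
    conn = x1 - x0
    scale = np.linalg.norm(conn) / np.linalg.norm(end)
    R = rotation_between(end / np.linalg.norm(end), conn / np.linalg.norm(conn))
    verts, p = [x0], x0.copy()
    for w, t in zip(dirs, dists):
        p = p + (R @ w) * (t * scale)
        verts.append(p)
    return verts                       # verts[-1] lands on x1
```

Scaling all edges by the same factor preserves every inner angle, so the phase function values are untouched; only the distance terms change, which is what makes the scale factor analytically tractable in the paper.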
Procedural Physically based BRDF for Real-Time Rendering of Glints (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Chermain, Xavier; Sauvage, Basile; Dischler, Jean-Michel; Dachsbacher, Carsten
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Physically based rendering of glittering surfaces is a challenging problem in computer graphics. Several methods have proposed off-line solutions, but none is dedicated to high-performance graphics. In this work, we propose a novel physically based BRDF for real-time rendering of glints. Our model can reproduce the appearance of sparkling materials (rocks, rough plastics, glitter fabrics, etc.). Compared to the previous real-time method [ZK16], which is not physically based, our BRDF uses normalized NDFs and converges to the standard microfacet BRDF [CT82] for a large number of microfacets. Our method procedurally computes NDFs with hundreds of sharp lobes. It relies on a dictionary of 1D marginal distributions: at each location, two of them are randomly picked and multiplied (to obtain an NDF), rotated (to increase the variety), and scaled (to control the standard deviation, i.e. roughness). The dictionary is multiscale, does not depend on roughness, and has a low memory footprint (less than 1 MiB).

Real-Time Isosurface Extraction With View-Dependent Level of Detail and Applications (The Eurographics Association and John Wiley & Sons Ltd., 2015)
Authors: Scholz, Manuel; Bender, Jan; Dachsbacher, Carsten
Editors: Deussen, Oliver; Zhang, Hao (Richard)
Volumetric scalar data sets are common in many scientific, engineering and medical applications, where they originate from measurements or simulations. Furthermore, they can represent geometric scene content, e.g. as distance or density fields. Often isosurfaces are extracted, either for indirect volume visualization in the former category, or to simply obtain a polygonal representation in case of the latter. However, even moderately sized volume data sets can result in complex isosurfaces which are challenging to recompute in real time, e.g. when the user modifies the isovalue or when the data themselves are dynamic. In this paper, we present a GPU-friendly algorithm for the extraction of isosurfaces, which provides adaptive level of detail rendering with view-dependent tessellation. It is based on a longest edge bisection scheme where the resulting tetrahedral cells are subdivided into four hexahedra, which then form the domain for the subsequent isosurface extraction step. Our algorithm generates meshes with good triangle quality even for highly non-linear scalar data. In contrast to previous methods, it does not require any stitching between regions of different levels of detail. As all computation is performed at run time and no pre-processing is required, the algorithm naturally supports dynamic data and allows us to change isovalues at any time.

Re-Weighting Firefly Samples for Improved Finite-Sample Monte Carlo Estimates (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Zirr, Tobias; Hanika, Johannes; Dachsbacher, Carsten
Editors: Chen, Min; Benes, Bedrich
Samples with high contribution but low probability density, often called fireflies, occur in all practical Monte Carlo estimators and are part of computing unbiased estimates. For finite-sample estimates, however, they can lead to excessive variance. Rejecting all samples classified as outliers, as suggested in previous work, leads to estimates that are too low and can cause undesirable artefacts. In this paper, we show how samples can be re-weighted depending on their contribution and sampling frequency such that the finite-sample estimate gets closer to the correct expected value and the variance can be controlled. For this, we first derive a theory for how samples should ideally be re-weighted, and show that this would require the probability density function of the optimal sampling strategy. As this probability density function is generally unknown, we show how the discrepancy between the optimal and the actual sampling strategy can be estimated and used for re-weighting in practice. We describe an efficient algorithm that allows for the necessary analysis of per-pixel sample distributions in the context of Monte Carlo rendering without storing any individual samples, with only minimal changes to the rendering algorithm. It causes negligible runtime overhead, works in constant memory, and is well suited for parallel and progressive rendering. The re-weighting runs as a fast post-process, can be controlled interactively, and our approach is non-destructive in that the unbiased result can be reconstructed at any time.
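One way to analyse per-pixel sample distributions without storing samples, in the spirit of the abstract above, is a small cascade of luminance-octave buffers per pixel. The sketch below uses a simple count-based reliability heuristic; the paper's actual estimate of the discrepancy between optimal and actual sampling is more elaborate, so treat `estimate` as an illustrative stand-in.

```python
# Sketch: a simplified per-pixel luminance cascade. Each sample is split
# between the two adjacent buffers whose base brightness brackets it, so
# summing all buffers stays unbiased, while a post-process can
# down-weight sparsely populated (firefly-dominated) buffers.
import numpy as np

N_LAYERS, BASE = 8, 1.0            # cascade of brightness octaves

class PixelCascade:
    def __init__(self):
        self.accum = np.zeros(N_LAYERS)   # weighted luminance per layer
        self.count = np.zeros(N_LAYERS)   # fractional sample counts
        self.n = 0                        # total samples in this pixel

    def add_sample(self, lum):
        self.n += 1
        # layer index so that BASE*2^i <= lum < BASE*2^(i+1)
        i = int(np.clip(np.floor(np.log2(max(lum, 1e-12) / BASE)),
                        0, N_LAYERS - 2))
        lo, hi = BASE * 2.0 ** i, BASE * 2.0 ** (i + 1)
        w_hi = np.clip((lum - lo) / (hi - lo), 0.0, 1.0)   # linear split
        for layer, w in ((i, 1.0 - w_hi), (i + 1, w_hi)):
            self.accum[layer] += w * lum
            self.count[layer] += w

    def estimate(self, kappa=1.0):
        """kappa = inf reproduces the unbiased mean; smaller kappa
        suppresses layers hit far less often than 'expected'."""
        img = 0.0
        for i in range(N_LAYERS):
            if self.n == 0 or self.count[i] == 0.0:
                continue
            # assume reliable samples of brightness ~2^i should appear
            # with frequency ~2^-i (roughly constant energy per layer)
            expected = self.n * 2.0 ** (-i)
            reliability = min(1.0, kappa * self.count[i] / expected)
            img += reliability * self.accum[i] / self.n
        return img
```

Because the raw buffers are kept, the re-weighting is a non-destructive post-process: calling `estimate(kappa=float('inf'))` recovers the plain unbiased mean at any time, matching the reconstructability the abstract claims.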