Search Results

Now showing 1 - 10 of 10
  • Item
    Improving Shadow Map Filtering with Statistical Analysis
    (The Eurographics Association, 2011) Gumbau, Jesus; Szirmay-Kalos, László; Sbert, Mateu; Sellés, Miguel Chover; N. Avis and S. Lefebvre
    Shadow maps are widely used in real-time applications. Shadow maps cannot be filtered linearly like regular textures, so undersampling leads to severe aliasing. This problem has been attacked by methods that transform the depth values to allow approximate linear filtering and by approaches based on statistical analysis, which suffer from light bleeding artifacts. In this paper we propose a new statistical filtering method for shadow maps, which approximates the cumulative distribution function (CDF) of depths with a power function. This approximation significantly reduces light bleeding artifacts while maintaining performance and storage costs. Like existing techniques, the algorithm is easy to implement on graphics hardware and is fairly scalable.
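    For context, a minimal NumPy sketch of the statistical shadow-map test this family of methods performs. Since the paper's power-function CDF fit is not reproduced here, the Chebyshev upper bound of variance shadow maps is used as a stand-in, and the prefiltered depth moments mu and m2 are assumed given:

      import numpy as np

      def statistical_shadow_test(mu, m2, receiver_depth, min_variance=1e-4):
          """Upper bound on the probability that the receiver is lit, given the
          filtered first and second depth moments (mu, m2) of the shadow-map
          footprint. Uses Chebyshev's inequality (variance shadow maps) as a
          stand-in for the paper's power-function CDF approximation."""
          variance = np.maximum(m2 - mu * mu, min_variance)
          d = receiver_depth - mu
          p_max = variance / (variance + d * d)          # Chebyshev upper bound
          return np.where(receiver_depth <= mu, 1.0, p_max)

      # hypothetical filtered moments of a footprint and a receiver depth
      print(statistical_shadow_test(mu=0.42, m2=0.42 * 0.42 + 0.001, receiver_depth=0.45))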
  • Item
    Interactive Volume Illustration Using Intensity Filtering
    (The Eurographics Association, 2010) Ruiz, Marc; Boada, Imma; Feixas, Miquel; Sbert, Mateu; Pauline Jepp and Oliver Deussen
    We propose a simple and interactive technique for volume illustration based on the difference between the original intensity values and a low-pass filtered copy. This difference, known as an unsharp mask, provides a spatial importance map that captures saliency and separability information about regions in the volume. We integrate this map into the visualization pipeline and use it to modulate the color and opacity assigned by the transfer function, producing different illustrative effects. We also apply stipple rendering, modulating the density of the dots with the spatial importance map. The core of our approach is the computation of a 3D Gaussian filter, which is equivalent to three consecutive 1D filters. This separability allows us to achieve interactive rates with a CUDA implementation. We show results of our approach for different data sets.
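    A minimal NumPy/SciPy sketch of the importance-map construction described above, applying the separable 3D Gaussian as three consecutive 1D passes. The sigma value and the normalization are illustrative choices; the CUDA implementation and the transfer-function modulation are omitted:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def spatial_importance_map(volume, sigma=2.0):
          """Unsharp-mask style importance map: difference between the original
          intensities and a low-pass filtered copy. The 3D Gaussian is applied
          as three consecutive 1D filters (separability)."""
          blurred = volume.astype(np.float32)
          for axis in range(3):                              # x, y, z passes
              blurred = gaussian_filter1d(blurred, sigma=sigma, axis=axis)
          importance = np.abs(volume - blurred)              # salient regions differ most from their surroundings
          return importance / (importance.max() + 1e-8)      # normalize to [0, 1]

      # hypothetical use: the map would then modulate transfer-function color/opacity
      vol = np.random.rand(64, 64, 64).astype(np.float32)
      imp = spatial_importance_map(vol, sigma=3.0)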
  • Item
    Statistical Characterization of Surface Reflectance
    (The Eurographics Association, 2014) Havran, Vlastimil; Sbert, Mateu; Reinhard Klein and Holly Rushmeier
    The classification of surface reflectance functions as diffuse, specular, and glossy was introduced by Heckbert more than two decades ago. Many rendering algorithms depend on such a classification, as different kinds of light transport are handled by specialized methods; for example, caustics require a specular bounce or refraction. Due to the increasing wealth of surface reflectance models, including those based on measured data, it has not been possible to keep such a characterization simple. Each surface reflectance model is mostly handled separately, or, alternatively, the rendering algorithm restricts itself to some subset of reflectance models. We suggest a characterization of arbitrary surface reflectance representations by standard statistical tools, namely the normalized variance, known as the Squared Coefficient of Variation (SCV). We show through videos that there is even a weak perceptual correspondence with the proposed reflectance characterization when we use monochromatic surface reflectance and the images are normalized so that they have unit albedo.
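    A minimal sketch of the statistic the characterization rests on, the normalized variance SCV = Var(X) / E[X]^2 over reflectance samples. The hemispherical sampling and unit-albedo normalization used in the paper are not reproduced; the sample sets below are hypothetical:

      import numpy as np

      def squared_coefficient_of_variation(samples):
          """Normalized variance (SCV) of a set of reflectance samples:
          SCV = Var(X) / E[X]^2. Values near 0 suggest diffuse-like behaviour,
          large values suggest strongly peaked (specular/glossy) reflectance."""
          mean = samples.mean()
          return samples.var() / (mean * mean)

      # illustrative only: a constant (diffuse) reflectance gives SCV == 0,
      # a strongly peaked distribution gives a large SCV
      diffuse = np.full(10_000, 1.0 / np.pi)
      glossy = np.random.rand(10_000) ** 40
      print(squared_coefficient_of_variation(diffuse), squared_coefficient_of_variation(glossy))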
  • Item
    Information Theory in Visualization
    (The Eurographics Association, 2016) Chen, Min; Sbert, Mateu; Shen, Han-Wei; Viola, Ivan; Bardera, Anton; Feixas, Miquel; Augusto Sousa and Kadi Bouatouch
    In this half-day tutorial, we review a variety of applications of information theory in visualization. The holistic nature of information-theoretic reasoning has enabled many such applications, ranging from light placement to view selection, from feature highlighting to transfer function design, from data fusion to visual multiplexing, and so on. Perhaps the most exciting application is the potential for information theory to underpin the discipline of visualization, for example, mathematically confirming the benefit of visualization in data intelligence.
  • Item
    Toward Auvers Period: Evolution of van Gogh's Style
    (The Eurographics Association, 2010) Rigau, Jaume; Feixas, Miquel; Sbert, Mateu; Wallraven, Christian; Pauline Jepp and Oliver Deussen
    In this paper, we analyze the evolution of van Gogh's style toward his final Auvers period using informational measures. We try to answer the following questions: Was van Gogh exploring new ways of changing his style? Can informational measures support the claims of critics about the evolution of his palette and composition? How "far" was van Gogh's last period from the previous ones, and can we identify an evolutionary trend? We extend the measures defined in our previous work with novel measures that take spatial information into account, and we present a visual tool to examine the palette. Our results confirm the usefulness of an approach rooted in information theory for the aesthetic study of a painter's work.
  • Item
    An Information-Theoretic Observation Channel for Volume Visualization
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Bramon, Roger; Ruiz, Marc; Bardera, Anton; Boada, Imma; Feixas, Miquel; Sbert, Mateu; B. Preim, P. Rheingans, and H. Theisel
    Different quality metrics have been proposed in the literature to evaluate how well a visualization represents the underlying data. In this paper, we present a new information-theoretic framework that quantifies the information transfer between the source data set and the rendered image. This approach is based on the definition of an observation channel whose input and output are given by the intensity values of the volumetric data set and the pixel colors, respectively. From this channel, the mutual information, a measure of information transfer or correlation between the input and the output, is used as a metric to evaluate the visualization quality. The usefulness of the proposed observation channel is illustrated with three fundamental visualization applications: selection of informative viewpoints, transfer function design, and light positioning.
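    A minimal sketch of the mutual-information estimate such an observation channel relies on, computed here from a joint histogram of channel inputs and outputs. How intensity values are paired with pixel colors along viewing rays is defined by the paper and not reproduced here; the data below is synthetic:

      import numpy as np

      def mutual_information(intensities, pixel_values, bins=64):
          """Mutual information I(X;Y) estimated from the joint histogram of the
          channel input (volume intensities) and output (pixel values)."""
          joint, _, _ = np.histogram2d(intensities, pixel_values, bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)                # marginal of the input
          py = pxy.sum(axis=0, keepdims=True)                # marginal of the output
          nonzero = pxy > 0
          return np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero]))

      # hypothetical pairing: the output is a noisy function of the input
      rng = np.random.default_rng(0)
      x = rng.random(100_000)
      y = np.clip(x + 0.1 * rng.standard_normal(100_000), 0.0, 1.0)
      print(mutual_information(x, y))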
  • Item
    Multiple Scattering in Inhomogeneous Participating Media Using Rao-Blackwellization and Control Variates
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Szirmay-Kalos, László; Magdics, Milán; Sbert, Mateu; Gutierrez, Diego and Sheffer, Alla
    Rendering inhomogeneous participating media requires many volume samples, since the extinction coefficient needs to be integrated along light paths. Ray marching takes small steps, which is time consuming and leads to biased algorithms. Woodcock-like approaches use analytic sampling and a random rejection scheme guaranteeing that the expectations are the same as in the original model. These models and the application of control variates for the extinction have been successful in computing transmittance and single scattering, but were not fully exploited in multiple scattering simulation. Our paper attacks the multiple scattering problem in heterogeneous media and modifies the light-medium interaction model to allow the use of simple analytic formulae while preserving the correct expected values. The model transformation reduces the variance of the estimates with the help of Rao-Blackwellization and control variates, applied both to the extinction coefficient and to the incident radiance. Based on the transformed model, efficient Monte Carlo rendering algorithms are obtained.
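    For context, a minimal sketch of the Woodcock-like free-flight sampling with null collisions that these approaches build on; the Rao-Blackwellized and control-variate estimators the paper introduces are not shown. The extinction function and majorant below are hypothetical:

      import numpy as np

      def transmittance_delta_tracking(sigma_t, sigma_max, ray_length, rng, n_estimates=64):
          """Woodcock-style (delta/null-collision) transmittance estimator along a ray
          through a heterogeneous medium: free-flight distances are sampled analytically
          with the majorant sigma_max, and real collisions are accepted with
          probability sigma_t(t) / sigma_max."""
          hits = 0
          for _ in range(n_estimates):
              t = 0.0
              while True:
                  t -= np.log(1.0 - rng.random()) / sigma_max     # analytic free-flight step
                  if t >= ray_length:                             # escaped: contributes to transmittance
                      break
                  if rng.random() < sigma_t(t) / sigma_max:       # real (non-null) collision
                      hits += 1
                      break
          return 1.0 - hits / n_estimates

      rng = np.random.default_rng(0)
      sigma_t = lambda t: 0.5 + 0.4 * np.sin(3.0 * t)             # hypothetical extinction along the ray
      print(transmittance_delta_tracking(sigma_t, sigma_max=1.0, ray_length=2.0, rng=rng))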
  • Item
    Variance Analysis of Multi-sample and One-sample Multiple Importance Sampling
    (The Eurographics Association and John Wiley & Sons Ltd., 2016) Sbert, Mateu; Havran, Vlastimil; Szirmay-Kalos, Laszlo; Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
    In this paper we reexamine the variance of the Multiple Importance Sampling (MIS) estimator for the multi-sample and one-sample models. As a result of our analysis, we obtain the optimal estimator for the multi-sample model in the case where the weights do not depend on the sample counts. We extend the analysis to include the cost of sampling. With these results in hand, we find a better estimator than the balance heuristic with equal sample counts. Further, we show that the variance of the one-sample model is larger than or equal to that of the multi-sample model, and that there are only two cases where the variances are the same. Finally, we study on four examples the differences in variance between the equal-count estimator used by Veach, our new estimator, and a recently introduced heuristic.
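    For reference, a minimal 1D sketch of the multi-sample balance-heuristic estimator whose variance is analyzed; with the balance heuristic, each sample's weighted contribution simplifies to f(x) divided by the sum of n_j * p_j(x). The integrand and the two sampling techniques below are illustrative only, not the paper's improved estimator:

      import numpy as np

      def mis_multi_sample(f, pdfs, samplers, counts, rng):
          """Multi-sample MIS with the balance heuristic: technique i draws
          counts[i] samples from pdfs[i]; the weighted per-sample contribution
          simplifies to f(x) / sum_j counts[j] * pdfs[j](x)."""
          estimate = 0.0
          for sample_i, n_i in zip(samplers, counts):
              for _ in range(n_i):
                  x = sample_i(rng)
                  denom = sum(n_j * pdf_j(x) for n_j, pdf_j in zip(counts, pdfs))
                  estimate += f(x) / denom
          return estimate

      # hypothetical use: combine a uniform and a linearly ramped technique on [0, 1]
      f = lambda x: x * x
      pdfs = [lambda x: 1.0, lambda x: 2.0 * x]
      samplers = [lambda rng: rng.random(), lambda rng: np.sqrt(rng.random())]
      print(mis_multi_sample(f, pdfs, samplers, counts=[8, 8], rng=np.random.default_rng(1)))   # ~1/3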
  • Item
    Robust Sample Budget Allocation for MIS
    (The Eurographics Association, 2022) Szirmay-Kalos, László; Sbert, Mateu; Pelechano, Nuria; Vanderhaeghe, David
    Multiple Importance Sampling (MIS) combines several sampling techniques. Its weighting scheme depends on how many samples are generated with each particular method. This paper examines the optimal determination of the number of samples allocated to each of the combined techniques, taking into account that this decision can depend only on a relatively small number of previous samples. The proposed method is demonstrated with the combination of BRDF sampling and light source sampling, and we show that, due to its robustness, it can outperform theoretically more accurate approaches.
  • Item
    Optimal Deterministic Mixture Sampling
    (The Eurographics Association, 2019) Sbert, Mateu; Havran, Vlastimil; Szirmay-Kalos, László; Cignoni, Paolo and Miguel, Eder
    Multiple Importance Sampling (MIS) can combine several sampling techniques while preserving their advantages. For example, we can consider different Monte Carlo rendering methods generating light path samples proportionally only to certain factors of the integrand. MIS then becomes equivalent to the application of a mixture of the individual sampling densities, and thus can simultaneously mimic the densities of all considered techniques. The weights of the mixture sampling depend on how many samples are generated with each particular method. This paper examines the optimal determination of this parameter. The proposed method is demonstrated with the combination of BRDF sampling and light source sampling, and we show that it not only outperforms the application of the two individual methods, but is also superior to other recent combination strategies and is close to the theoretical optimum.
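    A minimal 1D sketch of deterministic mixture sampling: each technique receives a fixed share of the sample budget, and every sample is weighted by the mixture density. The shares (alphas), integrand, and sampling techniques used here are placeholders, whereas the paper derives near-optimal values:

      import numpy as np

      def mixture_sampling(f, pdfs, samplers, alphas, n_total, rng):
          """Deterministic mixture sampling: technique i receives a fixed share
          alphas[i] of the n_total samples, and every sample is weighted by the
          effective mixture density sum_i (n_i / n_used) * pdfs[i](x)."""
          counts = [max(1, int(round(a * n_total))) for a in alphas]
          n_used = sum(counts)
          weights = [n_i / n_used for n_i in counts]       # effective mixture weights
          estimate = 0.0
          for sample_i, n_i in zip(samplers, counts):
              for _ in range(n_i):
                  x = sample_i(rng)
                  p_mix = sum(w * p(x) for w, p in zip(weights, pdfs))
                  estimate += f(x) / p_mix
          return estimate / n_used

      # illustrative: two techniques on [0, 1] with a 25%/75% budget split
      f = lambda x: x * x
      pdfs = [lambda x: 1.0, lambda x: 2.0 * x]
      samplers = [lambda rng: rng.random(), lambda rng: np.sqrt(rng.random())]
      print(mixture_sampling(f, pdfs, samplers, alphas=[0.25, 0.75], n_total=16,
                             rng=np.random.default_rng(2)))   # ~1/3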