Search Results

Now showing 1 - 10 of 54
  • Item
    Partial Shape Matching Using Transformation Parameter Similarity
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Guerrero, Paul; Auzinger, Thomas; Wimmer, Michael; Jeschke, Stefan; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we present a method for non‐rigid, partial shape matching in vector graphics. Given a user‐specified query region in a 2D shape, similar regions are found, even if they are non‐linearly distorted. Furthermore, a non‐linear mapping is established between the query regions and these matches, which allows the automatic transfer of editing operations such as texturing. This is achieved by a two‐step approach. First, pointwise correspondences between the query region and the whole shape are established. The transformation parameters of these correspondences are registered in an appropriate transformation space. For transformations between similar regions, these parameters form surfaces in transformation space, which are extracted in the second step of our method. The extracted regions may be related to the query region by a non‐rigid transform, enabling non‐rigid shape matching.
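The two steps described above, registering per-correspondence transformation parameters and then finding dense regions in transformation space, can be illustrated with a small toy sketch. It uses similarity transforms and a histogram-peak heuristic; all function names and the setup are assumptions of this example, not the paper's actual surface-extraction method.

```python
import numpy as np

# Toy sketch: (1) estimate per-correspondence similarity-transform
# parameters (rotation angle, log-scale) and (2) find the densest
# region in this 2D transformation space.

def transform_params(q_pts, m_pts):
    """Rotation angle and log-scale mapping each query edge to its match."""
    qv = np.diff(q_pts, axis=0)              # edge vectors in the query
    mv = np.diff(m_pts, axis=0)              # corresponding matched edges
    ang = np.arctan2(mv[:, 1], mv[:, 0]) - np.arctan2(qv[:, 1], qv[:, 0])
    ang = (ang + np.pi) % (2 * np.pi) - np.pi            # wrap to [-pi, pi)
    log_s = np.log(np.linalg.norm(mv, axis=1) / np.linalg.norm(qv, axis=1))
    return np.stack([ang, log_s], axis=1)

def density_peak(params, bins=32):
    """Centre of the densest cell of a 2D histogram over transform space."""
    hist, a_edges, s_edges = np.histogram2d(params[:, 0], params[:, 1], bins)
    i, j = np.unravel_index(np.argmax(hist), hist.shape)
    return (0.5 * (a_edges[i] + a_edges[i + 1]),
            0.5 * (s_edges[j] + s_edges[j + 1]))

# A query polyline matched against a rotated, uniformly scaled copy:
# the correspondences pile up at one point in transformation space.
rng = np.random.default_rng(0)
q = rng.random((50, 2))
theta, s = 0.5, 2.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
m = s * q @ R.T + np.array([3.0, 1.0])
peak_angle, peak_log_scale = density_peak(transform_params(q, m))
```

For a region related to the query by one similarity transform, the recovered peak sits at that transform's parameters; the paper extends this to surfaces in transformation space for non-rigid matches.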
  • Item
    Separable Subsurface Scattering
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Jimenez, Jorge; Zsolnai, Károly; Jarabo, Adrian; Freude, Christian; Auzinger, Thomas; Wu, Xian‐Chun; von der Pahlen, Javier; Wimmer, Michael; Gutierrez, Diego; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we propose two real‐time models for simulating subsurface scattering for a large variety of translucent materials, which need under 0.5 ms per frame to execute. This makes them a practical option for real‐time production scenarios. Current state‐of‐the‐art, real‐time approaches simulate subsurface light transport by approximating the radially symmetric non‐separable diffusion kernel with a sum of separable Gaussians, which requires multiple (up to 12) 1D convolutions. In this work we relax the requirement of radial symmetry to approximate a 2D diffuse reflectance profile by a single separable kernel. We first show that low‐rank approximations based on matrix factorization outperform previous approaches, but they still need several passes to get good results. To solve this, we present two different separable models: the first one yields a high‐quality diffusion simulation, while the second one offers an attractive trade‐off between physical accuracy and artistic control. Both allow rendering of subsurface scattering using only two 1D convolutions, reducing both execution time and memory consumption, while delivering results comparable to techniques with higher cost. Using our importance‐sampling and jittering strategies, only seven samples per pixel are required. Our methods can be implemented as simple post‐processing steps without intrusive changes to existing rendering pipelines.
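The central idea, approximating a 2D kernel by a single separable (rank-1) kernel so that filtering reduces to one vertical and one horizontal 1D convolution, can be illustrated with a small NumPy sketch. The two-Gaussian test kernel and the SVD-based factorization below are assumptions of the example, standing in for the paper's measured reflectance profiles.

```python
import numpy as np

# Rank-1 (separable) approximation of a 2D kernel: K ~= outer(u, v),
# so filtering needs only two 1D convolutions instead of a 2D one.

def rank1_kernel(K):
    """Best rank-1 factorization of K (Frobenius-optimal, via SVD)."""
    U, S, Vt = np.linalg.svd(K)
    return U[:, 0] * np.sqrt(S[0]), Vt[0] * np.sqrt(S[0])

def separable_filter(img, u, v):
    """One vertical and one horizontal 1D convolution."""
    tmp = np.apply_along_axis(np.convolve, 0, img, u, mode="same")
    return np.apply_along_axis(np.convolve, 1, tmp, v, mode="same")

# Radially symmetric test kernel: a narrow plus a wide Gaussian.
x = np.linspace(-3, 3, 17)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
K = np.exp(-r2 / 0.5) + 0.25 * np.exp(-r2 / 4.0)
K /= K.sum()

u, v = rank1_kernel(K)
rel_err = np.linalg.norm(K - np.outer(u, v)) / np.linalg.norm(K)

# Filtering a delta image reproduces the separable kernel exactly.
img = np.zeros((33, 33))
img[16, 16] = 1.0
out = separable_filter(img, u, v)
```

The rank-1 kernel is not identical to the original non-separable one (that residual is what the paper's two models are designed to control), but the two-pass filter applies it at a fraction of the cost of a full 2D convolution.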
  • Item
    Freeform Shadow Boundary Editing
    (The Eurographics Association and Blackwell Publishing Ltd., 2013) Mattausch, Oliver; Igarashi, Takeo; Wimmer, Michael; I. Navazo, P. Poulin
    We present an algorithm for artistically modifying physically based shadows. With our tool, an artist can directly edit the shadow boundaries in the scene in an intuitive fashion similar to freeform curve editing. Our algorithm then makes these shadow edits consistent with respect to varying light directions and scene configurations, by creating a shadow mesh from the new silhouettes. The shadow mesh helps a modified shadow volume algorithm cast shadows that conform to the artistic shadow boundary edits, while providing plausible interaction with dynamic environments, including animation of both characters and light sources. Our algorithm provides significantly more fine-grained local and direct control than previous artistic light editing methods, which makes it simple to adjust the shadows in a scene to reach a particular effect, or to create interesting shadow shapes and shadow animations. All cases are handled with a single intuitive interface, be it soft shadows, or (self-)shadows on arbitrary receivers.
  • Item
    Austrian Chapter Report
    (2024-04-22) Wimmer, Michael
  • Item
    Software Rasterization of 2 Billion Points in Real Time
    (ACM Association for Computing Machinery, 2022) Schütz, Markus; Kerbl, Bernhard; Wimmer, Michael; Josef Spjut; Marc Stamminger; Victor Zordan
    The accelerated collection of detailed real-world 3D data in the form of ever-larger point clouds is sparking a demand for novel visualization techniques that are capable of rendering billions of point primitives in real time. We propose a software rasterization pipeline for point clouds that is capable of rendering up to two billion points in real time (60 FPS) on commodity hardware. Improvements over the state of the art are achieved by batching points, enabling a number of batch-level optimizations before rasterizing them within the same rendering pass. These optimizations include frustum culling, level-of-detail (LOD) rendering, and choosing the appropriate coordinate precision for a given batch of points directly within a compute workgroup. Adaptive coordinate precision, in conjunction with visibility buffers, reduces the required data for the majority of points to just four bytes, making our approach several times faster than the bandwidth-limited state of the art. Furthermore, support for LOD rendering makes our software rasterization approach suitable for rendering arbitrarily large point clouds, and to meet the elevated performance demands of virtual reality applications.
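The adaptive-precision idea, encoding each point relative to its batch's bounding box so that three coordinates fit into a single 32-bit word (four bytes per point), can be sketched as follows. The 10-bit-per-axis layout and the helper names are assumptions of this example, not the paper's actual GPU encoding.

```python
import numpy as np

# Quantize each point relative to its batch bounding box so x, y, z
# fit into 10 bits each of one 32-bit word (4 bytes per point).

def encode_batch(points):
    lo = points.min(axis=0)
    step = (points.max(axis=0) - lo) / 1023.0        # 10 bits per axis
    step = np.where(step > 0, step, 1.0)             # guard degenerate axes
    q = np.round((points - lo) / step).astype(np.uint32)
    packed = (q[:, 0] << 20) | (q[:, 1] << 10) | q[:, 2]
    return packed, lo, step

def decode_batch(packed, lo, step):
    q = np.stack([(packed >> 20) & 1023,
                  (packed >> 10) & 1023,
                  packed & 1023], axis=1)
    return q * step + lo

rng = np.random.default_rng(1)
pts = rng.uniform(-10.0, 10.0, size=(1000, 3))
packed, lo, step = encode_batch(pts)
rec = decode_batch(packed, lo, step)
max_err = np.abs(rec - pts).max()    # bounded by half a quantization step
```

The smaller the batch's bounding box, the finer the effective precision, which is why choosing the precision per batch (rather than globally) keeps the quantization error visually negligible while cutting memory bandwidth.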
  • Item
    High-Quality Point Based Rendering Using Fast Single Pass Interpolation
    (IEEE, 2015) Schütz, Markus; Wimmer, Michael; Gabriele Guidi and Roberto Scopigno and Pere Brunet
    We present a method to improve the visual quality of point cloud renderings through a nearest-neighbor-like interpolation of points. This allows applications to render points at larger sizes in order to reduce holes, without reducing the readability of fine details due to occluding points. The implementation requires only a few modifications to existing shaders, making it easy to integrate into software applications without major design changes.
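The interpolation idea can be sketched on the CPU (the paper implements it in a fragment shader): each point is drawn as an enlarged splat whose covered pixels get a depth offset growing with distance to the splat centre, so the ordinary depth test keeps the nearest centre per pixel. Grid size, splat radius, and the quadratic offset below are assumptions of this example.

```python
# CPU sketch of nearest-neighbour-like point interpolation via
# distance-based depth offsets; the depth test does the selection.

W, H = 16, 16
RADIUS = 3                                   # splat half-size in pixels

def splat(points, colors):
    depth = [[float("inf")] * W for _ in range(H)]
    image = [[0] * W for _ in range(H)]
    for (px, py, pz), c in zip(points, colors):
        for y in range(max(0, py - RADIUS), min(H, py + RADIUS + 1)):
            for x in range(max(0, px - RADIUS), min(W, px + RADIUS + 1)):
                # depth grows with squared distance to the splat centre
                d = pz + 1e-3 * ((x - px) ** 2 + (y - py) ** 2)
                if d < depth[y][x]:          # nearest splat centre wins
                    depth[y][x] = d
                    image[y][x] = c
    return image

# Two splats at equal depth: pixels between them take the nearer colour.
img = splat([(5, 8, 1.0), (10, 8, 1.0)], [1, 2])
```

Because the selection happens in the regular depth test, no extra passes or neighbour lookups are needed, which matches the abstract's claim of only small shader modifications.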
  • Item
    Fast Multi-View Rendering for Real-Time Applications
    (The Eurographics Association, 2020) Unterguggenberger, Johannes; Kerbl, Bernhard; Steinberger, Markus; Schmalstieg, Dieter; Wimmer, Michael; Frey, Steffen and Huang, Jian and Sadlo, Filip
    Efficient rendering of multiple views can be a critical performance factor for real-time rendering applications. Generating more than one view multiplies the amount of rendered geometry, which can cause a huge performance impact. Minimizing that impact has been a target of previous research and GPU manufacturers, who have started to equip devices with dedicated acceleration units. However, vendor-specific acceleration is not the only option to increase multi-view rendering (MVR) performance. Available graphics API features, shader stages and optimizations can be exploited for improved MVR performance, while generally offering more versatile pipeline configurations, including the preservation of custom tessellation and geometry shaders. In this paper, we present an exhaustive evaluation of MVR pipelines available on modern GPUs. We provide a detailed analysis of previous techniques, hardware-accelerated MVR and propose a novel method, leading to the creation of an MVR catalogue. Our analyses cover three distinct applications to help gain clarity on overall MVR performance characteristics. Our interpretation of the observed results provides a guideline for selecting the most appropriate one for various use cases on different GPU architectures.
  • Item
    PPSurf: Combining Patches and Point Convolutions for Detailed Surface Reconstruction
    (© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Erler, Philipp; Fuentes‐Perez, Lizeth; Hermosilla, Pedro; Guerrero, Paul; Pajarola, Renato; Wimmer, Michael; Alliez, Pierre; Wimmer, Michael
    3D surface reconstruction from point clouds is a key step in areas such as content creation, archaeology, digital cultural heritage and engineering. Current approaches either try to optimize a non‐data‐driven surface representation to fit the points, or learn a data‐driven prior over the distribution of commonly occurring surfaces and how they correlate with potentially noisy point clouds. Data‐driven methods enable robust handling of noise and typically either focus on a global or a local prior, which trade off between robustness to noise on the global end and surface detail preservation on the local end. We propose PPSurf as a method that combines a global prior based on point convolutions and a local prior based on processing local point cloud patches. We show that this approach is robust to noise while recovering surface details more accurately than the current state of the art. Our source code, pre‐trained model and dataset are available at .
  • Item
    Non-Sampled Anti-Aliasing
    (The Eurographics Association, 2013) Auzinger, Thomas; Musialski, Przemyslaw; Preiner, Reinhold; Wimmer, Michael; Michael Bronstein and Jean Favre and Kai Hormann
    In this paper we present a parallel method for high-quality edge anti-aliasing in rasterization. In contrast to traditional graphics hardware methods, which rely on massive oversampling to combat aliasing issues, we evaluate a closed-form solution of the associated prefilter convolution. This enables the use of a wide range of filter functions with arbitrary kernel sizes, as well as general shading methods such as texture mapping or complex illumination models. Due to the use of analytic solutions, our results are exact in the mathematical sense and provide objective ground-truth for other anti-aliasing methods and enable the rigorous comparison of different models and filters. An efficient implementation on general purpose graphics hardware is discussed and several comparisons to existing techniques and of various filter functions are given.
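The principle of evaluating the prefilter convolution in closed form rather than by sampling can be illustrated for the simplest case: under a box prefilter, the filtered value of a half-plane edge at a pixel is exactly the pixel area covered by the half-plane. The half-plane setup and clipping code below are an illustrative toy, not the paper's general closed-form solution for arbitrary filters and shading.

```python
# Exact (non-sampled) box-filter coverage of a half-plane edge:
# clip the unit pixel square against a*x + b*y <= c and take the area.

def coverage(a, b, c):
    """Exact area of {(x, y) in [0,1]^2 : a*x + b*y <= c}."""
    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    clipped = []
    for i in range(4):
        p, q = square[i], square[(i + 1) % 4]
        fp = a * p[0] + b * p[1] - c
        fq = a * q[0] + b * q[1] - c
        if fp <= 0:
            clipped.append(p)                       # vertex inside
        if (fp < 0) != (fq < 0):                    # edge crosses boundary
            t = fp / (fp - fq)
            clipped.append((p[0] + t * (q[0] - p[0]),
                            p[1] + t * (q[1] - p[1])))
    area = 0.0                                      # shoelace formula
    for i in range(len(clipped)):
        (x0, y0), (x1, y1) = clipped[i], clipped[(i + 1) % len(clipped)]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

# The analytic result agrees with brute-force supersampling of the same
# edge, up to the supersampler's inherent sampling error.
a, b, c = 1.0, 1.0, 1.0                             # edge x + y = 1
exact = coverage(a, b, c)
n = 128
sampled = sum((i + 0.5) / n + (j + 0.5) / n <= c
              for i in range(n) for j in range(n)) / n**2
```

The analytic value is exact in the mathematical sense, whereas the supersampled estimate only converges to it as the sample count grows, which is the ground-truth property the abstract highlights.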
  • Item
    CHC+RT: Coherent Hierarchical Culling for Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Mattausch, Oliver; Bittner, Jirí; Jaspe, Alberto; Gobbetti, Enrico; Wimmer, Michael; Pajarola, Renato; Olga Sorkine-Hornung and Michael Wimmer
    We propose a new technique for in-core and out-of-core GPU ray tracing using a generalization of hierarchical occlusion culling in the style of the CHC++ method. Our method exploits the rasterization pipeline and hardware occlusion queries in order to create coherent batches of work for localized shader-based ray tracing kernels. By combining hierarchies in both ray space and object space, the method is able to share intermediate traversal results among multiple rays. We exploit temporal coherence among similar ray sets between frames and also within the given frame. A suitable management of the current visibility state makes it possible to benefit from occlusion culling for less coherent ray types like diffuse reflections. Since large scenes are still a challenge for modern GPU ray tracers, our method is most useful for scenes with medium to high complexity, especially since our method inherently supports ray tracing highly complex scenes that do not fit in GPU memory. For in-core scenes our method is comparable to CUDA ray tracing and performs up to 5.94× better than pure shader-based ray tracing.
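The hierarchical occlusion-culling pattern that the method generalizes (traverse front to back, skip any subtree whose screen bounds are already covered by nearer, previously drawn geometry) can be sketched with a toy 1D framebuffer standing in for the GPU occlusion query. The Node layout and the scene are assumptions of this example, not the CHC++/CHC+RT implementation.

```python
# Toy hierarchical occlusion culling over a 1D "screen".

class Node:
    def __init__(self, lo, hi, z, children=()):
        self.lo, self.hi, self.z = lo, hi, z     # screen span and near depth
        self.children = list(children)

def cull_and_draw(node, zbuf, drawn):
    # "occlusion query": is any covered pixel still farther than the node?
    if not any(z > node.z for z in zbuf[node.lo:node.hi]):
        return                                   # fully occluded: skip subtree
    if node.children:
        for child in sorted(node.children, key=lambda n: n.z):  # front to back
            cull_and_draw(child, zbuf, drawn)
    else:                                        # leaf: rasterize into zbuf
        for i in range(node.lo, node.hi):
            zbuf[i] = min(zbuf[i], node.z)
        drawn.append(node)

zbuf = [float("inf")] * 16
occluder = Node(0, 16, 1.0)          # full-screen wall
hidden = Node(4, 12, 5.0)            # behind the wall: query fails, culled
front = Node(2, 6, 0.5)              # in front of the wall: drawn
drawn = []
cull_and_draw(Node(0, 16, 0.5, [occluder, hidden, front]), zbuf, drawn)
```

In CHC+RT the same query mechanism (hardware occlusion queries against the rasterized depth) decides which batches of scene geometry the shader-based ray-tracing kernels must visit at all, which is what lets occluded subtrees stay out of GPU memory.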