Search Results

Now showing 1 - 10 of 30
  • Item
    Spectral Upsampling Approaches for RGB Illumination
    (The Eurographics Association, 2022) Guarnera, Giuseppe Claudio; Gitlina, Yuliya; Deschaintre, Valentin; Ghosh, Abhijeet; Wei, Li-Yi
    We present two practical approaches for high-fidelity spectral upsampling of previously recorded RGB illumination in the form of an image-based representation such as an RGB light probe. Unlike previous approaches that require multiple measurements with a spectrometer or a reference color chart under a target illumination environment, our method requires no additional information for the spectral upsampling step. Instead, we construct a data-driven basis of spectral distributions for incident illumination from a set of six RGBW LEDs (three narrowband and three broadband) that we employ to represent a given RGB color using a convex combination of the six basis spectra. We propose two different approaches for estimating the weights of the convex combination: (a) a genetic algorithm, and (b) neural networks. We additionally propose a theoretical basis consisting of a set of narrow and broad Gaussians as a generalization of the approach, and also evaluate an alternate LED basis for spectral upsampling. Our spectral upsampling approach achieves good qualitative matches between the predicted and ground-truth illumination spectra, while achieving a near-perfect match to the RGB color of the given illumination in the vast majority of cases. We demonstrate that the spectrally upsampled RGB illumination can be employed for various applications, including improved lighting reproduction as well as more accurate spectral rendering.
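    To make the convex-combination step concrete, here is a minimal sketch (not the authors' code; the 3x6 basis matrix and spectra below are invented placeholders) that recovers nonnegative, sum-to-one weights with an augmented nonnegative least-squares solve, standing in for the paper's genetic-algorithm and neural-network estimators:

```python
# Sketch: spectral upsampling of an RGB color as a convex combination of
# six basis spectra. All numeric values here are hypothetical placeholders.
import numpy as np
from scipy.optimize import nnls

# Hypothetical RGB response of each of the six LED basis spectra (3 x 6).
B = np.array([
    [0.9, 0.1, 0.0, 0.6, 0.2, 0.1],   # R response of each basis spectrum
    [0.1, 0.8, 0.1, 0.3, 0.6, 0.2],   # G response
    [0.0, 0.1, 0.9, 0.1, 0.2, 0.7],   # B response
])
target_rgb = np.array([0.8, 0.5, 0.3])

# Enforce sum(w) = 1 softly by appending a heavily weighted row of ones,
# then solve the nonnegative least-squares problem for weights w >= 0.
rho = 1e3
A = np.vstack([B, rho * np.ones((1, 6))])
b = np.concatenate([target_rgb, [rho]])
w, _ = nnls(A, b)

# Hypothetical basis spectra sampled over 32 wavelength bins (6 x 32);
# the upsampled illumination spectrum is their convex combination.
basis_spectra = np.random.rand(6, 32)
spectrum = w @ basis_spectra
print("weights:", np.round(w, 3), "sum:", round(w.sum(), 3))
```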
  • Item
    Efficient High-Quality Rendering of Ribbons and Twisted Lines
    (The Eurographics Association, 2022) Neuhauser, Christoph; Wang, Junpeng; Kern, Michael; Westermann, Rüdiger; Bender, Jan; Botsch, Mario; Keim, Daniel A.
    Flat twisting ribbons are often used for visualizing twists along lines in 3D space. However, flat ribbons can disappear when viewed at oblique angles, and they introduce flickering due to aliasing during animations. We demonstrate that this limitation can be overcome by procedurally rendering generalized cylinders with elliptic profiles. By adjusting the length of the cylinder's semi-minor axis, the ribbon thickness can be controlled so that it always remains visible. The proposed rendering approach further enables the visualization of twists via the projection of a line spiralling around the cylinder's center line. In contrast to texture mapping, this keeps the line width fixed regardless of the strength of the twist, and provides efficient control over the spiralling frequency and the coloring between the twisting lines. The proposed rendering approach can be performed efficiently on recent GPUs by exploiting programmable vertex pulling, mesh shaders, and hardware-accelerated ray tracing.
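    A minimal CPU-side sketch of the underlying geometry (the paper renders this procedurally on the GPU; the frame construction and twist function here are assumptions):

```python
# Sketch: points on a generalized cylinder with an elliptic profile around
# a center line, plus a line spiralling around it to indicate the twist.
import numpy as np

def elliptic_tube(center, tangent, normal, a, b, twist, theta, t):
    """Surface point at parameter t along the line and profile angle theta.
    a, b: semi-major/semi-minor axes; twist(t): twist angle along the line."""
    n = normal / np.linalg.norm(normal)
    bt = np.cross(tangent, n)                 # binormal of the local frame
    phi = theta + twist(t)                    # rotate the profile by the twist
    return center(t) + a * np.cos(phi) * n + b * np.sin(phi) * bt

# Example: straight center line along z with a linear twist.
center = lambda t: np.array([0.0, 0.0, t])
twist = lambda t: 2.0 * np.pi * t             # one full twist per unit length
tangent = np.array([0.0, 0.0, 1.0])
normal = np.array([1.0, 0.0, 0.0])

# Spiral indicator line: a fixed theta traced along t follows the twist.
spiral = [elliptic_tube(center, tangent, normal, 1.0, 0.2, twist, 0.0, t)
          for t in np.linspace(0.0, 1.0, 16)]
print(np.round(spiral[-1], 3))
```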
  • Item
    Learning Dynamic 3D Geometry and Texture for Video Face Swapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Otto, Christopher; Naruniec, Jacek; Helminger, Leonhard; Etterlin, Thomas; Mignone, Graziana; Chandran, Prashanth; Zoss, Gaspard; Schroers, Christopher; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Weber, Romann; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Face swapping is the process of applying a source actor's appearance to a target actor's performance in a video. This is a challenging visual effect that has seen increasing demand in film and television production. Recent work has shown that data-driven methods based on deep learning can produce compelling effects at production quality in a fraction of the time required for a traditional 3D pipeline. However, the dominant approach operates only on 2D imagery, without reference to the underlying facial geometry or texture, resulting in poor generalization under novel viewpoints and little artistic control. Methods that do incorporate geometry rely on pre-learned facial priors that do not adapt well to particular geometric features of the source and target faces. We approach the problem of face swapping from the perspective of learning simultaneous convolutional facial autoencoders for the source and target identities, using a shared encoder network with identity-specific decoders. The key novelty in our approach is that each decoder first lifts the latent code into a 3D representation, comprising a dynamic face texture and a deformable 3D face shape, before projecting this 3D face back onto the input image using a differentiable renderer. The coupled autoencoders are trained only on videos of the source and target identities, without requiring 3D supervision. By leveraging the learned 3D geometry and texture, our method achieves face swapping with higher quality than when using off-the-shelf monocular 3D face reconstruction, and an overall lower FID score than state-of-the-art 2D methods. Furthermore, our 3D representation allows for efficient artistic control over the result, which can be hard to achieve with existing 2D approaches.
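    The coupled-autoencoder layout can be sketched as follows (shapes, layer sizes, and vertex counts are invented; the real system uses convolutional networks and a differentiable renderer, omitted here):

```python
# Architectural sketch only: a shared encoder with identity-specific
# decoders that lift the latent code to a 3D shape and a dynamic texture.
import torch
import torch.nn as nn

class FaceSwapAE(nn.Module):
    def __init__(self, latent=256, n_verts=5023, tex=64):
        super().__init__()
        self.n_verts, self.tex = n_verts, tex
        # Shared encoder maps an image crop to a latent code.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent))
        # One decoder per identity; each lifts the code to 3D.
        self.decoders = nn.ModuleDict({
            name: nn.ModuleDict({
                "shape": nn.Linear(latent, n_verts * 3),     # deformable shape
                "texture": nn.Linear(latent, 3 * tex * tex), # dynamic texture
            }) for name in ("source", "target")})

    def forward(self, img, identity):
        z = self.encoder(img)            # shared across both identities
        dec = self.decoders[identity]    # identity-specific decoder
        shape = dec["shape"](z).view(-1, self.n_verts, 3)
        texture = dec["texture"](z).view(-1, 3, self.tex, self.tex)
        # A differentiable renderer would project (shape, texture) back
        # onto the input image here; omitted in this sketch.
        return shape, texture

# Swapping = encode a target frame, decode with the source identity's decoder.
model = FaceSwapAE()
shape, texture = model(torch.rand(1, 3, 64, 64), "source")
```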
  • Item
    A Density Estimation Technique for Radiosity
    (The Eurographics Association, 2022) Lastra, Miguel; Ureña, Carlos; Revelles, Jorge; Montes, Rosana; Xavier Pueyo; Manuel Próspero dos Santos; Luiz Velho
    Radiosity computation is very costly for scenes that include objects with complex geometry, a large number of faces, or meshes with very different sizes. We present a new method (based on the Photon Maps method [7]) in which density estimation on the tangent plane at each surface point is performed for irradiance computation by using photon paths (fine segments traveled by a ray) instead of photon impacts. This improves the results for scenes containing small objects that receive only a few impacts. Also, geometry is completely decoupled from the radiosity computation.
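    A rough sketch of the segment-based estimator described above (not the paper's implementation; the disc-shaped kernel and uniform weighting are assumptions):

```python
# Sketch: estimate irradiance at a surface point by counting photon *path
# segments* that cross a disc on the tangent plane, instead of impacts.
import numpy as np

def irradiance_estimate(x, n, r, segments):
    """segments: list of (origin, unit direction, length, power) photon paths."""
    n = n / np.linalg.norm(n)
    total = 0.0
    for o, d, length, power in segments:
        denom = np.dot(d, n)
        if abs(denom) < 1e-8:
            continue                      # segment parallel to tangent plane
        t = np.dot(x - o, n) / denom      # plane-crossing parameter
        if 0.0 <= t <= length:
            hit = o + t * d
            if np.dot(hit - x, hit - x) <= r * r:
                total += power            # this path crosses the disc
    return total / (np.pi * r * r)        # flux per area -> irradiance

segs = [(np.array([0., 0., 1.]), np.array([0., 0., -1.]), 2.0, 0.1)]
print(irradiance_estimate(np.zeros(3), np.array([0., 0., 1.]), 0.5, segs))
```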
  • Item
    Htex: Per-Halfedge Texturing for Arbitrary Mesh Topologies
    (ACM Association for Computing Machinery, 2022) Barbier, Wilhem; Dupuy, Jonathan; Josef Spjut; Marc Stamminger; Victor Zordan
    We introduce per-halfedge texturing (Htex), a GPU-friendly method for texturing arbitrary polygon meshes without an explicit parameterization. Htex builds upon the insight that halfedges encode an intrinsic triangulation for polygon meshes, where each halfedge spans a unique triangle with direct adjacency information. Rather than storing a separate texture per face of the input mesh, as is done by previous parameterization-free texturing methods, Htex stores a square texture for each halfedge and its twin. We show that this simple change from face to halfedge induces two important properties for high-performance parameterization-free texturing. First, Htex natively supports arbitrary polygons without requiring dedicated code for, e.g., non-quad faces. Second, Htex leads to a straightforward and efficient GPU implementation that uses only three texture fetches per halfedge to produce continuous texturing across the entire mesh. We demonstrate the effectiveness of Htex by rendering production assets in real time.
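    A minimal sketch of the halfedge-to-texture mapping (the UV convention below is an invented illustration of the idea, not the paper's exact layout): a halfedge and its twin share one square texture and map to the two triangles on either side of its diagonal, so points on the shared mesh edge receive the same UV from both sides.

```python
# Sketch of the Htex mapping idea with an invented UV convention.

def halfedge_uv(bary, is_twin):
    """bary = (w_start, w_end, w_center): barycentric coordinates in the
    triangle spanned by a halfedge (edge start, edge end, face center).
    The halfedge maps to one triangle of its square texture and its twin
    to the other; both meet along the diagonal, so a point on the shared
    mesh edge gets the same UV from either side."""
    w0, w1, wc = bary
    return (1.0 - w0, 1.0 - w1) if is_twin else (w0, w1)

# The midpoint of the shared edge gets the same UV from both halfedges
# (the twin sees the edge endpoints in swapped order):
print(halfedge_uv((0.5, 0.5, 0.0), is_twin=False))  # (0.5, 0.5)
print(halfedge_uv((0.5, 0.5, 0.0), is_twin=True))   # (0.5, 0.5)
```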
  • Item
    NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field
    (The Eurographics Association, 2022) Li, Zhong; Song, Liangchen; Liu, Celong; Yuan, Junsong; Xu, Yi; Ghosh, Abhijeet; Wei, Li-Yi
    In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a function that maps rays to their corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Unlike previous light field approaches, which require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying the color for each ray from the network directly, thus enabling high-quality light field rendering with a sparser set of training images. Per-ray depth can optionally be predicted by the network, enabling applications such as auto-refocus. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
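    A minimal sketch of such a network (layer sizes are invented): a plain MLP that maps the two-plane ray parameterization (u, v, s, t) directly to RGB, so rendering a novel view is a single network query per ray, with no volumetric marching:

```python
# Sketch: a 4D light-field network mapping (u, v, s, t) rays to colors.
import torch
import torch.nn as nn

class NeuLF(nn.Module):
    def __init__(self, hidden=256, depth=8):
        super().__init__()
        layers, d = [], 4                          # input: (u, v, s, t)
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(d, 3), nn.Sigmoid()]  # output: RGB in [0, 1]
        self.mlp = nn.Sequential(*layers)

    def forward(self, uvst):
        return self.mlp(uvst)

model = NeuLF()
rays = torch.rand(1024, 4)   # sampled rays in two-plane coordinates
rgb = model(rays)            # one query per ray, no ray marching
# Training would minimize e.g. an L2 loss against captured ray colors:
loss = ((rgb - torch.rand(1024, 3)) ** 2).mean()
```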
  • Item
    A Real-Time Adaptive Ray Marching Method for Particle-Based Fluid Surface Reconstruction
    (The Eurographics Association, 2022) Wu, Tong; Zhou, Zhiqiang; Wang, Anlan; Gong, Yuning; Zhang, Yanci; Ghosh, Abhijeet; Wei, Li-Yi
    In the rendering of particle-based fluids, the surfaces reconstructed by ray marching techniques contain more details than those produced by screen-space filtering methods. However, the ray marching process is quite time-consuming because it requires a large number of steps for each ray. In this paper, we introduce an adaptive ray marching method to reconstruct high-quality fluid surfaces in real time. In order to reduce the number of ray marching steps, we propose a new data structure called a binary density grid, which enables our ray marching method to adaptively adjust the step length. We also classify the fluid particles into two categories, i.e., high-density aggregations and low-density splashes. Based on this classification, two depth maps are generated to quickly provide accurate start points and approximate stop points for ray marching. In addition to reducing the number of marching steps, we also propose a method to adaptively determine the number of rays cast for different screen regions. Finally, in order to improve the quality of the reconstructed surfaces, we present a method to adaptively blend the normal vectors computed in screen and object space. With the various adaptive optimizations mentioned above, our method can reconstruct high-quality fluid surfaces in real time.
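    A sketch of the adaptive-stepping idea (the grid layout and step rule are invented for illustration): a binary grid marks cells containing fluid, and the marcher takes large steps through empty cells and fine steps inside occupied ones.

```python
# Sketch: ray marching whose step length adapts to a binary density grid.
import numpy as np

def adaptive_march(origin, direction, grid, cell, t_start, t_stop,
                   coarse_step, fine_step):
    d = direction / np.linalg.norm(direction)
    t, hits = t_start, []
    while t < t_stop:
        p = origin + t * d
        idx = tuple((p // cell).astype(int))
        inside = all(0 <= i < s for i, s in zip(idx, grid.shape))
        if inside and grid[idx]:
            hits.append(t)            # occupied cell: refine the sampling
            t += fine_step
        else:
            t += coarse_step          # empty cell: skip ahead
    return hits

grid = np.zeros((8, 8, 8), dtype=bool)
grid[3:5, 3:5, 3:5] = True            # a small blob of fluid
hits = adaptive_march(np.array([0., 3.5, 3.5]), np.array([1., 0., 0.]),
                      grid, 1.0, 0.0, 8.0, 0.5, 0.05)
print(len(hits), "fine samples between t =", round(hits[0], 2),
      "and", round(hits[-1], 2))
```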
  • Item
    Meshlets and How to Shade Them: A Study on Texture-Space Shading
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Neff, Thomas; Mueller, Joerg H.; Steinberger, Markus; Schmalstieg, Dieter; Chaine, Raphaëlle; Kim, Min H.
    Commonly used image-space layouts of shading points, such as those used in deferred shading, are strictly view-dependent, which restricts efficient caching and temporal amortization. In contrast, texture-space layouts can represent shading on all surface points and can be tailored to the needs of a particular application. However, the best grouping of shading points, which we call a shading unit, in texture space remains unclear. Choices of shading-unit granularity (how many primitives or pixels per unit) and shading-unit parametrization (how to assign texture coordinates to shading points) lead to different outcomes in terms of final image quality, overshading cost, and memory consumption. Among the possible choices, shading units consisting of larger groups of scene primitives, so-called meshlets, remain unexplored as of yet. In this paper, we introduce a taxonomy for analyzing existing texture-space shading methods based on the group size and parametrization of shading units. Furthermore, we introduce a novel texture-space layout strategy that operates on large shading units: the meshlet shading atlas. We experimentally demonstrate that the meshlet shading atlas outperforms previous approaches in terms of image quality, run-time performance, and temporal upsampling for a given number of fragment shader invocations. The meshlet shading atlas also lends itself to use with popular cluster-based rendering of meshes with high geometric detail.
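    One way to illustrate the granularity trade-off (the sizing heuristic and tile sizes below are invented, not the paper's scheme): give each meshlet an atlas tile whose resolution tracks its projected screen-space area, so shading density roughly matches what the final image needs.

```python
# Sketch: per-meshlet atlas tile allocation driven by projected screen size.
import math

TILE_SIZES = [8, 16, 32, 64, 128]     # power-of-two tiles simplify packing

def tile_size_for_meshlet(projected_area_px):
    """Pick the smallest tile with roughly one texel per covered pixel."""
    side = math.sqrt(max(projected_area_px, 1.0))
    for s in TILE_SIZES:
        if s >= side:
            return s
    return TILE_SIZES[-1]

# Distant meshlets get small tiles; nearby ones get finer shading.
for area in (20.0, 500.0, 9000.0):
    print(area, "->", tile_size_for_meshlet(area))
```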
  • Item
    Data Parallel Path Tracing with Object Hierarchies
    (ACM Association for Computing Machinery, 2022) Wald, Ingo; Parker, Steven G; Josef Spjut; Marc Stamminger; Victor Zordan
    We propose a new approach to rendering production-style content with full path tracing in a data-distributed fashion, that is, with multiple collaborating nodes and/or GPUs that each store only part of the model. In particular, we propose a new approach to ray-forwarding-based data-parallel ray tracing that improves over traditional spatial partitioning, that can support both object-hierarchy and spatial partitioning (or any combination thereof), and that employs multiple techniques for reducing the number of rays sent across the network. We show that this approach can simultaneously achieve higher flexibility in model partitioning, lower memory per node, lower bandwidth during rendering, and higher performance, and that it can ultimately achieve interactive rendering performance for non-trivial models with full path tracing, even on quite moderate hardware resources with a relatively low-end interconnect.
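    A single-process sketch of the ray-forwarding idea (the data layout and stub tracer are invented; a real system would trace against a local acceleration structure and exchange rays over the interconnect):

```python
# Sketch: each "node" owns part of the scene; a ray traced locally either
# terminates or is forwarded to the node owning the geometry it enters next.
from collections import deque

class Node:
    def __init__(self, node_id, owned_objects):
        self.id, self.owned = node_id, owned_objects

    def trace_local(self, ray):
        """Return ('hit', color) or ('forward', next_node_id).
        A real implementation traverses the local BVH; stubbed here."""
        nxt = ray.get("next_owner")
        if nxt is not None and nxt != self.id:
            return ("forward", nxt)
        return ("hit", (0.5, 0.5, 0.5))

nodes = {0: Node(0, {"terrain"}), 1: Node(1, {"hero_asset"})}
queues = {i: deque() for i in nodes}     # stand-in for network send queues
queues[0].append({"pixel": (4, 7), "next_owner": 1})

# Process until no rays are in flight; forwarding moves rays between queues.
while any(queues.values()):
    for nid, q in queues.items():
        if not q:
            continue
        ray = q.popleft()
        result = nodes[nid].trace_local(ray)
        if result[0] == "forward":
            ray["next_owner"] = None     # consumed by the receiving node
            queues[result[1]].append(ray)
        else:
            print("pixel", ray["pixel"], "shaded on node", nid)
```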
  • Item
    Stochastic Light Culling for Single Scattering in Participating Media
    (The Eurographics Association, 2022) Fujieda, Shin; Tokuyoshi, Yusuke; Harada, Takahiro; Pelechano, Nuria; Vanderhaeghe, David
    We introduce a simple but efficient method to compute single scattering from point and arbitrarily shaped area light sources in participating media. Our method extends the stochastic light culling method to volume rendering by considering the intersection of a ray and the spherical bounds of light influence ranges. For primary rays, this allows simple computation of the lighting in participating media without hierarchical data structures such as a light tree. First, we show how to combine equiangular sampling with the proposed light culling method in the simple case of point lights. We then apply it to arbitrarily shaped area lights by considering virtual point lights on the surface of the area lights. Using our method, we are able to improve the rendering quality for scenes with many lights without tree construction and traversal.