EGSR07: 18th Eurographics Symposium on Rendering
https://diglib.eg.org:443/handle/10.2312/386
2024-03-29T09:44:08Z
https://diglib.eg.org:443/handle/10.2312/EGWR.EGSR07.361-370
Feature-Guided Dynamic Texture Synthesis on Continuous Flows
Narain, Rahul; Kwatra, Vivek; Lee, Huai-Ping; Kim, Theodore; Carlson, Mark; Lin, Ming C.
Jan Kautz and Sumanta Pattanaik
We present a technique for synthesizing spatially and temporally varying textures on continuous flows using image or video input, guided by the physical characteristics of the fluid stream itself. This approach enables the generation of realistic textures on the fluid that correspond to the local flow behavior, creating the appearance of complex surface effects, such as foam and small bubbles. Our technique requires only a simple specification of texture behavior, and automatically generates and tracks the features and texture over time in a temporally coherent manner. Based on this framework, we also introduce a technique to perform feature-guided video synthesis. We demonstrate our algorithm on several simulated and recorded natural phenomena, including splashing water and lava flows. We also show how our methodology can be extended beyond realistic appearance synthesis to more general scenarios, such as temperature-guided synthesis of complex surface phenomena in a liquid during boiling.
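A minimal sketch of the bookkeeping such flow-guided synthesis relies on: advecting texture feature positions through a velocity field so patches stay temporally coherent with the fluid. This is my illustration, not the paper's algorithm; the toy shear flow and forward-Euler stepping are assumptions.

```python
# Minimal sketch (my illustration, not the paper's method): advect 2D
# texture feature positions through a velocity field so the texture
# tracks the flow over time in a temporally coherent way.

def velocity(x, y):
    # Assumed toy shear flow, not taken from the paper.
    return (0.5 * y, 0.0)

def advect(features, dt, steps):
    """Forward-Euler advection of 2D feature positions."""
    for _ in range(steps):
        features = [(x + dt * velocity(x, y)[0],
                     y + dt * velocity(x, y)[1]) for x, y in features]
    return features

# Features higher in the shear flow drift faster, so the texture
# deforms with the local flow behavior.
pts = advect([(0.0, 1.0), (0.0, 2.0)], dt=0.1, steps=10)
```

A real implementation would sample the simulated or captured velocity field instead of the closed-form toy flow, and would regenerate features where the flow stretches or compresses them too far.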
2007-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.2312/EGWR.EGSR07.351-360
Interactive Smooth and Curved Shell Mapping
Jeschke, Stefan; Mantler, Stephan; Wimmer, Michael
Jan Kautz and Sumanta Pattanaik
Shell mapping is a technique to represent three-dimensional surface details. This is achieved by extruding the triangles of an existing mesh along their normals, and mapping a 3D function (e.g., a 3D texture) into the resulting prisms. Unfortunately, such a mapping is nonlinear. Previous approaches perform a piecewise linear approximation by subdividing the prisms into tetrahedra. However, such an approximation often leads to severe artifacts. In this paper we present a correct (i.e., smooth) mapping that does not rely on a decomposition into tetrahedra. We present an efficient GPU ray casting algorithm which provides correct parallax, self-occlusion, and silhouettes, at the cost of longer rendering times. The new formulation also allows modeling shells with smooth curvatures using Coons patches within the prisms. Tangent continuity between adjacent prisms is guaranteed, while the mapping itself remains local, i.e., the content of every curved prism is evaluated at runtime on the GPU without the need for any precomputation. This allows instantly replacing animated triangular meshes with prism-based shells.
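A minimal sketch of the prism mapping the abstract refers to, to show where the nonlinearity comes from. The vertex positions and (non-parallel) normals are assumed illustration data, not from the paper:

```python
# Minimal sketch (not the paper's GPU implementation): a point inside a
# shell prism is given by barycentric coordinates on the base triangle
# plus an extrusion height h in [0, 1]; its world position interpolates
# the base vertices offset along their vertex normals.

def prism_point(verts, normals, bary, h):
    """Map (bary, h) in the prism to a 3D point.
    Linear in bary for fixed h and linear in h for fixed bary, but the
    composite map is bilinear, so straight rays in texture space map to
    curves in world space -- the nonlinearity the paper handles exactly
    instead of approximating with tetrahedra."""
    return tuple(
        sum(b * (v[i] + h * n[i]) for b, v, n in zip(bary, verts, normals))
        for i in range(3)
    )

# Assumed example data: a unit triangle with non-parallel normals.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
normals = [(0.0, 0.0, 1.0), (0.3, 0.0, 1.0), (0.0, 0.3, 1.0)]

base = prism_point(verts, normals, (1/3, 1/3, 1/3), 0.0)  # on the mesh
top = prism_point(verts, normals, (1/3, 1/3, 1/3), 1.0)   # shell cap
```

Because the normals are not parallel, the centroid's extrusion path tilts sideways as h grows; a ray caster through the prism must account for this curvature rather than assume straight-line interpolation.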
2007-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.2312/EGWR.EGSR07.339-349
Compressed Random-Access Trees for Spatially Coherent Data
Lefebvre, Sylvain; Hoppe, Hugues
Jan Kautz and Sumanta Pattanaik
Adaptive multiresolution hierarchies are highly efficient at representing spatially coherent graphics data. We introduce a framework for compressing such adaptive hierarchies using a compact randomly-accessible tree structure. Prior schemes have explored compressed trees, but nearly all involve entropy coding of a sequential traversal, thus preventing fine-grain random queries required by rendering algorithms. Instead, we use fixed-rate encoding for both the tree topology and its data. Key elements include the replacement of pointers by local offsets, a forested mipmap structure, vector quantization of inter-level residuals, and efficient coding of partially defined data. Both the offsets and codebook indices are stored as byte records for easy parsing by either CPU or GPU shaders. We show that continuous mipmapping over an adaptive tree is more efficient using primal subdivision than traditional dual subdivision. Finally, we demonstrate efficient compression of many data types including light maps, alpha mattes, distance fields, and HDR images.
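A minimal sketch of the key idea of local offsets in fixed-size byte records, which is what makes fine-grain random queries possible without sequential entropy decoding. The record layout below is an assumption for illustration, not the paper's exact encoding:

```python
# Minimal sketch (assumed layout, not the paper's encoding): a quadtree
# flattened into fixed-size byte records in which each child reference
# is a small local offset from the current record rather than an
# absolute pointer, so any leaf is reachable by random descent.

import struct

NODE_FMT = "<4B B 3x"  # 4 child offsets (0 = unrefined), 1 data byte, pad
NODE_SIZE = struct.calcsize(NODE_FMT)  # fixed-rate: 8 bytes per node

def pack_node(child_offsets, data):
    return struct.pack(NODE_FMT, *child_offsets, data)

# Two-record adaptive tree: the root (record 0) refines only quadrant 2,
# whose child sits one record ahead; the other quadrants fall back to
# the root's own coarse data byte.
tree = pack_node((0, 0, 1, 0), 5) + pack_node((0, 0, 0, 0), 7)

def query(tree, record, quadrants):
    """Descend by quadrant indices; stop where the tree is unrefined."""
    for q in quadrants:
        offs = struct.unpack_from(NODE_FMT, tree, record * NODE_SIZE)[:4]
        if offs[q] == 0:
            break  # adaptive: coarser data covers this region
        record += offs[q]  # local offset, relative to the current record
    return struct.unpack_from(NODE_FMT, tree, record * NODE_SIZE)[4]
```

Each query touches only the records on one root-to-leaf path, which is the access pattern a GPU shader needs; the paper's scheme additionally vector-quantizes inter-level residuals rather than storing raw data bytes.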
2007-01-01T00:00:00Z
https://diglib.eg.org:443/handle/10.2312/EGWR.EGSR07.327-338
Using Photographs to Enhance Videos of a Static Scene
Bhat, Pravin; Zitnick, C. Lawrence; Snavely, Noah; Agarwala, Aseem; Agrawala, Maneesh; Cohen, Michael; Curless, Brian; Kang, Sing Bing
Jan Kautz and Sumanta Pattanaik
We present a framework for automatically enhancing videos of a static scene using a few photographs of the same scene. For example, our system can transfer photographic qualities such as high resolution, high dynamic range and better lighting from the photographs to the video. Additionally, the user can quickly modify the video by editing only a few still images of the scene. Finally, our system allows a user to remove unwanted objects and camera shake from the video. These capabilities are enabled by two technical contributions presented in this paper. First, we make several improvements to a state-of-the-art multiview stereo algorithm in order to compute view-dependent depths using video, photographs, and structure-from-motion data. Second, we present a novel image-based rendering algorithm that can re-render the input video using the appearance of the photographs while preserving certain temporal dynamics such as specularities and dynamic scene lighting.
2007-01-01T00:00:00Z