Search Results

Now showing 1 - 10 of 50
  • Item
    EUROGRAPHICS 2020: Short Papers Frontmatter
    (Eurographics Association, 2020) Wilkie, Alexander; Banterle, Francesco; Wilkie, Alexander and Banterle, Francesco
  • Item
    EUROGRAPHICS 2020: CGF 39-2 STARs Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Mantiuk, Rafal; Sundstedt, Veronica; Mantiuk, Rafal and Sundstedt, Veronica
  • Item
    Triplanar Displacement Mapping for Terrain Rendering
    (The Eurographics Association, 2020) Weiss, Sebastian; Bayer, Florian; Westermann, Rüdiger; Wilkie, Alexander and Banterle, Francesco
    Heightmap-based terrain representations are common in computer games and simulations. However, adding geometric details to such a representation during rendering on the GPU is difficult to achieve. In this paper, we propose a combination of triplanar mapping, displacement mapping, and tessellation on the GPU to create extruded geometry along steep faces of heightmap-based terrain fields on-the-fly during rendering. The method allows rendering geometric details such as overhangs and boulders without explicit triangulation. We further demonstrate how to handle collisions and shadows for the enriched geometry.
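    The core of the triplanar approach the abstract mentions is blending three axis-aligned planar projections by weights derived from the surface normal. A minimal NumPy sketch of that blending (function names and the sharpness parameter are illustrative assumptions, not the paper's actual GPU implementation, which runs in tessellation/fragment shaders):

```python
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Blend weights for the three axis-aligned projections.

    Weights are proportional to |n|^sharpness per axis and sum to one,
    so steep (side-facing) surfaces are dominated by the side projections.
    """
    w = np.abs(np.asarray(normal, dtype=float)) ** sharpness
    return w / w.sum()

def triplanar_sample(texture, position, normal):
    """Sample a 2D detail texture once per axis-aligned projection
    (YZ, XZ, XY planes) and blend by the normal-derived weights."""
    x, y, z = position
    wx, wy, wz = triplanar_weights(normal)
    return wx * texture(y, z) + wy * texture(x, z) + wz * texture(x, y)
```

    On a perfectly upward-facing point the weights collapse to the top (XY-plane) projection; on a vertical cliff face the side projections dominate, which is what lets displacement details appear on steep faces.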
  • Item
    Recreating Past and Present: An Exceptional Student-Created Virtual Heritage Experience
    (The Eurographics Association, 2020) Anderson, Eike Falk; Sloan, Susan; Romero, Mario and Sousa Santos, Beatrice
    We present an outstanding undergraduate student project in the form of a virtual heritage experience, created by a multidisciplinary group of six 4th-semester undergraduate students from computer-graphics-related programmes of study ranging from 3D art and design to graphics software development. The "Exercise Smash" application allows participants to take part in a 1944 military exercise that was held in preparation for the D-Day landings in Normandy, during which several amphibious tanks sank, and then to dive down to the tank wrecks on the modern-day seafloor. The virtual heritage experience was presented during a public event at a military history museum and has also been demonstrated at an archaeology conference, being well received in both cases.
  • Item
    State of the Art on Neural Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Tewari, Ayush; Fried, Ohad; Thies, Justus; Sitzmann, Vincent; Lombardi, Stephen; Sunkavalli, Kalyan; Martin-Brualla, Ricardo; Simon, Tomas; Saragih, Jason; Nießner, Matthias; Pandey, Rohit; Fanello, Sean; Wetzstein, Gordon; Zhu, Jun-Yan; Theobalt, Christian; Agrawala, Maneesh; Shechtman, Eli; Goldman, Dan B.; Zollhöfer, Michael; Mantiuk, Rafal and Sundstedt, Veronica
    Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of this state-of-the-art report is focused on the many important use cases for the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
  • Item
    Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks
    (The Eurographics Association, 2020) Biland, Simon; Azevedo, Vinicius C.; Kim, Byungsoo; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
    Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1 loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher frequency bands. This directly correlates with the perceived quality of the results, since missing high-frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves the reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
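    The band-weighting idea behind such a loss can be sketched with NumPy: transform prediction and target to the Fourier domain, partition frequencies into radial bands, and weight the per-band L1 error. This is an illustrative reconstruction of the concept only; the paper's actual loss, its band partitioning, and its integration into network training are not specified here, and all names below are assumptions.

```python
import numpy as np

def frequency_aware_l1(pred, target, band_weights):
    """L1 error in the Fourier domain with per-radial-band weighting.

    `band_weights` lists one scalar weight per band, ordered from the
    lowest to the highest frequencies, letting the loss focus on
    chosen bands (e.g. emphasize mid/high frequencies).
    """
    f_pred = np.fft.fftshift(np.fft.fft2(pred))
    f_tgt = np.fft.fftshift(np.fft.fft2(target))
    h, w = pred.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    # Assign every frequency to one of len(band_weights) radial bands.
    n_bands = len(band_weights)
    bands = np.minimum((radius / radius.max() * n_bands).astype(int),
                       n_bands - 1)
    weights = np.asarray(band_weights, dtype=float)[bands]
    return float(np.mean(weights * np.abs(f_pred - f_tgt)))
```

    Setting the low-band weights to zero, for instance, makes the loss ignore coarse structure and penalize only the finer bands.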
  • Item
    Procedural 3D Asteroid Surface Detail Synthesis
    (The Eurographics Association, 2020) Li, Xi-zhi; Weller, René; Zachmann, Gabriel; Wilkie, Alexander and Banterle, Francesco
    We present a novel noise model to procedurally generate volumetric terrain on implicit surfaces. The main idea is to combine a novel Locally Controlled 3D Spot noise (LCSN) for authoring the macro structures with 3D Gabor noise for adding micro details. More specifically, a spatially defined kernel formulation combined with an impulse distribution enables the LCSN to generate craters and boulders of arbitrary size, while the Gabor noise generates stochastic Gaussian details. The corresponding metaball positions in the underlying implicit surface preserve locality, avoiding the global character of traditional procedural noise textures; this yields an essential feature that is often missing in procedural-texture-based terrain generators. Furthermore, different noise-based primitives are integrated through operators, i.e., blending, replacing, or warping, into the complex volumetric terrain. The result is a completely implicit representation and, as such, has the advantage of compactness as well as flexible user control. We applied our method to generating high-quality asteroid meshes with fine surface details.
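    The Gabor-noise component the abstract refers to is classically built by sparse convolution: a sum of Gabor kernels (Gaussian envelope times a cosine wave) centred on randomly scattered impulses. A minimal sketch of that standard construction (not the paper's LCSN, whose kernel formulation is its own contribution; function names and parameters are illustrative assumptions):

```python
import numpy as np

def gabor_kernel_3d(x, bandwidth, frequency):
    """3D Gabor kernel: isotropic Gaussian envelope modulating a
    cosine wave with the given 3D frequency vector."""
    x = np.asarray(x, dtype=float)
    envelope = np.exp(-np.pi * bandwidth**2 * np.dot(x, x))
    return envelope * np.cos(2.0 * np.pi * np.dot(frequency, x))

def sparse_gabor_noise(p, impulses, bandwidth, frequency):
    """Sparse-convolution noise at point p: sum of weighted Gabor
    kernels centred on impulses given as (position, weight) pairs."""
    p = np.asarray(p, dtype=float)
    return sum(w * gabor_kernel_3d(p - np.asarray(c), bandwidth, frequency)
               for c, w in impulses)
```

    In the paper's setting such micro-detail noise is added on top of the macro structures authored with the locally controlled spot noise.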
  • Item
    Neural Smoke Stylization with Color Transfer
    (The Eurographics Association, 2020) Christen, Fabienne; Kim, Byungsoo; Azevedo, Vinicius C.; Solenthaler, Barbara; Wilkie, Alexander and Banterle, Francesco
    Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid and omits color information. In this work, we therefore extend the previous approach to obtain a complete pipeline for transferring shape and color information onto 2D and 3D smoke simulations with neural networks. Our results demonstrate that our method successfully transfers colored style features to smoke data, consistently in space and time, for different input textures.
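    For intuition about what "color transfer" means here, a classic non-neural baseline is per-channel statistics matching in the style of Reinhard et al.: shift and scale each source channel so its mean and standard deviation match the target's. This is explicitly a swapped-in textbook technique for illustration, not the paper's neural pipeline:

```python
import numpy as np

def stat_color_transfer(source, target):
    """Per-channel mean/std matching: remap each channel of `source`
    so its statistics match the corresponding channel of `target`.
    Arrays are H x W x C; returns a float array of the same shape."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s, t = src[..., c], tgt[..., c]
        s_std = s.std() or 1.0  # guard against flat channels
        out[..., c] = (s - s.mean()) / s_std * t.std() + t.mean()
    return out
```

    The neural approach in the paper goes further by transferring spatially varying, temporally coherent colored style features rather than global channel statistics.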
  • Item
    Treemap Literacy: A Classroom-Based Investigation
    (The Eurographics Association, 2020) Firat, Elif E.; Denisova, Alena; Laramee, Robert S.; Romero, Mario and Sousa Santos, Beatrice
    Visualization literacy, the ability to interpret and understand visual designs, has gained momentum in the educational and information visualization communities. The goal of this research is to identify and address barriers to literacy with treemaps, a popular visual design, with a view to improving non-expert users' treemap visualization literacy skills. In this paper we present the results of two years of an information visualization assignment, which are used to identify the barriers to and challenges of understanding and creating treemaps. From this, we develop a treemap visualization literacy test. Then, we propose a pedagogical tool that facilitates both teaching and learning of treemaps and advances treemap visualization literacy. To investigate the effectiveness of this educational software, we conduct a classroom-based study with 25 participants. Based on the results of the treemap visualization literacy test, we identify the properties of treemaps that can hinder literacy and cognition. The results also provide further support for the use of our tool, which had a positive effect on the treemap literacy skills of university students.
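    For readers unfamiliar with the design being studied: a treemap maps hierarchical values to nested rectangles whose areas are proportional to the values. The simplest layout, slice-and-dice, can be sketched in a few lines (a single-level illustrative sketch, unrelated to the paper's pedagogical tool; names are assumptions):

```python
def slice_and_dice(values, x, y, w, h, horizontal=True):
    """Single-level slice-and-dice treemap layout: split the rectangle
    (x, y, w, h) along one axis in proportion to each value.
    Returns one (x, y, w, h) rectangle per value."""
    total = float(sum(values))
    rects, offset = [], 0.0
    for v in values:
        frac = v / total
        if horizontal:
            rects.append((x + offset * w, y, frac * w, h))
        else:
            rects.append((x, y + offset * h, w, frac * h))
        offset += frac
    return rects
```

    Full treemaps recurse into each rectangle for child nodes, alternating the split axis per level; more elaborate algorithms such as squarified layouts improve rectangle aspect ratios, one of the perceptual properties literacy studies examine.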
  • Item
    Fabric Appearance Benchmark
    (The Eurographics Association, 2020) Merzbach, Sebastian; Klein, Reinhard; Ritschel, Tobias and Eilertsen, Gabriel
    Appearance modeling is a difficult problem that still receives considerable attention from the graphics and vision communities. Though recent years have brought a growing number of high-quality material databases that have sparked new research, there is a general lack of evaluation benchmarks for performance assessment and fair comparisons between competing works. We therefore release a new dataset and pose a public challenge that will enable standardized evaluations. For this we measured 56 fabric samples with a commercial appearance scanner. We publish the resulting calibrated HDR images, along with baseline SVBRDF fits. The challenge is to recreate, under known light and view sampling, the appearance of a subset of unseen images. User submissions will be automatically evaluated and ranked by a set of standard image metrics.