Search Results

Now showing 1 - 10 of 45
  • Item
    MesoGAN: Generative Neural Reflectance Shells
    (© 2023 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Diolatzis, Stavros; Novak, Jan; Rousselle, Fabrice; Granskog, Jonathan; Aittala, Miika; Ramamoorthi, Ravi; Drettakis, George; Hauser, Helwig and Alliez, Pierre
    We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell: a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repeating artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end‐to‐end training within memory constraints of current hardware, we utilize a hierarchical texturing approach and train our model on multi‐scale synthetic datasets of 3D mesoscale structures. We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g. fibre length, density of strands, lighting direction) and demonstrate and discuss its integration into physically based renderers.
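    A minimal sketch of the randomized Fourier-feature encoding idea mentioned above, applied to 2D texture coordinates; the frequency count, scale and Gaussian sampling below are illustrative assumptions, not values from the paper.
    ```python
    # Minimal sketch: randomized Fourier-feature encoding of 2D texture
    # coordinates, in the spirit of the non-repeating feature textures
    # described above. Frequency count and scale are illustrative.
    import numpy as np

    def fourier_features(uv, num_freqs=64, scale=10.0, seed=0):
        """Map (N, 2) texture coordinates to (N, 2*num_freqs) features."""
        rng = np.random.default_rng(seed)
        B = rng.normal(0.0, scale, size=(2, num_freqs))  # random frequency matrix
        proj = 2.0 * np.pi * uv @ B                      # (N, num_freqs) phases
        return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

    uv = np.random.default_rng(1).uniform(0.0, 4.0, size=(8, 2))  # coords outside [0, 1]
    print(fourier_features(uv).shape)  # (8, 128)
    ```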
  • Item
    Point-Based Neural Rendering with Per-View Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Kopanas, Georgios; Philip, Julien; Leimkühler, Thomas; Drettakis, George; Bousseau, Adrien and McGuire, Morgan
    There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation, but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis. A key element of our approach is our new differentiable point-based pipeline, based on bi-directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods both in quality and speed in almost all scenes we tested. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel-view synthesis.
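    A minimal sketch of the anisotropic Gaussian footprint at the heart of Elliptical Weighted Average splatting, with an arbitrary example covariance; the paper's bi-directional formulation, probabilistic depth test and camera selection are not reproduced here.
    ```python
    # Minimal sketch: weight of a pixel under an elliptical (anisotropic
    # Gaussian) splat, the basic EWA ingredient. Covariance values are arbitrary.
    import numpy as np

    def ewa_weight(pixel, center, cov2d):
        """Gaussian falloff of a screen-space splat evaluated at `pixel`."""
        d = pixel - center
        return float(np.exp(-0.5 * d @ np.linalg.inv(cov2d) @ d))

    cov = np.array([[2.0, 0.6],
                    [0.6, 1.0]])  # anisotropic screen-space footprint
    print(ewa_weight(np.array([1.0, 0.5]), np.array([0.0, 0.0]), cov))
    ```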
  • Item
    Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Thonat, Theo; Aksoy, Yagiz; Aittala, Miika; Paris, Sylvain; Durand, Fredo; Drettakis, George; Bousseau, Adrien and McGuire, Morgan
    Image-Based Rendering allows users to easily capture a scene using a single camera and then navigate freely with realistic results. However, the resulting renderings are completely static, and dynamic effects - such as fire, waterfalls or small waves - cannot be reproduced. We tackle the challenging problem of enabling free-viewpoint navigation including such stationary dynamic effects, but still maintaining the simplicity of casual capture. Using a single camera - instead of previous complex synchronized multi-camera setups - means that we have unsynchronized videos of the dynamic effect from multiple views, making it hard to blend them when synthesizing novel views. We present a solution that allows smooth free-viewpoint video-based rendering (VBR) of such scenes using a temporal Laplacian pyramid decomposition of the videos, enabling spatio-temporal blending. For effects such as fire and waterfalls, which are semi-transparent and occupy 3D space, we first estimate their spatial volume. This allows us to create per-video geometries and alpha-matte videos that we can blend using our frequency-dependent method. We also extend Laplacian blending to the temporal dimension to remove additional temporal seams. We show results on scenes containing fire, waterfalls or rippling waves at the seaside, bringing these scenes to life.
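    A minimal sketch of frequency-dependent blending with a Laplacian decomposition, the spatial building block that the temporal extension above generalizes; for brevity this uses a band-pass stack without downsampling, and the level count and blur width are illustrative assumptions.
    ```python
    # Minimal sketch: Laplacian-style frequency decomposition and blend of two
    # images. A stack (no downsampling) is used for brevity; levels/sigma are
    # illustrative, not the paper's settings.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def laplacian_stack(img, levels=4, sigma=2.0):
        stack, current = [], img.astype(float)
        for _ in range(levels - 1):
            low = gaussian_filter(current, sigma)
            stack.append(current - low)   # band-pass detail
            current = low
        stack.append(current)             # residual low frequencies
        return stack

    def blend(img_a, img_b, alpha, levels=4):
        sa, sb = laplacian_stack(img_a, levels), laplacian_stack(img_b, levels)
        return sum(alpha * a + (1.0 - alpha) * b for a, b in zip(sa, sb))

    print(blend(np.random.rand(64, 64), np.random.rand(64, 64), alpha=0.5).shape)
    ```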
  • Item
    Interactive Sampling and Rendering for Complex and Procedural Geometry
    (The Eurographics Association, 2001) Stamminger, Marc; Drettakis, George; S. J. Gortler and K. Myszkowski
    We present a new sampling method for procedural and complex geometries, which allows interactive point-based modeling and rendering of such scenes. For a variety of scenes, object-space point sets can be generated rapidly, resulting in a sufficiently dense sampling of the final image. We present an integrated approach that exploits the simplicity of the point primitive. For procedural objects a hierarchical sampling scheme is presented that adapts sample densities locally according to the projected size in the image. Dynamic procedural objects and interactive user manipulation thus become possible. The same scheme is also applied to on-the-fly generation and rendering of terrains, and enables the use of an efficient occlusion culling algorithm. Furthermore, by using points the system enables interactive rendering and simple modification of complex objects (e.g., trees). For display, hardware-accelerated 3-D point rendering is used, but our sampling method can be used by any other point-rendering approach.
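    A minimal sketch of the density criterion described above: choose a point-sample count from an object's projected size so neighbouring samples land roughly one pixel apart. The pinhole camera model and spacing target below are simplifying assumptions.
    ```python
    # Minimal sketch: number of point samples needed so that screen-space sample
    # spacing is roughly `spacing_px` pixels, given a bounding-sphere radius and
    # a pinhole camera. The projection model is a simplifying assumption.
    import math

    def sample_count(object_radius, distance, fov_y, image_height, spacing_px=1.0):
        pixels_per_unit = image_height / (2.0 * distance * math.tan(fov_y / 2.0))
        projected_radius_px = object_radius * pixels_per_unit
        projected_area_px = math.pi * projected_radius_px ** 2
        return max(1, int(projected_area_px / spacing_px ** 2))

    print(sample_count(object_radius=1.0, distance=10.0,
                       fov_y=math.radians(60), image_height=1080))
    ```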
  • Item
    Unifying Color and Texture Transfer for Predictive Appearance Manipulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2015) Okura, Fumio; Vanhoey, Kenneth; Bousseau, Adrien; Efros, Alexei A.; Drettakis, George; Jaakko Lehtinen and Derek Nowrouzezahrai
    Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season - e.g., leaves on bare trees or piles of snow on a street - and flooding.
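    A minimal sketch of the kind of analysis map described above: a per-patch error between the colour-transferred source and the exemplar, flagging regions where colour transfer alone is unlikely to succeed and texture synthesis should take over. Patch size, error metric and threshold are illustrative stand-ins, not the paper's criterion.
    ```python
    # Minimal sketch: per-patch failure map comparing a colour-transferred source
    # against the exemplar. Patch size and threshold are illustrative.
    import numpy as np

    def color_transfer_failure_map(transferred, exemplar, patch=16, thresh=0.05):
        rows, cols = transferred.shape[0] // patch, transferred.shape[1] // patch
        fail = np.zeros((rows, cols), dtype=bool)
        for i in range(rows):
            for j in range(cols):
                a = transferred[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                b = exemplar[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                fail[i, j] = np.mean((a - b) ** 2) > thresh
        return fail  # True where texture synthesis should generate new content

    print(color_transfer_failure_map(np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)))
    ```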
  • Item
    Guided Fine-Tuning for Large-Scale Material Transfer
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Deschaintre, Valentin; Drettakis, George; Bousseau, Adrien; Dachsbacher, Carsten and Pharr, Matt
    We present a method to transfer the appearance of one or a few exemplar SVBRDFs to a target image representing similar materials. Our solution is extremely simple: we fine-tune a deep appearance-capture network on the provided exemplars, such that it learns to extract similar SVBRDF values from the target image. We introduce two novel material capture and design workflows that demonstrate the strength of this simple approach. Our first workflow makes it possible to produce plausible SVBRDFs of large-scale objects from only a few pictures. Specifically, users only need to take a single picture of a large surface and a few close-up flash pictures of some of its details. We use existing methods to extract SVBRDF parameters from the close-ups, and our method to transfer these parameters to the entire surface, enabling the lightweight capture of surfaces several meters wide such as murals, floors and furniture. In our second workflow, we provide a powerful way for users to create large SVBRDFs from internet pictures by transferring the appearance of existing, pre-designed SVBRDFs. By selecting different exemplars, users can control the materials assigned to the target image, greatly enhancing the creative possibilities offered by deep appearance capture.
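    A minimal sketch of the fine-tuning idea, with a placeholder convolutional network and random tensors standing in for real photographs and SVBRDF maps; the architecture, channel layout, loss and hyper-parameters are assumptions, not the paper's.
    ```python
    # Minimal sketch: fine-tune a (placeholder) appearance-capture network on a
    # few exemplar crops, then run it on the large-scale target picture.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 9, 3, padding=1))   # stand-in for a pretrained model
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)

    exemplar_img = torch.rand(4, 3, 64, 64)      # close-up flash pictures
    exemplar_svbrdf = torch.rand(4, 9, 64, 64)   # their stacked SVBRDF maps (stand-in layout)

    for _ in range(100):                         # brief fine-tuning on the exemplars only
        opt.zero_grad()
        loss = nn.functional.l1_loss(net(exemplar_img), exemplar_svbrdf)
        loss.backward()
        opt.step()

    with torch.no_grad():
        transferred = net(torch.rand(1, 3, 64, 64))   # SVBRDF for the target picture
    ```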
  • Item
    Incremental Updates for Rapid Glossy Global Illumination
    (Blackwell Publishers Ltd and the Eurographics Association, 2001) Granier, Xavier; Drettakis, George
    We present an integrated global illumination algorithm including non-diffuse light transport which can handle complex scenes and enables rapid incremental updates. We build on a unified algorithm which uses hierarchical radiosity with clustering and particle tracing for diffuse and non-diffuse transport respectively. We present a new algorithm which chooses between reconstructing specular effects such as caustics on the diffuse radiosity mesh and using special-purpose caustic textures when high frequencies are present. Algorithms are presented to choose the resolution of these textures and to reconstruct the high-frequency non-diffuse lighting effects. We use a dynamic spatial data structure to restrict the number of particles re-emitted during the local modifications of the scene. By combining this incremental particle trace with a line-space hierarchy for incremental update of diffuse illumination, we can locally modify complex scenes rapidly. We also develop an algorithm which, by permitting slight quality degradation during motion, achieves quasi-interactive updates. We present an implementation of our new method and its application to indoor and outdoor scenes.
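    A minimal sketch of the storage decision described above: keep smooth non-diffuse lighting on the radiosity mesh, but allocate a caustic texture, with resolution driven by particle density, when the particle hits vary quickly. The frequency estimate and thresholds are illustrative assumptions, not the paper's criteria.
    ```python
    # Minimal sketch: decide between mesh reconstruction and a caustic texture
    # from a histogram of particle hits; thresholds are illustrative assumptions.
    import numpy as np

    def choose_caustic_storage(hit_density, freq_thresh=0.5, hits_per_texel=4.0):
        gy, gx = np.gradient(hit_density.astype(float))
        variation = np.mean(np.hypot(gx, gy)) / (hit_density.mean() + 1e-8)
        if variation < freq_thresh:
            return "mesh", None                       # smooth: radiosity mesh suffices
        resolution = int(np.sqrt(hit_density.sum() / hits_per_texel))
        return "texture", max(resolution, 16)         # high frequency: caustic texture

    print(choose_caustic_storage(np.random.poisson(2.0, size=(32, 32))))
    ```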
  • Item
    Proxy-Guided Texture Synthesis for Rendering Natural Scenes
    (The Eurographics Association, 2010) Bonneel, Nicolas; Panne, Michiel van de; Lefebvre, Sylvain; Drettakis, George; Reinhard Koch and Andreas Kolb and Christof Rezk-Salama
    Landscapes and other natural scenes are easy to photograph but difficult to model and render. We present a proxy-guided pipeline which allows for simple 3D proxy geometry to be rendered with the rich visual detail found in a suitably pre-annotated example image. This greatly simplifies the geometric modeling and texture mapping of such scenes. Our method renders at near-interactive rates and is designed by carefully adapting guidance-based texture synthesis to our goals. A guidance-map synthesis step is used to obtain silhouettes and borders that have the same rich detail as the source photo, using a Chamfer distance metric as a principled way of dealing with discrete texture labels. We adapt an efficient parallel approach to the challenging guided synthesis step we require, providing a fast and scalable solution. We provide a solution for local temporal coherence by introducing a reprojection algorithm that reuses earlier synthesis results when feasible, as measured by a distortion metric. Our method allows for the consistent integration of standard CG elements with the texture-synthesized elements. We demonstrate near-interactive camera motion and landscape editing on a number of examples.
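    A minimal sketch of a Chamfer-style distance between discrete label maps, the kind of metric used above to compare guidance labels by spatial proximity rather than by raw label value; the label ids and regions are illustrative.
    ```python
    # Minimal sketch: mean distance from pixels carrying `label` in map A to the
    # nearest pixel carrying the same label in map B. Labels are illustrative.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_label_distance(labels_a, labels_b, label):
        mask_a = labels_a == label
        dist_to_b = distance_transform_edt(labels_b != label)  # distance to label in B
        return float(dist_to_b[mask_a].mean()) if mask_a.any() else 0.0

    a = np.zeros((64, 64), dtype=int); a[20:40, 20:40] = 1   # e.g. a "tree" region
    b = np.zeros((64, 64), dtype=int); b[24:44, 18:38] = 1
    print(chamfer_label_distance(a, b, label=1))
    ```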
  • Item
    Drawing for Illustration and Annotation in 3D
    (Blackwell Publishers Ltd and the Eurographics Association, 2001) Bourguignon, David; Cani, Marie-Paule; Drettakis, George
    We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model.
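    A minimal sketch of view-dependent stroke fading, assuming a simple cosine falloff between the drawing viewpoint and the current one; this is a stand-in for, not a reproduction of, the paper's silhouette-based deformation and occlusion handling.
    ```python
    # Minimal sketch: fade a stroke as the current view direction departs from
    # the one it was drawn in. The falloff exponent is an illustrative choice.
    import numpy as np

    def stroke_opacity(drawn_view_dir, current_view_dir, falloff=4.0):
        a = np.asarray(drawn_view_dir, float);   a /= np.linalg.norm(a)
        b = np.asarray(current_view_dir, float); b /= np.linalg.norm(b)
        return max(0.0, float(a @ b)) ** falloff

    print(stroke_opacity([0.0, 0.0, 1.0], [0.3, 0.0, 1.0]))
    ```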
  • Item
    Vectorising Bitmaps into Semi-Transparent Gradient Layers
    (The Eurographics Association and John Wiley and Sons Ltd., 2014) Richardt, Christian; Lopez-Moreno, Jorge; Bousseau, Adrien; Agrawala, Maneesh; Drettakis, George; Wojciech Jarosz and Pieter Peers
    We present an interactive approach for decompositing bitmap drawings and studio photographs into opaque and semi-transparent vector layers. Semi-transparent layers are especially challenging to extract, since they require the inversion of the non-linear compositing equation. We make this problem tractable by exploiting the parametric nature of vector gradients, jointly separating and vectorising semi-transparent regions. Specifically, we constrain the foreground colours to vary according to linear or radial parametric gradients, restricting the number of unknowns and allowing our system to efficiently solve for an editable semi-transparent foreground. We propose a progressive workflow, where the user successively selects a semi-transparent or opaque region in the bitmap, which our algorithm separates as a foreground vector gradient and a background bitmap layer. The user can choose to decompose the background further or vectorise it as an opaque layer. The resulting layered vector representation allows a variety of edits, such as modifying the shape of highlights, adding texture to an object or changing its diffuse colour.
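    A minimal sketch of the inverse-compositing step, assuming a constant foreground colour and a known background layer: recover per-pixel opacity from C = alpha*F + (1 - alpha)*B, then fit a linear gradient to the recovered opacity so it remains editable. The constant-colour assumption and the separate least-squares fit are simplifications of the joint separation described above.
    ```python
    # Minimal sketch: invert C = alpha*F + (1-alpha)*B for alpha, assuming the
    # foreground colour F is constant and the background B is known, then fit a
    # linear (vectorisable) opacity gradient. Simplifies the paper's joint solve.
    import numpy as np

    def recover_alpha(C, B, F):
        diff_fb = F - B                                    # (H, W, 3)
        num = np.sum((C - B) * diff_fb, axis=-1)
        den = np.sum(diff_fb * diff_fb, axis=-1) + 1e-8
        return np.clip(num / den, 0.0, 1.0)

    def fit_linear_gradient(alpha):
        """Fit alpha(x, y) ~ a0 + a1*x + a2*y over the whole region."""
        h, w = alpha.shape
        ys, xs = np.mgrid[0:h, 0:w]
        A = np.stack([np.ones(h * w), xs.ravel(), ys.ravel()], axis=1)
        coeffs, *_ = np.linalg.lstsq(A, alpha.ravel(), rcond=None)
        return coeffs                                      # editable gradient parameters

    H, W = 32, 32
    B = np.random.rand(H, W, 3)                            # background bitmap layer
    F = np.full((H, W, 3), (1.0, 0.9, 0.2))                # assumed constant foreground colour
    alpha_true = np.tile(np.linspace(0.1, 0.9, W), (H, 1))
    C = alpha_true[..., None] * F + (1 - alpha_true[..., None]) * B
    print(fit_linear_gradient(recover_alpha(C, B, F)))     # ~[0.10, 0.026, 0.00]
    ```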