Search Results
Showing 1 - 10 of 12 results
Item
Unifying Color and Texture Transfer for Predictive Appearance Manipulation (The Eurographics Association and John Wiley & Sons Ltd., 2015)
Authors: Okura, Fumio; Vanhoey, Kenneth; Bousseau, Adrien; Efros, Alexei A.; Drettakis, George
Editors: Jaakko Lehtinen and Derek Nowrouzezahrai
Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season (e.g., leaves on bare trees or piles of snow on a street) and flooding.

Item
Proxy-Guided Texture Synthesis for Rendering Natural Scenes (The Eurographics Association, 2010)
Authors: Bonneel, Nicolas; Panne, Michiel van de; Lefebvre, Sylvain; Drettakis, George
Editors: Reinhard Koch, Andreas Kolb and Christof Rezk-Salama
Landscapes and other natural scenes are easy to photograph but difficult to model and render. We present a proxy-guided pipeline which allows simple 3D proxy geometry to be rendered with the rich visual detail found in a suitably pre-annotated example image. This greatly simplifies the geometric modeling and texture mapping of such scenes. Our method renders at near-interactive rates and is designed by carefully adapting guidance-based texture synthesis to our goals. A guidance-map synthesis step is used to obtain silhouettes and borders that have the same rich detail as the source photo, using a Chamfer distance metric as a principled way of dealing with discrete texture labels. We adapt an efficient parallel approach to the challenging guided synthesis step we require, providing a fast and scalable solution. We provide a solution for local temporal coherence by introducing a reprojection algorithm which reuses earlier synthesis results when feasible, as measured by a distortion metric. Our method allows for the consistent integration of standard CG elements with the texture-synthesized elements. We demonstrate near-interactive camera motion and landscape editing on a number of examples.
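The Chamfer metric over discrete label maps can be made concrete. The following is a minimal sketch, not the paper's implementation: it assumes scipy's Euclidean distance transform and small integer label maps, and scores a candidate map by how far each of its labeled pixels lies from the nearest same-labeled pixel in the guidance target.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_label_distance(candidate, target, labels):
    """Chamfer-style mismatch between two discrete label maps.

    For each label, pixels carrying that label in `candidate` are
    penalized by their distance to the nearest pixel with the same
    label in `target`, instead of an all-or-nothing per-pixel test.
    """
    cost = 0.0
    for lab in labels:
        # Distance from every pixel to the nearest `lab` pixel in target.
        dist_to_target = distance_transform_edt(target != lab)
        cost += dist_to_target[candidate == lab].sum()
    return cost

# Toy usage: two 4x4 maps with labels {0: sky, 1: tree}.
a = np.zeros((4, 4), dtype=int); a[2:, :] = 1
b = np.zeros((4, 4), dtype=int); b[3:, :] = 1
print(chamfer_label_distance(a, b, labels=[0, 1]))  # 4.0
```

Because the cost degrades smoothly with spatial distance, slightly misaligned silhouettes are penalized less than entirely missing ones, which is the kind of principled behavior the abstract attributes to the Chamfer metric.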
Item
Vectorising Bitmaps into Semi-Transparent Gradient Layers (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Authors: Richardt, Christian; Lopez-Moreno, Jorge; Bousseau, Adrien; Agrawala, Maneesh; Drettakis, George
Editors: Wojciech Jarosz and Pieter Peers
We present an interactive approach for decompositing bitmap drawings and studio photographs into opaque and semi-transparent vector layers. Semi-transparent layers are especially challenging to extract, since they require the inversion of the non-linear compositing equation. We make this problem tractable by exploiting the parametric nature of vector gradients, jointly separating and vectorising semi-transparent regions. Specifically, we constrain the foreground colours to vary according to linear or radial parametric gradients, restricting the number of unknowns and allowing our system to efficiently solve for an editable semi-transparent foreground. We propose a progressive workflow, where the user successively selects a semi-transparent or opaque region in the bitmap, which our algorithm separates into a foreground vector gradient and a background bitmap layer. The user can choose to decompose the background further or vectorise it as an opaque layer. The resulting layered vector representation allows a variety of edits, such as modifying the shape of highlights, adding texture to an object or changing its diffuse colour.
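A minimal sketch of why the parametric gradient constraint makes the inversion tractable. It assumes, unlike the paper (which also supports radial gradients and more general unknowns), a single scalar alpha, a linear RGB gradient F(x, y) = a + b*x + c*y, and a known background B; the substitution G = alpha*F turns the over-compositing model I = alpha*F + (1 - alpha)*B into an ordinary linear least-squares problem. All names are illustrative.

```python
import numpy as np

def fit_transparent_linear_gradient(I, B):
    """Least-squares recovery of a semi-transparent linear-gradient
    foreground over a known background (illustrative sketch only).

    Model: I = alpha*F + (1 - alpha)*B, one scalar alpha, and
    F(x, y) = a + b*x + c*y per RGB channel. With G = alpha*F the
    system becomes linear:  I - B = (p + q*x + r*y) - alpha*B.
    """
    h, w, _ = I.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs = (xs / w).ravel()            # normalized pixel coordinates
    ys = (ys / h).ravel()
    n = h * w

    A = np.zeros((3 * n, 10))        # unknowns: (p, q, r) x 3 channels + alpha
    rhs = (I - B).transpose(2, 0, 1).reshape(-1)  # channel-major residuals
    for ch in range(3):
        rows = slice(ch * n, (ch + 1) * n)
        A[rows, 3 * ch + 0] = 1.0    # constant term p
        A[rows, 3 * ch + 1] = xs     # horizontal gradient term q
        A[rows, 3 * ch + 2] = ys     # vertical gradient term r
        A[rows, 9] = -B[..., ch].reshape(-1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)

    alpha = float(np.clip(sol[9], 1e-6, 1.0))
    F_coeffs = sol[:9].reshape(3, 3) / alpha  # un-premultiply G = alpha*F
    return alpha, F_coeffs                    # rows R,G,B; columns (a, b, c)
```

The point of the sketch is the counting argument from the abstract: the gradient constraint reduces an unbounded per-pixel unknown (foreground color and alpha everywhere) to a ten-parameter model that a single region of pixels heavily over-determines.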
Item
Perception of Visual Artifacts in Image-Based Rendering of Façades (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Vangorp, Peter; Chaurasia, Gaurav; Laffont, Pierre-Yves; Fleming, Roland W.; Drettakis, George
Editors: Ravi Ramamoorthi and Erik Reinhard
Image-based rendering (IBR) techniques allow users to create interactive 3D visualizations of scenes by taking a few snapshots. However, despite substantial progress in the field, the main barrier to better quality and more efficient IBR visualizations is a set of common, visually objectionable artifacts. These occur when scene geometry is approximate or viewpoints differ from the original shots, leading to parallax distortions, blurring, ghosting and popping errors that detract from the appearance of the scene. We argue that a better understanding of the causes and perceptual impact of these artifacts is the key to improving IBR methods. In this study we present a series of psychophysical experiments in which we systematically map out the perception of artifacts in IBR visualizations of façades as a function of the most common causes. We separate artifacts into different classes and measure how they impact visual appearance as a function of the number of images available, the geometry of the scene and the viewpoint. The results reveal a number of counter-intuitive effects in the perception of artifacts. We summarize our results in terms of practical guidelines for improving existing and future IBR techniques.

Item
Flexible SVBRDF Capture with a Multi-Image Deep Network (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Deschaintre, Valentin; Aittala, Miika; Durand, Fredo; Drettakis, George; Bousseau, Adrien
Editors: Boubekeur, Tamy and Sen, Pradeep
Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single-image and complex multi-image approaches.
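The defining property of an order-independent fusing layer is symmetry: any permutation of the input photographs must yield the same fused features, which is what lets the network accept a variable number of unordered pictures. Below is a numpy sketch of that property, using max pooling as one common symmetric choice; the paper's actual layer and pooling operator may differ.

```python
import numpy as np

def order_independent_fuse(features):
    """Fuse per-image feature maps with a symmetric max-pooling,
    so the result is invariant to the order and count of inputs.

    features: list of arrays of shape (H, W, C), one per photograph.
    """
    stack = np.stack(features, axis=0)  # (N, H, W, C)
    return stack.max(axis=0)            # (H, W, C), permutation-invariant

# Any permutation of the inputs yields the same fused map:
f1, f2, f3 = (np.random.rand(8, 8, 16) for _ in range(3))
assert np.allclose(order_independent_fuse([f1, f2, f3]),
                   order_independent_fuse([f3, f1, f2]))
```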
Item
Compiling High Performance Recursive Filters (ACM Siggraph, 2015)
Authors: Chaurasia, Gaurav; Ragan-Kelley, Jonathan; Paris, Sylvain; Drettakis, George; Durand, Frédo
Editors: Petrik Clarberg and Elmar Eisemann
Infinite impulse response (IIR) filters, also known as recursive filters, are essential for image processing because they turn expensive large-footprint convolutions into operations that have a constant cost per pixel regardless of kernel size. However, their recursive nature constrains the order in which pixels can be computed, severely limiting both parallelism within a filter and memory locality across multiple filters. Prior research has developed algorithms that can compute IIR filters with image tiles. Using a divide-and-recombine strategy inspired by parallel prefix sum, they expose greater parallelism and exploit producer-consumer locality in pipelines of IIR filters over multidimensional images. While the principles are simple, it is hard, given a recursive filter, to derive a corresponding tile-parallel algorithm, and even harder to implement and debug it. We show that parallel and locality-aware implementations of IIR filter pipelines can be obtained through program transformations, which we mechanize through a domain-specific compiler. We show that the composition of a small set of transformations suffices to cover the space of possible strategies. We also demonstrate that the tiled implementations can be automatically scheduled in hardware-specific manners using a small set of generic heuristics. The programmer specifies the basic recursive filters, and the choice of transformation requires only a few lines of code. Our compiler then generates high-performance implementations that are an order of magnitude faster than standard GPU implementations, and outperform hand-tuned tiled implementations of specialized algorithms which require orders of magnitude more programming effort: a few lines of code instead of a few thousand lines per pipeline.
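A first-order recursive filter makes the abstract's trade-off concrete: constant work per pixel regardless of effective kernel width, at the price of a loop-carried dependence (y[i] needs y[i-1]) that blocks naive parallelization, which is exactly what the tiled, prefix-sum-style transformations address. A minimal sketch:

```python
import numpy as np

def iir_exponential_blur(row, a):
    """First-order causal IIR filter: y[i] = (1 - a)*x[i] + a*y[i-1].

    One multiply-add per pixel, however wide the effective kernel
    (its width grows as a -> 1), unlike an FIR convolution whose
    cost scales with kernel size. The serial dependence on y[i-1]
    is what constrains the computation order.
    """
    y = np.empty_like(row)
    y[0] = row[0]
    for i in range(1, len(row)):
        y[i] = (1.0 - a) * row[i] + a * y[i - 1]
    return y

x = np.zeros(16); x[8] = 1.0          # impulse input
print(iir_exponential_blur(x, 0.8))   # infinite, exponentially decaying response
```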
Item
Silhouette-Aware Warping for Image-Based Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2011)
Authors: Chaurasia, Gaurav; Sorkine, Olga; Drettakis, George
Editors: Ravi Ramamoorthi and Erik Reinhard
Image-based rendering (IBR) techniques allow capture and display of 3D environments using photographs. Modern IBR pipelines reconstruct proxy geometry using multi-view stereo, reproject the photographs onto the proxy and blend them to create novel views. The success of these methods depends on accurate 3D proxies, which are difficult to obtain for complex objects such as trees and cars. A large number of input images does not improve reconstruction proportionally; surface extraction is challenging even from dense range scans for scenes containing such objects. Our approach does not depend on dense accurate geometric reconstruction; instead we compensate for sparse 3D information by variational image warping. In particular, we formulate silhouette-aware warps that preserve salient depth discontinuities. This improves the rendering of difficult foreground objects, even when deviating from view interpolation. We use a semi-automatic step to identify depth discontinuities and extract a sparse set of depth constraints used to guide the warp. Our framework is lightweight and results in good-quality IBR for previously challenging environments.

Item
Thin Structures in Image Based Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Thonat, Theo; Djelouah, Abdelaziz; Durand, Fredo; Drettakis, George
Editors: Jakob, Wenzel and Hachisuka, Toshiya
We propose a novel method to handle thin structures in Image-Based Rendering (IBR), specifically structures supported by simple geometric shapes such as planes, cylinders, etc. These structures (e.g., railings, fences and oven grills) are present in many man-made environments and are extremely challenging for multi-view 3D reconstruction, representing a major limitation of existing IBR methods. Our key insight is to exploit multi-view information. After a handful of user clicks to specify the supporting geometry, we compute multi-view and multi-layer alpha mattes to extract the thin structures. We use two multi-view terms in a graph-cut segmentation, the first based on multi-view foreground color prediction and the second ensuring multi-view consistency of labels. Occlusion of the background complicates reprojection-error calculation, so we use multi-view median images and variance, handling multiple layers of thin structures. Our end-to-end solution uses the multi-layer segmentation to create per-view mattes, and the median colors and variance to create a clean background. We introduce a new multi-pass IBR algorithm based on depth peeling to allow free-viewpoint navigation of multi-layer semi-transparent thin structures. Our results show significant improvement in rendering quality for thin structures compared to previous image-based rendering solutions.

Item
Exploiting Repetitions for Image-Based Rendering of Facades (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Rodriguez, Simon; Bousseau, Adrien; Durand, Fredo; Drettakis, George
Editors: Jakob, Wenzel and Hachisuka, Toshiya
Street-level imagery is now abundant but does not have sufficient capture density to be usable for Image-Based Rendering (IBR) of facades. We present a method that exploits repetitive elements in facades, such as windows, to perform data augmentation, in turn improving camera calibration, reconstructed geometry and overall rendering quality for IBR. The main intuition behind our approach is that a few views of several instances of an element provide similar information to many views of a single instance of that element. We first select similar instances of an element from 3-4 views of a facade and transform them into a common coordinate system, creating a "platonic" element. We use this common space to refine the camera calibration of each view of each instance, and to reconstruct a 3D mesh of the element with multi-view stereo, which we regularize to obtain a piecewise-planar mesh aligned with dominant image contours. Observing the same element under multiple views also allows us to identify reflective areas, such as glass panels, which we use at rendering time to generate plausible reflections using an environment map. Our detailed 3D mesh, augmented set of views, and reflection mask enable image-based rendering of much higher quality than results obtained using the input images directly.

Item
C-LOD: Context-aware Material Level-of-Detail applied to Mobile Graphics (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Authors: Koulieris, George Alex; Drettakis, George; Cunningham, Douglas; Mania, Katerina
Editors: Wojciech Jarosz and Pieter Peers
Attention-based Level-Of-Detail (LOD) managers downgrade the quality of areas that are expected to go unnoticed by an observer to economize on computational resources. The perceptibility of lowered visual fidelity is determined by the accuracy of the attention model that assigns quality levels. Most previous attention-based LOD managers do not take into account saliency provoked by context, failing to provide consistently accurate attention predictions. In this work, we extend a recent high-level saliency model with four additional components yielding more accurate predictions: an object-intrinsic factor accounting for canonical form of objects, an object-context factor for contextual isolation of objects, a feature uniqueness term that accounts for the number of salient features in an image, and a temporal context that generates recurring fixations for objects inconsistent with the context. We conduct a perceptual experiment to acquire the weighting factors to initialize our model. We design C-LOD, a LOD manager that maintains a constant frame rate on mobile devices by dynamically re-adjusting material quality on secondary visual features of non-attended objects. In a proof-of-concept study we establish that by incorporating C-LOD, complex effects such as parallax occlusion mapping, usually omitted on mobile devices, can now be employed without overloading GPU capability while at the same time conserving battery power.
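As a rough illustration of the control problem C-LOD addresses, the sketch below is a schematic feedback step, not the paper's controller: it assumes a per-object material-quality level in [0, 1] and a saliency-model output marking attended objects, and trades quality on non-attended objects to hold a frame-time budget.

```python
def update_material_lod(frame_ms, target_ms, quality, attended):
    """One feedback step of a C-LOD-style manager (illustrative only).

    quality:  dict mapping object id -> material quality level in [0, 1]
    attended: set of object ids the attention model predicts are attended
    """
    step = 0.1
    over_budget = frame_ms > target_ms
    for obj, q in quality.items():
        if obj in attended:
            continue  # never degrade objects predicted to be attended
        if over_budget:
            quality[obj] = max(0.0, q - step)  # shed cost where unnoticed
        else:
            quality[obj] = min(1.0, q + step)  # spend headroom back
    return quality

# One frame took 21 ms against a 16.7 ms (60 fps) budget:
q = update_material_lod(21.0, 16.7, {"car": 1.0, "fence": 1.0}, {"car"})
print(q)  # {'car': 1.0, 'fence': 0.9}
```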