38-Issue 7
Browsing 38-Issue 7 by Issue Date (showing items 1-20 of 70)
Item: Anisotropic Surface Remeshing without Obtuse Angles (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Xu, Qun-Ce; Yan, Dong-Ming; Li, Wenbin; Yang, Yong-Liang. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
We present a novel anisotropic surface remeshing method that can efficiently eliminate obtuse angles. Unlike previous work that can only suppress obtuse angles with expensive resampling and Lloyd-type iterations, our method relies on a simple yet efficient connectivity and geometry refinement that not only removes all obtuse angles but also preserves the original mesh connectivity as much as possible. Our method can be used directly as a post-processing step to improve the quality of anisotropic meshes generated by existing algorithms. We evaluate our method on a variety of meshes with different geometry and topology and compare it with representative prior work. The results demonstrate the effectiveness and efficiency of our approach.

Item: Polycube Shape Space (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Zhao, Hui; Li, Xuan; Wang, Wencheng; Wang, Xiaoling; Wang, Shaodong; Lei, Na; Gu, Xianfeng. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Many methods have been proposed for generating polycube polyhedrons, but the question of when a polycube polyhedron can be generated has received little study. In this paper, we prove a theorem characterizing the necessary condition for the skeleton graph of a polycube polyhedron, which generalizes Steinitz's theorem for convex polyhedra and Eppstein's theorem for simple orthogonal polyhedra to polycube polyhedra of any genus and with non-simply connected faces. Based on our theorem, we present a fast linear algorithm to determine the dimensions of the polycube shape space for a valid graph, covering all its possible polycube polyhedrons. We also propose a quadratic optimization method to generate embedded polycube polyhedrons with interactive assistance. Finally, we provide a graph-based framework for polycube mesh generation, quadrangulation, and all-hex meshing to demonstrate the utility and applicability of our approach.

Item: Subdivision Schemes for Quadrilateral Meshes with the Least Polar Artifact in Extraordinary Regions (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ma, Yue; Ma, Weiyin. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
This paper presents subdivision schemes whose stencils near an extraordinary vertex are free from, or substantially reduce, the polar artifact in extraordinary regions while maintaining the best possible bounded curvature at extraordinary positions. The subdivision stencils are first constructed to meet tangent-plane continuity with bounded curvature at extraordinary positions. They are then optimized towards curvature continuity at an extraordinary position, with additional measures for removing or minimizing the polar artifact in extraordinary regions. For stencils of lower valences, the polar artifact is removed by constraining the subdominant eigenvalue to equal that of subdivision at regular vertices; for stencils of higher valences, it is substantially reduced by introducing an additional thin-plate energy function and a penalty function that maintain the uniformity and regularity of the characteristic map. A new tuned subdivision scheme is introduced by replacing the stencils of Catmull-Clark subdivision with those from this paper for extraordinary vertices of valences up to nine. We also compare the refined meshes and limit surface quality of the resulting scheme with those of Catmull-Clark subdivision and other tuned subdivision schemes. The results show that stencils from our method produce well-behaved subdivision meshes with the least polar artifact while maintaining satisfactory limit surface quality.
Item: Inertia-based Fast Vectorization of Line Drawings (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Najgebauer, Patryk; Scherer, Rafal. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Image vectorisation is a fundamental technique in graphic design and one of the tools for transferring an artist's work into computer graphics. Existing methods are based mainly on segmentation or analyse every image pixel, and are therefore relatively slow. We introduce a novel method for fast vectorisation of line drawings, based on a multi-scale second-derivative detector accelerated by a summed-area table and an auxiliary grid. The image is initially scanned along the grid lines, and nodes are added to improve accuracy. Applying inertia to the line tracing allows for better junction mapping in a single pass. Our method is dedicated to grey-scale sketches and line drawings and works efficiently regardless of the thickness of the line or its shading. Experiments show it is more than two orders of magnitude faster than existing methods without sacrificing accuracy.
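The speed of such a detector comes largely from the summed-area table, which turns arbitrary-size box sums into four lookups. Below is a minimal numpy sketch of that building block, with a second-derivative response assembled from three adjacent box averages; the function names and exact stencil are our own illustration, not the paper's code.

```python
import numpy as np

def summed_area_table(img):
    # S[i, j] = sum of img[:i, :j]; the extra row/column of zeros
    # makes the box-sum query below branch-free.
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    S[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return S

def box_sum(S, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in O(1), independent of box size.
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

def second_derivative_response(S, r, c, half):
    # Horizontal second-derivative-style response from three adjacent
    # box averages: center minus flanking neighborhoods. The caller is
    # responsible for keeping all three boxes inside the image.
    w = 2 * half + 1
    center = box_sum(S, r - half, c - half, r + half + 1, c + half + 1)
    left   = box_sum(S, r - half, c - half - w, r + half + 1, c - half)
    right  = box_sum(S, r - half, c + half + 1, r + half + 1, c + half + w + 1)
    return (2.0 * center - left - right) / (w * w)
```

Because the table makes the response cost independent of window size, evaluating the detector at multiple scales along the grid scan lines, as the abstract describes, stays cheap.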
Item: Interactive Curation of Datasets for Training and Refining Generative Models (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ye, Wenjie; Dong, Yue; Peers, Pieter. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
We present a novel interactive learning-based method for curating datasets according to user-defined criteria for training and refining Generative Adversarial Networks. We employ a novel batch-mode active learning strategy that progressively selects small batches of candidate exemplars for which the user is asked to indicate whether they match the, possibly subjective, selection criteria. After each batch, a classifier that models the user's intent is refined and subsequently used to select the next batch of candidates. After the selection process ends, the final classifier, trained with limited but adaptively selected training data, is used to sift through the large collection of input exemplars and extract a sufficiently large subset, matching the user's selection criteria, for training or refining the generative model. A key distinguishing feature of our system is that we do not assume the user can always make a firm binary decision (i.e., ''meets'' or ''does not meet'' the selection criteria) for each candidate exemplar; we allow the user to label an exemplar as ''undecided''. We rely on a non-binary query-by-committee strategy to distinguish between the user's uncertainty and the trained classifier's uncertainty, and develop a novel disagreement distance metric to encourage a diverse candidate set. In addition, a number of optimization strategies are employed to achieve an interactive experience. We demonstrate our interactive curation system on several applications related to training or refining generative models: training a Generative Adversarial Network that meets user-defined criteria, adjusting the output distribution of an existing generative model, and removing unwanted samples from a generative model.
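To make the committee mechanics concrete, here is a toy query-by-committee selection step in Python with sklearn-style classifiers. The three-way vote split and vote-entropy score are simplified stand-ins for the paper's non-binary strategy and disagreement distance (which additionally encourages diversity within the batch); all names are ours.

```python
import numpy as np

def committee_votes(committee, x):
    # Each committee member returns P(match) for exemplar x; we spread the
    # probability over three soft votes: no / undecided / yes. The middle
    # vote peaks at p = 0.5, a crude model of "undecided".
    votes = np.zeros(3)
    for clf in committee:
        p = clf.predict_proba(x.reshape(1, -1))[0, 1]
        votes += np.array([1 - p, 1 - abs(2 * p - 1), p])
    return votes / votes.sum()

def vote_entropy(votes):
    # Entropy of the committee's vote distribution: high = disagreement.
    nz = votes[votes > 0]
    return -(nz * np.log(nz)).sum()

def select_batch(committee, pool, k):
    # Pick the k pool exemplars the committee disagrees on most.
    scores = [vote_entropy(committee_votes(committee, x)) for x in pool]
    return np.argsort(scores)[-k:]
```

Querying the most-disagreed-upon exemplars is what lets a small number of user labels refine the intent classifier quickly.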
Item: Unsupervised Dense Light Field Reconstruction with Occlusion Awareness (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ni, Lixia; Jiang, Haiyong; Cai, Jianfei; Zheng, Jianmin; Li, Haifeng; Liu, Xu. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Light field (LF) reconstruction is a fundamental technique in light field imaging with both software and hardware applications. This paper presents an unsupervised learning method for LF-oriented view synthesis, which provides a simple solution for generating quality light fields from a sparse set of views. The method is built on disparity estimation and image warping. Specifically, we first use per-view disparity as a geometry proxy to warp input views to novel views. We then compensate for occlusion with a network using a forward-backward warping process. Cycle-consistency between different views is exploited to enable unsupervised learning and accurate synthesis. The method overcomes the drawbacks of fully supervised learning methods, which require large labeled training datasets, and of epipolar-plane-image-based interpolation methods, which do not make full use of the geometry consistency in LFs. Experimental results demonstrate that the proposed method generates high-quality LF views, outperforming unsupervised approaches and performing comparably to fully supervised ones.
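The geometric core of the warping step can be illustrated with a short numpy sketch: backward-warp one input view to a neighboring position on the camera grid, using its disparity map as the geometry proxy. This is our own minimal rendition of the warp, with bilinear sampling and without the paper's occlusion-compensation network.

```python
import numpy as np

def warp_view(src, disparity, du, dv):
    # src: (h, w, 3) color view; disparity: (h, w) per-pixel disparity.
    # Backward-warp to a novel view offset (du, dv) on the camera grid:
    # each target pixel samples src at (x + du * d, y + dv * d).
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    x = np.clip(xs + du * disparity, 0, w - 1)
    y = np.clip(ys + dv * disparity, 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = (x - x0)[..., None], (y - y0)[..., None]
    top = src[y0, x0] * (1 - fx) + src[y0, x1] * fx
    bot = src[y1, x0] * (1 - fx) + src[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

Warping a view forward and then back with the corresponding disparities, and penalizing the photometric difference against the original, yields the kind of cycle-consistency signal that makes the training unsupervised.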
Item: Dual Illumination Estimation for Robust Exposure Correction (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Zhang, Qing; Nie, Yongwei; Zheng, Wei-Shi. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Exposure correction is one of the fundamental tasks in image processing and computational photography. While various methods have been proposed, they either fail to produce visually pleasing results or only work well for limited types of images (e.g., underexposed images). In this paper, we present a novel automatic exposure correction method that robustly produces high-quality results for images under various exposure conditions (e.g., underexposed, overexposed, and partially under- and over-exposed). At the core of our approach is the proposed dual illumination estimation, where we separately cast under- and over-exposure correction as trivial illumination estimation of the input image and of the inverted input image. By performing dual illumination estimation, we obtain two intermediate exposure correction results, one fixing the underexposed regions and the other restoring the overexposed regions. A multi-exposure image fusion technique is then employed to adaptively blend the visually best-exposed parts of the two intermediate results and the input image into a globally well-exposed image. Experiments on a number of challenging images demonstrate the effectiveness of the proposed approach and its superiority over state-of-the-art methods and popular automatic exposure correction tools.
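A compact sketch of the two-pass idea follows, assuming a deliberately crude illumination estimator (smoothed max-RGB) and a simple well-exposedness weighting in place of the paper's illumination estimation and multi-exposure fusion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination(img, sigma=15):
    # Crude stand-in for the paper's illumination estimation: a smoothed
    # max-RGB map, clamped away from zero to keep the division stable.
    L = gaussian_filter(img.max(axis=2), sigma)
    return np.clip(L, 0.05, 1.0)[..., None]

def dual_illumination_correct(img):
    # img: float RGB in [0, 1]. Pass 1 fixes underexposure on the input;
    # pass 2 fixes overexposure by correcting the inverted image.
    under = np.clip(img / illumination(img), 0, 1)
    inv = 1.0 - img
    over = 1.0 - np.clip(inv / illumination(inv), 0, 1)
    # Simple well-exposedness weights stand in for multi-exposure fusion.
    def weight(x):
        return np.exp(-((x.mean(axis=2, keepdims=True) - 0.5) ** 2) / 0.08)
    w_u, w_o, w_i = weight(under), weight(over), weight(img)
    return (w_u * under + w_o * over + w_i * img) / (w_u + w_o + w_i)
```

The inversion trick is what unifies the two failure modes: overexposed regions of the input become underexposed regions of the inverted image, so a single underexposure-correction routine handles both passes.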
Item: High Dynamic Range Point Clouds for Real-Time Relighting (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Sabbadin, Manuele; Palma, Gianpaolo; Banterle, Francesco; Boubekeur, Tamy; Cignoni, Paolo. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, standard relighting setups exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene onto the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. Our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to regions not covered by the renderings, or covered with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time, and it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step relative to perfect ground truth, and we report experiments with real captured data covering a range of capture technologies, from active scanning to multi-view stereo reconstruction.

Item: Learning to Trace: Expressive Line Drawing Generation from Photographs (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Inoue, Naoto; Ito, Daichi; Xu, Ning; Yang, Jimei; Price, Brian; Yamasaki, Toshihiko. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
In this paper, we present a new computational method for automatically tracing high-resolution photographs to create expressive line drawings. We define expressive lines as those that convey important edges, shape contours, and large-scale texture lines necessary to accurately depict the overall structure of objects (similar to those found in technical drawings) while still being sparse and artistically pleasing. Given a photograph, our algorithm extracts expressive edges and creates a clean line drawing using a convolutional neural network (CNN). We employ an end-to-end trainable fully convolutional CNN to learn the model in a data-driven manner. The model consists of two networks that address two sub-tasks: extracting coarse lines, and refining them to be cleaner and more expressive. To build a model that is optimal for each domain, we construct two new datasets, one for faces and bodies and one for manga backgrounds. The experimental results qualitatively and quantitatively demonstrate the effectiveness of our model. We further illustrate two practical applications.

Item: Computing Surface PolyCube-Maps by Constrained Voxelization (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Yang, Yang; Fu, Xiao-Ming; Liu, Ligang. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
We present a novel method to compute bijective PolyCube-maps with low isometric distortion. Given a surface and its pre-axis-aligned shape, which is not an exact PolyCube shape, the algorithm proceeds in two steps: (i) construct a PolyCube shape that approximates the pre-axis-aligned shape; and (ii) generate a bijective, low-isometric-distortion mapping between the constructed PolyCube shape and the input surface. PolyCube construction is formulated as a constrained optimization problem, where the objective is the number of corners in the constructed PolyCube and the constraint is to bound the approximation error between the constructed PolyCube and the input pre-axis-aligned shape while ensuring topological validity. A novel erasing-and-filling solver is proposed to solve this challenging problem. Central to the algorithm for computing bijective PolyCube-maps is a quad mesh optimization process that projects the constructed PolyCube onto the input surface with high-quality quads. We demonstrate the efficacy of our algorithm on a dataset containing 300 closed meshes. Compared to state-of-the-art methods, our method achieves higher practical robustness and lower mapping distortion.

Item: Selecting Texture Resolution Using a Task-specific Visibility Metric (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Wolski, Krzysztof; Giunchi, Daniele; Kinuwaki, Shinichi; Didyk, Piotr; Myszkowski, Karol; Steed, Anthony; Mantiuk, Rafal K. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of textures determines the performance and memory requirements of rendering. As a result, finding the optimal texture resolution is critical, but also a non-trivial task, since the visibility of texture imperfections depends on the underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric that predicts the optimal texture resolution. To maximize the performance of such a metric, it should be trained on the given task; this, however, requires user data that is often difficult to obtain in sufficient quantity. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric and then refining that dataset with the help of an efficient perceptual experiment. The refined dataset is then used to retune the metric. In this way, we augment sparse perceptual data into a large number of per-pixel annotated visibility maps that serve as training data for application-specific visibility metrics. While our approach is general and can potentially be applied to different image distortions, we demonstrate an application in a game engine, where we optimize the resolution of various textures, such as albedo and normal maps.
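Once such a task-specific metric exists, selecting a texture resolution reduces to a simple search. The sketch below assumes two hypothetical callables, render(res) for the engine hook and visibility_metric(ref, test) returning per-pixel detection probabilities as a numpy array; both are placeholders for illustration, not APIs from the paper.

```python
def select_resolution(render, visibility_metric, reference_res=4096,
                      candidates=(2048, 1024, 512, 256), threshold=0.02):
    # render(res) -> image rendered with the texture at the given
    # resolution; visibility_metric(ref, test) -> per-pixel probability
    # that a viewer detects the difference. Both callables are assumed.
    reference = render(reference_res)
    best = reference_res
    for res in candidates:  # try progressively smaller textures
        vis = visibility_metric(reference, render(res))
        if vis.mean() > threshold:
            break           # artifacts predicted visible; stop shrinking
        best = res
    return best
```

The interesting part, and the paper's contribution, is making visibility_metric accurate for the given task with little perceptual data; the selection loop itself is deliberately trivial.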
Item: Discrete Calabi Flow: A Unified Conformal Parameterization Method (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Su, Kehua; Li, Chenchen; Zhou, Yuming; Xu, Xu; Gu, Xianfeng. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Conformal parameterization of surfaces onto various parameter domains is a fundamental task in computer graphics. Prior research on discrete Ricci flow provided promising inspiration: methods derived via Riemannian geometry are rigorous in theory and effective in practice. In this paper, we propose a unified conformal parameterization approach for mapping triangle meshes onto planar and spherical domains using discrete Calabi flow on a piecewise linear metric. We incorporate edge-flipping surgery to guarantee convergence, along with other significant improvements, including an approximate Newton's method, optimal step lengths, priority embedding, and boundary customization, which together achieve better performance and functionality with robustness and accuracy.

Item: A PatchMatch-based Approach for Matte Propagation in Videos (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Backes, Marcos; Menezes de Oliveira Neto, Manuel. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Despite considerable advances in natural image matting over the last decades, video matting still remains a difficult problem. The main challenges faced by existing methods are the large amount of user input required and temporal inconsistencies in mattes between adjacent frames. We present a temporally coherent matte-propagation method for videos based on PatchMatch and edge-aware filtering. Given an input video and trimaps for a few frames, including the first and last, our approach generates alpha mattes for all frames of the video sequence. We also present a scribble-based user interface for video matting that takes advantage of the efficiency of our method to interactively refine the matte results. We demonstrate the effectiveness of our approach by using it to generate temporally coherent mattes for several natural video sequences. We perform quantitative comparisons against state-of-the-art sparse-input video matting techniques and show that our method produces significantly better results according to three different metrics. We also perform qualitative comparisons against state-of-the-art dense-input video matting techniques and show that our approach produces results of similar quality while requiring only about 7% of the user input required by such techniques. These results show that our method is both effective and user-friendly, outperforming state-of-the-art solutions.

Item: Procedural Riverscapes (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Peytavie, Adrien; Dupont, Thibault; Guérin, Eric; Cortial, Yann; Benes, Bedrich; Gain, James; Galin, Eric. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
This paper addresses the problem of creating animated riverscapes through a novel procedural framework that generates the inscribing geometry of a river network and then synthesizes matching real-time water-movement animation. Our approach takes bare-earth heightfields as input, derives hydrologically inspired river network trajectories, carves riverbeds into the terrain, and then automatically generates a corresponding blend-flow tree for the water surface. Characteristics such as riverbed width, depth, and shape, as well as the elevation and flow of the fluid surface, are procedurally derived from the terrain and river type. The riverbed is inscribed by combining compactly supported elevation modifiers over the river course. Subsequently, the water surface is defined as a time-varying continuous function encoded as a blend-flow tree, whose leaves are parameterized procedural flow primitives and whose internal nodes are blend operators. While river generation is fully automated, we also incorporate intuitive interactive editing of both river trajectories and individual riverbed and flow primitives. The resulting framework enables the generation of a wide range of river forms, from slow meandering rivers to rapids with churning water, including surface effects such as foam and leaves carried downstream.
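A blend-flow tree maps naturally onto a small class hierarchy: leaves evaluate parameterized procedural velocity fields, and internal nodes blend their children. The 2D sketch below shows that structure; the specific primitives, parameters, and blend rule are invented for illustration and are not the paper's primitive set.

```python
import numpy as np

class FlowNode:
    def velocity(self, p, t):
        raise NotImplementedError

class ChannelFlow(FlowNode):
    # Leaf primitive: steady downstream flow along a fixed direction.
    def __init__(self, direction, speed):
        self.d = np.asarray(direction, dtype=float)
        self.d /= np.linalg.norm(self.d)
        self.speed = speed
    def velocity(self, p, t):
        return self.speed * self.d

class VortexFlow(FlowNode):
    # Leaf primitive: churning rotation around a center, decaying with radius.
    def __init__(self, center, strength, radius):
        self.c, self.s, self.r = np.asarray(center, dtype=float), strength, radius
    def velocity(self, p, t):
        off = p - self.c
        tangent = np.array([-off[1], off[0]])
        return self.s * np.exp(-np.dot(off, off) / self.r**2) * tangent

class Blend(FlowNode):
    # Internal node: normalized weighted blend of child flows.
    def __init__(self, children, weights):
        self.children, self.weights = children, weights
    def velocity(self, p, t):
        v = sum(w * c.velocity(p, t) for w, c in zip(self.weights, self.children))
        return v / sum(self.weights)

# A calm channel blended with a churning eddy near x = 5.
river = Blend([ChannelFlow((1, 0), 2.0), VortexFlow((5.0, 0.0), 3.0, 1.5)],
              weights=[0.7, 0.3])
print(river.velocity(np.array([4.5, 0.5]), t=0.0))
```

Because every node is just a function of position and time, the water surface can be evaluated independently per point each frame, which is what makes the representation suitable for real-time animation.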
Item: Surface Fairing towards Regular Principal Curvature Line Networks (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Chu, Lei; Bo, Pengbo; Liu, Yang; Wang, Wenping. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Freeform surfaces whose principal curvature line network is regularly distributed are essential to many real applications such as CAD modeling, architectural design, and industrial fabrication. However, most designed surfaces do not have this property, because such constraints are hard to enforce during the design process. In this paper, we present a novel surface fairing method whose objective is a regular distribution of the principal curvature line network on the surface. Our method first removes the high-frequency signals from the curvature tensor field of an input freeform surface using a novel rolling-guidance tensor filter, which results in a more regular and smooth curvature tensor field, and then deforms the input surface to match the smoothed field as closely as possible. As an application, we solve the problem of approximating freeform surfaces with regular principal curvature line networks discretized by quadrilateral meshes. By introducing circular or conical conditions on the quadrilateral mesh to guarantee the existence of a discrete principal curvature line network, while minimizing the approximation error to the original surface and improving the fairness of the quad mesh, we obtain a regular discrete principal curvature line network that approximates the original surface. We evaluate the efficacy of our method on various freeform surfaces and demonstrate the superiority of the rolling-guidance tensor filter over other tensor smoothing techniques. We also use our method to generate high-quality circular/conical meshes for architectural design and cyclide spline surfaces for CAD modeling.

Item: Appearance Flow Completion for Novel View Synthesis (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Le, Hoang; Liu, Feng. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Novel view synthesis from sparse and unstructured input views faces challenges such as the difficulty of dense 3D reconstruction and large occlusions. This paper addresses these problems by estimating proper appearance flows from the target view to the input views, which are used to warp and blend the input views. Our method first estimates a sparse set of 3D scene points using an off-the-shelf 3D reconstruction method and calculates sparse flows from the target view to the input views. It then performs appearance flow completion to estimate dense flows from the corresponding sparse ones. Specifically, we design a deep fully convolutional neural network that takes the sparse flows and input views as input and outputs the dense flows. Furthermore, we estimate optical flows between input views as references to guide the estimation of the dense flows between the target view and the input views. Besides the dense flows, our network also estimates masks for blending multiple warped inputs to render the target view. Experiments on the KITTI benchmark show that our method can generate high-quality novel views from sparse and unstructured input views.
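The paper learns flow completion with a CNN guided by inter-view optical flow. As a non-learned baseline that clarifies the inputs and outputs of that stage, sparse target-to-input flows can be densified by scattered interpolation; the sketch below uses scipy's griddata and is only an illustration of the data flow, not the paper's method.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_flow(sparse_xy, sparse_flow, h, w):
    # sparse_xy: (N, 2) pixel coordinates (x, y) where the target->input
    # flow is known from the reconstructed 3D points.
    # sparse_flow: (N, 2) flow vectors at those pixels.
    ys, xs = np.mgrid[0:h, 0:w]
    comps = []
    for k in range(2):  # interpolate u and v components separately
        lin = griddata(sparse_xy, sparse_flow[:, k], (xs, ys), method='linear')
        near = griddata(sparse_xy, sparse_flow[:, k], (xs, ys), method='nearest')
        lin[np.isnan(lin)] = near[np.isnan(lin)]  # fill outside the hull
        comps.append(lin)
    return np.dstack(comps)  # (h, w, 2) dense flow field
```

The learned network improves on this baseline precisely where plain interpolation fails: near occlusions and depth discontinuities, where it can consult the input views and the inter-view optical flow.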
Item: A Rigging-Skinning Scheme to Control Fluid Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Lu, Jia-Ming; Chen, Xiao-Song; Yan, Xiao; Li, Chen-Feng; Lin, Ming; Hu, Shi-Min. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Inspired by skeletal animation, a novel rigging-skinning flow control scheme is proposed to animate fluids intuitively and efficiently. The new animation pipeline creates fluid animation in two steps: fluid rigging and fluid skinning. The fluid rig is defined by a point cloud with rigid-body movement and incompressible deformation, whose time series can be intuitively specified by a rigid-body motion and a constrained free-form deformation, respectively. The fluid skin generates plausible fluid flows by virtually fluidizing the point-cloud fluid rig, with adjustable zero- and first-order flow features and at fixed computational cost. Fluid rigging allows the animator to conveniently specify the desired low-frequency flow motion through intuitive manipulations of a point cloud, while fluid skinning faithfully and efficiently converts the motion specified on the fluid rig into plausible flows of the animated fluid, with adjustable fine-scale effects. Besides being intuitive, the rigging-skinning scheme is robust and highly efficient, completely avoiding iterative trials or time-consuming nonlinear optimization. It is also versatile, supporting both particle- and grid-based fluid solvers. A series of examples, including liquid, gas, and mixed scenes, is presented to demonstrate the performance of the new animation pipeline.

Item: Deep Video-Based Performance Synthesis from Sparse Multi-View Capture (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Chen, Mingjia; Wang, Changbo; Liu, Ligang. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
We present a deep-learning-based technique that synthesizes novel-view videos of human performances from sparse multi-view captures. While performance capture from a sparse set of videos has received significant attention, relatively little progress has been made for non-rigid subjects such as human bodies, whose rich articulation modes make the model challenging to synthesize and interpolate well. To address this problem, we propose a novel deep learning framework that directly predicts novel-view videos of human performances without explicit 3D reconstruction. Our method is composed of two steps: novel-view prediction and detail enhancement. We first learn a novel deep generative query network for view prediction, synthesizing novel-view performances from a sparse set of five or fewer camera videos. Then, we use a new generative adversarial network to enhance the fine-scale details of the first step's results. This opens up the possibility of high-quality, low-cost video-based performance synthesis, which is gaining popularity for VR and AR applications. We demonstrate a variety of promising results, where our method synthesizes more robust and accurate performances than existing state-of-the-art approaches when only sparse views are available.

Item: Rain Wiper: An Incremental Randomly Wired Network for Single Image Deraining (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Liang, Xiwen; Qiu, Bin; Su, Zhuo; Gao, Chengying; Shi, Xiaohong; Wang, Ruomei. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Single-image rain removal is a challenging ill-posed problem due to the various shapes and densities of rain streaks. We present a novel incremental randomly wired network (IRWN) for single-image deraining. Unlike previous methods, most of the module structures in IRWN are generated by a stochastic network generator based on random graph theory, which eases the burden of manual design and helps characterize more complex rain streaks. To decrease network parameters and extract more details efficiently, an image pyramid is fused via the multi-scale network structure. An incremental rectified loss is proposed to better remove rain streaks under different rain conditions and recover the texture information of target objects. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method significantly outperforms state-of-the-art methods. In addition, an ablation study illustrates the improvements obtained by different modules and loss terms in IRWN.
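Stochastic network generators of this kind typically sample a random graph and orient its edges into a DAG whose nodes become computation modules. The abstract does not specify IRWN's generator, so the networkx sketch below uses a Watts-Strogatz graph as one plausible choice, purely to illustrate the wiring step.

```python
import networkx as nx

def random_wiring(n=16, k=4, p=0.75, seed=0):
    # Sample a connected small-world graph, then orient every edge from
    # the lower-numbered node to the higher-numbered one, which always
    # yields a DAG. Sources act as module inputs, sinks as outputs.
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=seed)
    dag = nx.DiGraph()
    dag.add_nodes_from(g.nodes)
    dag.add_edges_from((min(u, v), max(u, v)) for u, v in g.edges)
    inputs = [v for v in dag if dag.in_degree(v) == 0]
    outputs = [v for v in dag if dag.out_degree(v) == 0]
    return dag, inputs, outputs

dag, ins, outs = random_wiring()
print(len(dag.edges), ins, outs)
```

Each DAG node would then host a fixed operation block, with edge aggregation where several wires meet; sampling new seeds yields the structural variety that replaces manual architecture design.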
Item: Offline Deep Importance Sampling for Monte Carlo Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Bako, Steve; Meyer, Mark; DeRose, Tony; Sen, Pradeep. Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon.
Although modern path tracers are successfully applied to many rendering applications, there is considerable interest in pushing them towards ever-decreasing sampling rates. As the sampling rate is substantially reduced, however, even Monte Carlo (MC) denoisers, which have been very successful at removing large amounts of noise, typically do not produce acceptable final results. As an orthogonal approach, we believe that good importance sampling of paths is critical for producing better-converged, path-traced images at low sample counts that can then, for example, be denoised more effectively. However, most recent importance-sampling techniques for guiding path tracing (an area known as ''path guiding'') involve expensive online (per-scene) training and offer benefits only at high sample counts. In this paper, we propose an offline, scene-independent deep-learning approach that can importance sample first-bounce light paths for general scenes without costly online training, and that can start guiding path sampling with as little as 1 sample per pixel. Instead of learning to ''overfit'' to the sampling distribution of a specific scene, as in most previous work, our data-driven approach is trained a priori on a set of training scenes to use a local neighborhood of samples, with additional feature information, to reconstruct the full incident radiance at a point in the scene, enabling first-bounce importance sampling for new test scenes. Our solution is easy to integrate into existing rendering pipelines without the need for retraining, as we demonstrate by incorporating it into both the Blender/Cycles and Mitsuba path tracers. Finally, we show how our offline deep importance sampler (ODIS) increases convergence at low sample counts and improves the results of an off-the-shelf denoiser relative to other state-of-the-art sampling techniques.
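How a predicted incident radiance distribution turns into importance-sampled directions can be sketched with inverse-CDF sampling over a discretized hemisphere. The grid parameterization and solid-angle bookkeeping below are simplified for illustration and are not the paper's exact scheme.

```python
import numpy as np

def sample_direction(radiance_grid, rng):
    # radiance_grid: nonnegative predicted incident radiance over a
    # (theta, phi) grid, e.g. a network's output at the first hit point.
    # Normalize to a discrete distribution, sample a cell by inverse CDF,
    # then jitter within the cell. The returned pdf (per steradian) is
    # needed for unbiased Monte Carlo weighting.
    pdf = radiance_grid / radiance_grid.sum()
    flat = pdf.ravel()
    cdf = np.cumsum(flat)
    idx = np.searchsorted(cdf, rng.random())
    ti, pi = np.unravel_index(idx, pdf.shape)
    n_t, n_p = pdf.shape
    theta = (ti + rng.random()) / n_t * (np.pi / 2)  # hemisphere only
    phi = (pi + rng.random()) / n_p * (2 * np.pi)
    # Approximate the cell's solid angle using sin(theta) at the sample.
    cell_solid_angle = (np.pi / 2 / n_t) * (2 * np.pi / n_p) * np.sin(theta)
    return (theta, phi), flat[idx] / cell_solid_angle

rng = np.random.default_rng(0)
(theta, phi), pdf = sample_direction(rng.random((8, 16)) + 0.1, rng)
print(theta, phi, pdf)
```

Directions are then drawn proportionally to the predicted radiance, so at 1 sample per pixel the single first-bounce ray already tends toward the bright parts of the hemisphere, which is the behavior the abstract describes.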