Search Results

Now showing 1 - 10 of 89
  • Item
    Learning Physically Based Humanoid Climbing Movements
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Naderi, Kourosh; Babadi, Amin; Hämäläinen, Perttu; Thuerey, Nils and Beeler, Thabo
We propose a novel learning-based solution for motion planning of physically-based humanoid climbing that allows for fast and robust planning of complex climbing strategies and movements, including extreme movements such as jumping. Similar to recent previous work, we combine a high-level graph-based path planner with low-level sampling-based optimization of climbing moves. We contribute by showing that neural network models of move success probability, effortfulness, and control policy can make both the high-level and low-level components more efficient and robust. The models can be trained through random simulation practice without any data. The models also eliminate the need for laboriously hand-tuned heuristics in graph search. As a result, we are able to efficiently synthesize climbing sequences involving dynamic leaps and one-hand swings, i.e., there are no limits on the movement complexity or the number of limbs allowed to move simultaneously. Our supplemental video also provides some comparisons between our AI climber and a real human climber.
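A learned move-success model turns the high-level graph search into a most-reliable-path problem: with edge cost -log(p), the shortest path maximizes the product of per-move success probabilities, with no hand-tuned heuristics. A minimal sketch, where the tabulated `P_SUCCESS` values and hold names are hypothetical stand-ins for the paper's neural success-probability model:

```python
import heapq
import math

# Hypothetical move-success probabilities for a tiny hold graph
# (stand-ins for the learned success-probability network).
P_SUCCESS = {
    ("start", "h1"): 0.9, ("start", "h2"): 0.5,
    ("h1", "h3"): 0.8, ("h2", "h3"): 0.95, ("h3", "top"): 0.7,
}

def most_reliable_path(graph, src, dst):
    """Dijkstra with edge cost -log(p): the shortest path maximizes
    the product of move-success probabilities."""
    adj = {}
    for (a, b), p in graph.items():
        adj.setdefault(a, []).append((b, -math.log(p)))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dst])

path, p = most_reliable_path(P_SUCCESS, "start", "top")
```

The direct-looking route via `h2` is rejected because its first move is unreliable; the planner prefers the sequence whose joint success probability is highest.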
  • Item
    Semantic Reconstruction: Reconstruction of Semantically Segmented 3D Meshes via Volumetric Semantic Fusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Jeon, Junho; Jung, Jinwoong; Kim, Jungeon; Lee, Seungyong; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
Semantic segmentation partitions a given image or 3D model of a scene into semantically meaningful parts and assigns predetermined labels to the parts. With well-established datasets, deep networks have been successfully used for semantic segmentation of RGB and RGB-D images. On the other hand, due to the lack of annotated large-scale 3D datasets, semantic segmentation of 3D scenes has not yet been widely addressed with deep learning. In this paper, we present a novel framework for generating semantically segmented triangular meshes of reconstructed 3D indoor scenes using volumetric semantic fusion in the reconstruction process. Our method integrates the results of CNN-based 2D semantic segmentation that is applied to the RGB-D stream used for dense surface reconstruction. To reduce the artifacts from noise and uncertainty of single-view semantic segmentation, we introduce adaptive integration for the volumetric semantic fusion and CRF-based semantic label regularization. With these methods, our framework can easily generate a high-quality triangular mesh of the reconstructed 3D scene with dense (i.e., per-vertex) semantic labels. Extensive experiments demonstrate that our semantic segmentation of 3D scenes achieves state-of-the-art performance compared to previous voxel-based and point-cloud-based methods.
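The core fusion step accumulates per-frame label distributions into a single per-voxel distribution, downweighting uncertain views. A simplified sketch of that idea (a weighted running average; the confidence weights and label set are assumptions, and the paper's adaptive integration and CRF regularization are omitted):

```python
import numpy as np

def fuse_semantic_labels(observations, num_labels):
    """Weighted running fusion of per-frame label probabilities into one
    per-voxel distribution. Each observation is (probs, weight), where
    weight models view confidence (a stand-in for adaptive integration)."""
    dist = np.zeros(num_labels)
    total_w = 0.0
    for probs, weight in observations:
        dist = (dist * total_w + np.asarray(probs) * weight) / (total_w + weight)
        total_w += weight
    return dist, int(np.argmax(dist))

# Three noisy single-view predictions for one voxel
# (hypothetical labels: 0=wall, 1=floor, 2=chair).
obs = [([0.2, 0.1, 0.7], 1.0),   # confident "chair"
       ([0.6, 0.2, 0.2], 0.5),   # uncertain "wall"
       ([0.1, 0.1, 0.8], 1.0)]
dist, label = fuse_semantic_labels(obs, 3)
```

The low-confidence "wall" observation is outvoted, so the fused voxel keeps the "chair" label; per-vertex mesh labels can then be read off the fused volume.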
  • Item
    A Practical Approach to Physically-Based Reproduction of Diffusive Cosmetics
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Kim, Goanghun; Ko, Hyeong-Seok; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
In this paper, we introduce the so-called bSX method as a new way to utilize the Kubelka-Munk (K-M) model. Assuming the material is completely diffusive, the K-M model gives the reflectance and transmittance of the material from an observation of the material applied on a backing, where the observation includes the thickness of the applied layer. By rearranging the original K-M equation, we show that the reflectance and transmittance can be calculated without knowing the thickness. This is a practically useful contribution. Based on this finding, we develop the bSX method, which can (1) capture the material-specific parameters from two photos, taken before and after the material is applied, and (2) reproduce its effect on a novel backing. We evaluated the proposed method in various cases related to virtual cosmetic try-on, including (1) capture from a single-color backing, (2) capture from a human skin backing, (3) reproduction of varying-thickness effects, (4) reproduction of multi-layer cosmetic application effects, and (5) application of the proposed method to makeup transfer. Compared to previous image-based makeup transfer methods, the bSX method reproduces the feel of the cosmetics more accurately.
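For context, the classical K-M reflectance of a layer over a backing depends on the layer thickness X only through the product bSX, which is presumably what the method's name refers to. A minimal sketch of the standard (unrearranged) K-M equation, with illustrative parameter values:

```python
import math

def km_reflectance(K, S, X, Rg):
    """Kubelka-Munk reflectance of a completely diffusive layer of
    thickness X (scattering S, absorption K) over a backing with
    reflectance Rg. Thickness enters only via the product b*S*X."""
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    coth = 1.0 / math.tanh(b * S * X)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

# A thick layer hides the backing: its reflectance approaches
# R_inf = a - b regardless of Rg.
r_black = km_reflectance(K=0.5, S=1.0, X=50.0, Rg=0.0)
r_white = km_reflectance(K=0.5, S=1.0, X=50.0, Rg=0.9)
```

Because the observable quantities collapse onto bSX, two photos of the same material over different backings constrain the model without an independent thickness measurement, which is the practical point of the rearrangement.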
  • Item
    Subdivision Schemes With Optimal Bounded Curvature Near Extraordinary Vertices
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Ma, Yue; Ma, Weiyin; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
We present a novel method to construct subdivision stencils near extraordinary vertices with limit surfaces having optimal bounded curvature at extraordinary positions. With the proposed method, subdivision stencils for newly inserted and updated vertices near extraordinary vertices are first constructed to ensure subdivision with G1 continuity and bounded curvature at extraordinary positions. The remaining degrees of freedom of the constructed subdivision stencils are further used to optimize the eigenbasis functions corresponding to the subsubdominant eigenvalues of the subdivision with respect to G2 continuity constraints. We demonstrate the method by replacing subdivision stencils near extraordinary vertices for Catmull-Clark subdivision and compare the results with the original Catmull-Clark subdivision and with previous tuning schemes known for small curvature variation near extraordinary positions. The results show that the proposed method produces subdivision schemes with better or comparable curvature behavior around extraordinary vertices with comparatively simple subdivision stencils.
  • Item
    Sequences with Low-Discrepancy Blue-Noise 2-D Projections
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Perrier, Hélène; Coeurjolly, David; Xie, Feng; Pharr, Matt; Hanrahan, Pat; Ostromoukhov, Victor; Gutierrez, Diego and Sheffer, Alla
Distributions of samples play a very important role in rendering, affecting variance, bias and aliasing in Monte Carlo and Quasi-Monte Carlo evaluation of the rendering equation. In this paper, we propose an original sampler which inherits many important features of classical low-discrepancy sequences (LDS): a high degree of uniformity of the achieved distribution of samples, computational efficiency and progressive sampling capability. At the same time, we purposely tailor our sampler in order to improve its spectral characteristics, which in turn play a crucial role in variance reduction, anti-aliasing and improving visual appearance of rendering. Our sampler can efficiently generate sequences of multidimensional points whose power spectra approach the so-called Blue-Noise (BN) spectral property while preserving low discrepancy (LD) in certain 2-D projections. In our tile-based approach, we perform permutations on subsets of the original Sobol LDS. In a large space of all possible permutations, we select those which better approach the target BN property, using pair-correlation statistics. We pre-calculate such ''good'' permutations for each possible Sobol pattern, and store them in a lookup table efficiently accessible at runtime. We provide a complete and rigorous proof that such permutations preserve dyadic partitioning and thus the LDS properties of the point set in 2-D projections. Our construction is computationally efficient, has a relatively low memory footprint and supports adaptive sampling. We validate our method by performing spectral/discrepancy/aliasing analysis of the achieved distributions, and provide variance analysis for several target integrands of theoretical and practical interest.
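The dyadic-partitioning property the proof preserves can be checked directly: for 2^m points with the (0,m,2)-net property, every dyadic split of the unit square into 2^m boxes of size 2^-k1 x 2^-k2 contains exactly one point. A sketch of that check, using the base-2 Hammersley set as a stand-in for the paper's permuted Sobol points:

```python
def vdc(i, bits=32):
    """Base-2 radical inverse (van der Corput sequence)."""
    r = 0
    for b in range(bits):
        if i & (1 << b):
            r |= 1 << (bits - 1 - b)
    return r / float(1 << bits)

def is_dyadic_net(points, m):
    """True iff every dyadic partition of the unit square into 2^m
    boxes (2^-k1 wide, 2^-k2 tall, k1 + k2 = m) holds exactly one
    of the 2^m points -- the property the permutations must preserve."""
    n = 1 << m
    for k1 in range(m + 1):
        k2 = m - k1
        boxes = set()
        for x, y in points:
            boxes.add((int(x * (1 << k1)), int(y * (1 << k2))))
        if len(boxes) != n:
            return False
    return True

m = 4
pts = [(i / float(1 << m), vdc(i)) for i in range(1 << m)]
```

A regular grid with the same number of points fails this test (many grid points share a column of any fine-in-x partition), which is why net-preserving permutations are the delicate part of the construction.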
  • Item
    Semantic Segmentation for Line Drawing Vectorization Using Neural Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Kim, Byungsoo; Wang, Oliver; Öztireli, A. Cengiz; Gross, Markus; Gutierrez, Diego and Sheffer, Alla
    In this work, we present a method to vectorize raster images of line art. Inverting the rasterization procedure is inherently ill-conditioned, as there exist many possible vector images that could yield the same raster image. However, not all of these vector images are equally useful to the user, especially if performing further edits is desired. We therefore define the problem of computing an instance segmentation of the most likely set of paths that could have created the raster image. Once the segmentation is computed, we use existing vectorization approaches to vectorize each path, and then combine all paths into the final output vector image. To determine which set of paths is most likely, we train a pair of neural networks to provide semantic clues that help resolve ambiguities at intersection and overlap regions. These predictions are made considering the full context of the image, and are then globally combined by solving a Markov Random Field (MRF). We demonstrate the flexibility of our method by generating results on character datasets, a synthetic random line dataset, and a dataset composed of human drawn sketches. For all cases, our system accurately recovers paths that adhere to the semantics of the drawings.
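The global combination step pairs per-segment network predictions (unary costs) with pairwise agreement constraints. A simplified sketch of one way to minimize such an MRF energy, using iterated conditional modes on a Potts model; the unary costs and solver choice here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def icm_labeling(unary, edges, lam=1.0, iters=10):
    """Iterated conditional modes on a Potts MRF: unary[i, l] is a
    (hypothetical) network cost for giving segment i label l; edges
    list segment pairs whose labels should agree; lam weighs the
    pairwise disagreement penalty."""
    labels = np.argmin(unary, axis=1)
    nbrs = {i: [] for i in range(len(unary))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        for i in range(len(unary)):
            costs = unary[i].copy()
            for j in nbrs[i]:
                costs += lam * (np.arange(unary.shape[1]) != labels[j])
            labels[i] = int(np.argmin(costs))
    return labels

# Three stroke segments, two candidate paths; the middle segment's
# unary is ambiguous, but its neighbors pull it to path 0.
unary = np.array([[0.1, 2.0], [1.0, 0.9], [0.2, 1.5]])
edges = [(0, 1), (1, 2)]
labels = icm_labeling(unary, edges, lam=1.0)
```

This captures the key behavior described in the abstract: locally ambiguous intersection regions are resolved by the global context encoded in the pairwise terms.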
  • Item
    ExpandNet: A Deep Convolutional Neural Network for High Dynamic Range Expansion from Low Dynamic Range Content
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Marnerides, Demetris; Bashford-Rogers, Thomas; Hatchett, Jon; Debattista, Kurt; Gutierrez, Diego and Sheffer, Alla
    High dynamic range (HDR) imaging provides the capability of handling real world lighting as opposed to the traditional low dynamic range (LDR) which struggles to accurately represent images with higher dynamic range. However, most imaging content is still available only in LDR. This paper presents a method for generating HDR content from LDR content based on deep Convolutional Neural Networks (CNNs) termed ExpandNet. ExpandNet accepts LDR images as input and generates images with an expanded range in an end-to-end fashion. The model attempts to reconstruct missing information that was lost from the original signal due to quantization, clipping, tone mapping or gamma correction. The added information is reconstructed from learned features, as the network is trained in a supervised fashion using a dataset of HDR images. The approach is fully automatic and data driven; it does not require any heuristics or human expertise. ExpandNet uses a multiscale architecture which avoids the use of upsampling layers to improve image quality. The method performs well compared to expansion/inverse tone mapping operators quantitatively on multiple metrics, even for badly exposed inputs.
  • Item
    FTP-SC: Fuzzy Topology Preserving Stroke Correspondence
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Yang, Wenwu; Seah, Hock-Soon; Chen, Quan; Liew, Hong-Ze; Sýkora, Daniel; Thuerey, Nils and Beeler, Thabo
Stroke correspondence construction is a precondition for vectorized 2D animation inbetweening and remains a challenging problem. This paper introduces FTP-SC, a fuzzy topology preserving stroke correspondence technique, which is accurate and provides the user more effective control over the correspondence result than previous matching approaches. The method employs a two-stage scheme to progressively establish stroke correspondences between the keyframes. In the first stage, stroke correspondences with high confidence are constructed by enforcing the preservation of the so-called “fuzzy topology”, which encodes intrinsic connectivity among neighboring strokes. Starting from the high-confidence correspondences, the second stage performs a greedy matching algorithm to generate a full correspondence between the strokes. Experimental results show that FTP-SC outperforms existing approaches and can establish the stroke correspondence with a reasonable amount of user interaction even for keyframes with large geometric and spatial variations between strokes.
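The second stage can be sketched as a greedy best-first matching seeded by the first stage's high-confidence pairs. In this toy version the similarity matrix is a hypothetical stand-in for the fuzzy-topology scores, and the seeds play the role of the stage-one correspondences:

```python
import numpy as np

def greedy_stroke_matching(score, seeds=()):
    """Greedy matching: lock in high-confidence seed pairs, then
    repeatedly match the remaining (frame-A stroke, frame-B stroke)
    pair with the highest similarity until no pairs are left."""
    score = np.array(score, dtype=float)
    matches = dict(seeds)
    score[list(matches.keys()), :] = -np.inf
    score[:, list(matches.values())] = -np.inf
    while np.isfinite(score).any():
        i, j = np.unravel_index(np.argmax(score), score.shape)
        matches[int(i)] = int(j)
        score[i, :] = -np.inf
        score[:, j] = -np.inf
    return matches

# Similarity between 3 strokes in keyframe A (rows) and keyframe B
# (cols); stroke 0 -> 0 is fixed by the high-confidence first stage.
sim = [[0.9, 0.2, 0.1],
       [0.3, 0.4, 0.8],
       [0.2, 0.7, 0.3]]
m = greedy_stroke_matching(sim, seeds=[(0, 0)])
```

Seeding matters: the locked pairs remove rows and columns from contention, so the greedy sweep cannot contradict the topology-verified correspondences.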
  • Item
    Stratified Sampling of Projected Spherical Caps
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Ureña, Carlos; Georgiev, Iliyan; Jakob, Wenzel and Hachisuka, Toshiya
    We present a method for uniformly sampling points inside the projection of a spherical cap onto a plane through the sphere's center. To achieve this, we devise two novel area-preserving mappings from the unit square to this projection, which is often an ellipse but generally has a more complex shape. Our maps allow for low-variance rendering of direct illumination from finite and infinite (e.g. sun-like) spherical light sources by sampling their projected solid angle in a stratified manner. We discuss the practical implementation of our maps and show significant quality improvement over traditional uniform spherical cap sampling in a production renderer.
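For reference, the "traditional uniform spherical cap sampling" used as the baseline is straightforward: a direction inside a cap of half-angle theta_max around +z is uniform in solid angle when z is uniform on [cos(theta_max), 1]. A minimal sketch of that baseline (the paper's contribution, stratified sampling of the cap's planar projection, is not reproduced here):

```python
import math
import random

def sample_spherical_cap(cos_theta_max, u, v):
    """Uniformly sample a unit direction inside a spherical cap of
    half-angle theta_max around +z, from uniform variates u, v in
    [0, 1). This is the classical solid-angle baseline."""
    z = 1.0 - u * (1.0 - cos_theta_max)      # uniform in [cos_theta_max, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

random.seed(1)
cos_max = math.cos(math.radians(30.0))
dirs = [sample_spherical_cap(cos_max, random.random(), random.random())
        for _ in range(1000)]
```

The baseline is uniform over the cap itself, not over its projection onto the receiver plane; sampling the projection directly, as the paper does, is what removes the cosine-term variance.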
  • Item
    Approximate Program Smoothing Using Mean-Variance Statistics, with Application to Procedural Shader Bandlimiting
    (The Eurographics Association and John Wiley & Sons Ltd., 2018) Yang, Yuting; Barnes, Connelly; Gutierrez, Diego and Sheffer, Alla
    We introduce a general method to approximate the convolution of a program with a Gaussian kernel. This results in the program being smoothed. Our compiler framework models intermediate values in the program as random variables, by using mean and variance statistics. We decompose the input program into atomic parts and relate the statistics of the different parts of the smoothed program. We give several approximate smoothing rules that can be used for the parts of the program. These include an improved variant of Dorn et al. [DBLW15], a novel adaptive Gaussian approximation, Monte Carlo sampling, and compactly supported kernels. Our adaptive Gaussian approximation handles multivariate Gaussian distributed inputs, gives exact results for a larger class of programs than previous work, and is accurate to the second order in the standard deviation of the kernel for programs with certain analytic properties. Because each expression in the program can have multiple approximation choices, we use a genetic search to automatically select the best approximations. We apply this framework to the problem of automatically bandlimiting procedural shader programs. We evaluate our method on a variety of geometries and complex shaders, including shaders with parallax mapping, animation, and spatially varying statistics. The resulting smoothed shader programs outperform previous approaches both numerically and aesthetically.
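The flavor of the per-part smoothing rules can be shown on a single atomic operation: for Gaussian input x ~ N(mu, sigma^2), the output statistics of y = x^2 are available in closed form, E[x^2] = mu^2 + sigma^2 and Var[x^2] = 2 sigma^4 + 4 mu^2 sigma^2. A sketch of that rule, cross-checked against Monte Carlo sampling (which the paper lists as an alternative approximation choice); the specific rule composition and search are not reproduced:

```python
import random

def smooth_square(mu, sigma):
    """Exact mean/variance propagation through y = x^2 for Gaussian
    input x ~ N(mu, sigma^2): one closed-form smoothing rule that a
    mean-variance compiler can apply to an atomic program part."""
    mean = mu * mu + sigma * sigma
    var = 2 * sigma**4 + 4 * mu * mu * sigma * sigma
    return mean, var

# Cross-check the closed form against Monte Carlo for mu=1, sigma=0.5.
random.seed(0)
samples = [random.gauss(1.0, 0.5) ** 2 for _ in range(200000)]
mc_mean = sum(samples) / len(samples)
mean, var = smooth_square(1.0, 0.5)
```

Chaining such rules over a program's expression tree yields the smoothed (Gaussian-convolved) program without any sampling at runtime, which is what makes shader bandlimiting practical.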