Authors: Garifullin, Albert; Maiorov, Nikolay; Frolov, Vladimir
Editors: Vangorp, Peter; Hunter, David
Date accessioned: 2023-09-12
Date available: 2023-09-12
Date issued: 2023
ISBN: 978-3-03868-231-8
DOI: https://doi.org/10.2312/cgvc.20231189
URI: https://diglib.eg.org:443/handle/10.2312/cgvc20231189

Abstract: Most existing solutions for single-view 3D object reconstruction are based on deep learning with implicit or voxel representations of the scene, and they are unable to produce detailed, high-quality meshes and textures that can be used directly in practice. Differentiable rendering, on the other hand, can produce high-quality meshes but requires several images of an object. We propose a novel approach to single-view 3D reconstruction that uses the input parameters of a procedural generator as the scene representation. Instead of estimating the vertex positions of the mesh directly, we estimate the input parameters of a procedural generator by minimizing a silhouette loss function between the reference and rendered images. We use differentiable rendering and create partly differentiable procedural generators to enable gradient-based optimization of the loss function. This allows us to create a highly detailed model from a single image taken in an uncontrolled environment. Moreover, the reconstructed model can be conveniently modified afterwards by changing the estimated input parameters.

Rights: Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Rendering; Shape modeling
Keywords: Computing methodologies; Rendering; Shape modeling
Title: Differentiable Procedural Models for Single-view 3D Mesh Reconstruction
Pages: 39-43 (5 pages)
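To illustrate the optimization loop the abstract describes (gradient-based estimation of procedural generator parameters against a silhouette loss), here is a minimal sketch assuming PyTorch. The paper's actual procedural generators and differentiable renderer are not reproduced here; `toy_procedural_silhouette` is a hypothetical stand-in that maps parameters directly to a soft silhouette image, so the whole pipeline stays differentiable end to end.

```python
# Minimal sketch of silhouette-loss optimization over procedural parameters.
# Assumption: PyTorch; the generator/renderer below is a toy placeholder,
# not the paper's method.
import torch

def toy_procedural_silhouette(params, res=64):
    # Stand-in for "procedural generator + differentiable renderer":
    # renders a soft circular silhouette whose center (cx, cy) and
    # radius r are the procedural input parameters.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, res),
                            torch.linspace(-1, 1, res), indexing="ij")
    cx, cy, r = params
    dist = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    return torch.sigmoid((r - dist) * 40.0)  # soft inside/outside mask

# Reference silhouette. Here it is synthesized from known "true" parameters;
# in the paper it would come from a segmented single input photograph.
with torch.no_grad():
    reference = toy_procedural_silhouette(torch.tensor([0.2, -0.1, 0.5]))

params = torch.tensor([0.0, 0.0, 0.3], requires_grad=True)  # initial guess
opt = torch.optim.Adam([params], lr=0.05)

for step in range(200):
    opt.zero_grad()
    rendered = toy_procedural_silhouette(params)
    loss = torch.mean((rendered - reference) ** 2)  # silhouette (MSE) loss
    loss.backward()  # gradients flow through the renderer into the parameters
    opt.step()

print("estimated parameters:", params.detach().numpy())
```

The soft sigmoid boundary is what keeps the rendered silhouette differentiable with respect to the parameters; the paper achieves the analogous property by making the procedural generators partly differentiable and using a differentiable renderer.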