Differentiable Procedural Models for Single-view 3D Mesh Reconstruction

Date: 2023
Publisher: The Eurographics Association
Abstract
Most existing solutions for single-view 3D object reconstruction are based on deep learning with implicit or voxel representations of the scene and are unable to produce detailed, high-quality meshes and textures that can be used directly in practice. Differentiable rendering, on the other hand, can produce high-quality meshes but requires several images of an object. We propose a novel approach to single-view 3D reconstruction that uses the input parameters of a procedural generator as the scene representation. Instead of estimating the vertex positions of the mesh directly, we estimate the input parameters of a procedural generator by minimizing a silhouette loss between the reference and rendered images. We use differentiable rendering and create partly differentiable procedural generators so that the loss can be minimized with gradient-based optimization. This allows us to create a highly detailed model from a single image taken in an uncontrolled environment. Moreover, the reconstructed model can be conveniently modified afterwards by changing the estimated input parameters.
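The core idea of the abstract — recovering a procedural generator's input parameters by gradient-based minimization of a silhouette loss against a reference image — can be illustrated with a toy sketch. Everything here (the one-parameter disc "generator", `render_silhouette`, `silhouette_loss`, the central-difference gradient) is an illustrative stand-in, not the paper's actual pipeline, which uses mesh-producing generators and analytic gradients from a differentiable renderer.

```python
import numpy as np

# Pixel coordinates of a 64x64 "image" over [-1, 1]^2.
xs = np.linspace(-1.0, 1.0, 64)
DIST = np.sqrt(xs[None, :] ** 2 + xs[:, None] ** 2)

def render_silhouette(radius, sharpness=20.0):
    """Toy procedural generator + renderer: soft (sigmoid) occupancy of a
    disc of the given radius, differentiable w.r.t. the parameter."""
    return 1.0 / (1.0 + np.exp(sharpness * (DIST - radius)))

def silhouette_loss(radius, reference):
    """MSE between the rendered and the reference silhouette."""
    return np.mean((render_silhouette(radius) - reference) ** 2)

# "Reference image": the silhouette produced by the unknown true parameter.
TRUE_RADIUS = 0.55
reference = render_silhouette(TRUE_RADIUS)

# Gradient descent on the generator parameter. A central-difference gradient
# keeps the sketch self-contained; the paper instead differentiates through
# the renderer and the (partly differentiable) generator analytically.
radius, lr, eps = 0.2, 0.05, 1e-4
for _ in range(500):
    grad = (silhouette_loss(radius + eps, reference)
            - silhouette_loss(radius - eps, reference)) / (2.0 * eps)
    radius -= lr * grad

print(f"recovered radius: {radius:.3f}")  # approaches the true value 0.55
```

Because the recovered quantity is a generator parameter rather than raw geometry, the reconstruction can be edited afterwards simply by changing that parameter — the property the abstract highlights for the full method.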
CCS Concepts: Computing methodologies -> Rendering; Shape modeling

        
@inproceedings{10.2312:cgvc.20231189,
  booktitle = {Computer Graphics and Visual Computing (CGVC)},
  editor    = {Vangorp, Peter and Hunter, David},
  title     = {{Differentiable Procedural Models for Single-view 3D Mesh Reconstruction}},
  author    = {Garifullin, Albert and Maiorov, Nikolay and Frolov, Vladimir},
  year      = {2023},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-231-8},
  DOI       = {10.2312/cgvc.20231189}
}