Title: Weakly Supervised Part-wise 3D Shape Reconstruction from Single-View RGB Images
Authors: Niu, Chengjie; Yu, Yang; Bian, Zhenwei; Li, Jun; Xu, Kai
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Date: 2020-10-29
ISSN: 1467-8659
DOI: 10.1111/cgf.14158 (https://doi.org/10.1111/cgf.14158)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14158
Pages: 447-457
Keywords: Computing methodologies; Reconstruction; Shape representations; Point-based models; Computer systems organization; Neural networks

Abstract: In order for deep learning models to truly understand 2D images for 3D geometry recovery, we argue that single-view reconstruction should be learned in a part-aware and weakly supervised manner. Such models lead to a more profound interpretation of 2D images, in which part-based parsing and assembling are involved. To this end, we learn a deep neural network which takes a single-view RGB image as input and outputs a 3D shape in parts, represented by 3D point clouds produced by an array of 3D part generators. In particular, we devise two levels of generative adversarial network (GAN) to generate shapes with both correct part shapes and a reasonable overall structure. To enable self-taught network training, we devise a differentiable projection module along with a self-projection loss measuring the error between the shape projection and the input image. The training data in our method is unpaired between the 2D images and the 3D shapes with part decomposition. Through qualitative and quantitative evaluations on public datasets, we show that our method achieves good performance in part-wise single-view reconstruction.
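The self-projection loss described in the abstract compares a differentiable rendering of the predicted point cloud against the input image. As a rough illustration only (not the authors' implementation), the following NumPy sketch projects a point cloud orthographically into a soft silhouette via Gaussian splatting and measures the pixel-wise discrepancy; the function names, the orthographic camera, and the Gaussian-splat formulation are all illustrative assumptions:

```python
import numpy as np

def project_silhouette(points, res=32, sigma=0.05):
    """Soft orthographic projection of a point cloud to a 2D silhouette.

    Each point is splatted onto the pixel grid as a Gaussian, so the
    rendered occupancy varies smoothly with point positions (this is
    what makes such a projection differentiable in practice).
    points: (N, 3) array with coordinates in [-1, 1].
    """
    xs = np.linspace(-1.0, 1.0, res)
    gx, gy = np.meshgrid(xs, xs, indexing="ij")
    img = np.zeros((res, res))
    for x, y, _z in points:  # view along z-axis: depth is dropped
        img += np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))
    return np.clip(img, 0.0, 1.0)  # soft occupancy in [0, 1]

def self_projection_loss(points, target_mask):
    """MSE between the projected silhouette and the target mask."""
    proj = project_silhouette(points, res=target_mask.shape[0])
    return float(np.mean((proj - target_mask) ** 2))
```

A shape whose projection matches the target mask incurs zero loss, while any displacement of the points increases it, which is the supervisory signal the abstract refers to as self-taught training.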