Inpainting Normal Maps for Lightstage data

dc.contributor.authorZuo, Hanchengen_US
dc.contributor.authorTiddeman, Bernarden_US
dc.contributor.editorVangorp, Peteren_US
dc.contributor.editorHunter, Daviden_US
dc.date.accessioned2023-09-12T05:44:48Z
dc.date.available2023-09-12T05:44:48Z
dc.date.issued2023
dc.description.abstractThis paper presents a new method for inpainting of normal maps using a generative adversarial network (GAN) model. Normal maps can be acquired from a lightstage, and when used for performance capture, there is a risk of areas of the face being obscured by movement (e.g. by arms, hair or props). Inpainting aims to fill missing areas of an image with plausible data. This work builds on previous work for general image inpainting, using a bow tie-like generator network and a discriminator network, and alternating training of the generator and discriminator. The generator tries to synthesise images that match the ground truth, and that can also fool the discriminator, which classifies real vs processed images. The discriminator is occasionally retrained to improve its performance at identifying processed images. In addition, our method takes into account the nature of the normal map data, which requires a modification to the loss function: we replace a mean squared error loss with a cosine loss when training the generator. Due to the small amount of available training data, even when using synthetic datasets, we require significant augmentation, which also needs to take account of the particular nature of the input data: image flipping and in-plane rotations must properly flip and rotate the normal vectors. During training, we monitored key performance metrics, including average loss, Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR) of the generator, alongside average loss and accuracy of the discriminator. Our analysis reveals that the proposed model generates high-quality, realistic inpainted normal maps, demonstrating the potential for application to performance capture. The results of this investigation provide a baseline on which future researchers could build with more advanced networks, and a point of comparison with inpainting of the source images used to generate the normal maps.en_US
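The abstract describes two normal-map-specific details: a cosine loss replacing mean squared error, and augmentation that must transform the normal vectors themselves, not just the pixel grid. A minimal NumPy sketch of both ideas follows; the function names and array layout (H, W, 3 maps of unit normals) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_loss(pred, target):
    """Mean (1 - cos theta) between predicted and ground-truth normals.

    pred, target: float arrays of shape (H, W, 3) holding normal vectors.
    Returns 0.0 when every predicted normal matches its target direction.
    """
    # Normalise defensively in case the network output is not unit length.
    p = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    cos_sim = np.sum(p * t, axis=-1)       # per-pixel dot product
    return float(np.mean(1.0 - cos_sim))

def flip_normal_map_horizontal(nmap):
    """Horizontally flip a normal map.

    Unlike an ordinary image flip, the x component of each normal must be
    negated so the vectors stay consistent with the mirrored geometry.
    """
    flipped = nmap[:, ::-1, :].copy()
    flipped[..., 0] *= -1.0
    return flipped
```

A plain image flip applied to a normal map would leave the normals pointing the wrong way in x, which is exactly the pitfall the augmentation described in the abstract avoids; in-plane rotations similarly require rotating the (x, y) components of every normal by the same angle.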
dc.description.sectionheadersShape Reconstruction
dc.description.seriesinformationComputer Graphics and Visual Computing (CGVC)
dc.identifier.doi10.2312/cgvc.20231190
dc.identifier.isbn978-3-03868-231-8
dc.identifier.pages45-52
dc.identifier.pages8 pages
dc.identifier.urihttps://doi.org/10.2312/cgvc.20231190
dc.identifier.urihttps://diglib.eg.org:443/handle/10.2312/cgvc20231190
dc.publisherThe Eurographics Associationen_US
dc.rightsAttribution 4.0 International License
dc.rights.urihttps://creativecommons.org/licenses/by/4.0/
dc.subjectCCS Concepts: Computing methodologies -> Neural networks; Reconstruction
dc.subjectComputing methodologies
dc.subjectNeural networks
dc.subjectReconstruction
dc.titleInpainting Normal Maps for Lightstage dataen_US
Files
Original bundle: 045-052.pdf (64.95 MB, Adobe Portable Document Format)