Title: MP-NeRF: Neural Radiance Fields for Dynamic Multi-person Synthesis from Sparse Views
Authors: Chao, Xian Jin; Leung, Howard
Editors: Michels, Dominik L.; Pirk, Soeren
Date: 2022-08-10
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14646
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14646
Pages: 317-325 (9 pages)

Abstract: Multi-person novel view synthesis aims to generate free-viewpoint videos of dynamic scenes containing multiple persons. However, current methods require numerous views to reconstruct a dynamic person and perform well only when a single person is present in the video. This paper aims to reconstruct a multi-person scene from fewer views, in particular addressing the occlusion and interaction problems that arise in multi-person scenes. We propose MP-NeRF, a practical method for multi-person novel view synthesis from sparse cameras without pre-scanned template human models. We apply a multi-person SMPL template as the identity and human motion prior. We then build a global latent code to integrate the relative observations among multiple people, so that we can represent multiple dynamic people as separate neural radiance representations from sparse views. Experiments on the multi-person dataset MVMP show that our method outperforms other state-of-the-art methods.
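The abstract describes one radiance representation per person, each informed by a shared global latent code. As a rough illustration of that idea (not the authors' architecture; all module names, dimensions, and the per-person MLP design below are assumptions), the following minimal PyTorch sketch conditions a small NeRF-style MLP for each person on a single learnable scene-level code:

```python
# Minimal sketch, assuming per-person NeRF-like MLPs conditioned on a shared
# global latent code. Not the MP-NeRF implementation; names and sizes are illustrative.
import torch
import torch.nn as nn


class PersonRadianceField(nn.Module):
    """Tiny NeRF-style MLP for one person, conditioned on a global scene code."""

    def __init__(self, pos_dim=3, global_dim=16, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + global_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, x, global_code):
        # x: (N, 3) sample points; global_code: (global_dim,) shared across persons
        code = global_code.expand(x.shape[0], -1)
        out = self.mlp(torch.cat([x, code], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3:4])
        return rgb, sigma


class MultiPersonScene(nn.Module):
    """One radiance field per person plus a learnable global latent code."""

    def __init__(self, num_persons=2, global_dim=16):
        super().__init__()
        self.global_code = nn.Parameter(torch.zeros(global_dim))
        self.fields = nn.ModuleList(
            PersonRadianceField(global_dim=global_dim) for _ in range(num_persons)
        )

    def forward(self, person_id, x):
        # Query the radiance field of one person; the global code couples all persons.
        return self.fields[person_id](x, self.global_code)


# Usage: query 1024 random sample points for person 0.
scene = MultiPersonScene(num_persons=2)
pts = torch.rand(1024, 3)
rgb, sigma = scene(0, pts)
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```

Because the global code is shared by every per-person field, gradients from all views and all persons flow into it, which is one plausible way the relative observations among people could be integrated; the paper's actual conditioning (and its use of the SMPL prior) should be consulted for the real design.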