NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting

Authors: Sun, Tiancheng; Lin, Kai-En; Bi, Sai; Xu, Zexiang; Ramamoorthi, Ravi
Editors: Bousseau, Adrien; McGuire, Morgan
Date issued: 2021-07-12
Year: 2021
ISBN: 978-3-03868-157-1
ISSN: 1727-3463
DOI: https://doi.org/10.2312/sr.20211299
URI: https://diglib.eg.org:443/handle/10.2312/sr20211299
Pages: 155-166

Abstract: Human portraits exhibit varied appearances when observed from different views under different lighting conditions. We can easily imagine how a face will look in another setup, but computer algorithms still fail at this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) we produce a portrait from a new camera view under new environmental lighting. Our system is trained on a large number of synthetic models and generalizes to different synthetic and real portraits under various lighting conditions. Our method performs simultaneous view synthesis and relighting given multi-view portraits as input, and achieves state-of-the-art results.

Keywords: Computing methodologies; Image-based rendering; Computational photography