Appearance Flow Completion for Novel View Synthesis

dc.contributor.author: Le, Hoang [en_US]
dc.contributor.author: Liu, Feng [en_US]
dc.contributor.editor: Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon [en_US]
dc.date.accessioned: 2019-10-14T05:09:39Z
dc.date.available: 2019-10-14T05:09:39Z
dc.date.issued: 2019
dc.description.abstract: Novel view synthesis from sparse and unstructured input views faces challenges such as the difficulty of dense 3D reconstruction and large occlusions. This paper addresses these problems by estimating appearance flows from the target view to the input views, which are used to warp and blend the input views. Our method first estimates a sparse set of 3D scene points using an off-the-shelf 3D reconstruction method and computes sparse flows from the target view to the input views. Our method then performs appearance flow completion to estimate dense flows from the corresponding sparse ones. Specifically, we design a deep fully convolutional neural network that takes the sparse flows and input views as input and outputs the dense flows. Furthermore, we estimate the optical flows between input views as a reference to guide the estimation of the dense flows between the target view and the input views. In addition to the dense flows, our network also estimates masks that blend the multiple warped inputs to render the target view. Experiments on the KITTI benchmark show that our method can generate high-quality novel views from sparse and unstructured input views. [en_US]
dc.description.number: 7
dc.description.sectionheaders: Image Based Rendering
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 38
dc.identifier.doi: 10.1111/cgf.13860
dc.identifier.issn: 1467-8659
dc.identifier.pages: 555-565
dc.identifier.uri: https://doi.org/10.1111/cgf.13860
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13860
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.title: Appearance Flow Completion for Novel View Synthesis [en_US]
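
The abstract above describes warping each input view toward the target view with a dense appearance flow and blending the warped views with estimated masks. The following is a minimal, hypothetical PyTorch sketch of that final warp-and-blend step only; it is not the paper's implementation. It assumes the dense flow gives, for every target pixel, the absolute (x, y) source coordinate to sample, and the helper names warp_with_flow and blend_views, the image size, and the placeholder tensors are illustrative.

# Hypothetical sketch: warp input views with dense appearance flows, then
# blend them with per-view masks (softmax-normalized across views).
import torch
import torch.nn.functional as F

def warp_with_flow(src, flow):
    """Warp a source view into the target view using a dense appearance flow.

    src:  (B, 3, H, W) source image.
    flow: (B, 2, H, W) per-pixel absolute source coordinates (x, y)
          for each target pixel.
    """
    b, _, h, w = src.shape
    # Normalize absolute coordinates to [-1, 1], as grid_sample expects.
    x = 2.0 * flow[:, 0] / (w - 1) - 1.0
    y = 2.0 * flow[:, 1] / (h - 1) - 1.0
    grid = torch.stack((x, y), dim=-1)            # (B, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)

def blend_views(warped, masks):
    """Blend several warped views using per-view masks as blending weights."""
    weights = torch.softmax(torch.stack(masks, dim=0), dim=0)  # (N, B, 1, H, W)
    stacked = torch.stack(warped, dim=0)                       # (N, B, 3, H, W)
    return (weights * stacked).sum(dim=0)                      # (B, 3, H, W)

# Usage with placeholder inputs: two input views warped and blended.
views = [torch.rand(1, 3, 128, 416) for _ in range(2)]
flows = [torch.rand(1, 2, 128, 416) * 100 for _ in range(2)]   # placeholder flows
masks = [torch.rand(1, 1, 128, 416) for _ in range(2)]          # placeholder masks
target = blend_views([warp_with_flow(v, f) for v, f in zip(views, flows)], masks)
print(target.shape)  # torch.Size([1, 3, 128, 416])

In the paper's pipeline, the dense flows and blending masks would come from the flow-completion network rather than the random placeholders used here.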