Authors: Dai, Pinxuan; Xie, Ning; Ghosh, Abhijeet; Wei, Li-Yi
Date issued: 2022-07-01
Year: 2022
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14593
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14593

Title: Deep Flow Rendering: View Synthesis via Layer-aware Reflection Flow

Abstract: Novel view synthesis (NVS) generates images from unseen viewpoints based on a set of input images. It is challenging because of inaccurate lighting optimization and geometry inference. Although current neural rendering methods have made significant progress, they still struggle to reconstruct global illumination effects such as reflections and exhibit ambiguous blurs in highly view-dependent areas. This work addresses high-quality view synthesis with an emphasis on reflections on non-concave surfaces. We propose Deep Flow Rendering, which optimizes direct and indirect lighting separately, leveraging texture mapping, appearance flow, and neural rendering. A learnable texture is used to predict view-independent features while enabling efficient reflection extraction. To accurately fit view-dependent effects, we adopt a constrained neural flow that transfers image-space features from nearby views to the target view in an edge-preserving manner. We then implement a fusing renderer that combines the predictions of both layers to form the output image. Experiments demonstrate that our method outperforms state-of-the-art methods at synthesizing various scenes with challenging reflection effects.

CCS Concepts: Computing methodologies --> Image-based rendering; Neural networks

Keywords: Computing methodologies; Image-based rendering; Neural networks

Pages: 139-148 (10 pages)