Title: PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis
Authors: Hahlbohm, Florian; Kappel, Moritz; Tauscher, Jan-Philipp; Eisemann, Martin; Magnor, Marcus
Editors: Guthe, Michael; Grosch, Thorsten
Date Issued: 2023-09-25
Year: 2023
ISBN: 978-3-03868-232-5
DOI: https://doi.org/10.2312/vmv.20231226
URI: https://diglib.eg.org:443/handle/10.2312/vmv20231226
Pages: 53-61 (9 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Image-based rendering; Point-based models

Abstract: This paper presents a point-based neural rendering approach for complex real-world objects from a set of photographs. Our method is specifically geared towards representing fine detail and reflective surface characteristics at improved quality over current state-of-the-art methods. From the photographs, we create a 3D point model based on optimized neural feature points located on a regular grid. For rendering, we employ view-dependent spherical harmonics shading, differentiable rasterization, and a deep neural rendering network. By combining a point-based approach and novel regularizers, our method is able to accurately represent local detail such as fine geometry and high-frequency texture while at the same time convincingly interpolating unseen viewpoints during inference. Our method achieves about 7 frames per second at 800×800 pixel output resolution on commodity hardware, putting it within reach for real-time rendering applications.
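The view-dependent spherical harmonics shading mentioned in the abstract could be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of degree-2 real SH (9 coefficients per color channel) and the array shapes are assumptions for the example.

```python
import numpy as np

def sh_basis(d):
    """Real spherical harmonics basis up to degree 2, evaluated at
    unit view direction d = (x, y, z). Returns 9 basis values."""
    x, y, z = d
    return np.array([
        0.28209479177387814,                        # l = 0
        -0.4886025119029199 * y,                    # l = 1
        0.4886025119029199 * z,
        -0.4886025119029199 * x,
        1.0925484305920792 * x * y,                 # l = 2
        -1.0925484305920792 * y * z,
        0.31539156525252005 * (3.0 * z * z - 1.0),
        -1.0925484305920792 * x * z,
        0.5462742152960396 * (x * x - y * y),
    ])

def shade_points(sh_coeffs, view_dir):
    """Compute a view-dependent RGB color per point.

    sh_coeffs: (N, 9, 3) learned SH coefficients per point and channel
               (hypothetical layout for this sketch).
    view_dir:  (3,) unit vector from the points toward the camera.
    """
    b = sh_basis(view_dir)                           # (9,)
    rgb = np.einsum("k,nkc->nc", b, sh_coeffs)       # weighted sum over basis
    return np.clip(rgb, 0.0, 1.0)
```

In a full pipeline, colors like these would be produced per rasterized point and then refined by the deep rendering network the abstract describes; here only the SH evaluation step is shown.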