Title: Dense 3D Gaussian Splatting Initialization for Sparse Image Data
Authors: Seibt, Simon; Chang, Thomas Vincent Siu-Lung; von Rymon Lipinski, Bartosz; Latoschik, Marc Erich; Liu, Lingjie; Averkiou, Melinos
Date issued: 2024-04-30
Year: 2024
ISBN: 978-3-03868-239-4
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egp.20241038
URL: https://diglib.eg.org/handle/10.2312/egp20241038
Pages: 2
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Reconstruction; Point-based models; Rendering
Keywords: Computing methodologies → Reconstruction; Point-based models; Rendering

Abstract: This paper presents advancements in novel-view synthesis with 3D Gaussian Splatting (3DGS) using a dense and accurate SfM point cloud initialization approach. We address the challenge of achieving photorealistic renderings from sparse image data, where basic 3DGS training may result in suboptimal convergence and thus visual artifacts. The proposed method enhances the precision and density of initially reconstructed point clouds by refining 3D positions and extrapolating additional points, even for difficult image regions, e.g., those with repeating patterns and suboptimal visual coverage. Our contributions focus on improving "Dense Feature Matching for Structure-from-Motion" (DFM4SfM), based on a homographic decomposition of the image space, to support 3DGS training: First, a grid-based feature detection method is introduced for DFM4SfM to ensure a well-distributed 3D Gaussian initialization uniformly over all depth planes. Second, the SfM feature matching is complemented by a geometric plausibility check, priming the homography estimation and thereby improving the initial placement of 3D Gaussians. Experimental results on the NeRF-LLFF dataset demonstrate that this approach achieves superior qualitative and quantitative results, even with fewer views, and shows the potential for significantly accelerated 3DGS training with faster convergence.
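The grid-based feature detection mentioned in the abstract can be illustrated with a minimal sketch: the image plane is partitioned into a regular grid and only the strongest detections per cell are kept, so features (and hence the initial 3D Gaussians) are spread uniformly over the image instead of clustering in high-texture regions. This is a hedged illustration of the general idea only; the function name, grid size, and per-cell cap are assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def grid_filter_features(features, width, height, grid=8, per_cell=2):
    """Keep at most `per_cell` features per grid cell, strongest first.

    features: list of (x, y, response) tuples, e.g. from any keypoint
    detector. Returns a spatially well-distributed subset.
    """
    cells = defaultdict(list)
    cell_w, cell_h = width / grid, height / grid
    for x, y, response in features:
        # Clamp to the last cell so border pixels stay in range.
        col = min(int(x / cell_w), grid - 1)
        row = min(int(y / cell_h), grid - 1)
        cells[(row, col)].append((x, y, response))
    kept = []
    for pts in cells.values():
        pts.sort(key=lambda p: p[2], reverse=True)  # strongest response first
        kept.extend(pts[:per_cell])
    return kept
```

For example, three detections crowded into one cell are reduced to the two strongest, while an isolated detection in another cell is always retained.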