Title: Refinement of Monocular Depth Maps via Multi-View Differentiable Rendering
Authors: Fink, Laura; Franke, Linus; Egger, Bernhard; Keinert, Joachim; Stamminger, Marc
Editors: Egger, Bernhard; Günther, Tobias
Date issued: 2025-09-24
Year: 2025
ISBN: 978-3-03868-294-3
DOI: https://doi.org/10.2312/vmv.20251232
Handle: https://diglib.eg.org/handle/10.2312/vmv20251232
Pages: 10
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Computer vision problems; Computing methodologies → Rasterization

Abstract: Accurate depth estimation is at the core of many applications in computer graphics, vision, and robotics. Current state-of-the-art monocular depth estimators, trained on extensive datasets, generalize well but lack the 3D consistency needed for many applications. In this paper, we combine the strength of these generalizing monocular depth estimation techniques with multi-view data by framing this as an analysis-by-synthesis optimization problem to lift and refine such relative depth maps to accurate, error-free depth maps. After an initial global scale estimation through structure-from-motion point clouds, we further refine the depth map through optimization, enforcing multi-view consistency via photometric and geometric losses with differentiable rendering of the meshed depth map. In a two-stage optimization, the scaling is refined first, and afterwards artifacts and errors in the depth map are corrected via nearby-view photometric supervision. Our evaluation shows that our method generates detailed, high-quality, view-consistent, accurate depth maps, also in challenging indoor scenarios, and outperforms state-of-the-art multi-view depth reconstruction approaches on such datasets. Project page and source code can be found at https://lorafib.github.io/ref_depth/.
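The abstract describes an initial global scale estimation that aligns a relative monocular depth map to metric structure-from-motion (SfM) points before the per-pixel refinement stages. The paper itself does not spell out the alignment formula here; the snippet below is a minimal sketch of one standard way to do this step, assuming a simple closed-form least-squares fit between the predicted depths sampled at SfM point projections and the corresponding SfM depths (the function name `estimate_global_scale` and the scale-only model are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def estimate_global_scale(rel_depth: np.ndarray, sfm_depth: np.ndarray) -> float:
    """Closed-form least-squares scale aligning relative to metric depth.

    rel_depth: relative (unitless) depths predicted by the monocular network,
               sampled at pixels where SfM points project into the view.
    sfm_depth: metric depths of the corresponding SfM points in this view.

    Solves min_s || s * rel_depth - sfm_depth ||^2, giving
    s = <rel, sfm> / <rel, rel>.  (Illustrative: a robust variant would
    downweight outliers, e.g. via RANSAC or a Huber loss.)
    """
    rel = np.asarray(rel_depth, dtype=np.float64).ravel()
    sfm = np.asarray(sfm_depth, dtype=np.float64).ravel()
    return float(np.dot(rel, sfm) / np.dot(rel, rel))

# Usage: recover a known synthetic scale factor.
rel = np.array([1.0, 2.0, 3.0, 4.0])
sfm = 2.5 * rel
scale = estimate_global_scale(rel, sfm)  # -> 2.5
```

The resulting scalar would serve only as a starting point; per the abstract, the scaling is then refined further in the first stage of the two-stage multi-view optimization.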