Title: DeepTex: Deep Learning-Based Texturing of Image-Based 3D Reconstructions
Contributors: Neumann, Kai Alexander; Santos, Pedro; Fellner, Dieter W.; Corsini, Massimiliano; Ferdani, Daniele; Kuijper, Arjan; Kutlu, Hasan
Date: 2024-09-15
Year: 2024
ISBN: 978-3-03868-248-6
ISSN: 2312-6124
DOI: https://doi.org/10.2312/gch.20241257
URI: https://diglib.eg.org/handle/10.2312/gch20241257
Pages: 4
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Reconstruction; Computer vision; Artificial intelligence

Abstract: Image-based 3D reconstruction is a widely used technique for measuring the geometry and color of objects or scenes from images. While the geometry reconstruction of state-of-the-art approaches is largely robust against varying lighting conditions and outliers, these factors remain a significant challenge for computing an accurate texture map. This work proposes a deep learning-based texturing approach called "DeepTex" that applies a custom learned blending method on top of a traditional mosaic-based texturing approach. The model was trained using a custom synthetic data generation workflow and showed significantly increased accuracy when generating textures in the presence of outliers and non-uniform lighting.
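The abstract describes a learned blending step applied on top of per-view texture candidates produced by a mosaic-based texturing pipeline. As an illustration only, not the authors' implementation, the sketch below assumes each texel has color candidates from several source views plus hypothetical per-view features (e.g. viewing angle, sampling resolution), and a small network predicts blending weights over the views:

```python
# Illustrative sketch only -- not the DeepTex model described in the paper.
# Assumption: per-texel color candidates from N views and simple per-view
# features are available; a shared MLP scores each candidate and a softmax
# over views yields blending weights intended to down-weight outliers and
# lighting inconsistencies.
import torch
import torch.nn as nn

class LearnedBlending(nn.Module):
    def __init__(self, feat_dim: int = 4, hidden: int = 32):
        super().__init__()
        # Scores one view candidate at a time; parameters are shared across views.
        self.score = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden),  # RGB candidate + per-view features
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, colors: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # colors: (T, N, 3) texel color candidates from N views
        # feats:  (T, N, F) per-view features for each texel
        logits = self.score(torch.cat([colors, feats], dim=-1)).squeeze(-1)  # (T, N)
        weights = torch.softmax(logits, dim=-1)                              # (T, N)
        return (weights.unsqueeze(-1) * colors).sum(dim=1)                   # (T, 3)

# Example: blend candidates for 1024 texels observed from 6 views.
texels = torch.rand(1024, 6, 3)
features = torch.rand(1024, 6, 4)
blended = LearnedBlending()(texels, features)  # (1024, 3) blended texel colors
```

Such a model could be trained on synthetic renderings where the ground-truth texture is known, which mirrors the synthetic data generation workflow mentioned in the abstract; the specific network architecture, features, and loss used by DeepTex are described in the paper itself.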