Fitness of General-Purpose Monocular Depth Estimation Architectures for Transparent Structures

Authors: Wirth, Tristan; Jamili, Aria; Buelow, Max von; Knauthe, Volker; Guthe, Stefan
Editors: Pelechano, Nuria; Vanderhaeghe, David
Date: 2022-04-22 (2022)
ISBN: 978-3-03868-169-4
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20221020
URI: https://diglib.eg.org:443/handle/10.2312/egs20221020
Pages: 9-12 (4 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies --> Computer vision; Shape inference

Abstract: Due to material properties, monocular depth estimation of transparent structures is inherently challenging. Recent advances leverage additional knowledge that is not available in all contexts, i.e., known shape or depth information from a sensor. General-purpose machine learning models that do not utilize such additional knowledge have not yet been explicitly evaluated regarding their performance on transparent structures. In this work, we show that these models perform poorly on depth estimation of transparent structures. However, fine-tuning on suitable data sets, such as ClearGrasp, increases their estimation performance on the task at hand. Our evaluations show that high performance on general-purpose benchmarks translates well into performance on transparent objects after fine-tuning. Furthermore, our analysis suggests that state-of-the-art high-performing models are not able to capture a high grade of detail from both the image foreground and background at the same time. This finding shows the demand for a combination of existing models to further enhance depth estimation quality.