Title: GlassNet: Label Decoupling-based Three-stream Neural Network for Robust Image Glass Detection
Authors: Zheng, Chengyu; Shi, Ding; Yan, Xuefeng; Liang, Dong; Wei, Mingqiang; Yang, Xin; Guo, Yanwen; Xie, Haoran
Editors: Hauser, Helwig and Alliez, Pierre
Issued: 2022-03-25 (2022)
ISSN: 1467-8659
DOI: 10.1111/cgf.14441 (https://doi.org/10.1111/cgf.14441)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14441
Pages: 377-388

Abstract: Most existing object detection methods produce poor glass detection results because transparent glass shares its appearance with whatever objects lie behind it in an image. Unlike traditional deep learning-based approaches that simply use the object boundary as auxiliary supervision, we exploit label decoupling to decompose the original labelled ground-truth (GT) map into an interior-diffusion map and a boundary-diffusion map. The GT map, in collaboration with the two newly generated maps, breaks the imbalanced distribution of the object boundary, leading to improved glass detection quality. We make three key contributions to the transparent glass detection problem: (1) we propose a three-stream neural network (GlassNet for short) to fully absorb beneficial features in the three maps; (2) we design a multi-scale interactive dilation module to explore a wider range of contextual information; (3) we develop an attention-based boundary-aware feature mosaic module to integrate multi-modal information. Extensive experiments on the benchmark dataset show clear improvements of our method over state-of-the-art methods, in terms of both overall glass detection accuracy and boundary clearness.

Keywords: image processing; image and video processing; image segmentation; computer vision – shape recognition; methods and applications
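The abstract does not spell out how the interior-diffusion and boundary-diffusion maps are derived from the GT map. The sketch below shows one common distance-transform formulation of label decoupling applied to a binary glass mask; it is an illustrative assumption, not the exact recipe used by GlassNet, and the function name decouple_label is hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def decouple_label(gt_mask):
    """Split a binary ground-truth glass mask into an interior-diffusion map
    and a boundary-diffusion map.

    Illustrative sketch only: this follows a common distance-transform
    formulation of label decoupling and is not taken from the GlassNet paper,
    whose exact formulation may differ.
    """
    gt = (gt_mask > 0).astype(np.float32)

    # Euclidean distance of each foreground pixel to the nearest background pixel.
    dist = distance_transform_edt(gt)
    if dist.max() > 0:
        dist = dist / dist.max()  # normalise to [0, 1]

    # Interior-diffusion map: large far from the glass boundary, small near it.
    interior = gt * dist

    # Boundary-diffusion map: the remaining mass, concentrated at the boundary.
    boundary = gt - interior
    return interior, boundary
```

Under this assumed formulation, the interior map dominates away from the glass boundary while the boundary map concentrates supervision near it, which is how the two decoupled maps, used alongside the original GT map, rebalance the otherwise scarce boundary pixels.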