Title: DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels
Authors: Guerrero, Paul; Winnemöller, Holger; Li, Wilmot; Mitra, Niloy J.
Editors: Jakob Andreas Bærentzen and Klaus Hildebrandt
Date: 2017-07-02
ISBN: 978-3-03868-047-5
ISSN: 1727-8384
Pages: 3-4
DOI: https://doi.org/10.2312/sgp.20171202
URL: https://diglib.eg.org:443/handle/10.2312/sgp20171202

Abstract: In the context of scene understanding, a variety of methods exist to estimate different information channels from mono or stereo images, including disparity, depth, and normals. Although several advances have been reported in recent years for these tasks, the estimated information is often imprecise, particularly near depth contours or creases. However, studies have shown that precisely such depth edges carry critical cues for the perception of shape and play important roles in tasks like depth-based segmentation or foreground selection. Unfortunately, the currently extracted channels often carry conflicting signals, making it difficult for subsequent applications to use them effectively. In this paper, we focus on the problem of obtaining high-precision depth edges by jointly analyzing such unreliable information channels. We propose DEPTHCUT, a data-driven fusion of the channels using a convolutional neural network trained on a large dataset with known depth. The resulting depth edges can be used for segmentation, decomposing a scene into segments with relatively smooth depth, or for improving the accuracy of the depth estimate near depth edges by constraining its gradients to agree with these edges. Quantitative experiments show that our depth edges yield improved segmentation performance compared to a more naive channel fusion. Qualitatively, we demonstrate that the depth edges can be used for superior segmentation and an improved depth estimate near depth edges.