Title: Interpreting Black-Box Semantic Segmentation Models in Remote Sensing Applications
Authors: Janik, Adrianna; Sankaran, Kris; Ortiz, Anthony
Editors: Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko
Date: 2019-06-02
ISBN: 978-3-03868-089-5
DOI: https://doi.org/10.2312/mlvis.20191158
URI: https://diglib.eg.org:443/handle/10.2312/mlvis20191158
Pages: 7-11
Keywords: Human-centered computing; Information visualization; Computing methodologies; Knowledge representation and reasoning; Image segmentation

Abstract: In the interpretability literature, attention has focused on understanding black-box classifiers, but many problems, ranging from medicine through agriculture to crisis response in humanitarian aid, are tackled with semantic segmentation models. The absence of interpretability methods for this canonical problem in computer vision motivates this study. We present a user-centric approach that blends techniques from interpretability, representation learning, and interactive visualization. It allows users to visualize and link latent representations to real data instances, as well as to qualitatively assess the strength of predictions. We have applied our method to a deep learning model for semantic segmentation, U-Net, in a remote sensing application of building detection. This application is of high interest to humanitarian crisis response teams that rely on satellite image analysis. Preliminary results show utility in understanding semantic segmentation models; a demo presenting the idea is available online.
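
The abstract describes linking a segmentation model's latent representation to individual data instances for visualization. As a minimal sketch of that idea, the code below extracts bottleneck activations from a toy U-Net-style encoder and projects them to 2D for an instance-linked scatter plot. Everything here is an assumption for illustration: TinyUNetEncoder, the t-SNE projection, and the synthetic tiles are placeholders, not the authors' actual model, data, or implementation.

import torch
import torch.nn as nn
from sklearn.manifold import TSNE

class TinyUNetEncoder(nn.Module):
    # Hypothetical downsampling path of a U-Net-style model,
    # standing in for the paper's unspecified architecture.
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.bottleneck = nn.Conv2d(32, 64, 3, padding=1)

    def forward(self, x):
        return self.bottleneck(self.block2(self.block1(x)))

model = TinyUNetEncoder().eval()

# Random tensors standing in for a batch of 64 RGB satellite tiles.
tiles = torch.randn(64, 3, 64, 64)

with torch.no_grad():
    latent = model(tiles)                       # (64, 64, 16, 16)
    # Global-average-pool each tile's bottleneck map to one feature vector.
    features = latent.mean(dim=(2, 3)).numpy()  # (64, 64)

# Project the latent features to 2D; in an interactive view, each point
# can then be linked back to the source tile it was computed from.
embedding = TSNE(n_components=2, perplexity=10).fit_transform(features)
print(embedding.shape)  # (64, 2)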