Explaining Black Box with Visual Exploration of Latent Space

Authors: Bodria, Francesco; Rinzivillo, Salvatore; Fadda, Daniele; Guidotti, Riccardo; Giannotti, Fosca; Pedreschi, Dino
Editors: Agus, Marco; Aigner, Wolfgang; Hoellt, Thomas
Date issued: 2022-06-02
Year: 2022
ISBN: 978-3-03868-184-7
DOI: https://doi.org/10.2312/evs.20221098
URI: https://diglib.eg.org:443/handle/10.2312/evs20221098
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Human-centered computing --> User interface design; Visualization techniques
Keywords: Human centered computing; User interface design; Visualization techniques
Pages: 85-89 (5 pages)

Abstract: Autoencoders are a powerful yet opaque feature-reduction technique, on top of which we propose a novel way to jointly and visually explore both the latent and the real space. By interactively exploiting the mapping between latent and real features, it is possible to unveil the meaning of the latent features while gaining deeper insight into the original variables. To achieve this goal, we exploit and re-adapt existing approaches from eXplainable Artificial Intelligence (XAI) to understand the relationships between the input and latent features. These uncovered relationships allow the user to understand the data structure with respect to external variables, such as the predictions of a classification model. We developed an interactive framework that visually explores the latent space and allows the user to understand the relationships between the input features and the model's predictions.
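
As one concrete illustration of the pipeline the abstract describes (train an autoencoder, then re-adapt an XAI technique to relate input features to latent features), the following is a minimal Python sketch. It is an assumption-laden toy, not the paper's implementation: the permutation-importance probe, the toy data, and all names are choices made here purely for illustration.

# A minimal sketch (assumption: NOT the paper's code) of linking latent
# features back to input features, in the spirit of the abstract.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Toy tabular data: 200 samples, 6 input ("real") features.
X = torch.tensor(rng.normal(size=(200, 6)), dtype=torch.float32)

# A small autoencoder with a 2-dimensional latent space.
encoder = nn.Sequential(nn.Linear(6, 4), nn.ReLU(), nn.Linear(4, 2))
decoder = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 6))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2
)

# Standard reconstruction training.
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

# XAI-style probe: permutation importance of each input feature for each
# latent feature (one possible re-adaptation of existing attribution
# methods; the paper may use a different technique).
with torch.no_grad():
    Z = encoder(X)
importance = np.zeros((6, 2))
for j in range(6):
    Xp = X.clone()
    Xp[:, j] = Xp[torch.randperm(len(Xp)), j]  # shuffle feature j
    with torch.no_grad():
        Zp = encoder(Xp)
    # Mean squared shift of each latent coordinate when feature j breaks.
    importance[j] = (Z - Zp).pow(2).mean(dim=0).numpy()

print("input-feature -> latent-feature importance:\n", importance.round(3))

The resulting importance matrix is the kind of input-to-latent mapping the abstract refers to: it indicates which original variables drive each latent dimension, and could back an interactive view that annotates a latent-space scatterplot with the dominant input features.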