Title: Visual Interpretation of DNN-based Acoustic Models using Deep Autoencoders
Authors: Grósz, Tamás; Kurimo, Mikko
Editors: Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko
Date: 2020-05-24
ISBN: 978-3-03868-113-7
DOI: https://doi.org/10.2312/mlvis.20201103
URL: https://diglib.eg.org:443/handle/10.2312/mlvis20201103
Pages: 25-29
Keywords: Computing methodologies; Dimensionality reduction and manifold learning; Speech recognition; Neural networks

Abstract: In the past few years, Deep Neural Networks (DNNs) have become the state-of-the-art solution in several areas, including automatic speech recognition (ASR); unfortunately, they are generally viewed as black boxes. Recently, this has started to change, as researchers have dedicated much effort to interpreting their behavior. In this work, we concentrate on visual interpretation by depicting the hidden activation vectors of the DNN, and we propose the use of deep autoencoders (DAEs) to transform these hidden representations for inspection. We use multiple metrics to compare our approach with other widely used algorithms, and the results show that it is quite competitive. The main advantage of autoencoders over the existing methods is that, after the training phase, they apply a fixed transformation that can be used to visualize any hidden activation vector without further optimization, which is not true of the other methods.
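The abstract describes training a deep autoencoder on a DNN's hidden activation vectors and then reusing the frozen encoder as a fixed low-dimensional projection for plotting. Below is a minimal sketch of that idea, not the authors' code: the PyTorch framework, the layer sizes, the 2-D bottleneck, and the random stand-in data (in place of real acoustic-model activations) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    """Deep autoencoder: the encoder compresses activation vectors to a
    2-D code for scatter plotting; the decoder exists only so that a
    reconstruction loss can drive training. Sizes are assumptions."""

    def __init__(self, in_dim: int, code_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code


if __name__ == "__main__":
    torch.manual_seed(0)
    # Stand-in for hidden activation vectors collected from an acoustic
    # model (one row per frame); real data would replace this tensor.
    activations = torch.randn(1024, 512)

    model = DeepAutoencoder(in_dim=512)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Train by minimizing reconstruction error of the activations.
    for epoch in range(20):
        recon, _ = model(activations)
        loss = loss_fn(recon, activations)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, the encoder is a fixed transformation: any new
    # activation vector can be projected without further optimization.
    with torch.no_grad():
        coords_2d = model.encoder(activations)  # shape (1024, 2)
```

This illustrates the advantage claimed in the abstract: unlike methods that re-optimize an embedding for each dataset (e.g. t-SNE), the trained encoder maps unseen activation vectors to 2-D with a single forward pass.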