Title: Loss-contribution-based in situ Visualization for Neural Network Training
Author: Lee, Teng-Yok
Editors: Agus, Marco; Garth, Christoph; Kerren, Andreas
Date: 2021-06-12
ISBN: 978-3-03868-143-4
DOI: 10.2312/evs.20211046 (https://doi.org/10.2312/evs.20211046)
URL: https://diglib.eg.org:443/handle/10.2312/evs20211046
Pages: 1-5
Keywords: Human-centered computing; Scientific visualization

Abstract: This paper presents an in situ visualization algorithm for neural network training. As each training data item produces multiple hidden variables when forward-propagated through a neural network, our algorithm first estimates how much each hidden variable contributes to the training loss. Using a linear approximation, we estimate this contribution from the forward-propagated value and the backward-propagated derivative of each hidden variable, both of which are already available during training at no extra cost. By aggregating the loss contributions of the hidden variables per data item, we can detect difficult data items that contribute most to the loss, which may be ambiguous or even incorrectly labeled. For convolutional neural networks (CNNs) with images as inputs, we extend the loss-contribution estimation to measure how different image areas impact the loss, which can be visualized over time to see how a CNN evolves to handle ambiguous images.
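
The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of the idea described in the abstract, assuming the linear (first-order) contribution of a hidden variable is taken as its forward-propagated value times its backward-propagated derivative. The model, layer choice, and aggregation over channels and pixels are hypothetical.

```python
# Minimal sketch (assumed first-order approximation, not the authors' code):
# capture a hidden layer's forward values h and their gradients dL/dh during a
# normal training step, then score each data item by aggregating h * dL/dh.
import torch
import torch.nn as nn

# Illustrative CNN for 28x28 single-channel images.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)
criterion = nn.CrossEntropyLoss()

activations = {}

def save_activation(name):
    # Forward hook: keep the hidden values and ask autograd to retain their
    # gradient, so both h and dL/dh exist after the usual backward pass.
    def hook(module, inputs, output):
        output.retain_grad()
        activations[name] = output
    return hook

model[1].register_forward_hook(save_activation("relu1"))

# One training step on a dummy batch of 4 items.
x = torch.randn(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
loss = criterion(model(x), y)
loss.backward()

h = activations["relu1"]        # forward-propagated hidden values, shape (N, C, H, W)
g = h.grad                      # backward-propagated derivatives dL/dh

# Linear estimate of each hidden variable's contribution to the loss.
contrib = h.detach() * g

# Aggregate over all hidden variables of each data item; the largest scores
# flag candidates for ambiguous or mislabeled items.
per_item_score = contrib.abs().sum(dim=(1, 2, 3))   # shape (N,)

# For image inputs, aggregating over channels only keeps a spatial map of
# which image areas impact the loss most.
spatial_map = contrib.abs().sum(dim=1)               # shape (N, H, W)

print(per_item_score)
```

Because the hook only stores tensors that the training step computes anyway, this kind of instrumentation fits the in situ setting: the contribution scores can be aggregated and visualized at each iteration without a separate analysis pass.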