Show simple item record

dc.contributor.author  Lee, Teng-Yok  en_US
dc.contributor.editor  Agus, Marco and Garth, Christoph and Kerren, Andreas  en_US
dc.date.accessioned  2021-06-12T11:03:13Z
dc.date.available  2021-06-12T11:03:13Z
dc.date.issued  2021
dc.identifier.isbn  978-3-03868-143-4
dc.identifier.uri  https://doi.org/10.2312/evs.20211046
dc.identifier.uri  https://diglib.eg.org:443/handle/10.2312/evs20211046
dc.description.abstract  This paper presents an in situ visualization algorithm for neural network training. As each training data item leads to multiple hidden variables when forward-propagated through a neural network, our algorithm first estimates how much each hidden variable contributes to the training loss. Using a linear approximation, we can estimate this contribution mainly from the forward-propagated value and the backward-propagated derivative of each hidden variable, both of which are available during training at no extra cost. By aggregating the loss contributions of the hidden variables per data item, we can detect difficult data items that contribute most to the loss, which can be ambiguous or even incorrectly labeled. For convolutional neural networks (CNNs) with images as inputs, we extend the estimation of loss contribution to measure how different image areas impact the loss, which can be visualized over time to see how a CNN evolves to handle ambiguous images.  en_US
dc.publisher  The Eurographics Association  en_US
dc.subject  Human-centered computing
dc.subject  Scientific visualization
dc.title  Loss-contribution-based in situ Visualization for Neural Network Training  en_US
dc.description.seriesinformation  EuroVis 2021 - Short Papers
dc.description.sectionheaders  Machine Learning and SciVis Applications
dc.identifier.doi  10.2312/evs.20211046
dc.identifier.pages  1-5
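The abstract describes estimating each hidden variable's loss contribution as a linear (first-order) approximation built from its forward-propagated value and its backward-propagated derivative, then aggregating per data item to find difficult items. A minimal NumPy sketch of that idea, using a hypothetical one-hidden-layer network with squared-error loss (all shapes and names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 items with 4 features each (illustrative assumption).
X = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 1))

# One hidden layer: h = relu(X W1), prediction p = h W2, mean squared error loss.
W1 = rng.normal(size=(4, 6)) * 0.5
W2 = rng.normal(size=(6, 1)) * 0.5

z = X @ W1
h = np.maximum(z, 0.0)            # forward-propagated hidden values
p = h @ W2
residual = p - y
loss = 0.5 * np.mean(residual ** 2)

# Backward-propagated derivative of the loss w.r.t. each hidden variable.
dL_dp = residual / X.shape[0]
dL_dh = dL_dp @ W2.T

# First-order approximation of each hidden variable's loss contribution:
# forward value times backward derivative, as sketched in the abstract.
contrib = h * dL_dh               # shape (8, 6): per item, per hidden unit

# Aggregate contributions per data item and rank items by magnitude,
# so the most "difficult" items surface first.
per_item = np.abs(contrib).sum(axis=1)
hardest = np.argsort(per_item)[::-1]
print("items ranked by estimated loss contribution:", hardest)
```

Both `h` and `dL_dh` already exist during ordinary training, which is why the abstract can claim the estimate comes at no extra cost; the same per-unit product, mapped back onto image regions, underlies the CNN extension.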


Files in this item


This item appears in the following Collection(s)
