CDF-Based Importance Sampling and Visualization for Neural Network Training

Authors: Knutsson, Alex; Unnebäck, Jakob; Jönsson, Daniel; Eilertsen, Gabriel
Editors: Hansen, Christian; Procter, James; Raidou, Renata G.; Jönsson, Daniel; Höllt, Thomas
Date issued: 2023-09-19
Year: 2023
ISBN: 978-3-03868-216-5
ISSN: 2070-5786
DOI: https://doi.org/10.2312/vcbm.20231212
URI: https://diglib.eg.org:443/handle/10.2312/vcbm20231212
Pages: 51-55 (5 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Neural networks; Human-centered computing -> Visualization techniques
Keywords: Computing methodologies; Neural networks; Human-centered computing; Visualization techniques

Abstract: Training a deep neural network is computationally expensive, but the same network performance can be achieved with less computation if the training data is carefully chosen. However, selecting input samples during training is challenging because their true importance for the optimization is unknown. Furthermore, evaluating the importance of individual samples must be computationally efficient and unbiased. In this paper, we present a new input data importance sampling strategy for reducing the training time of deep neural networks. We investigate importance metrics that can be retrieved efficiently because they are already available during training, i.e., the training loss and gradient norm. We found that choosing only samples with a large loss or gradient norm, which are hard for the network to learn, is not optimal for network performance. Instead, we introduce an importance sampling strategy that selects samples based on the cumulative distribution function of the loss and gradient norm, making it more likely to choose hard samples while still including easy ones. The behavior of the proposed strategy is first analyzed on a synthetic dataset and then evaluated in the application of classifying malignant cancer in digital pathology image patches. As pathology images contain many repetitive patterns, there could be significant gains in focusing on the features that contribute most strongly to the optimization. Finally, we show how the importance sampling process can be used to gain insights about the input data through visualization of the samples found most or least useful for training.
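
The abstract describes selecting training samples via the cumulative distribution function of a per-sample importance metric such as the loss or gradient norm. Below is a minimal sketch of that idea, assuming per-sample losses from a previous pass are available; the function names and the specific mapping from empirical CDF values to sampling probabilities are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def cdf_sampling_probabilities(metric):
    """Map a per-sample importance metric (e.g. training loss or gradient
    norm) to sampling probabilities via its empirical CDF.

    Samples with a larger metric get a larger CDF value and are therefore
    more likely to be drawn, while small-metric ("easy") samples keep a
    non-zero probability.
    """
    ranks = np.argsort(np.argsort(metric))      # rank of each sample, 0..n-1
    cdf = (ranks + 1) / len(metric)             # empirical CDF values in (0, 1]
    return cdf / cdf.sum()                      # normalize into probabilities

def sample_batch_indices(metric, batch_size, rng=None):
    """Draw a mini-batch of dataset indices using the CDF-based weights."""
    rng = np.random.default_rng() if rng is None else rng
    probs = cdf_sampling_probabilities(np.asarray(metric, dtype=float))
    return rng.choice(len(probs), size=batch_size, replace=False, p=probs)

# Example: losses recorded for a small dataset during the previous epoch
losses = [0.05, 0.10, 0.90, 1.70, 0.30, 2.40]
print(sample_batch_indices(losses, batch_size=3, rng=np.random.default_rng(0)))

In this sketch, hard samples (large loss) dominate the drawn batches, but easy samples are still selected occasionally, which matches the paper's motivation for preferring CDF-based weighting over simply picking the largest-loss samples.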