Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts

Date
2021
Publisher
The Eurographics Association
Abstract
Explaining the predictions of a deep neural network (DNN) in image classification is an active area of research. Many methods focus on localizing pixels, or groups of pixels, which maximize a relevance metric for the prediction. Others build local "proxy" explainers that account for an individual prediction of a model. We aim to explore "why" a model made a prediction by perturbing inputs to robust classifiers and interpreting the semantically meaningful results. For such an explanation to be useful for humans it is desirable for it to be sparse; however, generating sparse perturbations can be computationally expensive and infeasible on high-resolution data. Here we introduce controllably sparse explanations that can be efficiently generated on higher-resolution data to provide improved counterfactual explanations. Further, we use these controllably sparse explanations to probe what the robust classifier has learned. These explanations could provide insight for model developers as well as assist in detecting dataset bias.
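The sparsity mechanism alluded to in the abstract can be illustrated with a toy sketch: taking gradient steps toward a target-class score while soft-thresholding the perturbation keeps most of its components at exactly zero, and the threshold controls how sparse the result is. This is a minimal illustration, not the paper's method — the linear score, the function name `sparse_counterfactual`, and all parameter values are assumptions for demonstration; the paper applies the idea to robust DNN classifiers on images.

```python
import numpy as np

def sparse_counterfactual(x, w_target, steps=100, lr=0.1, threshold=0.05):
    """Toy sketch (not the paper's method): proximal gradient ascent on a
    linear target-class score, with soft-thresholding so that only input
    components with a sufficiently large gradient are perturbed at all."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        # gradient of the linear score w_target . (x + delta) w.r.t. delta
        delta = delta + lr * w_target
        # soft-threshold: shrink every component toward zero; components whose
        # per-step gain never exceeds `threshold` stay exactly zero (sparsity)
        delta = np.sign(delta) * np.maximum(np.abs(delta) - threshold, 0.0)
    return delta

# Only the two components with large gradient magnitude end up nonzero.
d = sparse_counterfactual(np.zeros(5), np.array([1.0, 0.01, 0.02, 0.0, -0.8]))
print(np.count_nonzero(d))  # sparse perturbation: 2 of 5 components nonzero
```

Raising `threshold` zeroes out more components, which is the sense in which the sparsity of such a perturbation is controllable.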
@inproceedings{10.2312:mlvis.20211072,
  booktitle = {Machine Learning Methods in Visualisation for Big Data},
  editor    = {Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko},
  title     = {{Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts}},
  author    = {Roberts, Jay and Tsiligkaridis, Theodoros},
  year      = {2021},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-146-5},
  DOI       = {10.2312/mlvis.20211072}
}