Title: Decision Boundary Visualization for Counterfactual Reasoning
Authors: Sohns, Jan-Tobias; Garth, Christoph; Leitte, Heike
Editors: Hauser, Helwig; Alliez, Pierre
Date issued: 2023-03-22
ISSN: 1467-8659
DOI: 10.1111/cgf.14650 (https://doi.org/10.1111/cgf.14650)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14650
Pages: 7-20
License: Attribution 4.0 International License (CC BY 4.0)
Keywords: visual model evaluation; machine learning explanation; inverse multi-dimensional projection

Abstract:
Machine learning algorithms are widely applied to create powerful prediction models. With increasingly complex models, humans' ability to understand the decision function (which maps from a high-dimensional input space) is quickly exceeded. To explain a model's decisions, black-box methods have been proposed that provide either non-linear maps of the global topology of the decision boundary, or samples that allow approximating it locally. The former loses information about distances in input space, while the latter only provides statements about given samples and lacks a focus on the underlying model needed for precise 'What-If' reasoning. In this paper, we integrate both approaches and propose an interactive exploration method using local linear maps of the decision space. We create the maps on hyperplanes, i.e., 2D slices of the high-dimensional parameter space, based on statistical and personal feature mutability and guided by feature importance. We complement the proposed workflow with established model inspection techniques to provide orientation and guidance. We demonstrate our approach on real-world datasets and illustrate that it allows identification of instance-based decision boundary structures and can answer multi-dimensional 'What-If' questions, thereby identifying counterfactual scenarios visually.
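To make the slicing idea in the abstract concrete, the following is a minimal sketch (not the paper's implementation) of evaluating a trained classifier on a 2D hyperplane through an instance of interest: all features are held at the instance's values except two, which span the slice. The dataset, model, and feature indices (0 and 7) are arbitrary placeholders; the paper's mutability weighting, feature-importance guidance, and local linear maps are omitted here.

```python
# Sketch: render a classifier's decision boundary on a 2D slice of a
# high-dimensional input space (assumed setup, not the authors' code).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]   # instance of interest whose neighborhood we explore
i, j = 0, 7       # two features spanning the slice (placeholder choice)

# Sample a grid on the hyperplane through `instance`, varying only features i and j.
fi = np.linspace(X[:, i].min(), X[:, i].max(), 100)
fj = np.linspace(X[:, j].min(), X[:, j].max(), 100)
gi, gj = np.meshgrid(fi, fj)
grid = np.tile(instance, (gi.size, 1))
grid[:, i] = gi.ravel()
grid[:, j] = gj.ravel()

# Predicted class probability over the slice; the 0.5 contour is the local
# decision boundary, and regions across it are counterfactual scenarios.
proba = model.predict_proba(grid)[:, 1].reshape(gi.shape)
plt.contourf(gi, gj, proba, levels=20, cmap="RdBu")
plt.contour(gi, gj, proba, levels=[0.5], colors="k")
plt.scatter(instance[i], instance[j], c="yellow", edgecolors="k")
plt.xlabel(f"feature {i}")
plt.ylabel(f"feature {j}")
plt.show()
```

Reading the plot, any grid point on the far side of the black contour answers a two-dimensional 'What-If' question for this instance: changing features i and j to those values would flip the prediction.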