Title: ModelSpeX: Model Specification Using Explainable Artificial Intelligence Methods
Authors: Schlegel, Udo; Cakmak, Eren; Keim, Daniel A.
Editors: Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko
Date issued/available: 2020-05-24
Year: 2020
ISBN: 978-3-03868-113-7
DOI: 10.2312/mlvis.20201100 (https://doi.org/10.2312/mlvis.20201100)
Handle: https://diglib.eg.org:443/handle/10.2312/mlvis20201100
Pages: 7-11
CCS Concepts: Computing methodologies → Artificial intelligence; Human-centered computing → HCI theory, concepts and models

Abstract: Explainable artificial intelligence (XAI) methods aim to reveal the non-transparent decision-making mechanisms of black-box models. Evaluating the insight generated by such XAI methods remains challenging because the applied techniques depend on many factors (e.g., parameters and human interpretation). We propose ModelSpeX, a visual analytics workflow for interactively extracting human-centered rule sets to generate model specifications from black-box models (e.g., neural networks). The workflow enables analysts to reason about the underlying problem, to extract decision rule sets, and to evaluate the suitability of the model for a particular task. An exemplary usage scenario walks an analyst through the steps of the workflow to demonstrate its applicability.
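
The abstract mentions extracting decision rule sets from black-box models. As a minimal illustrative sketch only, and not the ModelSpeX workflow itself, one common way to obtain such rules is to fit an interpretable surrogate (here a shallow decision tree, an assumed technique) to a black-box classifier's predictions and read off the resulting rules; the dataset, model choices, and parameters below are assumptions for illustration.

    # Illustrative sketch: approximate a black-box classifier with a surrogate
    # decision tree and print its rules as a starting point for a specification.
    # Dataset and models are assumptions, not the paper's ModelSpeX workflow.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # Black-box model whose decision behavior we want to describe.
    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    black_box.fit(X, y)

    # Surrogate tree fit on the black-box predictions (not the true labels),
    # so its rules approximate the black-box decision boundaries.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Human-readable rule set.
    print(export_text(surrogate, feature_names=list(data.feature_names)))

In a human-centered workflow such as the one the abstract describes, rules like these would then be inspected, refined, and evaluated interactively rather than accepted as-is.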