ModelSpeX: Model Specification Using Explainable Artificial Intelligence Methods

dc.contributor.author: Schlegel, Udo
dc.contributor.author: Cakmak, Eren
dc.contributor.author: Keim, Daniel A.
dc.contributor.editor: Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko
dc.date.accessioned: 2020-05-24T13:27:42Z
dc.date.available: 2020-05-24T13:27:42Z
dc.date.issued: 2020
dc.description.abstract: Explainable artificial intelligence (XAI) methods aim to reveal the non-transparent decision-making mechanisms of black-box models. Evaluating the insights generated by such XAI methods remains challenging, as the applied techniques depend on many factors (e.g., parameters and human interpretation). We propose ModelSpeX, a visual analytics workflow to interactively extract human-centered rule sets to generate model specifications from black-box models (e.g., neural networks). The workflow enables analysts to reason about the underlying problem, to extract decision rule sets, and to evaluate the suitability of the model for a particular task. An exemplary usage scenario walks an analyst through the steps of the workflow to demonstrate its applicability.
dc.description.sectionheaders: Papers
dc.description.seriesinformation: Machine Learning Methods in Visualisation for Big Data
dc.identifier.doi: 10.2312/mlvis.20201100
dc.identifier.isbn: 978-3-03868-113-7
dc.identifier.pages: 7-11
dc.identifier.uri: https://doi.org/10.2312/mlvis.20201100
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/mlvis20201100
dc.publisher: The Eurographics Association
dc.subject: Computing methodologies
dc.subject: Artificial intelligence
dc.subject: Human-centered computing
dc.subject: HCI theory, concepts and models
dc.title: ModelSpeX: Model Specification Using Explainable Artificial Intelligence Methods
Files
Original bundle
Name: 007-011.pdf
Size: 405.3 KB
Format: Adobe Portable Document Format
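
Note: The abstract above describes extracting decision rule sets from a black-box model such as a neural network. As a rough illustration of that general idea only (not the ModelSpeX workflow itself, which is described in the paper), the minimal Python sketch below fits a surrogate decision tree to the predictions of a black-box classifier and exports the tree's decision paths as a textual rule set. The dataset, models, and parameters are placeholder assumptions chosen purely for demonstration.

    # Illustrative sketch of rule-set extraction from a black-box model via a
    # surrogate decision tree. NOT the authors' implementation; all choices
    # below (dataset, models, depth) are assumptions for demonstration.
    from sklearn.datasets import load_breast_cancer
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X, y = data.data, data.target

    # Train a black-box model (here a small neural network).
    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    black_box.fit(X, y)

    # Fit an interpretable surrogate on the black-box predictions, then export
    # its decision paths as a human-readable rule set.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    print(export_text(surrogate, feature_names=list(data.feature_names)))

Running the sketch prints nested if/else conditions over the input features, i.e. the kind of human-readable specification of model behavior that the abstract refers to; the paper's workflow additionally supports interactive refinement and evaluation of such rules.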