Visual Interpretation of Tagging: Advancing Understanding in Task-Oriented Dialogue Systems

Authors: Zhou, Yazhuo; Xing, Yiwen; Abdul-Rahman, Alfie; Borgo, Rita; Hunter, David; Slingsby, Aidan
Date: 2024-09-09
ISBN: 978-3-03868-249-3
DOI: https://doi.org/10.2312/cgvc.20241236
Handle: https://diglib.eg.org/handle/10.2312/cgvc20241236
Pages: 9

Abstract: In task-oriented dialogue systems, tagging tasks leverage Large Language Models (LLMs) to understand dialogue semantics. How these models capture and use dialogue semantics for decision-making remains unclear. Unlike binary or multi-class classification, tagging involves complex many-to-many relationships between features and predictions, which complicates attribution analyses. To address these challenges, we introduce a novel interactive visualization system that enhances understanding of dialogue semantics through attribution analysis. Our system offers a multi-level, layer-wise visualization framework that reveals how attributions evolve across layers and allows users to probe them interactively. A dual view streamlines comparisons, letting users compare different LLMs effectively. We demonstrate the system's effectiveness on a common task-oriented dialogue task: slot filling. This tool helps NLP experts understand attributions, diagnose models, and advance the development of dialogue understanding by identifying potential sources of model hallucinations.

License: Attribution 4.0 International License (CC BY 4.0)

CCS Concepts: Human-centered computing → Visual analytics