Visual Exploration of Indirect Bias in Language Models

Authors: Louis-Alexandre, Judith; Waldner, Manuela
Editors: Hoellt, Thomas; Aigner, Wolfgang; Wang, Bei
Date: 2023-06-10
ISBN: 978-3-03868-219-6
DOI: https://doi.org/10.2312/evs.20231034
Handle: https://diglib.eg.org:443/handle/10.2312/evs20231034
Pages: 1-5 (5 pages)
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Human-centered computing -> Visual analytics; Computing methodologies -> Natural language processing
Keywords: Human centered computing; Visual analytics; Computing methodologies; Natural language processing

Abstract: Language models are trained on large text corpora that often include stereotypes. This can lead to direct or indirect bias in downstream applications. In this work, we present a method for interactive visual exploration of indirect multiclass bias learned by contextual word embeddings. We introduce a new indirect bias quantification score and, based on this score, present two interactive visualizations to explore interactions between multiple non-sensitive concepts (such as sports, occupations, and beverages) and sensitive attributes (such as gender or year of birth).
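To make the idea of quantifying concept-attribute associations in contextual embeddings concrete, below is a minimal sketch of a generic WEAT-style cosine-similarity measure. It is NOT the indirect multiclass bias score introduced in the paper; the model choice (bert-base-uncased), the template sentence, the mean-pooling strategy, and the word lists are all illustrative assumptions.

    # Hypothetical sketch: association of non-sensitive concept words
    # with two sensitive-attribute word sets, via cosine similarity of
    # contextual embeddings. Not the paper's actual score.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def embed(word: str) -> torch.Tensor:
        """Mean-pool the last hidden states of `word` placed in a
        neutral template sentence (an assumed design choice)."""
        inputs = tokenizer(f"This is about {word}.", return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
        return hidden.mean(dim=0)

    def association(concept_words, attr_a, attr_b) -> float:
        """Mean cosine-similarity difference of concept words toward
        attribute set A vs. set B (positive => leans toward A)."""
        cos = torch.nn.functional.cosine_similarity
        a = torch.stack([embed(w) for w in attr_a]).mean(dim=0)
        b = torch.stack([embed(w) for w in attr_b]).mean(dim=0)
        scores = [cos(embed(w), a, dim=0) - cos(embed(w), b, dim=0)
                  for w in concept_words]
        return float(torch.stack(scores).mean())

    # Illustrative word lists (assumptions, not taken from the paper):
    print(association(["football", "ballet", "boxing"],
                      attr_a=["he", "man", "male"],
                      attr_b=["she", "woman", "female"]))

Such a pairwise measure handles one binary attribute at a time; the paper's contribution is a score for indirect *multiclass* bias together with two interactive visualizations built on top of it, which this sketch does not attempt to reproduce.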