Title: Integrating Layer-Wise Relevance Propagation with Stable Diffusion for Enhanced Interpretability
Authors: Auman, Christian; Bhati, Deepshikha; Arquilla, Kyle; Neha, Fnu; Guercio, Angela
Editors: Schulz, Hans-Jörg; Villanova, Anna
Date issued: 2025
Date accessioned: 2025-05-26
Date available: 2025-05-26
ISBN: 978-3-03868-283-7
ISSN: 2664-4487
DOI: https://doi.org/10.2312/eurova.20251102
URI: https://diglib.eg.org/handle/10.2312/eurova20251102
Pages: 6
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Generative AI; Diffusion Models; Stable Diffusion; Layer-wise Relevance Propagation; AI Transparency; Human-centered computing → Visual analytics

Abstract: Diffusion-based generative models, such as Stable Diffusion and DALL-E, have revolutionized artificial intelligence by enabling high-quality image generation from textual descriptions. Despite their success, these models raise ethical concerns, such as style appropriation and misuse, that are closely tied to the interpretability and transparency of their underlying mechanisms. This paper introduces a framework integrating Layer-wise Relevance Propagation (LRP) into the Stable Diffusion model to enhance interpretability. LRP assigns relevance scores to specific elements of textual prompts, allowing users to understand and visualize how input text influences image generation. We also present an interactive web-based visualization tool that supports intuitive exploration of diffusion processes. By improving interpretability, this approach fosters responsible use of generative AI technologies. A user study involving 35 participants demonstrates the tool's accessibility and effectiveness.