Title: Towards a Software Framework for Evaluating the Visualization Literacy of Large Language Models
Authors: Jobst, Adrian; Atzberger, Daniel; Scheibel, Willy; Döllner, Jürgen
Editors: Diehl, Alexandra; Kucher, Kostiantyn; Médoc, Nicolas
Date: 2025-05-26
ISBN: 978-3-03868-286-8
DOI: https://doi.org/10.2312/evp.20251134
URI: https://diglib.eg.org/handle/10.2312/evp20251134
Pages: 3
License: Attribution 4.0 International License

Abstract: Large Language Models (LLMs) are increasingly integrated into Natural Language Interfaces (NLIs) for visualizations, enabling users to inquire about visualizations through natural language. This work introduces a software framework for evaluating LLMs' visualization literacy, i.e., their ability to interpret and answer questions about visualizations. Our framework generates a set of data points across different LLMs, prompts, and question types, allowing for in-depth analysis. We demonstrate its utility in two experiments, examining the impact of the temperature parameter and predefined answer choices.

CCS Concepts: Human-centered computing → Visualization systems and tools; Information visualization; Natural language interfaces; Accessibility technologies; Visualization theory, concepts and paradigms; User centered design