Title: Toward a Structured Theoretical Framework for the Evaluation of Generative AI-based Visualizations
Authors: Podo, Luca; Ishmal, Muhammad; Angelini, Marco
Editors: El-Assady, Mennatallah; Schulz, Hans-Jörg
Date: 2024-05-21
ISBN: 978-3-03868-253-0
DOI: https://doi.org/10.2312/eurova.20241118
URL: https://diglib.eg.org/handle/10.2312/eurova20241118
Pages: 6
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Human-centered computing → Visualization design and evaluation methods

Abstract: The automatic generation of visualizations is a long-standing task that, over the years, has attracted increasing interest from the research and practitioner communities. Recently, large language models (LLMs) have become an interesting option for supporting generative tasks related to visualization, demonstrating promising initial results. At the same time, several pitfalls, such as the multiple ways of instructing an LLM to generate the desired result, the different perspectives guiding the generation (code-based, image-based, grammar-based), and the presence of hallucinations even in the visualization generation task, make their usage less straightforward than expected. Following similar initiatives for benchmarking LLMs, this paper explores the problem of modeling the evaluation of a visualization generated by an LLM. We propose a theoretical evaluation stack, EvaLLM, that decomposes the evaluation effort into its atomic components, characterizes their nature, and provides an overview of how to implement them. A use case on the Llama2-70-b model shows the benefits of EvaLLM and illustrates interesting results on the current state of the art in LLM-generated visualizations. The materials are available at this GitHub repository: https://github.com/lucapodo/evallm_llama2_70b.git