Title: More Than Chatting: Conversational LLMs for Enhancing Data Visualization Competencies
Authors: Ahmad, Mak; Ma, Kwan-Liu; Firat, Elif E.; Laramee, Robert S.; Andersen, Nicklas Sindelv
Date: 2024-05-21
ISBN: 978-3-03868-257-8
DOI: https://doi.org/10.2312/eved.20241056
URL: https://diglib.eg.org/handle/10.2312/eved20241056
Pages: 9
License: Attribution 4.0 International License
CCS Concepts: Human-centered computing → Empirical studies in visualization; Human-centered computing → Empirical studies in HCI

Abstract: This study investigates the integration of Large Language Models (LLMs) such as ChatGPT and Claude into data visualization courses to enhance visualization literacy among computer science students. Through a structured 3-week workshop involving 30 graduate students, we examine the effects of LLM-assisted conversational prompting on students' visualization skills and confidence. Our findings reveal that while engagement and confidence levels increased significantly, improvements in actual visualization proficiency were modest. Our study underscores the importance of prompt engineering skills in maximizing the educational value of LLMs and offers evidence-based insights for software engineering educators on effectively leveraging conversational AI. This research contributes to the ongoing discussion on incorporating AI tools in education, providing a foundation for future ethical and effective LLM integration strategies.