Authors: Mei, Liwen; Guan, Manhao; Zheng, Yifan; Zhang, Dongliang
Editors: Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene
Date: 2025-10-07
ISBN: 978-3-03868-295-0
DOI: https://doi.org/10.2312/pg.20251298
Handle: https://diglib.eg.org/handle/10.2312/pg20251298
Title: CoSketcher: Collaborative and Iterative Sketch Generation with LLMs under Linguistic and Spatial Control
Pages: 12
License: Attribution 4.0 International License
CCS Concepts: Human-centered computing → Human computer interaction (HCI)

Abstract: Sketching serves as both a medium for visualizing ideas and a process for creative iteration. While early neural sketch generation methods rely on category-specific data and lack generalization and iteration capability, recent advances in Large Language Models (LLMs) have opened new possibilities for more flexible and semantically guided sketching. In this work, we present CoSketcher, a controllable and iterative sketch generation system that leverages the prior knowledge and textual reasoning abilities of LLMs to align with the creative iteration process of human sketching. CoSketcher introduces a novel XML-style sketch language that represents stroke-level information in a structured format, enabling the LLM to plan and generate complex sketches under both linguistic and spatial control. The system supports visually appealing sketch construction, including skeleton-contour decomposition for volumetric shapes and layout-aware reasoning for object relationships. Through extensive evaluation, we demonstrate that our method generates expressive sketches across both in-distribution and out-of-distribution categories, while also supporting scene-level composition and controllable iteration. Our method establishes a new paradigm for controllable sketch generation using off-the-shelf LLMs, with broad implications for creative human-AI collaboration.
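The abstract describes an XML-style sketch language carrying stroke-level information that an LLM can emit and a renderer can consume. A minimal Python sketch of what parsing such a representation might look like is shown below; the element names (`sketch`, `object`, `stroke`), attributes (`role`, `points`), and the skeleton/contour roles are assumptions invented for illustration, not the paper's actual schema.

```python
# Hypothetical illustration of an XML-style, stroke-level sketch language
# in the spirit of the one the abstract describes. All element and
# attribute names here are invented for this example.
import xml.etree.ElementTree as ET

sketch_xml = """
<sketch canvas="256x256">
  <object name="cat" bbox="40,40,200,220">
    <stroke role="skeleton" points="120,60 120,180"/>
    <stroke role="contour" points="80,60 120,30 160,60 160,200 80,200 80,60"/>
  </object>
</sketch>
"""

def parse_strokes(xml_text):
    """Return a list of (object_name, role, point_list) tuples."""
    root = ET.fromstring(xml_text)
    strokes = []
    for obj in root.iter("object"):
        for stroke in obj.iter("stroke"):
            # Each point is an "x,y" pair; pairs are space-separated.
            points = [tuple(map(int, p.split(",")))
                      for p in stroke.attrib["points"].split()]
            strokes.append((obj.attrib["name"], stroke.attrib["role"], points))
    return strokes

strokes = parse_strokes(sketch_xml)
```

A structured format like this is what lets the LLM reason over strokes (e.g. separating skeleton from contour, or placing objects by bounding box) while remaining plain text that the model can generate and revise iteratively.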