Title: Grid Labeling: Crowdsourcing Task-Specific Importance from Visualizations
Authors: Chang, Minsuk; Wang, Yao; Wang, Huichen Will; Bulling, Andreas; Bearfield, Cindy Xiong
Editors: El-Assady, Mennatallah; Ottley, Alvitta; Tominski, Christian
Date: 2025-05-26
ISBN: 978-3-03868-282-0
DOI: https://doi.org/10.2312/evs.20251092
Handle: https://diglib.eg.org/handle/10.2312/evs20251092
Pages: 5
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Human-centered computing → Visualization techniques; Empirical studies in visualization

Abstract: Knowing where people look in visualizations is key to effective design. Yet, existing research primarily focuses on free-viewing-based saliency models, although visual attention is inherently task-dependent. Collecting task-relevant importance data remains a resource-intensive challenge. To address this, we introduce Grid Labeling, a novel annotation method for collecting task-specific importance data to enhance saliency prediction models. Grid Labeling dynamically segments visualizations into Adaptive Grids, enabling efficient, low-effort annotation while adapting to visualization structure. We conducted a human-subject study comparing Grid Labeling with two existing annotation methods, ImportAnnots and BubbleView, across multiple metrics. Results show that Grid Labeling produces the least noisy data and the highest inter-participant agreement with fewer participants, while requiring less physical effort (e.g., clicks and mouse movements) and cognitive effort. An interactive demo and the accompanying dataset are available at https://github.com/jangsus1/Grid-Labeling.