Title: Inferential Tasks as an Evaluation Technique for Visualization
Authors: Suh, Ashley; Mosca, Ab; Robinson, Shannon; Pham, Quinn; Cashman, Dylan; Ottley, Alvitta; Chang, Remco
Editors: Agus, Marco; Aigner, Wolfgang; Hoellt, Thomas
Date: 2022-06-02
ISBN: 978-3-03868-184-7
DOI: https://doi.org/10.2312/evs.20221086
URI: https://diglib.eg.org:443/handle/10.2312/evs20221086
Pages: 13-17 (5 pages)
License: Attribution 4.0 International License

Abstract: Designing suitable tasks for visualization evaluation remains challenging. Traditional evaluation techniques commonly rely on 'low-level' or 'open-ended' tasks to assess the efficacy of a proposed visualization; however, nontrivial trade-offs exist between the two. Low-level tasks allow for robust quantitative evaluations, but are not indicative of the complex usage of a visualization. Open-ended tasks, while excellent for insight-based evaluations, are typically unstructured and require time-consuming interviews. Bridging this gap, we propose inferential tasks: a complementary task category based on inferential learning in psychology. Inferential tasks produce quantitative evaluation data by prompting users to form and validate their own findings with a visualization. We demonstrate the use of inferential tasks through a validation experiment on two well-known visualization tools.

CCS Concepts: Human-centered computing → Information visualization; Visualization design and evaluation methods