A Parser-based Tool to Assist Instructors in Grading Computer Graphics Assignments

Authors: Andujar, Carlos; Raluca Vijulie, Cristina; Vinacua, Alvar
Editors: Tarini, Marco; Galin, Eric
Date: 2019-05-05
ISSN: 1017-4656
DOI: https://doi.org/10.2312/eged.20191025
URL: https://diglib.eg.org:443/handle/10.2312/eged20191025
Pages: 21-28

Abstract: Although online e-learning environments are increasingly used in university courses, manual assessment still dominates the way students are graded. Interactive judges that provide a pass/fail verdict based on test sets are valuable tools for both learning and assessment, but they still rely on human review of the code for output-independent issues such as readability and efficiency. In this paper we present a tool to assist instructors in grading programming exercises in Computer Graphics (CG) courses. In contrast to other grading solutions, assessment is based both on checking the output against test sets and on a set of instructor-defined rubrics evaluated through syntax analysis of the source code. Our current prototype runs in Python and supports the assessment of shaders written in GLSL. We tested the tool in a CG course involving more than one hundred Computer Science students per year. Our first experiments show that the tool can support both self-assessment and grading, and can also detect grading mistakes through anomaly detection techniques based on features extracted from the syntax analysis.
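The record itself contains no code, but as a rough illustration of what instructor-defined rubric checks driven by lightweight syntax analysis of GLSL source could look like in Python, here is a minimal sketch. The token-based predicates, the RUBRIC list, and the example shader are hypothetical and are not taken from the paper; the authors' prototype relies on a proper parser rather than the regular-expression tokenization used here.

```python
import re

# Hypothetical rubric checks over a GLSL fragment shader given as a string.
# Illustrative only; not the rubric set or parser described in the paper.

TOKEN_RE = re.compile(r"[A-Za-z_]\w*")

def tokens(src):
    """Return identifier/keyword tokens, ignoring comments."""
    src = re.sub(r"//.*", "", src)                   # strip line comments
    src = re.sub(r"/\*.*?\*/", "", src, flags=re.S)  # strip block comments
    return TOKEN_RE.findall(src)

def uses_loop(src):
    return any(t in ("for", "while") for t in tokens(src))

def texture_lookup_count(src):
    return sum(t in ("texture", "texture2D") for t in tokens(src))

def declares_uniform(src, name):
    return re.search(r"\buniform\b[^;]*\b" + re.escape(name) + r"\b", src) is not None

# Example instructor-defined rubric: each item pairs a description with a predicate.
RUBRIC = [
    ("single texture fetch", lambda s: texture_lookup_count(s) == 1),
    ("no loops in the fragment shader", lambda s: not uses_loop(s)),
    ("declares a 'time' uniform", lambda s: declares_uniform(s, "time")),
]

def grade(src):
    """Evaluate every rubric item and return a pass/fail map."""
    return {item: check(src) for item, check in RUBRIC}

if __name__ == "__main__":
    shader = """
    uniform float time;
    uniform sampler2D tex;
    varying vec2 uv;
    void main() {
        vec4 c = texture2D(tex, uv + vec2(sin(time), 0.0));
        gl_FragColor = c;
    }
    """
    for item, passed in grade(shader).items():
        print(("PASS" if passed else "FAIL"), "-", item)
```

The per-item booleans produced by such checks (or counts derived from them) could also serve as the kind of syntax-level features the abstract mentions feeding into anomaly detection to flag likely grading mistakes.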