Title: Teaching Data-driven Video Processing via Crowdsourced Data Collection
Authors: Reimann, Max; Wegen, Ole; Pasewaldt, Sebastian; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias
Editors: Sousa Santos, Beatriz; Domik, Gitta
Date issued: 2021-04-09
ISBN: 978-3-03868-132-8
ISSN: 1017-4656
DOI: https://doi.org/10.2312/eged.20211000
Handle: https://diglib.eg.org:443/handle/10.2312/eged20211000
Pages: 1-8
CCS Concepts: Social and professional topics → Computing education; Information systems → Crowdsourcing

Abstract: This paper presents the concept and experience of teaching an undergraduate course on data-driven image and video processing. When designing visual effects that use Machine Learning (ML) models for image-based analysis or processing, the availability of training data typically represents a key limitation for feasibility and effect quality. The goal of our course is to enable students to implement new kinds of visual effects by acquiring training datasets via crowdsourcing, which are then used to train ML models as part of a video processing pipeline. First, we propose our course structure and best practices for crowdsourced data acquisition. We then discuss the key insights gathered from an exceptional undergraduate seminar project that tackles the challenging domain of video annotation and learning. In particular, we focus on how to practically develop annotation tools and collect high-quality datasets using Amazon Mechanical Turk (MTurk) in a budget- and time-constrained classroom environment. We observe that implementing the full acquisition and learning pipeline is entirely feasible for a seminar project, imparts hands-on problem-solving skills, and promotes undergraduate research.