Title: How Good is Good Enough? The Challenge of Evaluating Subjective Quality of AI-Edited Video Coverage of Live Events
Authors: Radut, Miruna; Evans, Michael; To, Kristie; Nooney, Tamsin; Phillipson, Graeme
Editors: Christie, Marc; Wu, Hui-Yin; Li, Tsai-Yen; Gandhi, Vineet
Date issued: 2020-05-24
Year: 2020
ISBN: 978-3-03868-127-4
ISSN: 2411-9733
DOI: https://doi.org/10.2312/wiced.20201127
URI: https://diglib.eg.org:443/handle/10.2312/wiced20201127
Pages: 17-24

Abstract: This paper reports on recent and ongoing work to develop empirical methods for assessing the subjective quality of artificial intelligence (AI)-produced multicamera video. We have developed a prototype software system that records panel performances, using a variety of didactic and machine learning techniques to intelligently crop and cut between feeds from an array of static, unmanned cameras. Evaluating the subjective quality of the software's decisions about when and to what to cut presents an important and interesting challenge, owing to the technical behaviour of the system, the large number of potential quality risks, and the need to mitigate content specificity.

Keywords: Human-centered computing → HCI design and evaluation methods → User studies; Computing methodologies → Intelligent agents