Show simple item record

dc.contributor.author: Radut, Miruna [en_US]
dc.contributor.author: Evans, Michael [en_US]
dc.contributor.author: To, Kristie [en_US]
dc.contributor.author: Nooney, Tamsin [en_US]
dc.contributor.author: Phillipson, Graeme [en_US]
dc.contributor.editor: Christie, Marc and Wu, Hui-Yin and Li, Tsai-Yen and Gandhi, Vineet [en_US]
dc.date.accessioned: 2020-05-24T13:14:08Z
dc.date.available: 2020-05-24T13:14:08Z
dc.date.issued: 2020
dc.identifier.isbn: 978-3-03868-127-4
dc.identifier.issn: 2411-9733
dc.identifier.uri: https://doi.org/10.2312/wiced.20201127
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/wiced20201127
dc.description.abstract: This paper reports on recent and ongoing work to develop empirical methods for assessment of the subjective quality of artificial intelligence (AI)-produced multicamera video. We have developed a prototype software system that records panel performances, using a variety of didactic and machine learning techniques to intelligently crop and cut between feeds from an array of static, unmanned cameras. Evaluating the subjective quality rendered by the software's decisions regarding when and to what to cut represents an important and interesting challenge, due to the technical behaviour of the system, the large number of potential quality risks, and the need to mitigate content specificity. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.subject: Human-centered computing
dc.subject: HCI design and evaluation methods
dc.subject: User studies
dc.subject: Computing methodologies
dc.subject: Intelligent agents
dc.title: How Good is Good Enough? The Challenge of Evaluating Subjective Quality of AI-Edited Video Coverage of Live Events [en_US]
dc.description.seriesinformation: Workshop on Intelligent Cinematography and Editing
dc.description.sectionheaders: Morning Session
dc.identifier.doi: 10.2312/wiced.20201127
dc.identifier.pages: 17-24