Title: Inferring the Structure of Action Movies
Authors: Potapov, Danila; Douze, Matthijs; Revaud, Jérôme; Harchaoui, Zaid; Schmid, Cordelia
Editors: William Bares, Vineet Gandhi, Quentin Galvane, Remi Ronfard
Date: 2017 (issued 2017-04-22)
ISBN: 978-3-03868-031-4
ISSN: 2411-9733
DOI: https://doi.org/10.2312/wiced.20171067
URI: https://diglib.eg.org:443/handle/10.2312/wiced20171067
Pages: 19-27
Subject: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding; Video analysis

Abstract: While important advances have recently been made toward temporally localizing and recognizing specific human actions or activities in videos, efficient detection and classification of long video chunks belonging to semantically defined categories remains challenging. Examples of such categories can be found in action movies, whose storylines often follow a standardized structure corresponding to a sequence of typical segments such as "pursuit", "romance", etc. We introduce a new dataset, Action Movie Franchises, consisting of a collection of Hollywood action movie franchises. We define 11 non-exclusive semantic categories that are broad enough to cover most of the movie footage. The corresponding events are annotated as groups of video shots, possibly overlapping. We propose an approach for localizing events based on classifying shots into categories and learning the temporal constraints between shots. We show that temporal constraints significantly improve the classification performance. We set up an evaluation protocol for event localization as well as for shot classification, depending on whether movies from the same franchise are present or not in the training data.
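The abstract's core idea, scoring each shot independently and then letting learned temporal constraints between consecutive shots override noisy per-shot decisions, can be illustrated with a Viterbi-style decoding. This is a hypothetical sketch, not the paper's actual model: the function name `decode_shots`, the toy scores, and the two-label setup are all invented for illustration.

```python
import numpy as np

def decode_shots(shot_scores, transition):
    """Viterbi decoding: choose a label per shot maximizing the sum of
    per-shot classifier scores plus pairwise transition scores between
    consecutive shots. shot_scores: (n_shots, n_labels); transition:
    (n_labels, n_labels) score for moving from label i to label j."""
    n_shots, n_labels = shot_scores.shape
    dp = np.empty((n_shots, n_labels))      # best score ending in each label
    back = np.zeros((n_shots, n_labels), dtype=int)  # backpointers
    dp[0] = shot_scores[0]
    for t in range(1, n_shots):
        # cand[i, j] = best score at t-1 in label i, plus i -> j transition
        cand = dp[t - 1][:, None] + transition
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0) + shot_scores[t]
    # Trace back the best path from the last shot.
    path = [int(dp[-1].argmax())]
    for t in range(n_shots - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: labels 0 = "pursuit", 1 = "romance". The middle shot's
# classifier slightly prefers "romance", but a transition score that
# rewards staying in the same category smooths it back to "pursuit".
scores = np.array([[2.0, 0.0], [0.4, 0.6], [2.0, 0.0]])
transition = np.array([[1.0, 0.0], [0.0, 1.0]])
print(decode_shots(scores, transition))        # temporally smoothed path
print(scores.argmax(axis=1).tolist())          # independent per-shot labels
```

Comparing the two printed sequences shows the effect the abstract reports: without temporal constraints the middle shot is misclassified; with them, the sequence-level decoding corrects it.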