
dc.contributor.author: Men, Xin
dc.contributor.author: Zhou, Feng
dc.contributor.author: Li, Xiaoyong
dc.contributor.editor: Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
dc.date.accessioned: 2018-10-07T14:32:27Z
dc.date.available: 2018-10-07T14:32:27Z
dc.date.issued: 2018
dc.identifier.isbn: 978-3-03868-073-4
dc.identifier.uri: https://doi.org/10.2312/pg.20181287
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/pg20181287
dc.description.abstract: In this paper, we propose a deep-neural-network-based method for content-based video retrieval. Our approach leverages a deep neural network to generate semantic information and introduces a graph-based storage structure to build video indices. We devise the Inception-Single Shot Multibox Detector (ISSD) to extract spatial semantic information (objects) and the RI3D model to extract temporal semantic information (actions). Our ISSD model achieves a mAP of 26.7% on the MS COCO dataset, an increase of 3.2% over the original SSD model, while the RI3D model achieves a top-1 accuracy of 97.7% on the UCF-101 dataset. We also use a graph structure to build the video index from the temporal and spatial semantic information. Our experimental results show that the deep-learned semantic information is highly effective for video indexing and retrieval.
dc.publisher: The Eurographics Association
dc.subject: Computing methodologies
dc.subject: Visual content-based indexing and retrieval
dc.title: A Deep Learned Method for Video Indexing and Retrieval
dc.description.seriesinformation: Pacific Graphics Short Papers
dc.description.sectionheaders: Visual Content Matching and Retrieval
dc.identifier.doi: 10.2312/pg.20181287
dc.identifier.pages: 85-88
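The abstract above describes building a video index as a graph that links videos to the objects and actions detected in them, with retrieval driven by those semantic labels. The paper does not give implementation details, so the following is only a minimal sketch of that idea: a bipartite label-to-video graph with query-by-intersection. All class and method names here are hypothetical, not from the paper.

```python
from collections import defaultdict


class SemanticVideoIndex:
    """Minimal graph-style video index (hypothetical sketch, not the paper's code).

    Label nodes (detected objects and actions) are linked by edges to the
    video nodes they were detected in; retrieval intersects the neighbor
    sets of the queried labels.
    """

    def __init__(self):
        self.label_to_videos = defaultdict(set)  # edges: label -> videos
        self.video_to_labels = defaultdict(set)  # edges: video -> labels

    def add_video(self, video_id, objects, actions):
        """Register one video with its spatial (objects) and temporal (actions) labels."""
        for label in list(objects) + list(actions):
            self.label_to_videos[label].add(video_id)
            self.video_to_labels[video_id].add(label)

    def query(self, labels):
        """Return the videos linked to every queried semantic label."""
        neighbor_sets = [self.label_to_videos[label] for label in labels]
        if not neighbor_sets:
            return set()
        return set.intersection(*neighbor_sets)


# Usage: index two clips, then retrieve by object + action labels.
index = SemanticVideoIndex()
index.add_video("clip_1", objects=["person", "dog"], actions=["running"])
index.add_video("clip_2", objects=["car"], actions=["driving"])
print(index.query(["person", "running"]))  # {'clip_1'}
```

In the paper the labels would come from the ISSD object detector and the RI3D action classifier; here they are supplied by hand purely to illustrate the graph lookup.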

