Use What You Have: Video Retrieval Using Representations From Collaborative Experts

31 Jul 2019 · Yang Liu, Samuel Albanie, Arsha Nagrani, Andrew Zisserman

The rapid growth of video on the internet has made searching for video content using natural language queries a significant challenge. Human-generated queries for video datasets 'in the wild' vary considerably in their degree of specificity: some queries describe specific details such as the names of famous identities, content from speech, or text visible on the screen. Our goal is to condense the multi-modal, extremely high dimensional information from videos into a single, compact video representation for the task of video retrieval using free-form text queries, where the degree of specificity is open-ended. To this end, we exploit existing knowledge in the form of pre-trained semantic embeddings, which include 'general' features such as motion, appearance, and scene features from visual content. We also explore the use of more 'specific' cues from ASR and OCR, which are only intermittently available for videos, and find that these signals remain challenging to use effectively for retrieval. We propose a collaborative experts model to aggregate information from these different pre-trained experts and assess our approach empirically on five retrieval benchmarks: MSR-VTT, LSMDC, MSVD, DiDeMo, and ActivityNet. Code and data can be found at www.robots.ox.ac.uk/~vgg/research/collaborative-experts/. This paper contains a correction to results reported in the previous version.
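
As a rough illustration of the aggregation idea described above, the PyTorch sketch below projects several pre-trained expert features into a shared space and gates each expert on a summary of the others before fusing them into one video embedding. The module names, dimensions, and exact gating form are illustrative assumptions, not the authors' released architecture (see the linked code for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertAggregator(nn.Module):
    """Sketch: fuse pre-trained expert features (e.g. motion, appearance,
    scene) into a single compact video embedding for retrieval.
    Dimensions and the gating form are illustrative assumptions."""

    def __init__(self, expert_dims, shared_dim=256):
        super().__init__()
        # Project each expert into a shared embedding space.
        self.projections = nn.ModuleList(
            [nn.Linear(d, shared_dim) for d in expert_dims]
        )
        # One gate per expert, conditioned on the remaining experts.
        self.gates = nn.ModuleList(
            [nn.Linear(shared_dim, shared_dim) for _ in expert_dims]
        )

    def forward(self, expert_feats):
        # expert_feats: list of (batch, dim_i) tensors, one per expert.
        projected = [proj(f) for proj, f in zip(self.projections, expert_feats)]
        context = torch.stack(projected, dim=0).sum(dim=0)  # summary over experts
        refined = []
        for gate, p in zip(self.gates, projected):
            # Gate each expert with a signal derived from the *other* experts,
            # so modalities can amplify or suppress one another.
            g = torch.sigmoid(gate(context - p))
            refined.append(F.normalize(p * g, dim=-1))
        # Concatenate the refined experts into one fixed-size video embedding.
        return F.normalize(torch.cat(refined, dim=-1), dim=-1)


# Toy usage with three hypothetical experts (appearance, motion, audio).
model = ExpertAggregator(expert_dims=[2048, 1024, 512])
feats = [torch.randn(4, d) for d in (2048, 1024, 512)]
video_emb = model(feats)  # shape: (4, 3 * 256)
```

At retrieval time, a text query embedded into the same space can then be ranked against these fused video embeddings, e.g. by cosine similarity.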

Results from the Paper


Task             Dataset      Model                  Metric                     Value  Global Rank
Video Retrieval  ActivityNet  Collaborative Experts  text-to-video R@1          20.5   #30
                                                     text-to-video R@5          47.7   #26
                                                     text-to-video R@10         63.9   #18
                                                     text-to-video R@50         91.4   #7
                                                     text-to-video Median Rank  6      #14
                                                     text-to-video Mean Rank    23.1   #14
Video Retrieval  DiDeMo       Collaborative Experts  text-to-video R@1          16.1   #38
                                                     text-to-video R@5          41.1   #35
                                                     text-to-video R@10         54.4   #35
                                                     text-to-video R@50         82.7   #1
                                                     text-to-video Median Rank  8.3    #21
                                                     text-to-video Mean Rank    43.7   #14
Video Retrieval  LSMDC        Collaborative Experts  text-to-video R@1          11.2   #32
                                                     text-to-video R@5          26.9   #27
                                                     text-to-video R@10         34.8   #26
                                                     text-to-video Median Rank  25     #16
Video Retrieval  MSR-VTT      Collaborative Experts  text-to-video R@1          10.0   #36
                                                     text-to-video R@5          29.0   #31
                                                     text-to-video R@10         41.2   #31
                                                     text-to-video Median Rank  16     #15
                                                     text-to-video Mean Rank    86.8   #6
                                                     video-to-text R@1          15.6   #11
                                                     video-to-text R@5          40.9   #9
                                                     video-to-text R@10         55.2   #8
                                                     video-to-text Median Rank  8.3    #5
                                                     video-to-text Mean Rank    38.1   #3
Video Retrieval  MSR-VTT-1kA  Collaborative Experts  text-to-video R@1          20.9   #53
                                                     text-to-video R@5          48.8   #51
                                                     text-to-video R@10         62.4   #54
                                                     text-to-video Median Rank  6      #34
                                                     text-to-video Mean Rank    28.2   #23
Video Retrieval  MSVD         Collaborative Experts  text-to-video R@1          19.8   #24
                                                     text-to-video R@5          49.0   #21
                                                     text-to-video R@10         63.8   #20
                                                     text-to-video R@50         89.0   #1
                                                     text-to-video Median Rank  6.0    #16
                                                     text-to-video Mean Rank    23.1   #15
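
For reference, the metrics above are standard retrieval measures: R@K is the percentage of queries whose ground-truth video appears in the top K ranked results, and Median/Mean Rank summarize the position at which the ground truth is retrieved (lower is better). A minimal NumPy sketch, assuming a square similarity matrix whose diagonal entries score the ground-truth pairs:

```python
import numpy as np


def retrieval_metrics(sim):
    """R@K, Median Rank and Mean Rank from a (num_queries, num_items)
    similarity matrix where sim[i, i] scores the ground-truth pair."""
    order = np.argsort(-sim, axis=1)            # item indices, best score first
    gt = np.arange(sim.shape[0])[:, None]
    ranks = np.argmax(order == gt, axis=1) + 1  # 1-indexed rank of ground truth
    metrics = {f"R@{k}": 100.0 * np.mean(ranks <= k) for k in (1, 5, 10, 50)}
    metrics["Median Rank"] = float(np.median(ranks))
    metrics["Mean Rank"] = float(np.mean(ranks))
    return metrics


# Toy example: 100 queries scored against 100 candidates.
rng = np.random.default_rng(0)
print(retrieval_metrics(rng.standard_normal((100, 100))))
```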

Methods


No methods are listed for this paper.