Learning video retrieval models with relevance-aware online mining

16 Mar 2022 · Alex Falcon, Giuseppe Serra, Oswald Lanz

Due to the amount of videos and related captions uploaded every hour, deep learning-based solutions for cross-modal video retrieval are attracting more and more attention. A typical approach consists of learning a joint text-video embedding space, where the similarity of a video and its associated caption is maximized, whereas a lower similarity is enforced with all the other captions, called negatives. This approach assumes that only the video-caption pairs in the dataset are valid, yet other captions (positives) may also describe a video's visual content, so some of them may be wrongly penalized. To address this shortcoming, we propose Relevance-Aware Negatives and Positives mining (RANP), which uses the semantics of the negatives to improve their selection while also increasing the similarity of other valid positives. We explore the influence of these techniques on two video-text datasets: EPIC-Kitchens-100 and MSR-VTT. By using the proposed techniques, we achieve considerable improvements in terms of nDCG and mAP, leading to state-of-the-art results, e.g. +5.3% nDCG and +3.0% mAP on EPIC-Kitchens-100. We share code and pretrained models at https://github.com/aranciokov/ranp.
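The core idea can be sketched as an online-mined triplet loss in which negatives are restricted to semantically irrelevant captions and highly relevant captions are pulled closer as additional positives. Below is a minimal PyTorch sketch of one direction (video-to-text) of such a loss; the function name, the `rel_threshold` parameter, and the hardest-negative / extra-positive mining strategy are illustrative assumptions rather than the authors' exact implementation (see the linked repository for that).

```python
# Minimal sketch of relevance-aware online mining for a text-video triplet loss.
# Assumptions (not the authors' code): hardest-negative mining, a single
# relevance threshold, and one retrieval direction (video-to-text).
import torch
import torch.nn.functional as F


def relevance_aware_triplet_loss(video_emb, text_emb, relevance,
                                 margin=0.2, rel_threshold=0.5):
    """video_emb, text_emb: (B, D) L2-normalized embeddings of paired items.
    relevance: (B, B) semantic relevance of caption j to video i in [0, 1]
    (e.g. derived from shared noun/verb classes), with relevance[i, i] == 1."""
    sim = video_emb @ text_emb.t()        # (B, B) cosine similarities
    pos_sim = sim.diag()                  # similarity of each annotated pair

    # Relevance-aware negatives: only captions with low semantic relevance
    # to a video may be mined as its (hardest) negatives.
    neg_mask = relevance < rel_threshold
    neg_sim = sim.masked_fill(~neg_mask, float('-inf'))
    hardest_neg = neg_sim.max(dim=1).values
    loss_neg = F.relu(margin + hardest_neg - pos_sim).mean()

    # Relevance-aware positives: captions that are highly relevant to a video,
    # although not its annotated pair, are also pushed above the negatives.
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    pos_mask = (relevance >= rel_threshold) & ~eye
    extra_pos = sim.masked_fill(~pos_mask, float('inf'))
    easiest_extra_pos = extra_pos.min(dim=1).values
    loss_pos = F.relu(margin + hardest_neg - easiest_extra_pos).mean()

    return loss_neg + loss_pos


if __name__ == "__main__":
    # Toy usage with random embeddings and a random relevance matrix.
    B, D = 8, 256
    video_emb = F.normalize(torch.randn(B, D), dim=-1)
    text_emb = F.normalize(torch.randn(B, D), dim=-1)
    relevance = torch.rand(B, B)
    relevance.fill_diagonal_(1.0)
    print(relevance_aware_triplet_loss(video_emb, text_emb, relevance))
```

In practice the symmetric text-to-video direction would typically be added in the same way, and the relevance scores would come from caption semantics (e.g. shared noun and verb classes in EPIC-Kitchens-100) rather than random values.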

Task: Multi-Instance Retrieval
Dataset: EPIC-KITCHENS-100
Model: HGR-RANP

Metric       Value   Global Rank
mAP (V2T)    52.0    #4
mAP (T2V)    42.3    #4
mAP (Avg)    47.2    #5
nDCG (V2T)   61.2    #4
nDCG (T2V)   56.5    #5
nDCG (Avg)   58.8    #7
