X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval

15 Jul 2022  ·  Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, Rongrong Ji

Video-text retrieval has been a crucial and fundamental task in multi-modal research. Its development has been considerably promoted by large-scale multi-modal contrastive pre-training, which primarily focuses on coarse-grained or fine-grained contrast. However, cross-grained contrast, i.e., the contrast between coarse-grained and fine-grained representations, has rarely been explored in prior research. Compared with fine-grained or coarse-grained contrast, cross-grained contrast calculates the correlation between the coarse-grained feature and each fine-grained feature, and can filter out unnecessary fine-grained features under the guidance of the coarse-grained feature during similarity calculation, thus improving retrieval accuracy. To this end, this paper presents a novel multi-grained contrastive model, namely X-CLIP, for video-text retrieval. A further challenge lies in similarity aggregation: the fine-grained and cross-grained similarity matrices must be aggregated into an instance-level similarity. To address this challenge, we propose the Attention Over Similarity Matrix (AOSM) module, which makes the model focus on the contrast between essential frames and words, thus lowering the impact of unnecessary frames and words on retrieval results. With multi-grained contrast and the proposed AOSM module, X-CLIP achieves outstanding performance on five widely used video-text retrieval datasets, including MSR-VTT (49.3 R@1), MSVD (50.4 R@1), LSMDC (26.1 R@1), DiDeMo (47.8 R@1) and ActivityNet (46.2 R@1). It outperforms the previous state-of-the-art by +6.3%, +6.6%, +11.1%, +6.7% and +3.8% relative improvement on these benchmarks, demonstrating the superiority of multi-grained contrast and AOSM.
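The core idea of AOSM, as described above, is to aggregate a fine-grained (frame-by-word) similarity matrix into a single instance-level score by attending over its rows and columns, so that salient frame-word pairs dominate and irrelevant ones are suppressed. Below is a minimal NumPy sketch of this attention-over-similarity idea; the function names, the temperature value, and the symmetric row/column averaging are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def softmax(x, axis=-1, tau=0.01):
    # Temperature-scaled softmax; a small tau (assumed hyperparameter)
    # sharpens attention toward the most similar frame-word pairs.
    z = x / tau
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def aosm_score(sim, tau=0.01):
    """Aggregate a fine-grained similarity matrix sim (frames x words)
    into one instance-level score via attention over rows and columns."""
    # Video view: attend over words for each frame, then over frames.
    frame_scores = (softmax(sim, axis=1, tau=tau) * sim).sum(axis=1)
    s_video = (softmax(frame_scores, axis=0, tau=tau) * frame_scores).sum()
    # Text view: attend over frames for each word, then over words.
    word_scores = (softmax(sim, axis=0, tau=tau) * sim).sum(axis=0)
    s_text = (softmax(word_scores, axis=0, tau=tau) * word_scores).sum()
    # Average the two directional scores into one instance-level similarity.
    return 0.5 * (s_video + s_text)
```

Because the softmax concentrates weight on high-similarity entries, a single strongly matching frame-word pair can lift the instance-level score well above a plain average, which is exactly the "filter out unnecessary frames and words" behavior the abstract describes.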


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Video Retrieval | ActivityNet | X-CLIP | text-to-video R@1 | 46.2 | #19 |
| Video Retrieval | ActivityNet | X-CLIP | text-to-video R@5 | 75.5 | #16 |
| Video Retrieval | ActivityNet | X-CLIP | text-to-video Mean Rank | 6.8 | #9 |
| Video Retrieval | ActivityNet | X-CLIP | video-to-text R@1 | 46.4 | #9 |
| Video Retrieval | ActivityNet | X-CLIP | video-to-text R@5 | 75.9 | #7 |
| Video Retrieval | ActivityNet | X-CLIP | video-to-text Mean Rank | 6.4 | #6 |
| Video Retrieval | DiDeMo | X-CLIP | text-to-video R@1 | 47.8 | #28 |
| Video Retrieval | DiDeMo | X-CLIP | text-to-video R@5 | 79.3 | #15 |
| Video Retrieval | DiDeMo | X-CLIP | text-to-video Mean Rank | 12.6 | #6 |
| Video Retrieval | DiDeMo | X-CLIP | video-to-text R@1 | 47.8 | #11 |
| Video Retrieval | DiDeMo | X-CLIP | video-to-text R@10 | 76.8 | #12 |
| Video Retrieval | DiDeMo | X-CLIP | video-to-text Mean Rank | 10.5 | #9 |
| Video Retrieval | LSMDC | X-CLIP | text-to-video R@1 | 26.1 | #14 |
| Video Retrieval | LSMDC | X-CLIP | video-to-text R@1 | 26.9 | #7 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | text-to-video Mean Rank | 12.2 | #10 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | text-to-video R@1 | 49.3 | #19 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | text-to-video R@5 | 75.8 | #14 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | text-to-video R@10 | 84.8 | #13 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | text-to-video Median Rank | 2.0 | #10 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | video-to-text R@1 | 48.9 | #11 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | video-to-text R@5 | 76.8 | #7 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | video-to-text R@10 | 84.5 | #9 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | video-to-text Median Rank | 2.0 | #7 |
| Video Retrieval | MSR-VTT-1kA | X-CLIP | video-to-text Mean Rank | 8.1 | #9 |
| Video Retrieval | MSVD | X-CLIP | text-to-video R@1 | 50.4 | #12 |
| Video Retrieval | MSVD | X-CLIP | text-to-video R@5 | 80.6 | #9 |
| Video Retrieval | MSVD | X-CLIP | text-to-video Mean Rank | 8.4 | #4 |
| Video Retrieval | MSVD | X-CLIP | video-to-text R@1 | 66.8 | #10 |
| Video Retrieval | MSVD | X-CLIP | video-to-text R@10 | 90.4 | #13 |
| Video Retrieval | MSVD | X-CLIP | video-to-text Mean Rank | 4.2 | #7 |

Methods


No methods listed for this paper.