MDMMT: Multidomain Multimodal Transformer for Video Retrieval

19 Mar 2021  ·  Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr Petiushko ·

We present a new state of the art on the text-to-video retrieval task on the MSRVTT and LSMDC benchmarks, where our model outperforms all previous solutions by a large margin. Moreover, these state-of-the-art results are achieved with a single model on both datasets without finetuning. This multidomain generalisation is achieved by a proper combination of different video-caption datasets. We show that training on different datasets can improve each other's test results. Additionally, we check the intersection between many popular datasets and find that MSRVTT has significant overlap between its test and train parts; the same situation is observed for ActivityNet.


Results from the Paper

Ranked #22 on Video Retrieval on MSR-VTT (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Video Retrieval | LSMDC | MDMMT | text-to-video R@1 | 18.8 | #22 | |
| Video Retrieval | LSMDC | MDMMT | text-to-video R@5 | 38.5 | #20 | |
| Video Retrieval | LSMDC | MDMMT | text-to-video R@10 | 47.9 | #20 | |
| Video Retrieval | LSMDC | MDMMT | text-to-video Median Rank | 12.3 | #11 | |
| Video Retrieval | LSMDC | MDMMT | text-to-video Mean Rank | 58.0 | #9 | |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video R@1 | 23.1 | #22 | Yes |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video R@5 | 49.8 | #22 | Yes |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video R@10 | 61.8 | #20 | Yes |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video Mean Rank | 52.8 | #5 | Yes |
| Video Retrieval | MSR-VTT | MDMMT | text-to-video Median Rank | 6 | #10 | Yes |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video Mean Rank | 16.5 | #16 | |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video R@1 | 38.9 | #30 | |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video R@5 | 69.0 | #28 | |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video R@10 | 79.7 | #28 | |
| Video Retrieval | MSR-VTT-1kA | MDMMT | text-to-video Median Rank | 2 | #7 | |
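The R@k, Median Rank, and Mean Rank figures above are standard text-to-video retrieval metrics, computed from a text-video similarity matrix in which each query's ground-truth video sits on the diagonal. A minimal sketch of this evaluation (the function name and toy data are illustrative, not from the paper's code):

```python
import numpy as np

def retrieval_metrics(sim):
    """Compute text-to-video retrieval metrics from a similarity matrix.

    sim[i, j] is the similarity between text query i and video j;
    the ground-truth video for query i is assumed to be video i
    (the diagonal), as in standard MSR-VTT / LSMDC evaluation.
    """
    # Rank of the correct video per query: count how many videos score
    # at least as high as the ground-truth one (rank 1 = best).
    ranks = (sim >= np.diag(sim)[:, None]).sum(axis=1)
    return {
        "R@1": float((ranks <= 1).mean() * 100),   # % of queries ranked first
        "R@5": float((ranks <= 5).mean() * 100),
        "R@10": float((ranks <= 10).mean() * 100),
        "MedR": float(np.median(ranks)),           # median rank (lower is better)
        "MeanR": float(ranks.mean()),              # mean rank (lower is better)
    }

# Toy example: an identity similarity matrix ranks every video first.
metrics = retrieval_metrics(np.eye(4))
print(metrics)  # R@1 = 100.0, MedR = 1.0
```

Higher R@k and lower Median/Mean Rank indicate better retrieval; this is why the tables above report both families of numbers.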

