TS2-Net: Token Shift and Selection Transformer for Text-Video Retrieval

16 Jul 2022  ·  Yuqi Liu, Pengfei Xiong, Luhui Xu, Shengming Cao, Qin Jin ·

Text-video retrieval is a task of great practical value that has received increasing attention, and learning spatio-temporal video representations is one of its research hotspots. The video encoders in state-of-the-art video retrieval models usually adopt pre-trained vision backbones with the network structure fixed, and therefore cannot be further improved to produce fine-grained spatio-temporal video representations. In this paper, we propose the Token Shift and Selection Network (TS2-Net), a novel token shift and selection transformer architecture that dynamically adjusts the token sequence and selects informative tokens in both the temporal and spatial dimensions of input video samples. The token shift module temporally shifts whole token features back and forth across adjacent frames, preserving complete token representations while capturing subtle movements. The token selection module then selects the tokens that contribute most to local spatial semantics. In thorough experiments, the proposed TS2-Net achieves state-of-the-art performance on major text-video retrieval benchmarks, including new records on MSRVTT, VATEX, LSMDC, ActivityNet, and DiDeMo.
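
The abstract's token shift idea can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation: the function name, the tensor layout (batch, frames, tokens, channels), and the choice of how many token positions to shift are assumptions made for illustration. The key property it demonstrates is that entire token features, not channel slices, are exchanged between adjacent frames.

```python
import torch

def token_shift(x: torch.Tensor, num_shift_tokens: int = 2) -> torch.Tensor:
    """Temporally shift whole token features across adjacent frames.

    x: (B, T, N, D) patch tokens for B videos, T frames, N tokens, dim D.
    Illustrative sketch: the first `num_shift_tokens` token positions take
    their features from the previous frame, the next `num_shift_tokens`
    from the following frame, and the rest stay in place. The exact split
    is a hypothetical choice, not the paper's configuration.
    """
    out = x.clone()
    # Shift backward in time: positions [0, k) receive features from frame t-1.
    out[:, 1:, :num_shift_tokens] = x[:, :-1, :num_shift_tokens]
    # Shift forward in time: positions [k, 2k) receive features from frame t+1.
    k = num_shift_tokens
    out[:, :-1, k:2 * k] = x[:, 1:, k:2 * k]
    return out
```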
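
The token selection module can likewise be sketched as a small scoring network that keeps the top-K tokens per frame. Note this is a simplification: a hard `torch.topk` is not differentiable, so it only approximates the selection the paper describes; the class name, scorer architecture, and K are all assumptions.

```python
import torch
import torch.nn as nn

class TokenSelector(nn.Module):
    """Score tokens with a small MLP and keep the K highest-scoring ones.

    Simplified sketch: hard top-K selection as shown here is suitable only
    at inference time; a differentiable relaxation would be needed to
    train the scorer end-to-end.
    """

    def __init__(self, dim: int, num_keep: int):
        super().__init__()
        self.num_keep = num_keep
        self.scorer = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) tokens of one frame.
        scores = self.scorer(x).squeeze(-1)                # (B, N)
        topk = scores.topk(self.num_keep, dim=1).indices   # (B, K)
        idx = topk.unsqueeze(-1).expand(-1, -1, x.size(-1))
        return x.gather(1, idx)                            # (B, K, D)
```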

Results from the Paper


Task             Dataset      Model    Metric              Value  Global Rank
Video Retrieval  MSR-VTT-1kA  TS2-Net  text-to-video R@1   54.0   #8
Video Retrieval  MSR-VTT-1kA  TS2-Net  text-to-video R@5   79.3   #7
Video Retrieval  MSR-VTT-1kA  TS2-Net  text-to-video R@10  87.4   #7
Video Retrieval  VATEX        TS2-Net  text-to-video R@1   59.1   #8
Video Retrieval  VATEX        TS2-Net  text-to-video R@10  95.2   #6
