TubeDETR: Spatio-Temporal Video Grounding with Transformers

We consider the problem of localizing a spatio-temporal tube in a video corresponding to a given text query. This is a challenging task that requires the joint and efficient modeling of temporal, spatial and multi-modal interactions. To address this task, we propose TubeDETR, a transformer-based architecture inspired by the recent success of such models for text-conditioned object detection. Our model notably includes: (i) an efficient video and text encoder that models spatial multi-modal interactions over sparsely sampled frames and (ii) a space-time decoder that jointly performs spatio-temporal localization. We demonstrate the advantage of our proposed components through an extensive ablation study. We also evaluate our full approach on the spatio-temporal video grounding task and demonstrate improvements over the state of the art on the challenging VidSTG and HC-STVG benchmarks. Code and trained models are publicly available at https://antoyang.github.io/tubedetr.html.

CVPR 2022
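To make the two components concrete, below is a minimal, hypothetical PyTorch sketch of the dataflow the abstract describes: frame and text features are fused by a joint encoder, and a space-time decoder with one time-aligned query per sampled frame predicts a box per frame together with start/end logits for the temporal extent of the tube. Module sizes, heads, and names here are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class SpaceTimeGrounder(nn.Module):
    """Hypothetical sketch of a TubeDETR-style dataflow: a joint
    video-text encoder over sparsely sampled frames, then a
    space-time decoder that outputs one box per frame plus
    start/end logits for temporal localization. Dimensions and
    module choices are assumptions for illustration only."""

    def __init__(self, d_model: int = 256, num_frames: int = 32):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        # One learned query per sampled frame ("time-aligned" queries).
        self.time_queries = nn.Embedding(num_frames, d_model)
        self.box_head = nn.Linear(d_model, 4)    # (cx, cy, w, h) per frame
        self.bound_head = nn.Linear(d_model, 2)  # start / end logits per frame

    def forward(self, frame_feats: torch.Tensor, text_feats: torch.Tensor):
        # frame_feats: (B, T, d); text_feats: (B, L, d) -- assumed to come
        # from pretrained visual and text backbones, respectively.
        memory = self.encoder(torch.cat([frame_feats, text_feats], dim=1))
        b = frame_feats.size(0)
        queries = self.time_queries.weight.unsqueeze(0).expand(b, -1, -1)
        hs = self.decoder(queries, memory)
        boxes = self.box_head(hs).sigmoid()  # per-frame normalized boxes
        start_end = self.bound_head(hs)      # temporal-extent logits
        return boxes, start_end
```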
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Spatio-Temporal Video Grounding | HC-STVG1 | TubeDETR | m_vIoU | 32.4 | #2 |
| Spatio-Temporal Video Grounding | HC-STVG1 | TubeDETR | vIoU@0.3 | 49.8 | #2 |
| Spatio-Temporal Video Grounding | HC-STVG1 | TubeDETR | vIoU@0.5 | 23.5 | #2 |
| Spatio-Temporal Video Grounding | HC-STVG2 | TubeDETR | Val m_vIoU | 36.4 | #3 |
| Spatio-Temporal Video Grounding | HC-STVG2 | TubeDETR | Val vIoU@0.3 | 58.8 | #3 |
| Spatio-Temporal Video Grounding | HC-STVG2 | TubeDETR | Val vIoU@0.5 | 30.6 | #3 |
| Spatio-Temporal Video Grounding | VidSTG | TubeDETR | Declarative m_vIoU | 30.4 | #2 |
| Spatio-Temporal Video Grounding | VidSTG | TubeDETR | Declarative vIoU@0.3 | 42.5 | #2 |
| Spatio-Temporal Video Grounding | VidSTG | TubeDETR | Declarative vIoU@0.5 | 28.2 | #2 |
| Spatio-Temporal Video Grounding | VidSTG | TubeDETR | Interrogative m_vIoU | 25.7 | #2 |
| Spatio-Temporal Video Grounding | VidSTG | TubeDETR | Interrogative vIoU@0.3 | 35.7 | #2 |
| Spatio-Temporal Video Grounding | VidSTG | TubeDETR | Interrogative vIoU@0.5 | 23.2 | #2 |
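The vIoU metrics above follow the standard spatio-temporal grounding protocol: per-frame box IoU is summed over the frames where the predicted and ground-truth tubes temporally overlap, then normalized by the size of their temporal union; m_vIoU averages this score over queries, and vIoU@R is the fraction of queries whose vIoU exceeds R. A minimal Python sketch of this computation (function names are ours, not from the released code):

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def box_iou(a: Box, b: Box) -> float:
    """Standard IoU between two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def viou(pred: Dict[int, Box], gt: Dict[int, Box]) -> float:
    """vIoU: per-frame IoU summed over the temporal intersection of the
    two tubes, normalized by the size of their temporal union.
    Tubes are maps from frame index to box."""
    t_inter = set(pred) & set(gt)  # frames where both tubes exist
    t_union = set(pred) | set(gt)  # frames where either tube exists
    if not t_union:
        return 0.0
    return sum(box_iou(pred[t], gt[t]) for t in t_inter) / len(t_union)

def m_viou_and_recall(pairs: List[Tuple[Dict[int, Box], Dict[int, Box]]],
                      thresholds=(0.3, 0.5)):
    """m_vIoU over (pred, gt) tube pairs, plus vIoU@R for each threshold."""
    scores = [viou(p, g) for p, g in pairs]
    m_viou = sum(scores) / len(scores)
    recalls = {r: sum(s > r for s in scores) / len(scores) for r in thresholds}
    return m_viou, recalls
```

Because the score is normalized by the temporal union, a prediction is penalized both for missing ground-truth frames and for extending the tube beyond them, so strong vIoU requires accurate temporal and spatial localization at once.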
