End-to-End Spatio-Temporal Action Localisation with Video Transformers

The most performant spatio-temporal action localisation models rely on external person proposals and complex external memory banks. We propose a fully end-to-end, purely transformer-based model that directly ingests an input video and outputs tubelets -- a sequence of bounding boxes and the action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual frames or full tubelet annotations, and in both cases it predicts coherent tubelets as output. Moreover, our end-to-end model requires no additional pre-processing in the form of proposals, nor post-processing in the form of non-maximal suppression. We perform extensive ablation experiments, and significantly advance the state-of-the-art results on four different spatio-temporal action localisation benchmarks with both sparse keyframes and full tubelet annotations.
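To make the input/output contract described above concrete, here is a minimal PyTorch-style sketch of a query-based tubelet detector that maps a video clip directly to per-frame boxes and action-class logits, with no proposals or non-maximal suppression. The class name `TubeletDetector`, the module choices, and all shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of an end-to-end tubelet detector interface.
# Names, shapes, and modules are assumptions for illustration only;
# they do not reproduce the paper's video-transformer design.
import torch
import torch.nn as nn

class TubeletDetector(nn.Module):
    def __init__(self, num_classes: int, num_frames: int = 32,
                 num_queries: int = 100, dim: int = 256):
        super().__init__()
        self.num_frames = num_frames
        # Stand-in for the video backbone: tubelet-style patch embedding.
        self.backbone = nn.Conv3d(3, dim, kernel_size=(2, 16, 16), stride=(2, 16, 16))
        # Learned queries; each query decodes into one candidate tubelet.
        self.queries = nn.Embedding(num_queries, dim)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=3,
        )
        # Per-query heads: a box (cx, cy, w, h) and class logits for every frame.
        self.box_head = nn.Linear(dim, num_frames * 4)
        self.cls_head = nn.Linear(dim, num_frames * (num_classes + 1))  # +1: "no action"

    def forward(self, video: torch.Tensor):
        # video: (batch, 3, num_frames, height, width)
        b = video.shape[0]
        tokens = self.backbone(video).flatten(2).transpose(1, 2)    # (b, tokens, dim)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)      # (b, queries, dim)
        decoded = self.decoder(q, tokens)                           # (b, queries, dim)
        nq = decoded.shape[1]
        boxes = self.box_head(decoded).sigmoid().view(b, nq, self.num_frames, 4)
        logits = self.cls_head(decoded).view(b, nq, self.num_frames, -1)
        return boxes, logits  # per-frame boxes and action classes for each tubelet

# Example usage: a 32-frame clip at 224x224 yields 100 candidate tubelets.
model = TubeletDetector(num_classes=80)
boxes, logits = model(torch.randn(1, 3, 32, 224, 224))
print(boxes.shape, logits.shape)  # (1, 100, 32, 4) and (1, 100, 32, 81)
```

A set-prediction formulation like this is one common way to avoid proposal generation and non-maximal suppression, since each query is matched to at most one ground-truth tubelet during training.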


Results from the Paper


 Ranked #1 on Action Recognition on AVA v2.1 (using extra training data)

| Task                                 | Dataset      | Model  | Metric        | Value | Global Rank |
|--------------------------------------|--------------|--------|---------------|-------|-------------|
| Spatio-Temporal Action Localization  | AVA-Kinetics | STAR/L | val mAP       | 41.7  | #2          |
| Action Recognition                   | AVA v2.1     | STAR/L | mAP (Val)     | 41.7  | #1          |
| Action Recognition                   | AVA v2.2     | STAR/L | mAP           | 41.7  | #4          |
| Action Detection                     | UCF101-24    | STAR/L | Video-mAP 0.2 | 88.0  | #2          |
| Action Detection                     | UCF101-24    | STAR/L | Video-mAP 0.5 | 71.8  | #2          |
| Action Detection                     | UCF101-24    | STAR/L | Frame-mAP 0.5 | 90.3  | #1          |
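The thresholds in the table (0.2 and 0.5) are IoU thresholds: frame-mAP scores per-frame box detections, while video-mAP scores whole tubelets against a spatio-temporal IoU. The sketch below illustrates the IoU quantities behind these metrics; the helper names are hypothetical and this is not the official benchmark evaluation code.

```python
# Hypothetical helpers illustrating the IoU computations behind the
# frame-mAP and video-mAP thresholds above. Not the official evaluation code.
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def tubelet_iou(pred, gt):
    """Spatio-temporal IoU of two tubelets given as {frame_index: box} dicts:
    the mean per-frame box IoU over the union of their frames, counting
    frames covered by only one tubelet as IoU 0."""
    frames = set(pred) | set(gt)
    ious = [box_iou(pred[t], gt[t]) if t in pred and t in gt else 0.0
            for t in sorted(frames)]
    return float(np.mean(ious))

# Under this convention, a predicted tubelet counts as a true positive at
# video-mAP@0.5 if it shares the action class with a ground-truth tubelet
# and tubelet_iou(pred, gt) >= 0.5.
```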

Methods


No methods listed for this paper.