Is Space-Time Attention All You Need for Video Understanding?

9 Feb 2021 · Gedas Bertasius, Heng Wang, Lorenzo Torresani

We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.
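
To make the "divided attention" scheme concrete, here is a minimal PyTorch sketch of one Transformer block that applies temporal attention (each patch attends to the same spatial location across frames) followed by spatial attention (each patch attends to all patches in its own frame). The module name `DividedSpaceTimeBlock`, the tensor layout, and the hyperparameters are illustrative assumptions for this sketch, not the authors' exact implementation; the class token is omitted for simplicity. See the official repository linked above for the real code.

```python
# Sketch of a divided space-time attention block (assumed layout and names;
# not the official TimeSformer implementation).
import torch
import torch.nn as nn


class DividedSpaceTimeBlock(nn.Module):
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm_t = nn.LayerNorm(dim)
        self.attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_mlp = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x, T, S):
        # x: (B, T*S, dim) -- patch tokens for T frames, S patches per frame,
        # ordered frame-major (all of frame 0's patches first).
        B, N, D = x.shape

        # Temporal attention: regroup so each sequence holds one spatial
        # location across all T frames, then attend along time.
        xt = x.reshape(B, T, S, D).permute(0, 2, 1, 3).reshape(B * S, T, D)
        t_out, _ = self.attn_t(self.norm_t(xt), self.norm_t(xt), self.norm_t(xt))
        t_out = t_out.reshape(B, S, T, D).permute(0, 2, 1, 3).reshape(B, N, D)
        x = x + t_out  # residual connection

        # Spatial attention: regroup so each sequence holds the S patches of
        # a single frame, then attend within the frame.
        xs = self.norm_s(x.reshape(B * T, S, D))
        s_out, _ = self.attn_s(xs, xs, xs)
        x = x + s_out.reshape(B, N, D)  # residual connection

        # Standard Transformer MLP with pre-norm residual.
        return x + self.mlp(self.norm_mlp(x))


# Example: 8 frames of a 224x224 clip cut into 16x16 patches -> S = 196.
block = DividedSpaceTimeBlock()
tokens = torch.randn(2, 8 * 196, 768)
out = block(tokens, T=8, S=196)
print(out.shape)  # torch.Size([2, 1568, 768])
```

Because time and space are attended to in two separate passes, each token compares against T + S other tokens per block rather than T * S, which is what makes the scheme cheaper than full joint space-time attention.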

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | Diving-48 | TimeSformer | Accuracy | 75 | #5 |
| Action Recognition | Diving-48 | TimeSformer-L | Accuracy | 81 | #2 |
| Action Recognition | Diving-48 | TimeSformer-HR | Accuracy | 78 | #3 |
| Video Question Answering | Howto100M-QA | TimeSformer | Accuracy | 62.1 | #2 |
| Action Classification | Kinetics-400 | TimeSformer | Vid acc@1 | 78 | #48 |
| Action Classification | Kinetics-400 | TimeSformer | Vid acc@5 | 93.7 | #34 |
| Action Classification | Kinetics-400 | TimeSformer-HR | Vid acc@1 | 79.7 | #30 |
| Action Classification | Kinetics-400 | TimeSformer-HR | Vid acc@5 | 94.4 | #21 |
| Action Classification | Kinetics-400 | TimeSformer-L | Vid acc@1 | 80.7 | #18 |
| Action Classification | Kinetics-400 | TimeSformer-L | Vid acc@5 | 94.7 | #13 |
| Action Recognition | Something-Something V2 | TimeSformer-L | Top-1 Accuracy | 62.3 | #37 |
| Action Recognition | Something-Something V2 | TimeSformer | Top-1 Accuracy | 59.5 | #46 |
| Action Recognition | Something-Something V2 | TimeSformer-HR | Top-1 Accuracy | 62.5 | #36 |
