Video Swin Transformer

The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks. These video models are all built on Transformer layers that globally connect patches across the spatial and temporal dimensions. In this paper, we instead advocate an inductive bias of locality in video Transformers, which leads to a better speed-accuracy trade-off than previous approaches that compute self-attention globally, even with spatiotemporal factorization. The locality of the proposed video architecture is realized by adapting the Swin Transformer, originally designed for the image domain, while continuing to leverage the power of pre-trained image models. Our approach achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including action recognition (84.9% top-1 accuracy on Kinetics-400 and 86.1% top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6% top-1 accuracy on Something-Something v2). The code and models will be made publicly available at
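The locality inductive bias described above amounts to restricting self-attention to non-overlapping 3D windows spanning a few frames and a local spatial patch, rather than attending across the full video. A minimal NumPy sketch of such a 3D window partition is below; the function name, shapes, and window sizes are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def window_partition_3d(x, window_size):
    """Split a video feature map into non-overlapping 3D windows.

    x: array of shape (D, H, W, C) -- frames, height, width, channels.
    window_size: (d, h, w); each must evenly divide the matching axis of x.
    Returns (num_windows, d*h*w, C), so self-attention can be computed
    independently inside each window instead of over all D*H*W tokens.
    """
    D, H, W, C = x.shape
    d, h, w = window_size
    # Factor each spatial/temporal axis into (num_windows_axis, window_extent).
    x = x.reshape(D // d, d, H // h, h, W // w, w, C)
    # Bring the three window-index axes to the front, window-local axes after.
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    # Flatten window indices into one axis and window-local tokens into another.
    return x.reshape(-1, d * h * w, C)

# Example: 8 frames of a 16x16 feature map with 96 channels, 2x4x4 windows.
feat = np.random.randn(8, 16, 16, 96)
windows = window_partition_3d(feat, (2, 4, 4))
print(windows.shape)  # (64, 32, 96): 64 windows of 32 tokens each
```

Attention cost then scales with the (fixed) window volume rather than the full spatiotemporal token count, which is the source of the speed-accuracy trade-off claimed in the abstract; cross-window connections come from shifting the window grid between consecutive layers, as in the image Swin Transformer.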

CVPR 2022

Results from the Paper

Ranked #26 on Action Classification on Kinetics-600 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Action Classification | Kinetics-400 | Swin-T (ImageNet-1k pretrain) | Acc@1 | 78.8 | #110 |
| | | | Acc@5 | 93.6 | #88 |
| Action Classification | Kinetics-400 | Swin-S (ImageNet-1k pretrain) | Acc@1 | 80.6 | #82 |
| | | | Acc@5 | 94.5 | #64 |
| Action Classification | Kinetics-400 | Swin-B (ImageNet-1k pretrain) | Acc@1 | 80.6 | #82 |
| | | | Acc@5 | 94.6 | #60 |
| Action Classification | Kinetics-400 | Swin-B (ImageNet-21k pretrain) | Acc@1 | 82.7 | #63 |
| | | | Acc@5 | 95.5 | #46 |
| Action Classification | Kinetics-400 | Swin-L (ImageNet-21k pretrain) | Acc@1 | 83.1 | #58 |
| | | | Acc@5 | 95.9 | #42 |
| Action Classification | Kinetics-400 | Swin-L (384x384, ImageNet-21k pretrain) | Acc@1 | 84.9 | #50 |
| | | | Acc@5 | 96.7 | #37 |
| Action Classification | Kinetics-600 | Swin-B (ImageNet-21k pretrain) | Top-1 Accuracy | 84.0 | #33 |
| | | | Top-5 Accuracy | 96.5 | #23 |
| Action Classification | Kinetics-600 | Swin-L (384x384, ImageNet-21k pretrain) | Top-1 Accuracy | 86.1 | #26 |
| | | | Top-5 Accuracy | 97.3 | #14 |
| Action Recognition | Something-Something V2 | Swin-B (IN-21K + Kinetics400 pretrain) | Top-1 Accuracy | 69.6 | #36 |
| | | | Top-5 Accuracy | 92.7 | #23 |
| | | | Parameters | 89 | #23 |
| | | | GFLOPs | 321x3 | #6 |