UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning

Learning rich, multi-scale spatial-temporal semantics from high-dimensional videos is challenging, owing to the large local redundancy and complex global dependency between video frames. Recent advances in this area have been driven mainly by 3D convolutional neural networks and vision transformers. Although 3D convolution can efficiently aggregate local context to suppress local redundancy within a small 3D neighborhood, its limited receptive field prevents it from capturing global dependency. Conversely, vision transformers can effectively capture long-range dependency through self-attention, but they are limited in reducing local redundancy because they blindly compare similarity among all tokens in every layer. Based on these observations, we propose a novel Unified transFormer (UniFormer), which seamlessly integrates the merits of 3D convolution and spatial-temporal self-attention in a concise transformer format, and achieves a preferable balance between computation and accuracy. Unlike traditional transformers, our relation aggregator tackles both spatial-temporal redundancy and dependency by learning local token affinity in shallow layers and global token affinity in deep layers. We conduct extensive experiments on popular video benchmarks, e.g., Kinetics-400, Kinetics-600, and Something-Something V1&V2. With only ImageNet-1K pretraining, our UniFormer achieves 82.9%/84.8% top-1 accuracy on Kinetics-400/Kinetics-600, while requiring 10x fewer GFLOPs than other state-of-the-art methods. On Something-Something V1 and V2, UniFormer achieves new state-of-the-art performance of 60.9% and 71.2% top-1 accuracy, respectively.
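The block design described above can be made concrete with a small sketch. Below is a minimal PyTorch rendering of one UniFormer block as the abstract describes it: dynamic position encoding, a relation aggregator whose token affinity is local (a learned kernel over a small 3D neighborhood, i.e., a depthwise convolution) in shallow layers and global (spatial-temporal self-attention) in deep layers, followed by a feed-forward network. Module names, kernel sizes, and normalization choices here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class LocalMHRA(nn.Module):
    """Local relation aggregator: token affinity is a learnable kernel over a
    small 3D neighborhood, realized as a depthwise 3D convolution.
    (Hypothetical sketch; kernel size and layout are assumptions.)"""
    def __init__(self, dim, kernel=(3, 5, 5)):
        super().__init__()
        pad = tuple(k // 2 for k in kernel)
        self.value = nn.Conv3d(dim, dim, 1)  # pointwise value projection
        self.affinity = nn.Conv3d(dim, dim, kernel, padding=pad, groups=dim)
        self.proj = nn.Conv3d(dim, dim, 1)

    def forward(self, x):  # x: (B, C, T, H, W)
        return self.proj(self.affinity(self.value(x)))

class GlobalMHRA(nn.Module):
    """Global relation aggregator: standard self-attention over all T*H*W tokens."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, C, T, H, W)
        B, C, T, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, T*H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)  # all-pairs token affinity
        return out.transpose(1, 2).reshape(B, C, T, H, W)

class UniFormerBlock(nn.Module):
    """One block: dynamic position encoding (depthwise 3D conv), relation
    aggregation (local in shallow stages, global in deep stages), and an FFN,
    each with a residual connection."""
    def __init__(self, dim, local=True):
        super().__init__()
        self.dpe = nn.Conv3d(dim, dim, 3, padding=1, groups=dim)
        self.norm1 = nn.GroupNorm(1, dim)
        self.mhra = LocalMHRA(dim) if local else GlobalMHRA(dim)
        self.norm2 = nn.GroupNorm(1, dim)
        self.ffn = nn.Sequential(nn.Conv3d(dim, 4 * dim, 1), nn.GELU(),
                                 nn.Conv3d(4 * dim, dim, 1))

    def forward(self, x):
        x = x + self.dpe(x)
        x = x + self.mhra(self.norm1(x))
        return x + self.ffn(self.norm2(x))

# Toy usage: a shallow (local) block suppresses local redundancy, then a deep
# (global) block models long-range dependency on an 8-frame 14x14 feature map.
x = torch.randn(2, 64, 8, 14, 14)
y = UniFormerBlock(64, local=False)(UniFormerBlock(64, local=True)(x))
print(y.shape)  # torch.Size([2, 64, 8, 14, 14])
```

Per the abstract, blocks of this kind are stacked with local aggregation in the early (shallow) stages and global aggregation in the later (deep) ones.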

Benchmark results (values as reported; GFLOPs are given as per-view cost x number of views; parameter counts in millions):

Action Classification | Kinetics-400 | UniFormer-B (ImageNet-1K)
    Top-1 Accuracy: 82.9    (global rank #64)
    Top-5 Accuracy: 94.5    (global rank #65)
    GFLOPs x views: 259x4   (global rank #1)

Action Classification | Kinetics-600 | UniFormer-B (ImageNet-1K)
    Top-1 Accuracy: 84.8    (global rank #31)
    Top-5 Accuracy: 96.7    (global rank #20)
    GFLOPs x views: 259x4   (global rank #1)

Action Recognition | Something-Something V1 | UniFormer-B (IN-1K + Kinetics-400 pretrain)
    Top-1 Accuracy: 60.9    (global rank #8)
    Top-5 Accuracy: 87.3    (global rank #5)
    Params (M): 50.1        (global rank #2)
    GFLOPs x views: 259x3   (global rank #1)

Action Recognition | Something-Something V1 | UniFormer-S (IN-1K + Kinetics-600 pretrain)
    Top-1 Accuracy: 57.6    (global rank #12)
    Top-5 Accuracy: 84.9    (global rank #7)
    Params (M): 21.4        (global rank #1)
    GFLOPs x views: 41.8x3  (global rank #1)

Action Recognition | Something-Something V2 | UniFormer-S (IN-1K + Kinetics-600 pretrain)
    Top-1 Accuracy: 69.4    (global rank #46)
    Top-5 Accuracy: 92.1    (global rank #30)
    Params (M): 21.4        (global rank #36)
    GFLOPs x views: 41.8x3  (global rank #6)

Action Recognition | Something-Something V2 | UniFormer-B (IN-1K + Kinetics-400 pretrain)
    Top-1 Accuracy: 71.2    (global rank #30)
    Top-5 Accuracy: 92.8    (global rank #20)
    Params (M): 50.1        (global rank #31)
    GFLOPs x views: 259x3   (global rank #6)
