Swin Transformer V2: Scaling Up Capacity and Resolution

Large-scale NLP models have been shown to significantly improve performance on language tasks with no signs of saturation, and they also demonstrate striking few-shot capabilities like those of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and applying large vision models: training instability, resolution gaps between pre-training and fine-tuning, and hunger for labeled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained with low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labeled images. Through these techniques, this paper successfully trains a 3 billion-parameter Swin Transformer V2 model, the largest dense vision model to date, and makes it capable of training with images of up to 1,536$\times$1,536 resolution. It sets new performance records on 4 representative vision tasks: ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Our training is also far more efficient than that of Google's billion-level visual models, consuming 40 times less labeled data and 40 times less training time. Code is available at \url{https://github.com/microsoft/Swin-Transformer}.
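The first two techniques can be made concrete with a minimal sketch, assuming PyTorch. The names below (`ScaledCosineAttention`, `log_spaced_relative_coords`) are illustrative rather than the repository's API, and window partitioning, the QKV projections, and other details of the official code are omitted; the sketch only shows attention logits computed as cosine similarity divided by a learnable temperature, plus a position bias produced by a small MLP over log-spaced relative coordinates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def log_spaced_relative_coords(window_size: int) -> torch.Tensor:
    """Relative coordinates in [-(M-1), M-1] mapped to log space via
    sign(x) * log(1 + |x|), so the bias extrapolates smoothly to larger windows."""
    coords = torch.arange(-(window_size - 1), window_size, dtype=torch.float32)
    dy, dx = torch.meshgrid(coords, coords, indexing="ij")
    rel = torch.stack([dy, dx], dim=-1)               # (2M-1, 2M-1, 2)
    return torch.sign(rel) * torch.log1p(rel.abs())


class ScaledCosineAttention(nn.Module):
    """Attention logits = cos(q, k) / tau + B, with a learnable per-head tau
    (kept above 0.01) and B produced by a 2-layer MLP on log-spaced
    relative coordinates (the continuous position bias)."""

    def __init__(self, num_heads: int, window_size: int, hidden: int = 512):
        super().__init__()
        self.num_heads = num_heads
        self.tau = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.cpb_mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, num_heads)
        )
        self.register_buffer("rel_coords", log_spaced_relative_coords(window_size))

    def forward(self, q, k, v, rel_index):
        # q, k, v: (batch, heads, tokens, head_dim)
        # rel_index: (tokens, tokens) long tensor indexing the (2M-1)^2 bias table
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        logits = (q @ k.transpose(-2, -1)) / self.tau.clamp(min=0.01)
        bias_table = self.cpb_mlp(self.rel_coords.view(-1, 2))      # ((2M-1)^2, heads)
        bias = bias_table[rel_index.view(-1)].view(
            *rel_index.shape, self.num_heads).permute(2, 0, 1)      # (heads, tokens, tokens)
        attn = (logits + bias.unsqueeze(0)).softmax(dim=-1)
        return attn @ v
```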


Results from the Paper


Ranked #4 on Image Classification on ImageNet V2 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Semantic Segmentation | ADE20K | SwinV2-G (UperNet) | Validation mIoU | 59.9 | #12 |
| Semantic Segmentation | ADE20K | SwinV2-G-HTC++ (Liu et al., 2021a) | Validation mIoU | 53.7 | #68 |
| Instance Segmentation | COCO minival | SwinV2-G (HTC++) | mask AP | 53.7 | #7 |
| Object Detection | COCO minival | SwinV2-G (HTC++) | box AP | 62.5 | #13 |
| Object Detection | COCO test-dev | SwinV2-G (HTC++) | box mAP | 63.1 | #16 |
| Object Detection | COCO test-dev | SwinV2-G (HTC++) | Params (M) | 3000 | #1 |
| Instance Segmentation | COCO test-dev | SwinV2-G (HTC++) | mask AP | 54.4 | #8 |
| Image Classification | ImageNet | SwinV2-G | Top 1 Accuracy | 90.17% | #15 |
| Image Classification | ImageNet | SwinV2-G | Number of params | 3000M | #968 |
| Image Classification | ImageNet | SwinV2-B | Top 1 Accuracy | 87.1% | #103 |
| Image Classification | ImageNet | SwinV2-B | Number of params | 88M | #827 |
| Image Classification | ImageNet V2 | SwinV2-G | Top 1 Accuracy | 84.00% | #4 |
| Image Classification | ImageNet V2 | SwinV2-B | Top 1 Accuracy | 78.08% | #13 |
| Action Classification | Kinetics-400 | Video-SwinV2-G (ImageNet-22k and external 70M pretrain) | Acc@1 | 86.8 | #36 |

Methods