A ConvNet for the 2020s

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
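The "standard ConvNet modules" the abstract refers to can be illustrated with a minimal NumPy sketch of a ConvNeXt block: a depthwise 7x7 convolution, LayerNorm, an inverted-bottleneck MLP (1x1 expand by 4x, GELU, 1x1 project), and a residual connection with layer scale. The weight shapes, the `gamma` layer-scale parameter, and the function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the channel (last) axis, as ConvNeXt does.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def depthwise_conv7(x, w):
    # x: (H, W, C), w: (7, 7, C); each channel is convolved with its
    # own 7x7 filter ('same' output size via zero padding of 3).
    H, W, C = x.shape
    xp = np.pad(x, ((3, 3), (3, 3), (0, 0)))
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = (xp[i:i + 7, j:j + 7, :] * w).sum(axis=(0, 1))
    return out

def gelu(y):
    # tanh approximation of GELU
    return 0.5 * y * (1 + np.tanh(np.sqrt(2 / np.pi) * (y + 0.044715 * y**3)))

def convnext_block(x, w_dw, w1, w2, gamma):
    # depthwise 7x7 -> LayerNorm -> 1x1 expand (4x) -> GELU -> 1x1 project,
    # then layer scale (gamma) and a residual connection.
    y = depthwise_conv7(x, w_dw)
    y = layer_norm(y)
    y = y @ w1          # 1x1 conv == per-pixel linear layer
    y = gelu(y)
    y = y @ w2
    return x + gamma * y
```

With `gamma` initialized to zero, the block is an identity at the start of training, which is the point of layer scale; a nonzero `gamma` lets the branch contribute.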

CVPR 2022

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Semantic Segmentation | ADE20K | ConvNeXt-S | Validation mIoU | 49.6 | #123 |
| | | | Params (M) | 82 | #35 |
| | | | GFLOPs (512 x 512) | 1027 | #16 |
| Semantic Segmentation | ADE20K | ConvNeXt-B++ | Validation mIoU | 53.1 | #77 |
| | | | Params (M) | 122 | #27 |
| | | | GFLOPs (512 x 512) | 1828 | #23 |
| Semantic Segmentation | ADE20K | ConvNeXt-L++ | Validation mIoU | 53.7 | #68 |
| | | | Params (M) | 235 | #18 |
| | | | GFLOPs (512 x 512) | 2458 | #24 |
| Semantic Segmentation | ADE20K | ConvNeXt-XL++ | Validation mIoU | 54.0 | #63 |
| | | | Params (M) | 391 | #14 |
| | | | GFLOPs (512 x 512) | 3335 | #25 |
| Semantic Segmentation | ADE20K | ConvNeXt-B | Validation mIoU | 49.9 | #117 |
| | | | Params (M) | 122 | #27 |
| | | | GFLOPs (512 x 512) | 1170 | #20 |
| Semantic Segmentation | ADE20K | ConvNeXt-T | Validation mIoU | 46.7 | #165 |
| | | | Params (M) | 60 | #43 |
| | | | GFLOPs (512 x 512) | 939 | #12 |
| Object Detection | COCO-O | ConvNeXt-XL (Cascade Mask R-CNN) | Average mAP | 37.5 | #7 |
| | | | Effective Robustness | 12.68 | #6 |
| Image Classification | ImageNet | ConvNeXt-L (384 res) | Top 1 Accuracy | 85.5% | #214 |
| | | | Number of params | 198M | #920 |
| | | | GFLOPs | 101 | #461 |
| Image Classification | ImageNet | ConvNeXt-XL (ImageNet-22k) | Top 1 Accuracy | 87.8% | #74 |
| | | | Number of params | 350M | #945 |
| | | | GFLOPs | 179 | #480 |
| Image Classification | ImageNet | Adlik-ViT-SG+Swin_large+Convnext_xlarge(384) | Top 1 Accuracy | 88.36% | #58 |
| | | | Number of params | 1827M | #982 |
| Image Classification | ImageNet | ConvNeXt-T | Top 1 Accuracy | 82.1% | #539 |
| | | | Number of params | 29M | #654 |
| | | | GFLOPs | 4.5 | #215 |
| Domain Generalization | ImageNet-A | ConvNeXt-XL (Im21k, 384) | Top-1 accuracy (%) | 69.3 | #10 |
| Domain Generalization | ImageNet-C | ConvNeXt-XL (Im21k) (augmentation overlap with ImageNet-C) | mean Corruption Error (mCE) | 38.8 | #12 |
| | | | Number of params | 350M | #44 |
| Domain Generalization | ImageNet-R | ConvNeXt-XL (Im21k, 384) | Top-1 Error Rate | 31.8 | #8 |
| Semantic Segmentation | ImageNet-S | ConvNext-Tiny (P4, 224x224, SUP) | mIoU (val) | 48.7 | #11 |
| | | | mIoU (test) | 48.8 | #10 |
| Domain Generalization | ImageNet-Sketch | ConvNeXt-XL (Im21k, 384) | Top-1 accuracy | 55.0 | #4 |