A ConvNet for the 2020s

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.

CVPR 2022
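As a concrete picture of what the "modernized" ResNet converges to, below is a minimal PyTorch sketch of the resulting ConvNeXt block: a 7x7 depthwise convolution, LayerNorm, an inverted bottleneck of two pointwise layers with GELU, layer scale, and a residual connection. The class name and the `layer_scale_init` default are illustrative; the authors' reference implementation (which additionally uses stochastic depth) lives at https://github.com/facebookresearch/ConvNeXt.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Sketch of one ConvNeXt block (names/defaults illustrative):
    7x7 depthwise conv -> LayerNorm -> 1x1 expand (4x) -> GELU ->
    1x1 project -> layer scale -> residual connection."""

    def __init__(self, dim: int, layer_scale_init: float = 1e-6):
        super().__init__()
        # Depthwise 7x7 convolution (groups == channels)
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)           # normalizes over the channel dim
        self.pwconv1 = nn.Linear(dim, 4 * dim)  # pointwise conv as Linear (channels-last)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)
        self.gamma = nn.Parameter(layer_scale_init * torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)   # (N, C, H, W) -> (N, H, W, C)
        x = self.norm(x)
        x = self.pwconv2(self.act(self.pwconv1(x)))
        x = self.gamma * x
        x = x.permute(0, 3, 1, 2)   # back to (N, C, H, W)
        return residual + x
```

Implementing the pointwise convolutions as Linear layers on a channels-last tensor mirrors the Transformer MLP block that the design is borrowed from, while keeping every operation a standard ConvNet primitive.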

Results from the Paper


Ranked #4 on Domain Generalization on ImageNet-Sketch (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | ADE20K | ConvNeXt-B | Validation mIoU | 49.9 | #83 |
| | | | Params (M) | 122 | #12 |
| | | | GFLOPs (512 x 512) | 1170 | #16 |
| Semantic Segmentation | ADE20K | ConvNeXt-XL++ | Validation mIoU | 54.0 | #46 |
| | | | Params (M) | 391 | #4 |
| | | | GFLOPs (512 x 512) | 3335 | #20 |
| Semantic Segmentation | ADE20K | ConvNeXt-L++ | Validation mIoU | 53.7 | #48 |
| | | | Params (M) | 235 | #6 |
| | | | GFLOPs (512 x 512) | 2458 | #19 |
| Semantic Segmentation | ADE20K | ConvNeXt-B++ | Validation mIoU | 53.1 | #56 |
| | | | Params (M) | 122 | #12 |
| | | | GFLOPs (512 x 512) | 1828 | #18 |
| Semantic Segmentation | ADE20K | ConvNeXt-T | Validation mIoU | 46.7 | #128 |
| | | | Params (M) | 60 | #24 |
| | | | GFLOPs (512 x 512) | 939 | #10 |
| Semantic Segmentation | ADE20K | ConvNeXt-S | Validation mIoU | 49.6 | #89 |
| | | | Params (M) | 82 | #19 |
| | | | GFLOPs (512 x 512) | 1027 | #13 |
| Image Classification | ImageNet | Adlik-ViT-SG+Swin_large+Convnext_xlarge(384) | Top 1 Accuracy | 88.36% | #46 |
| | | | Number of params | 1827M | #831 |
| Image Classification | ImageNet | ConvNeXt-XL (ImageNet-22k) | Top 1 Accuracy | 87.8% | #58 |
| | | | Number of params | 350M | #798 |
| | | | GFLOPs | 179 | #433 |
| Image Classification | ImageNet | ConvNeXt-L (384 res) | Top 1 Accuracy | 85.5% | #176 |
| | | | Number of params | 198M | #778 |
| | | | GFLOPs | 101 | #417 |
| Image Classification | ImageNet | ConvNeXt-T | Top 1 Accuracy | 82.1% | #438 |
| | | | Number of params | 29M | #534 |
| | | | GFLOPs | 4.5 | #192 |
| Domain Generalization | ImageNet-A | ConvNeXt-XL (Im21k, 384) | Top-1 accuracy (%) | 69.3 | #7 |
| Domain Generalization | ImageNet-C | ConvNeXt-XL (Im21k) (augmentation overlap with ImageNet-C) | mean Corruption Error (mCE) | 38.8 | #7 |
| | | | Number of params | 350M | #31 |
| Domain Generalization | ImageNet-R | ConvNeXt-XL (Im21k, 384) | Top-1 Error Rate | 31.8 | #6 |
| Semantic Segmentation | ImageNet-S | ConvNext-Tiny (P4, 224x224, SUP) | mIoU (val) | 48.7 | #10 |
| | | | mIoU (test) | 48.8 | #9 |
| Domain Generalization | ImageNet-Sketch | ConvNeXt-XL (Im21k, 384) | Top-1 accuracy | 55.0 | #4 |
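Pretrained ConvNeXt variants are also available off the shelf; for instance, torchvision (v0.13 and later) ships the classification models. A minimal sketch using torchvision's own ConvNeXt-T weights, whose reported top-1 accuracy differs slightly from the paper's 82.1% figure above:

```python
import torch
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

# ConvNeXt-T with torchvision's ImageNet-1k weights (requires torchvision >= 0.13).
weights = ConvNeXt_Tiny_Weights.IMAGENET1K_V1
model = convnext_tiny(weights=weights).eval()

# Each weights enum bundles the preprocessing pipeline it was evaluated with.
preprocess = weights.transforms()

img = torch.rand(3, 224, 224)  # stand-in for a real (C, H, W) image tensor
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

probs = logits.softmax(dim=-1)
top1 = probs.argmax(dim=-1).item()
print(weights.meta["categories"][top1], probs[0, top1].item())
```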

Methods