A ConvNet for the 2020s

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
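The abstract mentions "several key components" discovered while modernizing a ResNet; in the ConvNeXt design these include a 7×7 depthwise convolution, LayerNorm, and an inverted bottleneck with a single GELU activation. The sketch below illustrates that block structure in plain NumPy for clarity; the shapes, random weights, and the tanh GELU approximation are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the channel (last) dimension, as ConvNeXt does.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def depthwise_conv(x, w):
    # x: (H, W, C); w: (k, k, C). 'Same' padding, stride 1,
    # one filter per channel (no cross-channel mixing).
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]           # (k, k, C)
            out[i, j] = (patch * w).sum(axis=(0, 1))  # per-channel sum
    return out

def convnext_block(x, w_dw, w1, w2):
    # 7x7 depthwise conv -> LayerNorm -> 1x1 expand (4C) -> GELU
    # -> 1x1 project back to C -> residual connection.
    y = depthwise_conv(x, w_dw)
    y = layer_norm(y)
    y = gelu(y @ w1)   # 1x1 conv == per-pixel matmul over channels
    y = y @ w2
    return x + y

rng = np.random.default_rng(0)
C = 8
x = rng.standard_normal((6, 6, C))
out = convnext_block(
    x,
    rng.standard_normal((7, 7, C)) * 0.1,      # depthwise 7x7 weights
    rng.standard_normal((C, 4 * C)) * 0.1,     # pointwise expansion
    rng.standard_normal((4 * C, C)) * 0.1,     # pointwise projection
)
print(out.shape)  # (6, 6, 8): spatial size and channel count are preserved
```

Note the inverted-bottleneck ordering (wide hidden dimension, single activation), which mirrors a Transformer MLP block more than a classic ResNet bottleneck.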

Published at CVPR 2022.

Results from the Paper


Global ranks (#) refer to the corresponding public leaderboard at the time of scraping.

Semantic Segmentation — ADE20K
  ConvNeXt-T    : Validation mIoU 46.7 (#164); Params 60M (#41);  GFLOPs (512x512) 939 (#12)
  ConvNeXt-S    : Validation mIoU 49.6 (#122); Params 82M (#33);  GFLOPs (512x512) 1027 (#16)
  ConvNeXt-B    : Validation mIoU 49.9 (#116); Params 122M (#25); GFLOPs (512x512) 1170 (#20)
  ConvNeXt-B++  : Validation mIoU 53.1 (#77);  Params 122M (#25); GFLOPs (512x512) 1828 (#23)
  ConvNeXt-L++  : Validation mIoU 53.7 (#68);  Params 235M (#16); GFLOPs (512x512) 2458 (#24)
  ConvNeXt-XL++ : Validation mIoU 54.0 (#63);  Params 391M (#12); GFLOPs (512x512) 3335 (#25)

Object Detection — COCO-O
  ConvNeXt-XL (Cascade Mask R-CNN) : Average mAP 37.5 (#7); Effective Robustness 12.68 (#6)

Image Classification — ImageNet
  ConvNeXt-T                  : Top-1 accuracy 82.1% (#518); Params 29M (#632);  GFLOPs 4.5 (#208)
  ConvNeXt-L (384 res)        : Top-1 accuracy 85.5% (#209); Params 198M (#889); GFLOPs 101 (#442)
  ConvNeXt-XL (ImageNet-22k)  : Top-1 accuracy 87.8% (#75);  Params 350M (#914); GFLOPs 179 (#461)
  Adlik-ViT-SG + Swin-Large + ConvNeXt-XLarge (384) : Top-1 accuracy 88.36% (#59); Params 1827M (#951)

Domain Generalization
  ImageNet-A      — ConvNeXt-XL (Im21k, 384) : Top-1 accuracy 69.3 (#10)
  ImageNet-C      — ConvNeXt-XL (Im21k; augmentation overlaps with ImageNet-C) : mean Corruption Error (mCE) 38.8 (#12); Params 350M (#40)
  ImageNet-R      — ConvNeXt-XL (Im21k, 384) : Top-1 error rate 31.8 (#8)
  ImageNet-Sketch — ConvNeXt-XL (Im21k, 384) : Top-1 accuracy 55.0 (#4)
  VizWiz-Classification — ConvNeXt-B : Accuracy, all images 53.5 (#2); corrupted images 46.9 (#3); clean images 56.0 (#2)

Semantic Segmentation — ImageNet-S
  ConvNeXt-Tiny (P4, 224x224, SUP) : mIoU (val) 48.7 (#11); mIoU (test) 48.8 (#10)

Classification — InDL
  ConvNeXt : Average Recall 93.47% (#1)
