CoAtNet: Marrying Convolution and Attention for All Data Sizes

NeurIPS 2021 · Zihang Dai, Hanxiao Liu, Quoc V. Le, Mingxing Tan

Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than that of convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths of both architectures, we present CoAtNets (pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity, and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets: without extra data, CoAtNet achieves 86.0% ImageNet top-1 accuracy; when pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT-300M while using 23x less data; notably, when we further scale up CoAtNet with JFT-3B, it achieves 90.88% top-1 accuracy on ImageNet, establishing a new state-of-the-art result.
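To make insight (1) concrete, the sketch below shows one way a translation-invariant relative bias can be added to the attention logits before the softmax, so that a single layer mixes a depthwise-convolution-like term (static, position-based weights) with ordinary self-attention (input-dependent weights). This is a minimal, 1-D PyTorch sketch under assumptions, not the paper's implementation: the class name `RelativeAttention1D` and its parameters are hypothetical, and the paper applies the idea to 2-D feature maps with multi-head attention.

```python
# Minimal sketch (assumptions noted above) of pre-softmax relative attention:
# a learnable bias w[i - j], shared across positions like a depthwise
# convolution kernel, is added to the input-dependent logits q_i . k_j.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeAttention1D(nn.Module):
    """1-D illustration; the paper works on 2-D feature maps."""
    def __init__(self, dim: int, max_len: int):
        super().__init__()
        self.scale = dim ** -0.5
        self.qkv = nn.Linear(dim, 3 * dim)
        # One learnable scalar per relative offset in [-(max_len-1), max_len-1].
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_len - 1))
        self.max_len = max_len

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, dim)
        B, L, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = torch.einsum("bld,bmd->blm", q, k) * self.scale   # q_i . k_j
        idx = torch.arange(L, device=x.device)
        rel = idx[:, None] - idx[None, :]                          # offsets i - j
        logits = logits + self.rel_bias[rel + self.max_len - 1]    # + w[i - j]
        attn = F.softmax(logits, dim=-1)
        return torch.einsum("blm,bmd->bld", attn, v)
```

In this view, when the static relative bias dominates the logits the layer behaves like a convolution with a fixed kernel, and when the input-dependent term dominates it recovers standard self-attention, which is why the two operations can be unified in one layer.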

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | GasHisSDB | CoAtNet-1 | Accuracy | 98.74 | # 1 |
| Image Classification | GasHisSDB | CoAtNet-1 | Precision | 99.97 | # 1 |
| Image Classification | GasHisSDB | CoAtNet-1 | F1-Score | 99.38 | # 1 |
| Image Classification | ImageNet | CoAtNet-3 (21k) | Top 1 Accuracy | 87.6% | # 78 |
| Image Classification | ImageNet | CoAtNet-3 | Top 1 Accuracy | 84.5% | # 307 |
| Image Classification | ImageNet | CoAtNet-3 | Number of params | 168M | # 959 |
| Image Classification | ImageNet | CoAtNet-3 | GFLOPs | 34.7 | # 438 |
| Image Classification | ImageNet | CoAtNet-2 (21k) | Top 1 Accuracy | 87.1% | # 101 |
| Image Classification | ImageNet | CoAtNet-2 | Top 1 Accuracy | 84.1% | # 346 |
| Image Classification | ImageNet | CoAtNet-2 | Number of params | 75M | # 865 |
| Image Classification | ImageNet | CoAtNet-2 | GFLOPs | 15.7 | # 369 |
| Image Classification | ImageNet | CoAtNet-1 | Top 1 Accuracy | 83.3% | # 435 |
| Image Classification | ImageNet | CoAtNet-1 | Number of params | 42M | # 742 |
| Image Classification | ImageNet | CoAtNet-1 | GFLOPs | 8.4 | # 292 |
| Image Classification | ImageNet | CoAtNet-0 | Top 1 Accuracy | 81.6% | # 620 |
| Image Classification | ImageNet | CoAtNet-0 | Number of params | 25M | # 638 |
| Image Classification | ImageNet | CoAtNet-0 | GFLOPs | 4.2 | # 207 |
| Image Classification | ImageNet | CoAtNet-3 @384 | Top 1 Accuracy | 88.52% | # 40 |
| Image Classification | ImageNet | CoAtNet-3 @384 | GFLOPs | 114 | # 504 |

Methods