Scaling Vision Transformers

Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results; therefore, understanding a model's scaling properties is key to designing future generations effectively. While the laws for scaling Transformer language models have been studied, it is unknown how Vision Transformers scale. To address this, we scale ViT models and data, both up and down, and characterize the relationships between error rate, data, and compute. Along the way, we refine the architecture and training of ViT, reducing memory consumption and increasing accuracy of the resulting models. As a result, we successfully train a ViT model with two billion parameters, which attains a new state of the art on ImageNet of 90.45% top-1 accuracy. The model also performs well for few-shot transfer, for example, reaching 84.86% top-1 accuracy on ImageNet with only 10 examples per class.
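The abstract's central exercise is characterizing how error rate falls with compute and data. As a rough illustration (not the paper's actual fitting code), a saturating power law of the form error = a · compute^(−b) + c can be fit to such measurements in a few lines of NumPy/SciPy; every data point and starting value below is made up for the sketch.

```python
# Minimal sketch: fit a saturating power law error = a * compute^(-b) + c
# to (compute, error-rate) pairs, in the spirit of the paper's scaling curves.
# All numbers below are illustrative placeholders, not results from the paper.
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(compute, a, b, c):
    # Error rate modelled as a shifted power law; c is the irreducible error floor.
    return a * np.power(compute, -b) + c

# Hypothetical (compute, error-rate) measurements, e.g. core-days vs. validation error.
compute = np.array([1e2, 3e2, 1e3, 3e3, 1e4, 3e4])
error = np.array([0.32, 0.27, 0.23, 0.20, 0.18, 0.17])

# Fit a, b, c; the bounds keep all parameters non-negative and the exponent modest.
(a, b, c), _ = curve_fit(
    saturating_power_law, compute, error,
    p0=(1.0, 0.3, 0.1), bounds=([0.0, 0.0, 0.0], [np.inf, 2.0, 1.0]),
)
print(f"a = {a:.3f}, exponent b = {b:.3f}, irreducible error c = {c:.3f}")
```

The fitted offset c is what makes the law "saturating": it captures the error that remains even as compute grows without bound.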

CVPR 2022

Datasets

Introduced in the Paper: JFT-3B

Used in the Paper: ImageNet, ObjectNet, JFT-300M

Results from the Paper


Ranked #3 on Image Classification on VTAB-1k (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | ImageNet | ViT-G/14 | Top-1 Accuracy | 90.45% | #10 |
| Image Classification | ImageNet | ViT-G/14 | Number of params | 1843M | #962 |
| Image Classification | ImageNet | ViT-G/14 | Hardware Burden | – | #1 |
| Image Classification | ImageNet | ViT-G/14 | Operations per network pass | – | #1 |
| Image Classification | ImageNet | ViT-G/14 | GFLOPs | 2859.9 | #493 |
| Image Classification | ImageNet ReaL | ViT-G/14 | Accuracy | 90.81% | #11 |
| Image Classification | ImageNet V2 | ViT-G/14 | Top-1 Accuracy | 83.33% | #6 |
| Image Classification | ObjectNet | ViT-G/14 | Top-1 Accuracy | 70.53% | #14 |
| Image Classification | ObjectNet | NS (Eff.-L2) | Top-1 Accuracy | 68.5% | #16 |
| Image Classification | VTAB-1k | ViT-G/14 | Top-1 Accuracy | 78.29% | #3 |
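The few-shot transfer number quoted in the abstract (84.86% 10-shot ImageNet) comes from linear probes on frozen representations rather than full fine-tuning. Below is a minimal sketch of a ridge-regularized least-squares probe of that general kind; the feature dimension, class count, and random arrays are placeholders standing in for pre-computed ViT embeddings, not the paper's actual pipeline.

```python
# Minimal sketch of a few-shot linear probe on frozen features:
# closed-form ridge regression onto {-1, +1} per-class targets.
# Shapes and data here are hypothetical; real usage would plug in
# features extracted from a pretrained ViT backbone.
import numpy as np

def fit_linear_probe(features, labels, num_classes, l2=1e-3):
    """Closed-form ridge regression mapping features to {-1, +1} class targets."""
    n, d = features.shape
    targets = -np.ones((n, num_classes))
    targets[np.arange(n), labels] = 1.0                      # +1 for the true class
    gram = features.T @ features + l2 * np.eye(d)            # (d, d) regularized Gram
    weights = np.linalg.solve(gram, features.T @ targets)    # (d, num_classes)
    return weights

def probe_accuracy(weights, features, labels):
    preds = np.argmax(features @ weights, axis=1)
    return float(np.mean(preds == labels))

# Hypothetical 10-shot setup: 1000 classes x 10 examples, 1664-dim features.
rng = np.random.default_rng(0)
train_x = rng.normal(size=(10_000, 1664))
train_y = rng.integers(0, 1000, size=10_000)
w = fit_linear_probe(train_x, train_y, num_classes=1000)
print(f"training accuracy of the probe: {probe_accuracy(w, train_x, train_y):.3f}")
```

Because the classifier has a closed-form solution, evaluating many few-shot splits is cheap once the frozen features have been computed.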

Methods