All seven model variants share the same training techniques and architecture components:

Training Techniques | SGD with Momentum, Cosine Annealing, Gradient Clipping |
---|---|
Architecture | Layer Normalization, Multi-Head Attention, Tanh Activation, Dense Connections, Attention Dropout, Dropout, Scaled Dot-Product Attention, GELU, Convolution |
IDs | vit_base_patch16_224, vit_base_patch16_384, vit_base_patch32_384, vit_base_resnet50_384, vit_large_patch16_224, vit_large_patch16_384, vit_small_patch16_224 |
The Vision Transformer is a model for image classification that employs a Transformer-like architecture over patches of the image. This includes the use of Multi-Head Attention, Scaled Dot-Product Attention and other architectural features seen in the Transformer architecture traditionally used for NLP.
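For intuition, the sketch below shows the basic recipe in plain PyTorch: the image is sliced into fixed-size patches, each patch is projected to an embedding, a learnable class token and position embeddings are added, and the resulting sequence is passed through a Transformer encoder (multi-head self-attention, GELU MLPs, layer normalization). This is an illustrative toy, not the timm implementation; the embedding size, depth, and head count are arbitrary assumptions.

import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal ViT-style classifier sketch (illustrative only; not the timm source)."""
    def __init__(self, img_size=224, patch_size=16, dim=192, depth=4, heads=3, num_classes=1000):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Patch embedding via a strided convolution (equivalent to flatten + linear projection).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4,
            activation="gelu", batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                        # x: (B, 3, H, W)
        x = self.patch_embed(x)                  # (B, dim, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)         # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                      # multi-head self-attention blocks
        return self.head(self.norm(x[:, 0]))     # classify from the class token

logits = TinyViT()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])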
To load a pretrained model:
import timm
m = timm.create_model('vit_large_patch16_224', pretrained=True)  # download and load pretrained weights
m.eval()  # switch to inference mode (disables dropout)
Replace the model name with the variant you want to use, e.g. vit_large_patch16_224. You can find the IDs in the model summaries at the top of this page.
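A slightly fuller inference example, using timm's data helpers to build the preprocessing that matches the pretrained weights, might look like the following. Treat it as a sketch: the image path is a placeholder, and the variant shown is just one of the IDs listed above.

import torch
import timm
from PIL import Image
from timm.data import resolve_data_config, create_transform

m = timm.create_model('vit_base_patch16_224', pretrained=True)
m.eval()

# Build the preprocessing pipeline (resize, crop, normalization) from the model's pretrained config.
config = resolve_data_config({}, model=m)
transform = create_transform(**config)

img = Image.open('example.jpg').convert('RGB')   # placeholder path
x = transform(img).unsqueeze(0)                  # add a batch dimension

with torch.no_grad():
    probs = m(x).softmax(dim=-1)
top5 = probs.topk(5)
print(top5.indices, top5.values)                 # top-5 class indices and probabilities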
To train a new model from scratch, you can follow the timm recipe scripts.
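The published recipes live in those scripts; purely to illustrate the techniques listed in the summary above (SGD with Momentum, Cosine Annealing, Gradient Clipping), a bare-bones fine-tuning loop could look like the sketch below. The synthetic dataset, learning rate, epoch count, and clipping norm are placeholder assumptions, not the recipe the pretrained weights were trained with.

import torch
import timm
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: replace with a real dataset / DataLoader.
data = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)

model = timm.create_model('vit_small_patch16_224', pretrained=True, num_classes=10)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)        # SGD with Momentum
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)   # Cosine Annealing

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)      # Gradient Clipping
        optimizer.step()
    scheduler.step()  # decay the learning rate along the cosine schedule once per epoch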
@misc{dosovitskiy2020image,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby},
year={2020},
eprint={2010.11929},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
MODEL | TOP 1 ACCURACY | TOP 5 ACCURACY |
---|---|---|
vit_large_patch16_384 | 85.17% | 97.36% |
vit_base_resnet50_384 | 84.99% | 97.30% |
vit_base_patch16_384 | 84.20% | 97.22% |
vit_large_patch16_224 | 83.06% | 96.44% |
vit_base_patch16_224 | 81.78% | 96.13% |
vit_base_patch32_384 | 81.66% | 96.13% |
vit_small_patch16_224 | 77.85% | 93.42% |