The Vision Transformer, or ViT, is a model for image classification that employs a Transformer-like architecture over patches of the image. An image is split into fixed-size patches, each of which is then linearly embedded; position embeddings are added, and the resulting sequence of vectors is fed to a standard Transformer encoder. To perform classification, the standard approach of prepending an extra learnable “classification token” to the sequence is used.
Source: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
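The forward pass described above can be written compactly. Below is a minimal sketch in PyTorch; module names such as `ViTClassifier` are illustrative, and the hyperparameters shown (16×16 patches, 768-dim embeddings, 12 layers) follow the ViT-Base configuration from the paper. This is not the reference implementation.

```python
import torch
import torch.nn as nn

class ViTClassifier(nn.Module):
    # Illustrative sketch of the ViT forward pass, not the official code.
    def __init__(self, image_size=224, patch_size=16, in_chans=3,
                 embed_dim=768, depth=12, num_heads=12, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Split the image into fixed-size patches and linearly embed each one;
        # a strided convolution is equivalent to unfold + linear projection.
        self.patch_embed = nn.Conv2d(in_chans, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        # Learnable classification token and position embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        # Standard Transformer encoder over the resulting token sequence.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                        # x: (B, 3, H, W)
        x = self.patch_embed(x)                  # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)         # (B, N, D) patch sequence
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)           # prepend [class] token
        x = x + self.pos_embed                   # add position embeddings
        x = self.encoder(x)
        return self.head(x[:, 0])                # classify from [class] token

logits = ViTClassifier()(torch.randn(2, 3, 224, 224))  # shape: (2, 1000)
```

Note the design choice in the sketch: rather than explicitly unfolding the image into patches and applying a linear layer, a convolution with kernel size and stride equal to the patch size performs both steps in one operation, which is how most implementations realize the patch embedding.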
| Task | Papers | Share |
|---|---|---|
| Image Classification | 82 | 9.92% |
| Semantic Segmentation | 68 | 8.22% |
| Object Detection | 45 | 5.44% |
| Classification | 24 | 2.90% |
| Self-Supervised Learning | 24 | 2.90% |
| Instance Segmentation | 16 | 1.93% |
| Image Segmentation | 14 | 1.69% |
| Action Recognition | 12 | 1.45% |
| Medical Image Segmentation | 12 | 1.45% |