Vision Transformer

Introduced by Dosovitskiy et al. in An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

The Vision Transformer, or ViT, is a model for image classification that employs a Transformer-like architecture over patches of the image. An image is split into fixed-size patches, each of which is then linearly embedded; position embeddings are added, and the resulting sequence of vectors is fed to a standard Transformer encoder. To perform classification, the standard approach of prepending an extra learnable “classification token” to the sequence is used.

Source: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
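To make the pipeline concrete, below is a minimal sketch of the architecture described above. It is written in PyTorch as an assumption (the paper's reference implementation uses JAX/Flax), and the hyperparameters (patch size 16, 192-dim embeddings, 6 layers) are illustrative, not the paper's. Patch splitting and linear embedding are fused into a single strided convolution, a standard equivalent formulation.

# Minimal ViT sketch (hedged: PyTorch reimplementation, illustrative hyperparameters)
import torch
import torch.nn as nn

class ViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, in_chans=3,
                 dim=192, depth=6, heads=3, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch splitting + linear embedding, fused as a strided convolution.
        self.patch_embed = nn.Conv2d(in_chans, dim,
                                     kernel_size=patch_size, stride=patch_size)
        # Learnable classification token and position embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        # Standard Transformer encoder over the patch sequence
        # (pre-norm and GELU, matching the ViT block layout).
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           activation="gelu",
                                           norm_first=True, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                     # x: (B, C, H, W)
        x = self.patch_embed(x)               # (B, dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)      # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)        # prepend classification token
        x = x + self.pos_embed                # add position embeddings
        x = self.encoder(x)
        return self.head(x[:, 0])             # classify from the token's output

logits = ViT()(torch.randn(2, 3, 224, 224))   # -> shape (2, 1000)

Classification reads only the encoder output at the classification-token position; the patch outputs are discarded at this stage.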

Tasks

Task                       Papers   Share
Semantic Segmentation          57   5.75%
Image Classification           54   5.45%
Object Detection               41   4.14%
Self-Supervised Learning       26   2.62%
Decoder                        23   2.32%
Image Segmentation             23   2.32%
Object                         23   2.32%
Classification                 21   2.12%
Computational Efficiency       18   1.82%
