Emerging Properties in Self-Supervised Vision Transformers

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
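
To make the self-distillation objective described in the abstract concrete, here is a minimal PyTorch-style sketch of a DINO-like training step: a student network is trained to match the centered, sharpened output of a momentum (EMA) teacher across multiple crops of the same image. The `student`, `teacher`, `optimizer`, crop lists, and hyper-parameter values are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    # Momentum encoder: teacher weights are an exponential moving average
    # of the student weights; no gradients flow into the teacher.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param.detach(), alpha=1 - momentum)

def dino_loss(student_out, teacher_out, center,
              student_temp=0.1, teacher_temp=0.04):
    # Teacher outputs are centered and sharpened, then used as soft targets
    # for the student (cross-entropy between the two distributions).
    t = F.softmax((teacher_out - center) / teacher_temp, dim=-1).detach()
    s = F.log_softmax(student_out / student_temp, dim=-1)
    return -(t * s).sum(dim=-1).mean()

def training_step(student, teacher, optimizer, center,
                  global_crops, local_crops, center_momentum=0.9):
    # The teacher only sees the two global crops; the student sees every crop,
    # and each student view is matched against each (different) teacher view.
    with torch.no_grad():
        teacher_out = [teacher(v) for v in global_crops]
    student_out = [student(v) for v in global_crops + local_crops]

    loss, n_terms = 0.0, 0
    for ti, t_out in enumerate(teacher_out):
        for si, s_out in enumerate(student_out):
            if si == ti:  # skip matching a global crop with itself
                continue
            loss = loss + dino_loss(s_out, t_out, center)
            n_terms += 1
    loss = loss / n_terms

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)

    # Update the center used to prevent collapse of the teacher outputs.
    with torch.no_grad():
        batch_center = torch.cat(teacher_out).mean(dim=0, keepdim=True)
        center.mul_(center_momentum).add_(batch_center, alpha=1 - center_momentum)
    return loss.item()
```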

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Copy Detection | Copydays strong subset | DINO (ViT-B/8) | mAP | 85.5 | #1 |
| Video Object Segmentation | DAVIS 2017 | DINO (ViT-B/8, ImageNet retrain) | J&F | 71.4 | #1 |
| Self-Supervised Image Classification | ImageNet | DINO (DeiT-S/8) | Top 1 Accuracy | 79.7% | #4 |
| Self-Supervised Image Classification | ImageNet | DINO (DeiT-S/8) | Number of Params | 21M | #38 |
| Self-Supervised Image Classification | ImageNet | DINO (DeiT-S/8) | Top 1 Accuracy (kNN, k=20) | 78.3% | #1 |
| Self-Supervised Image Classification | ImageNet | DINO (ResNet-50) | Top 1 Accuracy | 75.3% | #23 |
| Self-Supervised Image Classification | ImageNet | DINO (ResNet-50) | Number of Params | 24M | #27 |
| Self-Supervised Image Classification | ImageNet | DINO (ResNet-50) | Top 1 Accuracy (kNN, k=20) | 67.5% | #6 |
| Self-Supervised Image Classification | ImageNet | DINO (DeiT-S/16) | Top 1 Accuracy | 77.0% | #15 |
| Self-Supervised Image Classification | ImageNet | DINO (DeiT-S/16) | Number of Params | 21M | #38 |
| Self-Supervised Image Classification | ImageNet | DINO (DeiT-S/16) | Top 1 Accuracy (kNN, k=20) | 74.5% | #4 |
| Self-Supervised Image Classification | ImageNet | DINO (ViT-B/8) | Top 1 Accuracy | 80.1% | #2 |
| Self-Supervised Image Classification | ImageNet | DINO (ViT-B/8) | Number of Params | 85M | #24 |
| Self-Supervised Image Classification | ImageNet | DINO (ViT-B/8) | Top 1 Accuracy (kNN, k=20) | 77.4% | #2 |
| Self-Supervised Image Classification | ImageNet | DINO (ViT-B/16) | Top 1 Accuracy | 78.2% | #9 |
| Self-Supervised Image Classification | ImageNet | DINO (ViT-B/16) | Number of Params | 85M | #24 |
| Self-Supervised Image Classification | ImageNet | DINO (ViT-B/16) | Top 1 Accuracy (kNN, k=20) | 76.1% | #3 |
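
The "Top 1 Accuracy (kNN, k=20)" rows above correspond to evaluating the frozen self-supervised features with a weighted k-nearest-neighbor classifier rather than a trained linear head. The sketch below shows one way such an evaluation can be written, assuming features have already been extracted for the train and validation splits; the function name, tensor shapes, and temperature value are illustrative assumptions, not the paper's released evaluation code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def knn_classify(train_feats, train_labels, test_feats, num_classes,
                 k=20, temperature=0.07):
    # L2-normalize so that dot products become cosine similarities.
    train_feats = F.normalize(train_feats, dim=1)
    test_feats = F.normalize(test_feats, dim=1)

    # Similarity of every test feature to every train feature.
    sims = test_feats @ train_feats.t()            # (n_test, n_train)
    topk_sims, topk_idx = sims.topk(k, dim=1)      # k nearest neighbors
    topk_labels = train_labels[topk_idx]           # (n_test, k)

    # Each neighbor votes for its class, weighted by exp(similarity / T).
    weights = (topk_sims / temperature).exp()
    votes = torch.zeros(test_feats.size(0), num_classes,
                        device=test_feats.device)
    votes.scatter_add_(1, topk_labels, weights)
    return votes.argmax(dim=1)                     # predicted class indices
```

Given predictions from `knn_classify`, top-1 accuracy is simply `(preds == val_labels).float().mean()`.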

Methods used in the Paper

| Method | Type |
| --- | --- |
| Residual Connection | Skip Connections |
| Softmax | Output Functions |
| Multi-Head Attention | Attention Modules |
| BPE | Subword Segmentation |
| Layer Normalization | Normalization |
| Adam | Stochastic Optimization |
| Dropout | Regularization |
| Label Smoothing | Regularization |
| Dense Connections | Feedforward Networks |
| Scaled Dot-Product Attention | Attention Mechanisms |
| Transformer | Transformers |
| Vision Transformer | Image Models |
| k-NN | Non-Parametric Classification |
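
Several of the components listed above (Scaled Dot-Product Attention, Multi-Head Attention, Layer Normalization, Dense Connections) are the standard Transformer building blocks reused by the Vision Transformer backbone; the self-attention maps of the final layer are also what reveal the object segmentations discussed in the abstract. As a reference point, single-head scaled dot-product attention can be sketched as follows; the function name and shapes are illustrative, not tied to any particular implementation.

```python
import math
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, num_tokens, dim). In a ViT the tokens are the patch
    # embeddings plus a [CLS] token.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    attn = F.softmax(scores, dim=-1)   # attention weights over tokens
    return attn @ v, attn              # weighted values and the weights themselves
```

Multi-head attention runs several such operations in parallel on learned linear projections of the inputs and concatenates the results.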