Masked Autoencoders Are Scalable Vision Learners

11 Nov 2021  ·  Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick

This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.
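
The abstract's two core designs, high-ratio random masking and an asymmetric encoder-decoder, can be sketched in a few dozen lines. The following is a minimal, illustrative PyTorch sketch, not the authors' reference implementation: TinyMAE, random_masking, and all dimensions are placeholder names, and patchification, positional embeddings, and the normalized-pixel target are omitted for brevity.

```python
import torch
import torch.nn as nn


def random_masking(patches, mask_ratio=0.75):
    """Keep a random 25% of patch tokens per sample (75% masked, as in the paper)."""
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N, device=patches.device)   # per-sample random scores
    ids_shuffle = torch.argsort(noise, dim=1)         # random permutation of patch indices
    ids_restore = torch.argsort(ids_shuffle, dim=1)   # inverse permutation

    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(B, N, device=patches.device)    # 1 = masked, 0 = visible
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)         # back in original patch order
    return visible, mask, ids_restore


class TinyMAE(nn.Module):
    """Toy asymmetric encoder-decoder: the encoder never sees mask tokens."""

    def __init__(self, dim=64, dec_dim=32, patch_pixels=16 * 16 * 3):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.enc_to_dec = nn.Linear(dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dec_dim, nhead=4, batch_first=True), num_layers=1)
        self.to_pixels = nn.Linear(dec_dim, patch_pixels)

    def forward(self, patches, mask_ratio=0.75):
        visible, mask, ids_restore = random_masking(patches, mask_ratio)
        latent = self.encoder(visible)                # heavy encoder runs on ~25% of tokens
        x = self.enc_to_dec(latent)

        # Append mask tokens and unshuffle so every patch position is filled again.
        B, N = ids_restore.shape
        mask_tokens = self.mask_token.expand(B, N - x.shape[1], -1)
        x = torch.cat([x, mask_tokens], dim=1)
        x = torch.gather(x, 1, ids_restore.unsqueeze(-1).expand(-1, -1, x.shape[-1]))

        pred = self.to_pixels(self.decoder(x))        # per-patch pixel predictions
        return pred, mask                             # train with MSE on masked patches only


# Example: 2 "images", each already split into 196 patch embeddings of width 64.
model = TinyMAE()
pred, mask = model(torch.randn(2, 196, 64))
print(pred.shape, mask.shape)  # torch.Size([2, 196, 768]) torch.Size([2, 196])
```

The asymmetry is where the reported speedup comes from: the large encoder processes only the ~25% of visible patches, while mask tokens are handled by the lightweight decoder, and the reconstruction loss is computed only on the masked positions.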


Results from the Paper


| Task                                 | Dataset      | Model           | Metric           | Value | Global Rank |
|--------------------------------------|--------------|-----------------|------------------|-------|-------------|
| Semantic Segmentation                | ADE20K       | ViT-L (MAE)     | Validation mIoU  | 53.6  | #10         |
| Semantic Segmentation                | ADE20K       | ViT-B (MAE)     | Validation mIoU  | 48.1  | #17         |
| Object Detection                     | COCO minival | ViT-B (MAE)     | box AP           | 50.3  | #26         |
| Object Detection                     | COCO minival | ViT-L (MAE)     | box AP           | 53.3  | #18         |
| Image Classification                 | ImageNet     | ViT-H448 (MAE)  | Top 1 Accuracy   | 87.8% | #21         |
| Image Classification                 | ImageNet     | ViT-H448 (MAE)  | Number of params | 632M  | #12         |
| Image Classification                 | ImageNet     | ViT-H (MAE)     | Top 1 Accuracy   | 86.9% | #31         |
| Image Classification                 | ImageNet     | ViT-L (MAE)     | Top 1 Accuracy   | 85.9% | #55         |
| Image Classification                 | ImageNet     | ViT-B (MAE)     | Top 1 Accuracy   | 83.6% | #129        |
| Self-Supervised Image Classification | ImageNet     | ViT-L (MAE)     | Top 1 Accuracy   | 73.5% | #48         |
