Exploring Target Representations for Masked Autoencoders

8 Sep 2022  ·  Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, Rongrong Ji

Masked autoencoders have become popular training paradigms for self-supervised visual representation learning. These models randomly mask a portion of the input and reconstruct the masked portion according to target representations. In this paper, we first show that a careful choice of the target representation is unnecessary for learning good representations, since different targets tend to derive similarly behaved models. Driven by this observation, we propose a multi-stage masked distillation pipeline and use a randomly initialized model as the teacher, enabling us to effectively train high-capacity models without any effort to carefully design target representations. Interestingly, we further explore using teachers of larger capacity, obtaining distilled students with remarkable transferring ability. On tasks of classification, transfer learning, object detection, and semantic segmentation, the proposed method of masked knowledge distillation with bootstrapped teachers (dBOT) outperforms previous self-supervised methods by nontrivial margins. We hope our findings, as well as the proposed method, motivate people to rethink the role of target representations in pre-training masked autoencoders. The code and pre-trained models are publicly available at https://github.com/liuxingbin/dbot.
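
The sketch below illustrates the core idea described in the abstract: a student predicts a frozen teacher's representations at masked positions, and after each stage the trained student is copied into the teacher (bootstrapping). It is a minimal illustration only, not the official dBOT implementation; the module shapes, the zeroing of masked tokens, the smooth-L1 loss, and all names (`random_mask`, `distillation_step`, `mask_ratio`) are simplifying assumptions.

```python
# Minimal sketch of multi-stage masked distillation with a bootstrapped teacher.
# All architectural details are placeholders; see https://github.com/liuxingbin/dbot
# for the actual method.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_mask(num_patches: int, mask_ratio: float, device) -> torch.Tensor:
    """Return a boolean mask with `mask_ratio` of the patches set to True (masked)."""
    num_masked = int(num_patches * mask_ratio)
    ids = torch.rand(num_patches, device=device).argsort()
    mask = torch.zeros(num_patches, dtype=torch.bool, device=device)
    mask[ids[:num_masked]] = True
    return mask


def distillation_step(student: nn.Module, teacher: nn.Module,
                      patches: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
    """One step: the student predicts the teacher's patch-wise features at masked positions."""
    B, N, D = patches.shape
    mask = random_mask(N, mask_ratio, patches.device)

    with torch.no_grad():                      # teacher is frozen within a stage
        target = teacher(patches)              # (B, N, D) target representations

    # Simplification: masked tokens are zeroed; the paper's encoder drops them instead.
    student_in = patches.clone()
    student_in[:, mask] = 0.0
    pred = student(student_in)                 # (B, N, D)

    # Distill only on masked positions (loss choice is an assumption).
    return F.smooth_l1_loss(pred[:, mask], target[:, mask])


if __name__ == "__main__":
    # Tiny stand-ins for ViT encoders so the sketch runs end to end.
    dim = 64
    student = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    teacher = copy.deepcopy(student)           # stage 0: randomly initialized teacher
    opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

    for stage in range(3):                     # multi-stage pipeline
        for _ in range(10):                    # a few toy training steps per stage
            patches = torch.randn(8, 196, dim)  # fake patch embeddings
            loss = distillation_step(student, teacher, patches)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Bootstrap: the trained student becomes the next stage's frozen teacher.
        teacher = copy.deepcopy(student)
```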

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | ADE20K | dBOT ViT-B (CLIP) | Validation mIoU | 52.9 | #78 |
| Semantic Segmentation | ADE20K | dBOT ViT-L (CLIP) | Validation mIoU | 56.2 | #35 |
| Semantic Segmentation | ADE20K | dBOT ViT-B | Validation mIoU | 50.8 | #102 |
| Semantic Segmentation | ADE20K | dBOT ViT-L | Validation mIoU | 55.2 | #43 |
| Object Detection | COCO test-dev | dBOT ViT-B (CLIP) | box mAP | 53.6 | #55 |
| Object Detection | COCO test-dev | dBOT ViT-B | box mAP | 53.5 | #56 |
| Object Detection | COCO test-dev | dBOT ViT-L (CLIP) | box mAP | 56.8 | #36 |
| Object Detection | COCO test-dev | dBOT ViT-L | box mAP | 56.1 | #40 |
| Instance Segmentation | COCO test-dev | dBOT ViT-B (CLIP) | mask AP | 46.2 | #35 |
| Instance Segmentation | COCO test-dev | dBOT ViT-B | mask AP | 46.3 | #34 |
| Instance Segmentation | COCO test-dev | dBOT ViT-L (CLIP) | mask AP | 48.8 | #24 |
| Instance Segmentation | COCO test-dev | dBOT ViT-L | mask AP | 48.3 | #27 |
| Image Classification | ImageNet | dBOT ViT-H (CLIP as Teacher) | Top 1 Accuracy | 88.2% | #66 |
| Image Classification | ImageNet | dBOT ViT-L (CLIP as Teacher) | Top 1 Accuracy | 87.8% | #75 |
| Image Classification | ImageNet | dBOT ViT-B (CLIP as Teacher) | Top 1 Accuracy | 85.7% | #197 |
| Self-Supervised Image Classification | ImageNet (finetuned) | dBOT (ViT-H/14) | Number of Params | 632M | #7 |
| Self-Supervised Image Classification | ImageNet (finetuned) | dBOT (ViT-H/14) | Top 1 Accuracy | 88.0% | #6 |