Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling

9 Jan 2023  ·  Keyu Tian, Yi Jiang, Qishuai Diao, Chen Lin, Liwei Wang, Zehuan Yuan

We identify and overcome two key obstacles to extending the success of BERT-style pre-training, i.e. masked image modeling, to convolutional networks (convnets): (i) the convolution operation cannot handle irregular, randomly masked input images; (ii) the single-scale nature of BERT pre-training is inconsistent with the hierarchical structure of convnets. For (i), we treat unmasked pixels as sparse voxels of 3D point clouds and use sparse convolution to encode them. This is the first use of sparse convolution for 2D masked modeling. For (ii), we develop a hierarchical decoder to reconstruct images from multi-scale encoded features. Our method, called Sparse masKed modeling (SparK), is general: it can be used directly on any convolutional model without backbone modifications. We validate it on both classical (ResNet) and modern (ConvNeXt) models: on three downstream tasks, it surpasses both state-of-the-art contrastive learning and transformer-based masked modeling by similarly large margins (around +1.0%). Improvements on object detection and instance segmentation are more substantial (up to +3.5%), verifying the strong transferability of the learned features. We also observe favorable scaling behavior, with larger models gaining more. All this evidence reveals a promising future for generative pre-training on convnets. Codes and models are released at
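The core idea behind (i) can be illustrated in a few lines: randomly mask a grid of image patches, then keep only the visible patches as a sparse set of (coordinate, feature) pairs, the same bookkeeping sparse convolution libraries use for the active sites of a 3D point cloud. The sketch below is a minimal NumPy illustration of that data layout, not the authors' implementation; the function names, the 14×14 patch grid, and the 60% mask ratio are illustrative assumptions.

```python
import numpy as np

def random_patch_mask(h_patches, w_patches, mask_ratio=0.6, seed=0):
    """Randomly mask a grid of patches; returns a boolean keep-mask (True = visible)."""
    rng = np.random.default_rng(seed)
    n = h_patches * w_patches
    n_keep = int(round(n * (1 - mask_ratio)))
    keep = np.zeros(n, dtype=bool)
    keep[rng.choice(n, size=n_keep, replace=False)] = True
    return keep.reshape(h_patches, w_patches)

def to_sparse(features, keep):
    """Gather only visible patches into a sparse (coords, values) form,
    analogous to how sparse convolution stores the active sites of a point cloud.
    Masked positions are simply absent, so convolution never reads them."""
    coords = np.argwhere(keep)   # (N_visible, 2) patch coordinates
    values = features[keep]      # (N_visible, C) features at those sites
    return coords, values

# toy example: a 14x14 patch grid with 8-dim features, 60% of patches masked
feats = np.random.randn(14, 14, 8)
keep = random_patch_mask(14, 14, mask_ratio=0.6)
coords, values = to_sparse(feats, keep)
```

A sparse convolution library (e.g. MinkowskiEngine or torchsparse) would then convolve only over `coords`/`values`, which is what lets a convnet encode an irregular, heavily masked image without the masked regions leaking into the computation.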



| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Instance Segmentation | COCO 2017 val | SparK (ConvNeXt V1-B Mask R-CNN) | mask AP | 45.1 | #1 |
| | | | mask AP* | 45.1 | #1 |
| | | | AP | 45.1 | #1 |
| Image Classification | ImageNet | SparK (ConvNeXt-Large, 384) | Top 1 Accuracy | 86.0% | #175 |
| | | | Number of params | 198M | #887 |
| Self-Supervised Image Classification | ImageNet (finetuned) | ConvNeXt-Base (SparK pre-training) | Number of Params | 89M | #32 |
| | | | Top 1 Accuracy | 84.8% | #26 |
| Self-Supervised Image Classification | ImageNet (finetuned) | ConvNeXt-Small (SparK pre-training) | Number of Params | 50M | #46 |
| | | | Top 1 Accuracy | 84.1% | #35 |
| Self-Supervised Image Classification | ImageNet (finetuned) | ResNet-200 (SparK pre-training) | Number of Params | 65M | #44 |
| | | | Top 1 Accuracy | 83.1% | #46 |
| Self-Supervised Image Classification | ImageNet (finetuned) | ResNet-152 (SparK pre-training) | Number of Params | 60M | #45 |
| | | | Top 1 Accuracy | 82.7% | #49 |
| Self-Supervised Image Classification | ImageNet (finetuned) | ResNet-101 (SparK pre-training) | Number of Params | 44M | #47 |
| | | | Top 1 Accuracy | 82.2% | #52 |
| Self-Supervised Image Classification | ImageNet (finetuned) | ResNet-50 (SparK pre-training) | Number of Params | 26M | #48 |
| | | | Top 1 Accuracy | 80.6% | #55 |
| Self-Supervised Image Classification | ImageNet (finetuned) | SparK (ConvNeXt-Large, 384) | Number of Params | 198M | #25 |
| | | | Top 1 Accuracy | 86.0% | #18 |
| Self-Supervised Image Classification | ImageNet (finetuned) | SparK (ConvNeXt-Large) | Number of Params | 198M | #25 |
| | | | Top 1 Accuracy | 85.4% | #23 |