Bottleneck Transformers for Visual Recognition

We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework, surpassing the previous best published single model and single scale results of ResNeSt evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 1.64x faster in compute time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.
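
The change described above is simple enough to sketch. Below is a minimal PyTorch illustration of a single BoT block, i.e. a ResNet bottleneck block whose 3x3 spatial convolution is replaced by global multi-head self-attention over all spatial positions. This is a sketch, not the authors' implementation: the class names (SimpleMHSA, BoTBlock) and the head/dimension choices are illustrative, and the relative position encodings used in the paper's self-attention layer are omitted for brevity.

```python
# Minimal sketch of a BoT block: a ResNet bottleneck block whose 3x3 spatial
# convolution is replaced by global multi-head self-attention (MHSA).
# Class names are hypothetical; the paper's relative position encodings are omitted.
import torch
import torch.nn as nn


class SimpleMHSA(nn.Module):
    """All-to-all self-attention over the H*W positions of a feature map."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)  # each: (b, c, h, w)

        def split(t):  # -> (b, heads, h*w, head_dim)
            return t.reshape(b, self.heads, c // self.heads, h * w).transpose(2, 3)

        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (b, heads, h*w, h*w)
        out = attn.softmax(dim=-1) @ v                  # (b, heads, h*w, head_dim)
        return out.transpose(2, 3).reshape(b, c, h, w)


class BoTBlock(nn.Module):
    """Bottleneck block: 1x1 conv -> MHSA (instead of 3x3 conv) -> 1x1 conv, residual."""

    def __init__(self, in_dim, bottleneck_dim, heads=4):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv2d(in_dim, bottleneck_dim, 1, bias=False),
            nn.BatchNorm2d(bottleneck_dim), nn.ReLU(inplace=True))
        self.mhsa = SimpleMHSA(bottleneck_dim, heads)
        self.expand = nn.Sequential(
            nn.Conv2d(bottleneck_dim, in_dim, 1, bias=False),
            nn.BatchNorm2d(in_dim))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.expand(self.mhsa(self.reduce(x)))
        return self.relu(out + x)  # identity shortcut


if __name__ == "__main__":
    x = torch.randn(2, 2048, 14, 14)             # a final-stage-like feature map
    print(BoTBlock(2048, 512)(x).shape)          # torch.Size([2, 2048, 14, 14])
```

In the full BoTNet design, only the three bottleneck blocks of the final ResNet stage are replaced in this way; the rest of the backbone is left unchanged.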


Datasets

COCO minival, ImageNet


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Object Detection | COCO minival | BoTNet 152 (Mask R-CNN, single scale, 72 epochs) | box AP | 49.5 | # 57 |
| Object Detection | COCO minival | BoTNet 152 (Mask R-CNN, single scale, 72 epochs) | AP50 | 71 | # 11 |
| Object Detection | COCO minival | BoTNet 152 (Mask R-CNN, single scale, 72 epochs) | AP75 | 54.2 | # 19 |
| Instance Segmentation | COCO minival | BoTNet 152 (Mask R-CNN, single scale, 72 epochs) | mask AP | 43.7 | # 37 |
| Object Detection | COCO minival | BoTNet 200 (Mask R-CNN, single scale, 72 epochs) | box AP | 49.7 | # 56 |
| Object Detection | COCO minival | BoTNet 200 (Mask R-CNN, single scale, 72 epochs) | AP50 | 71.3 | # 10 |
| Object Detection | COCO minival | BoTNet 200 (Mask R-CNN, single scale, 72 epochs) | AP75 | 54.6 | # 18 |
| Object Detection | COCO minival | BoTNet 50 (72 epochs) | box AP | 45.9 | # 83 |
| Instance Segmentation | COCO minival | BoTNet 50 (72 epochs) | mask AP | 40.7 | # 49 |
| Instance Segmentation | COCO minival | BoTNet 200 (Mask R-CNN, single scale, 72 epochs) | mask AP | 44.4 | # 33 |
| Image Classification | ImageNet | SENet-152 | Top 1 Accuracy | 82.2% | # 377 |
| Image Classification | ImageNet | SENet-152 | Top 5 Accuracy | 95.9% | # 91 |
| Image Classification | ImageNet | SENet-152 | Number of params | 66.6M | # 624 |
| Image Classification | ImageNet | SENet-350 | Top 1 Accuracy | 83.8% | # 251 |
| Image Classification | ImageNet | SENet-350 | Top 5 Accuracy | 96.6% | # 63 |
| Image Classification | ImageNet | BoTNet T6 | Top 1 Accuracy | 84% | # 236 |
| Image Classification | ImageNet | BoTNet T6 | Top 5 Accuracy | 96.7% | # 58 |
| Image Classification | ImageNet | BoTNet T6 | Number of params | 53.9M | # 586 |
| Image Classification | ImageNet | BoTNet T7-320 | Top 1 Accuracy | 84.2% | # 218 |
| Image Classification | ImageNet | BoTNet T7-320 | Top 5 Accuracy | 96.9% | # 50 |
| Image Classification | ImageNet | BoTNet T7-320 | Number of params | 75.1M | # 639 |
| Image Classification | ImageNet | ResNet-101 | Top 1 Accuracy | 80% | # 505 |
| Image Classification | ImageNet | ResNet-101 | Top 5 Accuracy | 95% | # 131 |
| Image Classification | ImageNet | ResNet-101 | Number of params | 44.4M | # 553 |
| Image Classification | ImageNet | ResNet-50 | Top 1 Accuracy | 78.8% | # 571 |
| Image Classification | ImageNet | ResNet-50 | Top 5 Accuracy | 94.5% | # 156 |
| Image Classification | ImageNet | ResNet-50 | Number of params | 25.5M | # 470 |
| Image Classification | ImageNet | SENet-50 | Top 1 Accuracy | 79.4% | # 531 |
| Image Classification | ImageNet | SENet-50 | Top 5 Accuracy | 94.6% | # 149 |
| Image Classification | ImageNet | SENet-50 | Number of params | 28.02M | # 502 |
| Image Classification | ImageNet | SENet-101 | Top 1 Accuracy | 81.4% | # 440 |
| Image Classification | ImageNet | SENet-101 | Top 5 Accuracy | 95.7% | # 102 |
| Image Classification | ImageNet | SENet-101 | Number of params | 49.2M | # 574 |

Results from Other Papers


| Task | Dataset | Model | Metric | Value | Rank |
| --- | --- | --- | --- | --- | --- |
| Image Classification | ImageNet | BoTNet T5 | Top 1 Accuracy | 83.5% | # 277 |
| Image Classification | ImageNet | BoTNet T5 | Top 5 Accuracy | 96.5% | # 67 |
| Image Classification | ImageNet | BoTNet T5 | Number of params | 75.1M | # 16 |
| Image Classification | ImageNet | BoTNet T5 | GFLOPs | 19.3 | # 322 |
| Image Classification | ImageNet | BoTNet T3 | Top 1 Accuracy | 81.7% | # 421 |
| Image Classification | ImageNet | BoTNet T3 | Top 5 Accuracy | 95.8% | # 94 |
| Image Classification | ImageNet | BoTNet T3 | Number of params | 33.5M | # 13 |
| Image Classification | ImageNet | BoTNet T3 | GFLOPs | 7.3 | # 229 |
| Image Classification | ImageNet | BoTNet T4 | Top 1 Accuracy | 82.8% | # 331 |
| Image Classification | ImageNet | BoTNet T4 | Top 5 Accuracy | 96.3% | # 80 |
| Image Classification | ImageNet | BoTNet T4 | Number of params | 54.7M | # 15 |
| Image Classification | ImageNet | BoTNet T4 | GFLOPs | 10.9 | # 273 |
| Image Classification | ImageNet | BoTNet T7 | Top 1 Accuracy | 84.7% | # 195 |
| Image Classification | ImageNet | BoTNet T7 | Top 5 Accuracy | 97% | # 45 |
| Image Classification | ImageNet | BoTNet T7 | Number of params | 75.1M | # 639 |
