Vision Transformer with Deformable Attention

Transformers have recently shown superior performance on various vision tasks. The large, sometimes even global, receptive field endows Transformer models with higher representation power than their CNN counterparts. Nevertheless, simply enlarging the receptive field also raises several concerns. On the one hand, using dense attention, e.g., in ViT, leads to excessive memory and computational cost, and features can be influenced by irrelevant parts that lie beyond the regions of interest. On the other hand, the sparse attention adopted in PVT or Swin Transformer is data-agnostic and may limit the ability to model long-range relations. To mitigate these issues, we propose a novel deformable self-attention module, in which the positions of key and value pairs in self-attention are selected in a data-dependent way. This flexible scheme enables the self-attention module to focus on relevant regions and capture more informative features. On this basis, we present the Deformable Attention Transformer, a general backbone model with deformable attention for both image classification and dense prediction tasks. Extensive experiments show that our models achieve consistently improved results on comprehensive benchmarks. Code is available at https://github.com/LeapLabTHU/DAT.
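In outline, the mechanism works as follows: a small offset network takes the query features, predicts 2-D offsets for a uniform grid of reference points, and keys/values are then sampled from the feature map at the shifted locations before standard multi-head attention is applied. The sketch below illustrates this idea in PyTorch. It is a minimal, hypothetical rendition, not the authors' implementation: names such as `DeformableAttention` and `offset_net` are our own, and details from the official repository (offset scaling, relative position bias, per-group offsets) are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformableAttention(nn.Module):
    """Toy deformable attention: keys/values are gathered at
    data-dependent locations predicted from the query features."""

    def __init__(self, dim, num_heads=4, n_points=8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.n_points = n_points  # sampling grid is n_points x n_points
        # small network predicting a 2-D offset per reference point
        self.offset_net = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.GELU(),
            nn.Conv2d(dim, 2, 1),
        )
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, H, W, C) feature map
        B, H, W, C = x.shape
        p = self.n_points
        q = self.q_proj(x)                                     # (B, H, W, C)

        # uniform grid of reference points in [-1, 1], stored as (y, x)
        ys = torch.linspace(-1.0, 1.0, p, device=x.device)
        xs = torch.linspace(-1.0, 1.0, p, device=x.device)
        ref = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)

        # predict offsets from (downsampled) query features
        q_map = q.permute(0, 3, 1, 2)                          # (B, C, H, W)
        offsets = self.offset_net(F.adaptive_avg_pool2d(q_map, p))
        offsets = offsets.permute(0, 2, 3, 1)                  # (B, p, p, 2)

        # sample the input at deformed points; grid_sample expects (x, y)
        grid = (ref.unsqueeze(0) + offsets).flip(-1).clamp(-1.0, 1.0)
        sampled = F.grid_sample(x.permute(0, 3, 1, 2), grid,
                                align_corners=True)            # (B, C, p, p)
        sampled = sampled.flatten(2).transpose(1, 2)           # (B, p*p, C)

        # multi-head attention: every query attends to the sampled keys/values
        def heads(t, L):
            return t.reshape(B, L, self.num_heads, self.head_dim).transpose(1, 2)

        qh = heads(q.reshape(B, H * W, C), H * W)
        kh = heads(self.k_proj(sampled), p * p)
        vh = heads(self.v_proj(sampled), p * p)
        attn = (qh @ kh.transpose(-2, -1)) * self.scale        # (B, h, HW, p*p)
        out = (attn.softmax(dim=-1) @ vh).transpose(1, 2).reshape(B, H, W, C)
        return self.out_proj(out)
```

A quick shape check: `DeformableAttention(dim=64)(torch.randn(2, 14, 14, 64))` returns a `(2, 14, 14, 64)` tensor. Note the contrast with the two attention flavors criticized in the abstract: each query attends to only `n_points**2` locations (unlike dense attention), and those locations shift with the input rather than being fixed (unlike data-agnostic sparse schemes).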


Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Semantic Segmentation | ADE20K | DAT-B (UperNet) | Validation mIoU | 49.38 | #126 |
| Semantic Segmentation | ADE20K | DAT-B (UperNet) | Params (M) | 121 | #27 |
| Semantic Segmentation | ADE20K | DAT-S (UperNet) | Validation mIoU | 48.31 | #140 |
| Semantic Segmentation | ADE20K | DAT-S (UperNet) | Params (M) | 81 | #35 |
| Semantic Segmentation | ADE20K | DAT-T (UperNet) | Validation mIoU | 45.54 | #182 |
| Semantic Segmentation | ADE20K | DAT-T (UperNet) | Params (M) | 60 | #41 |
| Object Detection | COCO test-dev | DAT-S (RetinaNet) | box mAP | 47.9 | #107 |
| Object Detection | COCO test-dev | DAT-S (RetinaNet) | AP50 | 69.6 | #39 |
| Object Detection | COCO test-dev | DAT-S (RetinaNet) | AP75 | 51.2 | #63 |
| Object Detection | COCO test-dev | DAT-S (RetinaNet) | APS | 32.3 | #34 |
| Object Detection | COCO test-dev | DAT-S (RetinaNet) | APM | 51.8 | #48 |
| Object Detection | COCO test-dev | DAT-S (RetinaNet) | APL | 63.4 | #35 |
| Image Classification | ImageNet | DAT-S | Top 1 Accuracy | 83.7% | #365 |
| Image Classification | ImageNet | DAT-S | Number of params | 50M | #725 |
| Image Classification | ImageNet | DAT-S | GFLOPs | 9.0 | #285 |
| Image Classification | ImageNet | DAT-B (384 res, IN-1K only) | Top 1 Accuracy | 84.8% | #270 |
| Image Classification | ImageNet | DAT-B (384 res, IN-1K only) | Number of params | 88M | #832 |
| Image Classification | ImageNet | DAT-B (384 res, IN-1K only) | GFLOPs | 49.8 | #422 |
| Image Classification | ImageNet | DAT-T | Top 1 Accuracy | 82.0% | #530 |
| Image Classification | ImageNet | DAT-T | Number of params | 29M | #641 |
| Image Classification | ImageNet | DAT-T | GFLOPs | 4.6 | #215 |
