XCiT: Cross-Covariance Image Transformers

Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens, i.e. words or image patches, and enables flexible modelling of image data beyond the local interactions of convolutions. This flexibility, however, comes with a quadratic complexity in time and memory, hindering application to long sequences and high-resolution images. We propose a "transposed" version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens and allows efficient processing of high-resolution images. Our cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. We validate the effectiveness and generality of XCiT by reporting excellent results on multiple vision benchmarks, including image classification and self-supervised feature learning on ImageNet-1k, object detection and instance segmentation on COCO, and semantic segmentation on ADE20k.
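
To make the "transposed" attention concrete, below is a minimal single-head NumPy sketch of the cross-covariance attention (XCA) described in the abstract: queries and keys are L2-normalized per channel, their d x d cross-covariance matrix is softmax-normalized, and the result re-mixes the value channels, so the cost grows linearly with the number of tokens. The function names, the fixed temperature argument, and the toy shapes are illustrative assumptions, not the paper's exact formulation or released code, and the full XCiT layers are more elaborate than this sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def l2_normalize(x, axis, eps=1e-6):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_covariance_attention(Q, K, V, temperature=1.0):
    """Single-head cross-covariance attention (XCA) sketch.

    Q, K, V: (N, d) arrays -- N tokens (image patches), d feature channels.
    Token self-attention builds an N x N map (quadratic in N); here a d x d
    channel-to-channel map is built from the cross-covariance of the
    L2-normalized queries and keys, so the cost is O(N * d^2), linear in N.
    """
    Qn = l2_normalize(Q, axis=0)  # normalize each channel over the N tokens
    Kn = l2_normalize(K, axis=0)
    # (d, d) attention map: query channel i attends to key/value channel j
    A = softmax((Qn.T @ Kn) * temperature, axis=-1)
    # Each output channel is a softmax-weighted mix of the value channels.
    return V @ A.T  # (N, d)

# Toy usage: 196 patch tokens with 64 channels each (hypothetical sizes).
rng = np.random.default_rng(0)
N, d = 196, 64
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = cross_covariance_attention(Q, K, V)
print(out.shape)  # (196, 64)
```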

Results (metric value, with global leaderboard rank in parentheses)

Semantic Segmentation, ADE20K (Validation mIoU)
XCiT-S24/8 (UperNet): 48.1 (rank #13)
XCiT-M24/8 (Semantic-FPN): 46.9 (rank #17)
XCiT-M24/8 (UperNet): 48.4 (rank #11)
XCiT-S12/8 (Semantic-FPN): 44.2 (rank #30)
XCiT-S12/8 (UperNet): 46.6 (rank #18)
XCiT-S24/8 (Semantic-FPN): 47.1 (rank #15)

Object Detection, COCO minival (box AP)
XCiT-S24/8: 48.1 (rank #29)
XCiT-M24/8: 48.5 (rank #27)

Instance Segmentation, COCO minival (mask AP)
XCiT-M24/8: 43.7 (rank #18)
XCiT-S24/8: 43.0 (rank #20)

Self-Supervised Image Classification, ImageNet
DINO (XCiT-M24/8): Top 1 Accuracy 80.3% (rank #5), Top 1 Accuracy (kNN, k=20) 77.9 (rank #4), 84M params (rank #29)
DINO (XCiT-S12/8): Top 1 Accuracy 79.2% (rank #10), Top 1 Accuracy (kNN, k=20) 77.1 (rank #6), 26M params (rank #36)
DINO (XCiT-M24/16): Top 1 Accuracy 78.8% (rank #12), Top 1 Accuracy (kNN, k=20) 76.4 (rank #7), 84M params (rank #29)
DINO (XCiT-S12/16): Top 1 Accuracy 77.8% (rank #17), Top 1 Accuracy (kNN, k=20) 76.0 (rank #9), 26M params (rank #36)
DINO (XCiT-M24/8 384): Top 1 Accuracy 80.9% (rank #3), 84M params (rank #29)

Image Classification, ImageNet (Top 1 Accuracy)
XCiT-S24: 85.6% (rank #59)
XCiT-S12: 85.1% (rank #74)
XCiT-M24: 85.8% (rank #52)
XCiT-L24: 86% (rank #46)
