FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer

27 Nov 2021  ·  Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, Shuchang Zhou

Network quantization significantly reduces model inference complexity and has been widely used in real-world deployments. However, most existing quantization methods have been developed mainly for Convolutional Neural Networks (CNNs) and suffer severe degradation when applied to fully quantized vision transformers. In this work, we demonstrate that many of these difficulties arise from serious inter-channel variation in LayerNorm inputs, and we present Power-of-Two Factor (PTF), a systematic method to reduce the performance degradation and inference complexity of fully quantized vision transformers. In addition, observing an extremely non-uniform distribution in attention maps, we propose Log-Int-Softmax (LIS), which sustains that distribution and simplifies inference by using 4-bit quantization and the BitShift operator. Comprehensive experiments on various transformer-based architectures and benchmarks show that our Fully Quantized Vision Transformer (FQ-ViT) outperforms previous works while using even lower bit-width on attention maps. For instance, we reach 84.89% top-1 accuracy with ViT-L on ImageNet and 50.8 mAP with Cascade Mask R-CNN (Swin-S) on COCO. To our knowledge, we are the first to achieve near-lossless accuracy (~1% degradation) on fully quantized vision transformers. The code is available at https://github.com/megvii-research/FQ-ViT.
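The abstract only names the two components, so the sketch below is a minimal NumPy illustration of what they could look like, under my own assumptions: a per-channel power-of-two factor on top of one layer-wise scale for LayerNorm inputs (PTF), and 4-bit log2 quantization of attention probabilities whose dequantization reduces to a bit-shift (LIS). The function names, the min-max calibration, and the choice of alpha are illustrative, not the authors' released implementation; see the linked repository for that.

```python
# Hedged sketch of the two ideas named in the abstract; not the official code.
import numpy as np

def ptf_quantize_layernorm_input(x, num_bits=8, k=3):
    """Quantize a LayerNorm input (..., C) with one layer-wise scale s plus a
    per-channel Power-of-Two Factor 2**alpha, so channels with very different
    ranges can share s while only integer shifts differ per channel."""
    qmax = 2 ** (num_bits - 1) - 1
    # Layer-wise scale from the overall maximum magnitude (assumption:
    # simple min-max calibration; the paper may calibrate differently).
    s = np.abs(x).max() / (qmax * 2 ** k)
    # Per-channel factor alpha in {0, ..., k}, chosen so s * 2**alpha * qmax
    # covers that channel's own range.
    ch_max = np.abs(x).max(axis=tuple(range(x.ndim - 1)))          # shape (C,)
    alpha = np.clip(np.ceil(np.log2(ch_max / (s * qmax) + 1e-12)), 0, k)
    x_int = np.clip(np.round(x / (s * 2 ** alpha)), -qmax - 1, qmax)
    # On integer hardware, dequantization per channel is a bit-shift:
    # x ≈ (x_int << alpha) * s.
    return x_int, s, alpha.astype(np.int32)

def log_int_softmax(attn_scores, num_bits=4):
    """4-bit log2 quantization of attention probabilities: store
    q = round(-log2(p)); dequantization p ≈ 2**(-q) is a right shift."""
    p = np.exp(attn_scores - attn_scores.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    qmax = 2 ** num_bits - 1
    q = np.clip(np.round(-np.log2(p + 1e-12)), 0, qmax)
    p_hat = 2.0 ** (-q)   # stands in for the integer BitShift on real hardware
    return q.astype(np.int32), p_hat
```

For example, `ptf_quantize_layernorm_input(np.random.randn(2, 197, 768))` returns 8-bit integers, one float scale, and one small integer shift per channel, which is the property that lets the LayerNorm input stay fully quantized despite heavy inter-channel variation.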


Datasets

ImageNet · COCO

Results from the Paper


| Task         | Dataset  | Model            | Top-1 Accuracy (%) | Weight bits | Activation bits | Global Rank (Top-1) |
|--------------|----------|------------------|--------------------|-------------|-----------------|---------------------|
| Quantization | ImageNet | FQ-ViT (ViT-L)   | 85.03              | 8           | 8               | #1                  |
| Quantization | ImageNet | FQ-ViT (Swin-B)  | 82.97              | 8           | 8               | #3                  |
| Quantization | ImageNet | FQ-ViT (Swin-S)  | 82.71              | 8           | 8               | #4                  |
| Quantization | ImageNet | FQ-ViT (Swin-T)  | 80.51              | 8           | 8               | #6                  |
| Quantization | ImageNet | FQ-ViT (ViT-B)   | 83.31              | 8           | 8               | #2                  |
| Quantization | ImageNet | FQ-ViT (DeiT-B)  | 81.20              | 8           | 8               | #5                  |
| Quantization | ImageNet | FQ-ViT (DeiT-S)  | 79.17              | 8           | 8               | #7                  |
| Quantization | ImageNet | FQ-ViT (DeiT-T)  | 71.61              | 8           | 8               | #21                 |
