Global Filter Networks for Image Classification

Recent advances in self-attention and pure multi-layer perceptron (MLP) models for vision have shown great potential in achieving promising performance with fewer inductive biases. These models are generally based on learning interactions among spatial locations from raw data. The complexity of self-attention and MLP models grows quadratically as the image size increases, which makes these models hard to scale up when high-resolution features are required. In this paper, we present the Global Filter Network (GFNet), a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity. Our architecture replaces the self-attention layer in vision transformers with three key operations: a 2D discrete Fourier transform, an element-wise multiplication between frequency-domain features and learnable global filters, and a 2D inverse Fourier transform. We exhibit favorable accuracy/complexity trade-offs of our models on both ImageNet and downstream tasks. Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness. Code is available at https://github.com/raoyongming/GFNet
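The three key operations described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration of a single GFNet-style global filter step, not the authors' implementation (their released code uses PyTorch modules); the shapes and the complex filter `K` are illustrative assumptions:

```python
import numpy as np

def global_filter(x, K):
    """One GFNet-style global filter step:
    2D FFT -> element-wise multiply by a learnable frequency-domain
    filter -> 2D inverse FFT.

    x: (H, W, C) real-valued feature map.
    K: (H, W//2 + 1, C) complex filter (learnable parameter in GFNet;
       here it is simply passed in). The reduced width comes from using
       the real FFT, which stores only non-redundant frequencies.
    """
    X = np.fft.rfft2(x, axes=(0, 1))              # O(HW log HW) via FFT
    X = X * K                                     # global filtering in frequency domain
    return np.fft.irfft2(X, s=x.shape[:2], axes=(0, 1))

# Usage: with an all-ones filter the layer is an identity map.
x = np.random.randn(14, 14, 64)
y = global_filter(x, np.ones((14, 8, 64), dtype=complex))
```

By the convolution theorem, this element-wise product is equivalent to a depthwise circular convolution whose kernel is as large as the feature map, which is how a single layer attains a global receptive field at log-linear cost.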

PDF Abstract NeurIPS 2021

Results from the Paper


Ranked #8 on Image Classification on Stanford Cars (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Image Classification | CIFAR-10 | GFNet-H-B | Percentage correct | 99.0 | #12 | |
| Image Classification | CIFAR-10 | GFNet-H-B | PARAMS | 54M | #198 | |
| Image Classification | CIFAR-100 | GFNet-H-B | Percentage correct | 90.3 | #22 | |
| Image Classification | CIFAR-100 | GFNet-H-B | PARAMS | 54M | #161 | |
| Image Classification | Flowers-102 | GFNet-H-B | Accuracy | 98.8 | #14 | |
| Image Classification | Flowers-102 | GFNet-H-B | PARAMS | 54M | #44 | |
| Image Classification | ImageNet | GFNet-H-B | Top 1 Accuracy | 82.9% | #235 | |
| Image Classification | ImageNet | GFNet-H-B | Top 5 Accuracy | 96.2% | #71 | |
| Image Classification | ImageNet | GFNet-H-B | Number of params | 54M | #154 | |
| Domain Generalization | ImageNet-A | GFNet-S | Top-1 accuracy % | 14.3 | #12 | |
| Domain Generalization | ImageNet-C | GFNet-S | mean Corruption Error (mCE) | 53.8 | #17 | |
| Image Classification | Stanford Cars | GFNet-H-B | Accuracy | 93.2 | #8 | Yes |

Methods


No methods listed for this paper.