Fastformer: Additive Attention Can Be All You Need

20 Aug 2021  ·  Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

Transformer is a powerful model for text understanding. However, it is inefficient because its complexity is quadratic in the input sequence length. Although many methods have been proposed to accelerate the Transformer, they are still either inefficient on long sequences or not effective enough. In this paper, we propose Fastformer, an efficient Transformer model based on additive attention. Instead of modeling the pair-wise interactions between tokens, Fastformer first uses an additive attention mechanism to model global contexts, and then further transforms each token representation based on its interaction with the global context representations. In this way, Fastformer achieves effective context modeling with linear complexity. Extensive experiments on five datasets show that Fastformer is much more efficient than many existing Transformer models while achieving comparable or even better long-text modeling performance.
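Below is a minimal, single-head PyTorch sketch of the additive-attention idea described above: learnable scoring vectors summarize the queries into a global query, the global query is mixed into the keys element-wise to form a global key, and each value is then transformed by its interaction with that global key, so the whole layer runs in linear time. The class name, the parameter names (w_q, w_k), and the residual connection are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttentionSketch(nn.Module):
    """Illustrative single-head sketch of a Fastformer-style additive attention layer."""

    def __init__(self, d_model: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        # Learnable vectors that score tokens for the global summaries (assumed names).
        self.w_q = nn.Parameter(torch.randn(d_model) / d_model ** 0.5)
        self.w_k = nn.Parameter(torch.randn(d_model) / d_model ** 0.5)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q = self.query(x)
        k = self.key(x)
        v = self.value(x)

        # 1) Additive attention over queries -> one global query vector.
        #    Scoring each token with a learnable vector costs O(seq_len * d_model).
        alpha = F.softmax(q @ self.w_q, dim=1)            # (batch, seq_len)
        global_q = torch.einsum("bn,bnd->bd", alpha, q)   # (batch, d_model)

        # 2) Mix the global query into every key element-wise, then summarize
        #    the mixed keys into a global key vector the same way.
        p = k * global_q.unsqueeze(1)                     # (batch, seq_len, d_model)
        beta = F.softmax(p @ self.w_k, dim=1)             # (batch, seq_len)
        global_k = torch.einsum("bn,bnd->bd", beta, p)    # (batch, d_model)

        # 3) Transform each value by its interaction with the global key, so
        #    every token sees the global context at linear cost.
        u = v * global_k.unsqueeze(1)                     # (batch, seq_len, d_model)
        return self.out(u) + q                            # residual on queries (assumption)


# Usage: a (2, 128, 64) batch runs in time and memory linear in seq_len per layer.
layer = AdditiveAttentionSketch(d_model=64)
out = layer(torch.randn(2, 128, 64))
print(out.shape)  # torch.Size([2, 128, 64])
```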


Results from the Paper


 Ranked #1 on News Recommendation on MIND (using extra training data)

Task                 Dataset                         Model               Metric    Value   Global Rank
-------------------  ------------------------------  ------------------  --------  ------  -----------
Text Summarization   CNN / Daily Mail (Anonymized)   Fastformer          ROUGE-1   38.54   #11
Text Summarization   CNN / Daily Mail (Anonymized)   Fastformer          ROUGE-2   16.22   #6
Text Summarization   CNN / Daily Mail (Anonymized)   Fastformer          ROUGE-L   36.21   #8
News Recommendation  MIND                            Fastformer+PLM-NR   AUC       72.68   #1
News Recommendation  MIND                            Fastformer+PLM-NR   MRR       37.45   #1
News Recommendation  MIND                            Fastformer+PLM-NR   nDCG@5    41.51   #2
News Recommendation  MIND                            Fastformer+PLM-NR   nDCG@10   46.84   #1
News Recommendation  MIND                            Poolingformer       AUC       68.54   #4
News Recommendation  MIND                            Poolingformer       MRR       33.6    #5
News Recommendation  MIND                            Poolingformer       nDCG@5    36.69   #5
News Recommendation  MIND                            Poolingformer       nDCG@10   42.6    #4
News Recommendation  MIND                            Fastformer          AUC       69.11   #3
News Recommendation  MIND                            Fastformer          MRR       34.25   #3
News Recommendation  MIND                            Fastformer          nDCG@5    37.26   #3
News Recommendation  MIND                            Fastformer          nDCG@10   43.38   #3
Text Summarization   PubMed                          Fastformer          ROUGE-1   38.09   #27
Text Summarization   PubMed                          Fastformer          ROUGE-2   15.44   #20
Text Summarization   PubMed                          Fastformer          ROUGE-L   34.81   #19
