FNet: Mixing Tokens with Fourier Transforms

9 May 2021  ·  James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon

We show that Transformer encoder architectures can be massively sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear transformations, along with simple nonlinearities in feed-forward layers, are sufficient to model semantic relationships in several text classification tasks. Perhaps most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92% of the accuracy of BERT on the GLUE benchmark, but pre-trains and runs up to seven times faster on GPUs and twice as fast on TPUs. The resulting model, which we name FNet, scales very efficiently to long inputs, matching the accuracy of the most accurate "efficient" Transformers on the Long Range Arena benchmark, but training and running faster across all sequence lengths on GPUs and relatively shorter sequence lengths on TPUs. Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes: for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
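
The core change is simple to express in code: the self-attention sublayer is swapped for a fixed 2D discrete Fourier transform applied along the sequence and hidden dimensions, keeping only the real part, while the rest of the encoder block (residual connections, layer norms, feed-forward sublayer) stays the same. Below is a minimal JAX sketch of that idea, not the authors' released implementation; the callables `feed_forward`, `layer_norm1`, and `layer_norm2` are illustrative placeholders for standard Transformer sublayers.

```python
import jax.numpy as jnp


def fourier_mix(x):
    """Unparameterized token mixing: 2D DFT over the sequence and hidden
    dimensions of x (shape [batch, seq_len, hidden]), keeping the real part."""
    return jnp.fft.fft2(x, axes=(-2, -1)).real


def fnet_encoder_block(x, feed_forward, layer_norm1, layer_norm2):
    """Encoder block with the attention sublayer replaced by Fourier mixing.

    `feed_forward`, `layer_norm1`, and `layer_norm2` are hypothetical
    callables standing in for the usual Transformer sublayers.
    """
    x = layer_norm1(x + fourier_mix(x))   # mixing sublayer + residual + norm
    x = layer_norm2(x + feed_forward(x))  # feed-forward sublayer + residual + norm
    return x
```

Because the mixing sublayer has no learnable parameters and can use FFT routines, it avoids the quadratic attention matrix, which is where the reported speed and memory gains come from.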

Results from the Paper


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
---|---|---|---|---|---
Linguistic Acceptability | CoLA | FNet-Large | Accuracy | 78% | # 2
Semantic Textual Similarity | MRPC | FNet-Large | Accuracy | 88% | # 15
Natural Language Inference | MultiNLI | BERT-Large | Matched | 88 | # 12
Natural Language Inference | MultiNLI | BERT-Large | Mismatched | 88 | # 9
Natural Language Inference | MultiNLI | FNet-Large | Matched | 78 | # 28
Natural Language Inference | MultiNLI | FNet-Large | Mismatched | 76 | # 24
Natural Language Inference | QNLI | FNet-Large | Accuracy | 85% | # 25
Paraphrase Identification | Quora Question Pairs | FNet-Large | F1 | 85 | # 3
Natural Language Inference | RTE | FNet-Large | Accuracy | 69% | # 20
Sentiment Analysis | SST-2 Binary classification | FNet-Large | Accuracy | 94 | # 26
Semantic Textual Similarity | STS Benchmark | FNet-Large | Spearman Correlation | 0.84 | # 15
