Generating Long Sequences with Sparse Transformers

Transformers are powerful sequence models, but they require time and memory that grow quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
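To make the $O(n \sqrt{n})$ factorization concrete, here is a minimal NumPy sketch of a strided sparse attention mask in the spirit of the paper: each query position attends to a local window of the previous `stride` positions plus every `stride`-th earlier position. The function name and the dense boolean-mask representation are illustrative choices of this sketch, not the paper's implementation (which uses custom fused kernels); with `stride` $\approx \sqrt{n}$, each row has $O(\sqrt{n})$ nonzeros, giving $O(n \sqrt{n})$ total.

```python
import numpy as np

def strided_sparse_mask(n, stride):
    """Boolean causal attention mask with the strided sparsity pattern.

    Query position i may attend to:
      - the local window of the previous `stride` positions (inclusive of i);
      - every earlier position j with (i - j) % stride == 0.

    With stride ~ sqrt(n), each row has O(sqrt(n)) True entries,
    so the whole mask has O(n * sqrt(n)) entries instead of O(n^2).
    """
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        # local window: last `stride` positions up to and including i
        mask[i, max(0, i - stride + 1): i + 1] = True
        # strided connections: i, i - stride, i - 2*stride, ...
        mask[i, i::-stride] = True
    return mask

if __name__ == "__main__":
    n, stride = 64, 8
    m = strided_sparse_mask(n, stride)
    dense = n * n
    sparse = int(m.sum())
    print(f"dense entries: {dense}, sparse entries: {sparse}")
```

At inference or training time such a mask would be applied by setting disallowed logits to $-\infty$ before the softmax; the production kernels instead compute only the nonzero blocks, which is where the memory savings actually come from.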

PDF Abstract Preprint 2019


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Generation | CIFAR-10 | Sparse Transformer 59M (strided) | Bits per dimension | 2.80 | #8 |
| Audio Generation | Classical music, 5 seconds at 12 kHz | Sparse Transformer 152M (strided) | Bits per byte | 1.97 | #1 |
| Image Generation | ImageNet 64x64 | Sparse Transformer 59M (strided) | Bits per dimension | 3.44 | #6 |

Results from Other Papers

| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Language Modelling | enwik8 | Sparse Transformer (30 layers, fixed attn) | Bits per Character (BPC) | 0.99 | #11 |
| Language Modelling | enwik8 | Sparse Transformer (30 layers, fixed attn) | Number of params | 95M | #14 |
| Question Answering | Natural Questions (long) | Sparse Attention | F1 | 74.5 | #4 |
| Question Answering | Quasar-T | Sparse Attention | EM | 52.1 | #3 |
| Open-Domain Question Answering | SearchQA | Sparse Attention | EM | 64.7 | #4 |