Adaptive Attention Span in Transformers

We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computational time. We demonstrate the effectiveness of our approach on character-level language modeling, where we achieve state-of-the-art performance on text8 and enwik8 using a maximum context of 8k characters.
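The core idea is a soft masking function over past positions, m_z(x) = clamp((R + z - x) / R, 0, 1), where x is the distance to a past token, z is a learned span parameter, and R controls the softness of the ramp; the mask is applied to the attention weights and the result is re-normalized. Below is a minimal, illustrative PyTorch-style sketch of this masking step under those assumptions. The class and parameter names (AdaptiveSpanMask, ramp, z_ratio) are ours for illustration and are not taken from the authors' code.

```python
import torch
import torch.nn as nn


class AdaptiveSpanMask(nn.Module):
    """Illustrative soft span mask: m_z(x) = clamp((R + z - x) / R, 0, 1).

    x is the distance between the query and a past position, z is a
    learned span (parameterized here as a ratio of max_span), and R
    ('ramp') controls how softly the mask decays to zero.
    """

    def __init__(self, max_span: int, ramp: int = 32, init_ratio: float = 0.0):
        super().__init__()
        self.max_span = max_span
        self.ramp = ramp
        # One scalar span here for simplicity; in practice each head
        # can learn its own span parameter.
        self.z_ratio = nn.Parameter(torch.tensor(init_ratio))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (..., span) attention weights over past positions,
        # ordered from most distant (index 0) to closest (last index).
        span = attn.size(-1)
        z = self.z_ratio.clamp(0.0, 1.0) * self.max_span
        # Distance of each attended position from the current query.
        x = torch.arange(span - 1, -1, -1, device=attn.device, dtype=attn.dtype)
        mask = torch.clamp((self.ramp + z - x) / self.ramp, min=0.0, max=1.0)
        # Mask the attention weights and re-normalize so they sum to 1.
        attn = attn * mask
        return attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-8)
```

During training, the paper adds an L1 penalty on the learned spans so that each head uses only as much context as it needs, which is what keeps memory and compute manageable even with an 8k-character maximum context.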


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Language Modelling | enwik8 | Transformer (24 layers, 8k adaptive span) | Bit per Character (BPC) | 0.98 | #11 |
| Language Modelling | enwik8 | Transformer (24 layers, 8k adaptive span) | Number of params | 209M | #6 |
| Language Modelling | enwik8 | Transformer (12 layers, 8k adaptive span) | Bit per Character (BPC) | 1.02 | #20 |
| Language Modelling | enwik8 | Transformer (12 layers, 8k adaptive span) | Number of params | 39M | #30 |
| Language Modelling | Text8 | 24L Transformer + 8K adaptive span | Bit per Character (BPC) | 1.07 | #4 |
| Language Modelling | Text8 | 24L Transformer + 8K adaptive span | Number of params | 209M | #5 |
| Language Modelling | Text8 | 12L Transformer + 8K adaptive span | Bit per Character (BPC) | 1.11 | #8 |
| Language Modelling | Text8 | 12L Transformer + 8K adaptive span | Number of params | 38M | #13 |

Methods