Pay Less Attention with Lightweight and Dynamic Convolutions

Self-attention is a useful mechanism for building generative models of language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively with the best reported self-attention results. Next, we introduce dynamic convolutions, which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling, and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set, dynamic convolutions achieve a new state of the art of 29.7 BLEU.
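To make the construction concrete, below is a minimal sketch of a dynamic convolution layer in PyTorch. It is an illustration, not the authors' released fairseq implementation: the class name `DynamicConv1d`, the batch-first `(batch, time, channels)` layout, and the causal left-padding are assumptions made for brevity. The sketch shows the two ingredients the abstract describes: kernel weights predicted from the current time step alone via a linear projection (softmax-normalized over the kernel width, as in the lightweight convolution), and a weighted sum over a fixed-width context window, so the cost grows linearly with sequence length.

```python
# Minimal sketch of a dynamic convolution, assuming batch-first input
# (B, T, C), H weight-sharing heads, and causal left-padding. The name
# DynamicConv1d is illustrative, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv1d(nn.Module):
    def __init__(self, channels: int, kernel_size: int, num_heads: int):
        super().__init__()
        assert channels % num_heads == 0
        self.k = kernel_size
        self.H = num_heads
        # Kernels are predicted from the current time step alone:
        # one k-wide kernel per head.
        self.weight_proj = nn.Linear(channels, num_heads * kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        R = C // self.H  # channels within a head share one kernel
        # Predict a kernel at every position; softmax-normalize over
        # the kernel width, as done for the lightweight convolution.
        w = F.softmax(self.weight_proj(x).view(B, T, self.H, self.k), dim=-1)
        # Gather a k-wide causal window of context for each position.
        pad = F.pad(x, (0, 0, self.k - 1, 0))        # (B, T+k-1, C)
        win = pad.unfold(1, self.k, 1)               # (B, T, C, k)
        win = win.reshape(B, T, self.H, R, self.k)
        # Weighted sum over the fixed-width window: O(T * k) operations,
        # linear in sequence length T (self-attention is O(T^2)).
        out = torch.einsum('bthrk,bthk->bthr', win, w)
        return out.reshape(B, T, C)

if __name__ == "__main__":
    x = torch.randn(2, 10, 64)  # (batch, time, channels)
    conv = DynamicConv1d(channels=64, kernel_size=3, num_heads=8)
    print(conv(x).shape)        # torch.Size([2, 10, 64])
```

Because the window width k is fixed, each position does O(k) work regardless of sequence length, which is the source of the linear scaling claimed above.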

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Document Summarization | CNN / Daily Mail | DynamicConv | ROUGE-1 | 39.84 | #18 |
| Document Summarization | CNN / Daily Mail | DynamicConv | ROUGE-2 | 16.25 | #17 |
| Document Summarization | CNN / Daily Mail | DynamicConv | ROUGE-L | 36.73 | #16 |
| Document Summarization | CNN / Daily Mail | LightConv | ROUGE-1 | 39.52 | #19 |
| Document Summarization | CNN / Daily Mail | LightConv | ROUGE-2 | 15.97 | #19 |
| Document Summarization | CNN / Daily Mail | LightConv | ROUGE-L | 36.51 | #18 |
| Abstractive Text Summarization | CNN / Daily Mail | DynamicConv | ROUGE-1 | 39.84 | #38 |
| Abstractive Text Summarization | CNN / Daily Mail | DynamicConv | ROUGE-2 | 16.25 | #42 |
| Abstractive Text Summarization | CNN / Daily Mail | DynamicConv | ROUGE-L | 36.73 | #36 |
| Machine Translation | IWSLT2014 German-English | LightConv | BLEU | 34.8 | #22 |
| Machine Translation | IWSLT2014 German-English | DynamicConv | BLEU | 35.2 | #20 |
| Language Modelling | One Billion Word | DynamicConv | PPL | 26.67 | #13 |
| Language Modelling | One Billion Word | DynamicConv | Number of params | 0.34B | #1 |
| Machine Translation | WMT2014 English-French | LightConv | BLEU | 43.1 | #14 |
| Machine Translation | WMT2014 English-German | DynamicConv | BLEU | 29.7 | #17 |
| Machine Translation | WMT 2017 English-Chinese | LightConv | BLEU | 24.3 | #2 |
| Machine Translation | WMT 2017 English-Chinese | DynamicConv | BLEU | 24.4 | #1 |
