Improving Transformer Models by Reordering their Sublayers

ACL 2020 · Ofir Press, Noah A. Smith, Omer Levy

Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly ordered transformers and train them with the language modeling objective. We observe that some of these models are able to achieve better performance than the interleaved baseline, and that those successful variants tend to have more self-attention at the bottom and more feedforward sublayers at the top. We propose a new transformer pattern that adheres to this property, the sandwich transformer, and show that it improves perplexity on multiple word-level and character-level language modeling benchmarks, at no cost in parameters, memory, or training time. However, the sandwich reordering pattern does not guarantee performance gains across every task, as we demonstrate on machine translation models. Instead, we suggest that further exploration of task-specific sublayer reorderings is needed in order to unlock additional gains.
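The reordering the abstract describes is simple to state: with 2n sublayers in total, a sandwich transformer with sandwich coefficient k places k self-attention sublayers at the bottom, k feedforward sublayers at the top, and interleaves the remaining n−k attention/feedforward pairs in between, so the parameter count matches the interleaved baseline. Below is a minimal sketch of that ordering; the helper name `sandwich_pattern` is illustrative rather than taken from the paper's code, and 's'/'f' follow the paper's shorthand for self-attention and feedforward sublayers.

```python
def sandwich_pattern(n: int, k: int) -> str:
    """Return a string of sublayer symbols for a sandwich transformer.

    's' = self-attention sublayer, 'f' = feedforward sublayer.
    The interleaved baseline with 2n sublayers is (sf)^n; the sandwich
    variant with coefficient k moves k attention sublayers to the bottom
    and k feedforward sublayers to the top, keeping the total count fixed.
    """
    assert 0 <= k < n, "sandwich coefficient must satisfy 0 <= k < n"
    return "s" * k + "sf" * (n - k) + "f" * k


# Interleaved baseline (k = 0) vs. a sandwich with k = 6, for n = 16:
print(sandwich_pattern(16, 0))  # 16 repetitions of 'sf'
print(sandwich_pattern(16, 6))  # 6 x 's', then 10 x 'sf', then 6 x 'f'
```

Both calls produce 32 sublayers, which is why the sandwich variant costs nothing extra in parameters or memory relative to the baseline.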


Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Language Modelling | enwik8 | Sandwich Transformer (adaptive span) | Bit per Character (BPC) | 0.968 | #7 |
| Language Modelling | enwik8 | Sandwich Transformer (adaptive span) | Number of params | 209M | #6 |
| Language Modelling | WikiText-103 | Sandwich Transformer | Test perplexity | 17.96 | #18 |
| Language Modelling | WikiText-103 | Sandwich Transformer | Number of params | 247M | #18 |