Addressing Some Limitations of Transformers with Feedback Memory

21 Feb 2020 · Angela Fan, Thibaut Lavril, Edouard Grave, Armand Joulin, Sainbayar Sukhbaatar

Transformers have been successfully applied to sequential, auto-regressive tasks despite being feedforward networks. Unlike recurrent neural networks, Transformers use attention to capture temporal relations while processing input tokens in parallel. While this parallelization makes them computationally efficient, it prevents the model from fully exploiting the sequential nature of the input: the representation at a given layer can only access representations from lower layers, rather than the higher-level representations that are already available for past timesteps. In this work, we propose the Feedback Transformer architecture, which exposes all previous representations to all future representations, so that even the lowest layer of the current timestep is formed from the highest-level abstract representations of the past. We demonstrate on a variety of benchmarks in language modeling, machine translation, and reinforcement learning that this increased representation capacity enables small, shallow models to achieve much stronger performance than comparable Transformers.
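The core idea above can be sketched in code. A minimal illustration, not the paper's implementation: the layer stack at each timestep writes a single shared memory vector (a learned mixture of all layer outputs), and every layer, including the lowest, attends over that memory at future timesteps. The projection matrices, mixing weights, and `tanh` update below are simplified placeholders for the paper's full Transformer sublayers.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory):
    """Simple dot-product attention of `query` over past memory vectors."""
    if len(memory) == 0:
        return np.zeros_like(query)
    mem = np.stack(memory)                       # (t, d)
    w = softmax(mem @ query / np.sqrt(len(query)))
    return w @ mem

def feedback_step(x_t, memory, layer_weights, mix_weights):
    """One timestep. Every layer attends over the shared memory, which
    holds one vector per past timestep mixed from ALL layers — so the
    lowest layer sees the highest-level abstractions of the past."""
    h = x_t
    layer_outputs = []
    for W in layer_weights:                      # placeholder per-layer map
        ctx = attend(h, memory)                  # feedback from past top layers
        h = np.tanh(W @ (h + ctx))
        layer_outputs.append(h)
    # Compress the whole stack into a single memory vector per timestep.
    mix = softmax(mix_weights)                   # learned layer mixture
    memory.append(sum(m * o for m, o in zip(mix, layer_outputs)))
    return h, memory

# Toy run: d=4, 3 layers, 5 timesteps.
rng = np.random.default_rng(0)
d, n_layers = 4, 3
layer_weights = [rng.normal(size=(d, d)) * 0.5 for _ in range(n_layers)]
mix_weights = rng.normal(size=n_layers)
memory = []
for t in range(5):
    out, memory = feedback_step(rng.normal(size=d), memory,
                                layer_weights, mix_weights)
```

Because the memory stores one mixed vector per timestep instead of one per layer per timestep, attention cost no longer grows with depth — which is part of why the paper's small, shallow models remain competitive.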

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Language Modelling | enwik8 | Feedback Transformer | Bit per Character (BPC) | 0.96 | #6 |
| Language Modelling | enwik8 | Feedback Transformer | Number of params | 77M | #16 |
| Language Modelling | Penn Treebank (Character Level) | Feedback Transformer | Bit per Character (BPC) | 1.160 | #5 |
| Language Modelling | Penn Treebank (Character Level) | Feedback Transformer | Number of params | 10.7M | #12 |
| Language Modelling | WikiText-103 | Feedback Transformer (8 layers) | Validation perplexity | 17.5 | #8 |
| Language Modelling | WikiText-103 | Feedback Transformer (8 layers) | Test perplexity | 18.2 | #21 |
| Language Modelling | WikiText-103 | Feedback Transformer (8 layers) | Number of params | 139M | #30 |
| Language Modelling | WikiText-103 | Feedback Transformer (4 layers) | Validation perplexity | 21.4 | #16 |
| Language Modelling | WikiText-103 | Feedback Transformer (4 layers) | Test perplexity | 22.4 | #34 |
| Language Modelling | WikiText-103 | Feedback Transformer (4 layers) | Number of params | 44M | #38 |