Can Active Memory Replace Attention?

NeurIPS 2016  ·  Łukasz Kaiser, Samy Bengio

Several mechanisms to focus the attention of a neural network on selected parts of its input or memory have been used successfully in deep learning models in recent years. Attention has improved image classification, image captioning, speech recognition, generative models, and the learning of algorithmic tasks, but it has probably had the largest impact on neural machine translation. Recently, similar improvements have been obtained using alternative mechanisms that do not focus on a single part of a memory but operate on all of it in parallel, in a uniform way. Such a mechanism, which we call active memory, improved over attention in algorithmic tasks, image processing, and generative modelling. So far, however, active memory has not improved over attention for most natural language processing tasks, in particular for machine translation. We analyze this shortcoming in this paper and propose an extended model of active memory that matches existing attention models on neural machine translation and generalizes better to longer sentences. We investigate this model and explain why previous active memory models did not succeed. Finally, we discuss when active memory brings the most benefit and where attention can be a better choice.
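
The contrast described in the abstract can be sketched in a few lines: an attention read produces a single, focused, softmax-weighted summary of memory, while an active-memory step applies the same operation to every memory cell in parallel. The NumPy sketch below is illustrative only, with hypothetical names (attention_read, active_memory_step); it is not the paper's Extended Neural GPU model or its attention baseline.

```python
# Minimal illustrative sketch (not the paper's models): attention focuses on a
# few memory slots; active memory updates all slots uniformly and in parallel.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_read(query, memory):
    """Focus on selected parts: softmax-weighted sum over memory slots.

    query:  (d,)   a single query vector
    memory: (n, d) n memory slots of width d
    returns (d,)   one blended vector, dominated by a few slots
    """
    scores = memory @ query        # similarity of the query to each slot
    weights = softmax(scores)      # sharp distribution over slots
    return weights @ memory        # weighted sum: a single read vector

def active_memory_step(memory, kernel):
    """Operate on all of memory in parallel, uniformly: a 1-D convolution
    over slots, so every slot is transformed by the same rule.

    memory: (n, d)    n memory slots of width d
    kernel: (k, d, d) convolution kernel of (odd) width k
    returns (n, d)    the whole memory, updated in one step
    """
    n, d = memory.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(memory, ((pad, pad), (0, 0)))
    out = np.zeros_like(memory)
    for i in range(n):                 # identical update applied to every slot
        window = padded[i:i + k]       # (k, d) neighbourhood of slot i
        out[i] = np.tanh(np.einsum('kd,kde->e', window, kernel))
    return out

# Tiny usage example with random data.
rng = np.random.default_rng(0)
mem = rng.normal(size=(8, 4))
q = rng.normal(size=(4,))
kern = rng.normal(size=(3, 4, 4)) * 0.1
print(attention_read(q, mem).shape)        # (4,)   one focused vector
print(active_memory_step(mem, kern).shape) # (8, 4) every slot updated
```

The point of the contrast: the attention read collapses memory into one vector weighted toward a few slots, whereas the active-memory step returns the entire memory after a uniform parallel update, which is why it can be iterated as a fully parallel computation over the whole input.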


Datasets

Task                 Dataset                 Model          Metric      Value  Global Rank
Machine Translation  WMT2014 English-French  GRU+Attention  BLEU score  26.4   #53

Methods

No methods listed for this paper.