Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction

CoNLL 2018  ·  Maha Elbayad, Laurent Besacier, Jakob Verbeek

Current state-of-the-art machine translation systems are based on encoder-decoder architectures that first encode the input sequence and then generate an output sequence from that encoding. Both components are interfaced with an attention mechanism that recombines a fixed encoding of the source tokens based on the decoder state. We propose an alternative approach that instead relies on a single 2D convolutional neural network across both sequences. Each layer of our network re-codes source tokens on the basis of the output sequence produced so far. Attention-like properties are therefore pervasive throughout the network. Our model yields excellent results, outperforming state-of-the-art encoder-decoder systems, while being conceptually simpler and having fewer parameters.
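The abstract's core idea, a single 2D network over the joint source-target grid, can be sketched as below. This is a minimal illustrative toy in NumPy, not the paper's actual DenseNet-style stack of masked convolutions: the dimensions, the single 1x1 projection layer, and all variable names are assumptions made for the sketch. It shows how each grid cell pairs one target position with one source token, and how max-pooling over the source axis collapses the grid into one feature vector per target position.

```python
import numpy as np

# Toy dimensions (illustrative only, not from the paper).
S, T, d = 5, 4, 8                  # source length, target length, embedding size
rng = np.random.default_rng(0)
src = rng.normal(size=(S, d))      # source token embeddings
tgt = rng.normal(size=(T, d))      # target token embeddings (shifted right)

# Joint 2D grid: cell (t, s) concatenates the embedding of target position t
# with that of source position s, giving a (T, S, 2d) input for the 2D stack.
grid = np.concatenate(
    [np.repeat(tgt[:, None, :], S, axis=1),   # target embedding broadcast over s
     np.repeat(src[None, :, :], T, axis=0)],  # source embedding broadcast over t
    axis=-1)                                  # shape (T, S, 2*d)

# One layer, sketched as a 1x1 projection with ReLU. In the paper's model a
# k x k convolution would be masked along the target axis so that row t only
# sees target positions <= t (the autoregressive constraint).
W = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)
hidden = np.maximum(grid @ W, 0.0)            # (T, S, d)

# Aggregation: max-pooling over the source axis yields one feature vector per
# target position, from which the next target token is predicted.
features = hidden.max(axis=1)                 # (T, d)
print(features.shape)
```

Because every layer sees the full grid, each layer can re-weight source tokens conditioned on the target prefix, which is what makes the attention-like behaviour "pervasive" rather than confined to a single attention module.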


Datasets


Task                  Dataset                    Model                Metric       Value   Global Rank
Machine Translation   IWSLT2015 English-German   Pervasive Attention  BLEU score   27.99   #3
Machine Translation   IWSLT2015 German-English   Pervasive Attention  BLEU score   34.18   #2

Methods


No methods listed for this paper.