Convolutional Sequence to Sequence Learning

The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.

Published at ICML 2017.
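
The abstract above names the two main architectural ingredients: gated linear units inside the convolutional blocks and a separate attention module in every decoder layer. Below is a minimal sketch, assuming PyTorch, of what one such gated convolutional decoder block with its own attention could look like; the class name `GatedConvAttentionBlock`, the dimension defaults, and the simplified dot-product attention (no key/value projections or position embeddings) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only (assumes PyTorch); not the paper's reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvAttentionBlock(nn.Module):
    """One decoder block: causal 1-D convolution + GLU + per-layer attention."""

    def __init__(self, d_model=512, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # The convolution outputs 2*d_model channels, which the gated linear
        # unit splits into a value half and a sigmoid gate half.
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size)

    def forward(self, x, encoder_out):
        # x:           (batch, tgt_len, d_model) decoder states
        # encoder_out: (batch, src_len, d_model) encoder states (used as keys and values)
        residual = x
        h = x.transpose(1, 2)                      # (batch, d_model, tgt_len)
        h = F.pad(h, (self.kernel_size - 1, 0))    # left-only padding keeps the conv causal
        h = self.conv(h)                           # (batch, 2*d_model, tgt_len)
        h = F.glu(h, dim=1).transpose(1, 2)        # gated linear unit -> (batch, tgt_len, d_model)

        # Separate attention module for this layer: plain dot-product attention
        # of the gated states over the encoder output.
        scores = torch.bmm(h, encoder_out.transpose(1, 2))  # (batch, tgt_len, src_len)
        attn = torch.softmax(scores, dim=-1)
        context = torch.bmm(attn, encoder_out)               # (batch, tgt_len, d_model)

        # Residual connection; scaling by sqrt(0.5) keeps the variance of the sum stable.
        return (h + context + residual) * (0.5 ** 0.5)

# Example: 2 target positions attending over 5 source states.
block = GatedConvAttentionBlock(d_model=8, kernel_size=3)
out = block(torch.randn(1, 2, 8), torch.randn(1, 5, 8))  # -> shape (1, 2, 8)
```

Stacking a fixed number of such blocks gives a decoder whose number of non-linearities does not grow with the input length, and all positions within a layer can be computed in parallel during training, which is the property the abstract highlights.
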
Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Bangla Spelling Error Correction | DPCSpell-Bangla-SEC-Corpus | ConvSeq2Seq | Exact Match Accuracy | 78.85% | # 3
Machine Translation | IWSLT2015 English-German | ConvS2S | BLEU score | 26.73 | # 6
Machine Translation | IWSLT2015 German-English | ConvS2S | BLEU score | 32.31 | # 6
Image Classification | MNIST | CNN Model by Som | Percentage error | 1.41 | # 70
Image Classification | MNIST | CNN Model by Som | Accuracy | 98.59 | # 21
Machine Translation | WMT2014 English-French | ConvS2S (ensemble) | BLEU score | 41.3 | # 24
Machine Translation | WMT2014 English-French | ConvS2S (ensemble) | Hardware Burden | None | # 1
Machine Translation | WMT2014 English-French | ConvS2S (ensemble) | Operations per network pass | None | # 1
Machine Translation | WMT2014 English-German | ConvS2S (ensemble) | BLEU score | 26.4 | # 59
Machine Translation | WMT2014 English-German | ConvS2S (ensemble) | Hardware Burden | 54G | # 1
Machine Translation | WMT2014 English-German | ConvS2S (ensemble) | Operations per network pass | None | # 1
Machine Translation | WMT2016 English-Romanian | ConvS2S BPE40k | BLEU score | 29.9 | # 7

Results from Other Papers


Task | Dataset | Model | Metric Name | Metric Value | Rank
Machine Translation | WMT2014 English-French | ConvS2S | BLEU score | 40.46 | # 32
Machine Translation | WMT2014 English-French | ConvS2S | Hardware Burden | 143G | # 1
Machine Translation | WMT2014 English-French | ConvS2S | Operations per network pass | None | # 1
Machine Translation | WMT2014 English-German | ConvS2S | BLEU score | 25.16 | # 71
Machine Translation | WMT2014 English-German | ConvS2S | Hardware Burden | 72G | # 1
Machine Translation | WMT2014 English-German | ConvS2S | Operations per network pass | None | # 1
