An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

4 Mar 2018 · Shaojie Bai, J. Zico Kolter, Vladlen Koltun

For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at http://github.com/locuslab/TCN.
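The generic convolutional architecture evaluated in the paper is the temporal convolutional network (TCN), which combines causal convolutions, dilated convolutions, and residual connections. The sketch below is a minimal PyTorch illustration of those three ingredients, not the authors' reference implementation (which lives at the repository above and additionally uses weight normalization and dropout); the layer widths and the `CausalConv1d`/`TCNBlock` names here are chosen purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConv1d(nn.Module):
    """1D convolution padded only on the left, so the output at time t
    depends only on inputs at times <= t (no look-ahead)."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))


class TCNBlock(nn.Module):
    """Residual block: two dilated causal convolutions with ReLUs,
    plus a 1x1 convolution on the skip path when channel counts differ."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.conv1 = CausalConv1d(in_ch, out_ch, kernel_size, dilation)
        self.conv2 = CausalConv1d(out_ch, out_ch, kernel_size, dilation)
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.conv1(x))
        y = self.relu(self.conv2(y))
        return self.relu(y + self.skip(x))


class TCN(nn.Module):
    """Stack of residual blocks with dilation 1, 2, 4, ... so the receptive
    field (effective memory) grows exponentially with depth."""

    def __init__(self, in_ch, channels, kernel_size=3):
        super().__init__()
        blocks = []
        for i, out_ch in enumerate(channels):
            blocks.append(TCNBlock(in_ch, out_ch, kernel_size, dilation=2 ** i))
            in_ch = out_ch
        self.network = nn.Sequential(*blocks)

    def forward(self, x):                        # x: (batch, features, time)
        return self.network(x)


# Example: 8 sequences, 16 input features, 128 time steps -> (8, 32, 128)
model = TCN(in_ch=16, channels=[32, 32, 32])
out = model(torch.randn(8, 16, 128))
```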

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Music Modeling | JSB Chorales | TCN | NLL | 8.10 | # 7 |
| Music Modeling | Nottingham | LSTM | NLL | 3.29 | # 5 |
| Music Modeling | Nottingham | TCN | NLL | 3.07 | # 4 |
| Music Modeling | Nottingham | RNN | NLL | 4.05 | # 8 |
| Music Modeling | Nottingham | GRU | NLL | 3.46 | # 7 |
| Language Modelling | Penn Treebank (Character Level) | Temporal Convolutional Network | Bit per Character (BPC) | 1.31 | # 18 |
| Language Modelling | Penn Treebank (Word Level) | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | # 37 |
| Language Modelling | Penn Treebank (Word Level) | GRU (Bai et al., 2018) | Test perplexity | 92.48 | # 41 |
| Sequential Image Classification | Sequential MNIST | Temporal Convolutional Network | Unpermuted Accuracy | 99.0% | # 16 |
| Sequential Image Classification | Sequential MNIST | Temporal Convolutional Network | Permuted Accuracy | 97.2% | # 12 |
| Language Modelling | WikiText-103 | TCN | Test perplexity | 45.19 | # 83 |
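The metrics above are all monotone transforms of the model's average cross-entropy, so they can be converted into one another. A short illustrative conversion follows; the helper names are ours, and the loss values are simply back-computed from the table entries.

```python
import math

# Bits per character is the per-character cross-entropy expressed in bits:
# a BPC of 1.31 corresponds to roughly 1.31 * ln(2) ≈ 0.91 nats per character.
def nats_to_bpc(loss_nats_per_char):
    return loss_nats_per_char / math.log(2)

# Perplexity is the exponential of the per-token cross-entropy (in nats):
# a test perplexity of 78.93 corresponds to roughly ln(78.93) ≈ 4.37 nats per word.
def nats_to_perplexity(loss_nats_per_token):
    return math.exp(loss_nats_per_token)

print(nats_to_bpc(0.908))         # ~1.31 (char-level PTB row above)
print(nats_to_perplexity(4.369))  # ~78.9 (word-level PTB LSTM row above)
```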

Methods


No methods listed for this paper.