Sliced Recurrent Neural Networks

COLING 2018 · Zeping Yu, Gongshen Liu

Recurrent neural networks have achieved great success in many NLP tasks. However, their recurrent structure makes them hard to parallelize, so training RNNs is time-consuming. In this paper, we introduce sliced recurrent neural networks (SRNNs), which can be parallelized by slicing the sequences into many subsequences. SRNNs can obtain high-level information through multiple layers with few extra parameters. We prove that the standard RNN is a special case of the SRNN when linear activation functions are used. Without changing the recurrent units, SRNNs are 136 times as fast as standard RNNs and can be even faster on longer sequences. Experiments on six large-scale sentiment analysis datasets show that SRNNs achieve better performance than standard RNNs.

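The core idea can be sketched as follows: slice a long sequence into equal-length subsequences, encode each subsequence in parallel with a shared low-level recurrent unit, and feed the resulting subsequence representations to a higher-level recurrent unit. The sketch below is a minimal two-level illustration, assuming a Keras/TensorFlow setup with GRU cells; the hyper-parameters (sequence length, number of slices, unit sizes, vocabulary size) are illustrative and not taken from the paper.

```python
# Minimal two-level SRNN sketch (assumed Keras / TensorFlow 2 setup; not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len = 512        # total sequence length (illustrative)
n_slices = 8         # number of subsequences per sequence (illustrative)
sub_len = seq_len // n_slices
vocab_size = 30000   # illustrative vocabulary size
emb_dim = 128
units = 64

inputs = layers.Input(shape=(seq_len,), dtype="int32")
x = layers.Embedding(vocab_size, emb_dim)(inputs)          # (batch, seq_len, emb_dim)

# Slice the sequence into n_slices subsequences so the low-level RNN
# can process them independently (and therefore in parallel).
x = layers.Reshape((n_slices, sub_len, emb_dim))(x)        # (batch, n_slices, sub_len, emb_dim)

# Level 1: a shared GRU encodes each subsequence on its own.
x = layers.TimeDistributed(layers.GRU(units))(x)           # (batch, n_slices, units)

# Level 2: a higher-level GRU combines the subsequence representations,
# providing the high-level information described in the abstract.
x = layers.GRU(units)(x)                                    # (batch, units)
outputs = layers.Dense(2, activation="softmax")(x)          # e.g. binary sentiment

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Because the subsequences are encoded independently, the low-level GRU effectively treats them as extra batch entries, which removes the long sequential dependency that makes a standard RNN slow to train; the paper generalizes this to more than two levels by slicing recursively.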

Datasets


Task                 Dataset                      Model   Metric     Value    Global Rank
Sentiment Analysis   Amazon Review Full           SRNN    Accuracy   61.65    #6
Sentiment Analysis   Amazon Review Polarity       SRNN    Accuracy   95.26    #6
Sentiment Analysis   Yelp Binary Classification   SRNN    Error      3.96     #15

Methods


No methods listed for this paper.