Recurrent neural networks (RNNs) have been widely used for processing
sequential data. However, RNNs are commonly difficult to train due to the
well-known gradient vanishing and exploding problems, and it is hard for them
to learn long-term patterns. Long short-term memory (LSTM) and gated recurrent
unit (GRU) were developed to address these problems, but the use of hyperbolic
tangent and sigmoid activation functions results in gradient decay over layers.
Consequently, construction of an efficiently trainable deep network is
challenging. In addition, all the neurons in an RNN layer are entangled
together and their behaviour is hard to interpret. To address these problems, a
new type of RNN, referred to as independently recurrent neural network
(IndRNN), is proposed in this paper, where neurons in the same layer are
independent of each other and connected across layers. We have shown
that an IndRNN can be easily regulated to prevent the gradient exploding and
vanishing problems while allowing the network to learn long-term dependencies.
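For clarity, the independent recurrence can be written as follows (a sketch in standard notation; the symbols W, u, b and the activation sigma are the usual conventions and are not defined in this abstract):

\[
\mathbf{h}_t = \sigma\left(\mathbf{W}\mathbf{x}_t + \mathbf{u} \odot \mathbf{h}_{t-1} + \mathbf{b}\right),
\]

where \(\odot\) denotes the element-wise product, so each neuron receives its own previous state scaled by a single scalar \(u_n\) rather than the full state of the layer. The gradient propagated to neuron \(n\) over \(T-t\) steps then scales with \(u_n^{T-t}\), which is why constraining the magnitude of \(u_n\) is enough to keep gradients from exploding or vanishing.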
Moreover, an IndRNN can work with non-saturated activation functions such as
relu (rectified linear unit) and still be trained robustly. Multiple IndRNNs
can be stacked to construct a network that is deeper than the existing RNNs.
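As a rough illustration of this stacking, the following is a minimal NumPy sketch of a forward pass (the class name IndRNNLayer, the layer sizes, and the initialisation are illustrative assumptions, not the released implementation):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class IndRNNLayer:
    def __init__(self, input_size, hidden_size, rng=np.random.default_rng(0)):
        self.W = rng.normal(0.0, 0.01, (hidden_size, input_size))  # shared input weight matrix
        self.u = rng.uniform(0.0, 1.0, hidden_size)                 # per-neuron recurrent weight (vector)
        self.b = np.zeros(hidden_size)

    def forward(self, xs):
        # xs: sequence of input vectors, shape (T, input_size)
        h = np.zeros(self.u.shape[0])
        hs = []
        for x in xs:
            # each neuron only sees its own previous state: element-wise product u * h
            h = relu(self.W @ x + self.u * h + self.b)
            hs.append(h)
        return np.stack(hs)

def stacked_indrnn(xs, layers):
    # the output sequence of one layer feeds the next, so neurons
    # interact across layers rather than within a layer
    for layer in layers:
        xs = layer.forward(xs)
    return xs

layers = [IndRNNLayer(8, 16), IndRNNLayer(16, 16)]
out = stacked_indrnn(np.random.default_rng(1).normal(size=(100, 8)), layers)
print(out.shape)  # (100, 16)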
Experimental results have shown that the proposed IndRNN is able to process
very long sequences (over 5000 time steps), can be used to construct very deep
networks (21 layers used in the experiment) and still be trained robustly.
Better performance has been achieved on various tasks by using IndRNNs
compared with the traditional RNN and LSTM. The code is available at