
# Asymmetrical Bi-RNN

Introduced by Rozenberg et al. in Asymmetrical Bi-RNN for pedestrian trajectory encoding

An aspect of Bi-RNNs that could be undesirable is the architecture's symmetry in both time directions.

Bi-RNNs are often used in natural language processing, where the order of the words is almost exclusively determined by grammatical rules and not by temporal sequentiality. However, in some cases, the data has a preferred direction in time: the forward direction.

Another potential drawback of Bi-RNNs is that their output is simply the concatenation of two naive readings of the input in the two directions; as a consequence, Bi-RNNs never actually read an input while knowing what happens in the future. Conversely, the idea behind the U-RNN is to first do a backward pass, and then use information about the future during the forward pass.

Information is thus accumulated while knowing which parts of it will be useful later, which should be beneficial when the forward direction is the preferred direction of the data.

The backward and forward hidden states $(h^b_t)$ and $(h^f_t)$ are obtained according to these equations:

$$\begin{aligned} h_{t-1}^{b} &= \mathrm{RNN}\left(h_{t}^{b}, e_{t}, W_{b}\right) \\ h_{t+1}^{f} &= \mathrm{RNN}\left(h_{t}^{f}, \left[e_{t}, h_{t}^{b}\right], W_{f}\right) \end{aligned}$$

where $W_b$ and $W_f$ are learnable weights that are shared among pedestrians, and $[\cdot, \cdot]$ denotes concatenation. The last hidden state $h^f_{T_{obs}}$ is then used as the encoding of the sequence.
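The two passes above can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: it substitutes a simple Elman-style tanh cell for the paper's RNN cell, and the function and variable names (`rnn_step`, `urnn_encode`, `E`) are hypothetical.

```python
import numpy as np

def rnn_step(h, x, W):
    # Elman-style cell used as a stand-in for the paper's RNN cell:
    # the weight matrix W maps the concatenation [h; x] to the new hidden state.
    return np.tanh(W @ np.concatenate([h, x]))

def urnn_encode(E, W_b, W_f):
    """U-RNN encoder sketch.

    E   : (T, d) array of input embeddings e_1, ..., e_T
    W_b : (H, H + d) backward-cell weights
    W_f : (H, H + d + H) forward-cell weights (input is [e_t, h^b_t])
    """
    T, d = E.shape
    H = W_b.shape[0]

    # Backward pass: h^b_{t-1} = RNN(h^b_t, e_t, W_b),
    # so h_b[t] summarizes the future inputs e_{t+1}, ..., e_T.
    h_b = [np.zeros(H)] * T
    h = np.zeros(H)  # initial backward state
    for t in range(T - 1, -1, -1):
        h_b[t] = h
        h = rnn_step(h, E[t], W_b)

    # Forward pass: h^f_{t+1} = RNN(h^f_t, [e_t, h^b_t], W_f),
    # i.e. each input is read together with a summary of its future.
    h_f = np.zeros(H)
    for t in range(T):
        h_f = rnn_step(h_f, np.concatenate([E[t], h_b[t]]), W_f)

    # The last forward state plays the role of h^f_{T_obs}, the sequence encoding.
    return h_f
```

Note the asymmetry: the backward pass is a naive read, while the forward pass consumes the concatenation $[e_t, h^b_t]$, matching the two equations above.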
