Temporal Activation Regularization (TAR) is a type of slowness regularization for RNNs that penalizes differences between hidden states at adjacent timesteps, encouraging the network's representation to change slowly over time. Formally, we minimize:
$$\beta \, L_{2}\!\left(h_{t} - h_{t+1}\right)$$
where $L_{2}$ is the $L_{2}$ norm, $h_{t}$ is the output of the RNN at timestep $t$, and $\beta$ is a scaling coefficient.
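As a concrete illustration, below is a minimal sketch of how a TAR penalty might be added to a training loss, assuming PyTorch and an RNN whose outputs are shaped `(seq_len, batch, hidden)`. The function name `tar_loss`, the `beta` values, and the use of the mean squared difference as the $L_{2}$ penalty are illustrative assumptions, not details taken from the source.

```python
import torch
import torch.nn as nn

def tar_loss(outputs: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Hypothetical TAR penalty: L2 difference between consecutive hidden states.

    `outputs` is assumed to be shaped (seq_len, batch, hidden); `beta` is the
    scaling coefficient from the formula above (the default here is illustrative).
    """
    diff = outputs[1:] - outputs[:-1]  # h_{t+1} - h_t for every adjacent timestep pair
    return beta * diff.pow(2).mean()   # mean squared difference as the L2 penalty

# Illustrative usage inside a training step:
rnn = nn.LSTM(input_size=32, hidden_size=64)  # batch_first=False by default
x = torch.randn(10, 4, 32)                    # (seq_len, batch, input_size)
outputs, _ = rnn(x)                           # outputs: (seq_len, batch, hidden)
loss = tar_loss(outputs, beta=2.0)            # add this to the task loss before backward()
```

Because the penalty acts on the activations rather than the weights, it is simply summed with the task loss at each step before the backward pass.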
Source: Revisiting Activation Regularization for Language RNNs

Tasks in which TAR is used, with paper counts and share:

Task | Papers | Share |
---|---|---|
Language Modelling | 20 | 14.49% |
Language Modeling | 18 | 13.04% |
Text Classification | 15 | 10.87% |
General Classification | 14 | 10.14% |
Classification | 8 | 5.80% |
Sentiment Analysis | 8 | 5.80% |
Language Identification | 4 | 2.90% |
Translation | 4 | 2.90% |
Hate Speech Detection | 3 | 2.17% |