NT-ASGD, or Non-monotonically Triggered ASGD, is an averaged stochastic gradient descent technique.
In regular ASGD, we take steps identical to regular SGD, but instead of returning the last iterate as the solution, we return $\frac{1}{K-T+1}\sum_{i=T}^{K}w_{i}$, where $K$ is the total number of iterations and $T < K$ is a user-specified averaging trigger.
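A minimal sketch of this averaging, with assumed names: `grad_fn` returns a stochastic gradient at the current iterate, and `K` and `T` are as defined above.

```python
import torch

def asgd(w0, grad_fn, lr, K, T):
    """Plain ASGD sketch: run K SGD steps, return the average of w_T ... w_K."""
    w = w0.clone()
    avg = torch.zeros_like(w)
    for k in range(1, K + 1):
        w = w - lr * grad_fn(w)  # ordinary SGD step
        if k >= T:
            avg += w             # accumulate iterates from step T onward
    return avg / (K - T + 1)     # (1 / (K - T + 1)) * sum of w_T ... w_K
```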
NT-ASGD replaces this fixed trigger with a non-monotonic criterion that conservatively switches averaging on only when the validation metric fails to improve for several consecutive checks. Because the switch to averaging is irreversible, this conservatism ensures that the randomness of training does not play a major role in the decision.
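A sketch of the trigger logic, loosely following the training loop in the source paper's released code; `run_epoch`, `validate`, and the patience `n` are assumed names, and lower validation values are taken to be better.

```python
import torch

def train_nt_asgd(model, run_epoch, validate, lr=1.0, n=5, max_epochs=100):
    # Start with plain SGD; the switch to averaging is one-way.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    metrics, triggered = [], False
    for epoch in range(max_epochs):
        run_epoch(model, optimizer)
        v = validate(model)  # e.g. validation perplexity (lower is better)
        # Non-monotone criterion: trigger only if the current metric is
        # worse than the best value seen more than n checks ago.
        if not triggered and len(metrics) > n and v > min(metrics[:-n]):
            # t0=0 makes PyTorch's ASGD begin averaging immediately.
            optimizer = torch.optim.ASGD(model.parameters(), lr=lr, t0=0)
            triggered = True
        metrics.append(v)
    return model
```

Once ASGD is active, PyTorch maintains the running average of each parameter in `optimizer.state[p]['ax']`; at evaluation time those averaged values, rather than the raw parameters, should be copied into the model.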
Source: Regularizing and Optimizing LSTM Language Models
| Task | Papers | Share |
|---|---|---|
| Language Modeling | 4 | 20.00% |
| Language Modelling | 4 | 20.00% |
| Translation | 3 | 15.00% |
| Image Classification | 2 | 10.00% |
| Machine Translation | 2 | 10.00% |
| Sentence | 1 | 5.00% |
| Few-Shot Image Classification | 1 | 5.00% |
| General Classification | 1 | 5.00% |
| Text Classification | 1 | 5.00% |