Nesterov Accelerated Gradient (NAG) is a momentum-based SGD optimizer that "looks ahead" to where the parameters will be in order to calculate the gradient ex post rather than ex ante:
$$ v_{t} = \gamma v_{t-1} - \eta \nabla_{\theta} J\left(\theta_{t-1} + \gamma v_{t-1}\right) $$ $$ \theta_{t} = \theta_{t-1} + v_{t} $$ $$ \gamma, \eta \in \mathbb{R}^{+} $$
As with SGD with momentum, $\gamma$ is usually set to $0.9$; both the momentum coefficient $\gamma$ and the learning rate $\eta$ are typically less than $1$.
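As a minimal sketch of the update rule above, the following NumPy function performs one NAG step; the names (`nag_step`, `grad_fn`) and the toy quadratic objective are illustrative, not part of any particular library:

```python
import numpy as np

def nag_step(theta, v, grad_fn, lr=0.01, gamma=0.9):
    """One Nesterov accelerated gradient step (illustrative sketch).

    theta   : current parameters (np.ndarray)
    v       : current velocity (np.ndarray)
    grad_fn : callable returning the gradient of J at a given point
    lr      : learning rate (eta)
    gamma   : momentum coefficient
    """
    # The gradient is evaluated at the look-ahead point theta + gamma * v,
    # not at theta itself -- the only difference from classical momentum.
    lookahead_grad = grad_fn(theta + gamma * v)
    v = gamma * v - lr * lookahead_grad
    theta = theta + v
    return theta, v

# Toy usage: minimize J(theta) = ||theta||^2 / 2, whose gradient is theta.
theta = np.array([5.0, -3.0])
v = np.zeros_like(theta)
for _ in range(100):
    theta, v = nag_step(theta, v, grad_fn=lambda x: x, lr=0.1, gamma=0.9)
print(theta)  # approaches [0, 0]
```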
The intuition is that the standard momentum method first computes the gradient at the current location and then takes a big jump in the direction of the updated accumulated gradient. Nesterov momentum, in contrast, first makes a big jump in the direction of the previous accumulated gradient, then measures the gradient where it ends up and makes a correction. The idea is that it is better to correct a mistake after you have made it.
Image Source: Geoff Hinton lecture notes
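In practice, deep learning frameworks usually expose Nesterov momentum as a flag on their standard SGD optimizer rather than as a separate class; for example, PyTorch's `torch.optim.SGD` accepts `nesterov=True` (the model below is a placeholder, and framework implementations may use a reformulated but equivalent version of the update above):

```python
import torch

model = torch.nn.Linear(10, 1)  # placeholder model

# Classical momentum: gradient at the current point, then the momentum jump.
sgd_momentum = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Nesterov momentum: the look-ahead variant described above.
sgd_nesterov = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, nesterov=True)
```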
Task | Papers | Share |
---|---|---|
Image Classification | 8 | 14.81% |
General Classification | 3 | 5.56% |
Object Recognition | 3 | 5.56% |
Denoising | 2 | 3.70% |
Semantic Segmentation | 2 | 3.70% |
Bilevel Optimization | 1 | 1.85% |
Text Classification | 1 | 1.85% |
Time Series Forecasting | 1 | 1.85% |
Time Series Prediction | 1 | 1.85% |