AdaDelta is a stochastic optimization technique that provides a per-dimension learning rate for SGD. It extends Adagrad, aiming to reduce Adagrad's aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, AdaDelta restricts the window of accumulated past gradients to a fixed size $w$.
Rather than inefficiently storing the $w$ previous squared gradients, the sum of squared gradients is recursively defined as a decaying average of all past squared gradients. The running average $E\left[g^{2}\right]_{t}$ at time step $t$ then depends only on the previous average and the current gradient:
$$E\left[g^{2}\right]_{t} = \gamma E\left[g^{2}\right]_{t-1} + \left(1-\gamma\right)g^{2}_{t}$$
Usually $\gamma$ is set to around $0.9$. Rewriting SGD updates in terms of the parameter update vector:
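The decaying average above can be sketched in a few lines; the gradient sequence here is purely illustrative:

```python
# Exponentially decaying average of squared gradients with gamma = 0.9.
# A minimal sketch; `grads` is a made-up sequence of scalar gradients.
gamma = 0.9
E_g2 = 0.0
grads = [0.5, -0.3, 0.8, -0.1]
for g in grads:
    E_g2 = gamma * E_g2 + (1 - gamma) * g**2
# Old contributions decay geometrically, so E_g2 approximates a sliding
# window over recent squared gradients without storing any of them.
```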
$$ \Delta\theta_{t} = -\eta\cdot g_{t}$$ $$\theta_{t+1} = \theta_{t} + \Delta\theta_{t}$$
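The vanilla SGD update above can be sketched as follows; the objective $f(x) = x^2$ and the step size are illustrative choices, not from the source:

```python
# Plain SGD on f(x) = x^2 with a hand-picked learning rate eta.
eta = 0.1
theta = 1.0
for _ in range(50):
    g = 2 * theta            # gradient of x^2 at theta
    theta = theta + (-eta * g)  # delta_theta = -eta * g
# theta shrinks geometrically toward the minimum at 0.
```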
AdaDelta takes the form:
$$ \Delta\theta_{t} = -\frac{\eta}{\sqrt{E\left[g^{2}\right]_{t} + \epsilon}}\, g_{t} $$
The full method goes one step further and replaces $\eta$ with a decaying RMS of the previous parameter updates, giving
$$ \Delta\theta_{t} = -\frac{RMS\left[\Delta\theta\right]_{t-1}}{RMS\left[g\right]_{t}}\, g_{t} $$
where $RMS\left[g\right]_{t} = \sqrt{E\left[g^{2}\right]_{t} + \epsilon}$. Because $\eta$ is eliminated from the update rule, the main advantage of AdaDelta is that no default learning rate needs to be set.
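A full AdaDelta step, with both running averages, can be sketched in NumPy. The quadratic objective used in the usage example is a hypothetical choice for illustration:

```python
import numpy as np

def adadelta_step(theta, grad, E_g2, E_dx2, gamma=0.9, eps=1e-6):
    """One AdaDelta update; a minimal sketch assuming NumPy arrays.

    E_g2:  running average of squared gradients, E[g^2]
    E_dx2: running average of squared parameter updates, E[dx^2]
    """
    E_g2 = gamma * E_g2 + (1 - gamma) * grad**2
    # The ratio of RMS values replaces the global learning rate eta.
    dx = -np.sqrt(E_dx2 + eps) / np.sqrt(E_g2 + eps) * grad
    E_dx2 = gamma * E_dx2 + (1 - gamma) * dx**2
    return theta + dx, E_g2, E_dx2

# Illustrative usage: minimize f(x) = x^2 starting from x = 1.0.
theta = np.array([1.0])
E_g2 = np.zeros_like(theta)
E_dx2 = np.zeros_like(theta)
for _ in range(500):
    grad = 2 * theta  # gradient of x^2
    theta, E_g2, E_dx2 = adadelta_step(theta, grad, E_g2, E_dx2)
```

Note the characteristic behavior: because `E_dx2` starts at zero, the first steps are tiny (on the order of `sqrt(eps)`), and the effective step size adapts as the accumulators fill in.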
Source: ADADELTA: An Adaptive Learning Rate Method