Forward gradients are unbiased estimators of the gradient $\nabla f(\theta)$ for a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, given by $g(\theta) = \langle \nabla f(\theta) , v \rangle v$.
Here $v = (v_1, \ldots, v_n)$ is a random perturbation vector; for $g(\theta)$ to be an unbiased estimator of $\nabla f(\theta)$, its components $v_i$ must be independent, with zero mean and unit variance (e.g., $v \sim \mathcal{N}(0, I_n)$).
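Under these conditions the estimator is unbiased: independence, zero mean, and unit variance give $\mathbb{E}[v v^\top] = I$, so

$$\mathbb{E}[g(\theta)] = \mathbb{E}\big[(v v^\top)\,\nabla f(\theta)\big] = \mathbb{E}[v v^\top]\,\nabla f(\theta) = \nabla f(\theta).$$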
Forward gradients can be computed with a single JVP (Jacobian-vector product), which enables the use of forward-mode automatic differentiation instead of the usual reverse mode: forward mode evaluates the directional derivative alongside the function in a single forward pass, whereas reverse mode requires a separate backward pass and the storage of intermediate activations. See the sketch below.
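As a minimal sketch of this idea in JAX (the helper `forward_gradient` and the quadratic test objective are illustrative, not from the paper):

```python
import jax
import jax.numpy as jnp

def f(theta):
    # Illustrative scalar objective: f(theta) = sum(theta**2),
    # whose true gradient is 2 * theta.
    return jnp.sum(theta ** 2)

def forward_gradient(f, theta, key):
    # Sample v with i.i.d. standard normal components
    # (independent, zero mean, unit variance).
    v = jax.random.normal(key, theta.shape)
    # A single jvp evaluates f(theta) and the directional derivative
    # <grad f(theta), v> in one forward pass, with no backward pass.
    _, dir_deriv = jax.jvp(f, (theta,), (v,))
    # Forward gradient: g(theta) = <grad f(theta), v> * v.
    return dir_deriv * v

theta = jnp.array([1.0, -2.0, 3.0])
key = jax.random.PRNGKey(0)
g = forward_gradient(f, theta, key)  # unbiased estimate of 2 * theta
```

A single sample of $g(\theta)$ can be noisy, and its variance grows with the dimension $n$; averaging over several independent draws of $v$ reduces the variance at the cost of additional forward passes.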
Source: Gradients without Backpropagation
| Task | Papers | Share |
|---|---|---|
| Benchmarking | 1 | 20.00% |
| Imitation Learning | 1 | 20.00% |
| Offline RL | 1 | 20.00% |
| Reinforcement Learning (RL) | 1 | 20.00% |
| Memorization | 1 | 20.00% |