Manifold Mixup

Manifold Mixup is a regularization method that encourages neural networks to predict less confidently on interpolations of hidden representations. It leverages semantic interpolations as an additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance.

Consider training a deep neural network $f\left(x\right) = f_{k}\left(g_{k}\left(x\right)\right)$, where $g_{k}$ denotes the part of the neural network mapping the input data to the hidden representation at layer $k$, and $f_{k}$ denotes the part mapping this hidden representation to the output $f\left(x\right)$.
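This decomposition can be made concrete in code. Below is a minimal sketch (not the authors' reference implementation), assuming a small feed-forward PyTorch classifier; the class name, layer sizes, and method names are illustrative:

```python
import torch
import torch.nn as nn

class SplittableMLP(nn.Module):
    """A toy classifier whose forward pass can be split at layer k,
    so that f(x) = f_k(g_k(x))."""

    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
            nn.Linear(hidden, n_classes),
        ])

    def g(self, x, k):
        # g_k: map the input to the hidden representation at layer k;
        # k = 0 returns the input itself, i.e. g_0(x) = x
        for block in self.blocks[:k]:
            x = block(x)
        return x

    def f(self, h, k):
        # f_k: map the hidden representation at layer k to the output
        for block in self.blocks[k:]:
            h = block(h)
        return h

    def forward(self, x):
        # full forward pass: all blocks applied in order
        return self.f(x, 0)
```

Training $f$ using Manifold Mixup is then performed in five steps (a sketch of the complete training step follows the list):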

(1) Select a random layer $k$ from a set of eligible layers $S$ in the neural network. This set may include the input layer $g_{0}\left(x\right) = x$, in which case Manifold Mixup reduces to Input Mixup.

(2) Process two random data minibatches $\left(x, y\right)$ and $\left(x', y'\right)$ as usual, until reaching layer $k$. This provides us with two intermediate minibatches $\left(g_{k}\left(x\right), y\right)$ and $\left(g_{k}\left(x'\right), y'\right)$.

(3) Perform Input Mixup on these intermediate minibatches. This produces the mixed minibatch:

$$ \left(\tilde{g}_{k}, \tilde{y}\right) = \left(\text{Mix}_{\lambda}\left(g_{k}\left(x\right), g_{k}\left(x'\right)\right), \text{Mix}_{\lambda}\left(y, y'\right)\right), $$

where $\text{Mix}_{\lambda}\left(a, b\right) = \lambda \cdot a + \left(1 - \lambda\right) \cdot b$. Here, $\left(y, y'\right)$ are one-hot labels, and the mixing coefficient $\lambda$ is sampled from $\text{Beta}\left(\alpha, \alpha\right)$, as in Input Mixup. For example, $\alpha = 1.0$ is equivalent to sampling $\lambda \sim U\left(0, 1\right)$.

(4) Continue the forward pass from layer $k$ to the output, using the mixed minibatch $\left(\tilde{g}_{k}, \tilde{y}\right)$.

(5) Use this output to compute the loss value and the gradients that update all the parameters of the neural network.
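Putting the five steps together, a single training step might look as follows. This is a sketch under the assumptions above, built on the hypothetical `SplittableMLP`; `eligible_layers` and `alpha` are hyperparameters chosen for illustration, not values prescribed by the paper:

```python
import numpy as np
import torch.nn.functional as F

def manifold_mixup_step(model, x, y, x2, y2, eligible_layers=(0, 1, 2), alpha=2.0):
    # (1) select a random eligible layer k (k = 0 recovers Input Mixup)
    k = int(np.random.choice(eligible_layers))
    # (2) process both minibatches as usual until reaching layer k
    h, h2 = model.g(x, k), model.g(x2, k)
    # (3) Input Mixup on the intermediate minibatches:
    #     Mix_lambda(a, b) = lambda * a + (1 - lambda) * b
    lam = float(np.random.beta(alpha, alpha))
    h_mix = lam * h + (1.0 - lam) * h2
    # (4) continue the forward pass from layer k to the output
    logits = model.f(h_mix, k)
    # (5) loss against the mixed labels; with integer class targets, this
    #     weighted cross-entropy equals the cross-entropy against
    #     Mix_lambda of the one-hot labels y and y'
    loss = lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y2)
    return loss
```

Calling `loss.backward()` on the returned value and taking an optimizer step then updates all the parameters of the network, since gradients flow through both $f_{k}$ and, via the mixing point, $g_{k}$.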

Source: Manifold Mixup: Better Representations by Interpolating Hidden States
