Mixup is a data augmentation technique that generates weighted combinations of random pairs of images and their labels from the training data. Given two images and their ground truth labels $\left(x_{i}, y_{i}\right)$ and $\left(x_{j}, y_{j}\right)$, a synthetic training example $\left(\hat{x}, \hat{y}\right)$ is generated as:
$$ \hat{x} = \lambda x_{i} + \left(1 - \lambda\right) x_{j} $$
$$ \hat{y} = \lambda y_{i} + \left(1 - \lambda\right) y_{j} $$
where $\lambda \sim \text{Beta}\left(\alpha, \alpha\right)$ is independently sampled for each augmented example, and $\alpha$ is a hyperparameter (e.g., $\alpha = 0.2$).
Source: mixup: Beyond Empirical Risk Minimization
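As a concrete illustration of the equations above, here is a minimal NumPy sketch. The function name `mixup_batch` and the choice to form pairs by randomly permuting the batch are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Blend each example in a batch with a randomly chosen partner.

    x: float array of inputs, shape (batch, ...).
    y: one-hot labels, shape (batch, num_classes).
    Returns the mixed batch (x_hat, y_hat).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha, size=len(x))       # one lambda per example
    perm = rng.permutation(len(x))                  # random partner index j for each i
    lam_x = lam.reshape(-1, *([1] * (x.ndim - 1)))  # broadcast lambda over input dims
    x_hat = lam_x * x + (1.0 - lam_x) * x[perm]     # x_hat = lam * x_i + (1 - lam) * x_j
    y_hat = lam[:, None] * y + (1.0 - lam[:, None]) * y[perm]  # same blend for labels
    return x_hat, y_hat
```

In practice the blend is applied per minibatch inside the training loop; pairing partners via a permutation of the current batch (rather than drawing independent pairs from the whole dataset) is a common convenience, and labels must be one-hot (or otherwise dense) so they can be interpolated.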
| Task | Papers | Share |
|---|---|---|
| Image Classification | 71 | 9.23% |
| Domain Adaptation | 49 | 6.37% |
| Classification | 32 | 4.16% |
| Unsupervised Domain Adaptation | 27 | 3.51% |
| General Classification | 25 | 3.25% |
| Semantic Segmentation | 22 | 2.86% |
| Object Detection | 19 | 2.47% |
| Domain Generalization | 17 | 2.21% |
| Graph Classification | 12 | 1.56% |