mixup: Beyond Empirical Risk Minimization

Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
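The convex combination described above is x̃ = λ·x_i + (1 − λ)·x_j and ỹ = λ·y_i + (1 − λ)·y_j, with the mixing coefficient λ drawn from a Beta(α, α) distribution. The snippet below is a minimal NumPy sketch of that idea applied to one training batch; the function name `mixup_batch`, its default `alpha`, and the random-partner pairing are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of mixup on a single batch (illustrative, not the paper's reference code).
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return convex combinations of a batch of inputs and their one-hot labels.

    x: array of shape (batch, ...) with input examples
    y: array of shape (batch, num_classes) with one-hot labels
    alpha: Beta-distribution parameter; lambda ~ Beta(alpha, alpha)
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    index = rng.permutation(len(x))         # pair each example with a random partner
    x_mixed = lam * x + (1.0 - lam) * x[index]
    y_mixed = lam * y + (1.0 - lam) * y[index]
    return x_mixed, y_mixed

# Example: mix a batch of 4 two-dimensional inputs with 3-class one-hot labels.
x = np.random.rand(4, 2)
y = np.eye(3)[np.array([0, 1, 2, 1])]
x_mixed, y_mixed = mixup_batch(x, y, alpha=0.2)
```

In practice the mixed batch (x_mixed, y_mixed) simply replaces the original batch in the usual training loop, so mixup adds essentially no computational overhead.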

Published at ICLR 2018.
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | CIFAR-10 | DenseNet-BC-190 + Mixup | Percentage correct | 97.3 | #78 |
| Image Classification | CIFAR-10 | DenseNet-BC-190 + Mixup | Params | 25.6M | #211 |
| Image Classification | CIFAR-100 | DenseNet-BC-190 + Mixup | Percentage correct | 83.20 | #89 |
| Semi-Supervised Image Classification | CIFAR-10, 250 Labels | MixUp | Percentage error | 47.43 | #22 |
| Domain Generalization | ImageNet-A | Mixup (ResNet-50) | Top-1 accuracy % | 6.6 | #35 |
| Out-of-Distribution Generalization | ImageNet-W | Mixup (ResNet-50) | IN-W Gap | -18.6 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | Mixup (ResNet-50) | Carton Gap | +38 | #1 |
| Image Classification | Kuzushiji-MNIST | PreActResNet-18 + Input Mixup | Accuracy | 98.41 | #16 |
| Out-of-Distribution Generalization | UrbanCars | Mixup | BG Gap | -12.6 | #1 |
| Out-of-Distribution Generalization | UrbanCars | Mixup | CoObj Gap | -9.3 | #1 |
| Out-of-Distribution Generalization | UrbanCars | Mixup | BG+CoObj Gap | -61.8 | #1 |

Results from Other Papers


| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Semi-Supervised Image Classification | SVHN, 250 Labels | MixUp | Accuracy | 60.03 | #15 |
