Improved Regularization of Convolutional Neural Networks with Cutout

15 Aug 2017 · Terrance DeVries, Graham W. Taylor

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of the input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56%, 15.20%, and 1.30% test error, respectively. Code is available at https://github.com/uoguelph-mlrg/Cutout.
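The masking operation described in the abstract is simple enough to express in a few lines. Below is a minimal PyTorch-style sketch of a cutout transform; the class name and the parameters n_holes and length are illustrative defaults chosen here, not necessarily the signature used in the official repository.

```python
import numpy as np
import torch

class Cutout:
    """Randomly zero out square patches of an image tensor (C, H, W).

    A minimal sketch of the cutout technique: a fixed-size square
    region is masked out at a random location on every call, so each
    training epoch sees differently placed occlusions.
    """

    def __init__(self, n_holes=1, length=16):
        self.n_holes = n_holes  # number of square patches to mask out
        self.length = length    # side length of each square, in pixels

    def __call__(self, img):
        h, w = img.size(1), img.size(2)
        mask = np.ones((h, w), dtype=np.float32)

        for _ in range(self.n_holes):
            # Pick a random centre; the square may extend past the image
            # border, in which case only the visible part is masked.
            y = np.random.randint(h)
            x = np.random.randint(w)

            y1 = np.clip(y - self.length // 2, 0, h)
            y2 = np.clip(y + self.length // 2, 0, h)
            x1 = np.clip(x - self.length // 2, 0, w)
            x2 = np.clip(x + self.length // 2, 0, w)

            mask[y1:y2, x1:x2] = 0.0

        # Broadcast the (H, W) mask across all channels and apply it.
        mask = torch.from_numpy(mask).expand_as(img)
        return img * mask
```

In practice a transform like this would be appended to the training-time augmentation pipeline (e.g. after torchvision's ToTensor), so it composes naturally with the standard crop-and-flip augmentation the abstract mentions.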

Task                                  Dataset     Model               Metric Name         Metric Value  Global Rank
Domain Generalization                 ImageNet-A  Cutout (ResNet-50)  Top-1 accuracy %    4.4           #36
Out-of-Distribution Generalization    ImageNet-W  Cutout (ResNet-50)  IN-W Gap            -18.0         #1
                                                                      Carton Gap          +32           #1
Semi-Supervised Image Classification  STL-10      CutOut              Accuracy            87.26         #3
Image Classification                  STL-10      Cutout              Percentage correct  87.26         #47
Image Classification                  SVHN        Cutout              Percentage error    1.30          #7
Out-of-Distribution Generalization    UrbanCars   Cutout              BG Gap              -15.8         #1
                                                                      CoObj Gap           -10.4         #1
                                                                      BG+CoObj Gap        -71.4         #1
