A Simple Non-i.i.d. Sampling Approach for Efficient Training and Better Generalization

While training on samples drawn independently from an identical distribution has been the de facto paradigm for optimizing image classification networks, humans learn new concepts progressively, in an easy-to-hard manner and on selectively chosen examples. Motivated by this observation, we investigate training paradigms in which samples are not drawn i.i.d. We propose a data sampling strategy, named Drop-and-Refresh (DaR), inspired by human learning behavior: easy samples are selectively dropped and refreshed only periodically. Our experiments show that the proposed DaR strategy maintains (and in many cases improves) predictive accuracy even when the training cost is reduced by 15%, across various datasets (CIFAR-10, CIFAR-100 and ImageNet) and backbone architectures (ResNets, DenseNets and MobileNets). Furthermore, and perhaps more importantly, we find that ImageNet models pre-trained with our DaR sampling strategy transfer better to downstream tasks, including object detection (+0.3 AP), instance segmentation (+0.3 AP), scene parsing (+0.5 mIoU) and human pose estimation (+0.6 AP). Our investigation encourages rethinking the connection between the sampling strategy used during ImageNet pre-training and the transferability of the learned features.
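The abstract only outlines the mechanism, so the following is a minimal sketch of what a drop-and-refresh sampler might look like, not the authors' implementation. It assumes per-sample training loss as the "easiness" signal, and the drop_ratio and refresh_every parameters are hypothetical choices, not values from the paper.

# Hypothetical sketch of Drop-and-Refresh (DaR) sampling based on the abstract:
# periodically drop "easy" samples from the active training pool and refresh
# (re-admit) all dropped samples at fixed intervals. The easiness criterion
# (low per-sample loss) and both hyperparameters are assumptions.
import numpy as np

class DropAndRefreshSampler:
    def __init__(self, num_samples, drop_ratio=0.15, refresh_every=5):
        self.num_samples = num_samples
        self.drop_ratio = drop_ratio          # fraction of easy samples to drop per epoch
        self.refresh_every = refresh_every    # refresh period, in epochs
        self.active = np.arange(num_samples)  # indices currently used for training

    def update(self, epoch, per_sample_loss):
        """per_sample_loss: array holding the latest loss for every sample."""
        if epoch % self.refresh_every == 0:
            # Refresh: restore the full dataset so dropped samples return.
            self.active = np.arange(self.num_samples)
        else:
            # Drop: treat the lowest-loss samples as "easy" and remove them
            # from the active pool for the coming epoch.
            n_drop = int(self.drop_ratio * len(self.active))
            order = np.argsort(per_sample_loss[self.active])  # ascending loss
            self.active = self.active[order[n_drop:]]

    def epoch_indices(self, rng=None):
        # Shuffle the (shrunken) active pool; the epoch is non-i.i.d. with
        # respect to the full dataset because easy samples are absent.
        rng = rng or np.random.default_rng()
        return rng.permutation(self.active)

Under this reading, the training cost falls because most epochs iterate over a reduced pool, while the periodic refresh keeps the model from forgetting the dropped (easy) examples.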
