See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification

26 Jan 2019 · Tao Hu, Honggang Qi, Qingming Huang, Yan Lu

Data augmentation is usually adopted to increase the amount of training data, prevent overfitting, and improve the performance of deep models. However, in practice, random data augmentation, such as random image cropping, is inefficient and may introduce uncontrolled background noise. In this paper, we propose the Weakly Supervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps that represent the object's discriminative parts via weakly supervised learning. Next, we augment the image guided by these attention maps, through attention cropping and attention dropping. The proposed WS-DAN improves classification accuracy in two ways. In the first stage, images can be seen better, since features from more discriminative parts are extracted. In the second stage, attention regions provide an accurate location of the object, which enables our model to look at the object closer and further improve performance. Comprehensive experiments on common fine-grained visual classification datasets show that WS-DAN surpasses state-of-the-art methods, demonstrating its effectiveness.
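The two attention-guided augmentations described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the attention map has already been normalized and upsampled to the image's spatial size, and the threshold parameters (here named theta_c and theta_d) are illustrative.

```python
import numpy as np

def attention_crop(image, attn, theta_c=0.5):
    """Attention cropping (sketch): crop the bounding box of the region
    where the attention map exceeds a fraction of its maximum, so the
    model can 'look closer' at the discriminative part.
    image: (H, W, C) array; attn: (H, W) array resized to the image."""
    mask = attn >= theta_c * attn.max()
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1]

def attention_drop(image, attn, theta_d=0.5):
    """Attention dropping (sketch): zero out the high-attention region,
    encouraging the network to find other discriminative parts."""
    mask = attn >= theta_d * attn.max()
    return image * (~mask)[..., None]
```

In WS-DAN these operations are applied per training image using one randomly selected attention map, and the cropped/dropped images are fed back as additional training data.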

Task                              | Dataset       | Model  | Metric   | Value | Global Rank
Fine-Grained Image Classification | CUB-200-2011  | WS-DAN | Accuracy | 89.4% | #12
Fine-Grained Image Classification | FGVC Aircraft | WS-DAN | Accuracy | 93.0% | #28
Fine-Grained Image Classification | Stanford Cars | WS-DAN | Accuracy | 94.5% | #38

