Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization

6 Oct 2020  ·  Ashiq Imran, Vassilis Athitsos

Fine-Grained Visual Categorization (FGVC) is a challenging topic in computer vision, characterized by large intra-class variation and subtle inter-class differences. In this paper, we tackle this problem in a weakly supervised manner: neural network models are fed additional training data generated by a data augmentation technique driven by a visual attention mechanism. We perform domain adaptive knowledge transfer by fine-tuning our base network model. We evaluate on six challenging and commonly used FGVC datasets, and show competitive accuracy improvements by combining attention-aware data augmentation with features from an InceptionV3 model pre-trained on large-scale datasets. Our method outperforms competing methods on several FGVC datasets and achieves competitive results on the others. Experimental studies show that transfer learning from large-scale datasets can be combined effectively with visual-attention-based data augmentation to obtain state-of-the-art results on several FGVC datasets. We present a comprehensive analysis of our experiments. Our method achieves state-of-the-art results on multiple fine-grained classification datasets, including the challenging CUB-200-2011 birds, Flowers-102, and FGVC-Aircraft datasets.
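The core augmentation idea — use an attention map to locate the most discriminative region and feed the crop back as extra training data — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the 0.5 threshold, and the toy inputs are all illustrative assumptions.

```python
import numpy as np

def attention_crop(image, attn_map, threshold=0.5):
    """Crop the region where normalized attention exceeds `threshold`.

    image: H x W x C array; attn_map: H x W array (unnormalized).
    Illustrative sketch of attention-guided cropping; the threshold
    value and fallback behaviour are assumptions, not from the paper.
    """
    # Normalize the attention map to [0, 1].
    a = attn_map - attn_map.min()
    a = a / (a.max() + 1e-8)
    mask = a >= threshold
    if not mask.any():
        # Degenerate attention map: fall back to the full image.
        return image
    # Bounding box of the above-threshold region.
    ys, xs = np.where(mask)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Toy example: attention concentrated in a 4x4 block.
img = np.arange(8 * 8 * 3).reshape(8, 8, 3)
attn = np.zeros((8, 8))
attn[2:6, 2:6] = 1.0
crop = attention_crop(img, attn)  # 4 x 4 x 3 crop of the salient block
```

In training, such crops would be resized back to the network input size and added to the batch alongside the original images before fine-tuning the pre-trained backbone.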


Results from the Paper


| Task                              | Dataset       | Model                     | Metric         | Value | Global Rank |
|-----------------------------------|---------------|---------------------------|----------------|-------|-------------|
| Fine-Grained Image Classification | CUB-200-2011  | DATL                      | Accuracy       | 91.2  | #7          |
| Fine-Grained Image Classification | FGVC Aircraft | ImageNet + iNat on WS-DAN | Top-1 Accuracy | 91.5  | #1          |
| Image Classification              | Flowers-102   | DAT                       | Accuracy       | 98.9% | #16         |
| Fine-Grained Image Classification | Food-101      | ImageNet + iNat on WS-DAN | Top-1 Accuracy | 88.7  | #2          |
| Image Classification              | Stanford Cars | ImageNet + iNat on WS-DAN | Accuracy       | 94.1  | #6          |
| Fine-Grained Image Classification | Stanford Dogs | ImageNet + iNat on WS-DAN | Accuracy       | 90%   | #13         |

Methods


No methods listed for this paper.