Learning Attentive Pairwise Interaction for Fine-Grained Classification

24 Feb 2020 · Peiqin Zhuang, Yali Wang, Yu Qiao

Fine-grained classification is a challenging problem due to subtle differences among highly confusable categories. Most approaches address this difficulty by learning discriminative representations of individual input images. In contrast, humans can effectively identify contrastive clues by comparing image pairs. Inspired by this fact, this paper proposes a simple but effective Attentive Pairwise Interaction Network (API-Net), which progressively recognizes a pair of fine-grained images through interaction. Specifically, API-Net first learns a mutual feature vector to capture semantic differences in the input pair. It then compares this mutual vector with each individual vector to generate a gate for each input image. These distinct gate vectors inherit mutual context on semantic differences, allowing API-Net to attentively capture contrastive clues through pairwise interaction between the two images. Additionally, we train API-Net end-to-end with a score-ranking regularization, which further generalizes API-Net by taking feature priorities into account. We conduct extensive experiments on five popular fine-grained classification benchmarks, where API-Net outperforms recent state-of-the-art methods: CUB-200-2011 (90.0%), Aircraft (93.9%), Stanford Cars (95.3%), Stanford Dogs (90.3%), and NABirds (88.1%).
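The pairwise interaction described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the authors' implementation: the mutual-vector MLP is reduced to a single assumed weight matrix `W_m`, the gate comparison uses an element-wise sigmoid as described in the abstract, and `score_rank_reg` is a hypothetical hinge form of the score-ranking regularizer with an assumed margin.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def api_interaction(x1, x2, W_m):
    """Sketch of API-Net's pairwise interaction (assumed shapes/names).

    x1, x2 : (d,) backbone feature vectors of the two paired images.
    W_m    : (d, 2d) weight of a hypothetical one-layer mapping that
             stands in for the mutual-vector MLP over the concatenated pair.
    Returns four attentive features: a self-activated and an
    other-activated view of each image.
    """
    # Mutual vector capturing the pair's shared semantic context.
    x_m = np.tanh(W_m @ np.concatenate([x1, x2]))
    # Gates: compare the mutual vector with each individual vector.
    g1 = sigmoid(x_m * x1)
    g2 = sigmoid(x_m * x2)
    # Attentive features via residual gating: each image is activated
    # by its own gate (self) and by its partner's gate (other).
    x1_self, x1_other = x1 + x1 * g1, x1 + x1 * g2
    x2_self, x2_other = x2 + x2 * g2, x2 + x2 * g1
    return x1_self, x1_other, x2_self, x2_other

def score_rank_reg(s_self, s_other, margin=0.05):
    # Hypothetical hinge term: the self-activated score on the true class
    # should exceed the other-activated one by an assumed margin.
    return max(0.0, s_other - s_self + margin)
```

In training, each attentive feature would be scored by a shared classifier; the cross-entropy over all four views plus the ranking term would then be minimized jointly.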

Task                               Dataset        Model    Accuracy  Global Rank
Fine-Grained Image Classification  CUB-200-2011   API-Net  90.0%     #24
Fine-Grained Image Classification  FGVC Aircraft  API-Net  93.9%     #13
Fine-Grained Image Classification  NABirds        API-Net  88.1%     #16
Fine-Grained Image Classification  Stanford Cars  API-Net  95.3%     #13
Fine-Grained Image Classification  Stanford Dogs  API-Net  90.3%     #12
