Grad-CAM guided channel-spatial attention module for fine-grained visual classification

24 Jan 2021  ·  Shuai Xu, Dongliang Chang, Jiyang Xie, Zhanyu Ma ·

Fine-grained visual classification (FGVC) is becoming an important research field, due to its wide applications and the rapid development of computer vision technologies. Current state-of-the-art (SOTA) methods in FGVC typically employ attention mechanisms to first locate semantic parts and then discover the subtle differences among them across classes. Channel-spatial attention mechanisms, which attend to discriminative channels and regions simultaneously, have significantly improved classification performance. However, existing attention modules are poorly guided, since part-based detectors in FGVC rely solely on the network's learning ability without supervision from part annotations. As obtaining such part annotations is labor-intensive, visual localization and explanation methods such as gradient-weighted class activation mapping (Grad-CAM) can instead be used to supervise the attention mechanism. We propose a Grad-CAM guided channel-spatial attention module for FGVC, which uses Grad-CAM to supervise and constrain the attention weights via the coarse localization maps it generates. To demonstrate the effectiveness of the proposed method, we conduct comprehensive experiments on three popular FGVC datasets: CUB-200-2011, Stanford Cars, and FGVC-Aircraft. The proposed method outperforms SOTA attention modules on the FGVC task, and visualizations of feature maps further demonstrate its superiority over SOTA approaches.
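The abstract describes a channel-spatial attention block whose spatial weights are additionally supervised by coarse Grad-CAM localization maps. The PyTorch sketch below illustrates one plausible reading of that idea; it is not the authors' implementation, and every module, function, and parameter name here (ChannelSpatialAttention, gradcam_map, attention_guidance_loss, the MSE guidance term) is an assumption made for illustration only.

```python
# Minimal sketch (NOT the authors' code): a channel-spatial attention block
# whose spatial attention map is encouraged to agree with a Grad-CAM-style
# coarse localization map for the ground-truth class.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSpatialAttention(nn.Module):
    """Channel attention (squeeze-excite style) followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, h, w = x.shape
        # Channel attention: global average pooling -> MLP -> sigmoid gate.
        ca = torch.sigmoid(self.channel_fc(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention: channel-wise mean/max pooling -> 7x7 conv -> sigmoid gate.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.spatial_conv(pooled))            # (b, 1, h, w)
        return x * sa, sa


def gradcam_map(features, logits, target_class):
    """Coarse Grad-CAM localization map (b, 1, h, w) for the target class."""
    grads = torch.autograd.grad(
        logits[torch.arange(logits.size(0)), target_class].sum(),
        features, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)               # per-channel weights
    cam = F.relu((weights * features).sum(dim=1, keepdim=True))
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-6)      # normalize to [0, 1]
    return cam


def attention_guidance_loss(spatial_attention, cam):
    """Constrain spatial attention toward the Grad-CAM map (assumed MSE form);
    both tensors are assumed to share the same spatial resolution."""
    return F.mse_loss(spatial_attention, cam.detach())
```

In this reading, the guidance term pushes the learned spatial attention toward the regions Grad-CAM attributes to the ground-truth class, so part localization no longer depends solely on the backbone's unsupervised learning ability; the exact form of the constraint used in the paper may differ.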

Task                               Dataset         Model     Metric    Value    Global Rank
Fine-Grained Image Classification  CUB-200-2011    Grad-CAM  Accuracy  88.45%   #45
Fine-Grained Image Classification  FGVC Aircraft   Grad-CAM  Accuracy  93.42%   #21
Fine-Grained Image Classification  Stanford Cars   Grad-CAM  Accuracy  94.41%   #41
