Learn from Each Other to Classify Better: Cross-layer Mutual Attention Learning for Fine-grained Visual Classification
Fine-grained visual classification (FGVC) is valuable yet challenging. The difficulty of FGVC mainly lies in its intrinsic inter-class similarity, intra-class variation, and limited training data. Moreover, with the popularity of deep convolutional neural networks, researchers have mainly relied on deep, abstract, semantic information for FGVC, while shallow, detailed information has been neglected. This work proposes a cross-layer mutual attention learning network (CMAL-Net) to solve the above problems. Specifically, it views the shallow to deep layers of a CNN as “experts” knowledgeable about different perspectives. Each expert gives a category prediction and an attention region indicating the clues it has found. Attention regions are treated as information carriers among the experts, bringing three benefits: (i) helping the model focus on discriminative regions; (ii) providing additional training data; (iii) allowing the experts to learn from each other to improve overall performance. CMAL-Net achieves state-of-the-art performance on three competitive datasets: FGVC-Aircraft, Stanford Cars, and Food-11.
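The sketch below is a minimal, illustrative PyTorch rendering of the idea in the abstract, not the authors' implementation: classification heads ("experts") are attached to the shallow, middle, and deep stages of a backbone, each head outputs class logits plus an attention map, and the attention peak is cropped and re-fed as an extra training sample. The ResNet-50 backbone, the channel-mean attention map, the fixed-size crop heuristic, and the class count and image size are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class ExpertHead(nn.Module):
    """One 'expert': predicts a class and exposes an attention map."""

    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 512, kernel_size=1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, feat):
        x = F.relu(self.conv(feat))                       # (B, 512, H, W)
        attn = x.mean(dim=1)                              # channel-mean attention map, (B, H, W)
        logits = self.fc(F.adaptive_avg_pool2d(x, 1).flatten(1))
        return logits, attn


def crop_from_attention(images, attn, out_size):
    """Crop a square region around each attention peak (illustrative heuristic)."""
    B, _, H, W = images.shape                             # assumes square inputs (H == W)
    attn = F.interpolate(attn.unsqueeze(1), size=(H, W), mode="bilinear", align_corners=False)
    half = H // 4
    crops = []
    for b in range(B):
        idx = int(attn[b, 0].argmax())
        cy, cx = idx // W, idx % W
        y0 = max(0, min(cy - half, H - 2 * half))
        x0 = max(0, min(cx - half, W - 2 * half))
        crop = images[b:b + 1, :, y0:y0 + 2 * half, x0:x0 + 2 * half]
        crops.append(F.interpolate(crop, size=(out_size, out_size), mode="bilinear",
                                   align_corners=False))
    return torch.cat(crops, dim=0)


class CrossLayerExperts(nn.Module):
    """Experts attached to the shallow, middle, and deep stages of a ResNet-50."""

    def __init__(self, num_classes):
        super().__init__()
        backbone = resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1)
        self.stage2, self.stage3, self.stage4 = backbone.layer2, backbone.layer3, backbone.layer4
        # One head per stage; channel counts follow ResNet-50 (512 / 1024 / 2048).
        self.heads = nn.ModuleList([ExpertHead(c, num_classes) for c in (512, 1024, 2048)])

    def forward(self, x):
        f2 = self.stage2(self.stem(x))
        f3 = self.stage3(f2)
        f4 = self.stage4(f3)
        # Each expert gives its own prediction and attention region.
        return [head(f) for head, f in zip(self.heads, (f2, f3, f4))]


# Toy training step: every expert is supervised on the raw image, and the
# deepest expert's attention crop is re-fed as an extra training sample.
model = CrossLayerExperts(num_classes=196)                 # e.g. Stanford Cars has 196 classes
images, labels = torch.randn(2, 3, 448, 448), torch.randint(0, 196, (2,))

outputs = model(images)
loss = sum(F.cross_entropy(logits, labels) for logits, _ in outputs)
crops = crop_from_attention(images, outputs[-1][1].detach(), out_size=448)
loss = loss + sum(F.cross_entropy(logits, labels) for logits, _ in model(crops))
loss.backward()
```

This toy loop only re-feeds the deepest expert's crop; the abstract describes attention regions being exchanged among all experts as information carriers, so a fuller version would generate and share crops from every expert.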
Results from the Paper
Ranked #2 on Fine-Grained Image Classification on Stanford Cars (using extra training data)