Attention-Driven Dynamic Graph Convolutional Network for Multi-Label Image Recognition

ECCV 2020  ·  Jin Ye, Junjun He, Xiaojiang Peng, Wenhao Wu, Yu Qiao ·

Recent studies often exploit Graph Convolutional Networks (GCNs) to model label dependencies and improve recognition accuracy for multi-label image recognition. However, constructing a graph by counting label co-occurrence probabilities in the training data may degrade model generalizability, especially when test images contain rarely co-occurring objects. Our goal is to eliminate such bias and enhance the robustness of the learned features. To this end, we propose an Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) that dynamically generates a specific graph for each image. ADD-GCN adopts a Dynamic Graph Convolutional Network (D-GCN) to model the relations among content-aware category representations generated by a Semantic Attention Module (SAM). Extensive experiments on public multi-label benchmarks demonstrate the effectiveness of our method, which achieves mAPs of 85.2%, 96.0%, and 95.5% on MS-COCO, VOC2007, and VOC2012, respectively, outperforming current state-of-the-art methods by a clear margin. All code can be found at
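The two components described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the toy sizes, and the choice of a classifier-weight-based attention and a row-softmax adjacency are illustrative assumptions; the paper's SAM and D-GCN are learned modules inside a CNN.

```python
import numpy as np

def semantic_attention(features, classifier_w):
    """Hypothetical SAM sketch: pool spatial features into one
    content-aware representation per category via attention."""
    # features: (HW, D) flattened spatial features; classifier_w: (C, D).
    scores = features @ classifier_w.T                 # (HW, C) category activations
    attn = np.exp(scores - scores.max(axis=0, keepdims=True))
    attn /= attn.sum(axis=0, keepdims=True)            # softmax over locations
    return attn.T @ features                           # (C, D) category representations

def dynamic_graph_conv(v, w_theta, w_phi, w_out):
    """Hypothetical D-GCN sketch: build an image-specific adjacency
    from the category representations themselves, then propagate."""
    a = (v @ w_theta) @ (v @ w_phi).T                  # (C, C) dynamic correlation
    a = np.exp(a - a.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)                  # row-softmax adjacency
    return np.maximum(a @ v @ w_out, 0.0)              # graph conv + ReLU

rng = np.random.default_rng(0)
hw, d, c = 49, 16, 5                                   # toy 7x7 grid, 16-dim, 5 classes
feats = rng.standard_normal((hw, d))
cls_w = rng.standard_normal((c, d))
v = semantic_attention(feats, cls_w)                   # (5, 16)
out = dynamic_graph_conv(v, rng.standard_normal((d, d)),
                         rng.standard_normal((d, d)),
                         rng.standard_normal((d, d)))  # (5, 16)
print(v.shape, out.shape)
```

The key contrast with a static GCN is that the adjacency `a` is recomputed per image from the category representations, rather than fixed from dataset-wide co-occurrence counts.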



| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Multi-Label Classification | MS-COCO | ADD-GCN | mAP | 85.2 | # 22 |

