Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection

Weakly Supervised Object Detection (WSOD) has emerged as an effective tool to train object detectors using only image-level category labels. However, without object-level labels, WSOD detectors are prone to placing bounding boxes on salient objects, clusters of objects, and discriminative object parts rather than on whole objects. Moreover, image-level category labels do not enforce consistent object detection across different transformations of the same image. To address these issues, we propose a Comprehensive Attention Self-Distillation (CASD) training approach for WSOD. To balance feature learning among all object instances, CASD computes a comprehensive attention map aggregated from multiple transformations and feature layers of the same image. To enforce consistent spatial supervision on objects, CASD conducts self-distillation on the WSOD network, such that the comprehensive attention is approximated simultaneously by the attention from each transformation and feature layer of the same image. CASD produces new state-of-the-art WSOD results on standard benchmarks such as PASCAL VOC 2007/2012 and MS-COCO.
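The aggregation-then-distillation mechanism described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's exact formulation: the channel-average attention, function names, and the two-layer toy example are assumptions made here for clarity, and the real method also aggregates over image transformations (e.g. flips and scales) and treats the comprehensive map as a fixed teacher during distillation.

```python
import numpy as np

def attention_map(feature_map):
    # Illustrative attention: channel-average pooling over a (C, H, W)
    # feature map, followed by min-max normalization to [0, 1].
    att = feature_map.mean(axis=0)
    return (att - att.min()) / (att.max() - att.min() + 1e-8)

def comprehensive_attention(maps):
    # Aggregate per-transformation / per-layer attention maps by
    # element-wise maximum, so the result covers every activated region.
    return np.maximum.reduce(maps)

def casd_loss(maps, comp):
    # Self-distillation objective (sketch): drive each individual map
    # toward the comprehensive map, which is held fixed as the target.
    return float(np.mean([np.mean((m - comp) ** 2) for m in maps]))

# Toy example: attention from two feature layers of the same image.
rng = np.random.default_rng(0)
a_low = attention_map(rng.random((8, 4, 4)))    # shallow-layer features
a_high = attention_map(rng.random((16, 4, 4)))  # deep-layer features
comp = comprehensive_attention([a_low, a_high])
loss = casd_loss([a_low, a_high], comp)
```

The element-wise maximum keeps any region activated in at least one view or layer, which is what lets the aggregated map cover full objects rather than only the most discriminative part that a single view tends to highlight.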

NeurIPS 2020
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Weakly Supervised Object Detection | MS-COCO | CASD (ResNet50) | mAP | 13.9 | #1 |
| Weakly Supervised Object Detection | MS-COCO | CASD (ResNet50) | mAP@50 | 27.8 | #1 |
| Weakly Supervised Object Detection | PASCAL VOC 2007 | CASD (VGG16) | mAP | 56.8 | #2 |
| Weakly Supervised Object Detection | PASCAL VOC 2012 test | CASD (VGG16) | mAP | 53.6 | #2 |
