Self-produced Guidance for Weakly-supervised Object Localization

Weakly supervised methods usually generate localization results from attention maps produced by classification networks. However, these attention maps highlight only the most discriminative parts of the object, which are small and sparse. We propose to generate Self-produced Guidance (SPG) masks that separate the foreground, i.e. the object of interest, from the background, providing the classification network with spatial correlation information among pixels. A stagewise approach is proposed in which high-confidence object regions within the attention maps are used to progressively learn the SPG masks. The masks are then used as auxiliary pixel-level supervision to facilitate the training of the classification network. Extensive experiments on ILSVRC demonstrate that SPG is effective in producing high-quality object localization maps. In particular, the proposed SPG achieves a Top-1 localization error rate of 43.83% on the ILSVRC validation set, a new state-of-the-art result.
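The mechanism described above can be summarized as thresholding the attention maps into confident foreground and background regions, then supervising an auxiliary pixel-level branch with the resulting mask. The sketch below illustrates this idea in PyTorch; the threshold values, function names, and the use of a single foreground-logit branch are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of SPG-style mask generation and auxiliary supervision (PyTorch).
# Thresholds (delta_fg, delta_bg) and function names are assumptions for illustration.
import torch
import torch.nn.functional as F

def make_spg_mask(attention, delta_fg=0.5, delta_bg=0.05):
    """Turn a class attention map into a ternary self-produced guidance mask.

    attention: (N, 1, H, W) attention map normalized to [0, 1].
    Returns a mask with 1 = foreground, 0 = background, 255 = ignored
    (ambiguous pixels excluded from the auxiliary loss).
    """
    mask = torch.full_like(attention, 255.0)   # undefined by default
    mask[attention >= delta_fg] = 1.0          # high-confidence object region
    mask[attention <= delta_bg] = 0.0          # high-confidence background
    return mask

def auxiliary_seg_loss(fg_logits, spg_mask):
    """Pixel-level binary cross-entropy computed on confident pixels only.

    fg_logits: (N, 1, H, W) foreground logits from an auxiliary branch.
    spg_mask:  mask produced by make_spg_mask.
    """
    valid = spg_mask != 255.0                  # drop ambiguous pixels
    if valid.sum() == 0:
        return fg_logits.new_zeros(())
    return F.binary_cross_entropy_with_logits(fg_logits[valid], spg_mask[valid])
```

In a stagewise setup, masks produced from one stage's attention maps would supervise the next, so the guidance progressively covers more of the object than the initial discriminative parts.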

ECCV 2018

Datasets


Task                                   | Dataset      | Model | Metric            | Value | Global Rank
Weakly-Supervised Object Localization  | CUB-200-2011 | SPG   | Top-1 Error Rate  | 53.36 | #3
Weakly-Supervised Object Localization  | CUB-200-2011 | SPG   | Top-5 Error       | 42.28 | #1
Weakly-Supervised Object Localization  | CUB-200-2011 | SPG   | MaxBoxAccV2       | 60.4  | #4
Weakly-Supervised Object Localization  | ILSVRC 2015  | SPG   | Top-1 Error Rate  | 51.40 | #1
Weakly-Supervised Object Localization  | ILSVRC 2016  | SPG   | Top-5 Error       | 40.00 | #1

Methods


No methods listed for this paper.