Recovering Accurate Labeling Information from Partially Valid Data for Effective Multi-Label Learning

Partial Multi-Label Learning (PML) aims to induce a multi-label predictor from data with noisy supervision, where each training instance is associated with a set of candidate labels of which only a subset is valid. To cope with this label noise, existing PML methods typically recover the ground-truth labels by estimating the ground-truth confidence of each candidate label, i.e., the likelihood that a candidate label is a ground-truth one. However, they neglect the information carried by non-candidate labels, which can also contribute to ground-truth label recovery. In this paper, we propose to recover the ground-truth labels, i.e., estimate the ground-truth confidences, from a label enrichment composed of the relevance degrees of candidate labels and the irrelevance degrees of non-candidate labels. Based on this observation, we develop a novel two-stage PML method, namely Partial Multi-Label Learning with Label Enrichment-Recovery, which in the first stage estimates the label enrichment with unconstrained label propagation, and then jointly learns the ground-truth confidences and the multi-label predictor from the estimated label enrichment. Experimental results validate that the proposed method outperforms state-of-the-art PML methods.
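The abstract does not spell out the propagation scheme, so the following is only a minimal sketch of the first stage under stated assumptions: a kNN affinity graph over the features, a signed initialization (+1 for candidate labels, -1 for non-candidate labels, so that non-candidate information also propagates), and a standard iterative propagation update without clipping or projection (hence "unconstrained"). The function name `estimate_label_enrichment` and the parameters `alpha`, `n_neighbors`, and `n_iters` are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph


def estimate_label_enrichment(X, Y_candidate, n_neighbors=10, alpha=0.8, n_iters=50):
    """Illustrative label propagation for label-enrichment estimation.

    X            : (n, d) feature matrix
    Y_candidate  : (n, q) binary matrix (1 = candidate label, 0 = non-candidate label)
    Returns an (n, q) real-valued enrichment matrix whose positive entries act as
    relevance degrees of candidate labels and whose negative entries act as
    irrelevance degrees of non-candidate labels.
    """
    # Build a row-normalized kNN affinity matrix as the propagation graph.
    W = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity",
                         include_self=False).toarray()
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

    # Signed initialization: candidate labels start at +1, non-candidates at -1.
    F0 = np.where(Y_candidate > 0, 1.0, -1.0)
    F = F0.copy()

    # Unconstrained propagation: F is never clipped or projected between updates.
    for _ in range(n_iters):
        F = alpha * (W @ F) + (1 - alpha) * F0
    return F
```

In a full pipeline, the second stage would normalize the rows of the returned matrix over the candidate labels to obtain ground-truth confidences and train the multi-label predictor against them; that joint learning step is specific to the paper and is not reproduced here.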
