Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models

13 Jan 2020 · Yuzhou Cao, Shuqi Liu, Yitian Xu

A weakly-supervised learning framework named complementary-label learning has been proposed recently, in which each sample is equipped with a single complementary label denoting one of the classes the sample does not belong to. However, existing complementary-label learning methods cannot learn from easily accessible unlabeled samples or from samples with multiple complementary labels, both of which are more informative. In this paper, to remove these limitations, we propose a novel multi-complementary and unlabeled learning framework that allows unbiased estimation of the classification risk from samples with any number of complementary labels and from unlabeled samples, for arbitrary loss functions and models. We first give an unbiased estimator of the classification risk from samples with multiple complementary labels, and then further improve the estimator by incorporating unlabeled samples into the risk formulation. The estimation error bounds show that the proposed methods achieve the optimal parametric convergence rate. Finally, experiments on both linear and deep models demonstrate the effectiveness of our methods.
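To make the idea of an unbiased risk estimator concrete, here is a minimal sketch of the simplest special case: a single complementary label per sample, drawn uniformly from the K−1 wrong classes. Under that assumption, the ordinary classification risk can be rewritten so it is estimable from complementary labels alone: R(f) = E[ Σ_k ℓ_k(f(x)) − (K−1) ℓ_ȳ(f(x)) ], where ȳ is the complementary label. This is an illustrative toy implementation (function names, the uniform-label assumption, and the use of cross-entropy loss are ours), not the paper's full multi-complementary estimator:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, k):
    # Loss of class k: -log p_k under a softmax model.
    return -math.log(softmax(logits)[k])

def unbiased_cl_risk(logits_batch, comp_labels, num_classes):
    """Unbiased risk estimate from single complementary labels.

    Assumes each complementary label ybar is drawn uniformly from the
    num_classes - 1 incorrect classes. Implements the rewrite
        R(f) = E[ sum_k loss_k(f(x)) - (K - 1) * loss_ybar(f(x)) ],
    which is unbiased for the ordinary classification risk.
    """
    total = 0.0
    for logits, ybar in zip(logits_batch, comp_labels):
        total += sum(cross_entropy(logits, k) for k in range(num_classes))
        total -= (num_classes - 1) * cross_entropy(logits, ybar)
    return total / len(logits_batch)
```

A quick sanity check on the binary case: with K = 2, the per-sample estimate collapses to loss_0 + loss_1 − loss_ȳ, i.e. the ordinary loss on the one class that is *not* the complementary label, exactly as expected. Note that because of the negative term, the estimate can dip below zero on a finite batch; handling that (e.g. by risk correction) and extending to multiple complementary labels and unlabeled data is precisely what the paper's framework addresses.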

Task                 | Dataset         | Model                 | Metric   | Value | Global Rank
---------------------|-----------------|-----------------------|----------|-------|------------
Image Classification | Kuzushiji-MNIST | linear/flexible model | Accuracy | 79.90 | #20
Image Classification | Kuzushiji-MNIST | FWD                   | Accuracy | 79.5  | #21
