Complementary-Label Learning for Arbitrary Losses and Models

In contrast to the standard classification paradigm, where the true class is given for each training pattern, complementary-label learning uses only training patterns that are each equipped with a complementary label, which specifies one of the classes the pattern does not belong to. The goal of this paper is to derive a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models, a goal that all existing methods have failed to achieve. Not only is this beneficial for the learning stage; it also makes model and hyper-parameter selection (through cross-validation) possible without the need for any ordinarily labeled validation data, while using any linear or non-linear model and any convex or non-convex loss function. We further improve the risk estimator with a non-negative correction and a gradient-ascent trick, and demonstrate its superiority through experiments.
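To make the estimator concrete: when each complementary label is drawn uniformly from the K-1 incorrect classes, the ordinary classification risk can be rewritten as an expectation over complementarily labeled data, R(f) = E[ sum_{k=1}^{K} loss(f(x), k) - (K-1) * loss(f(x), ybar) ]. The sketch below implements this rewrite with cross-entropy as the example loss; it is an illustration under that uniform assumption, not the authors' code, and names such as `complementary_risk` are ours.

```python
# Minimal sketch (not the authors' implementation) of an unbiased risk
# estimator for complementary-label learning, assuming each complementary
# label is drawn uniformly from the K-1 incorrect classes. Any loss works;
# cross-entropy is used here for concreteness.
import torch
import torch.nn.functional as F

def complementary_risk(logits, comp_labels, num_classes):
    """R_hat(f) = mean_i [ sum_k loss(f(x_i), k) - (K-1) * loss(f(x_i), ybar_i) ]."""
    loss_all = -F.log_softmax(logits, dim=1)  # per-class losses, shape (n, K)
    loss_bar = loss_all.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return (loss_all.sum(dim=1) - (num_classes - 1) * loss_bar).mean()
```

Because the second term is subtracted, the empirical estimate can go negative, which is where the correction comes in. A simplified form of the non-negative correction and gradient-ascent trick is sketched below (the paper applies the correction to class-wise partial risks; this version clips the whole batch estimate, and `model`, `optimizer`, `x`, `ybar`, and `K` are assumed to be defined):

```python
# Simplified training step: descend on the unbiased estimate when it is
# non-negative; otherwise ascend so the estimate does not stay negative.
risk = complementary_risk(model(x), ybar, K)
optimizer.zero_grad()
if risk.item() >= 0:
    risk.backward()      # ordinary gradient descent
else:
    (-risk).backward()   # gradient-ascent correction
optimizer.step()
```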

Task                  Dataset          Model                         Metric Name  Metric Value  Global Rank
Image Classification  Kuzushiji-MNIST  Complementary-Label Learning  Accuracy     67.1          #22

Methods


No methods listed for this paper.