C+1 Loss: Learn to Classify C Classes of Interest and the Background Class Differentially

29 Sep 2021 · Changhuai Chen, Xile Shen, Mengyu Ye, Yi Lu, Jun Che, ShiLiang Pu

A common problem throughout the classification area is to classify C+1 classes of samples: C semantically well-defined classes, which we call classes of interest, and a (C+1)-th semantically indeterminate class, which we call the background class. Although most classification algorithms use a softmax-based cross-entropy loss to supervise classifier training without distinguishing the background class from the classes of interest, this is unreasonable, because each class of interest has its own inherent characteristics while the background class does not. We argue that the background class should be treated differently from the classes of interest during training. Motivated by this, we first define the C+1 classification problem. We then propose three properties that a good C+1 classifier should have: basic discriminability, compactness, and background margin. Based on these, we define a uniform general C+1 loss composed of three parts, each driving the C+1 classifier toward one of those properties. Finally, we instantiate a C+1 loss and evaluate it on semantic segmentation, human parsing, and object detection tasks. The proposed approach shows its superiority over the traditional cross-entropy loss.
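The abstract does not spell out the instantiated loss, so the sketch below is only one plausible reading of its three parts: cross-entropy for basic discriminability, a pull toward per-class feature means for compactness, and a hinge-style penalty keeping class-of-interest scores below the background score on background samples for the background margin. The PyTorch function, the squared-distance compactness term, the hinge margin, the assumption that background is the last class index, and the weighting hyperparameters are all illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch of a C+1-style loss; not the paper's exact instantiation.
import torch
import torch.nn.functional as F


def c_plus_one_loss(logits, features, targets, class_means,
                    lambda_compact=0.1, lambda_margin=0.1, margin=1.0):
    """
    logits:      (N, C+1) raw scores; the last column is assumed to be background.
    features:    (N, D) penultimate-layer features.
    targets:     (N,) integer labels in [0, C], where C denotes background.
    class_means: (C, D) running feature means of the classes of interest.
    """
    C = logits.size(1) - 1  # background is assumed to be class index C

    # (1) Basic discriminability: ordinary cross-entropy over all C+1 classes.
    ce = F.cross_entropy(logits, targets)

    # (2) Compactness: pull features of classes of interest toward their class
    #     mean; background samples are excluded, since they have no inherent mean.
    interest_mask = targets != C
    if interest_mask.any():
        f = features[interest_mask]
        mu = class_means[targets[interest_mask]]
        compact = ((f - mu) ** 2).sum(dim=1).mean()
    else:
        compact = logits.new_zeros(())

    # (3) Background margin: on background samples, every class-of-interest
    #     logit should stay below the background logit by at least `margin`.
    bg_mask = targets == C
    if bg_mask.any():
        bg_logits = logits[bg_mask]
        max_interest = bg_logits[:, :C].max(dim=1).values
        bg_score = bg_logits[:, C]
        bg_margin = F.relu(max_interest - bg_score + margin).mean()
    else:
        bg_margin = logits.new_zeros(())

    return ce + lambda_compact * compact + lambda_margin * bg_margin
```

In this reading, only the cross-entropy term treats all C+1 classes uniformly; the compactness and margin terms act asymmetrically, which is the differential treatment of the background class the abstract describes.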
