
Supervised Contrastive Learning with Hard Negative Samples

Unsupervised contrastive learning (UCL) is a self-supervised learning technique that aims to learn a useful representation function by pulling positive samples close to each other while pushing negative samples far apart in the embedding space. To improve the performance of UCL, several works have introduced hard-negative unsupervised contrastive learning (H-UCL), which selects "hard" negative samples instead of relying on the random sampling strategy used in UCL. In a separate line of work, under the assumption that label information is available, supervised contrastive learning (SCL) has recently been developed by extending UCL to the fully supervised setting. In this paper, motivated by the effectiveness of hard-negative sampling strategies in H-UCL and the usefulness of label information in SCL, we propose a contrastive learning framework called hard-negative supervised contrastive learning (H-SCL). Our numerical results demonstrate the effectiveness of H-SCL over both SCL and H-UCL on several image datasets. In addition, we theoretically prove that, under certain conditions, the objective function of H-SCL can be bounded by the objective function of H-UCL but not by the objective function of UCL. Thus, minimizing the H-UCL loss can serve as a proxy for minimizing the H-SCL loss, whereas minimizing the UCL loss cannot. Since our experiments show that H-SCL outperforms other contrastive learning methods, this theoretical result (bounding the H-SCL loss by the H-UCL loss) also helps explain why H-UCL outperforms UCL in practice.
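To make the idea concrete, the sketch below shows one way a hard-negative supervised contrastive loss could be written in PyTorch. This is not the authors' reference implementation: the label-based positive/negative masks follow the standard SCL construction, while the specific hardness weighting (negatives up-weighted by an exponentiated similarity with inverse temperature `beta`) is a common choice from the hard-negative sampling literature and is an assumption here, as are the function name and default hyperparameters.

```python
import torch
import torch.nn.functional as F


def h_scl_loss(embeddings, labels, tau=0.1, beta=1.0):
    """Hard-negative supervised contrastive loss (illustrative sketch).

    embeddings: (N, d) unnormalized features from the encoder.
    labels:     (N,) integer class labels.
    tau:        softmax temperature.
    beta:       hardness concentration for negative reweighting (assumed form).
    """
    z = F.normalize(embeddings, dim=1)                      # unit-norm embeddings
    sim = z @ z.t() / tau                                    # (N, N) scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_mask = same_label & ~self_mask                       # positives: same label, excluding self
    neg_mask = ~same_label                                   # negatives: different label

    exp_sim = torch.exp(sim)

    # Hard-negative reweighting: negatives closer to the anchor receive larger weight.
    w = torch.exp(beta * sim.detach()) * neg_mask
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-12)
    # Weighted negative mass per anchor, rescaled to the number of negatives.
    neg_mass = (w * exp_sim).sum(dim=1, keepdim=True) * neg_mask.sum(dim=1, keepdim=True)

    # For each (anchor, positive) pair: log-likelihood with hard-negative denominator.
    log_prob = sim - torch.log(exp_sim + neg_mass)
    pos_per_anchor = pos_mask.sum(dim=1).clamp_min(1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_per_anchor
    return loss.mean()


if __name__ == "__main__":
    # Tiny usage example with random features and labels.
    feats = torch.randn(32, 128)
    labels = torch.randint(0, 10, (32,))
    print(h_scl_loss(feats, labels).item())
```

Setting `beta = 0` makes all negatives equally weighted, which recovers a plain SCL-style objective; increasing `beta` concentrates the denominator on the hardest negatives, mirroring the H-UCL-to-H-SCL relationship discussed above.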
