Center Contrastive Loss for Metric Learning

1 Aug 2023 · Bolun Cai, Pengfei Xiong, Shangxuan Tian

Contrastive learning is a major topic of study in metric learning. However, sampling effective contrastive pairs remains a challenge due to factors such as limited batch size, imbalanced data distribution, and the risk of overfitting. In this paper, we propose a novel metric learning function called Center Contrastive Loss, which maintains a class-wise center bank and compares the category centers with the query data points using a contrastive loss. The center bank is updated in real time to accelerate model convergence without the need for well-designed sample mining. The category centers serve as well-optimized classification proxies that re-balance the supervisory signal of each class. Furthermore, the proposed loss combines the advantages of both contrastive and classification methods by reducing intra-class variations and enhancing inter-class differences, improving the discriminative power of the embeddings. Our experimental results, as shown in Figure 1, demonstrate that a standard network (ResNet50) trained with our loss achieves state-of-the-art performance and faster convergence.
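To make the mechanism concrete, below is a minimal PyTorch sketch of a center-bank contrastive loss in the spirit of the abstract: one center per class is kept in a buffer, each query embedding is contrasted against all centers via a temperature-scaled softmax, and the bank is refreshed on the fly. The class name, the exponential-moving-average (EMA) update rule, and the temperature/momentum values are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CenterContrastiveLoss(nn.Module):
    """Sketch of a center-bank contrastive loss (hypothetical implementation).

    Keeps one L2-normalized embedding ("center") per class and contrasts
    each query against all centers with a softmax over cosine similarities.
    Centers are updated by an EMA of the batch embeddings of each class;
    the paper's actual update rule may differ.
    """

    def __init__(self, num_classes: int, embed_dim: int,
                 temperature: float = 0.1, momentum: float = 0.9):
        super().__init__()
        self.temperature = temperature
        self.momentum = momentum
        # Center bank stored as a buffer, i.e. updated in-place, not by SGD.
        self.register_buffer(
            "centers", F.normalize(torch.randn(num_classes, embed_dim), dim=1))

    @torch.no_grad()
    def _update_centers(self, embeddings: torch.Tensor, labels: torch.Tensor):
        # Real-time EMA update for the classes present in the batch.
        for cls in labels.unique():
            batch_mean = embeddings[labels == cls].mean(dim=0)
            updated = self.momentum * self.centers[cls] \
                + (1.0 - self.momentum) * batch_mean
            self.centers[cls] = F.normalize(updated, dim=0)

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        embeddings = F.normalize(embeddings, dim=1)
        # Cosine similarity between each query and every class center,
        # scaled by temperature; the own-class center acts as the positive.
        logits = embeddings @ self.centers.t() / self.temperature
        loss = F.cross_entropy(logits, labels)
        self._update_centers(embeddings.detach(), labels)
        return loss


# Usage sketch: sizes match the CARS196 setting (196 classes) for illustration.
loss_fn = CenterContrastiveLoss(num_classes=196, embed_dim=512)
emb = torch.randn(32, 512, requires_grad=True)   # stand-in for backbone output
lbl = torch.randint(0, 196, (32,))
loss = loss_fn(emb, lbl)
loss.backward()  # gradients reach the embeddings only; centers are a buffer
```

Keeping the centers in a buffer rather than as learned parameters matches the abstract's claim that no sample mining is needed: every query always has its class center as a positive and all other centers as negatives, regardless of batch composition.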


Results from the Paper


Task             Dataset                    Model            Metric  Value  Global Rank
Metric Learning  CARS196                    CCL (ResNet-50)  R@1     91.02  #7
Metric Learning  CUB-200-2011               CCL (ResNet-50)  R@1     73.45  #5
Metric Learning  In-Shop                    CCL (ResNet-50)  R@1     92.31  #6
Metric Learning  Stanford Online Products   CCL (ResNet-50)  R@1     83.10  #9
