Correcting the Triplet Selection Bias for Triplet Loss

Triplet loss, popular for metric learning, has achieved great success in many computer vision tasks, such as fine-grained image classification, image retrieval, and face recognition. Since the number of triplets grows cubically with the size of the training data, triplet mining is indispensable for training efficiently with triplet loss. In practice, however, training is very sensitive to the selected triplets: it rarely converges with randomly selected triplets, while selecting the hardest triplets leads to bad local minima. We argue that this bias in triplet sampling degrades the performance of learning with triplet loss. In this paper, we propose a new variant of triplet loss that reduces the sampling bias by adaptively correcting the distribution shift of the sampled triplets; we refer to it as adapted triplet loss. We conduct experiments on MNIST and Fashion-MNIST for image classification, and on CARS196, CUB200-2011, and Stanford Online Products for image retrieval. The experimental results demonstrate the effectiveness of the proposed method.
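To make the mining problem concrete, below is a minimal sketch of the standard triplet hinge loss with semi-hard negative mining, a common middle ground between random selection and hardest-triplet selection. This illustrates only the baseline setup the paper starts from, not the proposed adapted triplet loss; the margin value, the NumPy implementation, and the fallback-to-hardest-negative rule are assumptions of the sketch.

```python
import numpy as np

def pairwise_sq_dists(emb):
    """Squared Euclidean distances between all rows of `emb`."""
    sq = np.sum(emb ** 2, axis=1)
    d = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T
    return np.maximum(d, 0.0)  # guard against tiny negative values

def semi_hard_triplet_loss(emb, labels, margin=0.2):
    """Mean triplet hinge loss over all (anchor, positive) pairs, each
    paired with a semi-hard negative: d(a,p) < d(a,n) < d(a,p) + margin.
    Falls back to the hardest negative when no semi-hard one exists
    (an assumption of this sketch, not prescribed by the paper)."""
    d = pairwise_sq_dists(emb)
    losses = []
    for a in range(len(labels)):
        for p in range(len(labels)):
            if p == a or labels[p] != labels[a]:
                continue  # positives share the anchor's label
            d_an = d[a][labels != labels[a]]  # distances to all negatives
            semi = d_an[(d_an > d[a, p]) & (d_an < d[a, p] + margin)]
            d_neg = semi.min() if semi.size else d_an.min()
            losses.append(max(d[a, p] - d_neg + margin, 0.0))
    return float(np.mean(losses)) if losses else 0.0

# Toy batch: 8 embeddings, 4 classes with 2 samples each.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(semi_hard_triplet_loss(emb, labels))
```

Even in this toy form, the loss value depends entirely on which negatives the mining rule picks, which is the sensitivity to triplet selection that the paper's adaptive correction targets.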
