GradML: A Gradient-based Loss for Deep Metric Learning

Deep metric learning (ML) uses a carefully designed loss function to learn distance metrics that improve discriminative ability for tasks such as clustering and retrieval. Most loss functions are designed by considering the distance between embeddings to induce certain properties, without exploring how such losses would move those embeddings via their gradients during optimization. In this work, we analyze the gradients of various ML loss functions and propose a gradient-based loss for ML (GradML). Instead of directly formulating the loss, we first formulate its gradients and use them to derive the loss to be optimized. The result has a simple formulation and a lower computational cost than other methods. We evaluate our approach on three datasets and find that performance depends on dataset properties such as inter-class variance.
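The "gradient-first" design principle can be illustrated with a toy example (this is a generic sketch, not the paper's actual GradML formulation): pick the gradient you want the pairwise distance to follow, then integrate it to obtain the loss. Here the chosen gradients happen to recover the classic contrastive loss, showing that a familiar loss can be derived purely from its desired gradient field.

```python
# Toy "gradient-first" loss design (illustrative only, not GradML itself).
# Desired gradient on the pairwise distance d:
#   positive pairs: g(d) = d              (pull together, harder when farther)
#   negative pairs: g(d) = -max(m - d, 0) (push apart until margin m)
# Integrating these gradients yields the classic contrastive loss.

def desired_grad(d, is_pos, margin=1.0):
    """The gradient we want the loss to induce on the distance d."""
    if is_pos:
        return d
    return -max(margin - d, 0.0)

def derived_loss(d, is_pos, margin=1.0):
    """Antiderivative of desired_grad w.r.t. d (constants chosen so loss >= 0)."""
    if is_pos:
        return 0.5 * d * d
    return 0.5 * max(margin - d, 0.0) ** 2

# Sanity check: the numerical derivative of the derived loss
# matches the gradient we designed.
h = 1e-6
for d, is_pos in [(0.3, True), (0.7, False), (1.5, False)]:
    num = (derived_loss(d + h, is_pos) - derived_loss(d - h, is_pos)) / (2 * h)
    assert abs(num - desired_grad(d, is_pos)) < 1e-4
```

The same recipe works in reverse: starting from any monotone gradient profile on the distance (e.g. one that saturates for easy pairs), integration gives a loss whose optimization behavior is known by construction.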

