Metric Learning
530 papers with code • 8 benchmarks • 32 datasets
The goal of Metric Learning is to learn a representation function that maps objects into an embedding space in which distance preserves the objects’ similarity: similar objects end up close together and dissimilar objects far apart. Various loss functions have been developed for Metric Learning. For example, the contrastive loss pulls objects of the same class toward the same point and pushes objects of different classes apart until their distance exceeds a margin. The triplet loss is also popular; it requires the distance between an anchor sample and a positive sample to be smaller, by at least a margin, than the distance between the anchor and a negative sample. A minimal sketch of both losses is given below.
Source: Road Network Metric Learning for Estimated Time of Arrival
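As an illustration of the two losses described above, here is a minimal PyTorch sketch. The margin values and the squared-distance form of the contrastive loss are common but illustrative choices, not taken from the source.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(x1, x2, same_class, margin=1.0):
    # x1, x2: (batch, dim) embeddings; same_class: (batch,) floats, 1.0 for similar pairs, 0.0 otherwise.
    d = F.pairwise_distance(x1, x2)
    # Similar pairs are pulled together; dissimilar pairs are pushed until their distance exceeds the margin.
    return (same_class * d.pow(2) + (1 - same_class) * F.relu(margin - d).pow(2)).mean()

def triplet_loss(anchor, positive, negative, margin=0.2):
    # The anchor-positive distance must be smaller than the anchor-negative distance by at least the margin.
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()
```

PyTorch also ships a ready-made torch.nn.TripletMarginLoss with the same semantics.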
Most implemented papers
In Defense of the Triplet Loss for Person Re-Identification
In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning.
Matching Networks for One Shot Learning
Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches.
Circle Loss: A Unified Perspective of Pair Similarity Optimization
This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$.
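A sketch of the unified loss from this paper, assuming `sp` and `sn` hold one anchor's cosine similarities to its positive and negative samples; the hyperparameter values are illustrative:

```python
import torch
import torch.nn.functional as F

def circle_loss(sp, sn, m=0.25, gamma=256.0):
    # sp: within-class similarities s_p; sn: between-class similarities s_n (1-D tensors).
    # Self-paced weights: similarities far from their optimum receive larger gradients.
    ap = torch.relu(1 + m - sp.detach())
    an = torch.relu(sn.detach() + m)
    # Margins: s_p is pushed above 1 - m, s_n is pushed below m.
    logit_p = -gamma * ap * (sp - (1 - m))
    logit_n = gamma * an * (sn - m)
    # log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i))
    return F.softplus(torch.logsumexp(logit_n, dim=0) + torch.logsumexp(logit_p, dim=0))
```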
Additive Margin Softmax for Face Verification
In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works.
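A minimal sketch of the additive-margin softmax idea: features and class weights are L2-normalized so the logits are cosine similarities, a margin m is subtracted from the target-class cosine, and the result is scaled by s before the usual cross-entropy. The values of s and m below are illustrative.

```python
import torch
import torch.nn.functional as F

def am_softmax_loss(features, class_weights, labels, s=30.0, m=0.35):
    # features: (batch, dim); class_weights: (num_classes, dim); labels: (batch,) class indices.
    cos = F.normalize(features) @ F.normalize(class_weights).t()   # cosine-similarity logits
    onehot = F.one_hot(labels, num_classes=class_weights.size(0)).float()
    logits = s * (cos - m * onehot)                                # margin applied only to the target class
    return F.cross_entropy(logits, labels)
```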
Semantic Instance Segmentation with a Discriminative Loss Function
In this work we propose to tackle the problem with a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step.
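The loss combines a pull term (pixels toward their instance mean) and a push term (instance means away from each other). The sketch below follows that structure for a single image; the margins are illustrative and the paper's small regularization term is omitted.

```python
import torch

def discriminative_loss(emb, inst, delta_v=0.5, delta_d=1.5):
    # emb: (dim, num_pixels) pixel embeddings; inst: (num_pixels,) instance ids for one image.
    ids = inst.unique()
    means, l_var = [], emb.new_zeros(())
    for i in ids:
        e = emb[:, inst == i]
        mu = e.mean(dim=1, keepdim=True)
        means.append(mu)
        # Pull term: penalize pixels farther than delta_v from their instance mean.
        l_var = l_var + torch.relu((e - mu).norm(dim=0) - delta_v).pow(2).mean()
    l_var = l_var / len(ids)
    mu = torch.cat(means, dim=1)                             # (dim, K) instance means
    dist = (mu.unsqueeze(2) - mu.unsqueeze(1)).norm(dim=0)   # (K, K) pairwise mean distances
    off_diag = ~torch.eye(len(ids), dtype=torch.bool, device=dist.device)
    # Push term: penalize instance means that are closer than 2 * delta_d to each other.
    l_dist = torch.relu(2 * delta_d - dist[off_diag]).pow(2).mean() if len(ids) > 1 else l_var * 0
    return l_var + l_dist
```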
Revisiting Training Strategies and Generalization Performance in Deep Metric Learning
Deep Metric Learning (DML) is arguably one of the most influential lines of research for learning visual similarities with many proposed approaches every year.
Time-Contrastive Networks: Self-Supervised Learning from Video
While representations are learned from an unlabeled collection of task-related videos, robot behaviors such as pouring are learned by watching a single 3rd-person demonstration by a human.
Sampling Matters in Deep Embedding Learning
In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions.
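The margin-based loss referred to here has a compact form: a hinge on the pairwise distance around a boundary beta with width alpha. In the paper beta is learned and pairs come from distance-weighted sampling; the sketch below fixes beta and omits the sampling, with illustrative values.

```python
import torch

def margin_based_loss(d, y, beta=1.2, alpha=0.2):
    # d: pairwise distances of sampled pairs; y: +1 for positive pairs, -1 for negative pairs.
    # Positive pairs are pulled below beta - alpha, negative pairs pushed above beta + alpha.
    return torch.relu(alpha + y * (d - beta)).mean()
```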
Batch DropBlock Network for Person Re-identification and Beyond
In this paper, we propose the Batch DropBlock (BDB) Network, a two-branch network composed of a conventional ResNet-50 as the global branch and a feature dropping branch.
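A hedged sketch of the feature-dropping operation this branch is built on, as commonly described for this method: the same randomly chosen region of the feature map is zeroed for every sample in the batch during training. The drop ratios below are illustrative.

```python
import torch

def batch_drop_block(x, h_ratio=0.3, w_ratio=1.0):
    # x: (batch, channels, H, W) feature map; zero the same random region for all samples in the batch.
    _, _, H, W = x.shape
    bh, bw = round(H * h_ratio), round(W * w_ratio)
    top = torch.randint(0, H - bh + 1, (1,)).item()
    left = torch.randint(0, W - bw + 1, (1,)).item()
    mask = torch.ones_like(x)
    mask[:, :, top:top + bh, left:left + bw] = 0
    return x * mask
```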
Deep Cosine Metric Learning for Person Re-Identification
Metric learning aims to construct an embedding where two extracted features corresponding to the same identity are likely to be closer than features from different identities.
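At test time such an embedding is typically used for nearest-neighbour matching. A small illustrative example with random stand-in embeddings (dimensions and sizes are arbitrary):

```python
import torch
import torch.nn.functional as F

# Stand-in query and gallery embeddings; in practice these come from the trained network.
query = F.normalize(torch.randn(4, 128))       # 4 query images, 128-d embeddings
gallery = F.normalize(torch.randn(100, 128))   # 100 gallery images
cosine_dist = 1 - query @ gallery.t()          # (4, 100) cosine distances
matches = cosine_dist.argmin(dim=1)            # index of the closest gallery image per query
```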