Metric Learning
593 papers with code • 8 benchmarks • 33 datasets
The goal of Metric Learning is to learn a representation function that maps objects into an embedding space in which distance preserves the objects' similarity: similar objects are mapped close together and dissimilar objects far apart. Various loss functions have been developed for Metric Learning. For example, the contrastive loss pulls objects of the same class toward the same point and pushes objects of different classes apart until their distance exceeds a margin. The triplet loss is also popular; it requires the distance between an anchor sample and a positive (same-class) sample to be smaller, by a margin, than the distance between the anchor and a negative (different-class) sample.
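The two losses above can be sketched in a few lines of NumPy; this is a minimal illustration of the standard formulas operating on embedding vectors, not the implementation from any particular paper (function names and the Euclidean-distance choice are ours):

```python
import numpy as np

def contrastive_loss(dist, same_class, margin=1.0):
    # Same-class pairs are pulled together (penalize any distance);
    # different-class pairs are pushed apart until dist >= margin.
    if same_class:
        return 0.5 * dist ** 2
    return 0.5 * max(margin - dist, 0.0) ** 2

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge on d(anchor, positive) - d(anchor, negative) + margin:
    # the loss is zero once the negative is farther than the
    # positive by at least the margin.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(d_ap - d_an + margin, 0.0)

# Example: the negative is already far enough, so the triplet
# constraint is satisfied and the loss vanishes.
a = np.array([0.0, 0.0])
p = np.array([0.0, 1.0])
n = np.array([3.0, 0.0])
print(triplet_loss(a, p, n, margin=1.0))  # 0.0
```

In practice these losses are applied to the outputs of a trainable embedding network, and gradients flow through the distance computation to update the representation.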
Source: Road Network Metric Learning for Estimated Time of Arrival
Libraries
Use these libraries to find Metric Learning models and implementations.
Most implemented papers
Human Motion Analysis with Deep Metric Learning
Nevertheless, we believe that traditional approaches such as L2 distance or Dynamic Time Warping based on hand-crafted local pose metrics fail to appropriately capture the semantic relationship across motions and, as such, are not suitable for being employed as metrics within these tasks.
RelationNet2: Deep Comparison Columns for Few-Shot Learning
We thus propose a new deep comparison network comprised of embedding and relation modules that learn multiple non-linear distance metrics based on different levels of features simultaneously.
Classification is a Strong Baseline for Deep Metric Learning
Deep metric learning aims to learn a function mapping image pixels to embedding feature vectors that model the similarity between images.
Ranked List Loss for Deep Metric Learning
To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery.
Hardness-Aware Deep Metric Learning
This paper presents a hardness-aware deep metric learning (HDML) framework.
Correlation Congruence for Knowledge Distillation
Most teacher-student frameworks based on knowledge distillation (KD) depend on a strong congruent constraint on instance level.
Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning
A family of loss functions built on pair-based computation have been proposed in the literature which provide a myriad of solutions for deep metric learning.
Learning to Approximate a Bregman Divergence
Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning.
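To make the "generalize" claim concrete: a Bregman divergence is defined from any strictly convex function φ as D_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩. The sketch below (our own illustration, not code from the paper) checks numerically that choosing φ(x) = ‖x‖² recovers the squared Euclidean distance:

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    # D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# phi(x) = ||x||^2, grad phi(x) = 2x  ->  squared Euclidean distance.
sq_norm = lambda v: float(np.dot(v, v))
grad_sq = lambda v: 2.0 * v

x = np.array([1.0, 2.0])
y = np.array([0.0, -1.0])
print(bregman_divergence(sq_norm, grad_sq, x, y))  # equals ||x - y||^2 = 10.0
```

Choosing φ as the negative entropy instead yields the KL divergence between probability vectors, which is why a learned φ can interpolate across this whole family of measures.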
MIC: Mining Interclass Characteristics for Improved Metric Learning
In contrast, we propose to explicitly learn the latent characteristics that are shared by and go across object classes.
Learning Invariant Representations of Social Media Users
The evolution of social media users' behavior over time complicates user-level comparison tasks such as verification, classification, clustering, and ranking.