Embedding Transfer via Smooth Contrastive Loss

1 Jan 2021  ·  Sungyeon Kim, Dongwon Kim, Minsu Cho, Suha Kwak

This paper presents a novel method for embedding transfer, the task of transferring the knowledge of a learned embedding model to another. Our method exploits pairwise similarities between samples in the source embedding space as the knowledge and transfers them through a loss function used for learning target embedding models. To this end, we design a new loss called smooth contrastive loss, which pulls together or pushes apart a pair of samples in the target embedding space with strength determined by their semantic similarity in the source embedding space; an analysis of the loss reveals that this property enables more important pairs to contribute more to learning the target embedding space. Experiments on metric learning benchmarks demonstrate that our method improves the performance of target models, or effectively reduces their size and embedding dimension. Moreover, we show that deep networks trained in a self-supervised manner can be further enhanced by our method with no additional supervision. In all the experiments, our method clearly outperforms existing embedding transfer techniques.
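
The core idea, as described in the abstract, is to replace the binary labels of a contrastive loss with soft pairwise weights derived from the source embedding space, so that each pair is pulled together or pushed apart with strength proportional to its source-space similarity. The sketch below illustrates this idea in PyTorch under our own assumptions: the function name, the use of cosine similarity as the soft weight, and the fixed margin are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def smooth_contrastive_loss(target_emb, source_emb, margin=1.0):
    """Illustrative sketch of a smoothed contrastive loss for embedding transfer.

    Pairwise similarities in the (frozen) source embedding space act as soft
    labels that weight the attracting and repelling terms of a contrastive
    loss computed in the target embedding space.
    """
    # Soft pairwise weights from the source space (cosine similarity, clipped to [0, 1]).
    src = F.normalize(source_emb, dim=1)
    w = (src @ src.t()).clamp(min=0)  # higher weight = more semantically similar pair

    # Pairwise Euclidean distances between L2-normalized target embeddings.
    tgt = F.normalize(target_emb, dim=1)
    d = torch.cdist(tgt, tgt, p=2)

    # Similar pairs are pulled together, dissimilar pairs pushed apart,
    # each with strength set by the soft weight rather than a hard 0/1 label.
    pull = w * d.pow(2)
    push = (1.0 - w) * F.relu(margin - d).pow(2)

    # Average over all off-diagonal pairs (exclude self-pairs).
    mask = 1.0 - torch.eye(d.size(0), device=d.device)
    return ((pull + push) * mask).sum() / mask.sum()
```

In use, `source_emb` would come from the pretrained source model (with gradients detached) and `target_emb` from the target model being trained, so only the target network is updated by this loss.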
