Attributable Visual Similarity Learning

CVPR 2022  ·  Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu

This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images. Most existing similarity learning methods exacerbate unexplainability by mapping each sample to a single point in the embedding space and comparing points with a distance metric (e.g., Mahalanobis distance, Euclidean distance). Motivated by human semantic similarity cognition, we propose a generalized similarity learning paradigm that represents the similarity between two images with a graph and then infers the overall similarity accordingly. Furthermore, we establish a bottom-up similarity construction and top-down similarity inference framework to infer the similarity based on semantic hierarchy consistency. We first identify unreliable higher-level similarity nodes and then correct them using the most coherent adjacent lower-level similarity nodes, which simultaneously preserves traces for similarity attribution. Extensive experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods and verify the interpretability of our framework. Code is available at https://github.com/zbr17/AVSL.
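
To make the bottom-up/top-down idea concrete, here is a minimal NumPy sketch of hierarchical similarity inference in the spirit of the abstract. The function name avsl_similarity, the activation-based reliability proxy, and the threshold tau are illustrative assumptions for this sketch, not the paper's exact formulation (the paper estimates inter-level links from CAM-style channel correlations; see the repository for the real implementation):

```python
import numpy as np

def avsl_similarity(feats_a, feats_b, links, tau=0.25):
    """Toy hierarchical similarity inference in the spirit of AVSL.

    feats_a, feats_b: per-level channel activations (low -> high),
        one (C_l,) array per level.
    links: one (C_{l+1}, C_l) non-negative weight matrix per adjacent
        level pair, relating higher-level channels to lower-level ones.
    tau: reliability threshold -- an illustrative choice for this sketch.
    """
    # Bottom-up construction: one similarity node per channel and level
    # (negative squared difference, so larger means more similar).
    sims = [-(a - b) ** 2 for a, b in zip(feats_a, feats_b)]

    # Top-down inference: correct unreliable higher-level nodes with a
    # link-weighted average of their adjacent lower-level nodes.
    for l, W in enumerate(links):
        # Reliability proxy: a high-level node is treated as unreliable
        # when either image barely activates that channel (an assumption
        # made here, not the paper's exact criterion).
        act = np.minimum(np.abs(feats_a[l + 1]), np.abs(feats_b[l + 1]))
        unreliable = act < tau
        corrected = (W @ sims[l]) / (W.sum(axis=1) + 1e-8)
        sims[l + 1] = np.where(unreliable, corrected, sims[l + 1])

    # Overall similarity: average the (corrected) top-level nodes.
    return float(sims[-1].mean())

# Example with two levels (8 low-level and 4 high-level channels):
rng = np.random.default_rng(0)
f_a = [rng.random(8), rng.random(4)]
f_b = [rng.random(8), rng.random(4)]
print(avsl_similarity(f_a, f_b, links=[rng.random((4, 8))]))
```

Because every corrected high-level node is a weighted combination of named lower-level nodes, the correction itself leaves a trace that can be read back for attribution.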

Results from the Paper

Ranked #3 on Metric Learning on CARS196 (using extra training data)

Task             Dataset                   Model             Metric  Value  Global Rank  Uses Extra Training Data
Metric Learning  CARS196                   ResNet-50 + AVSL  R@1     91.5   #3           Yes
Metric Learning  CUB-200-2011              ResNet-50 + AVSL  R@1     71.9   #6           —
Metric Learning  Stanford Online Products  ResNet-50 + AVSL  R@1     79.6   #25          —
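
R@1 (Recall@1) is the standard retrieval metric on these benchmarks: the fraction of query images whose single nearest neighbor in embedding space belongs to the same class. A minimal NumPy sketch of this evaluation protocol (recall_at_1 is an illustrative name, not from the AVSL codebase):

```python
import numpy as np

def recall_at_1(embeddings, labels):
    """Fraction of queries whose nearest neighbor (self excluded)
    shares the query's class label."""
    # Pairwise squared Euclidean distances between all embeddings.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(dist, np.inf)   # a query may not retrieve itself
    nearest = dist.argmin(axis=1)    # index of each query's 1-NN
    return float((labels[nearest] == labels).mean())

# Example: 6 embeddings from 3 well-separated classes.
emb = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.], [9., 0.], [9., 0.2]])
lab = np.array([0, 0, 1, 1, 2, 2])
print(recall_at_1(emb, lab))  # 1.0 -- every 1-NN is a same-class sample
```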

Methods

No methods listed for this paper.