Fine-Grained Visual Comparisons with Local Learning

CVPR 2014 · Aron Yu, Kristen Grauman

Given two images, we want to predict which exhibits a particular visual attribute more than the other---even when the two images are quite similar. Existing relative attribute methods rely on global ranking functions; yet rarely will the visual cues relevant to a comparison be constant for all data, nor will humans' perception of the attribute necessarily permit a global ordering. To address these issues, we propose a local learning approach for fine-grained visual comparisons. Given a novel pair of images, we learn a local ranking model on the fly, using only analogous training comparisons. We show how to identify these analogous pairs using learned metrics. With results on three challenging datasets -- including a large newly curated dataset for fine-grained comparisons -- our method outperforms state-of-the-art methods for relative attribute prediction.
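The core idea — selecting the training comparisons most analogous to a novel pair under a learned metric, then fitting a ranking model on just those — can be sketched as below. This is a minimal illustration on synthetic data, not the paper's implementation: the metric `M` is a stand-in identity matrix (the paper learns it from data), the neighbor count `K` and all function names are illustrative, and RankSVM is approximated by a linear SVM on pairwise difference vectors.

```python
# Hedged sketch of on-the-fly local ranking for a fine-grained comparison.
# Assumptions (not from the paper): identity metric M, K=50 neighbors,
# synthetic linearly-rankable data, LinearSVC as the RankSVM surrogate.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic training comparisons: pair (x_i, x_j) labeled +1 if the first
# image exhibits the attribute more than the second, else -1.
d, n_pairs = 8, 200
X_i = rng.normal(size=(n_pairs, d))
X_j = rng.normal(size=(n_pairs, d))
w_true = rng.normal(size=d)                      # hidden attribute direction
y = np.where(X_i @ w_true > X_j @ w_true, 1, -1)

M = np.eye(d)  # stand-in for a learned Mahalanobis metric

def pair_distance(q_i, q_j, M):
    """Distance from the query pair to every training pair under metric M."""
    def md(A, b):
        diff = A - b
        return np.einsum('ni,ij,nj->n', diff, M, diff)
    return md(X_i, q_i) + md(X_j, q_j)

def local_rank(q_i, q_j, K=50):
    """Fit a ranking function on the K most analogous pairs, then compare."""
    idx = np.argsort(pair_distance(q_i, q_j, M))[:K]
    # RankSVM on pairs reduces to a linear SVM on difference vectors.
    diffs = X_i[idx] - X_j[idx]
    clf = LinearSVC(C=1.0).fit(diffs, y[idx])
    score = clf.decision_function((q_i - q_j).reshape(1, -1))[0]
    return 1 if score > 0 else -1

q_i, q_j = rng.normal(size=d), rng.normal(size=d)
print(local_rank(q_i, q_j))  # prints +1 or -1
```

A fresh model is trained per query, so prediction cost scales with `K` rather than the full training set; the metric learning step (done offline in the paper) is what makes the neighbor selection meaningful.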


Datasets


Introduced in the Paper:

UT Zappos50K
