Deep Shape Matching

We cast shape matching as metric learning with convolutional networks. We break the end-to-end process of image representation into two parts. First, well-established efficient methods are chosen to turn the images into edge maps. Second, the network is trained on edge maps of landmark images, with matching pairs obtained automatically by a structure-from-motion pipeline. The learned representation is evaluated on a range of different tasks, providing improvements on challenging cases of domain generalization, generic sketch-based image retrieval, and its fine-grained counterpart. In contrast to other methods that learn a different model per task, object category, or domain, we use the same network throughout all our experiments, achieving state-of-the-art results on multiple benchmarks.
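The model in the results below is named "EdgeMAC + whitening": MAC (maximum activation of convolutions) pooling over the conv features of an edge map, followed by descriptor whitening. The sketch below is a minimal NumPy illustration of those two post-processing steps only, not the authors' implementation; the function names and the PCA-based whitening are assumptions for illustration (the edge extraction and the trained CNN are omitted).

```python
import numpy as np

def mac_descriptor(feature_map):
    """MAC pooling: global max over the spatial dimensions of a conv
    activation tensor of shape (channels, H, W), then L2-normalize."""
    d = feature_map.reshape(feature_map.shape[0], -1).max(axis=1)
    return d / (np.linalg.norm(d) + 1e-12)

def fit_whitening(descs):
    """Learn PCA-whitening parameters from a matrix of descriptors (n, dim).
    Assumed variant; the paper's exact whitening may differ."""
    mean = descs.mean(axis=0)
    cov = np.cov(descs - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    # guard against near-zero eigenvalues before the inverse square root
    P = eigvec / np.sqrt(np.maximum(eigval, 1e-12))
    return mean, P

def whiten(d, mean, P):
    """Apply the learned whitening and re-normalize to unit length."""
    w = (d - mean) @ P
    return w / (np.linalg.norm(w) + 1e-12)
```

With unit-norm whitened descriptors, retrieval reduces to ranking gallery images by dot-product (cosine) similarity to the query descriptor.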

ECCV 2018

Results from the Paper

 Ranked #1 on Sketch-Based Image Retrieval on Chairs (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Sketch-Based Image Retrieval | Chairs | EdgeMAC + whitening | R@1 | 85.6 | # 1 | Yes |
| Sketch-Based Image Retrieval | Chairs | EdgeMAC + whitening | R@10 | 97.9 | # 2 | Yes |
| Sketch-Based Image Retrieval | Handbags | EdgeMAC + whitening | R@1 | 51.2 | # 1 | |
| Sketch-Based Image Retrieval | Handbags | EdgeMAC + whitening | R@10 | 85.7 | # 1 | |
| Sketch-Based Image Retrieval | Shoes | EdgeMAC + whitening | R@1 | 54.8 | # 1 | |
| Sketch-Based Image Retrieval | Shoes | EdgeMAC + whitening | R@10 | 92.2 | # 1 | |
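The R@K numbers in the table are Recall@K: the fraction of query sketches whose ground-truth photo appears among the K nearest gallery descriptors. A minimal sketch of that metric, assuming unit-norm descriptors and cosine similarity (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def recall_at_k(query_descs, gallery_descs, gt_indices, k):
    """Fraction of queries whose ground-truth gallery index is among
    the k most similar gallery items (cosine similarity, assuming
    L2-normalized descriptor rows)."""
    sims = query_descs @ gallery_descs.T           # (n_queries, n_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]        # k best per query
    hits = [gt in row for gt, row in zip(gt_indices, topk)]
    return float(np.mean(hits))
```

For example, with an identity gallery and matching queries, `recall_at_k` returns 1.0 at k=1 when every ground-truth index is the query's own row.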
