Deep Matching Prior: Test-Time Optimization for Dense Correspondence

ICCV 2021 · Sunghwan Hong, Seungryong Kim

Conventional techniques for establishing dense correspondences across visually or semantically similar images focus on designing a task-specific matching prior, which is difficult to model. To overcome this, recent learning-based methods have attempted to learn a good matching prior within the model itself from large training data. The resulting performance gains are apparent, but the need for large training data and intensive learning hinders their applicability. Moreover, using a fixed model at test time ignores the fact that each pair of images may require its own prior, which limits performance and generalization to unseen images. In this paper, we show that an image pair-specific prior can be captured by optimizing an untrained matching network solely on an input pair of images. Tailored to such test-time optimization for dense correspondence, we present a residual matching network and a confidence-aware contrastive loss to guarantee a meaningful convergence. Experiments demonstrate that our framework, dubbed Deep Matching Prior (DMP), is competitive with, and even outperforms, the latest learning-based methods on several benchmarks for geometric and semantic matching, even though it requires neither large training data nor intensive learning. With the networks pre-trained, DMP attains state-of-the-art performance on all benchmarks.
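The core idea is simple enough to sketch: instantiate a randomly initialized matching network and fit it to the single input pair by minimizing a matching objective, yielding a pair-specific prior. Below is a minimal, hypothetical PyTorch sketch of that test-time loop; `TinyFlowNet`, the backward-warping helper, the photometric L1 objective, and all hyperparameters are illustrative assumptions, not the paper's residual matching network or confidence-aware contrastive loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    """Hypothetical stand-in for a matching network: predicts a dense
    flow field (dx, dy per pixel) that warps the source onto the target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2-channel flow output
        )

    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

def warp(img, flow):
    """Backward-warp img (B, C, H, W) with a dense flow field (B, 2, H, W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device),
        torch.arange(w, device=img.device),
        indexing="ij",
    )
    base = torch.stack([xs, ys], dim=-1).float()         # (H, W, 2) pixel coords
    grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)  # add predicted offsets
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0              # normalize to [-1, 1]
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1), align_corners=True)

def optimize_pair(src, tgt, steps=300, lr=1e-3):
    """Fit an *untrained* network to one image pair at test time."""
    model = TinyFlowNet().to(src.device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        flow = model(src, tgt)
        # Simple photometric objective for illustration; DMP instead uses
        # a confidence-aware contrastive loss on deep features.
        loss = F.l1_loss(warp(src, flow), tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(src, tgt)  # the pair-specific dense flow
```

No external training set is touched: the only supervision signal comes from the input pair itself, which is what makes the learned prior pair-specific.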


Results from the Paper


 Ranked #1 on Dense Pixel Correspondence Estimation on HPatches (using extra training data)

Task: Dense Pixel Correspondence Estimation
Dataset: HPatches
Model: RANSAC-DMP+
Uses extra training data: Yes

Metric               Metric Value   Global Rank
Viewpoint I AEPE     0.48           # 1
Viewpoint II AEPE    2.24           # 1
Viewpoint III AEPE   2.41           # 1
Viewpoint IV AEPE    4.32           # 1
Viewpoint V AEPE     5.16           # 1
PCK-5px              97.52          # 1
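For context on the metrics above: AEPE (average end-point error) is the per-pixel Euclidean distance between the estimated and ground-truth flow, averaged over the image, and PCK-5px is the percentage of pixels whose end-point error falls within 5 pixels. A minimal sketch of how these standard metrics are typically computed (tensor shapes are assumptions):

```python
import torch

def aepe(pred_flow, gt_flow):
    """Average end-point error; flows have shape (B, 2, H, W)."""
    return torch.norm(pred_flow - gt_flow, dim=1).mean().item()

def pck(pred_flow, gt_flow, thresh=5.0):
    """Percentage of pixels whose end-point error is within `thresh` pixels."""
    err = torch.norm(pred_flow - gt_flow, dim=1)
    return 100.0 * (err <= thresh).float().mean().item()
```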

Methods


No methods listed for this paper.