DGC-Net: Dense Geometric Correspondence Network

This paper addresses the challenge of dense pixel correspondence estimation between two images. The problem is closely related to optical flow estimation, where convolutional neural networks (CNNs) have recently achieved significant progress. While optical flow methods produce very accurate results for small pixel translations and limited appearance variation, they struggle with the strong geometric transformations considered in this work. We propose a coarse-to-fine CNN-based framework that leverages the advantages of optical flow approaches and extends them to the case of large transformations, providing dense and sub-pixel accurate estimates. The model is trained on synthetic transformations and generalizes well to unseen, realistic data. Further, we apply our method to the problem of relative camera pose estimation and demonstrate that it outperforms existing dense approaches.
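The coarse-to-fine idea can be sketched in a highly simplified form: an estimate produced at a coarse pyramid level is upsampled (with its magnitudes rescaled) and refined with a residual correction at the next finer level. The helper names and the residual-accumulation scheme below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def upsample_flow(flow, scale=2):
    """Nearest-neighbour upsample a dense flow field (H, W, 2) and rescale
    its vectors to match the new resolution (hypothetical helper)."""
    up = np.repeat(np.repeat(flow, scale, axis=0), scale, axis=1)
    return up * scale

def coarse_to_fine(residuals):
    """Accumulate per-level residual flow estimates from coarsest to finest:
    at each level the current estimate is upsampled and a finer correction
    is added, mirroring the coarse-to-fine refinement idea."""
    flow = np.zeros_like(residuals[0])
    for level, res in enumerate(residuals):
        if level > 0:
            flow = upsample_flow(flow)
        flow = flow + res
    return flow

# Toy pyramid: 4x4 -> 8x8 -> 16x16 residual fields.
residuals = [np.ones((4, 4, 2)), np.zeros((8, 8, 2)), np.ones((16, 16, 2))]
final_flow = coarse_to_fine(residuals)  # shape (16, 16, 2)
```

In the actual network each residual would come from a learned decoder operating on a correlation volume between warped feature maps; here plain arrays stand in for those estimates.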

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Dense Pixel Correspondence Estimation | HPatches | DGC-Net aff+tps+homo | Viewpoint I AEPE | 1.55 | # 2 |
| Dense Pixel Correspondence Estimation | HPatches | DGC-Net aff+tps+homo | Viewpoint II AEPE | 5.53 | # 3 |
| Dense Pixel Correspondence Estimation | HPatches | DGC-Net aff+tps+homo | Viewpoint III AEPE | 8.98 | # 2 |
| Dense Pixel Correspondence Estimation | HPatches | DGC-Net aff+tps+homo | Viewpoint IV AEPE | 11.66 | # 2 |
| Dense Pixel Correspondence Estimation | HPatches | DGC-Net aff+tps+homo | Viewpoint V AEPE | 16.70 | # 2 |
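AEPE (Average End-Point Error), the metric reported for each HPatches viewpoint split, is the mean Euclidean distance between predicted and ground-truth correspondence vectors. A minimal computation (the function name is illustrative):

```python
import numpy as np

def aepe(pred, gt):
    """Average End-Point Error: mean L2 distance between predicted and
    ground-truth flow vectors, computed over all pixels."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Every prediction is off by the vector (3, 4), so the error is 5 everywhere.
gt = np.zeros((2, 2, 2))
pred = gt + np.array([3.0, 4.0])
error = aepe(pred, gt)  # -> 5.0
```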

Methods

No methods listed for this paper.