Deep Image Homography Estimation

13 Jun 2016 · Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich

We present a deep convolutional neural network for estimating the relative homography between a pair of images. Our feed-forward network has 10 layers, takes two stacked grayscale images as input, and produces an 8-degree-of-freedom homography which can be used to map the pixels from the first image to the second. We present two convolutional neural network architectures for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies. We use a 4-point homography parameterization which maps the four corners of the first image into the second image. Our networks are trained end-to-end on warped MS-COCO images, and our approach works without separate stages for local feature detection and transformation estimation. We compare our deep models to a traditional homography estimator based on ORB features and highlight the scenarios where HomographyNet outperforms the traditional technique. We also describe a variety of applications powered by deep homography estimation, showcasing the flexibility of a deep learning approach.
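The abstract gives enough architectural detail to sketch the regression variant. Below is a minimal PyTorch sketch, assuming the commonly described VGG-style layout of eight 3x3 convolutional layers with max pooling after every second one, followed by two fully connected layers; the exact filter widths (64-128), the batch-norm placement, the 128x128 input size, and the class name HomographyNetRegression are assumptions for illustration, not details stated on this page.

```python
import torch
import torch.nn as nn

class HomographyNetRegression(nn.Module):
    """Sketch of the regression variant: two stacked grayscale patches in,
    eight corner offsets (the 4-point parameterization) out."""

    def __init__(self):
        super().__init__()

        def block(in_ch, out_ch, pool=False):
            layers = [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),   # assumed; the page does not specify normalization
                nn.ReLU(inplace=True),
            ]
            if pool:
                layers.append(nn.MaxPool2d(2))
            return layers

        # 8 convolutional layers (10 layers total with the two FC layers below),
        # max pooling after every second convolution: 128 -> 64 -> 32 -> 16.
        self.features = nn.Sequential(
            *block(2, 64), *block(64, 64, pool=True),
            *block(64, 64), *block(64, 64, pool=True),
            *block(64, 128), *block(128, 128, pool=True),
            *block(128, 128), *block(128, 128),
        )
        # Two fully connected layers produce the 8 real-valued offsets.
        self.regressor = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(128 * 16 * 16, 1024),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(1024, 8),
        )

    def forward(self, x):                                    # x: (B, 2, 128, 128)
        return self.regressor(self.features(x).flatten(1))   # (B, 8)
```

The eight outputs are the 4-point parameterization: a (dx, dy) displacement for each of the four patch corners. A full 3x3 homography can be recovered from them by solving the direct linear transform on the original and displaced corners, for example with cv2.getPerspectiveTransform. The classification variant mentioned in the abstract would instead output a distribution over quantized bins for each of the eight offsets.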

Datasets

Introduced in the Paper: S-COCO

Used in the Paper: MS COCO, PDS-COCO
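The warped MS-COCO training data described in the abstract (the S-COCO benchmark above) can be generated from any single image by randomly perturbing the corners of a crop. Below is a minimal sketch of that recipe; the 128-pixel patch size, the ±32-pixel perturbation range rho, and the helper name make_training_pair are illustrative assumptions rather than values taken from this page.

```python
import cv2
import numpy as np

def make_training_pair(gray, patch=128, rho=32, rng=None):
    """Build one (stacked patches, 4-point offsets) example from a single
    grayscale image by randomly perturbing the corners of a crop."""
    rng = rng or np.random.default_rng()
    h, w = gray.shape

    # Pick a patch location that keeps all perturbed corners inside the image.
    x = int(rng.integers(rho, w - patch - rho))
    y = int(rng.integers(rho, h - patch - rho))
    corners = np.float32([[x, y], [x + patch, y],
                          [x + patch, y + patch], [x, y + patch]])

    # The label: random (dx, dy) offsets for each corner, i.e. the 4-point
    # parameterization of a homography.
    offsets = rng.uniform(-rho, rho, size=(4, 2)).astype(np.float32)

    # Homography mapping the original corners onto the perturbed ones; warping
    # the image with its inverse yields a second view related to the first by
    # exactly that homography.
    H = cv2.getPerspectiveTransform(corners, corners + offsets)
    warped = cv2.warpPerspective(gray, np.linalg.inv(H), (w, h))

    patch_a = gray[y:y + patch, x:x + patch]
    patch_b = warped[y:y + patch, x:x + patch]
    inputs = np.stack([patch_a, patch_b]).astype(np.float32) / 255.0  # (2, 128, 128)
    return inputs, offsets.flatten()                                  # (8,) targets
```

The stacked patches form the two-channel network input described in the abstract, and the flattened offsets are the regression target.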

Results from the Paper

Task                  | Dataset  | Model         | Metric | Value | Global Rank
----------------------|----------|---------------|--------|-------|------------
Homography Estimation | PDS-COCO | HomographyNet | MACE   | 2.50  | #3
Homography Estimation | S-COCO   | HomographyNet | MACE   | 1.96  | #3
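The MACE values above are mean average corner errors. A minimal sketch, assuming the usual definition (the L2 distance between predicted and ground-truth corner positions, averaged over the four corners and over the evaluation set, in pixels):

```python
import numpy as np

def mean_average_corner_error(pred_offsets, true_offsets):
    """MACE: per-corner L2 error between predicted and ground-truth corner
    displacements, averaged over the 4 corners and all N examples.

    Both arguments are arrays of shape (N, 4, 2), in pixels.
    """
    per_corner = np.linalg.norm(pred_offsets - true_offsets, axis=-1)  # (N, 4)
    return float(per_corner.mean())
```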

Methods


No methods listed for this paper.