Rotation-Equivariant Keypoint Detection

29 Sep 2021 · Jongmin Lee, Byungjin Kim, Minsu Cho

We show how to train a rotation-equivariant representation for extracting local keypoints for image matching. Existing learning-based methods have focused on extracting translation-equivariant keypoints with conventional convolutional neural networks (CNNs), whereas rotation-equivariant keypoint detectors have received little attention. We therefore propose a rotation-invariant keypoint detection method built on rotation-equivariant CNNs. The rotation-equivariant representation allows us to estimate the local orientation of each keypoint accurately, and we introduce a dense histogram alignment loss that assigns orientations to keypoints more consistently. We validate the effectiveness of our method against existing keypoint detection approaches and demonstrate its transferability on public image matching benchmarks.
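
To give a rough feel for the underlying idea, the sketch below shows a minimal rotation-equivariant (lifting) convolution for the 4-fold rotation group and a soft-argmax readout of orientation from the per-rotation responses. This is an illustrative PyTorch sketch only, not the authors' implementation: the choice of the C4 group, the module names (`C4LiftingConv`, `orientation_from_histogram`), and the soft-binning readout are assumptions; the paper's actual architecture and dense histogram alignment loss may differ.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class C4LiftingConv(nn.Module):
    """Lifting convolution for the C4 rotation group (illustrative sketch).

    A single learned filter bank is rotated by 0/90/180/270 degrees and
    applied to the input, giving one response map per rotation. Rotating
    the input then (approximately) permutes these maps, which is the
    equivariance property that makes orientation estimation possible.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1
        )
        self.padding = kernel_size // 2

    def forward(self, x):
        responses = []
        for k in range(4):  # rotate the shared filters by k * 90 degrees
            w = torch.rot90(self.weight, k, dims=(-2, -1))
            responses.append(F.conv2d(x, w, padding=self.padding))
        # (B, 4, C_out, H, W): the orientation axis is dimension 1
        return torch.stack(responses, dim=1)


def orientation_from_histogram(responses, temperature=1.0):
    """Estimate a per-pixel orientation from the rotation responses.

    Pools channels into a 4-bin orientation histogram, soft-normalizes it,
    and returns the circular mean angle in radians.
    """
    hist = responses.mean(dim=2)                 # (B, 4, H, W)
    prob = F.softmax(hist / temperature, dim=1)  # soft orientation bins
    angles = torch.arange(4, device=prob.device) * (math.pi / 2)
    sin = (prob * torch.sin(angles)[None, :, None, None]).sum(dim=1)
    cos = (prob * torch.cos(angles)[None, :, None, None]).sum(dim=1)
    return torch.atan2(sin, cos)                 # (B, H, W)


if __name__ == "__main__":
    x = torch.randn(1, 1, 32, 32)
    conv = C4LiftingConv(1, 8)
    ori = orientation_from_histogram(conv(x))
    print(ori.shape)  # torch.Size([1, 32, 32])
```

In practice, discrete rotation groups finer than C4 (or steerable-CNN libraries) are typically used so that orientation can be resolved below 90-degree increments; the 4-bin histogram here is only for brevity.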
