no code implementations • 21 Aug 2022 • Shuai Su, Zhongkai Zhao, Yixin Fei, Shuda Li, Qijun Chen, Rui Fan
The experimental results demonstrate the importance of group-equivariant algorithms for correspondence matching under various Sim(2) transformation conditions.
1 code implementation • 17 Dec 2020 • Georgi Tinchev, Shuda Li, Kai Han, David Mitchell, Rigas Kouskouridas
In this paper, we aim at establishing accurate dense correspondences between a pair of images with overlapping field of view under challenging illumination variation, viewpoint changes, and style differences.
1 code implementation • NeurIPS 2020 • Xinghui Li, Kai Han, Shuda Li, Victor Adrian Prisacariu
The fine-resolution feature maps are used to obtain the final dense correspondences guided by the refined coarse 4D correlation tensor.
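As a rough illustration (not the paper's exact architecture), a 4D correlation tensor between two dense feature maps records the similarity of every pixel in one image with every pixel in the other; a minimal sketch, assuming L2-normalised per-pixel descriptors:

```python
import numpy as np

def correlation_4d(feat_a, feat_b):
    """Compute a 4D correlation tensor between two feature maps.

    feat_a, feat_b: arrays of shape (C, H, W) holding L2-normalised
    per-pixel descriptors. Returns an (H, W, H, W) tensor whose entry
    [i, j, k, l] is the cosine similarity between pixel (i, j) of
    feat_a and pixel (k, l) of feat_b.
    """
    c, h, w = feat_a.shape
    a = feat_a.reshape(c, h * w)   # (C, H*W) descriptors of image A
    b = feat_b.reshape(c, h * w)   # (C, H*W) descriptors of image B
    corr = a.T @ b                 # all-pairs dot products, (H*W, H*W)
    return corr.reshape(h, w, h, w)

# Toy usage with random unit-norm features (illustrative only).
rng = np.random.default_rng(0)
fa = rng.standard_normal((8, 4, 4))
fa /= np.linalg.norm(fa, axis=0, keepdims=True)
fb = rng.standard_normal((8, 4, 4))
fb /= np.linalg.norm(fb, axis=0, keepdims=True)
corr = correlation_4d(fa, fb)
print(corr.shape)  # (4, 4, 4, 4)
```

In the described approach, such a tensor is computed at coarse resolution (keeping memory manageable), refined, and then used to guide the final dense matches at fine resolution.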
1 code implementation • CVPR 2020 • Shuda Li, Kai Han, Theo W. Costain, Henry Howard-Jenkins, Victor Prisacariu
This is a challenging task due to large intra-class variations and a lack of dense pixel-level annotations.
Ranked #11 on Semantic Correspondence on PF-PASCAL
no code implementations • 3 Dec 2019 • Zirui Wang, Shuda Li, Henry Howard-Jenkins, Victor Adrian Prisacariu, Min Chen
We present FlowNet3D++, a deep scene flow estimation network.
no code implementations • 8 May 2019 • Henry Howard-Jenkins, Shuda Li, Victor Prisacariu
We propose a method for room layout estimation that does not rely on the typical box approximation or Manhattan world assumption.
no code implementations • ECCV 2018 • Vassileios Balntas, Shuda Li, Victor Prisacariu
We propose a method for learning convolutional representations suited to camera pose retrieval, based on nearest-neighbour matching and continuous metric-learning-based feature descriptors.
no code implementations • 4 Apr 2016 • Shuda Li, Ankur Handa, Yang Zhang, Andrew Calway
We describe a new method for comparing frame appearance in a frame-to-model 3-D mapping and tracking system using a low-dynamic-range (LDR) RGB-D camera; the method is robust to brightness changes caused by auto-exposure.
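A standard way to score frame similarity robustly against the gain and offset changes that auto-exposure introduces is zero-mean normalised cross-correlation (ZNCC); this is a simplified stand-in for the paper's appearance comparison, not its exact formulation:

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalised cross-correlation between two image patches.

    Invariant to affine brightness changes b = gain * a + offset, such
    as those introduced by a camera's auto-exposure control. Returns a
    score in [-1, 1], with 1 meaning identical up to brightness.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a = a - a.mean()                       # remove brightness offset
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + eps
    return float(a @ b / denom)            # normalisation removes gain

rng = np.random.default_rng(1)
frame = rng.random((8, 8))
brighter = 1.7 * frame + 0.3               # simulated auto-exposure change
print(round(zncc(frame, brighter), 4))     # 1.0
```

Because both the mean subtraction and the norm division cancel the affine terms, the score stays near 1 for the same scene content even as exposure fluctuates.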