Search Results for author: Changkun Liu

Found 8 papers, 2 papers with code

GSLoc: Efficient Camera Pose Refinement via 3D Gaussian Splatting

no code implementations · 20 Aug 2024 · Changkun Liu, Shuai Chen, Yash Bhalgat, Siyan Hu, Ming Cheng, ZiRui Wang, Victor Adrian Prisacariu, Tristan Braud

We leverage 3D Gaussian Splatting (3DGS) as a scene representation and propose a novel test-time camera pose refinement framework, GSLoc.

Pose Estimation · Regression +1

AIR-HLoc: Adaptive Retrieved Images Selection for Efficient Visual Localisation

no code implementations · 27 Mar 2024 · Changkun Liu, Jianhao Jiao, Huajian Huang, Zhengyang Ma, Dimitrios Kanoulas, Tristan Braud

State-of-the-art hierarchical localisation pipelines (HLoc) employ image retrieval (IR) to establish 2D-3D correspondences by selecting the top-$k$ most similar images from a reference database.

Image Retrieval · Retrieval

MobileARLoc: On-device Robust Absolute Localisation for Pervasive Markerless Mobile AR

no code implementations · 21 Jan 2024 · Changkun Liu, Yukun Zhao, Tristan Braud

To address APR accuracy and reduce VIO drift, MobileARLoc creates a feedback loop where VIO pose estimations refine the APR predictions.

Camera Pose Estimation · Pose Estimation

360Loc: A Dataset and Benchmark for Omnidirectional Visual Localization with Cross-device Queries

1 code implementation · CVPR 2024 · Huajian Huang, Changkun Liu, Yipeng Zhu, Hui Cheng, Tristan Braud, Sai-Kit Yeung

We propose a virtual camera approach to generate lower-FoV query frames from 360$^\circ$ images, which ensures a fair comparison of performance among different query types in visual localization tasks.

Visual Localization

KS-APR: Keyframe Selection for Robust Absolute Pose Regression

no code implementations · 10 Aug 2023 · Changkun Liu, Yukun Zhao, Tristan Braud

However, APR methods tend to yield significant inaccuracies for input images that are too distant from the training set.

Regression · Visual Localization
