no code implementations • 15 Apr 2024 • Jiadi Cui, Junming Cao, Yuhui Zhong, Liao Wang, Fuqiang Zhao, Penghao Wang, Yifan Chen, Zhipeng He, Lan Xu, Yujiao Shi, Yingliang Zhang, Jingyi Yu
We demonstrate that the LiDAR point cloud collected by the Polar device enhances a suite of 3D Gaussian splatting algorithms for garage scene modeling and rendering.
no code implementations • NeurIPS 2023 • Zhenbo Song, Xianghui Ze, Jianfeng Lu, Yujiao Shi
We propose a novel end-to-end approach that leverages the learning of dense pixel-wise flow fields in pairs of ground and satellite images to calculate the camera pose.
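Given such dense pixel-wise flow between the satellite image and the projected ground view, a rigid 2D pose (rotation plus translation) can be recovered in closed form. The sketch below is a generic orthogonal-Procrustes solver, not the paper's learned pipeline; the function name and the flow representation are illustrative assumptions.

```python
import numpy as np

def pose_from_flow(pts_src, flow):
    """Recover a 2D rotation R and translation t from dense flow.

    pts_src: (N, 2) pixel coordinates in the source (satellite) image.
    flow:    (N, 2) displacements to the matched ground-view coordinates.
    Solves the orthogonal Procrustes (Kabsch) problem in closed form.
    """
    pts_dst = pts_src + flow
    mu_s, mu_d = pts_src.mean(0), pts_dst.mean(0)
    # Cross-covariance of the centred correspondences.
    H = (pts_src - mu_s).T @ (pts_dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice a learned network would also output per-pixel confidences, which turn the plain least-squares fit above into a weighted one.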
1 code implementation • ICCV 2023 • Yujiao Shi, Fei Wu, Akhil Perincherry, Ankit Vora, Hongdong Li
In this paper, we propose a method to increase the accuracy of a ground camera's location and orientation by estimating the relative rotation and translation between the ground-level image and its matched/retrieved satellite image.
no code implementations • 7 Aug 2022 • Yujiao Shi, Xin Yu, Shan Wang, Hongdong Li
The critical challenge of this task is to learn a powerful global feature descriptor for the sequential ground-view images while accounting for their domain alignment with the reference satellite images.
1 code implementation • CVPR 2022 • Yujiao Shi, Hongdong Li
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
1 code implementation • 26 Mar 2022 • Yujiao Shi, Xin Yu, Liu Liu, Dylan Campbell, Piotr Koniusz, Hongdong Li
We address the problem of ground-to-satellite image geo-localization, that is, estimating the camera latitude, longitude and orientation (azimuth angle) by matching a query image captured at ground level against a large-scale database of geotagged satellite images.
no code implementations • 29 Sep 2021 • Junxuan Li, Yujiao Shi, Hongdong Li
It encodes a complete light field (i.e., a lumigraph) and therefore allows one to roam freely in the space and view the scene from any location in any direction.
1 code implementation • CVPR 2021 • Yujiao Shi, Hongdong Li, Xin Yu
We then warp and aggregate source view pixels to synthesize a novel view based on the estimated source-view visibility and target-view depth.
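The warping step described above can be sketched as a depth-based backward warp: unproject each target pixel with the target-view depth, transform it into the source camera frame, reproject, and sample the source image weighted by its visibility map. This is a minimal nearest-neighbour sketch under an assumed shared intrinsic matrix, not the paper's learned aggregation; all names are illustrative.

```python
import numpy as np

def synthesize_view(src_img, src_vis, tgt_depth, K, R, t):
    """Backward-warp source pixels into the target view.

    src_img:   (h, w) source image (single channel for brevity).
    src_vis:   (h, w) source-view visibility weights in [0, 1].
    tgt_depth: (h, w) target-view depth map.
    K: 3x3 intrinsics; (R, t): target-to-source rigid transform.
    Out-of-view or behind-camera pixels stay zero.
    """
    h, w = tgt_depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix                 # unprojected ray directions
    pts = rays * tgt_depth.reshape(1, -1)         # 3D points in target frame
    proj = K @ (R @ pts + t[:, None])             # project into source view
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros((h, w), dtype=float)
    idx = np.flatnonzero(valid)
    out.flat[idx] = src_img[v[valid], u[valid]] * src_vis[v[valid], u[valid]]
    return out
```

With several source views, the per-view results would be blended using the visibility weights rather than taken from a single image as here.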
1 code implementation • 2 Mar 2021 • Yujiao Shi, Dylan Campbell, Xin Yu, Hongdong Li
Specifically, we observe that when a 3D point in the real world is visible in both views, there is a deterministic mapping between its projected points in the two images, given the height of this 3D point.
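This deterministic mapping can be illustrated with two toy camera models: an overhead orthographic projection for the satellite view and a pinhole model for the ground camera. Everything here (function names, focal length, map resolution, camera height) is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

def satellite_pixel(pt_world, metres_per_pixel, map_size):
    """Overhead (orthographic) projection: drop the height coordinate,
    scale metres to pixels, and shift to the map centre."""
    x, y, _ = pt_world
    return np.array([map_size / 2 + x / metres_per_pixel,
                     map_size / 2 - y / metres_per_pixel])

def ground_pixel(pt_world, focal, cam_height):
    """Pinhole ground camera at the map centre, mounted at cam_height,
    looking along +y (principal point omitted for brevity)."""
    x, y, z = pt_world
    return np.array([focal * x / y, focal * (cam_height - z) / y])

# One 3D point visible in both views: once its height z is known,
# its satellite pixel determines its ground pixel and vice versa.
pt = np.array([3.0, 10.0, 1.5])        # metres: right, forward, up
sat_uv = satellite_pixel(pt, 0.2, 512)
gnd_uv = ground_pixel(pt, 500.0, 1.6)
```

Inverting either projection with a known (or estimated) height recovers the 3D point, which is exactly what makes the cross-view mapping deterministic.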
1 code implementation • CVPR 2020 • Yujiao Shi, Xin Yu, Dylan Campbell, Hongdong Li
Cross-view geo-localization is the problem of estimating the position and orientation (latitude, longitude and azimuth angle) of a camera at ground level given a large-scale database of geo-tagged aerial (e.g., satellite) images.
1 code implementation • NeurIPS 2019 • Yujiao Shi, Liu Liu, Xin Yu, Hongdong Li
The first step is to apply a regular polar transform to warp an aerial image such that its domain is closer to that of a ground-view panorama.
Ranked #4 on Image-Based Localization on VIGOR Cross Area
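The regular polar transform mentioned above can be sketched directly in NumPy: output columns sweep azimuth around the aerial image centre and output rows sweep radius, so the warped aerial image resembles a ground-view panorama. The output size and nearest-neighbour sampling are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def polar_transform(aerial, out_h=128, out_w=512):
    """Warp a square aerial image into a panorama-like polar image.

    Each output column corresponds to an azimuth angle around the
    aerial image centre; rows go from the image edge (top) towards
    the centre (bottom), mimicking a ground panorama's layout.
    """
    s = aerial.shape[0]                       # assume a square input
    cy = cx = (s - 1) / 2.0
    rows, cols = np.meshgrid(np.arange(out_h), np.arange(out_w),
                             indexing="ij")
    theta = 2.0 * np.pi * cols / out_w        # azimuth per column
    radius = (out_h - rows) / out_h * (s / 2.0)
    ya = np.clip(np.round(cy - radius * np.cos(theta)).astype(int), 0, s - 1)
    xa = np.clip(np.round(cx + radius * np.sin(theta)).astype(int), 0, s - 1)
    return aerial[ya, xa]                     # nearest-neighbour sampling
```

Because the transform only brings the two domains closer geometrically, a learned alignment stage is still needed afterwards to bridge the remaining appearance gap.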
1 code implementation • 11 Jul 2019 • Yujiao Shi, Xin Yu, Liu Liu, Tong Zhang, Hongdong Li
This paper proposes a novel Cross-View Feature Transport (CVFT) technique to explicitly establish cross-view domain transfer that facilitates feature alignment between ground and aerial images.