no code implementations • 17 Jun 2024 • Huaiji Zhou, Bing Wang, Changhao Chen
It ensures that only the most relevant NeRF sub-block generates key features for a specific pose.
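A minimal illustrative sketch of the idea of routing a query pose to its most relevant NeRF sub-block (module and dimension names here are hypothetical, not the authors' implementation):

```python
# Illustrative sketch: score each NeRF sub-block for a query pose and let only
# the highest-scoring sub-block produce key features for that pose.
import torch
import torch.nn as nn

class SubBlockRouter(nn.Module):
    def __init__(self, num_blocks: int, pose_dim: int = 7, feat_dim: int = 128):
        super().__init__()
        # One small feature head per NeRF sub-block (hypothetical stand-ins).
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(pose_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, feat_dim))
            for _ in range(num_blocks)
        ])
        # Relevance scorer: pose -> one score per sub-block.
        self.scorer = nn.Linear(pose_dim, num_blocks)

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        scores = self.scorer(pose)                        # (B, num_blocks)
        best = scores.argmax(dim=-1)                      # hard routing per pose
        feats = torch.stack([blk(pose) for blk in self.blocks], dim=1)
        return feats[torch.arange(pose.shape[0]), best]   # (B, feat_dim)

router = SubBlockRouter(num_blocks=4)
key_feat = router(torch.randn(8, 7))   # 8 poses (translation + quaternion)
print(key_feat.shape)                  # torch.Size([8, 128])
```

A softmax over the scores would make the routing differentiable; the hard argmax above simply mirrors the "only the most relevant sub-block" behaviour described in the excerpt.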
no code implementations • 22 May 2024 • Minghao Zhang, Bifeng Song, Changhao Chen, Xinyu Lang
In control problems for insect-scale direct-drive experimental platforms operating under the influence of tandem wings, the primary challenges facing existing reinforcement learning models are limited safety during exploration and instability during continuous training.
no code implementations • 21 Feb 2024 • Zhendong Xiao, Changhao Chen, Shan Yang, Wu Wei
Camera relocalization is pivotal in computer vision, with applications in AR, drones, robotics, and autonomous driving.
no code implementations • 17 Jan 2024 • Hao Qu, Lilian Zhang, Jun Mao, Junbo Tie, Xiaofeng He, Xiaoping Hu, Yifei Shi, Changhao Chen
The performance of visual SLAM in complex, real-world scenarios is often compromised by unreliable feature extraction and matching when using handcrafted features.
no code implementations • 4 Sep 2023 • Zongyang Chen, Xianfei Pan, Changhao Chen
Accurately and reliably positioning pedestrians in satellite-denied conditions remains a significant challenge.
no code implementations • 30 Aug 2023 • Zhihao Jia, Bing Wang, Changhao Chen
In this work, we propose the Drone-NeRF framework for efficient reconstruction of unbounded large-scale scenes captured by drone oblique photography using Neural Radiance Fields (NeRF).
no code implementations • 27 Aug 2023 • Changhao Chen, Bing Wang, Chris Xiaoxuan Lu, Niki Trigoni, Andrew Markham
Deep learning-based localization and mapping approaches have recently emerged as a new research direction and have received significant attention from both industry and academia.
no code implementations • 7 Mar 2023 • Changhao Chen, Xianfei Pan
Inertial sensors are widely utilized in smartphones, drones, robots, and IoT devices, playing a crucial role in enabling ubiquitous and reliable localization.
no code implementations • 16 Nov 2022 • Hao Qu, Lilian Zhang, Xiaoping Hu, Xiaofeng He, Xianfei Pan, Changhao Chen
To address this, we propose SelfOdom, a self-supervised dual-network framework that robustly and consistently learns and generates pose and depth estimates at global scale from monocular images.
no code implementations • 18 Sep 2022 • Zheming Tu, Changhao Chen, Xianfei Pan, Ruochen Liu, Jiarui Cui, Jun Mao
Accurate and robust localization is a fundamental need for mobile agents.
1 code implementation • 14 Sep 2022 • Kaichen Zhou, Lanqing Hong, Changhao Chen, Hang Xu, Chaoqiang Ye, Qingyong Hu, Zhenguo Li
Self-supervised depth learning from monocular images normally relies on the 2D pixel-wise photometric relation between temporally adjacent image frames.
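A minimal sketch of this 2D pixel-wise photometric relation, assuming depth and relative pose have already been predicted by some networks and the camera intrinsics K are known (shapes and names are placeholders, not the paper's code):

```python
# Warp frame t+1 into frame t using predicted depth and relative pose, then
# penalize the photometric difference (the core self-supervision signal).
import torch
import torch.nn.functional as F

def photometric_loss(img_t, img_t1, depth_t, T_t_to_t1, K):
    """img_*: (B,3,H,W), depth_t: (B,1,H,W), T_t_to_t1: (B,4,4), K: (B,3,3)."""
    B, _, H, W = img_t.shape
    # Pixel grid of frame t in homogeneous coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()     # (3,H,W)
    pix = pix.view(1, 3, -1).expand(B, -1, -1).to(img_t.device)         # (B,3,HW)
    # Back-project to 3D points in the camera frame of t using predicted depth.
    cam = torch.linalg.inv(K) @ pix * depth_t.reshape(B, 1, -1)         # (B,3,HW)
    ones = torch.ones(B, 1, cam.shape[-1], device=img_t.device)
    cam_h = torch.cat([cam, ones], dim=1)                               # (B,4,HW)
    # Transform into frame t+1 and project with K.
    cam1 = (T_t_to_t1 @ cam_h)[:, :3]                                   # (B,3,HW)
    proj = K @ cam1
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)                     # (B,2,HW)
    # Normalize to [-1,1] and sample frame t+1 at the projected locations.
    u = 2 * uv[:, 0] / (W - 1) - 1
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(img_t1, grid, align_corners=True)
    return (img_t - warped).abs().mean()                                # L1 photometric error
```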
1 code implementation • ICCV 2021 • Bing Wang, Changhao Chen, Zhaopeng Cui, Jie Qin, Chris Xiaoxuan Lu, Zhengdi Yu, Peijun Zhao, Zhen Dong, Fan Zhu, Niki Trigoni, Andrew Markham
Accurately describing and detecting 2D and 3D keypoints is crucial to establishing correspondences across images and point clouds.
1 code implementation • 22 Jun 2020 • Changhao Chen, Bing Wang, Chris Xiaoxuan Lu, Niki Trigoni, Andrew Markham
Deep learning-based localization and mapping has recently attracted significant attention.
1 code implementation • 12 Mar 2020 • Kaichen Zhou, Changhao Chen, Bing Wang, Muhamad Risqi U. Saputra, Niki Trigoni, Andrew Markham
We conjecture that this is because of naive approaches to feature-space fusion, such as summation or concatenation, which do not take into account the different strengths of each modality.
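A small illustrative contrast between the naive fusion operators mentioned above and a learned gate that reweights each modality (hypothetical names, not the paper's implementation):

```python
# Naive feature fusion by summation or concatenation vs. a learned gate that
# blends the two modalities according to their per-channel reliability.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Gate conditioned on both modalities; outputs per-channel weights in (0,1).
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([feat_a, feat_b], dim=-1))
        return w * feat_a + (1.0 - w) * feat_b   # soft, modality-aware blend

feat_rgb, feat_depth = torch.randn(4, 256), torch.randn(4, 256)
naive_sum = feat_rgb + feat_depth                       # naive: summation
naive_cat = torch.cat([feat_rgb, feat_depth], dim=-1)   # naive: concatenation
fused = GatedFusion(256)(feat_rgb, feat_depth)          # learned reweighting
```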
2 code implementations • 5 Mar 2020 • Wei Wang, Bing Wang, Peijun Zhao, Changhao Chen, Ronald Clark, Bo Yang, Andrew Markham, Niki Trigoni
In this paper, we present a novel end-to-end learning-based LiDAR relocalization framework, termed PointLoc, which infers 6-DoF poses directly using only a single point cloud as input, without requiring a pre-built map.
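A rough sketch of single-point-cloud pose regression in this spirit (a generic PointNet-style encoder plus a pose head; not the actual PointLoc architecture, and all dimensions are assumptions):

```python
# Shared per-point MLP, global max-pooling, and a head regressing a 3-D
# translation plus a unit quaternion from one input point cloud.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 1024, 1),
        )
        self.head = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 7))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (B, N, 3) -> per-point features -> global max pool -> 6-DoF pose.
        feat = self.point_mlp(pts.transpose(1, 2))        # (B, 1024, N)
        global_feat = feat.max(dim=-1).values             # (B, 1024)
        out = self.head(global_feat)                      # (B, 7)
        t, q = out[:, :3], F.normalize(out[:, 3:], dim=-1)
        return torch.cat([t, q], dim=-1)                  # translation + quaternion

pose = PointPoseNet()(torch.randn(2, 4096, 3))            # two clouds of 4096 points
```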
no code implementations • 13 Jan 2020 • Changhao Chen, Peijun Zhao, Chris Xiaoxuan Lu, Wei Wang, Andrew Markham, Niki Trigoni
Modern inertial measurement units (IMUs) are small, cheap, energy-efficient, and widely employed in smart devices and mobile robots.
no code implementations • 30 Dec 2019 • Changhao Chen, Stefano Rosa, Chris Xiaoxuan Lu, Bing Wang, Niki Trigoni, Andrew Markham
By integrating observations from different sensors, these mobile agents are able to perceive the environment and estimate system states, e.g., locations and orientations.
1 code implementation • 1 Nov 2019 • Chris Xiaoxuan Lu, Stefano Rosa, Peijun Zhao, Bing Wang, Changhao Chen, John A. Stankovic, Niki Trigoni, Andrew Markham
This paper presents the design, implementation and evaluation of milliMap, a single-chip millimetre wave (mmWave) radar based indoor mapping system targeted towards low-visibility environments to assist in emergency response.
no code implementations • 13 Oct 2019 • Wei Wang, Muhamad Risqi U. Saputra, Peijun Zhao, Pedro Gusmao, Bo Yang, Changhao Chen, Andrew Markham, Niki Trigoni
There is considerable work in the area of visual odometry (VO), and recent advances in deep learning have brought novel approaches to VO, which directly learn salient features from raw images.
no code implementations • 16 Sep 2019 • Muhamad Risqi U. Saputra, Pedro P. B. de Gusmao, Chris Xiaoxuan Lu, Yasin Almalioglu, Stefano Rosa, Changhao Chen, Johan Wahlström, Wei Wang, Andrew Markham, Niki Trigoni
The hallucination network is trained to predict fake visual features from thermal images using the Huber loss.
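A minimal sketch of this distillation step, with hypothetical encoder definitions standing in for the actual networks: the hallucination branch maps thermal images to features that are matched, via the Huber (smooth L1) loss, to features from a frozen visual encoder.

```python
# Hallucination network predicts "fake" visual features from thermal images,
# supervised with the Huber (smooth L1) loss against real visual features.
import torch
import torch.nn as nn

thermal_encoder = nn.Sequential(          # hallucination network (thermal -> feature)
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 256),
)
visual_encoder = nn.Sequential(           # stand-in for a pretrained RGB encoder
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 256),
).eval()

huber = nn.SmoothL1Loss()                 # Huber loss is robust to feature outliers
optimizer = torch.optim.Adam(thermal_encoder.parameters(), lr=1e-4)

thermal_img, rgb_img = torch.randn(4, 1, 128, 128), torch.randn(4, 3, 128, 128)
with torch.no_grad():
    target_feat = visual_encoder(rgb_img)         # real visual features (teacher)
loss = huber(thermal_encoder(thermal_img), target_feat)
loss.backward()
optimizer.step()
```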
1 code implementation • 8 Sep 2019 • Bing Wang, Changhao Chen, Chris Xiaoxuan Lu, Peijun Zhao, Niki Trigoni, Andrew Markham
Deep learning has achieved impressive results in camera localization, but current single-image techniques typically suffer from a lack of robustness, leading to large outliers.
Ranked #2 on Visual Localization on Oxford RobotCar Full
1 code implementation • 14 Aug 2019 • Chris Xiaoxuan Lu, Xuan Kan, Bowen Du, Changhao Chen, Hongkai Wen, Andrew Markham, Niki Trigoni, John Stankovic
Inspired by the fact that most people carry smart wireless devices with them, e.g., smartphones, we propose to use the device's wireless identifier as a supervisory label.
no code implementations • 11 Aug 2019 • Changhao Chen, Chris Xiaoxuan Lu, Bing Wang, Niki Trigoni, Andrew Markham
In addition, we show how DynaNet can indicate failures by inspecting internal properties such as the rate of innovation (Kalman gain).
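A toy 1-D Kalman update illustrating why the innovation and Kalman gain are natural health signals (purely illustrative, not DynaNet's learned formulation):

```python
# The innovation (measurement residual) and Kalman gain expose how much the
# filter trusts incoming observations; a spike in the normalized innovation
# can flag degraded or failing sensing.
import math

def kalman_update(x, P, z, H=1.0, R=0.5):
    nu = z - H * x                     # innovation (measurement residual)
    S = H * P * H + R                  # innovation covariance
    K = P * H / S                      # Kalman gain
    return x + K * nu, (1.0 - K * H) * P, K, nu, S

x, P = 0.0, 1.0
for z in [0.1, 0.2, 5.0, 0.3]:         # the 5.0 reading acts as an outlier
    x, P, K, nu, S = kalman_update(x, P, z)
    if abs(nu) > 3.0 * math.sqrt(S):   # crude failure flag on the innovation
        print(f"possible sensor failure: innovation={nu:.2f}, gain={K:.2f}")
```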
no code implementations • 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS) • Peijun Zhao, Chris Xiaoxuan Lu, Jianan Wang, Changhao Chen, Wei Wang, Niki Trigoni, Andrew Markham
The key to offering personalised services in smart spaces is knowing where a particular person is with a high degree of accuracy.
no code implementations • CVPR 2019 • Changhao Chen, Stefano Rosa, Yishu Miao, Chris Xiaoxuan Lu, Wei Wu, Andrew Markham, Niki Trigoni
Deep learning approaches for Visual-Inertial Odometry (VIO) have proven successful, but they rarely focus on incorporating robust fusion strategies for dealing with imperfect input sensory data.
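A hedged sketch in the spirit of selective fusion for VIO (not the paper's exact model; dimensions and names are assumptions): a sigmoid mask, conditioned on both modalities, reweights every channel of the concatenated visual-inertial features before pose regression.

```python
# Soft fusion mask for visual-inertial features: unreliable channels are
# down-weighted before the pose head sees them.
import torch
import torch.nn as nn

class SoftFusionVIO(nn.Module):
    def __init__(self, vis_dim=512, imu_dim=128, pose_dim=6):
        super().__init__()
        fused = vis_dim + imu_dim
        self.mask_net = nn.Sequential(nn.Linear(fused, fused), nn.Sigmoid())
        self.pose_head = nn.Sequential(nn.Linear(fused, 128), nn.ReLU(),
                                       nn.Linear(128, pose_dim))

    def forward(self, vis_feat, imu_feat):
        fused = torch.cat([vis_feat, imu_feat], dim=-1)
        mask = self.mask_net(fused)          # per-channel reliability weights in (0,1)
        return self.pose_head(mask * fused)  # fuse, then regress relative pose

pose = SoftFusionVIO()(torch.randn(4, 512), torch.randn(4, 128))  # (4, 6) relative pose
```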
1 code implementation • 27 Nov 2018 • Linhai Xie, Yishu Miao, Sen Wang, Phil Blunsom, Zhihua Wang, Changhao Chen, Andrew Markham, Niki Trigoni
Due to the sparse rewards and high degree of environment variation, reinforcement learning approaches such as Deep Deterministic Policy Gradient (DDPG) are plagued by issues of high variance when applied in complex real world environments.
no code implementations • 4 Oct 2018 • Changhao Chen, Yishu Miao, Chris Xiaoxuan Lu, Phil Blunsom, Andrew Markham, Niki Trigoni
Inertial information processing plays a pivotal role in ego-motion awareness for mobile agents, as inertial measurements are entirely egocentric and not environment dependent.
no code implementations • 20 Sep 2018 • Changhao Chen, Peijun Zhao, Chris Xiaoxuan Lu, Wei Wang, Andrew Markham, Niki Trigoni
Advances in micro-electro-mechanical systems (MEMS) technology have made inertial measurement units (IMUs) small, cheap, and energy-efficient, and they are now widely used in smartphones, robots, and drones.
no code implementations • 30 Jan 2018 • Changhao Chen, Xiaoxuan Lu, Andrew Markham, Niki Trigoni
Inertial sensors play a pivotal role in indoor localization, which in turn lays the foundation for pervasive personal applications.