no code implementations • 10 Aug 2023 • Yulin Yang, Patrick Geneva, Guoquan Huang
To this end, we develop a new analytic combined IMU integration with intrinsics (termed ACI³) to preintegrate IMU measurements, which is leveraged to fuse auxiliary IMUs and/or gyroscopes alongside a base IMU.
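The paper's ACI³ additionally models IMU intrinsics; as background, the core preintegration idea can be sketched as accumulating gyro/accelerometer samples into relative motion terms between two keyframes that do not depend on the global start state. This is a minimal discrete-time sketch, not the paper's actual formulation; the function and variable names are illustrative.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector, so skew(w) @ v == cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate(omegas, accels, dt):
    """Accumulate gyro rates (omegas) and specific forces (accels),
    each sampled at interval dt, into relative rotation dR, velocity
    change dv, and position change dp between two keyframes.
    Gravity and biases would be handled separately in a real system."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(omegas, accels):
        dp += dv * dt + 0.5 * dR @ a * dt**2
        dv += dR @ a * dt
        # first-order approximation of the rotation exponential map
        dR = dR @ (np.eye(3) + skew(w * dt))
    return dR, dv, dp
```

With zero rotation and constant acceleration the accumulated terms reduce to the familiar kinematic formulas, which makes the sketch easy to sanity-check.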
1 code implementation • 9 Sep 2022 • Peng Yin, Shiqi Zhao, Ivan Cisneros, Abulikemu Abuduweili, Guoquan Huang, Michael Milford, Changliu Liu, Howie Choset, Sebastian Scherer
A summary of this work and our datasets and evaluation API is publicly available to the robotics community at: https://github.com/MetaSLAM/GPRS.
Loop Closure Detection • Simultaneous Localization and Mapping
no code implementations • 23 Mar 2021 • Pengxiang Zhu, Patrick Geneva, Wei Ren, Guoquan Huang
In this paper we present a consistent and distributed state estimator for multi-robot cooperative localization (CL) which efficiently fuses environmental features and loop-closure constraints across time and robots.
no code implementations • 18 Dec 2020 • Xingxing Zuo, Nathaniel Merrill, Wei Li, Yong Liu, Marc Pollefeys, Guoquan Huang
In this work, we present a lightweight, tightly-coupled deep depth network and visual-inertial odometry (VIO) system, which can provide accurate state estimates and dense depth maps of the immediate surroundings.
no code implementations • 17 Aug 2020 • Xingxing Zuo, Yulin Yang, Patrick Geneva, Jiajun Lv, Yong Liu, Guoquan Huang, Marc Pollefeys
Only the tracked planar points belonging to the same plane will be used for plane initialization, which makes the plane extraction efficient and robust.
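Once a set of tracked points is believed to lie on the same plane, the plane can be initialized by a least-squares fit. A common way to do this (a sketch under that assumption, not necessarily the paper's exact method) is to take the direction of least variance of the centered point cloud via SVD as the plane normal.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to 3D points: returns a unit normal n
    and offset d such that n . p + d ~= 0 for points p on the plane.
    The normal is the right singular vector of the centered cloud
    with the smallest singular value (direction of least variance)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]            # rows of vt are sorted by decreasing singular value
    d = -n @ centroid
    return n, d
```

In practice one would also check the fit residuals before accepting the plane, which is part of what makes restricting the fit to co-planar tracked points efficient and robust.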
Robotics
no code implementations • 28 Jun 2020 • Kevin Eckenhoff, Patrick Geneva, Guoquan Huang
As cameras and inertial sensors become ubiquitous in mobile devices and robots, there is great potential in designing visual-inertial navigation systems (VINS) for efficient, versatile 3D motion tracking that utilize any number of available cameras and inertial measurement units (IMUs) and are resilient to sensor failures or measurement depletion.
no code implementations • 13 Nov 2019 • Xingxing Zuo, Mingming Zhang, Yiming Chen, Yong Liu, Guoquan Huang, Mingyang Li
While visual localization and SLAM have witnessed great progress in past decades, few works have explicitly considered the kinematic (or dynamic) constraints of the real robotic system when designing state estimators for deployment on a mobile robot in practice.
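One classic example of such a kinematic constraint is the nonholonomic constraint of a wheeled ground robot: absent slippage, its body-frame velocity has (approximately) zero lateral and vertical components, which can serve as a pseudo-measurement in the estimator. The sketch below (illustrative names; not the paper's specific formulation) computes that constraint residual.

```python
import numpy as np

def kinematic_constraint_residual(R_wb, v_w):
    """Residual of the nonholonomic constraint for a wheeled robot.
    R_wb rotates body-frame vectors into the world frame; v_w is the
    estimated velocity in the world frame. Expressed in the body
    frame (x forward, y left, z up), a non-slipping ground robot has
    near-zero lateral (y) and vertical (z) velocity."""
    v_b = R_wb.T @ v_w      # velocity expressed in the body frame
    return v_b[1:]          # [lateral, vertical] components, ideally ~0
```

In a filter this residual would be applied as a soft constraint with a small measurement noise, so mild slippage does not break the estimate.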
no code implementations • 9 Sep 2019 • Xingxing Zuo, Patrick Geneva, Woosik Lee, Yong Liu, Guoquan Huang
This paper presents a tightly-coupled multi-sensor fusion algorithm termed LiDAR-inertial-camera fusion (LIC-Fusion), which efficiently fuses IMU measurements, sparse visual features, and extracted LiDAR points.
Robotics
no code implementations • CVPR 2019 • Patrick Geneva, James Maley, Guoquan Huang
It holds great implications for practical applications to enable centimeter-accuracy positioning for mobile and wearable sensor systems.
1 code implementation • 20 May 2018 • Nate Merrill, Guoquan Huang
Robust and efficient loop closure detection is essential for large-scale real-time SLAM.
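A common descriptor-based recipe for loop closure detection is to compare a compact descriptor of the current image against a database of descriptors from past keyframes and declare a loop when the best match is similar enough. This is a generic sketch of that idea (threshold value and names are illustrative), not this paper's learned-descriptor pipeline.

```python
import numpy as np

def detect_loop(query_desc, db_descs, threshold=0.9):
    """Return the index of the most similar database descriptor if its
    cosine similarity to the query exceeds threshold, else None.
    db_descs is an (N, D) array of past keyframe descriptors."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarities
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```

Real systems add temporal consistency checks and geometric verification of the candidate match before accepting the loop.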
Robotics 68T40 I.2.9; I.2.6; I.2.10; I.4; I.5
1 code implementation • 10 May 2018 • Zheng Huai, Guoquan Huang
In this paper, we propose a novel robocentric formulation of the visual-inertial navigation system (VINS) within a sliding-window filtering framework and design an efficient, lightweight, robocentric visual-inertial odometry (R-VIO) algorithm for consistent motion tracking even in challenging environments using only a monocular camera and a 6-axis IMU.
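The robocentric formulation keeps the state in the robot's current local frame rather than a fixed global frame, so when the sliding window advances the global frame must be re-expressed in the new local frame by composing with the estimated motion. A minimal sketch with 4x4 homogeneous transforms (illustrative names; not R-VIO's actual state representation):

```python
import numpy as np

def shift_frame(T_lg, T_lm):
    """Robocentric frame shift. T_lg expresses the global frame g in
    the old local frame l; T_lm is the estimated motion from l to the
    new local frame m. Returns T_mg, the global frame expressed in m,
    by composing the inverse motion with the old relative pose."""
    return np.linalg.inv(T_lm) @ T_lg
```

Because every state is relative to the current frame, errors do not accumulate in an unobservable global yaw/position the way they do in world-centric filters, which is part of the consistency argument.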
Robotics
1 code implementation • 7 May 2018 • Kevin Eckenhoff, Patrick Geneva, Guoquan Huang
In this paper we propose a new continuous preintegration theory for graph-based sensor fusion with an inertial measurement unit (IMU) and a camera (or other aiding sensors).
Robotics
no code implementations • 23 Nov 2017 • Xingxing Zuo, Xiaojia Xie, Yong Liu, Guoquan Huang
In this paper, we develop a robust and efficient visual SLAM system that utilizes heterogeneous point and line features.
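Line features are typically scored differently from points: instead of a point-to-point reprojection error, a common residual is the distance from a projected point (e.g. a line endpoint) to the detected line. A generic point-to-line distance sketch (illustrative, not this system's exact residual):

```python
import numpy as np

def point_to_line_distance(p, a, b):
    """Distance from point p to the infinite line through a and b,
    computed as the area of the parallelogram spanned by (b - a) and
    (p - a) divided by the base length |b - a|. Works in 3D; for 2D
    image lines, embed the points with a zero third coordinate."""
    d = b - a
    return np.linalg.norm(np.cross(d, p - a)) / np.linalg.norm(d)
```

Summing this distance over the two projected endpoints of a matched line segment gives a simple line reprojection cost that can be minimized alongside point residuals.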