Search Results for author: Yuhang Ming

Found 10 papers, 6 papers with code

Vox-Fusion++: Voxel-based Neural Implicit Dense Tracking and Mapping with Multi-maps

no code implementations · 19 Mar 2024 · Hongjia Zhai, Hai Li, Xingrui Yang, Gan Huang, Yuhang Ming, Hujun Bao, Guofeng Zhang

In this paper, we introduce Vox-Fusion++, a multi-maps-based robust dense tracking and mapping system that seamlessly fuses neural implicit representations with traditional volumetric fusion techniques.

AEGIS-Net: Attention-guided Multi-Level Feature Aggregation for Indoor Place Recognition

1 code implementation · 15 Dec 2023 · Yuhang Ming, Jian Ma, Xingrui Yang, Weichen Dai, Yong Peng, Wanzeng Kong

We evaluate our AEGIS-Net on the ScanNetPR dataset and compare its performance with a pre-deep-learning feature-based method and five state-of-the-art deep-learning-based methods.

Semantic Segmentation

EDI: ESKF-based Disjoint Initialization for Visual-Inertial SLAM Systems

no code implementations · 4 Aug 2023 · Weihan Wang, Jiani Li, Yuhang Ming, Philippos Mordohai

Our method incorporates an Error-state Kalman Filter (ESKF) to estimate gyroscope bias and correct rotation estimates from monocular SLAM, overcoming dependence on pure monocular SLAM for rotation estimation.
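
The role the ESKF plays can be pictured with a deliberately simplified sketch: treat the gyroscope bias as a slowly varying state and update it whenever a relative rotation from monocular SLAM arrives. This is a toy version only (a full error-state filter also tracks the rotation error), and the class name, noise values and interface below are assumptions made for the example, not the paper's implementation.

```python
# Toy sketch only, not the paper's ESKF: a 3-state Kalman filter on the
# gyroscope bias, using monocular-SLAM relative rotations as the reference.
import numpy as np
from scipy.spatial.transform import Rotation as R

class GyroBiasFilter:
    def __init__(self, process_noise=1e-8, meas_noise=1e-4):
        self.b = np.zeros(3)                # bias estimate [rad/s]
        self.P = np.eye(3) * 1e-2           # bias covariance
        self.Q = np.eye(3) * process_noise  # bias random-walk noise
        self.Rn = np.eye(3) * meas_noise    # measurement noise

    def update(self, gyro_mean, R_slam_rel, dt):
        """gyro_mean: mean gyro reading over the interval; R_slam_rel: 3x3
        relative rotation from monocular SLAM; dt: interval length [s]."""
        w_slam = R.from_matrix(R_slam_rel).as_rotvec() / dt
        z = gyro_mean - w_slam              # measured rate minus true rate ~ bias
        self.P = self.P + self.Q
        K = self.P @ np.linalg.inv(self.P + self.Rn)
        self.b = self.b + K @ (z - self.b)
        self.P = (np.eye(3) - K) @ self.P
        return self.b                       # subtract from raw gyro to correct rotation
```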

Vox-Fusion: Dense Tracking and Mapping with Voxel-based Neural Implicit Representation

1 code implementation · 28 Oct 2022 · Xingrui Yang, Hai Li, Hongjia Zhai, Yuhang Ming, Yuqian Liu, Guofeng Zhang

In this work, we present a dense tracking and mapping system named Vox-Fusion, which seamlessly fuses neural implicit representations with traditional volumetric fusion methods.
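
How a neural implicit representation and volumetric fusion can coexist is easiest to see in a stripped-down sketch: a sparse grid of learnable voxel embeddings is allocated incrementally as new surface is observed (as in classical fusion) and decoded to signed distances by a shared MLP. The class name, feature sizes and nearest-voxel lookup below are illustrative assumptions, not the released Vox-Fusion code.

```python
# Minimal sketch, not the Vox-Fusion implementation: incrementally allocated
# voxel embeddings decoded to SDF values by a shared MLP.
import torch
import torch.nn as nn

class SparseVoxelField(nn.Module):
    def __init__(self, voxel_size=0.1, feat_dim=16):
        super().__init__()
        self.voxel_size = voxel_size
        self.feat_dim = feat_dim
        self.feats = nn.ParameterDict()      # "ix_iy_iz" -> voxel embedding
        self.decoder = nn.Sequential(        # shared SDF decoder
            nn.Linear(feat_dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def _key(self, p):
        idx = torch.floor(p / self.voxel_size).long()
        return "_".join(str(int(i)) for i in idx)

    def allocate(self, points):
        """Allocate voxels covering newly observed surface points."""
        for p in points:
            k = self._key(p)
            if k not in self.feats:
                self.feats[k] = nn.Parameter(torch.zeros(self.feat_dim))

    def sdf(self, p):
        """Signed distance at point p (nearest-voxel lookup for brevity;
        the real system interpolates features from neighbouring voxels)."""
        local = p / self.voxel_size - torch.floor(p / self.voxel_size)
        return self.decoder(torch.cat([self.feats[self._key(p)], local]))
```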

iDF-SLAM: End-to-End RGB-D SLAM with Neural Implicit Mapping and Deep Feature Tracking

no code implementations · 16 Sep 2022 · Yuhang Ming, Weicai Ye, Andrew Calway

The neural implicit mapper is trained on-the-fly, while the neural tracker is pretrained on the ScanNet dataset and then finetuned alongside the training of the neural implicit mapper.

DeFlowSLAM: Self-Supervised Scene Motion Decomposition for Dynamic Dense SLAM

1 code implementation · 18 Jul 2022 · Weicai Ye, Xingyuan Yu, Xinyue Lan, Yuhang Ming, Jinyu Li, Hujun Bao, Zhaopeng Cui, Guofeng Zhang

We present a novel dual-flow representation of scene motion that decomposes the optical flow into a static flow field caused by the camera motion and another dynamic flow field caused by the objects' movements in the scene.

Pose Estimation, Simultaneous Localization and Mapping
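
The dual-flow decomposition used by DeFlowSLAM can be written down directly: given depth and the relative camera pose, the static flow is the reprojection displacement that camera motion alone would induce, and the dynamic flow is the residual attributed to moving objects. The sketch below is a plain NumPy illustration of that identity, not the DeFlowSLAM code; the intrinsics K, depth map and relative pose T_rel are assumed inputs.

```python
# Illustrative only: total_flow = static_flow (camera motion) + dynamic_flow (objects).
import numpy as np

def static_flow(depth, K, T_rel):
    """Flow induced purely by the relative camera pose T_rel (4x4) given depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)   # back-project
    pts = T_rel[:3, :3] @ pts + T_rel[:3, 3:4]                # move with the camera
    proj = K @ pts
    proj = (proj[:2] / proj[2:]).T.reshape(h, w, 2)           # reproject
    return proj - np.stack([u, v], axis=-1)

def dynamic_flow(total_flow, depth, K, T_rel):
    """Residual flow attributed to independently moving objects."""
    return total_flow - static_flow(depth, K, T_rel)
```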

PVO: Panoptic Visual Odometry

1 code implementation · CVPR 2023 · Weicai Ye, Xinyue Lan, Shuo Chen, Yuhang Ming, Xingyuan Yu, Hujun Bao, Zhaopeng Cui, Guofeng Zhang

We present PVO, a novel panoptic visual odometry framework to achieve more comprehensive modeling of the scene motion, geometry, and panoptic segmentation information.

Optical Flow Estimation, Pose Estimation +3

FD-SLAM: 3-D Reconstruction Using Features and Dense Matching

no code implementations · 25 Mar 2022 · Xingrui Yang, Yuhang Ming, Zhaopeng Cui, Andrew Calway

It is well known that visual SLAM systems based on dense matching are locally accurate but are also susceptible to long-term drift and map corruption.

Pose Estimation

CGiS-Net: Aggregating Colour, Geometry and Implicit Semantic Features for Indoor Place Recognition

1 code implementation · 4 Feb 2022 · Yuhang Ming, Xingrui Yang, Guofeng Zhang, Andrew Calway

We describe a novel approach to indoor place recognition from RGB point clouds based on aggregating low-level colour and geometry features with high-level implicit semantic features.

Semantic Segmentation
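
A hedged sketch of what "aggregating" means in CGiS-Net: per-point colour, geometry and implicit semantic feature vectors are fused by a small MLP and pooled into one global descriptor that can be compared across places. The encoders, dimensions and pooling choice below are placeholder assumptions, not the released CGiS-Net.

```python
# Placeholder sketch of colour + geometry + semantic feature aggregation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlaceDescriptor(nn.Module):
    def __init__(self, c_dim=16, g_dim=32, s_dim=32, out_dim=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(c_dim + g_dim + s_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim))

    def forward(self, colour_feat, geom_feat, sem_feat):
        # each input: (N_points, dim); output: one descriptor per point cloud
        per_point = self.fuse(torch.cat([colour_feat, geom_feat, sem_feat], dim=-1))
        global_desc = per_point.max(dim=0).values    # order-invariant pooling
        return F.normalize(global_desc, dim=0)       # compare with cosine / L2 distance
```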

Object-Augmented RGB-D SLAM for Wide-Disparity Relocalisation

1 code implementation · 5 Aug 2021 · Yuhang Ming, Xingrui Yang, Andrew Calway

During the map construction, we use a pre-trained neural network to detect objects and estimate 6D poses from RGB-D data.

Geometric Matching, Object
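
One way to picture object-level relocalisation under wide viewpoint changes: the detected objects become sparse landmarks (class label plus 3-D centroid), and the camera pose is recovered by rigidly aligning the query objects to the mapped ones. The greedy class-label matching and Kabsch alignment below are simplifying assumptions made for illustration, not the paper's pipeline.

```python
# Simplified illustration of object-landmark relocalisation (not the paper's code).
import numpy as np

def kabsch(src, dst):
    """Rigid (R, t) minimising ||R @ src_i + t - dst_i|| over matched points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    Rm = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Rm, mu_d - Rm @ mu_s

def relocalise(query_objects, map_objects):
    """Both arguments: lists of (class_label, 3-D centroid) pairs.
    Needs at least three non-degenerate matches for a well-posed fit."""
    src, dst = [], []
    for cls_q, p_q in query_objects:
        for cls_m, p_m in map_objects:
            if cls_q == cls_m:               # naive first-match association
                src.append(p_q); dst.append(p_m)
                break
    return kabsch(np.asarray(src, float), np.asarray(dst, float))
```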
