no code implementations • 9 Apr 2024 • Kai Luan, Chenghao Shi, Neng Wang, Yuwei Cheng, Huimin Lu, Xieyuanli Chen
The millimeter-wave radar sensor maintains stable performance under adverse environmental conditions, making it a promising solution for all-weather perception tasks, such as outdoor mobile robotics.
1 code implementation • 2 Apr 2024 • Yehui Shen, Mingmin Liu, Huimin Lu, Xieyuanli Chen
Visual place recognition (VPR) plays a pivotal role in autonomous exploration and navigation of mobile robots within complex outdoor environments.
1 code implementation • 27 Mar 2024 • Weidong Xie, Lun Luo, Nanfei Ye, Yi Ren, Shaoyi Du, Minhang Wang, Jintao Xu, Rui Ai, Weihao Gu, Xieyuanli Chen
Experimental results on the KITTI dataset show that our proposed methods achieve state-of-the-art performance while running in real time.
1 code implementation • 27 Feb 2024 • Jingyi Xu, Junyi Ma, Qi Wu, Zijie Zhou, Yue Wang, Xieyuanli Chen, Ling Pei
Fusion-based place recognition is an emerging technique that jointly utilizes multi-modal perception data to recognize previously visited places in GPS-denied scenarios for robots and autonomous vehicles.
1 code implementation • 21 Feb 2024 • Yutong Wang, Chaoyang Jiang, Xieyuanli Chen
Meanwhile, local bundle adjustment is performed in our visual object mapping process, utilizing object- and point-based covisibility graphs.
1 code implementation • 30 Jan 2024 • Jintao Cheng, Kang Zeng, Zhuoxu Huang, Xiaoyu Tang, Jin Wu, Chengxi Zhang, Xieyuanli Chen, Rui Fan
Moving object segmentation (MOS) provides a reliable solution for detecting traffic participants and thus is of great interest in the autonomous driving field.
1 code implementation • 29 Nov 2023 • Junyi Ma, Xieyuanli Chen, Jiawei Huang, Jingyi Xu, Zhen Luo, Jintao Xu, Weihao Gu, Rui Ai, Hesheng Wang
Furthermore, a standardized evaluation protocol for multiple preset tasks is provided to compare the performance of all proposed baselines on present and future occupancy estimation of objects of interest in autonomous driving scenarios.
1 code implementation • 21 Nov 2023 • Youqi Liao, Shuhao Kang, Jianping Li, Yang Liu, Yun Liu, Zhen Dong, Bisheng Yang, Xieyuanli Chen
Our framework features a two-stream encoder, an active fusion decoder (AFD) and a dual-task regularization approach.
no code implementations • 15 Sep 2023 • Chenghao Shi, Xieyuanli Chen, Junhao Xiao, Bin Dai, Huimin Lu
In the end, we integrate our LCR-Net into a SLAM system and achieve robust and accurate online LiDAR SLAM in outdoor driving environments.
no code implementations • 14 Sep 2023 • Rong Li, Shijie Li, Xieyuanli Chen, Teli Ma, Juergen Gall, Junwei Liang
In this paper, we present TFNet, a range-image-based LiDAR semantic segmentation method that utilizes temporal information to address this issue.
Ranked #1 on Semantic Segmentation on SemanticPOSS
1 code implementation • 19 Jun 2023 • Peizheng Li, Shuxiao Ding, Xieyuanli Chen, Niklas Hanselmann, Marius Cordts, Juergen Gall
Accurately perceiving instances and predicting their future motion are key tasks for autonomous vehicles, enabling them to navigate safely in complex urban traffic.
no code implementations • 31 Mar 2023 • Chenghao Shi, Xieyuanli Chen, Huimin Lu, Wenbang Deng, Junhao Xiao, Bin Dai
The proposed 3D-RoFormer fuses 3D position information into the transformer network, efficiently exploiting the contextual and geometric information of point clouds to generate robust superpoint correspondences.
1 code implementation • 24 Mar 2023 • Jiafeng Cui, Xieyuanli Chen
The experimental results show that our CCL consistently improves the performance of different methods across different environments, outperforming the state-of-the-art continual learning method.
1 code implementation • ICCV 2023 • Junyuan Deng, Xieyuanli Chen, Songpengcheng Xia, Zhen Sun, Guoqing Liu, Wenxian Yu, Ling Pei
To bridge this gap, we propose NeRF-LOAM, a novel NeRF-based LiDAR odometry and mapping approach consisting of three modules: neural odometry, neural mapping, and mesh reconstruction.
1 code implementation • 8 Mar 2023 • Wenbang Deng, Kaihong Huang, Qinghua Yu, Huimin Lu, Zhiqiang Zheng, Xieyuanli Chen
In this paper, we present a flexible and effective OIS framework for LiDAR point clouds that can accurately segment both known and unknown instances (i.e., instance categories seen and unseen during training).
1 code implementation • 7 Mar 2023 • Neng Wang, Chenghao Shi, Ruibin Guo, Huimin Lu, Zhiqiang Zheng, Xieyuanli Chen
We evaluated our approach on the LiDAR-MOS benchmark based on SemanticKITTI and achieved better moving object segmentation performance than state-of-the-art methods, demonstrating the effectiveness of integrating instance information for moving object segmentation.
1 code implementation • 3 Feb 2023 • Junyi Ma, Guangming Xiong, Jingyi Xu, Xieyuanli Chen
LiDAR-based place recognition (LPR) is one of the most crucial components of autonomous vehicles to identify previously visited places in GPS-denied environments.
1 code implementation • CVPR 2023 • Lucas Nunes, Louis Wiesmann, Rodrigo Marcuzzi, Xieyuanli Chen, Jens Behley, Cyrill Stachniss
Especially in autonomous driving, point clouds are sparse, and an object's appearance depends on its distance from the sensor, making it harder to acquire large amounts of labeled training data.
1 code implementation • 28 Nov 2022 • Hao Dong, Xianjing Zhang, Jintao Xu, Rui Ai, Weihao Gu, Huimin Lu, Juho Kannala, Xieyuanli Chen
However, current works rely on raw-data- or network-feature-level fusion and consider only short-range HD map generation, limiting their deployment in realistic autonomous driving applications.
1 code implementation • 6 Oct 2022 • Haofei Kuang, Xieyuanli Chen, Tiziano Guadagnino, Nicky Zimmerman, Jens Behley, Cyrill Stachniss
The experiments suggest that the presented implicit representation is able to predict more accurate 2D LiDAR scans, leading to an improved observation model for our particle-filter-based localization.
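As a rough illustration of the observation-model idea described above (not the paper's implicit neural representation), the sketch below weights particles by how well a scan predicted at each particle's pose matches the observed scan. The circular toy room, `predict_ranges`, `observation_weights`, and all parameters are illustrative assumptions standing in for queries to the learned map.

```python
import numpy as np

R_ROOM = 10.0  # toy circular room; stands in for the learned implicit map

def predict_ranges(pose, n_beams=8):
    """Expected ranges at `pose` in the toy map (closed-form ray casting)."""
    x, y, theta = pose
    angles = theta + np.linspace(0.0, 2 * np.pi, n_beams, endpoint=False)
    d = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # beam directions
    p = np.array([x, y])
    pd = d @ p
    # distance along each beam to the circular wall |p + t*d| = R_ROOM
    return -pd + np.sqrt(pd**2 + R_ROOM**2 - p @ p)

def observation_weights(particles, scan, sigma=0.2):
    """Weight each particle by the likelihood of the observed scan."""
    w = np.array([
        np.exp(-np.sum((scan - predict_ranges(p))**2) / (2 * sigma**2))
        for p in particles
    ])
    return w / w.sum()

# three particles: one at the true pose, two elsewhere
true_pose = np.array([2.0, 1.0, 0.3])
particles = [true_pose, np.array([-4.0, 3.0, 1.0]), np.array([0.0, 0.0, 0.0])]
scan = predict_ranges(true_pose)  # noiseless observation, for illustration
weights = observation_weights(particles, scan)
print(weights.argmax())  # the particle at the true pose dominates
```

A more accurate scan predictor sharpens exactly this likelihood, which is why a better implicit map improves particle-filter localization.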
1 code implementation • 27 Sep 2022 • Hao Dong, Xieyuanli Chen, Mihai Dusmanu, Viktor Larsson, Marc Pollefeys, Cyrill Stachniss
A distinctive representation of image patches in the form of features is a key component of many computer vision and robotics tasks, such as image matching, image retrieval, and visual localization.
1 code implementation • 16 Sep 2022 • Junyi Ma, Xieyuanli Chen, Jingyi Xu, Guangming Xiong
It uses multi-scale transformers to generate a global descriptor for each sequence of LiDAR range images in an end-to-end fashion.
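To illustrate only the aggregation step described above (the paper uses multi-scale transformers; this is not its architecture), the sketch below pools per-scan features into one sequence-level global descriptor with a single attention head. The feature dimensions and the `query` vector are illustrative assumptions; in a real model the query would be learned.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sequence_descriptor(scan_feats, query):
    """Pool T per-scan features (T, D) into one global descriptor (D,)
    via scaled dot-product attention over the sequence."""
    scores = scan_feats @ query / np.sqrt(scan_feats.shape[1])
    attn = softmax(scores)             # how much each scan contributes
    return attn @ scan_feats           # weighted average over the sequence

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 16))       # 5 range images, 16-D features each
query = rng.normal(size=16)            # learned query in a real model
desc = sequence_descriptor(feats, query)
print(desc.shape)  # (16,)
```

The resulting fixed-size descriptor can then be compared between sequences (e.g. by cosine similarity) for place recognition.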
2 code implementations • 15 Aug 2022 • Yunge Cui, Xieyuanli Chen, Yinlong Zhang, Jiahua Dong, Qingxiao Wu, Feng Zhu
To address this limitation, we present a novel Bag-of-Words approach for real-time loop closing in 3D LiDAR SLAM, called BoW3D.
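The general bag-of-words retrieval idea behind such loop closing can be sketched with an inverted index mapping words to the frames that contain them; candidate loop closures are frames sharing enough words with the query. This is a generic sketch of the BoW concept, not the actual BoW3D data structure or its LinK3D features.

```python
from collections import defaultdict

class BagOfWords:
    """Minimal inverted-index bag-of-words store for loop-closure lookup."""

    def __init__(self):
        self.index = defaultdict(set)   # word -> ids of frames containing it

    def add(self, frame_id, words):
        for w in words:
            self.index[w].add(frame_id)

    def query(self, words, min_shared=2):
        # vote for frames that share words with the query
        votes = defaultdict(int)
        for w in words:
            for f in self.index[w]:
                votes[f] += 1
        return [f for f, c in votes.items() if c >= min_shared]

bow = BagOfWords()
bow.add(0, {"a", "b", "c"})
bow.add(1, {"x", "y"})
candidates = bow.query({"a", "b", "z"})
print(candidates)  # [0]
```

Because lookup touches only frames sharing at least one word with the query, retrieval stays fast enough for online use as the database grows.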
1 code implementation • 15 Aug 2022 • Hao Dong, Xieyuanli Chen, Simo Särkkä, Cyrill Stachniss
We further use the extracted poles as pseudo labels to train a deep neural network for online range image-based pole segmentation.
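The pseudo-label pipeline described above can be sketched in two steps: a geometric extractor labels poles in unlabeled range images, and those labels supervise a learned segmenter. Both the toy extractor (a row-median drop test) and the "model" (a fitted range threshold) below are illustrative stand-ins, not the paper's actual extractor or network.

```python
import numpy as np

def geometric_pole_labels(range_img, margin=3.0):
    """Toy geometric extractor: mark pixels whose range drops far below
    the local row median (a stand-in for the paper's geometric method)."""
    med = np.median(range_img, axis=1, keepdims=True)
    return (range_img < med - margin).astype(int)

rng = np.random.default_rng(1)
range_img = rng.uniform(10.0, 12.0, size=(4, 32))   # background wall
range_img[1, 5] = 2.0                               # thin close structure: a pole
pseudo = geometric_pole_labels(range_img)           # step 1: pseudo labels

# step 2: train any segmentation model on (range_img, pseudo); here a
# trivial per-pixel range threshold fitted to the pseudo labels
thr = range_img[pseudo == 1].max() + 0.5
pred = (range_img < thr).astype(int)
print((pred == pseudo).all())  # the model reproduces the pseudo labels
```

The appeal of the scheme is that no manual annotation is needed: the geometric extractor supplies supervision, and the trained network then runs online on raw range images.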
1 code implementation • 5 Jul 2022 • Jiadai Sun, Yuchao Dai, Xianjing Zhang, Jintao Xu, Rui Ai, Weihao Gu, Xieyuanli Chen
We also use a point refinement module based on 3D sparse convolution to fuse information from both the LiDAR range image and point cloud representations and to reduce artifacts at object borders.
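A minimal sketch of the fusion idea mentioned above: project each 3D point into the range image and gather the image features at its pixel alongside its point features. The spherical projection below is the standard range-image mapping; the feature shapes and simple concatenation are illustrative assumptions (the paper fuses via 3D sparse convolution, which this does not implement).

```python
import numpy as np

def project_to_range_image(points, H=4, W=8, fov_up=np.pi/6, fov_down=-np.pi/6):
    """Spherical projection of 3-D points to (row, col) range-image pixels."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / r)
    u = ((1 - (yaw + np.pi) / (2 * np.pi)) * W).astype(int) % W
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).clip(0, H - 1).astype(int)
    return v, u

rng = np.random.default_rng(2)
points = rng.normal(size=(10, 3)) + np.array([5.0, 0.0, 0.0])
img_feat = rng.normal(size=(4, 8, 6))      # H x W x C range-image features
pt_feat = rng.normal(size=(10, 5))         # per-point features
v, u = project_to_range_image(points)
# fuse: concatenate each point's features with its projected pixel's features
fused = np.concatenate([pt_feat, img_feat[v, u]], axis=1)
print(fused.shape)  # (10, 11)
```

Operating on fused per-point features is what lets a refinement stage correct labels near object borders, where the 2D range view alone is ambiguous.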
2 code implementations • 13 Jun 2022 • Yunge Cui, Yinlong Zhang, Jiahua Dong, Haibo Sun, Xieyuanli Chen, Feng Zhu
Feature extraction and matching are the basic parts of many robotic vision tasks, such as 2D or 3D object detection, recognition, and registration.
1 code implementation • 8 Jun 2022 • Benedikt Mersch, Xieyuanli Chen, Ignacio Vizzo, Lucas Nunes, Jens Behley, Cyrill Stachniss
A key challenge for autonomous vehicles is to navigate in unseen dynamic environments.
no code implementations • 22 Apr 2022 • Si Yang, Lihua Zheng, Xieyuanli Chen, Laura Zabawa, Man Zhang, Minjuan Wang
In the first step, we fine-tune an instance segmentation network pretrained on a source domain (the MS COCO dataset) with a synthetic target domain (an in-vitro soybean pod dataset).
1 code implementation • 28 Sep 2021 • Benedikt Mersch, Xieyuanli Chen, Jens Behley, Cyrill Stachniss
In this paper, we address the problem of predicting future 3D LiDAR point clouds given a sequence of past LiDAR scans.
2 code implementations • 20 Aug 2020 • Shijie Li, Xieyuanli Chen, Yun Liu, Dengxin Dai, Cyrill Stachniss, Juergen Gall
Real-time semantic segmentation of LiDAR data is crucial for autonomously driving vehicles, which are usually equipped with an embedded platform and have limited computational resources.
Ranked #2 on Real-Time 3D Semantic Segmentation on SemanticKITTI
no code implementations • 9 Sep 2019 • Yucai Bai, Qin Zou, Xieyuanli Chen, Lingxi Li, Zhengming Ding, Long Chen
Given that the same activity may be captured by videos in both high resolution (HR) and extremely low resolution (eLR), it is worth studying how to utilize relevant HR data to improve eLR activity recognition.