no code implementations • 22 Nov 2024 • Linrui Gong, Jiuming Liu, Junyi Ma, Lihao Liu, Yaonan Wang, Hesheng Wang
To address this issue, we propose a novel framework named EADReg for efficient and robust registration of LiDAR point clouds based on autoregressive diffusion models.
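For context, a minimal sketch of the rigid registration problem such a framework targets: given putative point correspondences, recover the rotation and translation via the classical closed-form Kabsch/SVD solution. This is a generic baseline for illustration only, not EADReg's autoregressive diffusion pipeline, and all sizes and noise levels below are arbitrary.

```python
import numpy as np

def kabsch_registration(P: np.ndarray, Q: np.ndarray):
    """Estimate (R, t) aligning source points P (N,3) to target points Q (N,3)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # correct for reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy usage: recover a known transform from noisy correspondences.
rng = np.random.default_rng(0)
P = rng.uniform(-10, 10, size=(500, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                            # keep a proper rotation
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true + 0.01 * rng.normal(size=P.shape)
R_est, t_est = kabsch_registration(P, Q)
```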
no code implementations • 21 Nov 2024 • Jingyi Xu, Xieyuanli Chen, Junyi Ma, Jiawei Huang, Jintao Xu, Yue Wang, Ling Pei
Existing 3D OCF approaches struggle to predict plausible spatial details for movable objects and suffer from slow inference speeds because they neglect the bias and uneven distribution of changing occupancy states in both space and time.
1 code implementation • 1 Oct 2024 • Zhangshuo Qi, Junyi Ma, Jingyi Xu, Zijie Zhou, Luqi Cheng, Guangming Xiong
Place recognition is a crucial module that enables autonomous vehicles to obtain usable localization information in GPS-denied environments.
no code implementations • 23 Sep 2024 • Rui Gan, Haotian Shi, Pei Li, Keshu Wu, Bocheng An, Linheng Li, Junyi Ma, Chengyuan Ma, Bin Ran
To address these challenges, this paper proposes a Goal-based Neural Physics Vehicle Trajectory Prediction Model (GNP).
no code implementations • 4 Sep 2024 • Junyi Ma, Xieyuanli Chen, Wentao Bao, Jingyi Xu, Hesheng Wang
Understanding human intentions and actions from egocentric videos is an important step toward embodied artificial intelligence.
1 code implementation • 7 May 2024 • Junyi Ma, Jingyi Xu, Xieyuanli Chen, Hesheng Wang
Understanding how humans would behave during hand-object interaction is vital for applications in service robot manipulation and extended reality.
2 code implementations • 27 Feb 2024 • Jingyi Xu, Junyi Ma, Qi Wu, Zijie Zhou, Yue Wang, Xieyuanli Chen, Ling Pei
Fusion-based place recognition is an emerging technique that jointly utilizes multi-modal perception data to recognize previously visited places in GPS-denied scenarios for robots and autonomous vehicles.
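As a rough illustration of the general fusion-based recipe (not this paper's architecture), the hypothetical sketch below encodes camera and LiDAR inputs into global descriptors, fuses them, and retrieves the closest database place by cosine similarity; all layer sizes and input resolutions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionPlaceNet(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # Toy encoders: camera images (3xHxW) and LiDAR range images (1xHxW).
        self.cam_enc = nn.Sequential(nn.Conv2d(3, 32, 5, stride=4), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(32, dim))
        self.lidar_enc = nn.Sequential(nn.Conv2d(1, 32, 5, stride=4), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(32, dim))
        self.fuse = nn.Linear(2 * dim, dim)       # late fusion of both modalities

    def forward(self, cam, rng_img):
        desc = self.fuse(torch.cat([self.cam_enc(cam), self.lidar_enc(rng_img)], dim=-1))
        return F.normalize(desc, dim=-1)          # unit-norm global place descriptor

# Retrieval: the query matches the database place with the highest cosine similarity.
net = FusionPlaceNet()
db = net(torch.randn(100, 3, 64, 256), torch.randn(100, 1, 64, 256))   # 100 mapped places
query = net(torch.randn(1, 3, 64, 256), torch.randn(1, 1, 64, 256))
best_match = (db @ query.T).squeeze(1).argmax().item()
```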
1 code implementation • 14 Feb 2024 • Xiuzhong Hu, Guangming Xiong, Zheng Zang, Peng Jia, Yuxuan Han, Junyi Ma
Extensive experiments show that PC-NeRF achieves high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes.
1 code implementation • CVPR 2024 • Junyi Ma, Xieyuanli Chen, Jiawei Huang, Jingyi Xu, Zhen Luo, Jintao Xu, Weihao Gu, Rui Ai, Hesheng Wang
Furthermore, a standardized evaluation protocol for multiple preset tasks is provided to compare the performance of all the proposed baselines on present and future occupancy estimation of objects of interest in autonomous driving scenarios.
1 code implementation • 6 Nov 2023 • Zijie Zhou, Jingyi Xu, Guangming Xiong, Junyi Ma
However, most existing multimodal place recognition methods use only limited field-of-view camera images, which leads to an imbalance between features from different modalities and limits the effectiveness of sensor fusion.
1 code implementation • 2 Oct 2023 • Xiuzhong Hu, Guangming Xiong, Zheng Zang, Peng Jia, Yuxuan Han, Junyi Ma
Reconstructing large-scale 3D scenes is essential for autonomous vehicles, especially when partial sensor data is lost.
1 code implementation • 16 Apr 2023 • Zhen Luo, Junyi Ma, Zijie Zhou, Guangming Xiong
In this letter, we propose a novel, efficient Transformer-based network that predicts future LiDAR point clouds from past point cloud sequences.
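As a hedged sketch of the general idea of Transformer-based point cloud prediction (not the proposed network itself), the toy model below treats LiDAR scans as range images, embeds each past frame as a token, and decodes learned query tokens into future frames; every dimension here is an assumption.

```python
import torch
import torch.nn as nn

class FutureRangeImagePredictor(nn.Module):
    def __init__(self, h: int = 64, w: int = 256, d_model: int = 256,
                 past: int = 5, future: int = 5):
        super().__init__()
        self.past, self.future, self.h, self.w = past, future, h, w
        self.embed = nn.Linear(h * w, d_model)                    # one token per past frame
        self.query = nn.Parameter(torch.zeros(future, d_model))   # learned future-frame queries
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.decode = nn.Linear(d_model, h * w)                   # token -> future range image

    def forward(self, past_frames):                               # (B, past, H, W)
        B = past_frames.shape[0]
        tokens = self.embed(past_frames.flatten(2))               # (B, past, d_model)
        seq = torch.cat([tokens, self.query.expand(B, -1, -1)], dim=1)
        out = self.encoder(seq)[:, self.past:]                    # keep the future tokens
        return self.decode(out).view(B, self.future, self.h, self.w)

# Usage: predict 5 future range images from 5 past ones.
model = FutureRangeImagePredictor()
pred = model(torch.randn(2, 5, 64, 256))                          # -> (2, 5, 64, 256)
```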
1 code implementation • 3 Feb 2023 • Junyi Ma, Guangming Xiong, Jingyi Xu, Xieyuanli Chen
LiDAR-based place recognition (LPR) is one of the most crucial components enabling autonomous vehicles to identify previously visited places in GPS-denied environments.
1 code implementation • 16 Sep 2022 • Junyi Ma, Xieyuanli Chen, Jingyi Xu, Guangming Xiong
It uses multi-scale transformers to generate a global descriptor for each sequence of LiDAR range images in an end-to-end fashion.
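A minimal sketch of this sequence-to-descriptor idea, under assumed input sizes and without the paper's multi-scale design: each range image in a sequence is encoded to a token, a Transformer aggregates temporal context, and pooling yields one global descriptor per sequence that can be matched against a database.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceDescriptor(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # Toy per-frame encoder for single-channel range images.
        self.frame_enc = nn.Sequential(nn.Conv2d(1, 32, 5, stride=4), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(32, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, seq):                           # (B, T, H, W) range images
        B, T, H, W = seq.shape
        tokens = self.frame_enc(seq.reshape(B * T, 1, H, W)).view(B, T, -1)
        tokens = self.temporal(tokens)                # temporal context across the sequence
        return F.normalize(tokens.mean(dim=1), dim=-1)   # pooled global descriptor

model = SequenceDescriptor()
desc = model(torch.randn(4, 10, 64, 256))             # 4 sequences of 10 scans -> (4, 256)
```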
1 code implementation • 22 Feb 2022 • Jingyi Xu, Zirui Li, Li Gao, Junyi Ma, Qi Liu, Yanan Zhao
In this work, different DRL exploration methods, including adding action-space noise and parameter-space noise, are compared against each other during the transfer learning process.
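The toy snippet below illustrates the difference between the two exploration schemes being compared: action-space noise perturbs the policy's output, while parameter-space noise perturbs the policy's weights before acting. The policy network and noise scales are placeholders.

```python
import copy
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2), nn.Tanh())
state = torch.randn(1, 8)

# 1) Action-space noise: add Gaussian noise to the deterministic action.
action_noise_std = 0.1
noisy_action = policy(state) + action_noise_std * torch.randn(1, 2)

# 2) Parameter-space noise: perturb a copy of the weights, then act greedily.
param_noise_std = 0.05
perturbed = copy.deepcopy(policy)
with torch.no_grad():
    for p in perturbed.parameters():
        p.add_(param_noise_std * torch.randn_like(p))
perturbed_action = perturbed(state)
```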