1 code implementation • 23 Feb 2024 • Kechun Xu, Zhongxiang Zhou, Jun Wu, Haojian Lu, Rong Xiong, Yue Wang
For the inner loop, we learn an active seeing policy for self-confident object matching to improve the perception of place.
1 code implementation • 15 Dec 2023 • Longzhong Lin, Xuewu Lin, Tianwei Lin, Lichao Huang, Rong Xiong, Yue Wang
Motion prediction is a crucial task in autonomous driving, and one of its major challenges lies in the multimodality of future behaviors.
no code implementations • 4 Dec 2023 • Haodong Zhang, ZhiKe Chen, Haocheng Xu, Lei Hao, Xiaofei Wu, Songcen Xu, Zhensong Zhang, Yue Wang, Rong Xiong
Capturing and preserving motion semantics is essential to motion retargeting between animation characters.
no code implementations • 17 Oct 2023 • Jun Wu, Sicheng Li, Sihui Ji, Yue Wang, Rong Xiong, Yiyi Liao
Decomposing a target object from a complex background while reconstructing it is challenging.
no code implementations • 16 Aug 2023 • Yuhao Yang, Jun Wu, Yue Wang, Guangjian Zhang, Rong Xiong
Traditional geometric-registration-based estimation methods exploit the CAD model only implicitly, which makes them dependent on observation quality and vulnerable to occlusion.
1 code implementation • 23 May 2023 • Xuecheng Xu, Yanmei Jiao, Sha Lu, Xiaqing Ding, Rong Xiong, Yue Wang
In addition, the image and point cloud cues can easily be expressed in the same coordinate frame, which benefits sensor fusion for place recognition.
no code implementations • 6 Apr 2023 • Zhixuan Xu, Kechun Xu, Yue Wang, Rong Xiong
We focus on the task of language-conditioned object placement, in which a robot should generate placements that satisfy all the spatial relational constraints in language instructions.
no code implementations • ICCV 2023 • Yuanbo Yang, Yifei Yang, Hanlei Guo, Rong Xiong, Yue Wang, Yiyi Liao
Generating photorealistic images with controllable camera pose and scene contents is essential for many applications including AR/VR and simulation.
no code implementations • 17 Mar 2023 • Bingqi Shen, Shuwei Dai, Yuyin Chen, Rong Xiong, Yue Wang, Yanmei Jiao
In this paper, we propose GOOD, a general optimization-based fusion framework that achieves satisfactory detection without training additional models and works with any combination of 2D and 3D detectors, improving the accuracy and robustness of 3D detection.
no code implementations • 21 Nov 2022 • Zhongxiang Zhou, Yifei Yang, Yue Wang, Rong Xiong
To disambiguate unknown objects from the background in the first subtask, we propose a classification-free region proposal network (CF-RPN), which estimates the objectness score of each region purely from cues of the object's location and shape, preventing overfitting to the training categories.
no code implementations • 20 Oct 2022 • Sha Lu, Xuecheng Xu, Li Tang, Rong Xiong, Yue Wang
In recent years, deep learning has improved place recognition through learnable feature extraction.
1 code implementation • 12 Oct 2022 • Xuecheng Xu, Sha Lu, Jun Wu, Haojian Lu, Qiuguo Zhu, Yiyi Liao, Rong Xiong, Yue Wang
In addition, we derive sufficient conditions of feature extractors for the representation preserving the roto-translation invariance, making RING++ a framework applicable to generic multi-channel features.
no code implementations • 1 Jul 2022 • Jun Wu, Lilu Liu, Yue Wang, Rong Xiong
We find that the Mid-Fusion approach best recovers the precise 3D keypoints useful for object pose estimation.
no code implementations • 12 Jun 2022 • Zexi Chen, Yiyi Liao, Haozhe Du, Haodong Zhang, Xuecheng Xu, Haojian Lu, Rong Xiong, Yue Wang
Next, the rotation, scale, and translation are independently and efficiently estimated in the spectrum step-by-step using the DPC solver.
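The translation step of such spectral registration can be illustrated with classic phase correlation. The sketch below is a generic NumPy illustration of estimating a shift from the normalized cross-power spectrum, not the authors' DPC solver; the function name and sign conventions are my own.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer shift (dy, dx) such that np.roll(a, (dy, dx),
    axis=(0, 1)) best matches b, via the normalized cross-power spectrum."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12           # keep only the phase
    corr = np.fft.ifft2(cross).real          # correlation surface; peak marks the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts past half the image size back to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation(img, shifted))  # (5, -3)
```

Because the cross-power spectrum is normalized to unit magnitude, the inverse FFT is (up to noise) a delta function at the true shift, which makes the peak sharp even when the images differ in brightness.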
1 code implementation • 9 May 2022 • Liang Xie, Hongxiang Yu, Kechun Xu, Tong Yang, Minhang Wang, Haojian Lu, Rong Xiong, Yue Wang
This paper proposes a learning-based visual peg-in-hole method that trains on several shapes in simulation and adapts to arbitrary unseen shapes in the real world at minimal sim-to-real cost.
1 code implementation • 25 Mar 2022 • Jiaxin Guo, Fangxun Zhong, Rong Xiong, Yunhui Liu, Yue Wang, Yiyi Liao
In this paper, we take a deeper look at the inference of analysis-by-synthesis from the perspective of visual navigation, and investigate what is a good navigation policy for this specific task.
no code implementations • 7 Mar 2022 • Xianze Fang, Yunkai Wang, Zexi Chen, Yue Wang, Rong Xiong
The depth completion task aims to recover a per-pixel dense depth map from a sparse depth map.
no code implementations • 2 Mar 2022 • Xiaqing Ding, Xuecheng Xu, Sha Lu, Yanmei Jiao, Mengwen Tan, Rong Xiong, Huanjun Deng, Mingyang Li, Yue Wang
Global point cloud registration is an essential module for localization, whose main difficulty lies in estimating the rotation globally without an initial value.
no code implementations • 25 Sep 2021 • Jun Wu, Lilu Liu, Yue Wang, Rong Xiong
Current monocular-based 6D object pose estimation methods generally achieve less competitive results than RGBD-based methods, mostly due to the lack of 3D information.
no code implementations • 25 Sep 2021 • Zexi Chen, Haozhe Du, Xuecheng Xu, Rong Xiong, Yiyi Liao, Yue Wang
Specifically, we first adopt Unscented Kalman Filter as a differentiable layer to predict the pitch and roll, where the covariance matrices of noise are learned to filter out the noise of the IMU raw data.
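As a simplified, hypothetical illustration of a filter used as a differentiable layer, consider a scalar linear Kalman measurement update (not the paper's Unscented Kalman Filter): every step is a smooth function of the noise covariance R, so R could be exposed as a learnable parameter and trained by backpropagation.

```python
def kf_update(x, P, z, R):
    """One scalar Kalman measurement update. All operations are
    differentiable in R, so R can be learned when the filter is
    embedded as a network layer."""
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected state (e.g. pitch or roll)
    P = (1.0 - K) * P        # corrected covariance
    return x, P

x, P = 0.0, 1.0              # prior state and covariance
x, P = kf_update(x, P, z=0.8, R=1.0)
print(x, P)  # 0.4 0.5
```

In the paper's setting the same idea applies with the unscented transform in place of the linear update; the learned covariances then act as a trained filter on the raw IMU noise.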
1 code implementation • 22 Sep 2021 • Yunkai Wang, Dongkun Zhang, Yuxiang Cui, Zexi Chen, Wei Jing, Junbo Chen, Rong Xiong, Yue Wang
In this paper, we propose a domain generalization method for vision-based driving trajectory generation for autonomous vehicles in urban environments, which can be seen as extending the Invariant Risk Minimization (IRM) method to complex problems.
no code implementations • 18 Jun 2021 • Huan Yin, Yue Wang, Rong Xiong
We present a heterogeneous localization framework for solving radar global localization and pose tracking on pre-built lidar maps.
1 code implementation • 9 Mar 2021 • Kechun Xu, Hongxiang Yu, Qianen Lai, Yue Wang, Rong Xiong
In this paper, a goal-conditioned hierarchical reinforcement learning formulation with high sample efficiency is proposed to learn a push-grasping policy for grasping a specific object in clutter.
Hierarchical Reinforcement Learning Robotics
1 code implementation • 7 Mar 2021 • Zexi Chen, Zheyuan Huang, Yunkai Wang, Xuecheng Xu, Yue Wang, Rong Xiong
In this paper, we propose the network SSDS, which learns to distinguish small defects between two images regardless of context, so that the network can be trained once and for all.
no code implementations • 1 Mar 2021 • Yunshuang Li, Zheyuan Huang, Zexi Chen, Yue Wang, Rong Xiong
Exploiting the aerial robot's advantage of viewing the ground robot's route from widely varying viewpoints, the collaboration provides global road-segmentation information to the ground robot, enabling it to identify feasible regions and adjust its pose ahead of time.
1 code implementation • 30 Jan 2021 • Huan Yin, Xuecheng Xu, Yue Wang, Rong Xiong
Place recognition is critical for both offline mapping and online localization.
no code implementations • 22 Dec 2020 • Weitong Hua, Jiaxin Guo, Yue Wang, Rong Xiong
In this paper, we propose a framework for 6D pose estimation from RGB-D data based on spatial structure characteristics of 3D keypoints.
1 code implementation • 14 Dec 2020 • Yiyuan Pan, Xuecheng Xu, Xiaqing Ding, Shoudong Huang, Yue Wang, Rong Xiong
As a result, this deformable global dense map representation is able to maintain global consistency online.
no code implementations • 22 Nov 2020 • Yiyuan Pan, Xuecheng Xu, Weijie Li, Yunxiang Cui, Yue Wang, Rong Xiong
In this way, we fuse the structural and visual features in a consistent bird's-eye-view frame, yielding a semantic representation, namely CORAL.
1 code implementation • 31 Oct 2020 • Zexi Chen, Jiaxin Guo, Xuecheng Xu, Yunkai Wang, Yue Wang, Rong Xiong
Applying a trained model under different conditions without data annotation is attractive for robot applications.
no code implementations • 24 Oct 2020 • Xiaqing Ding, Yue Wang, Li Tang, Yanmei Jiao, Rong Xiong
Through experiments on real-world RGB-D datasets, we validate the effectiveness of our design in improving both generalization and robustness to viewpoint change, and show the potential of regression-based visual localization networks in challenging cases that are difficult for geometry-based visual localization methods.
1 code implementation • 24 Oct 2020 • Weitong Hua, Zhongxiang Zhou, Jun Wu, Huang Huang, Yue Wang, Rong Xiong
Object 6D pose estimation is a fundamental task in many applications.
1 code implementation • 21 Oct 2020 • Xuecheng Xu, Huan Yin, Zexi Chen, Yue Wang, Rong Xiong
In this paper, we propose a LiDAR-based place recognition method, named Differentiable Scan Context with Orientation (DiSCO), which simultaneously finds the scan at a similar place and estimates their relative orientation.
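The orientation-estimation idea can be sketched with a 1-D circular cross-correlation computed via the FFT; the snippet below is a generic illustration of frequency-domain correlation over orientation bins, not DiSCO's actual descriptor or network.

```python
import numpy as np

def yaw_offset(desc_a, desc_b):
    """Return the circular shift (in bins) that best aligns desc_a with
    desc_b, found as the peak of their FFT-based cross-correlation."""
    corr = np.fft.ifft(np.conj(np.fft.fft(desc_a)) * np.fft.fft(desc_b)).real
    return int(np.argmax(corr))

rng = np.random.default_rng(0)
desc = rng.random(360)                       # e.g. one orientation bin per degree
print(yaw_offset(desc, np.roll(desc, 45)))   # 45
```

Computing the correlation in the frequency domain makes the search over all orientations a single FFT/IFFT pair, which is what lets such methods score a candidate place and its relative orientation simultaneously.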
2 code implementations • 20 Oct 2020 • Yunkai Wang, Dongkun Zhang, Jingke Wang, Zexi Chen, Yue Wang, Rong Xiong
One challenge in closing the gap between machine and human-level driving is endowing the system with the learning capacity to handle the coupled complexity of environments, intentions, and dynamics.
Robotics
1 code implementation • 15 Sep 2020 • Huan Yin, Runjian Chen, Yue Wang, Rong Xiong
In this paper, we propose an end-to-end deep learning framework for Radar Localization on Lidar Map (RaLL) to bridge the gap: it not only achieves robust radar localization but also exploits mature lidar mapping techniques, thus reducing the cost of radar mapping.
2 code implementations • 21 Aug 2020 • Zexi Chen, Xuecheng Xu, Yue Wang, Rong Xiong
The crucial step for localization is to match the current observation to the map.
1 code implementation • 8 May 2020 • Jingke Wang, Yue Wang, Dongkun Zhang, Yezhou Yang, Rong Xiong
To improve tactical decision-making in learning-based driving solutions, we introduce hierarchical behavior and motion planning (HBMP) to explicitly model behavior in the learning-based solution.
1 code implementation • 27 Nov 2019 • Huifang Ma, Yue Wang, Rong Xiong, Sarath Kodagoda, Qianhui Luo
Understanding road attributes has been extensively researched to support vehicle actions in autonomous driving, but current work focuses mainly on urban road networks and relies heavily on traffic signs.
Robotics
no code implementations • 20 Jun 2019 • Jingwei Song, Fang Bai, Liang Zhao, Shoudong Huang, Rong Xiong
In this paper, we propose an approach to decouple nodes of deformation graph in large scale dense deformable SLAM and keep the estimation time to be constant.
1 code implementation • 6 Jun 2019 • Huifang Ma, Yue Wang, Li Tang, Sarath Kodagoda, Rong Xiong
Autonomous navigation based on precise localization has been widely developed in both academic research and practical applications.
Robotics
1 code implementation • 22 May 2019 • Zheyuan Huang, Lingyun Chen, Jiacheng Li, Yunkai Wang, Zexi Chen, Licheng Wen, Jianyang Gu, Peng Hu, Rong Xiong
Team ZJUNLict won the championship of the Small Size League at RoboCup 2018; this paper thoroughly describes the effort that ZJUNLict contributed.
Robotics 68T40
1 code implementation • 6 Dec 2017 • Huan Yin, Li Tang, Xiaqing Ding, Yue Wang, Rong Xiong
Global localization in 3D point clouds is a challenging problem of estimating the pose of vehicles without any prior knowledge.
no code implementations • 1 Nov 2017 • Qianhui Luo, Huifang Ma, Yue Wang, Li Tang, Rong Xiong
This paper aims at developing a faster and more accurate solution to the amodal 3D object detection problem for indoor scenes.