L2V2T2Calib: Automatic and Unified Extrinsic Calibration Toolbox for Different 3D LiDAR, Visual Camera and Thermal Camera

LiDAR-camera and LiDAR-LiDAR extrinsic calibration has been researched extensively because it is the foundation of sensor fusion, and many open-source projects have significantly advanced the field. However, few solutions unify calibration across repetitive-scanning and non-repetitive-scanning 3D LiDARs, sparse and dense 3D LiDARs, and visual and thermal cameras. To cover these combinations today, one typically needs different targets and different feature-extraction pipelines for each sensor pair, and human intervention is sometimes required to locate the target, which is inconvenient and time-consuming. In this paper, L2V2T2Calib is introduced and open-sourced as an attempt to unify the calibration. (1) A board with four circular holes is adopted for all sensors; the four circle centers can be detected by every sensor and are therefore ideal common features. Previous works also use this target, but their algorithms do not account for non-repetitive-scanning LiDARs and cannot be applied directly. (2) A key step toward a unified pipeline is automatic and robust detection of the target across different types of LiDARs, which has received little attention so far. We propose a template-matching-based method that is simple yet effective and generalizes to different depth sensors. (3) We provide two types of output for different users: one minimizing the 2D re-projection error (Min2D) and one minimizing the 3D matching error (Min3D), and we compare their performance. Extensive experiments in both simulation and real environments demonstrate that L2V2T2Calib is accurate, robust, and, most importantly, unified. The code is open-sourced to promote related research at: https://github.com/Clothooo/lvt2calib
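The abstract does not state the exact objective functions behind Min2D and Min3D, but they can be read as the standard formulations below. This is a sketch under assumed notation (the symbol names are ours, not the paper's): P_i are the circle centers detected in the LiDAR frame, p_i their pixel counterparts in the image, Q_i the corresponding 3D centers in the second sensor's frame, K the camera intrinsics, pi(.) the perspective projection, and (R, t) the sought extrinsics.

  Min2D:  $\min_{R,\,t} \sum_{i=1}^{N} \left\| p_i - \pi\!\big(K\,(R\,P_i + t)\big) \right\|^2$

  Min3D:  $\min_{R,\,t} \sum_{i=1}^{N} \left\| Q_i - (R\,P_i + t) \right\|^2$

Under this reading, Min2D would suit users who mainly project LiDAR points onto the image plane, while Min3D would suit users who fuse data directly in 3D; the paper's comparison of the two outputs presumably targets exactly this choice.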
