M$^2$-3DLaneNet: Exploring Multi-Modal 3D Lane Detection

13 Sep 2022 · Yueru Luo, Xu Yan, Chaoda Zheng, Chao Zheng, Shuqi Mei, Tang Kun, Shuguang Cui, Zhen Li

Estimating accurate lane lines in 3D space remains challenging due to their sparse and slim nature. Previous works mainly focused on using images for 3D lane detection, leading to inherent projection error and loss of geometry information. To address these issues, we explore the potential of leveraging LiDAR for 3D lane detection, either as a standalone method or in combination with existing monocular approaches. In this paper, we propose M$^2$-3DLaneNet to integrate complementary information from multiple sensors. Specifically, M$^2$-3DLaneNet lifts 2D features into 3D space by incorporating geometry information from LiDAR data through depth completion. Subsequently, the lifted 2D features are further enhanced with LiDAR features through cross-modality BEV fusion. Extensive experiments on the large-scale OpenLane dataset demonstrate the effectiveness of M$^2$-3DLaneNet, regardless of the range (75m or 100m).
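The two-stage pipeline described above (depth-guided lifting of image features into 3D, then cross-modality BEV fusion with LiDAR features) can be sketched as a toy NumPy example. This is a minimal illustration, not the paper's implementation: the function names, the pinhole camera parameters, and the channel-mixing matrix `w` standing in for a learned 1x1 convolution are all assumptions, and the real method uses learned depth completion and full camera extrinsics.

```python
import numpy as np

def lift_to_bev(img_feats, depth, fx, cx, bev_shape=(50, 50), cell=1.0):
    """Scatter per-pixel image features into a BEV grid using a (completed)
    depth map. Hypothetical toy version of the lifting step."""
    C, H, W = img_feats.shape
    bev = np.zeros((C,) + bev_shape, dtype=img_feats.dtype)
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    z = depth                       # depth along the camera forward axis
    x = (us - cx) * z / fx          # lateral offset via the pinhole model
    # Map (x, z) to BEV cells; drop points falling outside the grid.
    col = np.round(x / cell + bev_shape[1] / 2).astype(int)
    row = np.round(z / cell).astype(int)
    valid = (row >= 0) & (row < bev_shape[0]) & (col >= 0) & (col < bev_shape[1])
    bev[:, row[valid], col[valid]] = img_feats[:, vs[valid], us[valid]]
    return bev

def fuse_bev(cam_bev, lidar_bev, w):
    """Cross-modality BEV fusion sketch: concatenate the two BEV feature maps
    along channels and mix them with a matrix w, standing in for a learned
    1x1 convolution."""
    stacked = np.concatenate([cam_bev, lidar_bev], axis=0)  # (2C, H, W)
    return np.einsum('oc,chw->ohw', w, stacked)             # (C, H, W)

# Toy usage: 4-channel image features, constant 10 m completed depth.
img_feats = np.ones((4, 8, 16))
depth = np.full((8, 16), 10.0)
cam_bev = lift_to_bev(img_feats, depth, fx=100.0, cx=8.0)
lidar_bev = np.zeros_like(cam_bev)
fused = fuse_bev(cam_bev, lidar_bev, np.ones((4, 8)) / 8.0)
```

The key design point mirrored here is that depth completion supplies the per-pixel geometry that a monocular projection lacks, so image features land at metrically correct BEV cells before fusion.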


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| 3D Lane Detection | OpenLane | M^2-3DLaneNet (Camera + LiDAR) | F1 (all) | 55.5 | #5 |
| | | | Up & Down | 53.4 | #2 |
| | | | Curve | 60.7 | #3 |
| | | | Extreme Weather | 56.2 | #2 |
| | | | Night | 51.6 | #3 |
| | | | Intersection | 43.8 | #6 |
| | | | Merge & Split | 51.4 | #5 |
| | | | FPS (PyTorch) | - | #2 |
