Rethinking Efficient Lane Detection via Curve Modeling

This paper presents a novel parametric curve-based method for lane detection in RGB images. Unlike state-of-the-art segmentation-based and point detection-based methods, which typically require heuristics either to decode predictions or to formulate a large number of anchors, curve-based methods can learn holistic lane representations naturally. To handle the optimization difficulties of existing polynomial curve methods, we propose to exploit the parametric Bézier curve, owing to its ease of computation, stability, and high degrees of freedom under transformations. In addition, we propose a deformable-convolution-based feature flip fusion that exploits the symmetry of lanes in driving scenes. The proposed method achieves a new state-of-the-art performance on the popular LLAMAS benchmark. It also achieves favorable accuracy on the TuSimple and CULane datasets, while retaining both low latency (>150 FPS) and a small model size (<10M parameters). Our method can serve as a new baseline, shedding light on parametric curve modeling for lane detection. Code for our model and for PytorchAutoDrive, a unified framework for self-driving perception, is available at: https://github.com/voldemortX/pytorch-auto-drive .
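As a minimal sketch of the curve representation the abstract describes: a parametric Bézier curve of degree n is a weighted sum of n+1 control points under the Bernstein basis, so a lane can be predicted as a small set of control points and then densely sampled. The function below evaluates such a curve with NumPy; the specific control-point values (and the choice of a cubic, four-control-point curve) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from math import comb

def bezier_points(control_points, num_samples=50):
    """Sample points along a Bézier curve via the Bernstein basis.

    control_points: (n+1, 2) array of (x, y) control points.
    Returns a (num_samples, 2) array of points on the curve.
    """
    cp = np.asarray(control_points, dtype=float)
    n = len(cp) - 1
    t = np.linspace(0.0, 1.0, num_samples)  # curve parameter in [0, 1]
    # Bernstein basis: B_{i,n}(t) = C(n, i) * t^i * (1 - t)^(n - i)
    basis = np.stack(
        [comb(n, i) * t**i * (1 - t) ** (n - i) for i in range(n + 1)],
        axis=1,
    )
    return basis @ cp  # (num_samples, 2)

# A cubic (four-control-point) lane sketch; control points are made up
# for illustration, in normalized image coordinates.
lane = bezier_points([[0.2, 1.0], [0.3, 0.7], [0.35, 0.4], [0.5, 0.0]])
```

Because the Bernstein basis is smooth and the curve always passes through the first and last control points, regressing control points avoids the per-point decoding heuristics that segmentation- and anchor-based methods rely on.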

CVPR 2022

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Lane Detection | CULane | BézierLaneNet (ResNet-18) | F1 score | 73.67 | #45 |
| Lane Detection | CULane | BézierLaneNet (ResNet-34) | F1 score | 75.57 | #36 |
| Lane Detection | LLAMAS | BézierLaneNet (ResNet-34) | F1 | 0.9611 | #2 |
| Lane Detection | LLAMAS | BézierLaneNet (ResNet-18) | F1 | 0.9552 | #5 |
| Lane Detection | TuSimple | BézierLaneNet (ResNet-18) | Accuracy | 95.41% | #31 |
| Lane Detection | TuSimple | BézierLaneNet (ResNet-34) | Accuracy | 95.65% | #25 |