Lane Detection
84 papers with code • 11 benchmarks • 15 datasets
Lane Detection is a computer vision task that involves identifying the boundaries of driving lanes in a video or image of a road scene. The goal is to accurately locate and track the lane markings in real-time, even in challenging conditions such as poor lighting, glare, or complex road layouts.
Lane detection is an important component of advanced driver assistance systems (ADAS) and autonomous vehicles, as it provides information about the road layout and the position of the vehicle within the lane, which is crucial for navigation and safety. The algorithms typically use a combination of computer vision techniques, such as edge detection, color filtering, and Hough transforms, to identify and track the lane markings in a road scene.
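As a sketch of the classical pipeline mentioned above, the following minimal example votes edge pixels into a Hough accumulator and recovers the dominant line from a synthetic edge map. It is pure NumPy for clarity (a production system would typically use OpenCV's Canny and Hough routines); the function and variable names are illustrative, not from any specific library.

```python
import numpy as np

def hough_lines(edges, n_thetas=180):
    """Vote each edge pixel into a (rho, theta) accumulator.

    A line is parameterized as rho = x*cos(theta) + y*sin(theta);
    collinear edge pixels all vote for the same (rho, theta) bin.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))           # max possible |rho|
    thetas = np.deg2rad(np.arange(n_thetas))      # angles 0..179 degrees
    rhos = np.arange(-diag, diag + 1)             # signed distances from origin
    acc = np.zeros((len(rhos), n_thetas), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho for this pixel at every candidate angle, shifted to a valid index
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[r, np.arange(n_thetas)] += 1
    return acc, rhos, thetas

# Synthetic "lane marking": a diagonal edge y = x on a 100x100 image.
edges = np.zeros((100, 100), dtype=np.uint8)
for i in range(100):
    edges[i, i] = 1

acc, rhos, thetas = hough_lines(edges)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(rhos[r_idx], np.rad2deg(thetas[t_idx]))  # dominant line parameters
```

For the diagonal y = x, every pixel votes for the same bin at theta = 135 degrees and rho = 0, so the accumulator peak recovers the line exactly; real road images are first edge-filtered and color-thresholded to suppress non-lane votes.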
(Image credit: End-to-end Lane Detection)
Latest papers with no code
TwinLiteNetPlus: A Stronger Model for Real-time Drivable Area and Lane Segmentation
Semantic segmentation is crucial for autonomous driving, particularly for Drivable Area and Lane Segmentation, ensuring safety and navigation.
LDTR: Transformer-based Lane Detection with Anchor-chain Representation
Despite recent advances in lane detection methods, scenarios with limited or no visual cues of lanes, due to factors such as lighting conditions and occlusion, remain challenging and crucial for automated driving.
SparseFusion: Efficient Sparse Multi-Modal Fusion Framework for Long-Range 3D Perception
The versatility of SparseFusion is also validated in the temporal object detection task and 3D lane detection task.
A Survey of Vision Transformers in Autonomous Driving: Current Trends and Future Directions
This survey explores the adaptation of visual transformer models in Autonomous Driving, a transition inspired by their success in Natural Language Processing.
LanePtrNet: Revisiting Lane Detection as Point Voting and Grouping on Curves
Existing methods are adapted from object detection and segmentation tasks, but these approaches require manual adjustments for curved objects, involve exhaustive searches on predefined anchors, require complex post-processing steps, and may lack flexibility when applied to real-world scenarios. In this paper, we propose a novel approach, LanePtrNet, which treats lane detection as a process of point voting and grouping on ordered sets: our method takes backbone features as input and predicts a curve-aware centerness, which represents each lane as a point and assigns the most probable center point to it.
CurveFormer++: 3D Lane Detection by Curve Propagation with Temporal Curve Queries and Attention
A curve cross-attention module is introduced in the Transformer decoder to calculate similarities between image features and curve queries of lanes.
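The similarity computation the teaser describes is an instance of standard scaled dot-product cross-attention between query and feature sets. The sketch below is a generic NumPy illustration of that pattern, not CurveFormer++'s actual module; the shapes, names, and random inputs are assumptions for demonstration.

```python
import numpy as np

def cross_attention(queries, features, d_k):
    """Generic scaled dot-product cross-attention.

    queries:  (n_queries, d)  e.g. learned curve queries, one per lane candidate
    features: (n_locs, d)     e.g. flattened image feature map
    Returns one attended feature vector per query.
    """
    # Similarity of each query to each image location
    scores = queries @ features.T / np.sqrt(d_k)
    # Numerically stable softmax over image locations
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Weighted sum of image features per query
    return weights @ features

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 32))    # 4 hypothetical curve queries
f = rng.standard_normal((64, 32))   # 8x8 feature map, flattened
out = cross_attention(q, f, d_k=32)
print(out.shape)                    # one 32-dim attended vector per query
```

In a real Transformer decoder the queries and features would first pass through learned linear projections, and the attention would be multi-headed; this sketch keeps only the similarity-then-aggregate core.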
Improved Generalizability of CNN Based Lane Detection in Challenging Weather Using Adaptive Preprocessing Parameter Tuning
Ensuring the robustness of lane detection systems is essential for the reliability of autonomous vehicles, particularly in the face of diverse weather conditions.
PLCNet: Patch-wise Lane Correction Network for Automatic Lane Correction in High-definition Maps
Vision lane detection with LiDAR position assignment is a prevalent method to acquire initial lanes for HD maps.
3D Lane Detection from Front or Surround-View using Joint-Modeling & Matching
Therefore, accurate lane modeling is essential to align prediction results closely with the environment.
RainSD: Rain Style Diversification Module for Image Synthesis Enhancement using Feature-Level Style Distribution
Finally, we discuss the limitations and future directions of deep-neural-network-based perception algorithms and of autonomous driving dataset generation based on image-to-image translation.