Lane Detection
93 papers with code • 13 benchmarks • 17 datasets
Lane Detection is a computer vision task that involves identifying the boundaries of driving lanes in a video or image of a road scene. The goal is to accurately locate and track the lane markings in real-time, even in challenging conditions such as poor lighting, glare, or complex road layouts.
Lane detection is an important component of advanced driver assistance systems (ADAS) and autonomous vehicles, as it provides information about the road layout and the position of the vehicle within the lane, which is crucial for navigation and safety. The algorithms typically use a combination of computer vision techniques, such as edge detection, color filtering, and Hough transforms, to identify and track the lane markings in a road scene.
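The Hough-transform step of the classical pipeline above can be sketched in pure Python. This is a toy sketch, not a production implementation: it assumes edge pixels have already been extracted (e.g. by a Canny edge detector), and real pipelines would use an optimized routine such as OpenCV's `cv2.HoughLinesP`.

```python
import math
from collections import defaultdict

def strongest_hough_line(edge_points, theta_steps=180):
    """Vote in (rho, theta) space and return the best-supported line.

    Each edge pixel (x, y) votes for every line
    x*cos(theta) + y*sin(theta) = rho that passes through it; the
    most-voted accumulator cell corresponds to the dominant line.
    """
    acc = defaultdict(int)
    for x, y in edge_points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] += 1
    (rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, math.pi * t / theta_steps, votes

# edge pixels along the diagonal y = x, i.e. a 45-degree lane marking
points = [(i, i) for i in range(50)]
rho, theta, votes = strongest_hough_line(points)
# all 50 pixels agree on the cell rho = 0, theta = 3*pi/4
```

Because every collinear pixel votes for the same `(rho, theta)` cell, the transform is robust to gaps and partial occlusion of the marking, which is why it was a staple of pre-deep-learning lane detectors.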
(Image credit: End-to-end Lane Detection)
Libraries
Use these libraries to find Lane Detection models and implementations.
Most implemented papers
End to End Learning for Self-Driving Cars
The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal.
Towards End-to-End Lane Detection: an Instance Segmentation Approach
By doing so, we ensure a lane fitting which is robust against road plane changes, unlike existing approaches that rely on a fixed, pre-defined transformation.
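The lane-fitting step this approach makes robust can be illustrated with a minimal least-squares sketch. It assumes the image has already been warped to a bird's-eye view and the pixels of one lane instance collected; a straight-line model `x = m*y + b` is used here for brevity, whereas real systems typically fit a second- or third-order polynomial per lane.

```python
def fit_lane(points):
    """Least-squares line x = m*y + b through bird's-eye-view lane pixels.

    points: list of (x, y) pixel coordinates belonging to one lane.
    Fitting x as a function of y suits near-vertical lanes, which would
    make a y = f(x) fit ill-conditioned.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * syy - sy * sy)  # slope
    b = (sx - m * sy) / n                          # intercept
    return m, b

# synthetic lane pixels on the line x = 0.5*y + 10
lane_pixels = [(0.5 * y + 10, y) for y in range(20)]
m, b = fit_lane(lane_pixels)
```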
Key Points Estimation and Point Instance Segmentation Approach for Lane Detection
In traffic line detection, an essential perception module, many conditions must be considered, such as the number of traffic lines and the computing power of the target system.

Ultra Fast Structure-aware Deep Lane Detection
Modern methods mainly treat lane detection as a pixel-wise segmentation problem, which struggles both with challenging scenarios and with speed.
Spatial As Deep: Spatial CNN for Traffic Scene Understanding
Although CNN has shown strong capability to extract semantics from raw pixels, its capacity to capture spatial relationships of pixels across rows and columns of an image is not fully explored.
Semantic Instance Segmentation with a Discriminative Loss Function
In this work we propose to tackle the problem with a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step.
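The pull/push structure of such a discriminative loss can be sketched in one dimension. This is a hedged toy version: the actual loss operates on multi-dimensional CNN embeddings and includes a regularization term, but the hinge margins `delta_v` (pull embeddings toward their instance mean) and `delta_d` (push instance means apart) follow the paper's formulation.

```python
def discriminative_pull_push(embeddings, labels, delta_v=0.5, delta_d=1.5):
    """Toy 1-D pull/push terms of a discriminative (clustering) loss.

    embeddings: per-pixel embedding values (floats).
    labels: instance id for each pixel.
    The pull term penalizes pixels farther than delta_v from their
    instance mean; the push term penalizes instance means closer
    than delta_d to each other.
    """
    instances = sorted(set(labels))
    means = {c: sum(e for e, l in zip(embeddings, labels) if l == c)
                / labels.count(c) for c in instances}
    pull = sum(max(0.0, abs(e - means[l]) - delta_v) ** 2
               for e, l in zip(embeddings, labels)) / len(embeddings)
    push, pairs = 0.0, 0
    for i, a in enumerate(instances):
        for b in instances[i + 1:]:
            push += max(0.0, delta_d - abs(means[a] - means[b])) ** 2
            pairs += 1
    return pull, push / max(pairs, 1)

# two tight, well-separated clusters: both terms vanish, so the
# embedding is already trivially clusterable into two lane instances
pull, push = discriminative_pull_push([0.0, 0.1, 2.0, 2.1], [0, 0, 1, 1])
```

Once the network drives both terms to zero, a simple post-processing step (e.g. mean-shift or thresholded clustering) recovers the lane instances, which is exactly the property the paper exploits.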
YOLOP: You Only Look Once for Panoptic Driving Perception
A panoptic driving perception system is an essential part of autonomous driving.
BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving.
RESA: Recurrent Feature-Shift Aggregator for Lane Detection
Lane detection is one of the most important tasks in self-driving.
CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution
Modern deep-learning-based lane detection methods are successful in most scenarios but struggle with lane lines that have complex topologies.