Lane Detection
84 papers with code • 11 benchmarks • 15 datasets
Lane Detection is a computer vision task that involves identifying the boundaries of driving lanes in a video or image of a road scene. The goal is to accurately locate and track the lane markings in real time, even in challenging conditions such as poor lighting, glare, or complex road layouts.
Lane detection is an important component of advanced driver assistance systems (ADAS) and autonomous vehicles: it provides information about the road layout and the vehicle's position within its lane, which is crucial for navigation and safety. Classical algorithms combine computer vision techniques such as edge detection, color filtering, and Hough transforms to identify and track lane markings, while the more recent methods listed below are learning-based.
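To make the classical pipeline concrete, the sketch below runs a minimal Hough transform over a synthetic set of edge points (the kind an edge detector such as Canny would produce) and recovers the dominant line. The function name and toy data are illustrative, not taken from any library; a real pipeline would use an optimized implementation such as OpenCV's.

```python
import math

def hough_lines(edge_points, n_theta=180):
    """Vote for line parameters (rho, theta): x*cos(theta) + y*sin(theta) = rho."""
    acc = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# Synthetic "lane marking": edge points along the 45-degree line x = y.
edge_points = [(i, i) for i in range(50)]
acc = hough_lines(edge_points)
(rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
# The accumulator peak at theta = 135 deg, rho = 0 describes the line x = y,
# with all 50 edge points voting for it.
print(rho, t, votes)  # -> 0 135 50
```

In practice the accumulator is a dense 2-D array and several local maxima are kept, one per visible lane marking; the dictionary here just keeps the sketch short.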
(Image credit: End-to-end Lane Detection)
Libraries
Use these libraries to find Lane Detection models and implementations.

Most implemented papers
Structured Bird's-Eye-View Traffic Scene Understanding from Onboard Images
In this work, we study the problem of extracting a directed graph representing the local road network in BEV coordinates, from a single onboard camera image.
Sim-to-Real Domain Adaptation for Lane Detection and Classification in Autonomous Driving
In this paper, we propose UDA schemes using adversarial discriminative and generative methods for lane detection and classification applications in autonomous driving.
PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark
Methods for 3D lane detection have been recently proposed to address the issue of inaccurate lane layouts in many autonomous driving scenarios (uphill/downhill, bump, etc.).
Eigenlanes: Data-Driven Lane Descriptors for Structurally Diverse Lanes
We generate a set of lane candidates by clustering the training lanes in the eigenlane space.
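The idea can be sketched as follows: represent each training lane as a fixed-length vector of x-offsets sampled at preset rows, find the principal ("eigenlane") direction of those vectors, and cluster the projected coefficients; the cluster centers, mapped back to image space, serve as lane candidates. This is a hedged toy reconstruction in plain Python (power iteration for the first component, 1-D 2-means), not the authors' implementation, and real lanes would need more components and clusters.

```python
def first_eigenlane(lanes, iters=200):
    """Top principal direction of lane vectors via power iteration (toy PCA)."""
    n, d = len(lanes), len(lanes[0])
    mean = [sum(col) / n for col in zip(*lanes)]
    C = [[x - m for x, m in zip(lane, mean)] for lane in lanes]  # centered data
    v = [1.0] * d
    for _ in range(iters):
        Cv = [sum(c * vi for c, vi in zip(row, v)) for row in C]
        w = [sum(C[i][j] * Cv[i] for i in range(n)) for j in range(d)]  # C^T C v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def lane_candidates(lanes, mean, v, steps=10):
    """Cluster 1-D eigenlane coefficients with 2-means; centers become candidates."""
    coeffs = [sum((x - m) * vi for x, m, vi in zip(lane, mean, v)) for lane in lanes]
    c0, c1 = min(coeffs), max(coeffs)
    for _ in range(steps):
        g0 = [c for c in coeffs if abs(c - c0) <= abs(c - c1)]
        g1 = [c for c in coeffs if abs(c - c0) > abs(c - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    # Map each cluster center back to image space: mean + coefficient * eigenlane.
    return [[m + c * vi for m, vi in zip(mean, v)] for c in (c0, c1)]

# Toy training lanes: x-offsets at 5 fixed rows, two structural groups (near 10, near 40).
lanes = [[9] * 5, [10] * 5, [11] * 5, [39] * 5, [40] * 5, [41] * 5]
mean, v = first_eigenlane(lanes)
cands = sorted(lane_candidates(lanes, mean, v), key=lambda c: c[0])
# The two candidates recover the two groups, roughly [10, ...] and [40, ...].
```

Clustering in the low-dimensional coefficient space rather than on raw point sets is what keeps the candidate set small and structurally diverse.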
ONCE-3DLanes: Building Monocular 3D Lane Detection Benchmark
We present ONCE-3DLanes, a real-world autonomous driving dataset with lane layout annotation in 3D space.
Ultra Fast Deep Lane Detection with Hybrid Anchor Driven Ordinal Classification
With the help of the anchor-driven representation, we then reformulate the lane detection task as an ordinal classification problem to get the coordinates of lanes.
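The row-anchor readout behind this family of methods can be sketched as follows: for each predefined row, the network scores a set of column bins plus one "no lane" bin, and the lane's x-coordinate in that row is read out as the softmax-weighted average over the column bins (a soft argmax). The snippet below is an illustrative reconstruction of that readout under these assumptions, not the paper's code, and the logits are made up.

```python
import math

def row_anchor_to_x(logits):
    """logits: scores over column bins; the last bin means 'no lane in this row'."""
    *loc, absent = logits
    if absent >= max(loc):
        return None  # the "no lane" bin wins for this row
    m = max(loc)
    exps = [math.exp(s - m) for s in loc]  # numerically stable softmax
    z = sum(exps)
    # Expected column index (soft argmax) yields a sub-bin x-coordinate.
    return sum(i * e for i, e in enumerate(exps)) / z

x = row_anchor_to_x([0.0, 0.0, 0.0, 10.0, 0.0, -5.0])  # peak at column bin 3
```

Because each row is a single classification over a coarse grid of columns instead of a dense per-pixel segmentation, the readout is extremely cheap, which is where the "ultra fast" speed comes from.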
YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception
Over the last decade, multi-tasking learning approaches have achieved promising results in solving panoptic driving perception problems, providing both high-precision and high-efficiency performance.
End-to-End Lane detection with One-to-Several Transformer
We first propose the one-to-several label assignment, which combines one-to-many and one-to-one label assignment to solve label semantic conflicts while keeping end-to-end detection.
ADNet: Lane Shape Prediction via Anchor Decomposition
In this paper, we revisit the limitations of anchor-based lane detection methods, which have predominantly focused on fixed anchors that stem from the edges of the image, disregarding their versatility and quality.
Lane2Seq: Towards Unified Lane Detection via Sequence Generation
Experimental results demonstrate that such a simple sequence generation paradigm not only unifies lane detection but also achieves competitive performance on benchmarks.