Drivable Area Detection
6 papers with code • 1 benchmark • 1 dataset
Drivable area detection is a subtask of object detection. A model identifies the road regions that are safe and legal to drive on, typically rendering them as colored blocks that cover each drivable area.
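The "color blocks" described above are usually produced by blending a semi-transparent color over the pixels a model predicts as drivable. A minimal sketch of that overlay step, assuming a boolean per-pixel mask (the function name, color, and alpha are illustrative choices, not part of any specific model's API):

```python
import numpy as np

def overlay_drivable_area(image, mask, color=(0, 255, 0), alpha=0.5):
    """Blend a semi-transparent color block over pixels flagged as drivable.

    image: HxWx3 uint8 RGB frame.
    mask:  HxW boolean drivable-area mask (e.g. a model's thresholded output).
    """
    out = image.astype(np.float32)
    color = np.array(color, dtype=np.float32)
    # Alpha-blend only the masked pixels; the rest of the frame is untouched.
    out[mask] = (1 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)

# Tiny synthetic example: mark the lower half of a 4x4 black frame as drivable.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
road = np.zeros((4, 4), dtype=bool)
road[2:] = True
blended = overlay_drivable_area(frame, road)
```

In a real pipeline the mask would come from a segmentation head (as in YOLOP or HybridNets below) rather than being hand-built.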
Most implemented papers
YOLOP: You Only Look Once for Panoptic Driving Perception
A panoptic driving perception system is an essential part of autonomous driving.
BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving.
HybridNets: End-to-End Perception Network
Based on these optimizations, we have developed an end-to-end perception network to perform multi-tasking, including traffic object detection, drivable area segmentation and lane detection simultaneously, called HybridNets, which achieves better accuracy than prior art.
YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception
Over the last decade, multi-tasking learning approaches have achieved promising results in solving panoptic driving perception problems, providing both high-precision and high-efficiency performance.
TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars
Driveable Area Segmentation and Lane Detection are particularly important for safe and efficient navigation on the road.
You Only Look at Once for Real-time and Generic Multi-Task
In this study, we present A-YOLOM, an adaptive, real-time, and lightweight multi-task model designed to concurrently address object detection, drivable area segmentation, and lane line segmentation tasks.