VectorMapNet: End-to-end Vectorized HD Map Learning

17 Jun 2022  ·  Yicheng Liu, Tianyuan Yuan, Yue Wang, Yilun Wang, Hang Zhao ·

Autonomous driving systems require High-Definition (HD) semantic maps to navigate urban roads. Existing solutions approach the semantic mapping problem with offline manual annotation, which suffers from serious scalability issues. Recent learning-based methods produce dense rasterized segmentation predictions to construct maps. However, these predictions do not include instance information about individual map elements and require heuristic post-processing to obtain vectorized maps. To tackle these challenges, we introduce an end-to-end vectorized HD map learning pipeline, termed VectorMapNet. VectorMapNet takes onboard sensor observations and predicts a sparse set of polylines in the bird's-eye view. This pipeline can explicitly model the spatial relations between map elements and generate vectorized maps that are friendly to downstream autonomous driving tasks. Extensive experiments show that VectorMapNet achieves strong map learning performance on both the nuScenes and Argoverse2 datasets, surpassing previous state-of-the-art methods by 14.2 mAP and 14.6 mAP, respectively. Qualitatively, VectorMapNet is capable of generating comprehensive maps and capturing fine-grained details of road geometry. To the best of our knowledge, VectorMapNet is the first work designed towards end-to-end vectorized map learning from onboard observations. Our project website is available at \url{}.
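The map elements predicted by VectorMapNet are polylines, and the results below are reported as Chamfer AP, which matches predicted and ground-truth polylines by their Chamfer distance. As an illustrative sketch (not the paper's exact evaluation code; point sampling density and distance thresholds in the official protocol may differ), the symmetric Chamfer distance between two point-sampled polylines can be computed as:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two polylines.

    p: (N, 2) array of points sampled along the predicted polyline (BEV meters).
    q: (M, 2) array of points sampled along the ground-truth polyline.
    """
    # Pairwise Euclidean distances between every point in p and every point in q.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions, then average the two.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

A prediction is then typically counted as a true positive when this distance to a ground-truth element of the same class falls below a threshold, and AP is computed over the resulting matches.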


Results from the Paper

| Task                     | Dataset         | Model        | Metric     | Value | Global Rank |
|--------------------------|-----------------|--------------|------------|-------|-------------|
| HD semantic map learning | Argoverse2      | VectorMapNet | Frechet AP | 44.6  | #1          |
| HD semantic map learning | Argoverse2      | VectorMapNet | Chamfer AP | 35.8  | #1          |
| HD semantic map learning | Argoverse2      | HDMapNet     | Chamfer AP | 18.8  | #2          |
| HD semantic map learning | nuScenes        | VectorMapNet | Chamfer AP | 53.7  | #1          |
| HD semantic map learning | nuScenes        | HDMapNet     | Chamfer AP | 31.0  | #2          |
| 3D Lane Detection        | OpenLane-V2 val | VectorMapNet | DET_l      | 11.1  | #6          |
| 3D Lane Detection        | OpenLane-V2 val | VectorMapNet | OLS        | 20.8  | #6          |
| 3D Lane Detection        | OpenLane-V2 val | VectorMapNet | DET_t      | 41.7  | #7          |
| 3D Lane Detection        | OpenLane-V2 val | VectorMapNet | TOP_ll     | 0.4   | #6          |
| 3D Lane Detection        | OpenLane-V2 val | VectorMapNet | TOP_lt     | 5.9   | #6          |
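The Argoverse2 results also report Fréchet AP, which matches polylines by the discrete Fréchet distance; unlike the Chamfer distance, it respects the ordering of points along each curve. A minimal sketch of the standard dynamic-programming formulation (for illustration only; the paper's evaluation protocol may sample and threshold differently):

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two ordered polylines.

    p: (N, 2) array of polyline points; q: (M, 2) array of polyline points.
    """
    n, m = len(p), len(q)
    # Pairwise Euclidean distances between all point pairs.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # ca[i, j]: Fréchet distance of the prefixes p[:i+1], q[:j+1].
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            # Advance along p, along q, or along both, keeping the cheapest option.
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]
```

Because the matching must walk both curves monotonically, two polylines that cover the same region but trace it in different directions score worse under Fréchet distance than under Chamfer distance.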
