Ground-aware Monocular 3D Object Detection for Autonomous Driving

1 Feb 2021  ·  Yuxuan Liu, Yuan Yixuan, Ming Liu ·

Estimating the 3D position and orientation of objects in the environment with a single RGB camera is a critical and challenging task for low-cost urban autonomous driving and mobile robots. Most existing algorithms are based on geometric constraints from 2D-3D correspondence, which stem from generic 6D object pose estimation. We first identify how the ground plane provides additional clues for depth reasoning in 3D detection in driving scenes. Based on this observation, we then improve the processing of 3D anchors and introduce a novel neural network module to fully utilize such application-specific priors in a deep learning framework. Finally, we introduce an efficient neural network embedded with the proposed module for 3D object detection. We further verify the power of the proposed module with a neural network designed for monocular depth prediction. The two proposed networks achieve state-of-the-art performance on the KITTI 3D object detection and depth prediction benchmarks, respectively. The code will be published at https://www.github.com/Owen-Liuyuxuan/visualDet3D
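The ground-plane clue mentioned in the abstract can be illustrated with basic pinhole-camera geometry: for a pixel assumed to lie on a flat road surface, image row alone determines depth once the camera height above the ground and the intrinsics are known. The sketch below is an illustrative assumption, not the paper's actual module; the function name and the KITTI-like intrinsic values are hypothetical.

```python
import math

def ground_plane_depth(v, fy, cy, cam_height):
    """Depth of a ground-plane pixel at image row v under a pinhole model.

    For a flat road and a camera at height cam_height above it,
    a ground point projecting to row v has depth z = fy * cam_height / (v - cy).
    Rows at or above the principal row cy never intersect the ground,
    so their depth is unbounded.
    """
    dv = v - cy
    if dv <= 0:  # at or above the horizon: no ground intersection
        return math.inf
    return fy * cam_height / dv

# Example with approximate KITTI-like values (assumed, for illustration):
# focal length fy ~ 721.5 px, principal row cy ~ 172.8 px, camera ~1.65 m high.
fy, cy, h = 721.5, 172.8, 1.65
depth_near = ground_plane_depth(400.0, fy, cy, h)  # lower in the image: closer
depth_far = ground_plane_depth(200.0, fy, cy, h)   # nearer the horizon: farther
```

Pixels lower in the image thus map to smaller depths, which is the kind of application-specific prior the proposed module injects into anchor processing.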


Datasets


Task                           Dataset              Model  Metric     Value  Global Rank
Monocular 3D Object Detection  KITTI Cars Hard      GAC    AP Hard     9.94  # 5
Monocular 3D Object Detection  KITTI Cars Moderate  GAC    AP Medium  13.17  # 12

Methods


No methods listed for this paper.