Is Pseudo-Lidar needed for Monocular 3D Object detection?

ICCV 2021 · Dennis Park, Rares Ambrus, Vitor Guizilini, Jie Li, Adrien Gaidon

Recent progress in 3D object detection from single images leverages monocular depth estimation as a way to produce 3D point clouds, turning cameras into pseudo-lidar sensors. These two-stage detectors improve with the accuracy of the intermediate depth estimation network, which can itself be improved without manual labels via large-scale self-supervised learning. However, they tend to suffer from overfitting more than end-to-end methods, are more complex, and the gap with similar lidar-based detectors remains significant. In this work, we propose an end-to-end, single-stage, monocular 3D object detector, DD3D, that can benefit from depth pre-training like pseudo-lidar methods, but without their limitations. Our architecture is designed for effective information transfer between depth estimation and 3D detection, allowing us to scale with the amount of unlabeled pre-training data. Our method achieves state-of-the-art results on two challenging benchmarks, with 16.34% and 9.28% AP for Cars and Pedestrians (respectively) on the KITTI-3D benchmark, and 41.5% mAP on NuScenes.
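The key architectural idea in the abstract is that a single shared feature map can feed both a dense depth head (used during pre-training) and a 3D box head (used for detection), so depth pre-training directly initializes weights that the detector reuses. The sketch below is a hypothetical, minimal NumPy illustration of that shared-head structure, not the authors' implementation; all names and shapes (e.g. a 7-parameter box encoding) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (NOT the DD3D code): one shared backbone feature map
# feeds two per-pixel heads, so weights learned for depth estimation live
# in the same trunk the 3D detection head builds on.

rng = np.random.default_rng(0)

def head_1x1(feat, w):
    # A 1x1 convolution is just a per-pixel linear map: (H, W, Cin) @ (Cin, Cout)
    return feat @ w

H, W, C = 8, 8, 16
shared = rng.standard_normal((H, W, C))   # shared backbone features

w_depth = rng.standard_normal((C, 1))     # depth head: 1 channel, per-pixel depth
w_box = rng.standard_normal((C, 7))       # 3D box head: e.g. (x, y, z, w, h, l, yaw)

depth = head_1x1(shared, w_depth)         # dense depth map, shape (8, 8, 1)
boxes = head_1x1(shared, w_box)           # per-pixel 3D box params, shape (8, 8, 7)

print(depth.shape, boxes.shape)
```

During self-supervised depth pre-training only the depth head and the shared trunk would be trained; for detection the same trunk is kept and the box head is added, which is the "information transfer" the abstract refers to.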


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Monocular 3D Object Detection | KITTI Cars Easy | DD3D | AP Easy | 23.22 | #3 |
| Monocular 3D Object Detection | KITTI Cars Moderate | DD3D | AP Medium | 16.34 | #6 |
| Monocular 3D Object Detection | KITTI Cars Hard | DD3D | AP Hard | 14.20 | #3 |
| Monocular 3D Object Detection | KITTI Pedestrian Easy | DD3D | AP Easy | 13.91 | #2 |
| Monocular 3D Object Detection | KITTI Pedestrian Moderate | DD3D | AP Medium | 9.30 | #1 |
| Monocular 3D Object Detection | KITTI Pedestrian Hard | DD3D | AP Hard | 8.05 | #1 |
