A Dual-Cycled Cross-View Transformer Network for Unified Road Layout Estimation and 3D Object Detection in the Bird's-Eye-View

19 Sep 2022  ·  Curie Kim, Ue-Hwan Kim

The bird's-eye-view (BEV) representation allows robust learning of multiple tasks for autonomous driving, including road layout estimation and 3D object detection. However, contemporary methods for unified road layout estimation and 3D object detection rarely handle the class imbalance of the training dataset or exploit multi-class learning to reduce the total number of networks required. To overcome these limitations, we propose a unified model for road layout estimation and 3D object detection inspired by the transformer architecture and the CycleGAN learning framework. The proposed model counters the performance degradation caused by the class imbalance of the dataset with the focal loss and the proposed dual cycle loss. Moreover, we set up extensive learning scenarios to study the effect of multi-class learning on road layout estimation in various situations. To verify the effectiveness of the proposed model and the learning scheme, we conduct thorough ablation and comparative studies. The experimental results attest to the effectiveness of our model; we achieve state-of-the-art performance in both the road layout estimation and 3D object detection tasks.
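
As a concrete illustration of how the class-imbalance issue mentioned above is typically addressed, the sketch below applies a standard binary focal loss (Lin et al., 2017) to per-cell BEV occupancy logits. This is a minimal PyTorch-style sketch, not the paper's implementation: the focal_loss function, the tensor shapes, and the alpha/gamma defaults are illustrative assumptions, and the proposed dual cycle loss is not reproduced here.

```python
# Minimal sketch (not the paper's code): binary focal loss over a BEV
# occupancy grid, down-weighting easy background cells so that rare
# foreground classes (e.g. vehicles) contribute more to the gradient.
import torch
import torch.nn.functional as F


def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """logits: raw predictions (B, H, W); targets: occupancy in {0, 1}."""
    targets = targets.float()
    # Per-cell binary cross-entropy, kept unreduced so it can be re-weighted.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t is the predicted probability of the true class for each cell.
    p = torch.sigmoid(logits)
    p_t = p * targets + (1.0 - p) * (1.0 - targets)
    # alpha_t balances positive vs. negative cells; (1 - p_t)^gamma suppresses
    # well-classified (easy) cells, which is the core idea of the focal loss.
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()


# Usage on a heavily imbalanced map where only ~2% of the cells are occupied.
logits = torch.randn(4, 256, 256)
targets = (torch.rand(4, 256, 256) < 0.02).float()
loss = focal_loss(logits, targets)
```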

Datasets

KITTI (2012, Odometry, Raw), Argoverse

Task                                                Dataset         Model   Metric Name  Metric Value  Global Rank
Monocular Cross-View Road Scene Parsing (Vehicle)   Argoverse       DCTNet  mIoU         48.04%        #1
Monocular Cross-View Road Scene Parsing (Vehicle)   Argoverse       DCTNet  mAP          68.96%        #1
Monocular Cross-View Road Scene Parsing (Road)      Argoverse       DCTNet  mAP          88.87%        #1
Monocular Cross-View Road Scene Parsing (Road)      Argoverse       DCTNet  mIoU         76.71%        #1
Monocular Cross-View Road Scene Parsing (Vehicle)   KITTI2012       DCTNet  mIoU         39.44%        #1
Monocular Cross-View Road Scene Parsing (Vehicle)   KITTI2012       DCTNet  mAP          58.89%        #1
Monocular Cross-View Road Scene Parsing (Road)      KITTI Odometry  DCTNet  mAP          88.28%        #1
Monocular Cross-View Road Scene Parsing (Road)      KITTI Odometry  DCTNet  mIoU         77.15%        #2
Monocular Cross-View Road Scene Parsing (Road)      KITTI Raw       DCTNet  mIoU         65.86%        #2
Monocular Cross-View Road Scene Parsing (Road)      KITTI Raw       DCTNet  mAP          86.56%        #1

Methods

Transformer, CycleGAN, Focal Loss