SpaRC: Sparse Radar-Camera Fusion for 3D Object Detection

29 Nov 2024 · Philipp Wolters, Johannes Gilg, Torben Teepe, Fabian Herzog, Felix Fent, Gerhard Rigoll

In this work, we present SpaRC, a novel Sparse fusion transformer for 3D perception that integrates multi-view image semantics with Radar and Camera point features. The fusion of radar and camera modalities has emerged as an efficient perception paradigm for autonomous driving systems. While conventional approaches utilize dense Bird's Eye View (BEV)-based architectures for depth estimation, contemporary query-based transformers excel in camera-only detection through object-centric methodology. However, these query-based approaches exhibit limitations in false positive detections and localization precision due to implicit depth modeling. We address these challenges through three key contributions: (1) sparse frustum fusion (SFF) for cross-modal feature alignment, (2) range-adaptive radar aggregation (RAR) for precise object localization, and (3) local self-attention (LSA) for focused query aggregation. In contrast to existing methods requiring computationally intensive BEV-grid rendering, SpaRC operates directly on encoded point features, yielding substantial improvements in efficiency and accuracy. Empirical evaluations on the nuScenes and TruckScenes benchmarks demonstrate that SpaRC significantly outperforms existing dense BEV-based and sparse query-based detectors. Our method achieves state-of-the-art performance metrics of 67.1 NDS and 63.1 AMOTA. The code and pretrained models are available at https://github.com/phi-wol/sparc.
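The abstract names three sparse fusion components but gives no implementation detail. The sketch below is a hypothetical, simplified illustration of how two of them, range-adaptive radar aggregation (RAR) and local self-attention (LSA), could operate on query reference points; sparse frustum fusion (SFF) is omitted for brevity. All function names, tensor shapes, the radius schedule, and hyperparameters are assumptions for illustration, not the released SpaRC code (see the linked repository for the actual implementation).

```python
# Hypothetical sketch of RAR and LSA over sparse queries; not the authors' code.
import torch
import torch.nn as nn


def range_adaptive_radar_aggregation(query_feats, query_xyz, radar_feats, radar_xyz,
                                     base_radius=1.0, range_scale=0.02, k=8):
    """Pool radar point features around each query's 3D reference point.

    The search radius grows with the query's distance from the ego vehicle,
    reflecting sparser radar returns at long range (an assumed schedule).
    query_feats: (Q, C), query_xyz: (Q, 3)
    radar_feats: (P, C), radar_xyz: (P, 3)
    """
    dist = torch.cdist(query_xyz, radar_xyz)                          # (Q, P)
    radius = base_radius + range_scale * query_xyz.norm(dim=-1, keepdim=True)
    knn_dist, knn_idx = dist.topk(k, dim=-1, largest=False)           # (Q, k)
    mask = knn_dist <= radius                                         # drop far neighbors
    gathered = radar_feats[knn_idx]                                   # (Q, k, C)
    # Distance-weighted softmax; queries with no in-radius neighbor get zero weight.
    weights = torch.where(mask, -knn_dist, torch.full_like(knn_dist, float('-inf')))
    weights = weights.softmax(dim=-1).nan_to_num(0.0).unsqueeze(-1)   # (Q, k, 1)
    return query_feats + (weights * gathered).sum(dim=1)              # residual fusion


class LocalSelfAttention(nn.Module):
    """Self-attention restricted to each query's k nearest queries in 3D."""

    def __init__(self, dim, num_heads=8, k=16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.k = k

    def forward(self, query_feats, query_xyz):
        # query_feats: (Q, C), query_xyz: (Q, 3)
        dist = torch.cdist(query_xyz, query_xyz)                      # (Q, Q)
        knn_idx = dist.topk(self.k, dim=-1, largest=False).indices    # (Q, k)
        neighbors = query_feats[knn_idx]                              # (Q, k, C)
        q = query_feats.unsqueeze(1)                                  # each query attends
        out, _ = self.attn(q, neighbors, neighbors)                   # to its local window
        return query_feats + out.squeeze(1)


if __name__ == "__main__":
    Q, P, C = 100, 500, 256
    queries, q_xyz = torch.randn(Q, C), torch.randn(Q, 3) * 30
    radar, r_xyz = torch.randn(P, C), torch.randn(P, 3) * 30
    fused = range_adaptive_radar_aggregation(queries, q_xyz, radar, r_xyz)
    out = LocalSelfAttention(dim=C)(fused, q_xyz)
    print(out.shape)  # torch.Size([100, 256])
```

Restricting attention to each query's k nearest neighbors keeps the cost at O(Q·k) rather than O(Q²), and operating on point features directly avoids rendering a dense BEV grid, which is consistent with the efficiency argument made in the abstract.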

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| 3D Object Detection | nuScenes | SpaRC | NDS | 0.699 | #99 |
| 3D Object Detection | nuScenes | SpaRC | mAP | 0.646 | #92 |
| 3D Object Detection | nuScenes Camera-Radar | SpaRC | NDS | 69.9 | #1 |
| 3D Multi-Object Tracking | nuScenes Camera-Radar | SpaRC | AMOTA | 0.631 | #1 |
| 3D Object Detection | TruckScenes | SpaRC | NDS | 37.4 | #1 |
| 3D Object Detection | TruckScenes | SpaRC | mAP | 27.2 | #1 |
