Where2comm: Communication-Efficient Collaborative Perception via Spatial Confidence Maps

26 Sep 2022  ·  Yue Hu, Shaoheng Fang, Zixing Lei, Yiqi Zhong, Siheng Chen

Multi-agent collaborative perception can significantly improve perception performance by enabling agents to share complementary information with each other through communication. However, this inevitably creates a fundamental trade-off between perception performance and communication bandwidth. To tackle this bottleneck, we propose a spatial confidence map, which reflects the spatial heterogeneity of perceptual information. It empowers agents to share only spatially sparse yet perceptually critical information, addressing the question of where to communicate. Based on this spatial confidence map, we propose Where2comm, a communication-efficient collaborative perception framework. Where2comm has two distinct advantages: i) it performs pragmatic compression, using less communication to achieve higher perception performance by focusing on perceptually critical areas; and ii) it can handle varying communication bandwidth by dynamically adjusting the spatial areas involved in communication. To evaluate Where2comm, we consider 3D object detection in both real-world and simulation scenarios with two modalities (camera/LiDAR) and two agent types (cars/drones) on four datasets: OPV2V, V2X-Sim, DAIR-V2X, and our original CoPerception-UAVs. Where2comm consistently outperforms previous methods; for example, it achieves more than $100,000 \times$ lower communication volume while still outperforming DiscoNet and V2X-ViT on OPV2V. Our code is available at https://github.com/MediaBrain-SJTU/where2comm.
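
The core idea, scoring each spatial location of a bird's-eye-view feature map and transmitting only the most confident cells, can be illustrated with a minimal PyTorch sketch. This is an illustrative assumption, not the repository's API: the class name SpatialConfidenceSelector, the keep_ratio parameter, and the 1x1-convolution confidence head are hypothetical stand-ins for the paper's detection-head-based confidence generator and bandwidth-aware selection.

```python
import torch
import torch.nn as nn


class SpatialConfidenceSelector(nn.Module):
    """Minimal sketch (hypothetical, not the official implementation):
    score each BEV cell with a spatial confidence map and keep only the
    top-k most confident locations as the message to communicate."""

    def __init__(self, feat_channels: int, keep_ratio: float = 0.05):
        super().__init__()
        # 1x1 conv as a stand-in for the detection head that scores each location
        self.confidence_head = nn.Conv2d(feat_channels, 1, kernel_size=1)
        self.keep_ratio = keep_ratio  # fraction of spatial cells to transmit

    def forward(self, bev_feat: torch.Tensor):
        # bev_feat: (B, C, H, W) bird's-eye-view features of one agent
        b, c, h, w = bev_feat.shape
        confidence = torch.sigmoid(self.confidence_head(bev_feat))  # (B, 1, H, W)

        # Keep the k most confident spatial locations per sample
        k = max(1, int(self.keep_ratio * h * w))
        flat_conf = confidence.view(b, -1)           # (B, H*W)
        topk_idx = flat_conf.topk(k, dim=1).indices  # (B, k)

        # Binary selection mask: 1 where the feature is sent
        mask = torch.zeros_like(flat_conf)
        mask.scatter_(1, topk_idx, 1.0)
        mask = mask.view(b, 1, h, w)

        # Sparse message: features zeroed outside the selected cells
        message = bev_feat * mask
        return message, mask, confidence


# Usage: share roughly 5% of the BEV grid with collaborating agents
if __name__ == "__main__":
    selector = SpatialConfidenceSelector(feat_channels=64, keep_ratio=0.05)
    feat = torch.randn(1, 64, 100, 252)
    msg, mask, conf = selector(feat)
    print(mask.mean().item())  # ~0.05: fraction of spatial cells communicated
```

Varying bandwidth budgets map naturally onto this sketch by adjusting keep_ratio per round, which mirrors the paper's claim that Where2comm adapts the communicated spatial area dynamically.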

Results

Task                            Dataset             Model       Metric  Value   Global Rank
Monocular 3D Object Detection   CoPerception-UAVs   Where2comm  AP50    65.71   #1
3D Object Detection             DAIR-V2X            Where2comm  AP50    63.71   #2
Monocular 3D Object Detection   OPV2V               Where2comm  AP50    47.14   #1
3D Object Detection             V2X-SIM             Where2comm  AP50    59.1    #1
