V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction

In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles. By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints. This allows us to see through occlusions and detect actors at long range, where the observations are very sparse or non-existent. We also show that our approach of sending compressed deep feature map activations achieves high accuracy while satisfying communication bandwidth requirements.
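The pipeline described above has three stages: each vehicle computes intermediate bird's-eye-view (BEV) feature maps, compresses and broadcasts them, and the receiving vehicle aggregates the incoming maps with its own before running detection and prediction heads. The PyTorch sketch below only illustrates that dataflow; the class name `V2VFusion`, the 1x1-conv compressor, and the mean-pooling aggregation are placeholders (the paper uses a learned compression module and a spatially aware graph neural network for aggregation), and the warping of received maps into the ego frame is assumed to have already happened.

```python
import torch
import torch.nn as nn

class V2VFusion(nn.Module):
    """Sketch: each vehicle compresses its BEV feature map, broadcasts it,
    and the ego vehicle aggregates the received maps before detection.
    Hypothetical module, not the authors' implementation."""

    def __init__(self, channels=128, code_channels=32):
        super().__init__()
        # 1x1 convs stand in for the paper's learned compression of
        # intermediate activations (which keeps bandwidth low).
        self.compress = nn.Conv2d(channels, code_channels, kernel_size=1)
        self.decompress = nn.Conv2d(code_channels, channels, kernel_size=1)

    def encode(self, bev_features):
        # Runs on the sender: shrink the feature map before transmission.
        return self.compress(bev_features)

    def fuse(self, ego_features, received_codes):
        # Runs on the receiver: decompress each message (assumed already
        # warped into the ego frame) and average it with the ego features.
        # Mean pooling stands in for the paper's GNN aggregation.
        decoded = [self.decompress(code) for code in received_codes]
        stacked = torch.stack([ego_features] + decoded, dim=0)
        return stacked.mean(dim=0)

# Toy usage: one ego vehicle and two senders sharing 128-channel BEV maps.
fusion = V2VFusion()
ego = torch.randn(1, 128, 64, 64)
messages = [fusion.encode(torch.randn(1, 128, 64, 64)) for _ in range(2)]
fused = fusion.fuse(ego, messages)
print(fused.shape)  # torch.Size([1, 128, 64, 64])
```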

PDF Abstract (ECCV 2020)

Results from the Paper


Task                 Dataset  Model                          Metric               Value  Global Rank
3D Object Detection  OPV2V    V2VNet (PointPillar backbone)  AP@0.7 (Default)     0.822  #1
3D Object Detection  OPV2V    V2VNet (PointPillar backbone)  AP@0.7 (CulverCity)  0.734  #2
3D Object Detection  V2XSet   V2VNet                         AP@0.5 (Perfect)     0.845  #3
3D Object Detection  V2XSet   V2VNet                         AP@0.7 (Perfect)     0.677  #5
3D Object Detection  V2XSet   V2VNet                         AP@0.5 (Noisy)       0.791  #3
3D Object Detection  V2XSet   V2VNet                         AP@0.7 (Noisy)       0.493  #3
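
For reference, AP@0.5 and AP@0.7 are average precision computed with a detection counted as a true positive only when its intersection-over-union (IoU) with a ground-truth box meets the 0.5 or 0.7 threshold. Below is a minimal sketch of that IoU test using axis-aligned BEV boxes for simplicity; the actual benchmarks evaluate rotated boxes, which require polygon intersection.

```python
def bev_iou(box_a, box_b):
    """Axis-aligned bird's-eye-view IoU between two boxes given as
    (x_min, y_min, x_max, y_max). Simplified stand-in: rotated-box IoU,
    as used by the benchmarks, needs polygon intersection."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts toward AP@0.7 only if it overlaps a ground-truth
# box with IoU >= 0.7.
print(bev_iou((0, 0, 4, 2), (1, 0, 5, 2)))  # 0.6 -> match at 0.5, not 0.7
```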

Methods


No methods listed for this paper.