Multiview Detection with Feature Perspective Transformation

ECCV 2020 · Yunzhong Hou, Liang Zheng, Stephen Gould

Incorporating multiple camera views for detection alleviates the impact of occlusions in crowded scenes. In a multiview system, we need to answer two important questions when dealing with ambiguities that arise from occlusions. First, how should we aggregate cues from multiple views? Second, how should we deal with unreliable 2D and 3D spatial information that has been tainted by occlusions? To address these questions, we propose a novel multiview detection system, MVDet. For multiview aggregation, existing methods combine anchor box features from the image plane, which potentially limits performance due to inaccurate anchor box shapes and sizes. In contrast, we take an anchor-free approach: we aggregate multiview information by projecting feature maps onto the ground plane (bird's-eye view). To resolve any remaining spatial ambiguity, we apply large kernel convolutions on the ground-plane feature map and infer locations from detection peaks. Our entire model is end-to-end learnable and achieves 88.2% MODA on the standard Wildtrack dataset, outperforming the previous state-of-the-art by 14.1%. We also provide a detailed analysis of MVDet on MultiviewX, a newly introduced synthetic dataset that allows us to control the level of occlusion. Code and the MultiviewX dataset are available at https://github.com/hou-yz/MVDet.
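
The aggregation step described above, projecting each view's feature map onto the ground plane, can be sketched in a few lines of PyTorch. This is an illustrative re-implementation rather than the authors' released code: the homography H (mapping ground-plane pixel coordinates to image pixel coordinates) and all tensor shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def warp_feature_to_ground(feat, H, out_size):
    """Resample an image-plane feature map [B, C, h, w] onto a ground-plane
    grid of size out_size = (Hg, Wg). H is a 3x3 homography mapping
    ground-plane pixel coordinates (x, y, 1) to image pixel coordinates."""
    B, C, h, w = feat.shape
    Hg, Wg = out_size
    # Homogeneous coordinates of every ground-plane grid cell.
    ys, xs = torch.meshgrid(
        torch.arange(Hg, dtype=torch.float32, device=feat.device),
        torch.arange(Wg, dtype=torch.float32, device=feat.device),
        indexing="ij")
    pts = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)
    # Project ground-plane cells into the image and dehomogenize.
    proj = (H.to(feat) @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3].clamp(min=1e-8)
    # grid_sample expects sampling locations normalized to [-1, 1].
    proj[:, 0] = proj[:, 0] / (w - 1) * 2 - 1
    proj[:, 1] = proj[:, 1] / (h - 1) * 2 - 1
    grid = proj.reshape(1, Hg, Wg, 2).expand(B, -1, -1, -1)
    # Ground cells that fall outside this camera's view are filled with zeros.
    return F.grid_sample(feat, grid, align_corners=True)
```

Because the warp is just differentiable resampling, gradients flow back through it into the per-view feature extractor, which is what makes the whole pipeline end-to-end learnable.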

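The second step, spatial reasoning on the fused ground-plane map, can be sketched similarly. The paper realizes its "large kernel" convolutions with a large receptive field, which the dilated 3x3 layers below follow in spirit; the layer widths, the two extra coordinate channels, and the peak threshold are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundPlaneHead(nn.Module):
    """Fuse warped per-view features and score pedestrian occupancy with
    dilated 3x3 convolutions, giving a large effective receptive field."""
    def __init__(self, num_views, feat_dim):
        super().__init__()
        in_ch = num_views * feat_dim + 2  # +2 for x/y coordinate channels
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 512, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(512, 1, 3, padding=4, dilation=4),
        )

    def forward(self, view_feats):  # list of [B, C, Hg, Wg], one per view
        B, _, Hg, Wg = view_feats[0].shape
        # Normalized ground-plane coordinates as two extra channels.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, Hg),
                                torch.linspace(-1, 1, Wg), indexing="ij")
        coords = torch.stack([xs, ys])[None].expand(B, -1, -1, -1)
        fused = torch.cat(view_feats + [coords.to(view_feats[0])], dim=1)
        return self.net(fused)  # [B, 1, Hg, Wg] occupancy logits

def detect_peaks(logits, thresh=0.4, k=5):
    """Infer locations from detection peaks: max-pool NMS keeps grid
    cells that are local maxima with probability above thresh."""
    prob = torch.sigmoid(logits)
    pooled = F.max_pool2d(prob, k, stride=1, padding=k // 2)
    return ((prob == pooled) & (prob > thresh)).nonzero()
```

For example, for a hypothetical 6-camera rig with 128-channel features, GroundPlaneHead(6, 128) applied to the six warped maps yields a score map from which detect_peaks reads off ground-plane detections.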

Datasets

Introduced in the Paper: MultiviewX
Used in the Paper: Wildtrack

Results from the Paper

Task                 Dataset     Model  Metric  Value  Global Rank
Multiview Detection  MultiviewX  MVDet  MODA    93.6   #5
Multiview Detection  MultiviewX  MVDet  MODP    79.6   #6
Multiview Detection  MultiviewX  MVDet  Recall  86.7   #5
Multiview Detection  Wildtrack   MVDet  MODA    88.2   #8
Multiview Detection  Wildtrack   MVDet  MODP    75.7   #7
Multiview Detection  Wildtrack   MVDet  Recall  93.6   #6
