Rethinking Spatial Invariance of Convolutional Networks for Object Counting

Previous work generally holds that improving the spatial invariance of convolutional networks is the key to object counting. However, after examining several mainstream counting networks, we surprisingly found that overly strict pixel-level spatial invariance causes overfitting to annotation noise during density map generation. In this paper, we replace the original convolution filters with locally connected Gaussian kernels to estimate the spatial positions in the density map. The intent is to let the feature extraction process stimulate the density map generation process so that it can overcome the annotation noise. Inspired by previous work, we propose a low-rank approximation accompanied by translation invariance to efficiently implement the approximation of massive Gaussian convolutions. Our work points to a new direction for follow-up research: how to properly relax the overly strict pixel-level spatial invariance for object counting. We evaluate our methods on 4 mainstream counting networks (i.e., MCNN, CSRNet, SANet, and ResNet-50), with extensive experiments on 7 popular benchmarks covering 3 applications (i.e., crowd, vehicle, and plant counting). Experimental results show that our methods significantly outperform other state-of-the-art methods and achieve promising learning of the spatial positions of objects.
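For intuition, the sketch below (not the authors' released code) illustrates the low-rank idea behind approximating Gaussian convolution cheaply: an isotropic 2D Gaussian kernel is rank-1, so the 2D convolution separates into two 1D passes. It assumes PyTorch; the function names `gaussian_1d` and `gaussian_blur_lowrank` are illustrative, not from the paper.

```python
# Minimal sketch of separable (low-rank) Gaussian convolution, a building
# block for smoothing predicted density maps. Illustrative only.
import torch
import torch.nn.functional as F

def gaussian_1d(sigma, radius):
    """1D Gaussian weights of length 2*radius + 1, normalized to sum to 1."""
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-(x ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def gaussian_blur_lowrank(density, sigma=2.0, radius=4):
    """Approximate a 2D Gaussian convolution as two separable 1D passes.

    density: (N, 1, H, W) tensor, e.g. a predicted density map.
    The separable form costs O(2k) per pixel instead of O(k^2).
    """
    g = gaussian_1d(sigma, radius)
    kh = g.view(1, 1, 1, -1)  # horizontal 1D kernel
    kv = g.view(1, 1, -1, 1)  # vertical 1D kernel
    out = F.conv2d(density, kh, padding=(0, radius))
    out = F.conv2d(out, kv, padding=(radius, 0))
    return out

# Sanity check: the separable result matches the full rank-1 2D kernel.
x = torch.rand(1, 1, 32, 32)
g = gaussian_1d(2.0, 4)
k2d = torch.outer(g, g).view(1, 1, 9, 9)
full = F.conv2d(x, k2d, padding=4)
assert torch.allclose(gaussian_blur_lowrank(x), full, atol=1e-5)
```

The separability here is one simple instance of a low-rank factorization; the paper's locally connected variant additionally lets the kernel parameters vary with spatial position.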


Results from the Paper


Task             Dataset         Model               Metric  Value  Global Rank
Crowd Counting   JHU-CROWD++     GauNet (ResNet-50)  MAE     58.2   #2
Crowd Counting   JHU-CROWD++     GauNet (ResNet-50)  MSE     245.1  #1
Crowd Counting   ShanghaiTech A  GauNet (ResNet-50)  MAE     54.8   #4
Crowd Counting   ShanghaiTech A  GauNet (ResNet-50)  MSE     89.1   #3
Crowd Counting   ShanghaiTech B  GauNet (ResNet-50)  MAE     6.2    #2
Object Counting  TRANCOS         GauNet (ResNet-50)  MAE     2.1    #1
Object Counting  TRANCOS         GauNet (ResNet-50)  MSE     2.6    #1
Crowd Counting   UCF CC 50       GauNet (ResNet-50)  MAE     186.3  #2
Crowd Counting   UCF-QNRF        GauNet (ResNet-50)  MAE     81.6   #6
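For reference, MAE and MSE above follow the standard crowd-counting convention, comparing predicted and ground-truth per-image counts; note that "MSE" in this literature conventionally denotes the root of the mean squared error:

```latex
% N test images; \hat{c}_i is the predicted count, c_i the ground truth.
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \left|\hat{c}_i - c_i\right|,
\qquad
\mathrm{MSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(\hat{c}_i - c_i\right)^2}
```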
