AutoScale: Learning to Scale for Crowd Counting and Localization

Recent works on crowd counting mainly leverage CNNs to count by regressing density maps, and have achieved significant progress. In the density map, each person is represented by a Gaussian blob, and the final count is obtained by integrating over the whole map. However, it is difficult to accurately predict the density map in dense regions. A major issue is that the density map in dense regions usually accumulates density values from many nearby Gaussian blobs, concentrating large and highly variable density values on a small set of pixels. This causes the density map to exhibit variant patterns with significant pattern shifts and leads to a long-tailed distribution of pixel-wise density values. We propose a simple and effective Learning to Scale (L2S) module, which automatically scales dense regions to reasonable closeness levels (reflecting image-plane distance between neighboring people). L2S directly normalizes the closeness in different patches so that it dynamically separates overlapped blobs and decomposes the accumulated values in the ground-truth density map, thereby alleviating the pattern shifts and the long-tailed distribution of density values. This helps the model better learn the density map. We also explore the effectiveness of L2S in localizing people by finding the local minima of the quantized distance map (w.r.t. the person location map). To the best of our knowledge, such a localization method is novel in localization-based crowd counting. We further introduce a customized dynamic cross-entropy loss, which significantly improves localization-based model optimization. Extensive experiments demonstrate that the proposed framework, termed AutoScale, improves upon some state-of-the-art methods on both regression and localization benchmarks across three crowded datasets and achieves very competitive performance on two sparse datasets.
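To make the localization idea concrete, below is a minimal sketch (not the authors' code) of how people could be localized as local minima of a predicted distance map, as described in the abstract. The function name, the neighborhood size, and the background threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def localize_from_distance_map(dist_map, window=3, max_dist=8.0):
    """Return (row, col) coordinates of predicted head positions.

    dist_map: 2D array where each pixel holds the (quantized) distance to the
              nearest person; people correspond to local minima of this map.
    window:   neighborhood size used to test for a local minimum (assumed).
    max_dist: minima with larger distance values are treated as background (assumed).
    """
    # A pixel is a candidate if it equals the minimum of its neighborhood.
    local_min = dist_map == minimum_filter(dist_map, size=window)
    # Suppress background minima whose distance value is too large.
    keep = local_min & (dist_map <= max_dist)
    rows, cols = np.nonzero(keep)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy usage: two people at (5, 5) and (20, 30) in a 32x32 ground-truth map.
yy, xx = np.mgrid[0:32, 0:32]
people = [(5, 5), (20, 30)]
dist_map = np.min([np.hypot(yy - r, xx - c) for r, c in people], axis=0)
print(localize_from_distance_map(dist_map))  # -> [(5, 5), (20, 30)]
```

In the actual method, `dist_map` would be the network's predicted quantized distance map rather than a ground-truth construction as in this toy example.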
