Logit Normalization for Long-tail Object Detection

31 Mar 2022  ·  Liang Zhao, Yao Teng, Limin Wang

Real-world data exhibiting skewed distributions pose a serious challenge to existing object detectors. Moreover, the samplers in detectors lead to shifted training label distributions, while the overwhelming proportion of background to foreground samples severely harms foreground classification. To mitigate these issues, in this paper we propose Logit Normalization (LogN), a simple technique that self-calibrates the classification logits of detectors in a manner similar to batch normalization. In general, LogN is training- and tuning-free (i.e., it requires no extra training or tuning process), model- and label-distribution-agnostic (i.e., it generalizes to different kinds of detectors and datasets), and plug-and-play (i.e., it can be applied directly without any bells and whistles). Extensive experiments on the LVIS dataset demonstrate the superior performance of LogN over state-of-the-art methods across various detectors and backbones. We also provide in-depth studies on different aspects of LogN. Further experiments on ImageNet-LT reveal its competitiveness and generalizability. LogN can serve as a strong baseline for long-tail object detection and is expected to inspire future research in this field. Code and trained models will be publicly available at https://github.com/MCG-NJU/LogN.
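
As a rough illustration of the idea, the sketch below applies a batch-normalization-style calibration to classification logits in PyTorch. The function name `logit_normalization`, the choice of per-class mean and standard deviation statistics, and the `eps` parameter are assumptions for illustration only, not the paper's exact formulation; see the official repository at https://github.com/MCG-NJU/LogN for the authors' method.

```python
import torch

def logit_normalization(logits: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Hypothetical sketch of BN-style logit calibration.

    Standardizes each class's logits using statistics estimated from the
    model's own predictions, analogous to how batch normalization
    standardizes activations. `logits` has shape (num_samples, num_classes).
    """
    # Per-class mean and standard deviation over the collected predictions.
    mean = logits.mean(dim=0, keepdim=True)
    std = logits.std(dim=0, keepdim=True)
    # Zero-mean, unit-variance logits per class; eps avoids division by zero.
    return (logits - mean) / (std + eps)

# Usage: collect raw classification logits from a trained detector, then
# calibrate them post hoc, without any retraining or hyperparameter tuning.
raw_logits = torch.randn(1000, 1203)  # e.g., LVIS has 1203 categories
calibrated = logit_normalization(raw_logits)
```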
