Improving Calibration for Long-Tailed Recognition

CVPR 2021  ·  Zhisheng Zhong, Jiequan Cui, Shu Liu, Jiaya Jia

Deep neural networks may perform poorly when training datasets are heavily class-imbalanced. Recently, two-stage methods decouple representation learning and classifier learning to improve performance. But there is still the vital issue of miscalibration. To address it, we design two methods to improve calibration and performance in such scenarios. Motivated by the fact that predicted probability distributions of classes are highly related to the numbers of class instances, we propose label-aware smoothing to deal with different degrees of over-confidence for classes and improve classifier learning. For dataset bias between these two stages due to different samplers, we further propose shifted batch normalization in the decoupling framework. Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets, including CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. Code will be available at https://github.com/Jia-Research-Lab/MiSLAS.
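The label-aware smoothing idea can be illustrated with a small sketch: classes with more training instances tend to be more over-confident, so they receive a stronger smoothing factor. The linear schedule below (interpolating between a hypothetical `eps_max` for the most frequent class and `eps_min` for the rarest) is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def label_aware_smoothing(class_counts, eps_max=0.3, eps_min=0.0):
    """Per-class smoothing factors: head classes (large counts) get more
    smoothing, tail classes get less. Linear schedule is an assumption."""
    counts = np.asarray(class_counts, dtype=float)
    n_max, n_min = counts.max(), counts.min()
    frac = (counts - n_min) / (n_max - n_min)  # 1 for head class, 0 for tail
    return eps_min + (eps_max - eps_min) * frac

def smoothed_targets(labels, eps_per_class, num_classes):
    """Soft targets: 1 - eps_y on the true class y, eps_y spread uniformly
    over the remaining classes."""
    targets = np.zeros((len(labels), num_classes))
    for i, y in enumerate(labels):
        eps = eps_per_class[y]
        targets[i] = eps / (num_classes - 1)
        targets[i, y] = 1.0 - eps
    return targets
```

Training the classifier stage with a cross-entropy loss against these soft targets, instead of one-hot labels, is what counteracts the class-dependent over-confidence.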

Task               | Dataset               | Model  | Metric         | Value | Global Rank
-------------------|-----------------------|--------|----------------|-------|------------
Long-tail Learning | CIFAR-100-LT (ρ=10)   | MiSLAS | Error Rate     | 36.8  | # 18
Long-tail Learning | CIFAR-100-LT (ρ=50)   | MiSLAS | Error Rate     | 47.7  | # 19
Long-tail Learning | CIFAR-100-LT (ρ=100)  | MiSLAS | Error Rate     | 53.0  | # 37
Long-tail Learning | CIFAR-10-LT (ρ=10)    | MiSLAS | Error Rate     | 10.0  | # 18
Long-tail Learning | CIFAR-10-LT (ρ=100)   | MiSLAS | Error Rate     | 17.9  | # 16
Long-tail Learning | ImageNet-LT           | MiSLAS | Top-1 Accuracy | 52.7  | # 44
Long-tail Learning | iNaturalist 2018      | MiSLAS | Top-1 Accuracy | 71.6% | # 25
