Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognitions

CVPR 2023 · Fei Du, Peng Yang, Qi Jia, Fengtao Nan, Xiaoting Chen, Yun Yang

In this paper, our goal is to design a simple learning paradigm for long-tail visual recognition that not only improves the robustness of the feature extractor but also alleviates the classifier's bias toward head classes, while reducing training tricks and overhead. We propose an efficient one-stage training strategy for long-tailed visual recognition called Global and Local Mixture Consistency cumulative learning (GLMC). Our core ideas are twofold: (1) a global and local mixture consistency loss improves the robustness of the feature extractor. Specifically, we generate two augmented batches from the same batch via global MixUp and local CutMix, respectively, and then use cosine similarity to minimize the difference between their representations. (2) A cumulative head-tail soft-label reweighted loss mitigates the head-class bias. We use empirical class frequencies to reweight the mixed labels of head and tail classes for long-tailed data, and then balance the conventional loss and the rebalanced loss with a coefficient accumulated over epochs. Our approach achieves state-of-the-art accuracy on the CIFAR10-LT, CIFAR100-LT, and ImageNet-LT datasets. Additional experiments on balanced ImageNet and CIFAR demonstrate that GLMC can significantly improve the generalization of backbones. Code is made publicly available at https://github.com/ynu-yangpeng/GLMC.
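
As a rough illustration of the two ideas described above, the PyTorch-style sketch below computes (1) a global-local mixture consistency term as the cosine distance between features of a MixUp view and a CutMix view of the same batch, and (2) a cumulative head-tail reweighted classification loss that blends a conventional soft-label cross-entropy with a class-frequency-reweighted one. This is only a sketch of the abstract's description, not the authors' released implementation (see the GitHub repository for that); the `model` interface, the `mixup_batch` / `cutmix_batch` helpers (sketched under Methods below), and the quadratic epoch schedule are assumptions.

```python
# Sketch of the GLMC training objective as described in the abstract; NOT the
# authors' official code. `model` is assumed to return (features, logits);
# `mixup_batch` / `cutmix_batch` are the augmentations sketched under Methods.
import torch
import torch.nn.functional as F

def glmc_objective(model, x, y_onehot, class_freq, epoch, max_epoch):
    # Two augmented views of the same batch: global MixUp and local CutMix.
    x_g, y_g = mixup_batch(x, y_onehot)
    x_l, y_l = cutmix_batch(x, y_onehot)
    feat_g, logit_g = model(x_g)
    feat_l, logit_l = model(x_l)

    # (1) Global-local mixture consistency: pull the two views' features
    # together by minimizing 1 - cosine similarity.
    consistency = 1.0 - F.cosine_similarity(feat_g, feat_l, dim=1).mean()

    # (2) Cumulative head-tail soft-label reweighting: blend a conventional
    # soft-label cross-entropy with one whose mixed labels are reweighted by
    # inverse empirical class frequency; the blending coefficient accumulates
    # with the epoch (a quadratic schedule is assumed here).
    w = 1.0 / class_freq                          # class_freq: per-class counts (float tensor)
    w = w / w.sum() * class_freq.numel()          # normalized inverse-frequency weights

    def soft_ce(logits, soft_labels):
        return -(soft_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

    def rebalance(soft_labels):
        r = soft_labels * w
        return r / r.sum(dim=1, keepdim=True)     # renormalized reweighted soft label

    plain = soft_ce(logit_g, y_g) + soft_ce(logit_l, y_l)
    rebal = soft_ce(logit_g, rebalance(y_g)) + soft_ce(logit_l, rebalance(y_l))
    beta = (epoch / max_epoch) ** 2               # cumulative coefficient in [0, 1]
    return (1 - beta) * plain + beta * rebal + consistency
```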

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Long-tail Learning | CIFAR-100-LT (ρ=10) | GLMC (ResNet-32, channel x4) | Error Rate | 26.53 | #6 |
| Long-tail Learning | CIFAR-100-LT (ρ=10) | GLMC+MaxNorm (ResNet-32, channel x4) | Error Rate | 25.72 | #5 |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | GLMC (ResNet-32, channel x4) | Error Rate | 42.01 | #8 |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | GLMC+MaxNorm (ResNet-32, channel x4) | Error Rate | 41.59 | #7 |
| Long-tail Learning | CIFAR-100-LT (ρ=50) | GLMC (ResNet-32, channel x4) | Error Rate | 36.15 | #6 |
| Long-tail Learning | CIFAR-10-LT (ρ=10) | GLMC+MaxNorm (ResNet-32, channel x4) | Error Rate | 5 | #1 |
| Long-tail Learning | CIFAR-10-LT (ρ=10) | GLMC (ResNet-32, channel x4) | Error Rate | 5.15 | #3 |
| Long-tail Learning | CIFAR-10-LT (ρ=100) | GLMC (ResNet-32, channel x4) | Error Rate | 11.50 | #4 |
| Long-tail Learning | CIFAR-10-LT (ρ=100) | GLMC+MaxNorm (ResNet-32, channel x4) | Error Rate | 10.42 | #1 |
| Long-tail Learning | ImageNet-LT | GLMC (ResNeXt-50) | Top-1 Accuracy | 56.3 | #30 |

Methods


CutMix • Mixup
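
For reference, here is a minimal sketch of the two augmentations GLMC builds on, assuming image batches shaped (B, C, H, W) and one-hot (or soft) labels; the Beta parameter and helper names are assumptions for illustration, not the repository's exact API.

```python
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Global MixUp: convex combination of the batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def cutmix_batch(x, y_onehot, alpha=1.0):
    """Local CutMix: paste a random rectangle from a shuffled copy of the batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    _, _, h, w = x.shape
    rh, rw = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - rh // 2, 0), min(cy + rh // 2, h)
    x1, x2 = max(cx - rw // 2, 0), min(cx + rw // 2, w)
    x_cut = x.clone()
    x_cut[:, :, y1:y2, x1:x2] = x[idx, :, y1:y2, x1:x2]
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)   # area-corrected mixing ratio
    return x_cut, lam_adj * y_onehot + (1 - lam_adj) * y_onehot[idx]
```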