MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition

Real-world training data usually exhibit a long-tailed distribution, where a few majority classes contain significantly more samples than the remaining minority classes. This imbalance degrades the performance of typical supervised learning algorithms designed for balanced training sets. In this paper, we address this issue by augmenting minority classes with the recently proposed implicit semantic data augmentation (ISDA) algorithm, which produces diversified augmented samples by translating deep features along many semantically meaningful directions. However, because ISDA estimates class-conditional statistics to obtain these semantic directions, it is ineffective on minority classes, whose training data are insufficient for reliable estimation. To this end, we propose a novel approach that automatically learns transformed semantic directions via meta-learning. Specifically, the augmentation strategy is dynamically optimized during training to minimize the loss on a small balanced validation set, which is approximated via a meta update step. Extensive empirical results on CIFAR-LT-10/100, ImageNet-LT, and iNaturalist 2017/2018 validate the effectiveness of our method.
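To make the augmentation step concrete, the sketch below implements the ISDA-style loss that the abstract builds on: instead of sampling augmented features explicitly, it minimizes an upper bound of the expected cross-entropy under Gaussian perturbations along class-conditional semantic directions, realized as a per-class quadratic correction to the logits. This is a minimal NumPy illustration, not the authors' implementation; the function name, argument layout, and the per-class covariance tensor `cov` are assumptions for demonstration.

```python
import numpy as np

def isda_loss(features, labels, W, b, cov, lam):
    """Upper bound of the expected cross-entropy under Gaussian semantic
    augmentation (ISDA-style sketch; names are illustrative, not the
    authors' API).

    features: (N, D) deep features
    labels:   (N,) integer class labels
    W, b:     (C, D), (C,) linear classifier parameters
    cov:      (C, D, D) class-conditional covariance estimates
    lam:      augmentation strength (lam=0 recovers plain cross-entropy)
    """
    logits = features @ W.T + b                    # (N, C)
    N, C = logits.shape
    aug = np.zeros_like(logits)
    for i in range(N):
        y = labels[i]
        diff = W - W[y]                            # (C, D): w_j - w_y per class j
        # quadratic correction (lam/2) * (w_j - w_y)^T Sigma_y (w_j - w_y)
        aug[i] = 0.5 * lam * np.einsum('jd,de,je->j', diff, cov[y], diff)
    z = logits + aug
    z = z - z.max(axis=1, keepdims=True)           # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(N), labels].mean()
```

The proposed method then treats the covariance-derived directions as learnable quantities: the outer meta step evaluates this training loss's effect on a small balanced validation set and updates the augmentation accordingly, so minority-class directions are no longer tied to their noisy empirical statistics.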

CVPR 2021
Task | Dataset | Model | Metric | Value | Global Rank
Long-tail Learning | CIFAR-100-LT (ρ=10) | MetaSAug-LDAM | Error Rate | 38.72 | #22
Long-tail Learning | CIFAR-100-LT (ρ=100) | MetaSAug-LDAM | Error Rate | 51.99 | #31
Long-tail Learning | CIFAR-100-LT (ρ=200) | MetaSAug-LDAM | Error Rate | 56.91 | #2
Long-tail Learning | CIFAR-100-LT (ρ=50) | MetaSAug-LDAM | Error Rate | 47.73 | #18
Long-tail Learning | CIFAR-10-LT (ρ=10) | MetaSAug-LDAM | Error Rate | 10.32 | #24
Long-tail Learning | CIFAR-10-LT (ρ=100) | MetaSAug-LDAM | Error Rate | 19.34 | #16
Long-tail Learning | CIFAR-10-LT (ρ=200) | MetaSAug-LDAM | Error Rate | 22.65 | #2
Long-tail Learning | CIFAR-10-LT (ρ=50) | MetaSAug-LDAM | Error Rate | 15.66 | #5
Long-tail Learning | ImageNet-LT | MetaSAug (ResNet-152) | Top-1 Accuracy | 50.03 | #48
Long-tail Learning | ImageNet-LT | MetaSAug with CE loss | Top-1 Accuracy | 47.39 | #49
Image Classification | iNaturalist | MetaSAug | Top-1 Accuracy | 63.28% | #10
Image Classification | iNaturalist 2018 | MetaSAug | Top-1 Accuracy | 68.75% | #37
Long-tail Learning | iNaturalist 2018 | MetaSAug | Top-1 Accuracy | 68.75% | #32

Methods


No methods listed for this paper.