Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision

20 Jul 2021  ·  Yifan Zhang, Bryan Hooi, Lanqing Hong, Jiashi Feng

Existing long-tailed recognition methods, which aim to train class-balanced models from long-tailed data, generally assume that the models will be evaluated on a uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being long-tailed or even inversely long-tailed), which causes existing methods to fail in real-world applications. In this work, we study a more practical task setting, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is unknown and can be skewed arbitrarily. Beyond the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test samples is unknown. To handle this task, we propose a new method, called Test-time Aggregating Diverse Experts (TADE), with two solution strategies: (1) a new skill-diverse expert learning strategy that trains diverse experts from a single long-tailed training distribution so that they excel at handling different class distributions; (2) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate multiple experts for handling various unknown test distributions. We theoretically show that our method has a provable ability to simulate the test class distribution. Extensive experiments verify that our method achieves new state-of-the-art performance on both vanilla and test-agnostic long-tailed recognition, where only three experts are sufficient to handle arbitrarily varied test class distributions. Code is available at https://github.com/Vanint/TADE-AgnosticLT.
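To make the first strategy concrete, below is a minimal PyTorch sketch of how cross-entropy with different degrees of logit adjustment can bias otherwise identical experts toward a forward (long-tailed), uniform, or inversely long-tailed class distribution. This is an assumption-laden illustration rather than the paper's exact losses: the adjustment strength `tau`, the helper name `logit_adjusted_ce`, and the specific `tau` values are introduced here for illustration only.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, labels, class_counts, tau):
    """Cross-entropy with logit adjustment (illustrative sketch).

    Adding tau * log(prior) to the logits shifts which class distribution the
    resulting expert favors:
      tau = 0 -> plain softmax CE (expert suited to the long-tailed prior),
      tau = 1 -> balanced-softmax-style loss (expert suited to a uniform prior),
      tau > 1 -> over-compensation (expert suited to an inverse prior).
    """
    prior = class_counts.float() / class_counts.sum()
    adjusted_logits = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted_logits, labels)

# Illustrative use: three experts share a backbone but receive different tau values,
# so each becomes skilled at a different (forward / uniform / inverse) distribution.
# expert_losses = [logit_adjusted_ce(head_k(features), labels, class_counts, tau_k)
#                  for head_k, tau_k in zip(expert_heads, (0.0, 1.0, 2.0))]
```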

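The second strategy, test-time aggregation via self-supervision, can likewise be pictured with a short sketch. Assuming frozen experts that return logits of shape (batch, classes), a stochastic `augment` transform, and an unlabeled test loader, the code below learns only the K aggregation weights by encouraging the aggregated predictions of two augmented views of the same test image to agree. The agreement term (inner product of the two softmax outputs) is a simplification of the paper's self-supervised stability objective, and names such as `tune_aggregation_weights` are hypothetical. Because only K scalars are tuned at test time, a small pool of experts (three in the paper) can cover arbitrarily skewed test distributions.

```python
import torch
import torch.nn.functional as F

def aggregate(expert_logits, w):
    # expert_logits: (K, B, C) stacked logits from K frozen experts;
    # w: (K,) learnable scores, normalized into mixture weights.
    return torch.einsum('k,kbc->bc', F.softmax(w, dim=0), expert_logits)

def tune_aggregation_weights(experts, test_loader, augment, lr=0.01):
    """Self-supervised test-time tuning (sketch): the experts stay frozen and
    only the K aggregation weights are optimized, so as to maximize prediction
    agreement between two augmented views of each unlabeled test image."""
    w = torch.zeros(len(experts), requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    for x, _ in test_loader:                      # test labels are never used
        v1, v2 = augment(x), augment(x)           # two stochastic views
        with torch.no_grad():                     # experts are not updated
            l1 = torch.stack([e(v1) for e in experts])   # (K, B, C)
            l2 = torch.stack([e(v2) for e in experts])
        p1 = F.softmax(aggregate(l1, w), dim=-1)
        p2 = F.softmax(aggregate(l2, w), dim=-1)
        loss = -(p1 * p2).sum(dim=-1).mean()      # maximize view agreement
        opt.zero_grad(); loss.backward(); opt.step()
    return F.softmax(w, dim=0)                    # final expert weights
```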
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Long-tail Learning | CIFAR-100-LT (ρ=10) | TADE | Error Rate | 36.4 | #2 |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | TADE | Error Rate | 50.2 | #3 |
| Long-tail Learning | CIFAR-100-LT (ρ=50) | TADE | Error Rate | 46.1 | #3 |
| Long-tail Learning | CIFAR-10-LT (ρ=10) | TADE | Error Rate | 9.2 | #1 |
| Long-tail Learning | CIFAR-10-LT (ρ=100) | TADE | Error Rate | 16.2 | #2 |
| Long-tail Learning | ImageNet-LT | TADE (ResNeXt101-32x4d) | Top-1 Accuracy | 61.4 | #7 |
| Long-tail Learning | ImageNet-LT | TADE (ResNeXt-50) | Top-1 Accuracy | 58.8 | #9 |
| Image Classification | iNaturalist 2018 | TADE (ResNet-50) | Top-1 Accuracy | 72.9% | #18 |
| Long-tail Learning | iNaturalist 2018 | TADE | Top-1 Accuracy | 72.9% | #11 |
| Long-tail Learning | iNaturalist 2018 | TADE (ResNet-152) | Top-1 Accuracy | 77% | #3 |
| Long-tail Learning | Places-LT | TADE | Top-1 Accuracy | 41.3 | #9 |
| Long-tail Learning | Places-LT | TADE | Top 1 Accuracy | 40.9 | #1 |

Methods

No methods listed for this paper.