Test-Agnostic Long-Tailed Recognition by Test-Time Aggregating Diverse Experts with Self-Supervision
Existing long-tailed recognition methods, which aim to train class-balanced models from long-tailed data, generally assume that the models will be evaluated on a uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., they may be long-tailed or even inversely long-tailed), which causes existing methods to fail in real-world applications. In this work, we study a more practical task setting, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is unknown and can be skewed arbitrarily. Beyond the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test samples is unidentified. To handle this task, we propose a new method, called Test-time Aggregating Diverse Experts (TADE), built on two strategies: (1) a new skill-diverse expert learning strategy that trains multiple experts from a single long-tailed training distribution, each excelling at a different class distribution; (2) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the experts for handling various unknown test distributions. We theoretically show that our method has a provable ability to simulate the test class distribution. Extensive experiments verify that our method achieves new state-of-the-art performance on both vanilla and test-agnostic long-tailed recognition, and that only three experts suffice to handle arbitrarily varied test class distributions. Code is available at https://github.com/Vanint/TADE-AgnosticLT.
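For intuition, the skill-diverse expert learning strategy can be illustrated with a logit-adjusted cross-entropy whose prior temperature determines which class distribution an expert specializes in. The following is a minimal PyTorch sketch; the exact expert losses in the paper differ in detail, and the function name and tau values here are illustrative assumptions, not the paper's definitions:

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_counts, tau):
    """Cross-entropy with class-prior logit adjustment (hypothetical helper).

    tau = 0 -> plain softmax CE: the expert specializes in the forward
               (long-tailed) training distribution.
    tau = 1 -> balanced-softmax-style loss: the expert specializes in the
               uniform distribution.
    tau = 2 -> over-corrects the prior: the expert skews toward the inverse
               long-tailed distribution.
    """
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    return F.cross_entropy(logits + tau * log_prior, targets)

# Toy usage: three experts over 5 classes with long-tailed training counts.
counts = torch.tensor([1000, 300, 100, 30, 10])
logits = torch.randn(8, 5)                    # one expert's logits for a batch
targets = torch.randint(0, 5, (8,))
losses = [logit_adjusted_ce(logits, targets, counts, tau)
          for tau in (0.0, 1.0, 2.0)]         # forward / uniform / backward
```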
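The test-time aggregation strategy can likewise be sketched as learning expert weights on unlabeled test data by maximizing prediction consistency between two augmented views of each sample, the kind of self-supervised signal the abstract refers to. This is a minimal sketch under assumed shapes; `learn_aggregation_weights` and its hyperparameters are illustrative, not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def learn_aggregation_weights(logits_view1, logits_view2, steps=100, lr=0.1):
    """Fit softmax-normalized expert weights on unlabeled test data.

    logits_view1, logits_view2: [K, N, C] logits from K frozen experts on
    two augmentations of the same N test images (assumed precomputed).
    The objective rewards aggregated predictions that stay consistent
    across views, a self-supervised proxy for fitting the unknown test
    class distribution.
    """
    num_experts = logits_view1.shape[0]
    w = torch.zeros(num_experts, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        alpha = torch.softmax(w, dim=0)                      # expert weights
        p1 = torch.einsum('k,knc->nc', alpha, logits_view1.softmax(-1))
        p2 = torch.einsum('k,knc->nc', alpha, logits_view2.softmax(-1))
        loss = -F.cosine_similarity(p1, p2, dim=-1).mean()   # maximize consistency
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(w, dim=0).detach()

# Toy usage: 3 experts, 16 test samples, 5 classes.
v1, v2 = torch.randn(3, 16, 5), torch.randn(3, 16, 5)
weights = learn_aggregation_weights(v1, v2)   # sums to 1 over the 3 experts
```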