The Unreasonable Effectiveness of the Class-reversed Sampling in Tail Sample Memorization

1 Jan 2021  ·  Benyi Hu, Chi Zhang, Yuehu Liu, Le Wang, Li Liu

Long-tailed visual recognition poses significant challenges to traditional machine learning and emerging deep networks due to its inherent class imbalance. A common belief is that tail classes, having few samples, cannot exhibit enough regularity for pattern extraction. Worse still, their limited cardinality may lead to low exposure of tail classes during training. Resampling methods, especially those that naively increase the exposure frequency of tail samples, eventually fail: head classes become under-represented and tail classes overfit. Arguing that long-tailed learning involves a trade-off between head-class pattern extraction and tail-class memorization, we propose a simple yet effective combinational sampling method, motivated by the recent success of a series of works on the memorization-generalization mechanism. We empirically demonstrate that naively switching from an instance-balanced sampler to a class-reversed sampler for the last several epochs of training helps neural networks better memorize low-regularity tail classes. In our experiments, the proposed method reaches state-of-the-art performance more efficiently than current methods on several datasets. Further experiments also validate the superior performance of the proposed sampling strategy, implying that the long-tailed learning trade-off can be effectively tackled in the memorization stage alone, with a small learning rate and over-exposure of tail samples.
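To make the two-stage schedule concrete, below is a minimal PyTorch sketch, assuming one common definition of class-reversed sampling in which class c is drawn with probability proportional to 1/N_c, the inverse of its training frequency. The names `train_set`, `model`, `criterion`, `total_epochs`, and `switch_epoch`, as well as the batch size and learning rates, are hypothetical placeholders, not the paper's reported configuration.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from collections import Counter

def class_reversed_sampler(labels):
    """Sampler whose class-sampling probability is inversely proportional
    to class frequency, so the rarest classes are drawn most often.

    With per-sample weight 1 / N_c**2, class c is drawn with probability
    proportional to N_c * (1 / N_c**2) = 1 / N_c.
    """
    counts = Counter(labels)
    weights = [1.0 / counts[y] ** 2 for y in labels]
    return WeightedRandomSampler(weights, num_samples=len(labels),
                                 replacement=True)

# --- hypothetical training setup; train_set/model/criterion are placeholders ---
labels = [train_set[i][1] for i in range(len(train_set))]

total_epochs, switch_epoch = 200, 190  # reversed sampling for the last ~10 epochs
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

for epoch in range(total_epochs):
    if epoch < switch_epoch:
        # Stage 1: instance-balanced (uniform) sampling for head-class
        # pattern extraction.
        loader = DataLoader(train_set, batch_size=128, shuffle=True)
    else:
        # Stage 2: class-reversed sampling with a small learning rate,
        # over-exposing tail samples for memorization.
        for g in optimizer.param_groups:
            g["lr"] = 1e-3
        loader = DataLoader(train_set, batch_size=128,
                            sampler=class_reversed_sampler(labels))

    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

Rebuilding the loader each epoch keeps the sketch short; the essential point from the abstract is that the reversed sampler and the reduced learning rate apply only in the final epochs, so head-class patterns extracted earlier are largely preserved while tail samples are over-exposed for memorization.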
