Learning to Balance with Incremental Learning

1 Jan 2021  ·  Joel Jang, Yoonjeon Kim, Jaewoo Kang

Classification tasks require a balanced distribution of data to ensure that the learner is trained to generalize over all classes. In realistic settings, however, the number of instances varies substantially among classes. This typically yields a learner biased toward the majority classes, which dominate training. Methods for handling imbalanced data are therefore crucial for alleviating distributional skew and fully utilizing under-represented classes. We propose a novel training method, Sequential Targeting, which enforces an incremental learning setting by splitting the data into mutually exclusive subsets and adaptively balancing the data distribution as tasks develop. To address the problems that arise within incremental learning, we combine our method with dropout and elastic weight consolidation. We demonstrate its effectiveness in a variety of experiments on both text and image datasets (IMDB, CIFAR-10, MNIST), where it outperforms traditional methods such as oversampling and undersampling.
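The abstract describes splitting an imbalanced dataset into mutually exclusive subsets whose class distribution is balanced "as tasks develop". The exact balancing schedule is not given in the abstract, so the sketch below is an assumption: the function name `sequential_targeting_splits` is hypothetical, and a linear interpolation from the empirical class distribution (first task) to a uniform one (final task) is assumed.

```python
import numpy as np

def sequential_targeting_splits(labels, n_tasks, seed=0):
    """Partition an imbalanced dataset into mutually exclusive task subsets
    whose class distribution moves from the original skew toward uniform.

    Illustrative sketch only: the paper's exact schedule is not specified
    in the abstract, so a linear interpolation is assumed here.
    """
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    props = counts / counts.sum()                      # empirical class proportions
    uniform = np.full(len(classes), 1.0 / len(classes))
    alphas = np.linspace(0.0, 1.0, n_tasks)            # 0 = original skew, 1 = balanced

    # Desired share of each class allocated to each task, normalized per class
    # so that every sample is used exactly once (mutual exclusivity).
    shares = np.stack([(1 - a) * props + a * uniform for a in alphas])
    shares /= shares.sum(axis=0, keepdims=True)

    task_indices = [[] for _ in range(n_tasks)]
    for c, cls in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        # Cumulative rounding turns per-task shares into exact index ranges.
        bounds = np.round(np.cumsum(shares[:, c]) * len(idx)).astype(int)
        for t, chunk in enumerate(np.split(idx, bounds[:-1])):
            task_indices[t].extend(chunk.tolist())
    return [np.array(t) for t in task_indices]
```

Elastic weight consolidation, which the abstract says is applied alongside the method, is an established regularizer (Kirkpatrick et al., 2017) that penalizes moving parameters deemed important for earlier tasks. Below is the standard penalty in PyTorch, not the paper's specific configuration; `fisher`, `old_params`, and `lam` are assumed inputs (diagonal Fisher estimates, a snapshot of parameters after the previous task, and the regularization strength).

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Standard EWC penalty: quadratic cost for moving parameters that carried
    high Fisher information on previously learned tasks.

    Added to the task loss during training on each subsequent subset, e.g.
    loss = task_loss + ewc_penalty(model, fisher, old_params, lam).
    """
    loss = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return lam / 2.0 * loss
```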
