Chameleon Sampling: Diverse and Pure Example Selection for Online Continual Learning with Noisy Labels

29 Sep 2021 · Jihwan Bang, Hyunseo Koh, Seulki Park, Hwanjun Song, Jung-Woo Ha, Jonghyun Choi

AI models suffer from continuously changing data distributions and noisy labels when applied to most real-world problems. Although many solutions address either continual learning or learning with noisy labels in isolation, tackling both issues together is important yet underexplored. Here, we address online continual learning with noisy labels, a more realistic, practical, and challenging continual learning setup in which the given labels may be noisy. Specifically, we argue that both the diversity and the purity of the examples in the episodic memory of a continual learning model matter. To balance diversity and purity in the memory, we propose to combine a novel memory management strategy with robust learning: we introduce a metric that balances the trade-off between diversity and purity in the episodic memory under label noise, and we split noisy examples into multiple groups using a Gaussian mixture model, then either refurbish their labels or apply unsupervised learning to address the label noise. We validate our approach on four real-world and synthetic benchmark datasets, namely the two CIFAR datasets, Clothing1M, and mini-WebVision, demonstrating significant improvements over representative methods in this challenging task setup.
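The abstract names a Gaussian mixture model as the mechanism for splitting noisy examples into groups. As a minimal sketch, assuming the split is driven by per-example training losses (a common proxy for label noise, where the low-loss mode tends to be clean), the hypothetical helper `split_by_loss` below fits a two-component mixture and partitions examples; the paper's exact inputs, number of groups, and thresholds may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_by_loss(losses, clean_threshold=0.5):
    """Partition example indices into clean/noisy groups via a 2-component GMM.

    losses: per-example loss values under the current model (1-D array-like).
    clean_threshold: assumed cutoff on the posterior of the low-loss component.
    Returns (clean_idx, noisy_idx, p_clean).
    """
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    # The component with the smaller mean loss is treated as the "clean" mode.
    clean_component = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_component]
    clean_idx = np.where(p_clean >= clean_threshold)[0]
    noisy_idx = np.where(p_clean < clean_threshold)[0]
    return clean_idx, noisy_idx, p_clean
```

In the spirit of the abstract, examples in the clean group could be trained on as-is (or kept in the episodic memory), while examples in the noisy group are either relabeled with model predictions ("refurbished") or handled with an unsupervised objective; the downstream handling here is an assumption, not the paper's verified procedure.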
