1 code implementation • 1 Dec 2023 • Maorong Wang, Nicolas Michel, Ling Xiao, Toshihiko Yamasaki
To this end, we propose Collaborative Continual Learning (CCL), a collaborative learning-based strategy to improve the model's capability to acquire new concepts.
no code implementations • 13 Sep 2023 • Nicolas Michel, Romain Negrel, Giovanni Chierchia, Jean-François Bercher
Continual Learning is challenging, especially in unsupervised scenarios such as Unsupervised Online General Continual Learning (UOGCL), where the learning agent has no prior knowledge of class boundaries and receives no task-change information.
no code implementations • 6 Sep 2023 • Nicolas Michel, Maorong Wang, Ling Xiao, Toshihiko Yamasaki
While Knowledge Distillation (KD) has been extensively used in offline Continual Learning, it remains under-exploited in Online Continual Learning (OCL), despite its potential.
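The specific distillation scheme the paper applies to OCL is not detailed in this snippet; as an illustrative baseline only, a minimal sketch of the standard temperature-scaled KD loss (Hinton-style), written here in plain NumPy:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax, stabilized by subtracting the row max.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # averaged over the batch and scaled by T^2 (the usual convention
    # so gradients keep a comparable magnitude across temperatures).
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return kl.mean() * T * T
```

When student and teacher agree exactly, the loss is zero; any disagreement yields a strictly positive penalty.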
no code implementations • 1 Sep 2023 • Nicolas Michel, Giovanni Chierchia, Romain Negrel, Jean-François Bercher, Toshihiko Yamasaki
This scenario, known as Continual Learning (CL), poses challenges to standard learning algorithms, which struggle to maintain knowledge of old tasks while learning new ones.
1 code implementation • 6 Jun 2023 • Nicolas Michel, Giovanni Chierchia, Romain Negrel, Jean-François Bercher
We propose to use the angular Gaussian distribution, which corresponds to a Gaussian projected onto the unit sphere, and we derive the associated loss function.
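The derived loss function itself is not given in this snippet, but the underlying distribution is easy to illustrate: an angular Gaussian (projected normal) sample is an ordinary Gaussian sample radially projected onto the unit sphere. A minimal sketch, with the mean and covariance chosen arbitrarily for illustration:

```python
import numpy as np

def sample_angular_gaussian(mu, cov, n, rng):
    # Draw x ~ N(mu, cov) in R^d, then project onto the unit sphere:
    # z = x / ||x||. The distribution of z is the angular Gaussian.
    x = rng.multivariate_normal(mu, cov, size=n)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
mu = np.array([2.0, 0.0])       # illustrative mean direction
cov = np.eye(2)                 # illustrative isotropic covariance
z = sample_angular_gaussian(mu, cov, 1000, rng)
```

Every sample lies exactly on the unit circle, while the concentration of mass around the mean direction is governed by ||mu|| relative to the covariance.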
1 code implementation • 12 Jul 2022 • Nicolas Michel, Romain Negrel, Giovanni Chierchia, Jean-François Bercher
We study Online Continual Learning with missing labels and propose SemiCon, a new contrastive loss designed for partly labeled data.
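The exact SemiCon objective is not reproduced in this snippet. As a hedged point of reference, a SupCon-style contrastive loss restricted to the labeled subset of a partly labeled batch (unlabeled samples, marked with label -1, still act as negatives in the denominator but are skipped as anchors) can be sketched as:

```python
import numpy as np

def labeled_contrastive_loss(z, labels, tau=0.1):
    # z: (n, d) L2-normalized embeddings; labels: (n,), -1 = unlabeled.
    # Illustrative SupCon-style loss on the labeled subset only; the
    # actual SemiCon loss also exploits unlabeled data differently.
    sim = z @ z.T / tau
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss, count = 0.0, 0
    for i in range(n):
        if labels[i] < 0:
            continue                        # unlabeled anchors skipped
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue                        # no positive pair available
        loss += -log_prob[i, pos].mean()
        count += 1
    return loss / max(count, 1)
```

Each labeled anchor is pulled toward same-class embeddings and pushed from all others, which is the general shape a partly-labeled contrastive objective takes.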