no code implementations • 13 Sep 2023 • Nicolas Michel, Romain Negrel, Giovanni Chierchia, Jean-François Bercher
Continual Learning has been challenging, especially when dealing with unsupervised scenarios such as Unsupervised Online General Continual Learning (UOGCL), where the learning agent has no prior knowledge of class boundaries or task change information.
no code implementations • 1 Sep 2023 • Nicolas Michel, Giovanni Chierchia, Romain Negrel, Jean-François Bercher, Toshihiko Yamasaki
This scenario, known as Continual Learning (CL), poses challenges to standard learning algorithms, which struggle to maintain knowledge of old tasks while learning new ones.
1 code implementation • 6 Jun 2023 • Nicolas Michel, Giovanni Chierchia, Romain Negrel, Jean-François Bercher
We propose to use the angular Gaussian distribution, which corresponds to a Gaussian projected onto the unit sphere, and derive the associated loss function.
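The angular Gaussian (projected normal) distribution is defined exactly as the abstract states: draw from a multivariate Gaussian and project onto the unit sphere. A minimal sketch of sampling from it follows directly from that definition; the function name and parameters are illustrative, and this is the distribution itself, not the loss function the paper derives from it.

```python
import numpy as np

def sample_angular_gaussian(mu, cov, n, seed=None):
    """Sample n points from the angular Gaussian (projected normal):
    draw x ~ N(mu, cov) in R^d, then project u = x / ||x|| onto the
    unit sphere S^{d-1}."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mu, cov, size=n)
    return x / np.linalg.norm(x, axis=1, keepdims=True)
```

A concentrated Gaussian far from the origin (e.g. `mu = [3, 0, 0]`, small covariance) yields directions tightly clustered around `mu / ||mu||`, which is what makes the projected form attractive for directional (unit-norm) feature representations.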
no code implementations • 31 Jan 2023 • Pierre Larrenie, Jean-François Bercher, Olivier Venard, Iyad Lahsen-Cherif
Software-Defined Networks have opened the door to statistical and AI-based techniques for improving the efficiency of networking.
no code implementations • 31 Jan 2023 • Pierre Larrenie, Jean-François Bercher, Olivier Venard, Iyad Lahsen-Cherif
In this paper, we improve our previously proposed low-cost estimators [6] by making the dimension adaptive, and show that performance is only minimally affected while gaining the ability to track time-varying networks.
1 code implementation • 12 Jul 2022 • Nicolas Michel, Romain Negrel, Giovanni Chierchia, Jean-François Bercher
We study Online Continual Learning with missing labels and propose SemiCon, a new contrastive loss designed for partly labeled data.
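The paper's exact SemiCon objective is defined in the publication itself; the following is only a generic sketch of the underlying idea of a contrastive loss on partly labeled data: use label agreement as the positive criterion where labels exist, and fall back to augmented-view pairs for unlabeled samples. The function name, two-view batch layout, and temperature are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mixed_contrastive_loss(z, labels, tau=0.1):
    """Toy contrastive loss on partly labeled data (sketch, NOT SemiCon).

    z      : (n, d) L2-normalized embeddings; rows i and (i + n//2) % n
             are two augmented views of the same image.
    labels : (n,) integer class labels, with -1 marking unlabeled samples.
    """
    n = z.shape[0]
    sim = z @ z.T / tau                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    # positives: same known label ...
    pos = (labels[:, None] == labels[None, :]) & (labels[:, None] != -1)
    # ... plus, for every anchor, its other augmented view
    other_view = (np.arange(n) + n // 2) % n
    pos[np.arange(n), other_view] = True
    np.fill_diagonal(pos, False)
    # log-softmax over each row, then average over the positive set
    log_prob = sim - np.logaddexp.reduce(sim, axis=1, keepdims=True)
    per_anchor = -np.where(pos, log_prob, 0.0).sum(1) / pos.sum(1)
    return per_anchor.mean()
```

Unlabeled anchors thus reduce to instance discrimination (their only positive is the other view), while labeled anchors pull together all samples of their class.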
no code implementations • 27 May 2013 • Jean-François Bercher
The Cramér-Rao inequality shows that the generalized $q$-Gaussians also minimize the generalized Fisher information among distributions with a fixed moment.
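For orientation, the standard $q=1$ case of this statement is the classical Cramér-Rao/Stam inequality: among densities $f$ with fixed variance $\sigma^{2}$, the (nonparametric, location) Fisher information is bounded below,
$$
I[f] \;=\; \int \frac{\bigl(f'(x)\bigr)^{2}}{f(x)}\,\mathrm{d}x \;\ge\; \frac{1}{\sigma^{2}},
$$
with equality if and only if $f$ is Gaussian. The paper's result replaces the Gaussian by the generalized $q$-Gaussian and the Fisher information by its generalized counterpart.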
no code implementations • 27 May 2013 • Jean-François Bercher
We propose a modified $\chi^{\beta}$-divergence, give some of its properties, and show that this leads to the definition of a generalized Fisher information.
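The classical identity being generalized here is the $\beta=2$ case, where the ordinary $\chi^{2}$-divergence between a parametric density and its perturbation recovers the Fisher information in the small-perturbation limit:
$$
I(\theta) \;=\; \lim_{\varepsilon\to 0}\,\frac{1}{\varepsilon^{2}}\,
\chi^{2}\!\bigl(p_{\theta+\varepsilon}\,\big\|\,p_{\theta}\bigr),
\qquad
\chi^{2}(p\,\|\,q) \;=\; \int \frac{(p-q)^{2}}{q}\,\mathrm{d}x .
$$
The modified $\chi^{\beta}$-divergence plays the same role for the generalized Fisher information.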