1 code implementation • 19 Feb 2024 • Robin Louiset, Edouard Duchesnay, Antoine Grigis, Pietro Gori
Then, we motivate a novel Mutual Information minimization strategy to prevent information leakage between common and salient distributions.
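As a purely illustrative sketch (not the authors' estimator), one cheap proxy for dependence between common and salient codes is the squared cross-correlation between the two latent batches: penalizing it during training pushes the codes toward decorrelation, a weak surrogate for driving their mutual information to zero. All names below are hypothetical.

```python
import numpy as np

def cross_correlation_penalty(common, salient):
    """Squared cross-correlation between common and salient latent batches.

    After standardizing each latent dimension, the Frobenius norm of the
    cross-correlation matrix is zero iff the codes are (linearly)
    decorrelated -- a simple proxy for preventing information leakage.
    """
    c = (common - common.mean(0)) / (common.std(0) + 1e-8)
    s = (salient - salient.mean(0)) / (salient.std(0) + 1e-8)
    corr = c.T @ s / len(c)          # (d_common, d_salient) correlation matrix
    return float((corr ** 2).sum())  # ~0 when the two codes share no info

rng = np.random.default_rng(0)
z_common = rng.normal(size=(256, 4))
z_salient_indep = rng.normal(size=(256, 2))                       # no leakage
z_salient_leaky = z_common[:, :2] + 0.1 * rng.normal(size=(256, 2))  # leaks
assert cross_correlation_penalty(z_common, z_salient_indep) \
     < cross_correlation_penalty(z_common, z_salient_leaky)
```

Note this only captures linear dependence; the paper's MI minimization strategy is more general.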
1 code implementation • 31 Jan 2024 • Florence Carton, Robin Louiset, Pietro Gori
Experimental results on four visual datasets, from simple synthetic examples to complex medical images, show that the proposed method outperforms SOTA CA-VAEs in terms of latent separation and image quality.
1 code implementation • 12 Jul 2023 • Robin Louiset, Edouard Duchesnay, Antoine Grigis, Benoit Dufumier, Pietro Gori
Contrastive Analysis VAEs (CA-VAEs) are a family of variational auto-encoders (VAEs) that aim at separating the factors of variation common to a background dataset (BG) (i.e., healthy subjects) and a target dataset (TG) (i.e., patients) from the factors that only exist in the target dataset.
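A minimal sketch of the latent-partition idea behind CA-VAEs (illustrative names, not the authors' code): each sample is encoded into a common part and a salient part, and the salient part is forced to an uninformative value (here, zero) for background samples, so that target-only variation must flow through the salient code.

```python
import numpy as np

def partition_latent(z, n_common):
    """Split a latent batch into common and salient parts."""
    return z[:, :n_common], z[:, n_common:]

def apply_background_prior(z, n_common, is_background):
    """Zero out the salient code for background (e.g., healthy) samples,
    mimicking the CA-VAE constraint that background data carry no
    target-specific factors of variation."""
    z = z.copy()
    z[is_background, n_common:] = 0.0
    return z

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 6))                    # 4 samples, 6 latent dims
is_bg = np.array([True, False, True, False])   # BG vs. TG samples
z_c = apply_background_prior(z, n_common=4, is_background=is_bg)
assert np.all(z_c[is_bg, 4:] == 0.0)      # BG salient code is zeroed
assert np.all(z_c[~is_bg] == z[~is_bg])   # TG samples are untouched
```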
1 code implementation • 3 Jun 2022 • Benoit Dufumier, Carlo Alberto Barbano, Robin Louiset, Edouard Duchesnay, Pietro Gori
To this end, we use kernel theory to propose a novel loss, called decoupled uniformity, that i) allows the integration of prior knowledge and ii) removes the negative-positive coupling in the original InfoNCE loss.
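A rough numpy sketch of a decoupled-uniformity-style term, under the assumption that each instance is summarized by the centroid of its positive views and that centroids are spread on the sphere via a Gaussian pairwise potential; because positives enter only through their own centroid, the negative-positive coupling of InfoNCE disappears. This is one reading of the idea, not the paper's exact loss.

```python
import numpy as np

def decoupled_uniformity(views):
    """Sketch of a decoupled-uniformity-style loss (assumed form).

    views: array of shape (N, V, d) -- N instances with V positive views.
    Each instance's views are averaged into a centroid; the loss is the
    log-mean Gaussian potential between distinct centroids, which is
    minimized when centroids are uniformly spread on the unit sphere.
    """
    mu = views.mean(axis=1)                                   # (N, d) centroids
    mu = mu / np.linalg.norm(mu, axis=1, keepdims=True)       # project to sphere
    sq = ((mu[:, None, :] - mu[None, :, :]) ** 2).sum(-1)     # pairwise dists^2
    off_diag = ~np.eye(len(mu), dtype=bool)
    return float(np.log(np.exp(-sq[off_diag]).mean()))

rng = np.random.default_rng(0)
spread = rng.normal(size=(8, 2, 16))                          # well-spread reps
collapsed = np.ones((8, 2, 16)) + 0.01 * rng.normal(size=(8, 2, 16))
assert decoupled_uniformity(spread) < decoupled_uniformity(collapsed)
```

Prior knowledge would enter through the kernel used to compare instances; the Gaussian potential above is the simplest choice.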
1 code implementation • 5 Jul 2021 • Robin Louiset, Pietro Gori, Benoit Dufumier, Josselin Houenou, Antoine Grigis, Edouard Duchesnay
Our method is generic: it can integrate any clustering method and can be driven by both binary classification and regression.