Search Results for author: Edouard Duchesnay

Found 11 papers, 6 papers with code

Separating common from salient patterns with Contrastive Representation Learning

1 code implementation · 19 Feb 2024 · Robin Louiset, Edouard Duchesnay, Antoine Grigis, Pietro Gori

Then, we motivate a novel Mutual Information minimization strategy to prevent information leakage between common and salient distributions.
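
As a loose illustration of that idea, one simple proxy for reducing the dependence between the two latent blocks is to penalize their cross-covariance over a batch; the sketch below is a hypothetical surrogate for mutual-information minimization, not the estimator used in the paper.

    import torch

    def cross_covariance_penalty(z_common, z_salient):
        # Center each latent block over the batch, then penalize the squared
        # cross-covariance between them. This is only a crude surrogate for
        # mutual-information minimization, shown for illustration.
        zc = z_common - z_common.mean(dim=0, keepdim=True)
        zs = z_salient - z_salient.mean(dim=0, keepdim=True)
        cov = zc.t() @ zs / (zc.shape[0] - 1)   # (dim_common, dim_salient)
        return (cov ** 2).sum()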

Contrastive Learning · Representation Learning

SepVAE: a contrastive VAE to separate pathological patterns from healthy ones

1 code implementation · 12 Jul 2023 · Robin Louiset, Edouard Duchesnay, Antoine Grigis, Benoit Dufumier, Pietro Gori

Contrastive Analysis VAEs (CA-VAEs) are a family of Variational Auto-Encoders (VAEs) that aim at separating the factors of variation common to a background dataset (BG) (i.e., healthy subjects) and a target dataset (TG) (i.e., patients) from those that only exist in the target dataset.
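
A minimal, hypothetical sketch of the core Contrastive Analysis mechanism is given below: a shared encoder outputs a common latent block and a salient latent block, and the salient block is zeroed for background samples so that target-only variation has to be captured there. All names, sizes and architectural choices are illustrative assumptions, not taken from the paper's code.

    import torch
    import torch.nn as nn

    class TinyCAVAE(nn.Module):
        # Illustrative only: the decoder reconstructs from the concatenation
        # of the common latent zc and the salient latent zs; zs is zeroed
        # for background (healthy) samples.
        def __init__(self, x_dim=64, zc_dim=8, zs_dim=4):
            super().__init__()
            self.zc_dim, self.zs_dim = zc_dim, zs_dim
            self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * (zc_dim + zs_dim)))
            self.dec = nn.Sequential(nn.Linear(zc_dim + zs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, x_dim))

        def forward(self, x, is_background):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            zc, zs = z[:, :self.zc_dim], z[:, self.zc_dim:]
            zs = torch.where(is_background[:, None], torch.zeros_like(zs), zs)
            return self.dec(torch.cat([zc, zs], dim=-1)), mu, logvar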

Contrastive learning for regression in multi-site brain age prediction

no code implementations · 14 Nov 2022 · Carlo Alberto Barbano, Benoit Dufumier, Edouard Duchesnay, Marco Grangetto, Pietro Gori

Building accurate Deep Learning (DL) models for brain age prediction is a very relevant topic in neuroimaging, as it could help better understand neurodegenerative disorders and find new biomarkers.

Contrastive Learning · regression

Integrating Prior Knowledge in Contrastive Learning with Kernel

1 code implementation · 3 Jun 2022 · Benoit Dufumier, Carlo Alberto Barbano, Robin Louiset, Edouard Duchesnay, Pietro Gori

To this end, we use kernel theory to propose a novel loss, called decoupled uniformity, that i) allows the integration of prior knowledge and ii) removes the negative-positive coupling in the original InfoNCE loss.
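
One rough reading of that idea is a uniformity term computed on the centroid of the positive views of each sample, so that the repulsion acts between centroids rather than coupling negatives to individual positives. The sketch below is an illustrative simplification under that reading, not necessarily the exact loss proposed in the paper.

    import torch

    def decoupled_uniformity(z_views):
        # z_views: (n_samples, n_views, dim) embeddings. Positive views of a
        # sample are averaged into one centroid, then a Gaussian-potential
        # uniformity term is computed over the centroids only.
        mu = torch.nn.functional.normalize(z_views.mean(dim=1), dim=-1)
        sq_dists = torch.cdist(mu, mu) ** 2                 # ||mu_i - mu_j||^2
        off_diag = ~torch.eye(mu.shape[0], dtype=torch.bool, device=mu.device)
        return torch.log(torch.exp(-sq_dists[off_diag]).mean())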

Contrastive Learning · Data Augmentation

Conditional Alignment and Uniformity for Contrastive Learning with Continuous Proxy Labels

no code implementations · 10 Nov 2021 · Benoit Dufumier, Pietro Gori, Julie Victor, Antoine Grigis, Edouard Duchesnay

However, a particularity of medical images is the availability of meta-data (such as age or sex) that can be exploited for learning representations.
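
One simple way to exploit such continuous meta-data is to treat label proximity as a degree of "positiveness", weighting the alignment between two embeddings by a kernel on their label difference. The sketch below is a hypothetical example; the RBF kernel and the bandwidth sigma are illustrative choices rather than the paper's exact formulation.

    import torch

    def kernel_weighted_alignment(z, y, sigma=5.0):
        # z: (n, dim) L2-normalized embeddings; y: (n,) continuous labels
        # such as age. Pairs with close labels receive a large RBF weight
        # and are pulled together; distant labels contribute little.
        w = torch.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))
        w = w * (1 - torch.eye(len(y), device=z.device))    # drop self-pairs
        sq_dists = torch.cdist(z, z) ** 2
        return (w * sq_dists).sum() / w.sum()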

Contrastive Learning

Structured Sparse Principal Components Analysis with the TV-Elastic Net penalty

no code implementations · 6 Sep 2016 · Amicie de Pierrefeu, Tommy Löfstedt, Fouad Hadj-Selem, Mathieu Dubois, Philippe Ciuciu, Vincent Frouin, Edouard Duchesnay

However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population.

Continuation of Nesterov's Smoothing for Regression with Structured Sparsity in High-Dimensional Neuroimaging

no code implementations · 31 May 2016 · Fouad Hadj-Selem, Tommy Lofstedt, Elvis Dohmatob, Vincent Frouin, Mathieu Dubois, Vincent Guillemot, Edouard Duchesnay

Nesterov's smoothing technique can be used to minimize a large number of non-smooth convex structured penalties, but reasonable precision requires a small smoothing parameter, which slows down the convergence speed.
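
As a concrete illustration (not the authors' solver), a 1D total-variation penalty TV(w) = sum_i |w_{i+1} - w_i| can be smoothed à la Nesterov by replacing it with max over |a_i| <= 1 of a^T (D w) - (mu/2) ||a||^2, which is differentiable with an explicit gradient; the smaller mu, the tighter the approximation but the larger the gradient's Lipschitz constant (of order 1/mu), which is exactly the trade-off a continuation scheme on mu addresses.

    import numpy as np

    def smoothed_tv(w, mu=0.1):
        # Nesterov-smoothed 1D total variation. The maximizer of the smooth
        # surrogate has the closed form a* = clip(D w / mu, -1, 1); the
        # function returns the smoothed value and its gradient D^T a*.
        dw = np.diff(w)                          # D w (forward differences)
        a_star = np.clip(dw / mu, -1.0, 1.0)
        value = a_star @ dw - 0.5 * mu * (a_star ** 2).sum()
        grad = np.zeros(w.shape, dtype=float)
        grad[:-1] -= a_star                      # accumulate D^T a*
        grad[1:] += a_star
        return value, grad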

regression

Predictive support recovery with TV-Elastic Net penalty and logistic regression: an application to structural MRI

no code implementations · 21 Jul 2014 · Mathieu Dubois, Fouad Hadj-Selem, Tommy Lofstedt, Matthieu Perrot, Clara Fischer, Vincent Frouin, Edouard Duchesnay

This algorithm uses Nesterov's smoothing technique to approximate the TV penalty with a smooth function such that the loss and the penalties are minimized with an exact accelerated proximal gradient algorithm.
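
A condensed, illustrative sketch of such a scheme is given below: a FISTA-style loop whose smooth part combines the logistic loss, an l2 ridge term and the Nesterov-smoothed TV term (reusing the smoothed_tv helper sketched above), while the l1 part is handled exactly by soft-thresholding. The fixed step size and hyper-parameters are ad hoc assumptions, not the computed Lipschitz constants of the paper.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fista_logistic_tv_l1(X, y, lam_l1=0.1, lam_l2=0.1, lam_tv=0.1,
                             mu=0.1, step=1e-3, n_iter=500):
        # X: (n, p) design matrix, y: (n,) binary labels in {0, 1}.
        n, p = X.shape
        w = np.zeros(p); w_prev = w.copy(); z = w.copy(); t = 1.0
        for _ in range(n_iter):
            prob = 1.0 / (1.0 + np.exp(-X @ z))              # sigmoid(X z)
            grad = X.T @ (prob - y) / n + lam_l2 * z + lam_tv * smoothed_tv(z, mu)[1]
            w_prev, w = w, soft_threshold(z - step * grad, step * lam_l1)
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = w + (t - 1.0) / t_next * (w - w_prev)        # momentum extrapolation
            t = t_next
        return w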
