no code implementations • 19 Feb 2024 • Louis Ohl, Pierre-Alexandre Mattei, Mickaël Leclercq, Arnaud Droit, Frédéric Precioso
Trees are convenient models for obtaining explainable predictions on relatively small datasets.
1 code implementation • 29 Nov 2023 • Pierre-Alexandre Mattei, Damien Garreau
More precisely, in that case, the average loss of the ensemble is a decreasing function of the number of models.
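The monotonicity can be illustrated with a toy sketch (not the paper's setting): M unbiased predictors with i.i.d. Gaussian errors, combined by averaging, evaluated under the convex squared loss. The expected loss of the M-model ensemble is then 1/M, strictly decreasing in M. All names and the noise model here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: each "model" predicts y plus i.i.d. N(0, 1) noise, and the
# ensemble averages the M individual predictions. Under squared (convex)
# loss, the expected ensemble loss is 1/M, so it decreases with M.
y = 1.0
n_trials = 200_000
errors = rng.normal(0.0, 1.0, size=(n_trials, 16))  # 16 models per trial

losses = []
for M in (1, 2, 4, 8, 16):
    ensemble_pred = y + errors[:, :M].mean(axis=1)
    losses.append(np.mean((ensemble_pred - y) ** 2))  # ≈ 1/M
```

With a non-convex loss the same averaging scheme offers no such guarantee, which is the distinction the paper's result turns on.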
no code implementations • 6 Sep 2023 • Louis Ohl, Pierre-Alexandre Mattei, Charles Bouveyron, Warith Harchaoui, Mickaël Leclercq, Arnaud Droit, Frédéric Precioso
Over the last decade, successes in deep clustering have largely relied on mutual information (MI) as an unsupervised objective for training neural networks, with increasingly strong regularisation.
no code implementations • 17 Apr 2023 • Irene Balelli, Aude Sportisse, Francesco Cremonesi, Pierre-Alexandre Mattei, Marco Lorenzi
In addition, thanks to the variational nature of Fed-MIWAE, our method is designed to perform multiple imputation, allowing for the quantification of the imputation uncertainty in the federated scenario.
no code implementations • 15 Feb 2023 • Aude Sportisse, Hugo Schmutz, Olivier Humbert, Charles Bouveyron, Pierre-Alexandre Mattei
Semi-supervised learning is a powerful technique for leveraging unlabeled data to improve machine learning models, but it can be affected by the presence of "informative" labels, which occur when some classes are more likely to be labeled than others.
no code implementations • 7 Feb 2023 • Louis Ohl, Pierre-Alexandre Mattei, Charles Bouveyron, Mickaël Leclercq, Arnaud Droit, Frédéric Precioso
Feature selection in clustering is a hard task that involves simultaneously discovering the relevant clusters and the variables that are relevant with respect to those clusters.
no code implementations • 6 Dec 2022 • Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem.
1 code implementation • 12 Oct 2022 • Louis Ohl, Pierre-Alexandre Mattei, Charles Bouveyron, Warith Harchaoui, Mickaël Leclercq, Arnaud Droit, Frédéric Precioso
Over the last decade, successes in deep clustering have largely relied on mutual information (MI) as an unsupervised objective for training neural networks, with increasingly strong regularisation.
no code implementations • 2 May 2022 • Melissa Sanabria, Frédéric Precioso, Pierre-Alexandre Mattei, Thomas Menguy
The results show that our method can detect the actions of the match, identify which of these actions belong in the summary, and then propose multiple candidate summaries that are similar enough yet exhibit relevant variability, offering different options to the final editor.
2 code implementations • 14 Mar 2022 • Hugo Schmutz, Olivier Humbert, Pierre-Alexandre Mattei
Our debiasing approach is straightforward to implement and applicable to most deep SSL methods.
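The debiasing idea can be sketched with a hypothetical estimator (an assumption for illustration, not necessarily the paper's exact construction): add a surrogate loss on unlabelled data and subtract the same surrogate evaluated on the labelled sample, so the correction term has zero mean and the estimator stays unbiased for the supervised risk whatever surrogate is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical debiased risk estimate over many simulated repetitions:
#   R_hat = mean(sup loss, labelled)
#           + lam * ( mean(surrogate, unlabelled) - mean(surrogate, labelled) )
# Labelled and unlabelled surrogate losses share the same distribution,
# so the correction averages to zero and E[R_hat] equals the supervised risk.
n, m, n_rep = 10, 50, 50_000
sup_lab = rng.exponential(1.0, size=(n_rep, n))     # supervised loss, mean 1.0
surr_lab = rng.normal(2.0, 1.0, size=(n_rep, n))    # surrogate on labelled
surr_unlab = rng.normal(2.0, 1.0, size=(n_rep, m))  # surrogate on unlabelled

lam = 1.0
debiased = sup_lab.mean(axis=1) + lam * (surr_unlab.mean(axis=1)
                                         - surr_lab.mean(axis=1))
# Across repetitions the debiased estimate averages to the supervised risk.
```

The point of the sketch is that unbiasedness holds for any choice of surrogate and any lam, which is what makes such estimators safe to plug into standard SSL pipelines.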
no code implementations • 2 Mar 2022 • Federico Bergamin, Pierre-Alexandre Mattei, Jakob D. Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Søren Hauberg, Jes Frellsen
These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model.
no code implementations • 26 Jan 2022 • Pierre-Alexandre Mattei, Jes Frellsen
Inspired by this simple monotonicity theorem, we present a series of nonasymptotic results that link properties of Monte Carlo estimates to the tightness of Monte Carlo objectives (MCOs).
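The monotonicity in question can be seen numerically on a toy latent variable model (an illustrative assumption, not the paper's setting): an importance-weighted objective E[log (1/K) Σᵢ wᵢ] lower-bounds the marginal log-likelihood and tightens as the number of samples K grows. Here the proposal is simply the prior, and the toy model is Gaussian so the exact marginal is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent variable model: z ~ N(0, 1), x | z ~ N(z, 1),
# so the exact marginal is p(x) = N(x; 0, 2).
x = 2.0
true_logpx = -0.5 * np.log(2 * np.pi * 2.0) - x**2 / (2 * 2.0)

def mco(K, n_outer=100_000):
    """Monte Carlo objective E[log (1/K) sum_i w_i], proposal = prior."""
    z = rng.normal(0.0, 1.0, size=(n_outer, K))
    logw = -0.5 * np.log(2 * np.pi) - (x - z) ** 2 / 2   # log p(x | z)
    m = logw.max(axis=1, keepdims=True)                  # log-sum-exp trick
    log_mean_w = m[:, 0] + np.log(np.exp(logw - m).mean(axis=1))
    return log_mean_w.mean()

bounds = [mco(K) for K in (1, 4, 16, 64)]
# The bounds increase with K and approach true_logpx from below.
```

The K = 1 case recovers the usual ELBO with this proposal; larger K trades computation for a tighter bound, which is the trade-off the nonasymptotic results quantify.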
no code implementations • ICLR 2022 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen
To address supervised deep learning with missing values, we propose to marginalize over missing values in a joint model of covariates and outcomes.
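The principle of marginalising over a missing covariate can be shown in closed form on a toy joint Gaussian model (an illustrative assumption; the paper works with a learned joint model, not this analytic one): when x2 is unobserved at prediction time, the prediction integrates x2 out given x1 rather than plugging in a single value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint model: (x1, x2) standard bivariate normal with correlation rho,
# outcome y = x1 + x2 + noise. With x2 missing, marginalise it out:
#   E[y | x1] = x1 + E[x2 | x1] = x1 + rho * x1.
rho = 0.6
x1_obs = 1.5
y_pred = x1_obs + rho * x1_obs            # analytic marginalisation

# Monte Carlo check: sample x2 | x1 from its Gaussian conditional
# and average the resulting predictions.
x2_samples = rho * x1_obs + np.sqrt(1 - rho**2) * rng.normal(size=100_000)
y_mc = (x1_obs + x2_samples).mean()       # ≈ y_pred
```

The Monte Carlo version is the one that generalises: with a deep joint model the conditional of the missing covariates is sampled rather than available in closed form.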
no code implementations • 1 Jun 2021 • Rima Khouja, Pierre-Alexandre Mattei, Bernard Mourrain
In data processing and machine learning, an important challenge is to recover and exploit models that accurately represent the data.
no code implementations • 3 Feb 2021 • Michael Fop, Pierre-Alexandre Mattei, Charles Bouveyron, Thomas Brendan Murphy
In supervised classification problems, the test set may contain data points belonging to classes not observed in the learning phase.
1 code implementation • ICLR 2021 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen
When a missing-data process depends on the missing values themselves, it must be explicitly modelled and taken into account when performing likelihood-based inference.
1 code implementation • 29 Jan 2019 • Samuel Wiqvist, Pierre-Alexandre Mattei, Umberto Picchini, Jes Frellsen
We present a novel family of deep neural architectures, named partially exchangeable networks (PENs), that leverage probabilistic symmetries.
no code implementations • 6 Dec 2018 • Pierre-Alexandre Mattei, Jes Frellsen
Our approach, called MIWAE, is based on the importance-weighted autoencoder (IWAE), and maximises a potentially tight lower bound of the log-likelihood of the observed data.
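The importance-weighting principle behind such bounds also yields imputations, which can be sketched on a toy Gaussian model (an illustrative assumption: MIWAE uses a learned encoder as proposal, whereas here the proposal is the prior and everything is analytic): sample latents, weight them by the likelihood of the observed coordinate, and impute the missing one with the self-normalised weighted average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: z ~ N(0, 1); x1, x2 | z ~ N(z, 1) independently.
# Observe x1 = 2.0, x2 missing; the exact answer is E[x2 | x1] = x1 / 2 = 1.0.
x1 = 2.0
K = 200_000
z = rng.normal(size=K)                    # proposal = prior
logw = -(x1 - z) ** 2 / 2                 # log p(x1 | z) up to a constant
w = np.exp(logw - logw.max())
w /= w.sum()                              # self-normalised importance weights
x2_imputed = np.sum(w * z)                # estimate of E[x2 | x1], as E[x2|z] = z
```

The same self-normalised scheme applies when the decoder is a neural network; only the weights and conditional means stop being available in closed form.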
no code implementations • NeurIPS 2018 • Pierre-Alexandre Mattei, Jes Frellsen
Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model.
no code implementations • 8 Mar 2017 • Charles Bouveyron, Pierre Latouche, Pierre-Alexandre Mattei
We present a Bayesian model selection approach to estimate the intrinsic dimensionality of a high-dimensional dataset.
no code implementations • 19 May 2016 • Charles Bouveyron, Pierre Latouche, Pierre-Alexandre Mattei
To this end, using Roweis' probabilistic interpretation of PCA and a Gaussian prior on the loading matrix, we provide the first exact computation of the marginal likelihood of a Bayesian PCA model.