no code implementations • 14 Jun 2024 • Shubham Gupta, Mirco Ravanelli, Pascal Germain, Cem Subakan
In this paper, we propose Phoneme Discretized Saliency Maps (PDSM), a discretization algorithm for saliency maps that takes advantage of phoneme boundaries for explainable detection of AI-generated voice.
no code implementations • 20 Nov 2023 • Benjamin Leblanc, Pascal Germain
Interpretability and explainability have gained increasing attention in the field of machine learning, as they are crucial for high-stakes decisions and for troubleshooting.
no code implementations • NeurIPS 2023 • Sokhna Diarra Mbacke, Florence Clerc, Pascal Germain
Since their inception, Variational Autoencoders (VAEs) have become central in machine learning.
1 code implementation • 7 Jun 2023 • Thibaud Godon, Baptiste Bauvin, Pascal Germain, Jacques Corbeil, Alexandre Drouin
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
1 code implementation • 17 Feb 2023 • Sokhna Diarra Mbacke, Florence Clerc, Pascal Germain
We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance.
no code implementations • 7 Sep 2022 • Benjamin Leblanc, Pascal Germain
We study the use of binary activated neural networks as interpretable and explainable predictors in the context of regression tasks on tabular data; more specifically, we provide guarantees on their expressiveness and present an approach based on the efficient computation of SHAP values for quantifying the relative importance of the features, hidden neurons, and even weights.
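As a rough illustration of the model family in question (a minimal sketch with made-up sizes, not the authors' code), a binary activated network linearly combines sign-activated hidden units:

```python
import numpy as np

# Minimal sketch of a one-hidden-layer binary activated regressor:
# sign activations in the hidden layer, a linear output layer on top.
def binary_activated_net(x, W, b, v, c):
    h = np.sign(W @ x + b)   # each hidden unit outputs -1, 0, or +1
    return v @ h + c         # linear aggregation of the binary features

rng = np.random.default_rng(0)
d, m = 5, 8                  # input dimension, number of hidden units
W, b = rng.normal(size=(m, d)), rng.normal(size=m)
v, c = rng.normal(size=m), 0.0
print(binary_activated_net(rng.normal(size=d), W, b, v, c))
```

The discrete hidden states are what keep such predictors amenable to inspection, feature by feature and neuron by neuron.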
no code implementations • 28 Oct 2021 • Louis Fortier-Dubois, Gaël Letarte, Benjamin Leblanc, François Laviolette, Pascal Germain
Considering a probability distribution over parameters is known to be an efficient strategy for learning a neural network with non-differentiable activation functions.
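A minimal sketch of why this helps (illustrative, not the paper's algorithm): a sign activation has no useful gradient, but its expectation under a Gaussian distribution over the weights has a smooth closed form, so gradient methods can operate on the distribution's parameters:

```python
import numpy as np
from scipy.special import erf

# For w ~ N(mu, I), w @ x is Gaussian, so the expected sign activation
# has the closed form erf(mu @ x / (sqrt(2) * ||x||)): smooth in mu
# even though sign() itself is non-differentiable.
def expected_sign(mu, x):
    return erf(mu @ x / (np.sqrt(2.0) * np.linalg.norm(x)))

rng = np.random.default_rng(0)
mu, x = rng.normal(size=(2, 6))
samples = np.sign(rng.normal(loc=mu, size=(100_000, 6)) @ x)
print(expected_sign(mu, x), samples.mean())  # the two should agree
```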
1 code implementation • NeurIPS 2021 • Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Rémi Emonet, Amaury Habrard, Pascal Germain, Benjamin Guedj
We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties.
1 code implementation • 28 Apr 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant
In the PAC-Bayesian literature, the C-Bound refers to an insightful relation between the risk of a majority vote classifier (under the zero-one loss) and the first two moments of its margin (i.e., the expected margin and the voters' diversity).
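For reference, the standard statement of the C-Bound (notation reconstructed from the PAC-Bayesian literature, not quoted from the paper) reads:

```latex
% C-Bound: R_D(B_Q) is the zero-one risk of the Q-weighted majority
% vote B_Q, and M_Q(x, y) its margin on example (x, y). The bound
% holds whenever the expected margin mu_1 is positive.
R_D(B_Q) \;\le\; 1 - \frac{\mu_1^2}{\mu_2},
\qquad
\mu_1 = \mathop{\mathbb{E}}_{(x,y) \sim D} M_Q(x, y),
\quad
\mu_2 = \mathop{\mathbb{E}}_{(x,y) \sim D} M_Q(x, y)^2 .
```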
1 code implementation • 17 Feb 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant
PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers.
no code implementations • 24 Oct 2020 • Yann Pequignot, Mathieu Alain, Patrick Dallaire, Alireza Yeganehparast, Pascal Germain, Josée Desharnais, François Laviolette
Focusing on regression tasks, we choose a simple yet insightful model for this OOD distribution and conduct an empirical evaluation of the ability of various methods to discriminate OOD samples from the data.
no code implementations • 6 Dec 2019 • Vera Shalaeva, Alireza Fakhrizadeh Esfahani, Pascal Germain, Mihaly Petreczky
In this paper, we improve the PAC-Bayesian error bound for linear regression derived in Germain et al. [10].
1 code implementation • 10 Oct 2019 • Kento Nozawa, Pascal Germain, Benjamin Guedj
Contrastive unsupervised representation learning (CURL) is the state-of-the-art technique to learn representations (as a set of features) from unlabelled data.
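As a toy illustration of the CURL setting (the stand-in encoder and the logistic surrogate below are illustrative choices, not the paper's): a representation f is trained so that an anchor scores a same-class "positive" above an unrelated "negative":

```python
import numpy as np

# Contrastive loss on an (anchor, positive, negative) triple: the margin
# f(x) @ (f(x_pos) - f(x_neg)) should be large and positive; the
# logistic surrogate below penalizes small or negative margins.
def contrastive_loss(f_x, f_pos, f_neg):
    margin = f_x @ (f_pos - f_neg)
    return np.log1p(np.exp(-margin))

rng = np.random.default_rng(0)
f = lambda x: x / np.linalg.norm(x)   # stand-in encoder: L2-normalize
x, x_pos, x_neg = rng.normal(size=(3, 8))
print(contrastive_loss(f(x), f(x_pos), f(x_neg)))
```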
no code implementations • 14 Jun 2019 • Léo Gautheron, Pascal Germain, Amaury Habrard, Emilie Morvant, Marc Sebban, Valentina Zantedeschi
Unlike state-of-the-art Multiple Kernel Learning techniques that make use of a pre-computed dictionary of kernel functions to select from, at each iteration we fit a kernel by approximating it as a weighted sum of Random Fourier Features (RFF) and by optimizing their barycenter.
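For context, here is the Random Fourier Features building block itself (Rahimi and Recht's approximation with uniform weights; the RBF kernel, bandwidth, and feature count are illustrative, and the paper's contribution of learning a weighting over the features is not shown):

```python
import numpy as np

# RFF map: dot products of the features approximate the RBF kernel
# k(x, y) = exp(-gamma * ||x - y||^2).
def rff_features(X, n_features=500, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(3, 4))
Z = rff_features(X)
approx = Z @ Z.T                                  # approximate kernel matrix
exact = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # gamma = 1
print(np.max(np.abs(approx - exact)))             # small approximation error
```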
1 code implementation • NeurIPS 2019 • Gaël Letarte, Pascal Germain, Benjamin Guedj, François Laviolette
We present a comprehensive study of multilayer neural networks with binary activation, relying on the PAC-Bayesian theory.
1 code implementation • 30 Oct 2018 • Gaël Letarte, Emilie Morvant, Pascal Germain
We revisit Rahimi and Recht (2007)'s kernel random Fourier features (RFF) method through the lens of the PAC-Bayesian theory.
2 code implementations • 17 Aug 2018 • Anil Goyal, Emilie Morvant, Pascal Germain, Massih-Reza Amini
Experiments on three publicly available datasets show the efficiency of the proposed approach with respect to state-of-the-art models.
no code implementations • 17 Jul 2017 • Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant
Firstly, we propose an improvement of the approach we previously introduced in Germain et al. (2013), which relies on a novel distribution pseudodistance based on a disagreement averaging and allows us to derive a new, tighter domain adaptation bound for the target risk.
no code implementations • 23 Jun 2016 • Anil Goyal, Emilie Morvant, Pascal Germain, Massih-Reza Amini
We study two-level multiview learning with more than two views under the PAC-Bayesian framework.
no code implementations • NeurIPS 2016 • Pascal Germain, Francis Bach, Alexandre Lacoste, Simon Lacoste-Julien
For the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood.
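In schematic form (the exact statement and its conditions are in the paper), the correspondence follows the Gibbs variational principle: with the empirical negative log-likelihood as loss, the KL-regularized PAC-Bayesian objective is minimized by the Bayesian posterior, and its optimal value is the scaled negative log marginal likelihood:

```latex
% Schematic correspondence; \hat{L}(\theta) is the empirical negative
% log-likelihood on a sample D of size n, and \pi is the prior.
\min_{\rho}\Big\{ \mathbb{E}_{\theta \sim \rho}\,\hat{L}(\theta)
  + \tfrac{1}{n}\,\mathrm{KL}(\rho \,\|\, \pi) \Big\}
  \;=\; -\tfrac{1}{n} \ln \int \pi(\theta)\, p(D \mid \theta)\, d\theta,
\qquad
\rho^{*}(\theta) \;\propto\; \pi(\theta)\, p(D \mid \theta).
```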
1 code implementation • 15 Jun 2015 • Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant
We study the issue of PAC-Bayesian domain adaptation: We want to learn, from a source domain, a majority vote model dedicated to a target one.
35 code implementations • 28 May 2015 • Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, Victor Lempitsky
Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
Ranked #2 on Domain Adaptation on Synth Digits-to-SVHN
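The mechanism behind this is the gradient reversal layer; a minimal PyTorch-style sketch (tensor sizes and the domain head are illustrative, not the released code) looks as follows:

```python
import torch
from torch.autograd import Function

# Gradient reversal: identity in the forward pass, but gradients coming
# back from the domain classifier are negated, so the feature extractor
# is pushed to make source and target domains indistinguishable.
class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = torch.randn(4, 16, requires_grad=True)  # toy feature batch
domain_head = torch.nn.Linear(16, 2)               # domain classifier
logits = domain_head(GradReverse.apply(features, 1.0))
logits.sum().backward()   # gradients reaching `features` are reversed
```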
no code implementations • 28 Mar 2015 • Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand, Jean-Francis Roy
We propose an extensive analysis of the behavior of majority votes in binary classification.
no code implementations • 24 Mar 2015 • Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant
In this paper, we provide two main contributions in PAC-Bayesian theory for domain adaptation where the objective is to learn, from a source distribution, a well-performing majority vote on a different target distribution.
no code implementations • 13 Jan 2015 • Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant
This paper provides a theoretical analysis of domain adaptation based on the PAC-Bayesian theory.
1 code implementation • 15 Dec 2014 • Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand
We propose a training objective that implements this idea in the context of a neural network, whose hidden layer is trained to be predictive of the classification task, but uninformative as to the domain of the input.
no code implementations • NeurIPS 2009 • Pascal Germain, Alexandre Lacasse, Mario Marchand, Sara Shanian, François Laviolette
We show that standard ℓ_p-regularized objective functions currently used, such as ridge regression and ℓ_p-regularized boosting, are obtained from a relaxation of the KL divergence between the quasi-uniform posterior and the uniform prior.
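The flavour of such relaxations can be seen in a textbook special case (an isotropic Gaussian posterior against a Gaussian prior, not the quasi-uniform construction of the paper), where the KL term itself collapses to a squared-norm penalty:

```latex
% Textbook special case, for illustration only: the KL divergence
% between a Gaussian posterior centered at w and a standard Gaussian
% prior is exactly an l2 (ridge-style) regularizer.
\mathrm{KL}\big( \mathcal{N}(w, I) \,\big\|\, \mathcal{N}(0, I) \big)
  \;=\; \tfrac{1}{2} \lVert w \rVert_2^2 .
```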