Search Results for author: Pascal Germain

Found 26 papers, 12 papers with code

Interpretability in Machine Learning: on the Interplay with Explainability, Predictive Performances and Models

no code implementations · 20 Nov 2023 · Benjamin Leblanc, Pascal Germain

Interpretability has recently gained attention in the field of machine learning, as it is crucial for high-stakes decisions and troubleshooting.

Position

Invariant Causal Set Covering Machines

1 code implementation · 7 Jun 2023 · Thibaud Godon, Baptiste Bauvin, Pascal Germain, Jacques Corbeil, Alexandre Drouin

Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.

PAC-Bayesian Generalization Bounds for Adversarial Generative Models

1 code implementation · 17 Feb 2023 · Sokhna Diarra Mbacke, Florence Clerc, Pascal Germain

We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance.

Dimensionality Reduction · Generalization Bounds

Seeking Interpretability and Explainability in Binary Activated Neural Networks

no code implementations · 7 Sep 2022 · Benjamin Leblanc, Pascal Germain

We study the use of binary activated neural networks as interpretable and explainable predictors in the context of regression tasks on tabular data. More specifically, we provide guarantees on their expressiveness and present an approach, based on the efficient computation of SHAP values, for quantifying the relative importance of the features, hidden neurons and even weights.

Binarization
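
As a rough illustration of the SHAP-based importance idea above, here is a minimal sketch using the model-agnostic KernelExplainer from the shap package on a toy binary activated network; the paper derives a more efficient computation that exploits the network's binary structure, and all names below are ours.

    import numpy as np
    import shap  # assumes the shap package is installed

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)  # hidden layer parameters
    w2 = rng.normal(size=3)                               # output weights

    def bann_predict(X):
        # Toy binary activated network: sign activations in the hidden layer.
        return np.sign(X @ W1 + b1) @ w2

    # Model-agnostic SHAP values; one importance value per input feature.
    background = rng.normal(size=(50, 4))
    explainer = shap.KernelExplainer(bann_predict, background)
    phi = explainer.shap_values(rng.normal(size=(1, 4)))
    print(phi)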

PAC-Bayesian Learning of Aggregated Binary Activated Neural Networks with Probabilities over Representations

no code implementations · 28 Oct 2021 · Louis Fortier-Dubois, Gaël Letarte, Benjamin Leblanc, François Laviolette, Pascal Germain

Considering a probability distribution over parameters is known to be an efficient strategy for learning a neural network with non-differentiable activation functions.

Generalization Bounds
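
The key fact behind this strategy, sketched below under a Gaussian assumption on the pre-activation, is that taking the expectation of a sign activation yields a smooth, differentiable function (the names are illustrative, not the paper's code):

    import numpy as np
    from scipy.special import erf

    def expected_sign(mu, sigma=1.0):
        # If the pre-activation z is Gaussian with mean mu and std sigma,
        # then E[sign(z)] = erf(mu / (sqrt(2) * sigma)): a smooth surrogate
        # for the non-differentiable sign activation.
        return erf(mu / (np.sqrt(2.0) * sigma))

    print(expected_sign(np.array([-2.0, 0.0, 2.0])))  # ≈ [-0.95, 0.00, 0.95]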

Self-Bounding Majority Vote Learning Algorithms by the Direct Minimization of a Tight PAC-Bayesian C-Bound

1 code implementation · 28 Apr 2021 · Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant

In the PAC-Bayesian literature, the C-Bound refers to an insightful relation between the risk of a majority vote classifier (under the zero-one loss) and the first two moments of its margin (i.e., the expected margin and the voters' diversity).

Generalization Bounds
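
For reference, the C-Bound states that the zero-one risk of the majority vote is at most 1 - (E[M])^2 / E[M^2] whenever the expected margin E[M] is positive. A minimal sketch of the empirical version follows, with sample means in place of the true moments and without the confidence adjustment that the paper's self-bounding algorithms handle:

    import numpy as np

    def empirical_c_bound(margins):
        # margins: per-example majority-vote margins in [-1, 1].
        # The C-Bound reads 1 - (E[M])^2 / E[M^2], valid when E[M] > 0.
        m1, m2 = margins.mean(), (margins ** 2).mean()
        assert m1 > 0, "the C-Bound requires a positive expected margin"
        return 1.0 - m1 ** 2 / m2

    print(empirical_c_bound(np.array([0.2, 0.6, 0.1, 0.8])))  # ≈ 0.31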

A General Framework for the Practical Disintegration of PAC-Bayesian Bounds

1 code implementation · 17 Feb 2021 · Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant

PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers.

Generalization Bounds

Out-of-distribution detection for regression tasks: parameter versus predictor entropy

no code implementations · 24 Oct 2020 · Yann Pequignot, Mathieu Alain, Patrick Dallaire, Alireza Yeganehparast, Pascal Germain, Josée Desharnais, François Laviolette

Focusing on regression tasks, we choose a simple yet insightful model for this OOD distribution and conduct an empirical evaluation of the ability of various methods to discriminate OOD samples from the data.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection +2
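
One generic way to instantiate the "predictor entropy" intuition is to score a point by the disagreement of an ensemble of regressors. The toy sketch below illustrates that intuition only; it is not the paper's estimator, and all names are ours:

    import numpy as np

    def ensemble_ood_score(x, predictors):
        # Spread of the ensemble's predictions at x:
        # high disagreement suggests x lies outside the training distribution.
        preds = np.array([f(x) for f in predictors])
        return preds.std(axis=0)

    # Three linear "predictors" that agree near the origin only.
    fs = [lambda x, a=a: a * x for a in (0.9, 1.0, 1.1)]
    print(ensemble_ood_score(np.array([0.1, 10.0]), fs))  # small vs. large score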

Improved PAC-Bayesian Bounds for Linear Regression

no code implementations · 6 Dec 2019 · Vera Shalaeva, Alireza Fakhrizadeh Esfahani, Pascal Germain, Mihaly Petreczky

In this paper, we improve the PAC-Bayesian error bound for linear regression derived in Germain et al. [10].

regression · Time Series +1

PAC-Bayesian Contrastive Unsupervised Representation Learning

1 code implementation · 10 Oct 2019 · Kento Nozawa, Pascal Germain, Benjamin Guedj

Contrastive unsupervised representation learning (CURL) is the state-of-the-art technique to learn representations (as a set of features) from unlabelled data.

Representation Learning
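
For context, a common loss in the CURL framework this paper builds on is the logistic contrastive loss with a single negative sample, which pulls a representation toward its positive pair and pushes it away from the negative. A minimal sketch with illustrative names:

    import numpy as np

    def logistic_contrastive_loss(z, z_pos, z_neg):
        # log(1 + exp(-z . (z_pos - z_neg))): small when z aligns with the
        # positive representation and repels the negative one.
        score = z @ (z_pos - z_neg)
        return np.log1p(np.exp(-score))

    z, z_pos, z_neg = np.ones(3), 0.9 * np.ones(3), -np.ones(3)
    print(logistic_contrastive_loss(z, z_pos, z_neg))  # small: pair well separated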

Learning Landmark-Based Ensembles with Random Fourier Features and Gradient Boosting

no code implementations · 14 Jun 2019 · Léo Gautheron, Pascal Germain, Amaury Habrard, Emilie Morvant, Marc Sebban, Valentina Zantedeschi

Unlike state-of-the-art Multiple Kernel Learning techniques that make use of a pre-computed dictionary of kernel functions to select from, at each iteration we fit a kernel by approximating it as a weighted sum of Random Fourier Features (RFF) and by optimizing their barycenter.
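
For reference, the basic Rahimi-Recht construction that the fitted kernels build on looks as follows for the Gaussian kernel (a sketch of the feature map only; the gradient boosting procedure itself is not shown):

    import numpy as np

    def rff_map(X, W, b):
        # Random Fourier Features: z(x) = sqrt(2/D) * cos(W x + b),
        # so that z(x) . z(y) approximates the kernel k(x, y).
        D = W.shape[0]
        return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

    rng = np.random.default_rng(0)
    d, D, gamma = 5, 512, 1.0
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))  # spectral measure of exp(-gamma ||x-y||^2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)

    x, y = rng.normal(size=(1, d)), rng.normal(size=(1, d))
    approx = (rff_map(x, W, b) @ rff_map(y, W, b).T).item()
    exact = np.exp(-gamma * np.sum((x - y) ** 2))
    print(approx, exact)  # close for large D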

Pseudo-Bayesian Learning with Kernel Fourier Transform as Prior

1 code implementation · 30 Oct 2018 · Gaël Letarte, Emilie Morvant, Pascal Germain

We revisit Rahimi and Recht (2007)'s kernel random Fourier features (RFF) method through the lens of the PAC-Bayesian theory.

Generalization Bounds

Multiview Boosting by Controlling the Diversity and the Accuracy of View-specific Voters

2 code implementations · 17 Aug 2018 · Anil Goyal, Emilie Morvant, Pascal Germain, Massih-Reza Amini

Experiments on three publicly available datasets show the effectiveness of the proposed approach with respect to state-of-the-art models.

Document Classification · Multilingual text classification +1

PAC-Bayes and Domain Adaptation

no code implementations · 17 Jul 2017 · Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant

First, we improve on the approach we proposed in Germain et al. (2013): relying on a novel distribution pseudodistance based on a disagreement averaging, we derive a new, tighter domain adaptation bound for the target risk.

Domain Adaptation · Generalization Bounds
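
As a reminder (reconstructed here in our own notation, so treat it as a sketch rather than the paper's exact statement), the disagreement-based pseudodistance compares the expected disagreement of pairs of voters under the source and target domains:

    \mathrm{dis}_\rho(D_S, D_T) \;=\; \Bigl|\, \mathbb{E}_{(h,h') \sim \rho^2} \bigl[ R_{D_T}(h,h') - R_{D_S}(h,h') \bigr] \,\Bigr|,
    \qquad
    R_D(h,h') \;=\; \mathbb{E}_{x \sim D}\, \mathbf{1}\bigl[ h(x) \neq h'(x) \bigr].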

PAC-Bayesian Theory Meets Bayesian Inference

no code implementations · NeurIPS 2016 · Pascal Germain, Francis Bach, Alexandre Lacoste, Simon Lacoste-Julien

That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds maximizes the Bayesian marginal likelihood.

Bayesian Inference · regression
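
Concretely, writing the empirical negative log-likelihood as \widehat{\mathcal{L}}(\theta) = -\frac{1}{n} \ln p(D \mid \theta), the correspondence can be sketched as follows (our notation): the optimal Gibbs posterior of the KL-regularized objective is the Bayesian posterior, and the minimized value is the negative log marginal likelihood.

    \hat{\rho}(\theta) \;\propto\; \pi(\theta)\, e^{-n \widehat{\mathcal{L}}(\theta)} \;=\; \pi(\theta)\, p(D \mid \theta),
    \qquad
    \min_{\rho} \Bigl\{ \mathbb{E}_{\theta \sim \rho}\, \widehat{\mathcal{L}}(\theta) + \tfrac{1}{n}\, \mathrm{KL}(\rho \,\|\, \pi) \Bigr\}
    \;=\; -\tfrac{1}{n} \ln\, \mathbb{E}_{\theta \sim \pi}\, p(D \mid \theta).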

A New PAC-Bayesian Perspective on Domain Adaptation

1 code implementation · 15 Jun 2015 · Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant

We study the issue of PAC-Bayesian domain adaptation: We want to learn, from a source domain, a majority vote model dedicated to a target one.

Domain Adaptation

Domain-Adversarial Training of Neural Networks

35 code implementations · 28 May 2015 · Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, Victor Lempitsky

Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.

Domain Generalization · General Classification +5
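
In practice, the requirement that features "cannot discriminate" between domains is commonly implemented with a gradient reversal layer between the feature extractor and the domain classifier. A minimal PyTorch sketch (assuming torch is installed; the names are ours):

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; the gradient is multiplied by -lambd
        # on the backward pass, so the feature extractor is trained to fool
        # the domain classifier while the task head is trained normally.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)

    # Usage: domain_logits = domain_head(grad_reverse(features, lambd=0.5))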

PAC-Bayesian Theorems for Domain Adaptation with Specialization to Linear Classifiers

no code implementations · 24 Mar 2015 · Pascal Germain, Amaury Habrard, François Laviolette, Emilie Morvant

In this paper, we provide two main contributions in PAC-Bayesian theory for domain adaptation where the objective is to learn, from a source distribution, a well-performing majority vote on a different target distribution.

Domain Adaptation

Domain-Adversarial Neural Networks

1 code implementation · 15 Dec 2014 · Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand

We propose a training objective that implements this idea in the context of a neural network, whose hidden layer is trained to be predictive of the classification task, but uninformative as to the domain of the input.

Denoising · Domain Adaptation +3

From PAC-Bayes Bounds to KL Regularization

no code implementations · NeurIPS 2009 · Pascal Germain, Alexandre Lacasse, Mario Marchand, Sara Shanian, François Laviolette

We show that standard $\ell_p$-regularized objective functions currently used, such as ridge regression and $\ell_p$-regularized boosting, are obtained from a relaxation of the KL divergence between the quasi-uniform posterior and the uniform prior.

regression
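
Schematically, PAC-Bayesian bound minimization reduces to a KL-regularized empirical risk of the following form, with $C$ a trade-off constant (a sketch in our notation); the paper's point is that relaxing the KL term for quasi-uniform posteriors recovers the $\ell_p$ penalties above:

    \min_{\rho}\; C\, \mathbb{E}_{h \sim \rho}\, \widehat{R}(h) \;+\; \mathrm{KL}(\rho \,\|\, \pi).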
