Search Results for author: Ulrich Aïvodji

Found 12 papers, 7 papers with code

SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning

no code implementations · 22 Dec 2023 · Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan approval or recidivism prediction.

Decision Making · Fairness

Probabilistic Dataset Reconstruction from Interpretable Models

no code implementations · 29 Aug 2023 · Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

In addition, we demonstrate that under realistic assumptions regarding the interpretable models' structure, the uncertainty of the reconstruction can be computed efficiently.
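
To make the idea concrete, here is a toy sketch (not the paper's algorithm) of how an interpretable model constrains its own training set: each training example's decision path through a fitted scikit-learn tree confines it to an axis-aligned box, and how loose those boxes are is one crude proxy for reconstruction uncertainty. The data, model, and uncertainty measure are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

feature, threshold = tree.tree_.feature, tree.tree_.threshold
paths = tree.decision_path(X)                  # nodes visited per example

lo = np.full_like(X, -np.inf)                  # feasible box per example
hi = np.full_like(X, np.inf)
for i in range(X.shape[0]):
    nodes = paths.indices[paths.indptr[i]:paths.indptr[i + 1]]
    for node in nodes:
        f = feature[node]
        if f < 0:                              # leaf: no split to record
            continue
        if X[i, f] <= threshold[node]:         # went left: upper-bounded
            hi[i, f] = min(hi[i, f], threshold[node])
        else:                                  # went right: lower-bounded
            lo[i, f] = max(lo[i, f], threshold[node])

unconstrained = np.mean(np.isinf(lo) & np.isinf(hi))
print(f"{unconstrained:.0%} of feature values are left unconstrained by the model")
```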

Fairness Under Demographic Scarce Regime

1 code implementation · 24 Jul 2023 · Patrik Joslin Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji

Surprisingly, our framework outperforms models trained with constraints on the true sensitive attributes.

Attribute · Fairness
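
A minimal sketch of the proxy-attribute setting the paper studies: when demographics are known for only a small subset, an attribute classifier infers proxy sensitive attributes, and only its confident predictions are used to audit (or constrain) a downstream model. This is an illustration under synthetic data, not the authors' implementation; the confidence threshold is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
s = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # sensitive attribute
y = (X[:, 1] + 0.3 * s > 0).astype(int)                        # task label

labeled = rng.random(n) < 0.05        # demographics known for only ~5% of records

# 1) Attribute classifier trained on the small demographically labeled subset.
attr_clf = LogisticRegression().fit(X[labeled], s[labeled])
proba = attr_clf.predict_proba(X)[:, 1]
s_proxy = (proba > 0.5).astype(int)
confident = np.abs(proba - 0.5) > 0.4  # keep only high-confidence proxies

# 2) Audit a downstream model's demographic parity with the confident proxies.
task_clf = LogisticRegression().fit(X, y)
pred = task_clf.predict(X)
g1 = pred[confident & (s_proxy == 1)].mean()
g0 = pred[confident & (s_proxy == 0)].mean()
print(f"proxy-based demographic parity gap: {abs(g1 - g0):.3f}")
```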

Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods

1 code implementation · 8 Mar 2023 · Julien Ferry, Gabriel Laberge, Ulrich Aïvodji

The advantages of such models over classical ones are two-fold: 1) They grant users precise control over the level of transparency of the system and 2) They can potentially perform better than a standalone black box since redirecting some of the inputs to an interpretable model implicitly acts as regularization.
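
A toy sketch of a hybrid model in the spirit described above: inputs where a small interpretable model is confident are handled by it, and the rest are deferred to a black box. The confidence gate and the threshold knob are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glass_box = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)   # interpretable part
black_box = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)

tau = 0.9  # transparency knob: higher tau sends fewer inputs to the glass box
conf = glass_box.predict_proba(X_te).max(axis=1)
use_glass = conf >= tau

pred = np.where(use_glass, glass_box.predict(X_te), black_box.predict(X_te))
print(f"transparency (share of inputs explained): {use_glass.mean():.0%}")
print(f"hybrid accuracy: {(pred == y_te).mean():.3f}")
```

Sweeping tau traces out the transparency/accuracy trade-off the snippet alludes to.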

Exploiting Fairness to Enhance Sensitive Attributes Reconstruction

no code implementations · 2 Sep 2022 · Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

More precisely, we propose a generic reconstruction correction method, which takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes in the adversary's guess.

Fairness
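
A simplified sketch of the reconstruction-correction idea: start from an adversary's probabilistic guess of binary sensitive attributes and flip the least-confident entries until the guess satisfies a leaked constraint (here, a published group size). The snippet above frames this as constrained optimization minimizing changes to the guess; this greedy variant is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_s = rng.integers(0, 2, size=n)
# Adversary's noisy confidence that each person belongs to group 1:
guess_proba = np.clip(true_s + rng.normal(scale=0.6, size=n), 0, 1)
guess = (guess_proba > 0.5).astype(int)

published_group1_count = int(true_s.sum())   # constraint leaked by fairness info

def correct(guess, proba, target_count):
    guess = guess.copy()
    delta = target_count - guess.sum()
    if delta > 0:    # need more 1s: flip the most 1-leaning 0s
        candidates = np.where(guess == 0)[0]
        order = candidates[np.argsort(-proba[candidates])]
        guess[order[:delta]] = 1
    elif delta < 0:  # need fewer 1s: flip the least-confident 1s
        candidates = np.where(guess == 1)[0]
        order = candidates[np.argsort(proba[candidates])]
        guess[order[:-delta]] = 0
    return guess

corrected = correct(guess, guess_proba, published_group1_count)
print("reconstruction accuracy before:", (guess == true_s).mean())
print("reconstruction accuracy after: ", (corrected == true_s).mean())
```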

Fool SHAP with Stealthily Biased Sampling

1 code implementation · 30 May 2022 · Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, Mario Marchand, Foutse Khomh

SHAP explanations aim to identify which features contribute most to the difference between a model's prediction at a specific input and its average prediction over a background distribution.

Fairness
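
A self-contained illustration of the attack surface described above: background-based attributions depend on who is in the background sample, so an adversary who controls that sample can shrink a sensitive feature's apparent contribution. A simple single-feature occlusion attribution stands in for SHAP here, and the biasing strategy is deliberately naive; neither is the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):                          # stand-in "black box" that leans on feature 0
    return 2.0 * X[:, 0] + 0.5 * X[:, 1]

def attribution(x, background, j):
    """Mean output change when feature j is swapped with background values."""
    swapped = np.tile(x, (len(background), 1))
    swapped[:, j] = background[:, j]
    return float(model(x[None, :])[0] - model(swapped).mean())

data = rng.normal(size=(5000, 2))
x = np.array([1.5, 0.2])               # instance being explained

honest_bg = data[rng.choice(len(data), 100, replace=False)]
# Stealthily biased background: keep only points whose sensitive feature 0
# already resembles x's, so swapping it barely moves the prediction.
biased_bg = data[np.abs(data[:, 0] - x[0]) < 0.1][:100]

print("feature-0 attribution, honest background:", attribution(x, honest_bg, 0))
print("feature-0 attribution, biased background:", attribution(x, biased_bg, 0))
```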

Characterizing the risk of fairwashing

1 code implementation · NeurIPS 2021 · Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara

In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model.

Fairness
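
A toy sketch of the phenomenon studied above: an interpretable surrogate is fitted to a black box's outputs while being denied the sensitive feature, and it keeps mimicking the unfair decisions, both on the "suing group" it was fitted on and on held-out data. The data-generating process and models are illustrative assumptions, not the paper's experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 4000
s = rng.integers(0, 2, size=n)                   # sensitive attribute
X = np.column_stack([s, rng.normal(size=(n, 3))])
y = ((X[:, 1] + 1.2 * s) > 0.5).astype(int)      # biased ground truth

X_sue, X_out, s_sue, s_out = train_test_split(X, s, random_state=0)

black_box = DecisionTreeClassifier().fit(X, y)   # unfair model
bb_sue, bb_out = black_box.predict(X_sue), black_box.predict(X_out)

# Fairwashed explainer: mimics the black box on the suing group,
# but without access to the sensitive column (feature 0).
explainer = LogisticRegression().fit(X_sue[:, 1:], bb_sue)

for name, Xg, bb, sg in [("suing group", X_sue, bb_sue, s_sue),
                         ("held out", X_out, bb_out, s_out)]:
    pred = explainer.predict(Xg[:, 1:])
    fidelity = (pred == bb).mean()
    bb_gap = abs(bb[sg == 1].mean() - bb[sg == 0].mean())
    ex_gap = abs(pred[sg == 1].mean() - pred[sg == 0].mean())
    print(f"{name}: fidelity={fidelity:.2f}, black-box parity gap={bb_gap:.2f}, "
          f"explainer parity gap={ex_gap:.2f}")
```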

Model extraction from counterfactual explanations

1 code implementation · 3 Sep 2020 · Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs

Post-hoc explanation techniques refer to a posteriori methods that can be used to explain how black-box machine learning models produce their outcomes.

counterfactual · Model extraction
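
A minimal sketch of the attack idea: each counterfactual explanation hands the adversary an extra labeled point near the decision boundary, so a surrogate trained on (query, counterfactual) pairs can approximate the black box under a small query budget. The random-direction counterfactual generator below is a stand-in for a real explanation service, not the paper's procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 2))
y = (X[:, 0] ** 2 + X[:, 1] > 0.5).astype(int)
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X, y)

def counterfactual(x, target):
    """Naive generator: walk along random directions until the label flips."""
    for _ in range(200):
        direction = rng.normal(size=x.shape)
        for t in np.linspace(0.05, 3.0, 60):
            cand = x + t * direction
            if black_box.predict(cand[None, :])[0] == target:
                return cand
    return None

queries = rng.normal(size=(40, 2))              # small query budget
labels = black_box.predict(queries)
pairs = list(zip(queries, labels))
for q, l in zip(queries, labels):
    cf = counterfactual(q, 1 - l)               # boundary-hugging free sample
    if cf is not None:
        pairs.append((cf, 1 - l))

Xs = np.array([p for p, _ in pairs])
ys = np.array([l for _, l in pairs])
surrogate = LogisticRegression().fit(Xs, ys)

X_test = rng.normal(size=(2000, 2))
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity to the black box: {fidelity:.2f}")
```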

GAMIN: An Adversarial Approach to Black-Box Model Inversion

no code implementations · 26 Sep 2019 · Ulrich Aïvodji, Sébastien Gambs, Timon Ther

While some model inversion attacks have been developed in the past in the black-box attack setting, in which the adversary does not have direct access to the structure of the model, few of these have been conducted so far against complex models such as deep neural networks.
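
For intuition, here is a bare-bones illustration of black-box model inversion: with only query access to prediction scores, a random-search loop evolves an input that the model assigns to a target class with high confidence, recovering a class-representative point. GAMIN itself couples a generative network with a surrogate model; this hill-climbing loop only conveys the query-only setting.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

digits = load_digits()
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=0).fit(digits.data / 16.0, digits.target)

rng = np.random.default_rng(0)
target = 3
x = rng.random(64)                      # start the inversion from noise
for _ in range(3000):                   # query-only hill climbing
    cand = np.clip(x + rng.normal(scale=0.1, size=64), 0, 1)
    if (model.predict_proba(cand[None])[0, target]
            >= model.predict_proba(x[None])[0, target]):
        x = cand

print("black-box confidence that the recovered input is the target class:",
      round(model.predict_proba(x[None])[0, target], 3))
```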

Learning Fair Rule Lists

1 code implementation · 9 Sep 2019 · Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

While it has been shown that interpretable models can be as accurate as black-box models in several critical domains, existing fair classification techniques that are interpretable by design often display poor accuracy/fairness tradeoffs in comparison with their non-interpretable counterparts.

Classification · Decision Making +2
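
A toy stand-in for fair rule-list learning: enumerate one-rule models "if x[j] <= t predict p else 1-p", keep only those whose demographic parity gap stays below epsilon, and pick the most accurate survivor. The paper's approach builds on the CORELS rule-list learner and searches a far richer space with certificates; this sketch only conveys the joint accuracy/fairness filtering.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000
s = rng.integers(0, 2, size=n)                        # sensitive attribute
X = np.column_stack([rng.normal(size=n) + 0.8 * s,    # correlated with s
                     rng.normal(size=n)])
y = ((X[:, 0] + X[:, 1]) > 0.5).astype(int)

def parity_gap(pred, s):
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

epsilon, best = 0.05, None
for j in range(X.shape[1]):
    for t in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19)):
        for leaf in (0, 1):
            pred = np.where(X[:, j] <= t, leaf, 1 - leaf)
            acc, gap = (pred == y).mean(), parity_gap(pred, s)
            if gap <= epsilon and (best is None or acc > best[0]):
                best = (acc, gap, j, t, leaf)

if best is not None:
    acc, gap, j, t, leaf = best
    print(f"if x[{j}] <= {t:.2f} predict {leaf} else {1 - leaf}: "
          f"accuracy={acc:.2f}, parity gap={gap:.2f}")
```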

Adversarial training approach for local data debiasing

no code implementations · 19 Jun 2019 · Ulrich Aïvodji, François Bidet, Sébastien Gambs, Rosin Claude Ngueveu, Alain Tapp

The widespread use of automated decision processes in many areas of our society raises serious ethical issues concerning the fairness of the process and the discrimination it may induce.

Attribute · Fairness
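
A compact sketch of the adversarial debiasing recipe (in PyTorch, which is an assumption, not something the paper prescribes): a sanitizer rewrites each record while an adversary tries to recover the sensitive attribute from the rewritten data; alternating updates push the sanitized data toward being uninformative about the attribute while staying close to the original.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 2048
s = torch.randint(0, 2, (n, 1)).float()        # sensitive attribute
X = torch.randn(n, 4) + 1.5 * s                # data leaks the attribute

sanitizer = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 4))
adversary = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt_s = torch.optim.Adam(sanitizer.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(2000):
    # Adversary step: try to recover s from the sanitized records.
    opt_a.zero_grad()
    adv_loss = bce(adversary(sanitizer(X).detach()), s)
    adv_loss.backward()
    opt_a.step()

    # Sanitizer step: stay close to X while fooling the adversary.
    opt_s.zero_grad()
    X_clean = sanitizer(X)
    loss = ((X_clean - X) ** 2).mean() - 0.5 * bce(adversary(X_clean), s)
    loss.backward()
    opt_s.step()

with torch.no_grad():
    recovered = (adversary(sanitizer(X)) > 0).float()
    acc = (recovered == s).float().mean().item()
print(f"adversary accuracy on sanitized data: {acc:.2f} (0.5 = fully debiased)")
```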

Fairwashing: the risk of rationalization

1 code implementation · 28 Jan 2019 · Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp

Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden from the auditor and generally complex -- produces its outcomes.

BIG-bench Machine Learning · Fairness
