Search Results for author: Sébastien Gambs

Found 15 papers, 5 papers with code

Smooth Sensitivity for Learning Differentially-Private yet Accurate Rule Lists

no code implementations18 Mar 2024 Timothée Ly, Julien Ferry, Marie-José Huguet, Sébastien Gambs, Ulrich Aivodji

Differentially-private (DP) mechanisms can be embedded into the design of a machine learning algorithm to protect the resulting model against privacy leakage, although this often comes with a significant loss of accuracy.
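
As a rough illustration of the general idea (not the paper's smooth-sensitivity construction), a DP mechanism typically perturbs a released statistic with noise calibrated to its sensitivity and a privacy budget epsilon. The function and parameter names below are hypothetical; the paper instead calibrates noise to the smooth sensitivity of the rule-list learner.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity / epsilon.

    Illustrative sketch of a basic epsilon-DP primitive; smooth sensitivity
    (used in the paper) can yield a much smaller noise scale than the global
    sensitivity used here.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count query (global sensitivity 1)
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```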

PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining

no code implementations12 Feb 2024 Mishaal Kazmi, Hadrien Lautraite, Alireza Akbari, Mauricio Soroco, Qiaoyue Tang, Tao Wang, Sébastien Gambs, Mathias Lécuyer

We introduce a privacy auditing scheme for ML models that relies on membership inference attacks using generated data as "non-members".
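
A minimal sketch of the auditing idea as described in the abstract: run a membership-inference test on real training members versus synthetic "non-members" and use its advantage as a leakage measure. All names below are hypothetical placeholders, not the PANORAMIA implementation.

```python
import numpy as np

def audit_with_generated_nonmembers(loss_fn, model, members, generated_nonmembers):
    """Estimate privacy leakage via a loss-threshold membership test.

    members / generated_nonmembers: iterables of (x, y) examples. A lower loss
    on an example is taken as evidence of membership; the gap between the two
    groups' detection rates is a crude membership-inference advantage.
    """
    member_losses = np.array([loss_fn(model, x, y) for x, y in members])
    nonmember_losses = np.array([loss_fn(model, x, y) for x, y in generated_nonmembers])

    threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
    tpr = np.mean(member_losses < threshold)     # members flagged as members
    fpr = np.mean(nonmember_losses < threshold)  # non-members flagged as members
    return tpr - fpr
```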

SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning

no code implementations22 Dec 2023 Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan attribution or recidivism prediction.

Decision Making, Fairness

Crypto'Graph: Leveraging Privacy-Preserving Distributed Link Prediction for Robust Graph Learning

no code implementations19 Sep 2023 Sofiane Azogagh, Zelma Aubin Birba, Sébastien Gambs, Marc-Olivier Killijian

The use of Crypto'Graph is illustrated for defense against graph poisoning attacks, in which potential adversarial links can be identified without compromising the privacy of the individual parties' graphs.

Graph Learning, Link Prediction +2
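
Setting Crypto'Graph's cryptographic protocol aside, the underlying plaintext intuition is to score candidate links with a structural similarity metric and treat low-scoring links as suspicious. The toy heuristic below (a hypothetical common-neighbours score on a single plaintext graph) only illustrates that intuition, not the privacy-preserving computation.

```python
import networkx as nx

def suspicious_links(graph: nx.Graph, threshold: int = 1):
    """Flag existing edges whose endpoints share few common neighbours.

    Toy plaintext heuristic: poisoned links injected by an adversary tend to
    connect structurally dissimilar nodes. Crypto'Graph computes similar
    scores jointly over two private graphs without revealing them.
    """
    flagged = []
    for u, v in graph.edges():
        common = len(list(nx.common_neighbors(graph, u, v)))
        if common < threshold:
            flagged.append((u, v, common))
    return flagged
```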

Probabilistic Dataset Reconstruction from Interpretable Models

no code implementations29 Aug 2023 Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

In addition, we demonstrate that under realistic assumptions regarding the interpretable models' structure, the uncertainty of the reconstruction can be computed efficiently.
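
One way to read the "uncertainty of the reconstruction" is as the entropy of a probabilistic dataset in which each unknown cell is a distribution over possible values. The sketch below is a hypothetical simplification under an independence assumption, not the paper's exact measure.

```python
import numpy as np

def reconstruction_entropy(probabilistic_dataset):
    """Total entropy (in bits) of a probabilistic dataset.

    probabilistic_dataset: list of rows, each row a list of cells, each cell a
    probability vector over that attribute's possible values. Assuming cells
    are independent, entropies add up; a deterministic cell (one probability
    equal to 1) contributes zero uncertainty.
    """
    total = 0.0
    for row in probabilistic_dataset:
        for cell in row:
            p = np.asarray(cell, dtype=float)
            p = p[p > 0]
            total += -np.sum(p * np.log2(p))
    return total

# Example: two rows of one binary attribute; the first cell is fully determined
print(reconstruction_entropy([[[1.0, 0.0]], [[0.5, 0.5]]]))  # -> 1.0 bit
```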

Membership Inference Attack for Beluga Whales Discrimination

no code implementations28 Feb 2023 Voncarlos Marcelo Araújo, Sébastien Gambs, Clément Chion, Robert Michaud, Léo Schneider, Hadrien Lautraite

In animal ecology, one of the fundamental challenges in efficiently monitoring the growth and evolution of a particular wildlife population is the re-identification of previously encountered individuals, together with the discrimination between known and unknown individuals (the so-called "open-set problem"), which is the first step to solve before re-identification.

Inference Attack, Membership Inference Attack
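
The open-set step described in this abstract can be sketched as a distance threshold on embeddings: a photo is declared "unknown" when its embedding is far from every known individual's gallery. The embedding representation and threshold below are placeholders, not the paper's pipeline.

```python
import numpy as np

def open_set_decision(query_embedding, gallery, threshold):
    """Return (is_known, best_match_id) for a query embedding.

    gallery: dict mapping individual id -> list of embeddings of that
    individual. The query is 'known' if its cosine distance to the closest
    gallery embedding is below the threshold; only then would a
    re-identification step be run.
    """
    def cosine_distance(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    best_id, best_dist = None, np.inf
    for individual_id, embeddings in gallery.items():
        for e in embeddings:
            d = cosine_distance(query_embedding, e)
            if d < best_dist:
                best_id, best_dist = individual_id, d
    return best_dist < threshold, best_id
```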

Exploiting Fairness to Enhance Sensitive Attributes Reconstruction

no code implementations2 Sep 2022 Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

More precisely, we propose a generic reconstruction correction method, which takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes in the adversary's guess.

Fairness
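
A toy version of the correction idea described above: start from the adversary's initial guess of a binary sensitive attribute and flip as few entries as possible so that the guess satisfies a known aggregate constraint (here, a hypothetical group rate standing in for released fairness information). Names and the constraint are simplifications, not the paper's method.

```python
import numpy as np

def correct_guess(initial_guess, group_rate):
    """Minimally edit a binary guess so its mean matches a known group rate.

    initial_guess: adversary's 0/1 guesses for the sensitive attribute.
    group_rate: fraction of 1s the adversary knows from auxiliary information.
    Flips the fewest entries needed to reach that rate.
    """
    guess = np.array(initial_guess, dtype=int).copy()
    target_ones = int(round(group_rate * len(guess)))
    current_ones = guess.sum()
    if current_ones < target_ones:
        # Promote entries currently guessed 0 (chosen arbitrarily here; the
        # paper's adversary would use confidence scores to pick which to flip).
        zeros = np.flatnonzero(guess == 0)
        guess[zeros[: target_ones - current_ones]] = 1
    elif current_ones > target_ones:
        ones = np.flatnonzero(guess == 1)
        guess[ones[: current_ones - target_ones]] = 0
    return guess
```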

Fair mapping

no code implementations1 Sep 2022 Sébastien Gambs, Rosin Claude Ngueveu

In addition, our approach can be specialized to model existing state-of-the-art approaches, thus providing a unifying view of these methods.

Attribute, Fairness

Characterizing the risk of fairwashing

1 code implementation NeurIPS 2021 Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara

In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model.

Fairness

Model extraction from counterfactual explanations

1 code implementation3 Sep 2020 Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs

Post-hoc explanation techniques refer to a posteriori methods that can be used to explain how black-box machine learning models produce their outcomes.

Counterfactual, Model extraction
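
At a high level, the attack studied in this paper can be sketched as using counterfactual explanations as extra labelled points close to the decision boundary when training a surrogate of the black box. The API names below are placeholders for whatever query and explanation interfaces the target exposes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_surrogate(black_box_predict, counterfactual_api, queries):
    """Train a surrogate model from queries and their counterfactuals.

    black_box_predict(x) -> label; counterfactual_api(x) -> a nearby input with
    the opposite label, as returned by an explanation service. Counterfactuals
    densify the training set near the decision boundary, which is what makes
    extraction more sample-efficient than random querying.
    """
    X, y = [], []
    for x in queries:
        label = black_box_predict(x)
        X.append(x); y.append(label)
        cf = counterfactual_api(x)
        X.append(cf); y.append(1 - label)  # binary classification assumed
    surrogate = RandomForestClassifier(n_estimators=100).fit(np.array(X), np.array(y))
    return surrogate
```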

DYSAN: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks

1 code implementation23 Mar 2020 Claude Rosin Ngueveu, Antoine Boutet, Carole Frindel, Sébastien Gambs, Théo Jourdan

However, nothing prevents the service provider from inferring private and sensitive information about a user, such as health or demographic attributes. In this paper, we present DySan, a privacy-preserving framework to sanitize motion sensor data against unwanted sensitive inferences (i.e., improving privacy) while limiting the loss of accuracy on physical activity monitoring (i.e., maintaining data utility).

Activity Recognition, Attribute +2
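
The adversarial training behind this kind of sanitizer can be sketched as a three-player setup: a sanitizer that transforms sensor windows, a discriminator that tries to predict the sensitive attribute from the sanitized signal, and an activity classifier that preserves utility. The PyTorch sketch below uses made-up module names and a single simplified update, not the DySan code.

```python
import torch
import torch.nn as nn

def sanitizer_step(sanitizer, sensitive_discriminator, activity_classifier,
                   optimizer, x, activity_labels, sensitive_labels, lam=1.0):
    """One adversarial update of the sanitizer (other networks held fixed).

    The sanitizer is rewarded for keeping activity recognition accurate while
    fooling the sensitive-attribute discriminator (e.g., gender inference).
    """
    ce = nn.CrossEntropyLoss()
    sanitized = sanitizer(x)
    utility_loss = ce(activity_classifier(sanitized), activity_labels)
    privacy_loss = ce(sensitive_discriminator(sanitized), sensitive_labels)
    loss = utility_loss - lam * privacy_loss  # maximize the discriminator's error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```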

GAMIN: An Adversarial Approach to Black-Box Model Inversion

no code implementations26 Sep 2019 Ulrich Aïvodji, Sébastien Gambs, Timon Ther

While some model inversion attacks have been developed in the past in the black-box attack setting, in which the adversary does not have direct access to the structure of the model, few of these have been conducted so far against complex models such as deep neural networks.

Learning Fair Rule Lists

1 code implementation9 Sep 2019 Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

While it has been shown that interpretable models can be as accurate as black-box models in several critical domains, existing fair classification techniques that are interpretable by design often display poor accuracy/fairness tradeoffs in comparison with their non-interpretable counterparts.

Classification, Decision Making +2

Adversarial training approach for local data debiasing

no code implementations19 Jun 2019 Ulrich Aïvodji, François Bidet, Sébastien Gambs, Rosin Claude Ngueveu, Alain Tapp

The widespread use of automated decision processes in many areas of our society raises serious ethical issues concerning the fairness of the process and the possible resulting discriminations.

Attribute, Fairness

Fairwashing: the risk of rationalization

1 code implementation28 Jan 2019 Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp

Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden to the auditor and generally complex -- produces its outcomes.

BIG-bench Machine Learning, Fairness
