no code implementations • 18 Mar 2024 • Timothée Ly, Julien Ferry, Marie-José Huguet, Sébastien Gambs, Ulrich Aivodji
Differentially-private (DP) mechanisms can be embedded into the design of a machine learning algorithm to protect the resulting model against privacy leakage, although this often comes with a significant loss of accuracy.
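To make the privacy/accuracy trade-off concrete, here is a minimal sketch of the classic Laplace mechanism, a standard DP building block (this is an illustrative example, not the specific mechanism used in the paper): noise is calibrated to the query's sensitivity and the privacy budget epsilon, so a smaller epsilon yields stronger privacy but a noisier, less accurate answer.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Standard Laplace mechanism: release true_value plus Laplace noise
    with scale sensitivity/epsilon. Smaller epsilon -> stronger privacy
    guarantee but larger expected error (lower accuracy)."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
# Private mean of 0/1 attributes over n individuals: sensitivity = 1/n.
data = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])
noisy_mean = laplace_mechanism(data.mean(), 1 / len(data), epsilon=1.0, rng=rng)
```

With n = 10 records the sensitivity of the mean is 1/10, so at epsilon = 1 the noise scale is 0.1 and the released mean stays close to the true value of 0.7.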
no code implementations • 12 Feb 2024 • Mishaal Kazmi, Hadrien Lautraite, Alireza Akbari, Mauricio Soroco, Qiaoyue Tang, Tao Wang, Sébastien Gambs, Mathias Lécuyer
We introduce a privacy auditing scheme for ML models that relies on membership inference attacks using generated data as "non-members".
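The core idea of auditing via membership inference can be sketched as follows. This is a simplified, hypothetical illustration (not the paper's actual auditing scheme): a threshold attack distinguishes training members from "non-members" by their loss, with generated data standing in for the non-member set; high attack accuracy signals privacy leakage.

```python
import numpy as np

def audit_membership_leakage(member_losses, nonmember_losses):
    """Hypothetical helper: estimate a leakage signal by thresholding
    per-example losses. Members (training data) tend to have lower loss
    than 'non-members' (here, generated data). Returns the best attack
    accuracy over all loss thresholds; 0.5 means no detectable leakage."""
    losses = np.concatenate([member_losses, nonmember_losses])
    # Label 1 = member, 0 = non-member.
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    best_acc = 0.0
    for t in np.unique(losses):
        preds = (losses <= t).astype(float)  # low loss -> predict member
        best_acc = max(best_acc, float((preds == labels).mean()))
    return best_acc

# Toy illustration with synthetic losses: members fit better (lower loss),
# so the threshold attack separates them almost perfectly.
rng = np.random.default_rng(0)
members = rng.normal(0.2, 0.1, 1000)
nonmembers = rng.normal(0.8, 0.1, 1000)  # stand-in for generated data
acc = audit_membership_leakage(members, nonmembers)
```

When the two loss distributions overlap completely, the best achievable attack accuracy collapses to 0.5, i.e., the audit reports no measurable leakage.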
no code implementations • 22 Dec 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan attribution or recidivism prediction.
no code implementations • 19 Sep 2023 • Sofiane Azogagh, Zelma Aubin Birba, Sébastien Gambs, Marc-Olivier Killijian
The use of Crypto'Graph is illustrated for defense against graph poisoning attacks, in which it is possible to identify potential adversarial links without compromising the privacy of the graphs of individual parties.
no code implementations • 29 Aug 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
In addition, we demonstrate that under realistic assumptions regarding the interpretable models' structure, the uncertainty of the reconstruction can be computed efficiently.
no code implementations • 28 Feb 2023 • Voncarlos Marcelo Araújo, Sébastien Gambs, Clément Chion, Robert Michaud, Léo Schneider, Hadrien Lautraite
To efficiently monitor the growth and evolution of a particular wildlife population, one of the fundamental challenges in animal ecology is the re-identification of previously encountered individuals, together with the discrimination between known and unknown individuals (the so-called "open-set problem"), a step that must be solved before re-identification itself.
no code implementations • 2 Sep 2022 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
More precisely, we propose a generic reconstruction correction method, which takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes in the adversary's guess.
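A minimal sketch of this idea, under simplifying assumptions not taken from the paper: the adversary's initial guess is a vector of probabilities that each individual carries a sensitive attribute, and a user-defined constraint (here, the exact number of positives, as could be derived from released aggregate fairness statistics) is enforced while deviating as little as possible from that guess.

```python
import numpy as np

def correct_reconstruction(guess_probs, target_positive_count):
    """Hypothetical reconstruction-correction step: keep the
    target_positive_count most-confident positives from the adversary's
    initial probabilistic guess, which satisfies the count constraint
    while minimizing disagreement with the initial guess."""
    order = np.argsort(-guess_probs)  # most confident positives first
    corrected = np.zeros_like(guess_probs, dtype=int)
    corrected[order[:target_positive_count]] = 1
    return corrected

# Initial guess over 5 individuals; constraint: exactly 2 positives.
probs = np.array([0.9, 0.2, 0.7, 0.4, 0.1])
corrected = correct_reconstruction(probs, 2)
```

The two highest-confidence entries (indices 0 and 2) are kept as positives, so the corrected guess is [1, 0, 1, 0, 0]. The paper's generic method handles richer user-defined constraints than this simple count example.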
no code implementations • 1 Sep 2022 • Sébastien Gambs, Rosin Claude Ngueveu
In addition, our approach can be specialized to model existing state-of-the-art approaches, thus proposing a unifying view on these methods.
1 code implementation • NeurIPS 2021 • Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model.
1 code implementation • 3 Sep 2020 • Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs
Post-hoc explanation techniques refer to a posteriori methods that can be used to explain how black-box machine learning models produce their outcomes.
1 code implementation • 23 Mar 2020 • Claude Rosin Ngueveu, Antoine Boutet, Carole Frindel, Sébastien Gambs, Théo Jourdan
However, nothing prevents the service provider from inferring private and sensitive information about a user, such as health or demographic attributes. In this paper, we present DySan, a privacy-preserving framework that sanitizes motion sensor data against unwanted sensitive inferences (i.e., improving privacy) while limiting the loss of accuracy on physical activity monitoring (i.e., maintaining data utility).
no code implementations • 26 Sep 2019 • Ulrich Aïvodji, Sébastien Gambs, Timon Ther
While some model inversion attacks have been developed in the past in the black-box attack setting, in which the adversary does not have direct access to the structure of the model, few of these have been conducted so far against complex models such as deep neural networks.
1 code implementation • 9 Sep 2019 • Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
While it has been shown that interpretable models can be as accurate as black-box models in several critical domains, existing fair classification techniques that are interpretable by design often display poor accuracy/fairness tradeoffs in comparison with their non-interpretable counterparts.
no code implementations • 19 Jun 2019 • Ulrich Aïvodji, François Bidet, Sébastien Gambs, Rosin Claude Ngueveu, Alain Tapp
The widespread use of automated decision processes in many areas of our society raises serious ethical issues concerning the fairness of the process and the discrimination it may produce.
1 code implementation • 28 Jan 2019 • Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp
Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden from the auditor and generally complex -- produces its outcomes.