no code implementations • 18 Nov 2022 • Jan Aalmoes, Vasisht Duddu, Antoine Boutet
We are the first to demonstrate the alignment of group fairness with the specific privacy notion of attribute privacy in a blackbox setting.
1 code implementation • 21 Aug 2022 • Vasisht Duddu, Antoine Boutet
We focus on the specific privacy risk of attribute inference attacks, wherein an adversary infers sensitive attributes of an input (e.g., race and sex) given its model explanations.
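The attack described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's actual method or data): we simulate explanation vectors whose distribution shifts with a binary sensitive attribute, and the adversary fits per-class centroids on an auxiliary labelled set, then infers the attribute for unseen explanations by nearest centroid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each record has a model-explanation vector
# (e.g., feature attributions). We assume explanations correlate with
# a sensitive attribute s in {0, 1}, simulated here as a mean shift.
n, d = 200, 5
s = rng.integers(0, 2, size=n)                       # sensitive attribute
explanations = rng.normal(size=(n, d)) + s[:, None] * 0.8

# Adversary: fit per-class centroids on an auxiliary labelled split,
# then predict the attribute of new explanations by nearest centroid.
train, test = slice(0, 150), slice(150, n)
centroids = np.stack([explanations[train][s[train] == c].mean(axis=0)
                      for c in (0, 1)])

def infer_attribute(e):
    # distance from the explanation to each class centroid
    return int(np.argmin(np.linalg.norm(centroids - e, axis=1)))

preds = np.array([infer_attribute(e) for e in explanations[test]])
accuracy = (preds == s[test]).mean()
print(f"attack accuracy: {accuracy:.2f}")  # typically well above the 0.5 baseline
```

The point of the sketch is only that explanations can leak the sensitive attribute: any auxiliary-data classifier over explanation vectors stands in for the adversary here.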
no code implementations • 17 Aug 2022 • Túlio Pascoal, Jérémie Decouchant, Antoine Boutet, Marcus Völp
We introduce I-GWAS, a novel framework that securely computes and releases the results of multiple possibly interdependent GWASes.
no code implementations • 4 Feb 2022 • Jan Aalmoes, Vasisht Duddu, Antoine Boutet
This unpredictable effect of fairness mechanisms on the attribute privacy risk is an important limitation on their use that has to be accounted for by the model builder.
no code implementations • 26 Sep 2021 • Antoine Boutet, Thomas Lebrun, Jan Aalmoes, Adrien Baud
Boosted by Machine Learning as a Service (MLaaS), the number of applications relying on ML capabilities is ever increasing.
no code implementations • 15 Jun 2021 • Théo Jourdan, Antoine Boutet, Carole Frindel
While this scheme has been proposed as a local adaptation to improve the accuracy of the model through local personalization, it also has the advantage of minimizing the information about the model exchanged with the server.
no code implementations • 2 Oct 2020 • Vasisht Duddu, Antoine Boutet, Virat Shejwalkar
We choose quantization as design choice for highly efficient and private models.
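As a rough illustration of why quantization helps efficiency, here is a minimal sketch of symmetric per-tensor post-training quantization of a weight matrix to int8 (a common scheme; this is not the paper's specific design, and the privacy analysis is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical float32 weight matrix of a small model layer.
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)

def quantize_int8(x):
    """Symmetric per-tensor quantization to int8."""
    scale = np.abs(x).max() / 127.0       # map the largest weight to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, at a bounded rounding error.
print("storage: float32 =", w.nbytes, "bytes; int8 =", q.nbytes, "bytes")
print("max reconstruction error:", float(np.abs(w - w_hat).max()))
```

The coarser representation trades a small, bounded reconstruction error for a 4x reduction in weight storage and cheaper integer arithmetic.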
1 code implementation • 23 Mar 2020 • Claude Rosin Ngueveu, Antoine Boutet, Carole Frindel, Sébastien Gambs, Théo Jourdan
However, nothing prevents the service provider from inferring private and sensitive information about a user, such as health or demographic attributes. In this paper, we present DySan, a privacy-preserving framework to sanitize motion sensor data against unwanted sensitive inferences (i.e., improving privacy) while limiting the loss of accuracy on the physical activity monitoring (i.e., maintaining data utility).