Search Results for author: Antoine Boutet

Found 8 papers, 2 papers with code

On the Alignment of Group Fairness with Attribute Privacy

no code implementations 18 Nov 2022 Jan Aalmoes, Vasisht Duddu, Antoine Boutet

We are the first to demonstrate the alignment of group fairness with the specific privacy notion of attribute privacy in a black-box setting.

Attribute Fairness +1

Inferring Sensitive Attributes from Model Explanations

1 code implementation 21 Aug 2022 Vasisht Duddu, Antoine Boutet

We focus on the specific privacy risk of attribute inference attack wherein an adversary infers sensitive attributes of an input (e.g., race and sex) given its model explanations.

Attribute Inference Attack
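A minimal sketch of such an attribute inference attack, assuming synthetic explanation vectors and a logistic-regression attack model (both illustrative choices, not the paper's implementation):

```python
# Hypothetical attribute inference attack from model explanations: the
# adversary trains an "attack model" mapping explanation vectors (e.g.,
# feature attributions) to a sensitive attribute. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, size=n)          # hidden attribute (e.g., sex)
# Synthetic "explanations": attribution vectors whose mean shifts with the
# sensitive attribute, mimicking leakage through the explanation channel.
explanations = rng.normal(loc=sensitive[:, None] * 0.8, scale=1.0, size=(n, 5))

# Attack: fit on a held-out split of (explanation, attribute) pairs, then
# measure how well explanations alone reveal the attribute.
attack = LogisticRegression().fit(explanations[:600], sensitive[:600])
accuracy = attack.score(explanations[600:], sensitive[600:])
```

Any accuracy clearly above 50% on this binary attribute indicates leakage through the explanation channel.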

I-GWAS: Privacy-Preserving Interdependent Genome-Wide Association Studies

no code implementations 17 Aug 2022 Túlio Pascoal, Jérémie Decouchant, Antoine Boutet, Marcus Völp

We introduce I-GWAS, a novel framework that securely computes and releases the results of multiple possibly interdependent GWASes.

Privacy Preserving

Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks

no code implementations 4 Feb 2022 Jan Aalmoes, Vasisht Duddu, Antoine Boutet

This unpredictable effect of fairness mechanisms on the attribute privacy risk is an important limitation on their use, which the model builder has to account for.

Attribute Fairness +1

MixNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers

no code implementations 26 Sep 2021 Antoine Boutet, Thomas Lebrun, Jan Aalmoes, Adrien Baud

Boosted by Machine Learning as a Service (MLaaS), the number of applications relying on ML capabilities is ever increasing.

Attribute Federated Learning +2

Privacy Assessment of Federated Learning using Private Personalized Layers

no code implementations 15 Jun 2021 Théo Jourdan, Antoine Boutet, Carole Frindel

While this scheme has been proposed as a local adaptation to improve the accuracy of the model through local personalization, it also has the advantage of minimizing the information about the model exchanged with the server.

Attribute Federated Learning
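A minimal NumPy sketch of the personalized-layers idea, assuming a toy per-client model split into a shared base and a private head (illustrative shapes, not the paper's architecture):

```python
# Hypothetical federated-averaging round in which each client keeps a private
# "personalized" head on-device and shares only its base layer with the server.
import numpy as np

rng = np.random.default_rng(1)
clients = [{"base": rng.normal(size=(4, 3)),   # shared with the server
            "head": rng.normal(size=(3, 2))}   # stays on the device
           for _ in range(3)]

# Server aggregates ONLY the base layers; heads never leave the clients,
# which limits the model information the server can use for inference attacks.
global_base = np.mean([c["base"] for c in clients], axis=0)
for c in clients:
    c["base"] = global_base.copy()             # broadcast the averaged base
```

After the round, every client holds the same base but a distinct private head, so personalization and reduced exposure come from the same split.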

DYSAN: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks

1 code implementation 23 Mar 2020 Claude Rosin Ngueveu, Antoine Boutet, Carole Frindel, Sébastien Gambs, Théo Jourdan

However, nothing prevents the service provider from inferring private and sensitive information about a user, such as health or demographic attributes. In this paper, we present DySan, a privacy-preserving framework that sanitizes motion sensor data against unwanted sensitive inferences (i.e., improving privacy) while limiting the loss of accuracy on physical activity monitoring (i.e., maintaining data utility).

Activity Recognition Attribute +2
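DySan itself learns the sanitizer with adversarial networks; as an illustration only, a simple linear projection on synthetic data shows the same principle of removing the component most predictive of a sensitive attribute while leaving the rest of the signal intact:

```python
# Hypothetical, simplified stand-in for the sanitization idea: estimate the
# direction along which a sensitive attribute leaks into sensor data, then
# project it out. This is NOT DySan's adversarial training, just a sketch.
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 6
sensitive = rng.integers(0, 2, size=n).astype(float)
signal = rng.normal(size=(n, d))
signal[:, 0] += 2.0 * sensitive        # axis 0 leaks the sensitive attribute

# Leaking direction, estimated from the difference of per-class means.
leak_dir = signal[sensitive == 1].mean(0) - signal[sensitive == 0].mean(0)
leak_dir /= np.linalg.norm(leak_dir)

# Sanitize: subtract each sample's projection onto the leaking direction.
sanitized = signal - np.outer(signal @ leak_dir, leak_dir)

# Correlation between the leaking axis and the attribute, before vs. after.
before = abs(np.corrcoef(signal[:, 0], sensitive)[0, 1])
after = abs(np.corrcoef(sanitized[:, 0], sensitive)[0, 1])
```

The projection drives the correlation with the sensitive attribute toward zero while the other axes are untouched, which is the privacy/utility trade-off DySan optimizes with learned, dynamic sanitizers instead of a fixed linear map.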
