no code implementations • 28 May 2025 • Angéline Pouget, Mohammad Yaghini, Stephan Rabanser, Nicolas Papernot
Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reliable model performance on downstream user data without access to ground truth labels for direct validation.
no code implementations • 28 May 2025 • Mohammad Yaghini, Tudor Cebere, Michael Menart, Aurélien Bellet, Nicolas Papernot
To address this, we develop RaCO-DP, a DP variant of the Stochastic Gradient Descent-Ascent (SGDA) algorithm that solves the Lagrangian formulation of rate-constrained problems.
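RaCO-DP itself is not reproduced here, but the overall shape of a DP SGDA loop on a Lagrangian is straightforward to sketch. The toy example below uses hypothetical hyperparameters (alpha, C, sigma), a smooth surrogate for the rate constraint, and leaves the multiplier update unprivatized for brevity: it descends on the model parameters with per-sample clipped, Gaussian-noised gradients and ascends on a nonnegative Lagrange multiplier. A sketch under those assumptions, not the paper's algorithm.

```python
# Minimal sketch of DP stochastic gradient descent-ascent (SGDA) on a
# Lagrangian L(theta, lam) = loss(theta) + lam * (rate(theta) - alpha).
# Per-sample clipping plus Gaussian noise follows the standard DP-SGD recipe.
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)

theta, lam = np.zeros(d), 0.0
alpha, C, sigma = 0.3, 1.0, 1.0            # target rate, clip norm, noise scale
eta_theta, eta_lam, batch = 0.1, 0.05, 64

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    idx = rng.choice(n, batch, replace=False)
    Xb, yb = X[idx], y[idx]
    p = sigmoid(Xb @ theta)
    # Per-sample gradients of the Lagrangian: logistic loss plus lam times a
    # smooth surrogate of the positive-prediction rate (the mean of p).
    g = (p - yb)[:, None] * Xb + lam * (p * (1 - p))[:, None] * Xb
    # DP step: clip each per-sample gradient, average, add Gaussian noise.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / C)
    noisy_grad = g.mean(axis=0) + rng.normal(scale=sigma * C / batch, size=d)
    theta -= eta_theta * noisy_grad                       # descent on theta
    # Ascent on lam >= 0 (a full DP treatment would also privatize p.mean()).
    lam = max(0.0, lam + eta_lam * (p.mean() - alpha))

print("final rate:", sigmoid(X @ theta).mean(), "lambda:", lam)
```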
no code implementations • 15 Nov 2024 • Haonan Duan, Adam Dziedzic, Mohammad Yaghini, Nicolas Papernot, Franziska Boenisch
We show that deploying prompted models presents a significant privacy risk for the data used within the prompt by instantiating a highly effective membership inference attack.
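The paper's attack is not detailed here; the sketch below only illustrates the canonical confidence-threshold form of membership inference, with get_confidence as a simulated stand-in for querying the deployed prompted model and a hypothetical threshold tau that would in practice be calibrated, e.g., on shadow data.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
import numpy as np

rng = np.random.default_rng(1)

def get_confidence(x):
    """Stand-in for the deployed model's top-label probability on input x."""
    # Simulate: members (x[0] == 1) tend to receive higher confidence.
    base = 0.9 if x[0] == 1 else 0.6
    return np.clip(base + 0.05 * rng.normal(), 0.0, 1.0)

def infer_membership(x, tau=0.75):
    # Predict "member" when the model is unusually confident on x: the
    # classic signal exploited by membership inference attacks.
    return get_confidence(x) >= tau

members = [np.array([1.0, rng.normal()]) for _ in range(100)]
non_members = [np.array([0.0, rng.normal()]) for _ in range(100)]
tpr = np.mean([infer_membership(x) for x in members])
fpr = np.mean([infer_membership(x) for x in non_members])
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```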
no code implementations • 5 Feb 2024 • Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot
Existing work on trustworthy machine learning (ML) often concentrates on individual aspects of trust, such as fairness or privacy.
no code implementations • 17 Feb 2023 • Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot
Deploying machine learning (ML) models often requires both fairness and privacy guarantees.
1 code implementation • 6 Aug 2022 • Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot
They empirically argued the benefit of this approach by showing that spoofing (computing a proof for a stolen model) is as expensive as obtaining the proof honestly by training the model.
no code implementations • 25 Jul 2022 • Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot
We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations to compute $p$-values associated with the end-to-end model prediction.
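As a rough illustration of the p-value machinery such a procedure relies on, the sketch below uses random features as stand-ins for a network's intermediate representations, k-nearest-neighbor distances as a nonconformity score, and Fisher's method as one standard (not necessarily the paper's) way to combine per-layer p-values.

```python
# Minimal sketch of per-layer p-values combined into one end-to-end p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_cal, n_layers, d, k = 200, 3, 8, 10

# Stand-ins for each layer's calibration-set representations.
cal_reps = [rng.normal(size=(n_cal, d)) for _ in range(n_layers)]

def nonconformity(rep, layer_reps):
    # Nonconformity score: mean distance to the k nearest calibration points
    # in this layer's representation space (higher = less conforming).
    dists = np.linalg.norm(layer_reps - rep, axis=1)
    return np.sort(dists)[:k].mean()

# Calibration scores (we skip the leave-one-out correction for brevity).
cal_scores = [np.array([nonconformity(r, reps) for r in reps])
              for reps in cal_reps]

def layer_p_value(rep, layer, scores):
    # Empirical p-value: fraction of calibration scores at least as extreme.
    s = nonconformity(rep, cal_reps[layer])
    return (1 + np.sum(scores >= s)) / (1 + len(scores))

def combined_p_value(test_reps):
    # Fisher's method turns per-layer p-values into a single test statistic.
    ps = [layer_p_value(test_reps[l], l, cal_scores[l]) for l in range(n_layers)]
    stat = -2 * np.sum(np.log(ps))
    return stats.chi2.sf(stat, df=2 * n_layers)

in_dist = [rng.normal(size=d) for _ in range(n_layers)]
out_dist = [rng.normal(loc=4.0, size=d) for _ in range(n_layers)]
print("in-distribution p:", round(combined_p_value(in_dist), 3))
print("outlier p:", round(combined_p_value(out_dist), 3))
```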
no code implementations • 6 Feb 2022 • Shimaa Ahmed, Yash Wani, Ali Shahin Shamsabadi, Mohammad Yaghini, Ilia Shumailov, Nicolas Papernot, Kassem Fawaz
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning.
no code implementations • 20 Sep 2021 • Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society.
1 code implementation • ICLR 2021 • Pratyush Maini, Mohammad Yaghini, Nicolas Papernot
We thus introduce $dataset$ $inference$, the process of identifying whether a suspected model copy has private knowledge from the original model's dataset, as a defense against model stealing.
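At its core, dataset inference is a hypothesis test: a stolen copy should behave differently on the victim's private training data than on unseen data. The sketch below illustrates that test with margin_of as a simulated stand-in for querying the suspect model; the actual method extracts richer distance-to-decision-boundary features before testing.

```python
# Minimal sketch of the hypothesis test at the core of dataset inference:
# if a suspect model was stolen, its prediction margins on the victim's
# private training data should be systematically larger than on unseen data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def margin_of(x, is_train_point):
    """Stand-in: stolen models show inflated confidence margins on train data."""
    return rng.normal(loc=1.0 if is_train_point else 0.6, scale=0.3)

train_margins = np.array([margin_of(None, True) for _ in range(100)])
unseen_margins = np.array([margin_of(None, False) for _ in range(100)])

# One-sided test: reject "no private knowledge" if train margins are larger.
t, p = stats.ttest_ind(train_margins, unseen_margins, alternative="greater")
print(f"t={t:.2f}, p={p:.4f} -> {'flag as stolen' if p < 0.01 else 'inconclusive'}")
```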
2 code implementations • 9 Mar 2021 • Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot
In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.
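The sketch below illustrates the general checkpoint-and-replay structure of such proofs, with an illustrative deterministic update rule and tolerance eps rather than the paper's exact protocol: the prover logs periodic checkpoints and the batches used in between, and a verifier re-executes a sampled training segment and checks that it reproduces the next checkpoint.

```python
# Minimal sketch of proof-of-learning style logging and verification.
import numpy as np

rng = np.random.default_rng(4)
d, steps, log_every = 4, 40, 10

def sgd_update(theta, batch_seed):
    # Deterministic stand-in for one SGD step on the batch named by batch_seed.
    g = np.random.default_rng(batch_seed).normal(size=theta.shape)
    return theta - 0.01 * g

# Prover: train while logging (checkpoint, batch seeds) pairs as the proof.
theta = np.zeros(d)
proof = [(theta.copy(), [])]
for t in range(steps):
    theta = sgd_update(theta, batch_seed=t)
    proof[-1][1].append(t)
    if (t + 1) % log_every == 0:
        proof.append((theta.copy(), []))

# Verifier: replay a randomly sampled segment and compare checkpoints.
seg = rng.integers(len(proof) - 1)
replay, seeds = proof[seg][0].copy(), proof[seg][1]
for s in seeds:
    replay = sgd_update(replay, batch_seed=s)
eps = 1e-6
print("segment", seg, "verified:",
      np.linalg.norm(replay - proof[seg + 1][0]) < eps)
```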
no code implementations • 8 Nov 2019 • Mohammad Yaghini, Andreas Krause, Hoda Heidari
Our family of fairness notions corresponds to a new interpretation of economic models of Equality of Opportunity (EOP), and it includes most existing notions of fairness as special cases.
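As one concrete special case covered by such a family, the sketch below computes equal opportunity, i.e., parity of true positive rates across groups, on synthetic stand-in data with an illustrative decision threshold.

```python
# Minimal sketch of an equal-opportunity check (TPR parity across groups).
import numpy as np

rng = np.random.default_rng(5)
n = 1000
group = rng.integers(2, size=n)              # sensitive attribute: 0 or 1
label = rng.integers(2, size=n)              # true outcome
score = 0.5 * label + 0.1 * group + 0.4 * rng.random(n)
pred = (score > 0.55).astype(int)

def tpr(g):
    mask = (group == g) & (label == 1)       # qualified members of group g
    return pred[mask].mean()

gap = abs(tpr(0) - tpr(1))
print(f"TPR group 0: {tpr(0):.2f}, group 1: {tpr(1):.2f}, EO gap: {gap:.2f}")
```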
2 code implementations • 2 Jun 2019 • Bogdan Kulynych, Mohammad Yaghini, Giovanni Cherubin, Michael Veale, Carmela Troncoso
Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model.
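Disparate vulnerability refers to subgroups being unequally exposed to membership inference. The sketch below illustrates one way to measure it, as the gap in attack accuracy across groups, with attack_correct as a simulated stand-in for running a fixed attack against the target model on one record.

```python
# Minimal sketch of measuring disparate vulnerability: the gap in membership
# inference attack accuracy between subgroups.
import numpy as np

rng = np.random.default_rng(6)

def attack_correct(group):
    # Simulate: the attack succeeds more often on group 1.
    return rng.random() < (0.75 if group == 1 else 0.55)

records = [(g, attack_correct(g)) for g in rng.integers(2, size=2000)]
acc = {g: np.mean([ok for gg, ok in records if gg == g]) for g in (0, 1)}
print(f"attack accuracy: group0={acc[0]:.2f}, group1={acc[1]:.2f}, "
      f"disparate vulnerability={abs(acc[0] - acc[1]):.2f}")
```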