no code implementations • 9 Feb 2024 • Antti Koskela, Rachel Redberg, Yu-Xiang Wang
Private selection mechanisms (e.g., Report Noisy Max, Sparse Vector) are fundamental primitives of differentially private (DP) data analysis with wide applications to private query release, voting, and hyperparameter tuning.
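As a concrete illustration of the Report Noisy Max primitive mentioned above (a generic sketch, not this paper's construction), one can add Laplace noise to each candidate's score and release only the winning index; the scale 2·sensitivity/ε used below is a conservative choice that yields ε-DP:

```python
import numpy as np

def report_noisy_max(scores, eps, sensitivity=1.0, rng=None):
    """Report Noisy Max sketch: add Laplace noise of scale
    2 * sensitivity / eps to each score and release only the argmax.
    This conservative scale gives eps-DP for general score functions."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(scores, dtype=float) + rng.laplace(
        loc=0.0, scale=2.0 * sensitivity / eps, size=len(scores))
    return int(np.argmax(noisy))
```

Only the index is released, never the noisy scores themselves; releasing the scores as well would require a separate privacy analysis.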
no code implementations • NeurIPS 2023 • Rachel Redberg, Antti Koskela, Yu-Xiang Wang
In the arena of privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) has outstripped the objective perturbation mechanism in popularity and interest.
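For reference, the DP-SGD update mentioned above follows the well-known clip-then-noise pattern (Abadi et al.); a minimal NumPy sketch of one step, with hypothetical parameter names:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    clip_norm, sum, add Gaussian noise with standard deviation
    clip_norm * noise_multiplier, then take an averaged gradient step."""
    rng = rng or np.random.default_rng()
    total = np.zeros_like(np.asarray(params), dtype=float)
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        total += g * min(1.0, clip_norm / max(norm, 1e-12))
    total += rng.normal(0.0, clip_norm * noise_multiplier, size=total.shape)
    return params - lr * total / len(per_example_grads)
```

Objective perturbation, by contrast, adds noise once to the training objective rather than to every gradient step.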
no code implementations • NeurIPS 2023 • Antti Koskela, Tejas Kulkarni
Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, which may leak private information via the hyperparameter values.
1 code implementation • 30 Sep 2022 • Antti Koskela, Marlon Tobaben, Antti Honkela
In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss.
no code implementations • 1 Jun 2021 • Antti Koskela, Mikko A. Heikkilä, Antti Honkela
The shuffle model of differential privacy is a distributed privacy model based on a combination of local privacy mechanisms and a secure shuffler.
no code implementations • 24 Feb 2021 • Antti Koskela, Antti Honkela
The recently proposed Fast Fourier Transform (FFT)-based accountant for evaluating $(\varepsilon,\delta)$-differential privacy guarantees using the privacy loss distribution formalism has been shown to give tighter bounds than commonly used methods such as Rényi accountants when applied to homogeneous compositions, i.e., to compositions of identical mechanisms.
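The core idea behind such FFT-based accountants is that composing mechanisms corresponds to convolving their privacy loss distributions, which the FFT performs efficiently on a discretised grid. A toy sketch (not the paper's implementation, and omitting the discretisation-error bounds the accountant provides), using the two-point loss distribution of randomised response:

```python
import numpy as np

def compose_pld_fft(probs, k):
    """k-fold composition of a discretised privacy loss distribution
    via FFT: k-fold linear self-convolution of the probability vector,
    defined on an equispaced grid of privacy-loss values."""
    n = len(probs) * k - (k - 1)  # support length after k convolutions
    m = 1
    while m < n:
        m *= 2
    F = np.fft.rfft(probs, m)
    out = np.fft.irfft(F ** k, m)[:n]
    out = np.clip(out, 0.0, None)
    return out / out.sum()

def delta_from_pld(losses, probs, eps):
    """delta(eps) = E[(1 - e^{eps - s})_+] over the privacy loss s."""
    mask = losses > eps
    return float(np.sum(probs[mask] * (1.0 - np.exp(eps - losses[mask]))))

# Randomised response with eps0 = 0.1: loss is +/-eps0 on the grid
# [-eps0, 0, eps0], with P(loss = eps0) = e^{eps0} / (1 + e^{eps0}).
eps0 = 0.1
p = np.exp(eps0) / (1 + np.exp(eps0))
base = np.array([1 - p, 0.0, p])
k = 8
composed = compose_pld_fft(base, k)
losses = np.arange(-k, k + 1) * eps0
delta = delta_from_pld(losses, composed, eps=0.5)
```

For a single randomised-response mechanism this recovers the known closed form $\delta(0) = \tanh(\varepsilon_0/2)$, which is a useful sanity check for the discretised pipeline.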
no code implementations • 1 Nov 2020 • Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela
Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in the data analyst's repertoire, and they are often applied to sensitive datasets.
1 code implementation • 10 Jul 2020 • Mikko A. Heikkilä, Antti Koskela, Kana Shimizu, Samuel Kaski, Antti Honkela
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
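One common way to realise secure summation, sketched here for illustration only (the paper uses additively homomorphic encryption, not this pairwise-masking variant), is for each party to add random masks that cancel in the aggregate, plus its own DP noise, so that only the noisy sum is ever revealed:

```python
import numpy as np

def masked_updates(updates, rng, noise_scale):
    """Pairwise-mask secure-summation sketch: party i adds mask r_ij for
    each j > i and subtracts r_ji for each j < i, so all masks cancel in
    the sum. Each party also adds Gaussian noise of scale noise_scale,
    so only the noisy aggregate is recoverable."""
    n = len(updates)
    shape = updates[0].shape
    masks = {(i, j): rng.normal(size=shape)
             for i in range(n) for j in range(i + 1, n)}
    out = []
    for i in range(n):
        m = updates[i] + rng.normal(0.0, noise_scale, size=shape)
        for j in range(n):
            if i < j:
                m = m + masks[(i, j)]
            elif j < i:
                m = m - masks[(j, i)]
        out.append(m)
    return out
```

Each individual masked vector looks random to the aggregator, but summing all of them cancels the masks and leaves the (noisy) sum of the true updates.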
1 code implementation • 12 Jun 2020 • Antti Koskela, Joonas Jälkö, Lukas Prediger, Antti Honkela
We carry out an error analysis of the method in terms of moment bounds of the privacy loss distribution which leads to rigorous lower and upper bounds for the true $(\varepsilon,\delta)$-values.
1 code implementation • 7 Jun 2019 • Antti Koskela, Joonas Jälkö, Antti Honkela
The privacy loss of DP algorithms is commonly reported using $(\varepsilon,\delta)$-DP.
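As a point of reference for such $(\varepsilon,\delta)$ reporting, the Gaussian mechanism admits a closed-form $\delta(\varepsilon)$ (the tight bound of Balle and Wang, 2018), which is a standard baseline against which numerical accountants are checked:

```python
from math import erfc, exp, sqrt

def gaussian_delta(eps, sigma, sensitivity=1.0):
    """Tight delta(eps) for the Gaussian mechanism (Balle & Wang, 2018):
    delta = Phi(D/(2s) - eps*s/D) - e^eps * Phi(-D/(2s) - eps*s/D),
    with D = sensitivity and s = sigma."""
    def Phi(x):  # standard normal CDF via the complementary error function
        return 0.5 * erfc(-x / sqrt(2.0))
    a = sensitivity / (2.0 * sigma)
    b = eps * sigma / sensitivity
    return Phi(a - b) - exp(eps) * Phi(-a - b)
```

For example, with `sigma = 1` and unit sensitivity, `gaussian_delta(1.0, 1.0)` gives roughly 0.127, and $\delta(\varepsilon)$ decreases monotonically as $\varepsilon$ grows.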
1 code implementation • 11 Sep 2018 • Antti Koskela, Antti Honkela
We also show that, unlike commonly used optimisation methods, it works robustly in the federated learning setting.