no code implementations • 9 May 2025 • Antti Koskela, Mohamed Seif, Andrea J. Goldsmith
We investigate privacy-preserving spectral clustering for community detection within stochastic block models (SBMs).
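A minimal sketch of the general recipe, assuming a Gaussian perturbation of the adjacency matrix followed by standard spectral clustering (the noise scale and SBM parameters below are illustrative, not calibrated to a specific $(\varepsilon,\delta)$ target, and this is not necessarily the paper's exact mechanism):

```python
import numpy as np

def sbm_adjacency(n, p_in, p_out, rng):
    """Sample a two-community stochastic block model adjacency matrix."""
    labels = rng.integers(0, 2, size=n)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, 1)
    return (upper + upper.T).astype(float), labels

def private_spectral_partition(A, sigma, rng):
    """Perturb A with symmetric Gaussian noise, then split on the sign of
    the second leading eigenvector (illustrative, uncalibrated noise)."""
    n = A.shape[0]
    noise = np.triu(rng.normal(0.0, sigma, size=(n, n)), 1)
    A_priv = A + noise + noise.T         # keep the matrix symmetric
    _, eigvecs = np.linalg.eigh(A_priv)  # eigenvalues in ascending order
    return (eigvecs[:, -2] > 0).astype(int)

rng = np.random.default_rng(0)
A, labels = sbm_adjacency(400, p_in=0.3, p_out=0.05, rng=rng)
pred = private_spectral_partition(A, sigma=0.5, rng=rng)
print(max(np.mean(pred == labels), np.mean(pred != labels)))  # up to label flip
```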
no code implementations • 11 Dec 2024 • Dong Chen, Alice Dethise, Istemi Ekin Akkus, Ivica Rimac, Klaus Satzke, Antti Koskela, Marco Canini, Wei Wang, Ruichuan Chen
During this collaboration, however, dataset owners and model owners want to protect the confidentiality of their respective assets (i.e., datasets, models, and training code), and the dataset owners additionally care about the privacy of the individual users whose data is contained in their datasets.
no code implementations • 5 Jul 2024 • Antti Koskela
We demonstrate that it is possible to privately train convex problems with privacy-utility trade-offs comparable to those of one-hidden-layer ReLU networks trained with differentially private stochastic gradient descent (DP-SGD).
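For context, the DP-SGD baseline referenced here clips each per-example gradient and perturbs it with Gaussian noise. A minimal sketch for logistic regression, with batch size 1 for brevity (practical DP-SGD averages over Poisson-sampled minibatches); the clip norm and noise multiplier are illustrative:

```python
import numpy as np

def dp_sgd_logreg(X, y, lr=0.1, clip=1.0, noise_mult=1.0, epochs=5, seed=0):
    """DP-SGD sketch: per-example gradient clipping + Gaussian noise."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            g = (p - y[i]) * X[i]                       # per-example gradient
            g /= max(1.0, np.linalg.norm(g) / clip)     # clip to norm <= clip
            g += rng.normal(0.0, noise_mult * clip, d)  # Gaussian mechanism
            w -= lr * g
    return w
```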
no code implementations • 7 Jun 2024 • Antti Koskela, Jafar Mohammadi
Previous auditing methods tightly capture the privacy guarantees of DP-SGD-trained models in the white-box setting, where the auditor has access to all intermediate models; however, their success depends on a priori knowledge of the parametric form of the noise and of the subsampling ratio used to sample the gradients.
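In the black-box direction, a standard way to turn the outcomes of a membership-inference attack into an empirical lower bound on $\varepsilon$ uses the DP hypothesis-testing inequality; a simplified sketch that omits the confidence intervals a careful audit requires (and which is not specifically the paper's estimator):

```python
import numpy as np

def empirical_epsilon(fpr, fnr, delta=1e-5):
    """Lower-bound epsilon from attack false positive/negative rates, using
    1 - FNR <= exp(eps) * FPR + delta and its symmetric counterpart."""
    eps1 = np.log((1 - delta - fpr) / fnr) if fnr > 0 else np.inf
    eps2 = np.log((1 - delta - fnr) / fpr) if fpr > 0 else np.inf
    return max(eps1, eps2, 0.0)

print(empirical_epsilon(fpr=0.05, fnr=0.20))  # ~2.77
```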
no code implementations • 9 Feb 2024 • Antti Koskela, Rachel Redberg, Yu-Xiang Wang
Private selection mechanisms (e.g., Report Noisy Max, Sparse Vector) are fundamental primitives of differentially private (DP) data analysis, with wide applications to private query release, voting, and hyperparameter tuning.
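Report Noisy Max, for instance, adds independent noise to each candidate's score and releases only the index of the maximum. A minimal sketch for sensitivity-1 scores (the Laplace scale $2/\varepsilon$ is a conservative choice):

```python
import numpy as np

def report_noisy_max(scores, epsilon, seed=None):
    """Return the argmax after adding Laplace noise to each score;
    assumes each score changes by at most 1 on neighbouring datasets."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(scores, float) + rng.laplace(0.0, 2.0 / epsilon, len(scores))
    return int(np.argmax(noisy))

print(report_noisy_max([10.0, 12.0, 11.5], epsilon=1.0))
```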
no code implementations • NeurIPS 2023 • Rachel Redberg, Antti Koskela, Yu-Xiang Wang
In the arena of privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) has outstripped the objective perturbation mechanism in popularity and interest.
no code implementations • NeurIPS 2023 • Antti Koskela, Tejas Kulkarni
Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, and this may leak private information via the hyperparameter values.
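One known remedy, shown here purely as an illustrative stand-in rather than the paper's method, is random-stopping tuning in the spirit of Papernot and Steinke: run the DP base algorithm a geometrically distributed number of times and release only the best candidate, which keeps the total privacy cost close to that of a single run:

```python
import numpy as np

def private_tune(run_candidate, candidates, stop_prob=0.1, seed=None):
    """Evaluate randomly drawn hyperparameter candidates, stopping after each
    run with probability stop_prob; release only the best (score, hp) pair.
    run_candidate must itself be a DP mechanism returning a DP score."""
    rng = np.random.default_rng(seed)
    best_score, best_hp = -np.inf, None
    while True:
        hp = candidates[rng.integers(len(candidates))]
        score = run_candidate(hp)
        if score > best_score:
            best_score, best_hp = score, hp
        if rng.random() < stop_prob:   # geometric stopping
            return best_score, best_hp
```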
1 code implementation • 30 Sep 2022 • Antti Koskela, Marlon Tobaben, Antti Honkela
In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss.
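A minimal sketch of the idea, using Rényi DP composition as an illustrative stand-in (the paper itself works with privacy loss distributions): each data access records its own, possibly smaller-than-worst-case, cost, and the accountant composes them adaptively.

```python
import numpy as np

class AdaptiveGaussianAccountant:
    """Accumulate per-access RDP costs of the Gaussian mechanism; accesses
    with smaller sensitivity cost less than the worst case."""
    def __init__(self, sigma, alphas=(2., 4., 8., 16., 32., 64.)):
        self.sigma = sigma
        self.alphas = np.array(alphas)
        self.rdp = np.zeros(len(alphas))

    def record(self, sensitivity):
        # RDP of the Gaussian mechanism: alpha * Delta^2 / (2 sigma^2)
        self.rdp += self.alphas * sensitivity**2 / (2 * self.sigma**2)

    def epsilon(self, delta):
        # standard RDP -> (eps, delta) conversion, minimised over alpha
        return (self.rdp + np.log(1 / delta) / (self.alphas - 1)).min()

acc = AdaptiveGaussianAccountant(sigma=4.0)
for sens in [1.0, 0.3, 0.0, 0.7]:   # individual, data-dependent sensitivities
    acc.record(sens)
print(acc.epsilon(delta=1e-5))
```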
no code implementations • 1 Jun 2021 • Antti Koskela, Mikko A. Heikkilä, Antti Honkela
The shuffle model of differential privacy is a novel distributed privacy model based on a combination of local privacy mechanisms and a secure shuffler.
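A minimal sketch of the pipeline for one-bit inputs, with randomized response as the local mechanism and a random permutation standing in for the secure shuffler:

```python
import numpy as np

def shuffle_model(bits, eps0, seed=None):
    """Each client applies eps0-DP randomized response; the shuffler then
    discards the ordering, which amplifies the local guarantee centrally."""
    rng = np.random.default_rng(seed)
    bits = np.asarray(bits)
    p = np.exp(eps0) / (np.exp(eps0) + 1)    # probability of reporting truthfully
    flips = rng.random(len(bits)) >= p
    reports = np.where(flips, 1 - bits, bits)
    return rng.permutation(reports)          # the secure shuffler

data = np.random.default_rng(1).integers(0, 2, 1000)
out = shuffle_model(data, eps0=2.0, seed=2)
p = np.exp(2.0) / (np.exp(2.0) + 1)
print((out.mean() - (1 - p)) / (2 * p - 1))  # debiased frequency estimate
```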
no code implementations • 24 Feb 2021 • Antti Koskela, Antti Honkela
The recently proposed Fast Fourier Transform (FFT)-based accountant for evaluating $(\varepsilon,\delta)$-differential privacy guarantees using the privacy loss distribution formalism has been shown to give tighter bounds than commonly used methods such as Rényi accountants when applied to homogeneous compositions, i.e., to compositions of identical mechanisms.
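The core idea, as a simplified numerical sketch for the k-fold composition of the Gaussian mechanism (ignoring the truncation- and discretization-error bookkeeping that the rigorous accountant adds): discretize the privacy loss distribution, take its k-th convolution power via FFT, and read off $\delta(\varepsilon)$.

```python
import numpy as np

def delta_fft(sigma, k, eps, L=40.0, n=2**16):
    """delta(eps) for the k-fold composed Gaussian mechanism (sensitivity 1)
    via FFT convolution of the discretized privacy loss distribution."""
    ds = 2 * L / n
    s = -L + ds * np.arange(n)           # periodic grid with s = 0 at index n//2
    mu = 1.0 / (2 * sigma**2)            # one-step PLD is N(mu, 2*mu)
    pmf = np.exp(-(s - mu)**2 / (4 * mu)) / np.sqrt(4 * np.pi * mu) * ds
    f = np.fft.fft(np.fft.ifftshift(pmf))
    pmf_k = np.real(np.fft.fftshift(np.fft.ifft(f**k)))  # k-fold convolution
    tail = s > eps
    return float(np.sum((1.0 - np.exp(eps - s[tail])) * pmf_k[tail]))

print(delta_fft(sigma=5.0, k=100, eps=6.0))
```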
no code implementations • 1 Nov 2020 • Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela
Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in the data analyst's repertoire and are often applied to sensitive datasets.
1 code implementation • 10 Jul 2020 • Mikko A. Heikkilä, Antti Koskela, Kana Shimizu, Samuel Kaski, Antti Honkela
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
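A minimal sketch of the combination for scalar updates: pairwise additive masking over a finite field stands in for the homomorphic summation protocol, each silo contributes a share of the Gaussian DP noise, and the masks cancel in the sum (the modulus, fixed-point scale, and noise split are illustrative; a real protocol derives the pairwise secrets via key agreement and accounts for collusion):

```python
import numpy as np

Q = 2**61 - 1      # field modulus (illustrative)
SCALE = 10**6      # fixed-point encoding scale

def masked_reports(values, sigma, rng):
    """Each silo adds its share of the DP noise, encodes in fixed point, and
    applies pairwise masks r_ij that cancel when all reports are summed."""
    n = len(values)
    noisy = [v + rng.normal(0.0, sigma / np.sqrt(n)) for v in values]
    enc = [int(round(x * SCALE)) % Q for x in noisy]
    masks = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            r = int(rng.integers(0, Q))  # secret shared by silos i and j
            masks[i] = (masks[i] + r) % Q
            masks[j] = (masks[j] - r) % Q
    return [(e + m) % Q for e, m in zip(enc, masks)]

rng = np.random.default_rng(0)
reports = masked_reports([1.0, 2.0, 3.0], sigma=0.5, rng=rng)
total = sum(reports) % Q                        # masks cancel; only the sum remains
total = total - Q if total > Q // 2 else total  # decode signed fixed point
print(total / SCALE)                            # noisy sum, approximately 6.0
```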
1 code implementation • 12 Jun 2020 • Antti Koskela, Joonas Jälkö, Lukas Prediger, Antti Honkela
We carry out an error analysis of the method in terms of moment bounds of the privacy loss distribution, which leads to rigorous lower and upper bounds for the true $(\varepsilon,\delta)$-values.
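Concretely, for a privacy loss distribution with density $\omega$, the tight $\delta$ as a function of $\varepsilon$ is given by $\delta(\varepsilon) = \int_{\varepsilon}^{\infty} \left(1 - e^{\varepsilon - s}\right) \omega(s)\,\mathrm{d}s$, and the moment bounds control the error incurred when this integral is evaluated on a truncated, discretized grid.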
1 code implementation • 7 Jun 2019 • Antti Koskela, Joonas Jälkö, Antti Honkela
The privacy loss of DP algorithms is commonly reported using $(\varepsilon,\delta)$-DP.
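For reference, a mechanism $\mathcal{M}$ is $(\varepsilon,\delta)$-DP if for all neighbouring datasets $D$, $D'$ and all measurable sets $S$ of outputs, $\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \Pr[\mathcal{M}(D') \in S] + \delta$.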
1 code implementation • 11 Sep 2018 • Antti Koskela, Antti Honkela
We also show that, unlike commonly used optimisation methods, it works robustly in the federated learning setting.