Search Results for author: Antti Koskela

Found 11 papers, 5 papers with code

Privacy Profiles for Private Selection

no code implementations · 9 Feb 2024 · Antti Koskela, Rachel Redberg, Yu-Xiang Wang

Private selection mechanisms (e.g., Report Noisy Max, Sparse Vector) are fundamental primitives of differentially private (DP) data analysis with wide applications to private query release, voting, and hyperparameter tuning.
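As a minimal illustration of one such primitive, here is a sketch of Report Noisy Max for sensitivity-1 counting queries (noise scale Δ/ε as in the standard analysis for counting queries; the scores below are made up):

```python
import numpy as np

def report_noisy_max(scores, sensitivity, epsilon, rng):
    """Add independent Laplace noise to each score and release only the
    argmax; for counting queries this satisfies epsilon-DP."""
    noise = rng.laplace(scale=sensitivity / epsilon, size=len(scores))
    return int(np.argmax(np.asarray(scores, dtype=float) + noise))

rng = np.random.default_rng(0)
# With a large gap between the top score and the rest, the true
# argmax is returned with overwhelming probability.
idx = report_noisy_max([10.0, 1.0, 0.5], sensitivity=1.0, epsilon=5.0, rng=rng)
```

Note that only the index is released, never the noisy scores themselves; releasing the scores would require a different (weaker) analysis.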

Improving the Privacy and Practicality of Objective Perturbation for Differentially Private Linear Learners

no code implementations · NeurIPS 2023 · Rachel Redberg, Antti Koskela, Yu-Xiang Wang

In the arena of privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) has outstripped the objective perturbation mechanism in popularity and interest.
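The abstract contrasts objective perturbation with DP-SGD; the latter's core step, per-example gradient clipping followed by Gaussian noising, can be sketched as follows (the gradient values and hyperparameters are illustrative, not from the paper):

```python
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD step: clip each example's gradient to L2 norm <= clip_norm,
    sum, add Gaussian noise scaled to the clipping norm, and average."""
    g = np.asarray(per_example_grads, dtype=float)        # shape (n, d)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g / np.maximum(1.0, norms / clip_norm)    # per-example clipping
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=g.shape[1])
    return (g_clipped.sum(axis=0) + noise) / len(g)       # noisy mean gradient

rng = np.random.default_rng(0)
grads = np.array([[3.0, 4.0], [0.3, 0.4]])   # L2 norms 5.0 and 0.5
# noise_multiplier=0 isolates the clipping behaviour for inspection
step = dp_sgd_update(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```

With zero noise, the first gradient (norm 5) is scaled down to norm 1 while the second (norm 0.5) passes through unchanged, so the averaged update is exactly the clipped mean.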

Privacy Preserving · regression

Practical Differentially Private Hyperparameter Tuning with Subsampling

no code implementations · NeurIPS 2023 · Antti Koskela, Tejas Kulkarni

Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, and this may leak private information via the hyperparameter values.

Individual Privacy Accounting with Gaussian Differential Privacy

1 code implementation · 30 Sep 2022 · Antti Koskela, Marlon Tobaben, Antti Honkela

In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss.
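A minimal sketch of the composition rule underlying Gaussian differential privacy (Dong, Roth and Su, 2019), on which such an accountant can build: adaptively composing μ_i-GDP mechanisms yields √(Σ μ_i²)-GDP, so an individual who is untouched by some accesses accrues a smaller total μ than the worst case. The μ values below are illustrative:

```python
import math

def compose_gdp(mus):
    """Adaptive composition in Gaussian DP: composing mu_i-GDP mechanisms
    yields sqrt(sum of mu_i^2)-GDP."""
    return math.sqrt(sum(m * m for m in mus))

# Individual accounting idea: a person accrues only the mu of the data
# accesses that actually touched their record (mu = 0 for the rest).
alice = compose_gdp([0.1, 0.1, 0.0])   # Alice's data skipped the third query
worst = compose_gdp([0.1, 0.1, 0.5])   # worst-case individual
```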

Tight Accounting in the Shuffle Model of Differential Privacy

no code implementations · 1 Jun 2021 · Antti Koskela, Mikko A. Heikkilä, Antti Honkela

The shuffle model of differential privacy is a novel distributed privacy model based on a combination of local privacy mechanisms and a secure shuffler.

Computing Differential Privacy Guarantees for Heterogeneous Compositions Using FFT

no code implementations · 24 Feb 2021 · Antti Koskela, Antti Honkela

The recently proposed Fast Fourier Transform (FFT)-based accountant for evaluating $(\varepsilon,\delta)$-differential privacy guarantees using the privacy loss distribution formalism has been shown to give tighter bounds than commonly used methods such as Rényi accountants when applied to homogeneous compositions, i.e., to compositions of identical mechanisms.
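A simplified sketch of the homogeneous-composition idea: discretise the privacy loss distribution (PLD) on a uniform grid, compose it n-fold by raising its Fourier transform to the n-th power, and read off a tight δ(ε) with the standard formula δ(ε) = E[max(0, 1 − e^{ε−s})]. The grid and pmf below are toy values, not the paper's implementation (which also bounds the discretisation error):

```python
import numpy as np

def compose_pld_fft(pmf, grid_start, grid_step, n):
    """n-fold homogeneous composition of a discretised PLD: the composed
    PLD is the n-fold convolution of the single-step PLD, computed in
    O(N log N) via the FFT (zero-padded to the full support length)."""
    size = n * (len(pmf) - 1) + 1
    spectrum = np.fft.rfft(pmf, n=size)
    composed = np.fft.irfft(spectrum ** n, n=size)
    composed = np.clip(composed, 0.0, None)   # remove tiny negative FFT noise
    grid = grid_start * n + grid_step * np.arange(size)
    return grid, composed

def delta_from_pld(grid, pmf, eps):
    """Tight delta(eps) from a PLD: sum over s of max(0, 1 - e^(eps - s)) * p(s)."""
    return float(np.sum(np.maximum(0.0, 1.0 - np.exp(eps - grid)) * pmf))

# Toy PLD: privacy loss 0 or 1, each with probability 1/2, composed twice.
grid, composed = compose_pld_fft(np.array([0.5, 0.5]),
                                 grid_start=0.0, grid_step=1.0, n=2)
```

The two-fold composition of this toy PLD is simply the convolution [0.25, 0.5, 0.25] on the grid {0, 1, 2}, which is easy to verify by hand.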

Differentially Private Bayesian Inference for Generalized Linear Models

no code implementations · 1 Nov 2020 · Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela

Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in the data analyst's repertoire and are often used on sensitive datasets.

Bayesian Inference · regression

Differentially private cross-silo federated learning

1 code implementation · 10 Jul 2020 · Mikko A. Heikkilä, Antti Koskela, Kana Shimizu, Samuel Kaski, Antti Honkela

In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
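To illustrate why only the sum is revealed, here is a toy secure summation based on pairwise additive masking — a simpler alternative to the additively homomorphic protocol the paper actually uses. Each pair of parties shares a random mask that one adds and the other subtracts, so the masks cancel in the total while each masked input looks uniformly random:

```python
import numpy as np

def masked_inputs(values, modulus, rng):
    """Toy pairwise-masked secure summation (not the paper's homomorphic
    scheme): for each pair (i, j), party i adds a shared random mask and
    party j subtracts it, working modulo `modulus`. The masks cancel in
    the sum while hiding each party's individual input."""
    n = len(values)
    masked = np.array(values, dtype=np.int64) % modulus
    for i in range(n):
        for j in range(i + 1, n):
            m = int(rng.integers(modulus))
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    return masked

rng = np.random.default_rng(0)
vals = [3, 5, 11]
masked = masked_inputs(vals, modulus=2**31, rng=rng)
total = int(masked.sum() % 2**31)   # masks cancel: equals 3 + 5 + 11
```

In the cross-silo setting, each party would additionally add its share of the DP noise to its own input before masking, so that the revealed sum is already differentially private.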

Federated Learning

Tight Differential Privacy for Discrete-Valued Mechanisms and for the Subsampled Gaussian Mechanism Using FFT

1 code implementation · 12 Jun 2020 · Antti Koskela, Joonas Jälkö, Lukas Prediger, Antti Honkela

We carry out an error analysis of the method in terms of moment bounds of the privacy loss distribution which leads to rigorous lower and upper bounds for the true $(\varepsilon,\delta)$-values.

Computing Tight Differential Privacy Guarantees Using FFT

1 code implementation · 7 Jun 2019 · Antti Koskela, Joonas Jälkö, Antti Honkela

The privacy loss of DP algorithms is commonly reported using $(\varepsilon,\delta)$-DP.
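For a single Gaussian mechanism, the reported $(\varepsilon,\delta)$ pair can be computed exactly with the analytic formula of Balle and Wang (2018), a useful reference point for any accountant; the parameters below are illustrative:

```python
import math

def gaussian_mechanism_delta(sigma, sensitivity, eps):
    """Exact delta(eps) of the Gaussian mechanism (Balle & Wang, 2018):
    delta = Phi(D/(2s) - eps*s/D) - e^eps * Phi(-D/(2s) - eps*s/D),
    where D is the L2 sensitivity, s the noise scale, Phi the std normal CDF."""
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    a = sensitivity / (2.0 * sigma)
    b = eps * sigma / sensitivity
    return phi(a - b) - math.exp(eps) * phi(-a - b)

# For sigma = 1, sensitivity = 1, eps = 1 this gives delta ~ 0.127.
d = gaussian_mechanism_delta(sigma=1.0, sensitivity=1.0, eps=1.0)
```

As expected, δ decreases as ε grows for a fixed noise scale, tracing out the mechanism's privacy profile.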

Learning Rate Adaptation for Federated and Differentially Private Learning

1 code implementation · 11 Sep 2018 · Antti Koskela, Antti Honkela

We also show that, unlike commonly used optimisation methods, it works robustly in the federated learning setting.

Federated Learning
