Search Results for author: Antti Honkela

Found 29 papers, 15 papers with code

Understanding Practical Membership Privacy of Deep Learning

no code implementations • 7 Feb 2024 • Marlon Tobaben, Gauri Pradhan, Yuan He, Joonas Jälkö, Antti Honkela

We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference.

Image Classification Inference Attack +1
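
As a rough illustration of the kind of test involved, here is a simple loss-thresholding membership inference baseline on synthetic loss values (a toy baseline, not the state-of-the-art attack used in the paper; the function name, threshold, and distributions are illustrative):

```python
import numpy as np

# Toy loss-thresholding membership inference: guess "member" when an example's
# loss under the trained model is below a threshold, then measure the attack's
# true-positive and false-positive rates.
def mia_loss_threshold(member_losses, nonmember_losses, threshold):
    losses = np.concatenate([member_losses, nonmember_losses])
    is_member = np.concatenate([np.ones(len(member_losses), dtype=bool),
                                np.zeros(len(nonmember_losses), dtype=bool)])
    guess_member = losses < threshold
    return guess_member[is_member].mean(), guess_member[~is_member].mean()

rng = np.random.default_rng(0)
member_losses = rng.exponential(0.2, size=1000)     # members tend to have lower loss
nonmember_losses = rng.exponential(0.6, size=1000)
print(mia_loss_threshold(member_losses, nonmember_losses, threshold=0.3))
```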

A Bias-Variance Decomposition for Ensembles over Multiple Synthetic Datasets

no code implementations • 6 Feb 2024 • Ossi Räisä, Antti Honkela

We investigate how our theory works in practice by evaluating the performance of an ensemble over many synthetic datasets for several real datasets and downstream predictors.

Model Selection

Subsampling is not Magic: Why Large Batch Sizes Work for Differentially Private Stochastic Optimisation

no code implementations • 6 Feb 2024 • Ossi Räisä, Joonas Jälkö, Antti Honkela

The remaining subsampling-induced variance decreases with larger batch sizes, so large batches reduce the effective total gradient variance.
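
A toy simulation of the basic effect (purely illustrative, not the paper's analysis): the variance of a subsampled minibatch-mean gradient estimate shrinks as the batch size grows.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
per_example_grads = rng.normal(size=N)   # stand-in 1-D "per-example gradients"

# Empirical variance of the minibatch mean as an estimate of the full-data
# mean gradient, for increasing batch sizes.
for B in [64, 256, 1024, 4096]:
    estimates = [per_example_grads[rng.choice(N, size=B, replace=False)].mean()
                 for _ in range(2000)]
    print(f"B={B:5d}  subsampling variance ≈ {np.var(estimates):.2e}")
```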

Privacy-Aware Document Visual Question Answering

no code implementations • 15 Dec 2023 • Rubèn Tito, Khanh Nguyen, Marlon Tobaben, Raouf Kerkouche, Mohamed Ali Souibgui, Kangsoo Jung, Lei Kang, Ernest Valveny, Antti Honkela, Mario Fritz, Dimosthenis Karatzas

We employ a federated learning scheme that reflects the real-life distribution of documents across different businesses, and we explore the use case where the ID of the invoice issuer is the sensitive information to be protected.

document understanding Federated Learning +3

Collaborative Learning From Distributed Data With Differentially Private Synthetic Twin Data

1 code implementation • 9 Aug 2023 • Lukas Prediger, Joonas Jälkö, Antti Honkela, Samuel Kaski

Consider a setting where multiple parties holding sensitive data aim to collaboratively learn population level statistics, but pooling the sensitive data sets is not possible.

Privacy Preserving

On the Efficacy of Differentially Private Few-shot Image Classification

1 code implementation • 2 Feb 2023 • Marlon Tobaben, Aliaksandra Shysheya, John Bronskill, Andrew Paverd, Shruti Tople, Santiago Zanella-Béguelin, Richard E. Turner, Antti Honkela

There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models.

Federated Learning Few-Shot Image Classification

DPVIm: Differentially Private Variational Inference Improved

no code implementations • 28 Oct 2022 • Joonas Jälkö, Lukas Prediger, Antti Honkela, Samuel Kaski

Using this as prior knowledge, we establish a link between the gradients of the variational parameters and propose a simple yet efficient fix that yields a less noisy gradient estimator, which we call $\textit{aligned}$ gradients.

Variational Inference

Individual Privacy Accounting with Gaussian Differential Privacy

1 code implementation • 30 Sep 2022 • Antti Koskela, Marlon Tobaben, Antti Honkela

In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss.
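
A minimal sketch of the Gaussian differential privacy (GDP) bookkeeping this builds on, assuming each data access incurs a per-individual loss of mu_i-GDP (the helper names and numbers are hypothetical; the composition and conversion formulas are the standard GDP results):

```python
import numpy as np
from scipy.stats import norm

def compose_gdp(mus):
    # A composition of mu_i-GDP mechanisms is sqrt(sum_i mu_i^2)-GDP, so an
    # individual's losses can be accumulated over the accesses that actually
    # touched their data.
    return float(np.sqrt(np.sum(np.square(mus))))

def gdp_delta(mu, eps):
    # Conversion of mu-GDP to (eps, delta)-DP.
    return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

per_access_mu = [0.1, 0.3, 0.05, 0.2]          # losses incurred by one data subject
mu_total = compose_gdp(per_access_mu)
print(f"individual mu = {mu_total:.3f}, delta at eps=1: {gdp_delta(mu_total, 1.0):.2e}")
```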

Differentially private partitioned variational inference

1 code implementation • 23 Sep 2022 • Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela

In this paper, we present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting while minimising the number of communication rounds and providing differential privacy guarantees for data subjects.

Federated Learning Privacy Preserving +1

Locally Differentially Private Bayesian Inference

no code implementations • 27 Oct 2021 • Tejas Kulkarni, Joonas Jälkö, Samuel Kaski, Antti Honkela

In recent years, local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in several scenarios when the aggregator is not trustworthy.

Bayesian Inference Privacy Preserving +1

Gaussian Processes with Differential Privacy

no code implementations • 1 Jun 2021 • Antti Honkela, Laila Melkas

We achieve this by using sparse GP methodology and publishing a private variational approximation on known inducing points.

Gaussian Processes

Tight Accounting in the Shuffle Model of Differential Privacy

no code implementations • 1 Jun 2021 • Antti Koskela, Mikko A. Heikkilä, Antti Honkela

The shuffle model of differential privacy is a novel distributed privacy model based on a combination of local privacy mechanisms and a secure shuffler.
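
A toy sketch of the model (binary randomised response followed by a shuffler; illustrative only, not a calibrated mechanism or the paper's accounting):

```python
import random

def local_randomise(bit, p=0.25):
    # Local randomiser: flip the user's bit with probability p.
    return bit if random.random() > p else 1 - bit

random.seed(0)
true_bits = [int(random.random() < 0.3) for _ in range(10_000)]
reports = [local_randomise(b) for b in true_bits]
random.shuffle(reports)   # the secure shuffler breaks the user-report linkage

# De-biased estimate of the population mean from the shuffled reports:
# E[report] = p + (1 - 2p) * q  =>  q_hat = (mean - p) / (1 - 2p)
p = 0.25
estimate = (sum(reports) / len(reports) - p) / (1 - 2 * p)
print(f"true mean {sum(true_bits) / len(true_bits):.3f}, estimate {estimate:.3f}")
```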

d3p -- A Python Package for Differentially-Private Probabilistic Programming

1 code implementation • 22 Mar 2021 • Lukas Prediger, Niki Loppi, Samuel Kaski, Antti Honkela

We present d3p, a software package designed to help practitioners field runtime-efficient, widely applicable Bayesian inference under differential privacy guarantees.

Probabilistic Programming regression +1

Computing Differential Privacy Guarantees for Heterogeneous Compositions Using FFT

no code implementations • 24 Feb 2021 • Antti Koskela, Antti Honkela

The recently proposed Fast Fourier Transform (FFT)-based accountant for evaluating $(\varepsilon,\delta)$-differential privacy guarantees using the privacy loss distribution formalism has been shown to give tighter bounds than commonly used methods such as Rényi accountants when applied to homogeneous compositions, i.e., to compositions of identical mechanisms.
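
A simplified sketch of the underlying idea for the homogeneous case (toy grid and truncation with no rigorous error control; the grid construction and the Gaussian-mechanism example are illustrative assumptions, not the paper's accountant):

```python
import numpy as np

def compose_pld_fft(pld_mass, grid, k):
    # k-fold self-composition of a discretised privacy loss distribution
    # (probability masses on a uniform grid) via FFT: convolution in the loss
    # domain becomes a pointwise k-th power in the frequency domain.
    n = len(pld_mass)
    m = 1 << int(np.ceil(np.log2(k * n)))          # zero-pad to avoid wrap-around
    composed = np.fft.irfft(np.fft.rfft(pld_mass, m) ** k, m)[: k * (n - 1) + 1]
    dx = grid[1] - grid[0]
    new_grid = k * grid[0] + dx * np.arange(len(composed))
    return np.clip(composed, 0.0, None), new_grid

def delta_from_pld(pld_mass, grid, eps):
    # delta(eps) = E[(1 - exp(eps - L))_+] over the privacy loss L.
    return float(np.sum(pld_mass * np.clip(1.0 - np.exp(eps - grid), 0.0, None)))

# Toy example: the Gaussian mechanism with sensitivity 1 and noise std sigma has
# privacy loss L ~ N(eta, 2*eta) with eta = 1 / (2 * sigma^2).
sigma, k = 5.0, 50
eta = 1.0 / (2 * sigma**2)
grid = np.linspace(eta - 12 * np.sqrt(2 * eta), eta + 12 * np.sqrt(2 * eta), 4001)
dx = grid[1] - grid[0]
mass = np.exp(-((grid - eta) ** 2) / (4 * eta)) / np.sqrt(4 * np.pi * eta) * dx
pld_k, grid_k = compose_pld_fft(mass, grid, k)
print(f"delta at eps=1 after {k} compositions: {delta_from_pld(pld_k, grid_k, 1.0):.2e}")
```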

Differentially Private Bayesian Inference for Generalized Linear Models

no code implementations • 1 Nov 2020 • Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela

Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in a data analyst's repertoire, and they are often applied to sensitive datasets.

Bayesian Inference regression

Privacy-preserving Data Sharing on Vertically Partitioned Data

no code implementations • 19 Oct 2020 • Razane Tajeddine, Joonas Jälkö, Samuel Kaski, Antti Honkela

We modify a secure multiparty computation (MPC) framework to combine MPC with differential privacy (DP), in order to use differentially private MPC effectively to learn a probabilistic generative model under DP on such vertically partitioned data.

Privacy Preserving Variational Inference

Differentially private cross-silo federated learning

1 code implementation • 10 Jul 2020 • Mikko A. Heikkilä, Antti Koskela, Kana Shimizu, Samuel Kaski, Antti Honkela

In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.

Federated Learning
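
A toy sketch of the two ingredients (pairwise-mask secure summation over a finite group plus distributed Gaussian noise); this is an illustrative stand-in, not the paper's protocol, and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_parties, dim, modulus, scale = 4, 5, 2**32, 1e6   # fixed-point encoding of floats

# Each party adds a share of the Gaussian DP noise to its local update, so only
# the noisy aggregate is ever revealed.
noise_std_total = 1.0
updates = [rng.normal(size=dim)
           + rng.normal(0, noise_std_total / np.sqrt(n_parties), size=dim)
           for _ in range(n_parties)]

# Pairwise random masks: party i adds m_ij and party j subtracts it, so all
# masks cancel in the modular sum (a stand-in for homomorphic summation).
masks = {(i, j): rng.integers(0, modulus, size=dim, dtype=np.int64)
         for i in range(n_parties) for j in range(i + 1, n_parties)}

def encode(update, i):
    x = np.round(update * scale).astype(np.int64) % modulus
    for (a, b), m in masks.items():
        if a == i:
            x = (x + m) % modulus
        elif b == i:
            x = (x - m) % modulus
    return x

masked_sum = sum(encode(u, i) for i, u in enumerate(updates)) % modulus
decoded = np.where(masked_sum > modulus // 2, masked_sum - modulus, masked_sum) / scale
print("secure noisy sum:", np.round(decoded, 3))
print("plain  noisy sum:", np.round(np.sum(updates, axis=0), 3))
```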

Tight Differential Privacy for Discrete-Valued Mechanisms and for the Subsampled Gaussian Mechanism Using FFT

1 code implementation • 12 Jun 2020 • Antti Koskela, Joonas Jälkö, Lukas Prediger, Antti Honkela

We carry out an error analysis of the method in terms of moment bounds of the privacy loss distribution which leads to rigorous lower and upper bounds for the true $(\varepsilon,\delta)$-values.

Computing Tight Differential Privacy Guarantees Using FFT

1 code implementation • 7 Jun 2019 • Antti Koskela, Joonas Jälkö, Antti Honkela

The privacy loss of DP algorithms is commonly reported using $(\varepsilon,\delta)$-DP.

Representation Transfer for Differentially Private Drug Sensitivity Prediction

no code implementations • 29 Jan 2019 • Teppo Niinimäki, Mikko Heikkilä, Antti Honkela, Samuel Kaski

Differentially private learning with genomic data is challenging because it is more difficult to guarantee the privacy in high dimensions.

BIG-bench Machine Learning Cancer type classification +2

Differentially Private Markov Chain Monte Carlo

1 code implementation • NeurIPS 2019 • Mikko A. Heikkilä, Joonas Jälkö, Onur Dikmen, Antti Honkela

Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects.

Learning Rate Adaptation for Federated and Differentially Private Learning

1 code implementation • 11 Sep 2018 • Antti Koskela, Antti Honkela

We also show that, unlike commonly used optimisation methods, it works robustly in the case of federated learning.

Federated Learning

Differentially Private Bayesian Learning on Distributed Data

1 code implementation • NeurIPS 2017 • Mikko Heikkilä, Eemil Lagerspetz, Samuel Kaski, Kana Shimizu, Sasu Tarkoma, Antti Honkela

Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects.

Bayesian Inference

Differentially Private Variational Inference for Non-conjugate Models

2 code implementations • 27 Oct 2016 • Joonas Jälkö, Onur Dikmen, Antti Honkela

It is built on top of doubly stochastic variational inference, a recent advance which provides a variational solution to a large class of models.

Bayesian Inference Variational Inference

Efficient differentially private learning improves drug sensitivity prediction

no code implementations • 7 Jun 2016 • Antti Honkela, Mrinal Das, Arttu Nieminen, Onur Dikmen, Samuel Kaski

Good personalised predictions are vitally important in precision medicine, but genomic information on which the predictions are based is also particularly sensitive, as it directly identifies the patients and hence cannot easily be anonymised.

On the inconsistency of $\ell_1$-penalised sparse precision matrix estimation

no code implementations • 8 Mar 2016 • Otte Heinävaara, Janne Leppä-aho, Jukka Corander, Antti Honkela

Various $\ell_1$-penalised estimation methods such as graphical lasso and CLIME are widely used for sparse precision matrix estimation.
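
For context, a minimal example of the kind of estimator studied, using the graphical lasso as implemented in scikit-learn (the data-generating model and penalty value are arbitrary):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# l1-penalised sparse precision matrix estimation on data drawn from a known
# sparse Gaussian model; the paper studies when such estimators are
# inconsistent for recovering the true sparsity pattern.
rng = np.random.default_rng(0)
true_precision = np.array([[2.0, 0.6, 0.0],
                           [0.6, 2.0, 0.6],
                           [0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(true_precision), size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
print("estimated precision matrix:\n", np.round(model.precision_, 2))
```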
