Search Results for author: Alexander Ziller

Found 17 papers, 5 papers with code

How Do Input Attributes Impact the Privacy Loss in Differential Privacy?

no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis

Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.
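The worst-case guarantee the snippet refers to is the standard (ε, δ)-DP definition, which bounds how much any single individual's data can shift the output distribution of a mechanism \(\mathcal{M}\) over any pair of adjacent databases \(D, D'\):

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
\quad \text{for all measurable } S .
```

Because the bound must hold uniformly over all individuals and all adjacent database pairs, the guarantee is "worst-case" by construction, which is the framing this paper examines per input attribute.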

Exploiting segmentation labels and representation learning to forecast therapy response of PDAC patients

no code implementations • 8 Nov 2022 • Alexander Ziller, Ayhan Can Erdur, Friederike Jungmann, Daniel Rueckert, Rickmer Braren, Georgios Kaissis

The prediction of pancreatic ductal adenocarcinoma therapy response is a clinically challenging and important task in this high-mortality tumour entity.

Representation Learning

Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens

no code implementations • 24 Oct 2022 • Georgios Kaissis, Alexander Ziller, Stefan Kolek Martinez de Azagra, Daniel Rueckert

Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries, but such adversaries are rarely encountered in practice.

SmoothNets: Optimizing CNN architecture design for differentially private deep learning

1 code implementation • 9 May 2022 • Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios Kaissis

The arguably most widely employed algorithm to train deep neural networks with Differential Privacy is DPSGD, which requires clipping and noising of per-sample gradients.
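The clipping-and-noising step that the snippet describes can be sketched as follows. This is a minimal illustrative aggregation step, not the paper's implementation; the function name and defaults are hypothetical.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD aggregation step: clip each per-sample
    gradient to an L2 norm of clip_norm, sum, add Gaussian noise, average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clip bound
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (the sensitivity of the sum)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

The clipping bound fixes the L2 sensitivity of the summed gradient, which is what makes calibrating the Gaussian noise to `clip_norm` meaningful.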

Image Classification with Differential Privacy

Differentially private training of residual networks with scale normalisation

no code implementations • 1 Mar 2022 • Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis

The training of neural networks with Differentially Private Stochastic Gradient Descent offers formal Differential Privacy guarantees but introduces accuracy trade-offs.

Distributed Machine Learning and the Semblance of Trust

no code implementations • 21 Dec 2021 • Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis

The utilisation of large and diverse datasets for machine learning (ML) at scale is required to promote scientific insight into many meaningful problems.

BIG-bench Machine Learning • Federated Learning • +1

A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index

no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert

$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
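The two quantities the snippet names can be made concrete with a toy sketch of the Gaussian mechanism. The ratio used for the index below is an assumption for illustration (sensitivity over noise scale); the paper's precise definition of $\psi$ may differ.

```python
import numpy as np

def gaussian_mechanism(query_value, sigma, rng=None):
    """Release a query answer perturbed with N(0, sigma^2) noise."""
    rng = rng or np.random.default_rng(0)
    return query_value + rng.normal(0.0, sigma, size=np.shape(query_value))

def sensitivity_index(sensitivity, sigma):
    # Assumption for illustration: the index couples the query's L2
    # sensitivity and the noise magnitude as their ratio.
    return sensitivity / sigma
```

The point of a single index is that privacy properties of the Gaussian mechanism depend on sensitivity and noise only through such a coupling, not on either quantity alone.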

An automatic differentiation system for the age of differential privacy

no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis

We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
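The idea of tracking sensitivity alongside computation can be sketched with a toy value type that carries a sensitivity bound through a few operations. This is loosely in the spirit of autodiff-based tracking, not Tritium's actual design; the class and rules below are hypothetical.

```python
class Sens:
    """Toy value carrying an upper bound on L2 sensitivity.

    Illustrative propagation rules: sensitivities add under addition,
    and scale by |c| under multiplication by a constant c.
    """
    def __init__(self, value, sens):
        self.value = value
        self.sens = sens

    def __add__(self, other):
        # Sensitivity bounds are additive for sums of queries
        return Sens(self.value + other.value, self.sens + other.sens)

    def scale(self, c):
        # Scaling a query by c scales its sensitivity by |c|
        return Sens(self.value * c, abs(c) * self.sens)
```

A framework like the one described would propagate such bounds automatically through the computation graph instead of requiring manual closed-form analysis.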

BIG-bench Machine Learning

Partial sensitivity analysis in differential privacy

1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis

However, while techniques such as individual R\'enyi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.
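A per-feature view of privacy loss can be illustrated with a toy attribution: splitting an individual's squared gradient norm across input coordinates. This is only an illustrative proxy, not the paper's partial sensitivity method.

```python
import numpy as np

def per_feature_share(grad):
    """Toy attribution: each coordinate's share of the squared L2
    gradient norm, as a stand-in for per-feature influence."""
    sq = grad ** 2
    return sq / sq.sum()
```

Per-person accounting (e.g. via individual RDP) gives one privacy-loss number per individual; a decomposition like the one sketched asks how that number distributes over the individual's input features.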

Image Classification

NeuralDP: Differentially private neural networks by design

no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis

The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.

Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis

Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.

BIG-bench Machine Learning

Differentially private federated deep learning for multi-site medical image segmentation

1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

The application of PTs to FL in medical imaging, the trade-offs between privacy guarantees and model utility, the ramifications for training performance, and the susceptibility of the final models to attacks have not yet been conclusively investigated.

Federated Learning • Image Segmentation • +3
