Search Results for author: Alexander Ziller

Found 25 papers, 6 papers with code

Visual Privacy Auditing with Diffusion Models

no code implementations • 12 Mar 2024 • Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Georgios Kaissis, Alexander Ziller

We propose a reconstruction attack based on diffusion models (DMs) that assumes adversarial access to real-world image priors, and we assess its implications for privacy leakage under DP-SGD.

Image Reconstruction • Reconstruction Attack

Bounding Reconstruction Attack Success of Adversaries Without Data Priors

no code implementations • 20 Feb 2024 • Alexander Ziller, Anneliese Riess, Kristian Schwethelm, Tamara T. Mueller, Daniel Rueckert, Georgios Kaissis

When training ML models with differential privacy (DP), formal upper bounds on the success of such reconstruction attacks can be provided.

Reconstruction Attack

How Low Can You Go? Surfacing Prototypical In-Distribution Samples for Unsupervised Anomaly Detection

no code implementations • 6 Dec 2023 • Felix Meissen, Johannes Getzner, Alexander Ziller, Georgios Kaissis, Daniel Rueckert

Additionally, we show that the prototypical in-distribution samples identified by our proposed methods transfer well to different models and other datasets, and that using their characteristics as guidance enables successful manual selection of small subsets of high-performing samples.

Pneumonia Detection • Unsupervised Anomaly Detection

Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging

no code implementations • 5 Dec 2023 • Alexander Ziller, Tamara T. Mueller, Simon Stieger, Leonhard Feiner, Johannes Brandt, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

Although a lower budget decreases the risk of information leakage, it typically also reduces the performance of such models.

Bias-Aware Minimisation: Understanding and Mitigating Estimator Bias in Private SGD

no code implementations • 23 Aug 2023 • Moritz Knolle, Robert Dorfman, Alexander Ziller, Daniel Rueckert, Georgios Kaissis

Differentially private SGD (DP-SGD) holds the promise of enabling the safe and responsible application of machine learning to sensitive datasets.

Body Fat Estimation from Surface Meshes using Graph Neural Networks

no code implementations • 13 Jul 2023 • Tamara T. Mueller, Siyu Zhou, Sophie Starck, Friederike Jungmann, Alexander Ziller, Orhun Aksoy, Danylo Movchan, Rickmer Braren, Georgios Kaissis, Daniel Rueckert

Body fat volume and distribution can be a strong indication for a person's overall health and the risk for developing diseases like type 2 diabetes and cardiovascular diseases.

Interpretable 2D Vision Models for 3D Medical Images

1 code implementation • 13 Jul 2023 • Alexander Ziller, Ayhan Can Erdur, Marwa Trigui, Alp Güvenir, Tamara T. Mueller, Philip Müller, Friederike Jungmann, Johannes Brandt, Jan Peeken, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

Training Artificial Intelligence (AI) models on 3D images presents unique challenges compared to the 2D case: first, the demand for computational resources is significantly higher, and second, the availability of large datasets for pre-training is often limited, impeding training success.

Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy

no code implementations • 8 Jul 2023 • Georgios Kaissis, Jamie Hayes, Alexander Ziller, Daniel Rueckert

We explore Reconstruction Robustness (ReRo), which was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.

How Do Input Attributes Impact the Privacy Loss in Differential Privacy?

no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis

Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.
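For context, the worst-case formulation referred to here is standard (ε, δ)-differential privacy: a randomised mechanism $M$ satisfies it if, for all adjacent databases $D, D'$ (differing in one individual's data) and all measurable output sets $S$,

```latex
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```

The bound must hold uniformly over all individuals and all pairs of adjacent databases, which is precisely what makes the guarantee worst-case.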

Exploiting segmentation labels and representation learning to forecast therapy response of PDAC patients

no code implementations • 8 Nov 2022 • Alexander Ziller, Ayhan Can Erdur, Friederike Jungmann, Daniel Rueckert, Rickmer Braren, Georgios Kaissis

The prediction of pancreatic ductal adenocarcinoma therapy response is a clinically challenging and important task in this high-mortality tumour entity.

Representation Learning

Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens

no code implementations • 24 Oct 2022 • Georgios Kaissis, Alexander Ziller, Stefan Kolek Martinez de Azagra, Daniel Rueckert

Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries, but such adversaries are rarely encountered in practice.

SmoothNets: Optimizing CNN architecture design for differentially private deep learning

1 code implementation • 9 May 2022 • Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios Kaissis

Arguably the most widely employed algorithm for training deep neural networks with Differential Privacy is DP-SGD, which requires clipping and noising of per-sample gradients.

Image Classification with Differential Privacy
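The per-sample clipping and noising that DP-SGD performs can be sketched as follows. This is a minimal, illustrative implementation of the standard algorithm, not code from any of the papers listed here, and all function and parameter names are made up:

```python
import math
import random

def clip(grad, clip_norm):
    """Scale one per-sample gradient so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(params, per_sample_grads, clip_norm, noise_multiplier, lr):
    """One DP-SGD step: clip each per-sample gradient, sum the clipped
    gradients, add Gaussian noise with std noise_multiplier * clip_norm,
    average over the batch, and take a gradient-descent step."""
    n = len(per_sample_grads)
    clipped = [clip(g, clip_norm) for g in per_sample_grads]
    summed = [sum(coords) for coords in zip(*clipped)]
    sigma = noise_multiplier * clip_norm
    noisy = [s + random.gauss(0.0, sigma) for s in summed]
    return [p - lr * g / n for p, g in zip(params, noisy)]
```

Clipping bounds each sample's influence on the update (its sensitivity), and the noise is calibrated to that bound, which is what yields the formal DP guarantee.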

Differentially private training of residual networks with scale normalisation

no code implementations • 1 Mar 2022 • Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis

The training of neural networks with Differentially Private Stochastic Gradient Descent offers formal Differential Privacy guarantees but introduces accuracy trade-offs.

Distributed Machine Learning and the Semblance of Trust

no code implementations • 21 Dec 2021 • Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis

The utilisation of large and diverse datasets for machine learning (ML) at scale is required to promote scientific insight into many meaningful problems.

BIG-bench Machine Learning • Federated Learning +1

A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index

no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert

$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
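For background (the index $\psi$ itself is defined in the paper and is not reproduced here), the Gaussian mechanism (GM) for a query $f$ is the standard construction built from exactly the two quantities the snippet names, the $L_2$-sensitivity of the query and the magnitude of the noise:

```latex
M(D) = f(D) + \xi, \quad \xi \sim \mathcal{N}(0, \sigma^2 I),
\qquad
\Delta_2(f) = \max_{D \simeq D'} \lVert f(D) - f(D') \rVert_2
```

where $D \simeq D'$ ranges over adjacent databases; $\Delta_2(f)$ and $\sigma$ together determine the privacy guarantee.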

An automatic differentiation system for the age of differential privacy

no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis

We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).

BIG-bench Machine Learning

Partial sensitivity analysis in differential privacy

1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis

However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.

Image Classification

NeuralDP: Differentially private neural networks by design

no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis

The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.

Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis

Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.

BIG-bench Machine Learning

Differentially private federated deep learning for multi-site medical image segmentation

1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

The application of PTs to FL in medical imaging and the trade-offs between privacy guarantees and model utility, the ramifications on training performance and the susceptibility of the final models to attacks have not yet been conclusively investigated.

Federated Learning • Image Segmentation +4

Oktoberfest Food Dataset

1 code implementation • 22 Nov 2019 • Alexander Ziller, Julius Hansjakob, Vitalii Rusinov, Daniel Zügner, Peter Vogel, Stephan Günnemann

We release a realistic, diverse, and challenging dataset for object detection on images.

Object • object-detection +1
