Search Results for author: Moritz Knolle

Found 14 papers, 3 papers with code

Differentially private federated deep learning for multi-site medical image segmentation

1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

The application of PTs to FL in medical imaging, the trade-offs between privacy guarantees and model utility, the ramifications for training performance, and the susceptibility of the final models to attacks have not yet been conclusively investigated.

Federated Learning • Image Segmentation • +4
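
The combination the paper investigates can be illustrated with a minimal differentially private federated-averaging round: each client's update is clipped and the server perturbs the average with Gaussian noise. The function below is a generic sketch with illustrative parameter names and noise calibration, not the paper's actual training setup.

```python
import numpy as np

def dp_fedavg_round(client_updates, clip_norm, noise_multiplier, rng):
    """One illustrative round of DP federated averaging:
    clip each client's update, average, add Gaussian noise."""
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale the update down so its L2 norm is at most clip_norm.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Treating the clipped average as having sensitivity
    # clip_norm / n_clients (one common convention), the noise std is:
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(4)]  # fake client updates
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0,
                            noise_multiplier=1.1, rng=rng)
```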

Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis

Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.

BIG-bench Machine Learning
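
The kind of tool the abstract calls for can be sketched with per-sample gradients from a generic autodiff framework: differentiating a per-example loss and vectorising it over the batch yields one gradient, and hence one norm to bound, per individual. This JAX snippet is a minimal illustration, not the paper's hybrid system.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Squared-error loss of a linear model on a single example.
    return (jnp.dot(w, x) - y) ** 2

# grad differentiates w.r.t. w; vmap maps it over the batch dimension,
# yielding one gradient per example instead of one averaged gradient.
per_sample_grads = jax.vmap(jax.grad(loss), in_axes=(None, 0, 0))

w = jnp.zeros(3)
X = jnp.array([[1.0, 0.0, 2.0], [0.5, 1.0, 0.0]])
y = jnp.array([1.0, -1.0])
grads = per_sample_grads(w, X, y)  # shape (2, 3): one gradient per example
# Each row's norm is the per-person quantity a sensitivity analysis must bound.
norms = jnp.linalg.norm(grads, axis=1)
print(norms)
```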

NeuralDP: Differentially private neural networks by design

no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis

The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.

Partial sensitivity analysis in differential privacy

1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis

However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.

Image Classification
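
For context, the Gaussian mechanism with sensitivity $\Delta$ and noise scale $\sigma$ satisfies Rényi DP of order $\alpha$ with $\varepsilon(\alpha) = \alpha\Delta^2/(2\sigma^2)$; individual accounting replaces the worst-case $\Delta$ with each person's actual contribution norm. Below is a toy sketch of that idea, assuming the standard formula; it does not reproduce the paper's feature-level analysis.

```python
import numpy as np

def individual_rdp_gaussian(per_person_norms, clip_norm, sigma, alpha):
    """Illustrative per-person RDP for one Gaussian-mechanism release:
    eps_i(alpha) = alpha * delta_i**2 / (2 * sigma**2), where delta_i is
    the individual's contribution norm, capped by the clipping bound."""
    deltas = np.minimum(per_person_norms, clip_norm)
    return alpha * deltas**2 / (2.0 * sigma**2)

norms = np.array([0.2, 0.9, 1.7])  # hypothetical per-person gradient norms
eps = individual_rdp_gaussian(norms, clip_norm=1.0, sigma=1.0, alpha=8.0)
# People whose data barely moves the query accrue much less privacy loss.
print(eps)
```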

An automatic differentiation system for the age of differential privacy

no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis

We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).

BIG-bench Machine Learning
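
One way to picture automatic sensitivity analysis is a value type that carries a sensitivity bound alongside the value and propagates it through elementary operations, much as autodiff propagates derivatives. The class and rules below are a hypothetical toy, not Tritium's API.

```python
class Sens:
    """A value paired with an upper bound on how much it can change
    when one person's record changes (its L2 sensitivity)."""
    def __init__(self, value, sens):
        self.value, self.sens = value, sens

    def scale(self, c):
        # |c*x - c*x'| = |c| * |x - x'|, so sensitivity scales by |c|.
        return Sens(c * self.value, abs(c) * self.sens)

    def add(self, other):
        # Triangle inequality: sens(x + y) <= sens(x) + sens(y).
        return Sens(self.value + other.value, self.sens + other.sens)

# A bounded count: each person contributes at most 1.
count = Sens(value=128, sens=1.0)
# Rescaling the count (e.g. to a rate) rescales its sensitivity too,
# so DP noise can be calibrated automatically downstream.
rate = count.scale(1.0 / 1000)
print(rate.value, rate.sens)  # 0.128 0.001
```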

A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index

no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert

$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
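
This can be made concrete with the Gaussian mechanism's exact privacy profile (Balle and Wang, 2018), which depends on the query sensitivity $\Delta$ and the noise scale $\sigma$ only through their ratio; reading the sensitivity index as that ratio (a plausible interpretation of the abstract, not a quotation from the paper) makes $\psi$ determine the guarantee completely. $\Phi$ denotes the standard normal CDF:

```latex
% Gaussian mechanism: M(x) = f(x) + N(0, sigma^2 I), query sensitivity Delta.
% Its tight privacy profile depends on Delta and sigma only through
% psi := Delta / sigma:
\[
  \delta(\varepsilon)
    = \Phi\!\left(\frac{\psi}{2} - \frac{\varepsilon}{\psi}\right)
    - e^{\varepsilon}\,\Phi\!\left(-\frac{\psi}{2} - \frac{\varepsilon}{\psi}\right),
  \qquad \psi = \frac{\Delta}{\sigma}.
\]
```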

How Do Input Attributes Impact the Privacy Loss in Differential Privacy?

no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis

Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.
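
The worst-case character is visible directly in the standard $(\varepsilon, \delta)$-DP definition, which must hold for every pair of neighbouring databases and hence for the most exposed individual:

```latex
% A mechanism M is (eps, delta)-differentially private if, for all
% neighbouring databases D, D' (differing in one individual) and all
% measurable output sets S:
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \Pr[M(D') \in S] + \delta .
\]
```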

Bias-Aware Minimisation: Understanding and Mitigating Estimator Bias in Private SGD

no code implementations • 23 Aug 2023 • Moritz Knolle, Robert Dorfman, Alexander Ziller, Daniel Rueckert, Georgios Kaissis

Differentially private SGD (DP-SGD) holds the promise of enabling the safe and responsible application of machine learning to sensitive datasets.
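
A minimal sketch of the DP-SGD step in question, with per-sample clipping and Gaussian noise; the comments mark where clipping biases the gradient estimate, the effect the paper analyses. Names and constants are illustrative only.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, lr, w, rng):
    """One illustrative DP-SGD update on a batch of per-sample gradients."""
    n = len(per_sample_grads)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Clipping is the source of estimator bias: whenever norm > clip_norm,
        # the clipped mean no longer points along the true mean gradient.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_multiplier * clip_norm,
                               size=w.shape)) / n
    # The Gaussian noise is zero-mean, hence unbiased; only the clipping
    # skews the estimate.
    return w - lr * noisy_mean

rng = np.random.default_rng(0)
grads = [rng.normal(size=5) * s for s in (0.5, 1.0, 3.0)]  # fake gradients
w = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, w=np.zeros(5), rng=rng)
```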

(Predictable) Performance Bias in Unsupervised Anomaly Detection

no code implementations • 25 Sep 2023 • Felix Meissen, Svenja Breuer, Moritz Knolle, Alena Buyx, Ruth Müller, Georgios Kaissis, Benedikt Wiestler, Daniel Rückert

The empirical fairness laws discovered in our study make disparate performance in UAD models easier to estimate and aid in determining the most desirable dataset composition.

Fairness • Unsupervised Anomaly Detection

SoK: Memorisation in machine learning

no code implementations • 6 Nov 2023 • Dmitrii Usynin, Moritz Knolle, Georgios Kaissis

In this work we unify a broad range of previous definitions and perspectives on memorisation in ML, discuss their interplay with model generalisation, and examine the implications of these phenomena for data privacy.

Visual Privacy Auditing with Diffusion Models

no code implementations • 12 Mar 2024 • Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Georgios Kaissis, Alexander Ziller

We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors and assess its implications for privacy leakage under DP-SGD.

Image Reconstruction • Reconstruction Attack
