1 code implementation • 3 Feb 2023 • Soroosh Tayebi Arasteh, Alexander Ziller, Christiane Kuhl, Marcus Makowski, Sven Nebelung, Rickmer Braren, Daniel Rueckert, Daniel Truhn, Georgios Kaissis
In this work, we evaluated the effect of privacy-preserving training of AI models for chest radiograph diagnosis on diagnostic accuracy and fairness, compared with non-private training.
no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis
Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.
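For reference, the worst-case guarantee in question is the standard $(\varepsilon, \delta)$-DP condition, which must hold for all neighbouring databases $D$, $D'$ and all measurable sets $S$:

$\Pr[\mathcal{M}(D) \in S] \leq e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta$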
no code implementations • 8 Nov 2022 • Alexander Ziller, Ayhan Can Erdur, Friederike Jungmann, Daniel Rueckert, Rickmer Braren, Georgios Kaissis
The prediction of pancreatic ductal adenocarcinoma therapy response is a clinically challenging and important task in this high-mortality tumour entity.
no code implementations • 24 Oct 2022 • Georgios Kaissis, Alexander Ziller, Stefan Kolek Martinez de Azagra, Daniel Rueckert
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries, but such adversaries are rarely encountered in practice.
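One standard consequence of such bounds (textbook DP, not the paper's relaxed threat model): an optimal membership-inference adversary against a pure $\varepsilon$-DP mechanism with a balanced prior succeeds with probability at most $e^{\varepsilon}/(1+e^{\varepsilon})$. A minimal sketch:

```python
import math

def max_membership_accuracy(epsilon):
    """Upper bound on the success probability of an optimal
    membership-inference adversary (balanced prior) against a pure
    epsilon-DP mechanism: e^eps / (1 + e^eps). A standard worst-case
    bound for illustration; the paper derives guarantees under a
    relaxed, more realistic threat model."""
    return math.exp(epsilon) / (1.0 + math.exp(epsilon))

print(max_membership_accuracy(1.0))  # ~0.731, i.e. a modest edge over random guessing (0.5)
```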
1 code implementation • 9 May 2022 • Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios Kaissis
Arguably the most widely employed algorithm for training deep neural networks with Differential Privacy is DP-SGD, which requires clipping and noising of per-sample gradients.
Ranked #2 on Image Classification on Imagenette
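For context on the clipping-and-noising step, a minimal NumPy sketch of one DP-SGD update (illustrative only; the clip norm C, noise multiplier sigma, and the supplied per-sample gradients are placeholders, not the paper's implementation):

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD update: clip each per-sample gradient to L2 norm C,
    average the clipped gradients, then add Gaussian noise scaled to C."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_sample_grads:                            # one gradient per example
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, C / (norm + 1e-12)))  # enforce ||g|| <= C
    g_bar = np.mean(clipped, axis=0)
    # Noising the sum with N(0, (sigma*C)^2) and dividing by the batch size
    # is equivalent to adding N(0, (sigma*C/B)^2) to the mean.
    noise = rng.normal(0.0, sigma * C / len(per_sample_grads), size=g_bar.shape)
    return params - lr * (g_bar + noise)
```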
no code implementations • 1 Mar 2022 • Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis
The training of neural networks with Differentially Private Stochastic Gradient Descent offers formal Differential Privacy guarantees but introduces accuracy trade-offs.
no code implementations • 21 Dec 2021 • Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis
The utilisation of large and diverse datasets for machine learning (ML) at scale is required to promote scientific insight into many meaningful problems.
no code implementations • 7 Oct 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kerstin Hammernik, Daniel Rueckert, Georgios Kaissis
We present $\zeta$-DP, an extension of differential privacy (DP) to complex-valued functions.
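A minimal sketch of what a complex-valued Gaussian mechanism could look like, assuming circularly symmetric noise with the variance split across the real and imaginary parts; the paper's actual $\zeta$-DP calibration of the noise to $(\varepsilon, \delta)$ is not reproduced here:

```python
import numpy as np

def complex_gaussian_mechanism(value, sensitivity, sigma, rng=None):
    """Perturb a complex-valued query output with circularly symmetric
    complex Gaussian noise. Illustrative only: the per-component scale
    sigma * sensitivity / sqrt(2) splits the total variance evenly
    between the real and imaginary parts."""
    rng = rng or np.random.default_rng()
    scale = sigma * sensitivity / np.sqrt(2)
    noise = (rng.normal(0.0, scale, np.shape(value))
             + 1j * rng.normal(0.0, scale, np.shape(value)))
    return value + noise
```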
no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert
$\psi$ uniquely characterises the Gaussian mechanism (GM) and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
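A minimal sketch of the GM, assuming $\psi$ is read as the sensitivity-to-noise ratio $\Delta f / \sigma$ (an inference from the sentence above, not necessarily the paper's formal definition):

```python
import numpy as np

def gaussian_mechanism(query_value, sigma, rng=None):
    """Release a query output perturbed with N(0, sigma^2) noise."""
    rng = rng or np.random.default_rng()
    return query_value + rng.normal(0.0, sigma, np.shape(query_value))

def sensitivity_index(delta_f, sigma):
    """Assumed reading of psi: the ratio of the query's L2 sensitivity
    delta_f to the noise standard deviation sigma, jointly capturing
    the two quantities named above."""
    return delta_f / sigma
```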
no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
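A toy illustration of gradient-based sensitivity analysis, using the Lipschitz-style bound that a function's output change per unit input change is at most its largest gradient norm; the function names here are hypothetical and this is not the Tritium API:

```python
import numpy as np

def lipschitz_bound(grad_f, samples):
    """Toy sensitivity analysis: bound the Lipschitz constant of a
    function (and hence its sensitivity per unit change of the input)
    by the largest gradient norm observed over sample inputs.
    Purely illustrative of the autodiff idea."""
    return max(float(np.linalg.norm(grad_f(x))) for x in samples)

# Example: f(x) = x.mean() has gradient 1/n per coordinate,
# so the bound is 1/sqrt(n) for inputs of length n.
f_grad = lambda x: np.full_like(x, 1.0 / x.size)
print(lipschitz_bound(f_grad, [np.random.randn(100) for _ in range(10)]))  # ~0.1
```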
1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis
However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.
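For orientation, the standard RDP guarantee of the Gaussian mechanism, which individual accounting refines by substituting per-person contributions for the worst-case sensitivity (a textbook formula, not the paper's per-feature method):

```python
def gaussian_rdp(alpha, sensitivity, sigma):
    """Renyi-DP of the Gaussian mechanism at order alpha:
    eps(alpha) = alpha * sensitivity^2 / (2 * sigma^2).
    Individual accounting plugs in each person's actual contribution
    in place of the worst-case sensitivity; the work above asks how
    much each input feature contributes to that loss."""
    return alpha * sensitivity ** 2 / (2 * sigma ** 2)
```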
no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis
The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.
no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis
Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.
no code implementations • 9 Jul 2021 • Moritz Knolle, Alexander Ziller, Dmitrii Usynin, Rickmer Braren, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
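One common way to quantify such miscalibration is the Expected Calibration Error (ECE); a minimal sketch (the paper may use a different metric):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap
    |accuracy - mean confidence| over bins, weighted by bin size.
    Overconfident models show high confidence but lower accuracy,
    yielding a large ECE."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```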
1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
The application of PTs to FL in medical imaging, the trade-offs between privacy guarantees and model utility, the ramifications for training performance, and the susceptibility of the final models to attacks have not yet been conclusively investigated.
no code implementations • 10 Dec 2020 • Alexander Ziller, Jonathan Passerat-Palmbach, Théo Ryffel, Dmitrii Usynin, Andrew Trask, Ionésio Da Lima Costa Junior, Jason Mancuso, Marcus Makowski, Daniel Rueckert, Rickmer Braren, Georgios Kaissis
The utilisation of artificial intelligence in medicine and healthcare has led to successful clinical applications in several domains.
1 code implementation • 22 Nov 2019 • Alexander Ziller, Julius Hansjakob, Vitalii Rusinov, Daniel Zügner, Peter Vogel, Stephan Günnemann
We release a realistic, diverse, and challenging dataset for object detection on images.