no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis
Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.
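For reference, the worst-case guarantee in question is standard (ε, δ)-DP: a randomised mechanism M satisfies it if, for every pair of neighbouring databases D, D′ (differing in one individual) and every measurable set of outputs S,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
```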
no code implementations • 5 May 2022 • Dmitrii Usynin, Helena Klause, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis
In federated learning for medical image analysis, the safety of the learning protocol is paramount.
no code implementations • 17 Mar 2022 • Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis
In this work, we study the applications of differential privacy (DP) in the context of graph-structured data.
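For context, graph DP hinges on how neighbouring graphs are defined; the two standard notions (stated here as background, not quoted from the paper) are edge-level and node-level adjacency:

```latex
% Edge-level: G \sim_e G' iff they differ in exactly one edge.
% Node-level: G \sim_v G' iff they differ in one node and all its incident edges.
\Pr[M(G) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(G') \in S] + \delta
\quad \text{for all } G \sim G'.
```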
no code implementations • 1 Mar 2022 • Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
Collaborative machine learning settings like federated learning can be susceptible to adversarial interference and attacks.
no code implementations • 5 Feb 2022 • Tamara T. Mueller, Johannes C. Paetzold, Chinmay Prabhakar, Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
In this work, we introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
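A natural route to graph-level guarantees is to treat each graph as one privacy unit and apply per-graph gradient clipping with Gaussian noise. The JAX sketch below only illustrates this general recipe; it is not the authors' implementation, and `loss_fn`, the clipping bound `C`, and the noise multiplier `sigma` are our own illustrative choices.

```python
# Hypothetical sketch of graph-level DP-SGD (not the paper's code).
import jax
import jax.numpy as jnp

def loss_fn(params, graph_feats, label):
    # Toy graph-level model: mean-pool node features, then a linear read-out.
    logit = jnp.dot(jnp.mean(graph_feats, axis=0), params)
    return (logit - label) ** 2

def dp_gradient(params, batch_feats, labels, C, sigma, key):
    """Clip each graph's gradient to l2 norm at most C, average, add noise."""
    # One gradient per graph: each graph is a single privacy unit.
    per_graph = jax.vmap(jax.grad(loss_fn), in_axes=(None, 0, 0))(
        params, batch_feats, labels)                       # shape (B, P)
    norms = jnp.linalg.norm(per_graph, axis=1, keepdims=True)
    clipped = per_graph * jnp.minimum(1.0, C / (norms + 1e-12))
    # Gaussian noise calibrated to the per-graph clipping bound C.
    noise = sigma * C * jax.random.normal(key, params.shape)
    return (jnp.sum(clipped, axis=0) + noise) / batch_feats.shape[0]

# Illustrative call: 8 graphs, each with 5 nodes carrying 3 features.
key = jax.random.PRNGKey(0)
feats = jax.random.normal(key, (8, 5, 3))
labels = jnp.ones(8)
grad = dp_gradient(jnp.zeros(3), feats, labels, C=1.0, sigma=1.1, key=key)
```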
no code implementations • 21 Dec 2021 • Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis
Utilising large and diverse datasets for machine learning (ML) at scale is necessary to promote scientific insight into many meaningful problems.
no code implementations • 7 Oct 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kerstin Hammernik, Daniel Rueckert, Georgios Kaissis
We present $\zeta$-DP, an extension of differential privacy (DP) to complex-valued functions.
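The abstract alone does not specify the mechanism; one natural construction consistent with it (our assumption, not a quote from the paper) is a Gaussian mechanism with circularly symmetric complex noise:

```latex
% Illustrative assumption: complex Gaussian mechanism for a complex-valued query f.
M(x) = f(x) + \eta, \qquad \eta = \eta_{\mathrm{re}} + i\,\eta_{\mathrm{im}},
\quad \eta_{\mathrm{re}},\, \eta_{\mathrm{im}} \sim \mathcal{N}(0, \sigma^2 I).
```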
no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert
$\psi$ uniquely characterises the Gaussian mechanism (GM) and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
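Since $\psi$ is said to encapsulate exactly the query sensitivity and the noise magnitude, a plausible reading (our assumption, not the paper's verbatim definition) is the sensitivity-to-noise ratio:

```latex
% Assumed form of the sensitivity index (illustrative):
\psi = \frac{\Delta_2 f}{\sigma}, \qquad
\Delta_2 f = \max_{D \sim D'} \lVert f(D) - f(D') \rVert_2 .
```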
no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
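Tritium's API is not shown here; the JAX snippet below only illustrates the underlying idea of bounding a query's sensitivity via automatic differentiation, and every name in it is hypothetical.

```python
# Hypothetical illustration of autodiff-based sensitivity analysis (not Tritium's API).
import jax
import jax.numpy as jnp

def query(x):
    # Illustrative scalar query over one record's features.
    return jnp.tanh(jnp.sum(x))

# Autodiff gives the query's gradient; its norm bounds how strongly a
# single record can move the output (a Lipschitz-style sensitivity bound).
grad_norm = lambda x: jnp.linalg.norm(jax.grad(query)(x))

# Empirically probe the bound over sample inputs. A real framework would
# derive a certified bound symbolically; this max is only a crude stand-in.
xs = jax.random.normal(jax.random.PRNGKey(0), (1000, 4))
print(jnp.max(jax.vmap(grad_norm)(xs)))
```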
1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis
However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.
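For reference, a mechanism M is (α, ε)-RDP if the Rényi divergence of order α > 1 between its output distributions on any neighbouring datasets is bounded:

```latex
D_{\alpha}\!\left( M(D) \,\middle\|\, M(D') \right) \;\le\; \varepsilon
\quad \text{for all neighbouring } D, D'.
```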
no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis
The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.
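For context, the standard DP-SGD recipe (Abadi et al., 2016) achieves this by clipping each example's gradient to an ℓ2 bound C and perturbing the batch sum with Gaussian noise:

```latex
\tilde{g} \;=\; \frac{1}{B}\left( \sum_{i=1}^{B} \bar{g}_i
+ \mathcal{N}\!\left(0, \sigma^2 C^2 I\right) \right),
\qquad \bar{g}_i = g_i \cdot \min\!\left(1, \frac{C}{\lVert g_i \rVert_2}\right).
```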
no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis
Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.
no code implementations • 9 Jul 2021 • Moritz Knolle, Alexander Ziller, Dmitrii Usynin, Rickmer Braren, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
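Calibration is typically quantified with the expected calibration error (ECE); the sketch below implements the standard binned estimator as an illustration (our code, not the paper's).

```python
# Standard binned ECE estimator (illustrative).
import jax.numpy as jnp

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |accuracy - confidence|."""
    edges = jnp.linspace(0.0, 1.0, n_bins + 1)
    n = confidences.shape[0]
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        count = jnp.sum(mask)
        # Masked means; empty bins contribute zero.
        conf_bin = jnp.sum(confidences * mask) / jnp.maximum(count, 1)
        acc_bin = jnp.sum(correct * mask) / jnp.maximum(count, 1)
        ece = ece + (count / n) * jnp.abs(acc_bin - conf_bin)
    return ece

# Illustrative call: confidence of the predicted class and whether it was correct.
probs = jnp.array([0.9, 0.8, 0.65, 0.95])
hits = jnp.array([1.0, 0.0, 1.0, 1.0])
print(expected_calibration_error(probs, hits))
```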
1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
The application of privacy-preserving techniques (PTs) to federated learning (FL) in medical imaging, the trade-offs between privacy guarantees and model utility, the ramifications for training performance, and the susceptibility of the final models to attacks have not yet been conclusively investigated.
no code implementations • 10 Dec 2020 • Alexander Ziller, Jonathan Passerat-Palmbach, Théo Ryffel, Dmitrii Usynin, Andrew Trask, Ionésio Da Lima Costa Junior, Jason Mancuso, Marcus Makowski, Daniel Rueckert, Rickmer Braren, Georgios Kaissis
The utilisation of artificial intelligence in medicine and healthcare has led to successful clinical applications in several domains.