Search Results for author: Dmitrii Usynin

Found 17 papers, 3 papers with code

SoK: Memorisation in machine learning

no code implementations • 6 Nov 2023 • Dmitrii Usynin, Moritz Knolle, Georgios Kaissis

In this work we unify a broad range of previous definitions and perspectives on memorisation in ML, discuss their interplay with model generalisation, and examine the implications of these phenomena for data privacy.
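
One of the definitions such a survey would cover is Feldman-style leave-one-out memorisation, which can be estimated empirically. The sketch below is an illustrative choice of definition and a toy estimator, not the paper's own proposal; all data and names are made up.

```python
# Sketch: estimating Feldman-style "leave-one-out" memorisation of a single
# training point -- one of several definitions a survey like this unifies.
# Toy data; the subsampling estimator is a rough illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
i = 7  # index of the point whose memorisation we estimate

def predict_prob(include_i: bool, trials: int = 20) -> float:
    """Pr[model predicts y_i on x_i] over random subsamples of the data."""
    hits = 0
    for _ in range(trials):
        idx = rng.choice(len(X), size=120, replace=False)
        idx = np.union1d(idx, [i]) if include_i else np.setdiff1d(idx, [i])
        model = LogisticRegression().fit(X[idx], y[idx])
        hits += int(model.predict(X[i : i + 1])[0] == y[i])
    return hits / trials

# mem(i) = Pr[correct | i in train set] - Pr[correct | i held out]
memorisation = predict_prob(True) - predict_prob(False)
print(f"estimated memorisation of point {i}: {memorisation:+.2f}")
```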

Leveraging gradient-derived metrics for data selection and valuation in differentially private training

no code implementations • 4 May 2023 • Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

Obtaining high-quality data for collaborative training of machine learning models can be a challenging task due to (a) regulatory concerns and (b) a lack of incentive to participate.
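
A common gradient-derived valuation proxy is the per-sample gradient norm, which DP-SGD already materialises for clipping. The sketch below shows that proxy on a toy model; it is an assumed example metric, not necessarily the one the paper proposes.

```python
# Sketch: scoring training points by their per-sample gradient norm, a common
# gradient-derived valuation proxy. DP-SGD computes per-sample gradients for
# clipping anyway, so such scores come almost for free during private training.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()
X, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

scores = []
for xi, yi in zip(X, y):  # naive loop; vectorised per-sample grads also work
    model.zero_grad()
    loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters()))
    scores.append(grad_norm.item())

# Higher-norm samples are more influential (and cost more privacy when clipped).
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
print("top-5 highest-scoring samples:", ranking[:5])
```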

How Do Input Attributes Impact the Privacy Loss in Differential Privacy?

no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis

Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.
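
The worst-case nature of that guarantee is easy to see in the standard Laplace mechanism, which calibrates noise to the global sensitivity of the query: every individual receives the same ε regardless of their actual attribute values, which is exactly the gap this paper investigates. A minimal sketch with toy numbers:

```python
# Sketch: worst-case DP. Noise is calibrated to the *global* sensitivity of
# the query, independent of any individual's actual attribute values.
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=1000)   # attribute values vary widely ...
global_sensitivity = 90 - 18             # ... but the bound is worst-case
epsilon = 1.0

def dp_mean(values: np.ndarray) -> float:
    """epsilon-DP mean via the Laplace mechanism."""
    # Changing one record moves the mean by at most (max - min) / n.
    scale = global_sensitivity / (len(values) * epsilon)
    return float(values.mean() + rng.laplace(0.0, scale))

print(f"true mean {ages.mean():.2f}, DP mean {dp_mean(ages):.2f}")
```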

Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks

no code implementations • 1 Mar 2022 • Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

Collaborative machine learning settings like federated learning can be susceptible to adversarial interference and attacks.
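
The baseline attack this paper builds on is gradient-matching model inversion: the adversary optimises a dummy input until its gradient matches the one a client shared. The sketch below is that generic baseline (in the spirit of "deep leakage from gradients"), not the paper's prior-exploiting method; for simplicity it assumes the label is known.

```python
# Sketch: generic gradient-matching inversion against a shared gradient in a
# federated setting. The paper goes further by exploiting adversarial priors.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)
loss_fn = torch.nn.CrossEntropyLoss()

# Victim computes a gradient on private data and shares it with the server.
x_true, y_true = torch.randn(1, 8), torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Adversary optimises a dummy input so its gradient matches the shared one
# (label assumed known here to keep the sketch short).
x_dummy = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    match = sum(((dg - tg) ** 2).sum()
                for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```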

Federated Learning

Differentially Private Graph Classification with GNNs

1 code implementation • 5 Feb 2022 • Tamara T. Mueller, Johannes C. Paetzold, Chinmay Prabhakar, Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

In this work, we introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
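
The DP-relevant ingredient at graph level is DP-SGD with the whole graph as the privacy unit: clip each graph's gradient contribution, then add Gaussian noise to the sum. In the hedged sketch below, pooled feature vectors stand in for a real message-passing GNN so the example stays self-contained; it is not the paper's exact pipeline.

```python
# Sketch: one DP-SGD step where the privacy unit is a whole graph. A real GNN
# is replaced by per-graph pooled embeddings; the clip-and-noise logic is the
# differentially private part.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16, 3)          # stand-in for a graph classifier head
loss_fn = torch.nn.CrossEntropyLoss()
graphs = torch.randn(8, 16)             # one pooled embedding per graph
labels = torch.randint(0, 3, (8,))
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.1

clipped = [torch.zeros_like(p) for p in model.parameters()]
for g, y in zip(graphs, labels):        # per-graph gradients
    grads = torch.autograd.grad(loss_fn(model(g[None]), y[None]),
                                model.parameters())
    norm = torch.sqrt(sum((gr ** 2).sum() for gr in grads))
    scale = min(1.0, clip_norm / (norm.item() + 1e-12))
    for acc, gr in zip(clipped, grads):
        acc += gr * scale               # bound each graph's contribution

with torch.no_grad():
    for p, acc in zip(model.parameters(), clipped):
        noise = torch.randn_like(p) * noise_multiplier * clip_norm
        p -= lr * (acc + noise) / len(graphs)
```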

BIG-bench Machine Learning, Graph Classification

Distributed Machine Learning and the Semblance of Trust

no code implementations • 21 Dec 2021 • Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis

Machine learning (ML) at scale requires large and diverse datasets to promote scientific insight into many meaningful problems.

BIG-bench Machine Learning, Federated Learning +1

A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index

no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert

$\psi$ uniquely characterises the Gaussian mechanism (GM) and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
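
Assuming $\psi$ is the sensitivity-to-noise ratio $\Delta/\sigma$ (a reading of the abstract, not a quote of the paper's definition), the claim is easy to verify numerically: the GM's exact privacy profile from Balle & Wagner (2018), $\delta(\varepsilon) = \Phi(\Delta/2\sigma - \varepsilon\sigma/\Delta) - e^{\varepsilon}\,\Phi(-\Delta/2\sigma - \varepsilon\sigma/\Delta)$, depends on $\Delta$ and $\sigma$ only through their ratio.

```python
# Sketch: the Gaussian mechanism's entire (epsilon, delta) profile as a
# function of a single quantity psi = Delta / sigma (an assumed
# interpretation), via the exact profile of Balle & Wagner (2018):
#   delta(eps) = Phi(psi/2 - eps/psi) - exp(eps) * Phi(-psi/2 - eps/psi)
from math import exp
from statistics import NormalDist

Phi = NormalDist().cdf

def gm_delta(eps: float, psi: float) -> float:
    """Exact delta achieved by the Gaussian mechanism at a given epsilon."""
    return Phi(psi / 2 - eps / psi) - exp(eps) * Phi(-psi / 2 - eps / psi)

for psi in (0.5, 1.0, 2.0):
    print(f"psi={psi}: delta(eps=1) = {gm_delta(1.0, psi):.2e}")
```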

Partial sensitivity analysis in differential privacy

1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis

However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.
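
In DP-SGD an individual's privacy loss is driven by their clipped per-sample gradient norm, so one plausible per-feature attribution (an assumption here, not a quote of the paper's estimator) is the derivative of that norm with respect to each input feature:

```python
# Sketch: attributing an individual's privacy cost to input features by
# differentiating the per-sample gradient norm w.r.t. the input.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(6, 2)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(1, 6, requires_grad=True)
y = torch.tensor([1])

# Per-sample gradient norm, kept differentiable w.r.t. the input x.
grads = torch.autograd.grad(loss_fn(model(x), y), model.parameters(),
                            create_graph=True)
grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))

# d ||grad||_2 / d x_j : how much each feature moves the privacy cost.
feature_attribution = torch.autograd.grad(grad_norm, x)[0]
print("per-feature attribution:", feature_attribution.squeeze().tolist())
```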

Image Classification

An automatic differentiation system for the age of differential privacy

no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis

We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
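
The core idea, in miniature: autodiff exposes each record's influence on a differentiable query, and the norm of that influence is a first-order sensitivity proxy used to calibrate noise. The sketch below is a crude illustration of that idea only, far simpler than a compiler-style framework like Tritium, and note that a *local* sensitivity proxy alone does not yield DP; the framework tracks global bounds.

```python
# Sketch: autodiff-based sensitivity analysis in miniature. Per-record
# gradient norms of a differentiable query give a first-order, *local*
# sensitivity proxy (illustrative only; not sufficient for DP by itself).
import torch

D = torch.randn(100, 4, requires_grad=True)   # 100 records, 4 features each

query = D.mean()                               # a differentiable scalar query
query.backward()

per_record_influence = D.grad.norm(dim=1)      # ||d query / d record_i||
local_sensitivity = per_record_influence.max().item()

sigma = local_sensitivity * 1.1                # calibrate Gaussian noise to it
private_answer = query.item() + torch.randn(1).item() * sigma
print(f"query={query.item():.4f}, noisy={private_answer:.4f}, "
      f"local sensitivity proxy={local_sensitivity:.5f}")
```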

BIG-bench Machine Learning

NeuralDP: Differentially private neural networks by design

no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis

The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.
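
Those rigorous guarantees come from privacy accounting across all training steps. As a self-contained reference point, here is a minimal Rényi-DP accountant for $T$ compositions of a non-subsampled Gaussian mechanism, using the standard facts that the GM has RDP $\alpha/(2\sigma^2)$ at order $\alpha$ and that $\varepsilon(\delta) = \min_\alpha \big[\mathrm{rdp}(\alpha) + \log(1/\delta)/(\alpha-1)\big]$; real DP training uses tighter subsampled accounting, so treat this as an upper-bound sketch rather than the paper's accountant.

```python
# Sketch: minimal RDP accountant for T compositions of a (non-subsampled)
# Gaussian mechanism, converted to an (eps, delta)-DP bound.
from math import log

def epsilon_after_training(steps: int, noise_multiplier: float,
                           delta: float) -> float:
    """GM has RDP(alpha) = alpha / (2 sigma^2); compose, then convert."""
    best = float("inf")
    for alpha in [1 + x / 10 for x in range(1, 1000)]:  # search over orders
        rdp = steps * alpha / (2 * noise_multiplier ** 2)
        eps = rdp + log(1 / delta) / (alpha - 1)        # RDP -> (eps, delta)
        best = min(best, eps)
    return best

print(f"eps = {epsilon_after_training(10, 4.0, 1e-5):.2f} at delta=1e-5")
```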

Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis

Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.
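
The "closed-form reasoning" half of such a hybrid system can be illustrated symbolically: derive, in closed form, how one individual's record moves a query, so their worst-case contribution falls out analytically. The sketch below does this for a mean query with sympy; it is illustrative only, the paper combines symbolic analysis with tracing through real ML code.

```python
# Sketch: symbolic (closed-form) per-individual sensitivity for a mean query.
import sympy as sp

n = 5
records = sp.symbols(f"x0:{n}")            # x0 ... x4, one per individual
query = sp.Add(*records) / n               # the mean query

# Closed-form influence of each individual on the query output:
influence = [sp.diff(query, xi) for xi in records]   # each equals 1/n
print(influence)

# If every record lies in [lo, hi], one individual changing their value
# moves the query by at most (hi - lo) / n: the per-individual sensitivity.
lo, hi = sp.symbols("lo hi")
per_individual_sensitivity = sp.simplify(influence[0] * (hi - lo))
print(per_individual_sensitivity)          # (hi - lo)/5
```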

BIG-bench Machine Learning

Differentially private federated deep learning for multi-site medical image segmentation

1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

The application of PTs to FL in medical imaging, the trade-offs between privacy guarantees and model utility, the ramifications for training performance, and the susceptibility of the final models to attacks have not yet been conclusively investigated.
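
The mechanism whose trade-off is studied here is DP federated averaging: each site clips its model update and the server adds Gaussian noise to the average. A minimal sketch of one round, with toy tensors standing in for real segmentation-network weights (not the paper's exact training setup):

```python
# Sketch: one round of DP federated averaging across several sites.
import torch

torch.manual_seed(0)
global_weights = torch.zeros(10)
client_updates = [torch.randn(10) for _ in range(4)]   # one per hospital/site
clip_norm, noise_multiplier = 1.0, 0.8

clipped = []
for u in client_updates:
    u = u * min(1.0, clip_norm / (u.norm().item() + 1e-12))  # bound influence
    clipped.append(u)

# Server: average clipped updates; noise is scaled to one site's max influence.
avg = torch.stack(clipped).mean(0)
noise = torch.randn(10) * noise_multiplier * clip_norm / len(clipped)
global_weights += avg + noise
print("updated global weights:", global_weights[:4].tolist())
```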

Federated Learning, Image Segmentation +4
