Search Results for author: Georgios Kaissis

Found 64 papers, 29 papers with code

The Liver Tumor Segmentation Benchmark (LiTS)

6 code implementations • 13 Jan 2019 • Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, Fabian Lohöfer, Julian Walter Holch, Wieland Sommer, Felix Hofmann, Alexandre Hostettler, Naama Lev-Cohain, Michal Drozdzal, Michal Marianne Amitai, Refael Vivanti, Jacob Sosna, Ivan Ezhov, Anjany Sekuboyina, Fernando Navarro, Florian Kofler, Johannes C. Paetzold, Suprosanna Shit, Xiaobin Hu, Jana Lipková, Markus Rempfler, Marie Piraud, Jan Kirschke, Benedikt Wiestler, Zhiheng Zhang, Christian Hülsemeyer, Marcel Beetz, Florian Ettlinger, Michela Antonelli, Woong Bae, Míriam Bellver, Lei Bi, Hao Chen, Grzegorz Chlebus, Erik B. Dam, Qi Dou, Chi-Wing Fu, Bogdan Georgescu, Xavier Giró-i-Nieto, Felix Gruen, Xu Han, Pheng-Ann Heng, Jürgen Hesser, Jan Hendrik Moltz, Christian Igel, Fabian Isensee, Paul Jäger, Fucang Jia, Krishna Chaitanya Kaluva, Mahendra Khened, Ildoo Kim, Jae-Hun Kim, Sungwoong Kim, Simon Kohl, Tomasz Konopczynski, Avinash Kori, Ganapathy Krishnamurthi, Fan Li, Hongchao Li, Junbo Li, Xiaomeng Li, John Lowengrub, Jun Ma, Klaus Maier-Hein, Kevis-Kokitsi Maninis, Hans Meine, Dorit Merhof, Akshay Pai, Mathias Perslev, Jens Petersen, Jordi Pont-Tuset, Jin Qi, Xiaojuan Qi, Oliver Rippel, Karsten Roth, Ignacio Sarasua, Andrea Schenk, Zengming Shen, Jordi Torres, Christian Wachinger, Chunliang Wang, Leon Weninger, Jianrong Wu, Daguang Xu, Xiaoping Yang, Simon Chun-Ho Yu, Yading Yuan, Miao Yu, Liping Zhang, Jorge Cardoso, Spyridon Bakas, Rickmer Braren, Volker Heinemann, Christopher Pal, An Tang, Samuel Kadoury, Luc Soler, Bram van Ginneken, Hayit Greenspan, Leo Joskowicz, Bjoern Menze

In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018.

Benchmarking • Computed Tomography (CT) • +3

Interactive and Explainable Region-guided Radiology Report Generation

1 code implementation • CVPR 2023 • Tim Tanida, Philip Müller, Georgios Kaissis, Daniel Rueckert

While previous methods generate reports without the possibility of human intervention and with limited explainability, our method opens up novel clinical use cases through additional interactive capabilities and introduces a high degree of transparency and explainability.

Medical Report Generation

Whole Brain Vessel Graphs: A Dataset and Benchmark for Graph Learning and Neuroscience (VesselGraph)

1 code implementation • 30 Aug 2021 • Johannes C. Paetzold, Julian McGinnis, Suprosanna Shit, Ivan Ezhov, Paul Büschl, Chinmay Prabhakar, Mihail I. Todorov, Anjany Sekuboyina, Georgios Kaissis, Ali Ertürk, Stephan Günnemann, Bjoern H. Menze

Moreover, we benchmark numerous state-of-the-art graph learning algorithms on the biologically relevant tasks of vessel prediction and vessel classification using the introduced vessel graph dataset.

Graph Learning

Relationformer: A Unified Framework for Image-to-Graph Generation

1 code implementation • 19 Mar 2022 • Suprosanna Shit, Rajat Koner, Bastian Wittmann, Johannes Paetzold, Ivan Ezhov, Hongwei Li, Jiazhen Pan, Sahand Sharifzadeh, Georgios Kaissis, Volker Tresp, Bjoern Menze

We leverage direct set-based object prediction and incorporate the interaction among the objects to learn an object-relation representation jointly.

Graph Generation • Object • +4

U-Noise: Learnable Noise Masks for Interpretable Image Segmentation

1 code implementation • 14 Jan 2021 • Teddy Koker, FatemehSadat Mireshghallah, Tom Titcombe, Georgios Kaissis

Deep Neural Networks (DNNs) are widely used for decision making in a myriad of critical applications, ranging from medical to societal and even judicial.

Decision Making • Image Segmentation • +2

Joint Learning of Localized Representations from Medical Images and Reports

1 code implementation • 6 Dec 2021 • Philip Müller, Georgios Kaissis, Congyu Zou, Daniel Rueckert

Contrastive learning has proven effective for pre-training image models on unlabeled data with promising results for tasks such as medical image classification.
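
Below is a minimal sketch of the kind of symmetric image-text contrastive (InfoNCE) objective such pre-training builds on; the tensor names and temperature value are illustrative, and this is not the paper's localized objective.

```python
import torch
import torch.nn.functional as F

def info_nce(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """img_emb, txt_emb: (batch, dim) embeddings of paired images and reports."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature                   # pairwise cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)   # matching pairs lie on the diagonal
    # symmetric cross-entropy over image-to-text and text-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```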

Contrastive Learning • Medical Image Classification • +5

Unsupervised Pathology Detection: A Deep Dive Into the State of the Art

1 code implementation • 1 Mar 2023 • Ioannis Lagogiannis, Felix Meissen, Georgios Kaissis, Daniel Rueckert

Our experiments demonstrate that newly developed feature-modeling methods from the industrial and medical literature achieve increased performance compared to previous work and set the new SOTA in a variety of modalities and datasets.

Unsupervised Anomaly Detection

Unsupervised Anomaly Localization with Structural Feature-Autoencoders

1 code implementation • 23 Aug 2022 • Felix Meissen, Johannes Paetzold, Georgios Kaissis, Daniel Rueckert

Most commonly, the anomaly detection model generates a "normal" version of an input image, and the pixel-wise $l^p$-difference of the two is used to localize anomalies.
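
A minimal sketch of this residual-based localization, assuming a generic reconstruction model and a placeholder choice of p:

```python
import torch

def anomaly_map(x: torch.Tensor, model: torch.nn.Module, p: int = 2) -> torch.Tensor:
    """x: input image tensor; model: reconstruction model returning a 'healthy' version of x."""
    with torch.no_grad():
        x_hat = model(x)                 # reconstructed "normal" version of the input
    return (x - x_hat).abs().pow(p)      # pixel-wise l^p residual used as the anomaly map
```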

Unsupervised Anomaly Detection

Differentially Private Graph Classification with GNNs

1 code implementation • 5 Feb 2022 • Tamara T. Mueller, Johannes C. Paetzold, Chinmay Prabhakar, Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

In this work, we introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.

BIG-bench Machine Learning • Graph Classification

Challenging Current Semi-Supervised Anomaly Segmentation Methods for Brain MRI

1 code implementation • 13 Sep 2021 • Felix Meissen, Georgios Kaissis, Daniel Rueckert

In this work, we tackle the problem of Semi-Supervised Anomaly Segmentation (SAS) in Magnetic Resonance Images (MRI) of the brain, which is the task of automatically identifying pathologies in brain images.

Anomaly Detection • Segmentation

Differentially private federated deep learning for multi-site medical image segmentation

1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

The application of PTs to FL in medical imaging, the trade-offs between privacy guarantees and model utility, the ramifications for training performance, and the susceptibility of the final models to attacks have not yet been conclusively investigated.

Federated Learning • Image Segmentation • +4

Robust Detection Outcome: A Metric for Pathology Detection in Medical Images

1 code implementation • 3 Mar 2023 • Felix Meissen, Philip Müller, Georgios Kaissis, Daniel Rueckert

To tackle this problem, we propose Robust Detection Outcome (RoDeO), a novel metric for evaluating algorithms for pathology detection in medical images, especially in chest X-rays.

Object Detection

Preserving privacy in domain transfer of medical AI models comes at no performance costs: The integral role of differential privacy

1 code implementation • 10 Jun 2023 • Soroosh Tayebi Arasteh, Mahshad Lotfinia, Teresa Nolte, Marwin Saehn, Peter Isfort, Christiane Kuhl, Sven Nebelung, Georgios Kaissis, Daniel Truhn

We specifically investigate the performance of models trained with DP as compared to models trained without DP on data from institutions that the model had not seen during its training (i.e., external validation), the situation reflective of the clinical use of AI models.

Domain Generalization • Fairness • +4

AutoSeg -- Steering the Inductive Biases for Automatic Pathology Segmentation

1 code implementation • 24 Jan 2022 • Felix Meissen, Georgios Kaissis, Daniel Rueckert

In medical imaging, un-, semi-, or self-supervised pathology detection is often approached with anomaly- or out-of-distribution detection methods, whose inductive biases are not intentionally directed towards detecting pathologies, and are therefore sub-optimal for this task.

Out-of-Distribution Detection

On the Pitfalls of Using the Residual Error as Anomaly Score

1 code implementation • 8 Feb 2022 • Felix Meissen, Benedikt Wiestler, Georgios Kaissis, Daniel Rueckert

Many current state-of-the-art methods for anomaly localization in medical images rely on calculating a residual image between a potentially anomalous input image and its "healthy" reconstruction.

SmoothNets: Optimizing CNN architecture design for differentially private deep learning

1 code implementation • 9 May 2022 • Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios Kaissis

The arguably most widely employed algorithm to train deep neural networks with Differential Privacy is DPSGD, which requires clipping and noising of per-sample gradients.
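
A minimal sketch of the clip-and-noise step this refers to, with flattened per-sample gradients as a stand-in for a real implementation (libraries such as Opacus compute these efficiently):

```python
import torch

def dpsgd_step(per_sample_grads: torch.Tensor, clip_norm: float, noise_multiplier: float) -> torch.Tensor:
    """per_sample_grads: (batch, num_params) flattened per-sample gradients."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)                         # per-sample gradient norms
    clipped = per_sample_grads * (clip_norm / (norms + 1e-12)).clamp(max=1.0)  # clip each gradient to norm <= clip_norm
    noise = torch.randn(per_sample_grads.size(1)) * noise_multiplier * clip_norm
    return (clipped.sum(dim=0) + noise) / per_sample_grads.size(0)             # noisy, averaged gradient for the update
```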

Image Classification with Differential Privacy

Anatomy-Driven Pathology Detection on Chest X-rays

1 code implementation • 5 Sep 2023 • Philip Müller, Felix Meissen, Johannes Brandt, Georgios Kaissis, Daniel Rueckert

Pathology detection and delineation enable the automatic interpretation of medical scans such as chest X-rays while providing a high level of explainability to support radiologists in making informed decisions.

Anatomy • Multiple Instance Learning • +2

Interpretable 2D Vision Models for 3D Medical Images

1 code implementation • 13 Jul 2023 • Alexander Ziller, Ayhan Can Erdur, Marwa Trigui, Alp Güvenir, Tamara T. Mueller, Philip Müller, Friederike Jungmann, Johannes Brandt, Jan Peeken, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

Training Artificial Intelligence (AI) models on 3D images presents unique challenges compared to the 2D case: Firstly, the demand for computational resources is significantly higher, and secondly, the availability of large datasets for pre-training is often limited, impeding training success.

Weakly Supervised Object Detection in Chest X-Rays with Differentiable ROI Proposal Networks and Soft ROI Pooling

1 code implementation • 19 Feb 2024 • Philip Müller, Felix Meissen, Georgios Kaissis, Daniel Rueckert

Weakly supervised object detection (WSup-OD) increases the usefulness and interpretability of image classification algorithms without requiring additional supervision.

Image Classification • Multiple Instance Learning • +2

Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis

Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.

BIG-bench Machine Learning

NeuralDP: Differentially private neural networks by design

no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis

The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.

Partial sensitivity analysis in differential privacy

1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis

However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.
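
For context, a restatement of standard (α, ε)-Rényi DP (Mironov, 2017), of which the individual, per-person variant mentioned above is a refinement:

```latex
% Standard (\alpha, \varepsilon)-R\'enyi DP: for all adjacent datasets D, D',
D_\alpha\!\left(\mathcal{M}(D)\,\middle\|\,\mathcal{M}(D')\right)
  = \frac{1}{\alpha - 1}\,\log\,\mathbb{E}_{x \sim \mathcal{M}(D')}
    \left[\left(\frac{\Pr[\mathcal{M}(D) = x]}{\Pr[\mathcal{M}(D') = x]}\right)^{\!\alpha}\right]
  \;\le\; \varepsilon .
```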

Image Classification

An automatic differentiation system for the age of differential privacy

no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis

We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).

BIG-bench Machine Learning

A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index

no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert

$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
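
As a rough illustration (the exact definition of $\psi$ is given in the paper), the Gaussian mechanism perturbs a query with noise whose scale is tied to the query's sensitivity, and a single ratio of these two quantities is the kind of index the abstract refers to:

```python
import numpy as np

def gaussian_mechanism(query_value: np.ndarray, l2_sensitivity: float, noise_multiplier: float) -> np.ndarray:
    """Release query_value with N(0, sigma^2) noise, where sigma = noise_multiplier * l2_sensitivity.
    The ratio l2_sensitivity / sigma (= 1 / noise_multiplier) couples the two quantities
    (query sensitivity and noise magnitude) that such an index encapsulates."""
    sigma = noise_multiplier * l2_sensitivity
    return query_value + np.random.normal(loc=0.0, scale=sigma, size=query_value.shape)
```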

Distributed Machine Learning and the Semblance of Trust

no code implementations • 21 Dec 2021 • Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis

The utilisation of large and diverse datasets for machine learning (ML) at scale is required to promote scientific insight into many meaningful problems.

BIG-bench Machine Learning • Federated Learning • +1

Multi-modal unsupervised brain image registration using edge maps

no code implementations • 9 Feb 2022 • Vasiliki Sideri-Lampretsa, Georgios Kaissis, Daniel Rueckert

Diffeomorphic deformable multi-modal image registration is a challenging task which aims to bring images acquired by different modalities to the same coordinate space and at the same time to preserve the topology and the invertibility of the transformation.

Image Registration

Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks

no code implementations • 1 Mar 2022 • Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

Collaborative machine learning settings like federated learning can be susceptible to adversarial interference and attacks.

Federated Learning

Differentially private training of residual networks with scale normalisation

no code implementations • 1 Mar 2022 • Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis

The training of neural networks with Differentially Private Stochastic Gradient Descent offers formal Differential Privacy guarantees but introduces accuracy trade-offs.

Kernel Normalized Convolutional Networks

1 code implementation • 20 May 2022 • Reza Nasirigerdeh, Reihaneh Torkzadehmahani, Daniel Rueckert, Georgios Kaissis

Existing convolutional neural network architectures frequently rely upon batch normalization (BatchNorm) to effectively train the model.

Federated Learning • Image Classification • +1

Bridging the Gap: Differentially Private Equivariant Deep Learning for Medical Image Analysis

no code implementations • 9 Sep 2022 • Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis

Machine learning with formal privacy-preserving techniques like Differential Privacy (DP) allows one to derive valuable insights from sensitive medical imaging data while promising to protect patient privacy, but it usually comes at a sharp privacy-utility trade-off.

Privacy Preserving

Label Noise-Robust Learning using a Confidence-Based Sieving Strategy

no code implementations • 11 Oct 2022 • Reihaneh Torkzadehmahani, Reza Nasirigerdeh, Daniel Rueckert, Georgios Kaissis

Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge.

Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens

no code implementations • 24 Oct 2022 • Georgios Kaissis, Alexander Ziller, Stefan Kolek Martinez de Azagra, Daniel Rueckert

Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries, but such adversaries are rarely encountered in practice.
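
For reference, the standard hypothesis-testing reading of (ε, δ)-DP bounds the power of any membership test in terms of its false-positive rate α (Kairouz et al., 2015); this is the kind of optimal-adversary bound the abstract alludes to:

```latex
% Any test of "record present" vs. "record absent" with false-positive rate \alpha satisfies
\mathrm{Power} \;\le\; e^{\varepsilon}\,\alpha + \delta,
\qquad \text{equivalently} \qquad
\mathrm{FNR} + e^{\varepsilon}\,\mathrm{FPR} \;\ge\; 1 - \delta .
```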

Exploiting segmentation labels and representation learning to forecast therapy response of PDAC patients

no code implementations • 8 Nov 2022 • Alexander Ziller, Ayhan Can Erdur, Friederike Jungmann, Daniel Rueckert, Rickmer Braren, Georgios Kaissis

The prediction of pancreatic ductal adenocarcinoma therapy response is a clinically challenging and important task in this high-mortality tumour entity.

Representation Learning

How Do Input Attributes Impact the Privacy Loss in Differential Privacy?

no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis

Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.

Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models

1 code implementation • 30 Jan 2023 • Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis

We achieve such sparsity by design by introducing equivariant convolutional networks for model training with Differential Privacy.

Image Classification with Differential Privacy

Incentivising the federation: gradient-based metrics for data selection and valuation in private decentralised training

no code implementations • 4 May 2023 • Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

Obtaining high-quality data for collaborative training of machine learning models can be a challenging task due to A) regulatory concerns and B) a lack of data owner incentives to participate.

Federated Learning

Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy

no code implementations • 8 Jul 2023 • Georgios Kaissis, Jamie Hayes, Alexander Ziller, Daniel Rueckert

We explore Reconstruction Robustness (ReRo), which was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.
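
Roughly, following Balle et al. (2022), (η, γ)-ReRo bounds the probability that any reconstruction attack comes within error η of the target record; the statement below is a hedged paraphrase, not the paper's exact formulation:

```latex
% (\eta, \gamma)-reconstruction robustness: for a prior \pi over the target record Z,
% a fixed remaining dataset D_-, any reconstruction attack R, and error function \rho,
\Pr_{Z \sim \pi,\; \omega \sim \mathcal{M}(D_- \cup \{Z\})}
  \bigl[\, \rho\bigl(Z, R(\omega)\bigr) \le \eta \,\bigr] \;\le\; \gamma .
```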

Extended Graph Assessment Metrics for Graph Neural Networks

no code implementations • 13 Jul 2023 • Tamara T. Mueller, Sophie Starck, Leonhard F. Feiner, Kyriaki-Margarita Bintsi, Daniel Rueckert, Georgios Kaissis

In this work, we introduce extended graph assessment metrics (GAMs) for regression tasks and continuous adjacency matrices.

regression

Body Fat Estimation from Surface Meshes using Graph Neural Networks

no code implementations • 13 Jul 2023 • Tamara T. Mueller, Siyu Zhou, Sophie Starck, Friederike Jungmann, Alexander Ziller, Orhun Aksoy, Danylo Movchan, Rickmer Braren, Georgios Kaissis, Daniel Rueckert

Body fat volume and distribution can be a strong indication for a person's overall health and the risk for developing diseases like type 2 diabetes and cardiovascular diseases.

Bias-Aware Minimisation: Understanding and Mitigating Estimator Bias in Private SGD

no code implementations • 23 Aug 2023 • Moritz Knolle, Robert Dorfman, Alexander Ziller, Daniel Rueckert, Georgios Kaissis

Differentially private SGD (DP-SGD) holds the promise of enabling the safe and responsible application of machine learning to sensitive datasets.

MAD: Modality Agnostic Distance Measure for Image Registration

no code implementations • 6 Sep 2023 • Vasiliki Sideri-Lampretsa, Veronika A. Zimmer, Huaqi Qiu, Georgios Kaissis, Daniel Rueckert

The success of multi-modal image registration, whether it is conventional or learning based, is predicated upon the choice of an appropriate distance (or similarity) measure.

Image Registration

FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

no code implementations • 11 Aug 2023 • Karim Lekadir, Aasa Feragen, Abdul Joseph Fofanah, Alejandro F Frangi, Alena Buyx, Anais Emelie, Andrea Lara, Antonio R Porras, An-Wen Chan, Arcadi Navarro, Ben Glocker, Benard O Botwe, Bishesh Khanal, Brigit Beger, Carol C Wu, Celia Cintas, Curtis P Langlotz, Daniel Rueckert, Deogratias Mzurikwao, Dimitrios I Fotiadis, Doszhan Zhussupov, Enzo Ferrante, Erik Meijering, Eva Weicken, Fabio A González, Folkert W Asselbergs, Fred Prior, Gabriel P Krestin, Gary Collins, Geletaw S Tegenaw, Georgios Kaissis, Gianluca Misuraca, Gianna Tsakou, Girish Dwivedi, Haridimos Kondylakis, Harsha Jayakody, Henry C Woodruf, Hugo JWL Aerts, Ian Walsh, Ioanna Chouvarda, Irène Buvat, Islem Rekik, James Duncan, Jayashree Kalpathy-Cramer, Jihad Zahir, Jinah Park, John Mongan, Judy W Gichoya, Julia A Schnabel, Kaisar Kushibar, Katrine Riklund, Kensaku MORI, Kostas Marias, Lameck M Amugongo, Lauren A Fromont, Lena Maier-Hein, Leonor Cerdá Alberich, Leticia Rittner, Lighton Phiri, Linda Marrakchi-Kacem, Lluís Donoso-Bach, Luis Martí-Bonmatí, M Jorge Cardoso, Maciej Bobowicz, Mahsa Shabani, Manolis Tsiknakis, Maria A Zuluaga, Maria Bielikova, Marie-Christine Fritzsche, Marius George Linguraru, Markus Wenzel, Marleen de Bruijne, Martin G Tolsgaard, Marzyeh Ghassemi, Md Ashrafuzzaman, Melanie Goisauf, Mohammad Yaqub, Mohammed Ammar, Mónica Cano Abadía, Mukhtar M E Mahmoud, Mustafa Elattar, Nicola Rieke, Nikolaos Papanikolaou, Noussair Lazrak, Oliver Díaz, Olivier Salvado, Oriol Pujol, Ousmane Sall, Pamela Guevara, Peter Gordebeke, Philippe Lambin, Pieta Brown, Purang Abolmaesumi, Qi Dou, Qinghua Lu, Richard Osuala, Rose Nakasi, S Kevin Zhou, Sandy Napel, Sara Colantonio, Shadi Albarqouni, Smriti Joshi, Stacy Carter, Stefan Klein, Steffen E Petersen, Susanna Aussó, Suyash Awate, Tammy Riklin Raviv, Tessa Cook, Tinashe E M Mutsvangwa, Wendy A Rogers, Wiro J Niessen, Xènia Puig-Bosch, Yi Zeng, Yunusa G Mohammed, Yves Saint James Aquino, Zohaib Salahuddin, Martijn P A Starmans

This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.

Fairness

(Predictable) Performance Bias in Unsupervised Anomaly Detection

no code implementations • 25 Sep 2023 • Felix Meissen, Svenja Breuer, Moritz Knolle, Alena Buyx, Ruth Müller, Georgios Kaissis, Benedikt Wiestler, Daniel Rückert

The empirical fairness laws discovered in our study make disparate performance in UAD models easier to estimate and aid in determining the most desirable dataset composition.

Fairness • Unsupervised Anomaly Detection

Propagation and Attribution of Uncertainty in Medical Imaging Pipelines

1 code implementation • 28 Sep 2023 • Leonhard F. Feiner, Martin J. Menten, Kerstin Hammernik, Paul Hager, Wenqi Huang, Daniel Rueckert, Rickmer F. Braren, Georgios Kaissis

In this paper, we propose a method to propagate uncertainty through cascades of deep learning models in medical imaging pipelines.
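
A generic Monte Carlo sketch of propagating predictive uncertainty through a two-model cascade; the paper's exact propagation and attribution scheme may differ, and model_a and model_b are hypothetical stand-ins.

```python
import torch

def propagate_uncertainty(x, model_a, model_b, n_samples: int = 32):
    """model_a(x) returns (mean, std) of an intermediate output (e.g. a reconstruction);
    model_b maps that output to the downstream prediction."""
    mean_a, std_a = model_a(x)
    outputs = []
    for _ in range(n_samples):
        sample = mean_a + std_a * torch.randn_like(std_a)   # draw from the intermediate predictive distribution
        outputs.append(model_b(sample))
    outputs = torch.stack(outputs)
    return outputs.mean(dim=0), outputs.std(dim=0)          # propagated mean and uncertainty of the final output
```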

SoK: Memorisation in machine learning

no code implementations • 6 Nov 2023 • Dmitrii Usynin, Moritz Knolle, Georgios Kaissis

In this work we unify a broad range of previous definitions and perspectives on memorisation in ML, discuss their interplay with model generalisation, and examine the implications of these phenomena for data privacy.

How Low Can You Go? Surfacing Prototypical In-Distribution Samples for Unsupervised Anomaly Detection

no code implementations • 6 Dec 2023 • Felix Meissen, Johannes Getzner, Alexander Ziller, Georgios Kaissis, Daniel Rueckert

Additionally, we show that the prototypical in-distribution samples identified by our proposed methods translate well to different models and other datasets and that using their characteristics as guidance allows for successful manual selection of small subsets of high-performing samples.

Pneumonia Detection • Unsupervised Anomaly Detection

Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging

no code implementations • 5 Dec 2023 • Alexander Ziller, Tamara T. Mueller, Simon Stieger, Leonhard Feiner, Johannes Brandt, Rickmer Braren, Daniel Rueckert, Georgios Kaissis

Although a lower budget decreases the risk of information leakage, it typically also reduces the performance of such models.

Bounding Reconstruction Attack Success of Adversaries Without Data Priors

no code implementations • 20 Feb 2024 • Alexander Ziller, Anneliese Riess, Kristian Schwethelm, Tamara T. Mueller, Daniel Rueckert, Georgios Kaissis

When training ML models with differential privacy (DP), formal upper bounds on the success of such reconstruction attacks can be provided.

Reconstruction Attack

Visual Privacy Auditing with Diffusion Models

no code implementations • 12 Mar 2024 • Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Georgios Kaissis, Alexander Ziller

We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors and assess its implications on privacy leakage under DP-SGD.

Image Reconstruction • Reconstruction Attack
