Search Results for author: Sven Nebelung

Found 17 papers, 15 papers with code

Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models

1 code implementation • 1 Oct 2023 • Soroosh Tayebi Arasteh, Christiane Kuhl, Marwin-Jonathan Saehn, Peter Isfort, Daniel Truhn, Sven Nebelung

So far, the impact of training strategy, i.e., local versus collaborative, on the diagnostic on-domain and off-domain performance of AI models interpreting chest radiographs has not been assessed.

Domain Generalization • Federated Learning • +2

Large Language Models Streamline Automated Machine Learning for Clinical Studies

1 code implementation • 27 Aug 2023 • Soroosh Tayebi Arasteh, Tianyu Han, Mahshad Lotfinia, Christiane Kuhl, Jakob Nikolas Kather, Daniel Truhn, Sven Nebelung

A knowledge gap persists between machine learning (ML) developers (e.g., data scientists) and practitioners (e.g., clinicians), hampering the full utilization of ML for clinical data analysis.

Enhancing Network Initialization for Medical AI Models Using Large-Scale, Unlabeled Natural Images

2 code implementations • 15 Aug 2023 • Soroosh Tayebi Arasteh, Leo Misera, Jakob Nikolas Kather, Daniel Truhn, Sven Nebelung

In this study, we explored whether SSL pre-training on non-medical images can be applied to chest radiographs and how it compares to supervised pre-training on non-medical images and on medical images.

Medical Diagnosis • Medical Image Classification • +1

Preserving privacy in domain transfer of medical AI models comes at no performance costs: The integral role of differential privacy

1 code implementation • 10 Jun 2023 • Soroosh Tayebi Arasteh, Mahshad Lotfinia, Teresa Nolte, Marwin Saehn, Peter Isfort, Christiane Kuhl, Sven Nebelung, Georgios Kaissis, Daniel Truhn

We specifically investigate the performance of models trained with DP compared to models trained without DP on data from institutions that the model had not seen during training (i.e., external validation) - the situation reflective of the clinical use of AI models.

Domain Generalization • Fairness • +4

Transformers for CT Reconstruction From Monoplanar and Biplanar Radiographs

no code implementations • 11 May 2023 • Firas Khader, Gustav Müller-Franzes, Tianyu Han, Sven Nebelung, Christiane Kuhl, Johannes Stegmaier, Daniel Truhn

X-rays are widely available, and even if a CT reconstructed from these radiographs cannot replace a complete CT in the diagnostic setting, it might spare patients radiation in cases where a CT is acquired only for rough measurements, such as determining organ size.

Computed Tomography (CT)

Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers

no code implementations • 11 May 2023 • Firas Khader, Jakob Nikolas Kather, Tianyu Han, Sven Nebelung, Christiane Kuhl, Johannes Stegmaier, Daniel Truhn

However, while the conventional transformer allows for simultaneous processing of a large set of input tokens, its computational demand scales quadratically with the number of input tokens and thus quadratically with the number of image patches.
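The quadratic scaling mentioned in the abstract comes from the n×n attention score matrix. A minimal NumPy sketch (illustrative only, not the paper's implementation; shapes and names are assumptions) makes this visible:

```python
import numpy as np

def naive_self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention without learned projections.
    x has shape (n, d); the score matrix has shape (n, n), so memory
    and compute grow with n**2 (n = number of tokens / image patches)."""
    scores = x @ x.T / np.sqrt(x.shape[1])        # (n, n) -- the quadratic term
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax over each row
    return weights @ x                            # (n, d)

n, d = 1024, 64
x = np.random.default_rng(0).normal(size=(n, d))
out = naive_self_attention(x)
assert out.shape == (n, d)
# Doubling n (e.g., twice as many image patches) quadruples the (n, n) matrix.
```

This is why whole-slide images, which decompose into very many patches, quickly become expensive for a vanilla transformer.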

Image Classification • Whole Slide Images

Medical Diffusion: Denoising Diffusion Probabilistic Models for 3D Medical Image Generation

1 code implementation • 7 Nov 2022 • Firas Khader, Gustav Mueller-Franzes, Soroosh Tayebi Arasteh, Tianyu Han, Christoph Haarburger, Maximilian Schulze-Hagen, Philipp Schad, Sandy Engelhardt, Bettina Baessler, Sebastian Foersch, Johannes Stegmaier, Christiane Kuhl, Sven Nebelung, Jakob Nikolas Kather, Daniel Truhn

Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice score 0.91 vs. 0.95 without vs. with synthetic data).
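The Dice score reported above is the standard overlap metric for segmentation masks. A minimal sketch of how it is computed (toy masks, not the study's data):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Two toy 4x4 masks overlapping in 3 pixels (|pred|=3, |target|=4)
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, target), 3))  # 2*3 / (3+4) -> 0.857
```

A score of 1.0 means perfect overlap, so the reported jump from 0.91 to 0.95 corresponds to a substantially better agreement with the ground-truth masks.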

Computed Tomography (CT) • Denoising • +3

Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization

1 code implementation • 25 Nov 2020 • Tianyu Han, Sven Nebelung, Federico Pedersoli, Markus Zimmermann, Maximilian Schulze-Hagen, Michael Ho, Christoph Haarburger, Fabian Kiessling, Christiane Kuhl, Volkmar Schulz, Daniel Truhn

Contrary to previous research on adversarially trained models, we found that the accuracy of such models equaled that of standard models when sufficiently large datasets and dual batch norm training were used.

Decision Making
