no code implementations • 2 Sep 2024 • Sam Gijsen, Kerstin Ritter
Compared to models exposed to narrower clinical text information, such models retrieve EEGs from clinical reports (and vice versa) with substantially higher accuracy.
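The listing does not spell out the training objective, but EEG–report retrieval models of this kind are commonly trained with a CLIP-style symmetric contrastive loss and queried by cosine similarity. A minimal numpy sketch under that assumption (all names and shapes are hypothetical, not taken from the paper):

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_loss(eeg_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss over a batch of paired embeddings."""
    e, t = l2norm(eeg_emb), l2norm(txt_emb)
    logits = e @ t.T / temperature          # cosine-similarity matrix
    labels = np.arange(len(logits))         # matching pairs lie on the diagonal
    log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_e2t = -log_sm[labels, labels].mean()
    log_sm_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_t2e = -log_sm_t[labels, labels].mean()
    return (loss_e2t + loss_t2e) / 2

def retrieve(query_emb, gallery_emb):
    """Indices of gallery items ranked by cosine similarity to each query."""
    sims = l2norm(query_emb) @ l2norm(gallery_emb).T
    return np.argsort(-sims, axis=1)

rng = np.random.default_rng(0)
txt = rng.normal(size=(8, 32))
eeg = txt + 0.01 * rng.normal(size=(8, 32))   # toy, well-aligned pairs
ranks = retrieve(eeg, txt)
assert (ranks[:, 0] == np.arange(8)).all()    # each EEG's top hit is its own report
```

Misaligned pairs should yield a larger loss than aligned ones, which is what drives the cross-modal alignment the abstract describes.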
1 code implementation • 27 Sep 2023 • Roshan Prakash Rane, Jihoon Kim, Arjun Umesha, Didem Stark, Marc-André Schulz, Kerstin Ritter
In conclusion, the DeepRepViz framework provides a systematic approach to testing for potential confounders, such as age, sex, and imaging artifacts, and improves the transparency of DL models for neuroimaging studies.
1 code implementation • 21 Jun 2023 • Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task.
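The benchmark's exact metrics are not reproduced in this excerpt; one simple way to quantify explanation performance against known signal locations is the precision of the top-attributed voxels with respect to a ground-truth mask. A toy sketch of that idea (function name and scoring rule are illustrative, not the paper's):

```python
import numpy as np

def explanation_precision(attribution, gt_mask, k=None):
    """Fraction of the top-k attributed voxels that fall inside the
    ground-truth signal region (a simple localization score)."""
    flat_attr = np.abs(attribution).ravel()
    flat_mask = gt_mask.ravel().astype(bool)
    k = k or int(flat_mask.sum())           # default: size of the signal region
    top = np.argsort(-flat_attr)[:k]        # indices of strongest attributions
    return flat_mask[top].mean()

# toy "volume": signal confined to one corner
gt = np.zeros((8, 8, 8)); gt[:2, :2, :2] = 1
good = gt + 0.05 * np.random.default_rng(1).normal(size=gt.shape)
bad = np.random.default_rng(2).normal(size=gt.shape)
assert explanation_precision(good, gt) > explanation_precision(bad, gt)
```

A benchmark dataset with known ground truth makes such scores meaningful, which is exactly the gap the abstract points at.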
no code implementations • 20 Jan 2023 • Fabian Eitel, Marc-André Schulz, Moritz Seiler, Henrik Walter, Kerstin Ritter
By promising more accurate diagnostics and individualized treatment recommendations, deep neural networks, and convolutional neural networks in particular, have become a powerful tool in medical imaging.
no code implementations • 22 Jul 2022 • Di Wang, Nicolas Honnorat, Peter T. Fox, Kerstin Ritter, Simon B. Eickhoff, Sudha Seshadri, Mohamad Habes
Deep neural networks currently provide the most advanced and accurate machine learning models to distinguish between structural MRI scans of subjects with Alzheimer's disease and healthy controls.
no code implementations • 24 Mar 2022 • Fabian Eitel, Anna Melkonyan, Kerstin Ritter
A major prerequisite for the application of machine learning models in clinical decision making is trust and interpretability.
1 code implementation • 9 Dec 2021 • Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
In recent years, many 'explainable artificial intelligence' (XAI) approaches have been developed, but these have not always been evaluated objectively.
no code implementations • 23 Jul 2020 • Fabian Eitel, Jan Philipp Albrecht, Martin Weygandt, Friedemann Paul, Kerstin Ritter
Neuroimaging data, e.g., obtained from magnetic resonance imaging (MRI), is comparably homogeneous due to (1) the uniform structure of the brain and (2) additional efforts to spatially normalize the data to a standard template using linear and non-linear transformations.
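In practice the spatial normalization mentioned here is done with neuroimaging toolboxes (SPM, FSL, ANTs) and includes nonlinear warps; the linear part, however, is just an affine resampling, which can be sketched in plain numpy with nearest-neighbour interpolation (a toy illustration, not any toolbox's implementation):

```python
import numpy as np

def affine_resample(volume, affine):
    """Resample a 3D volume onto its own voxel grid under a 4x4 affine map
    (nearest neighbour, pull-back convention: output voxel -> source voxel)."""
    out = np.zeros_like(volume)
    inv = np.linalg.inv(affine)
    idx = np.indices(volume.shape).reshape(3, -1)            # output voxel coords
    homog = np.vstack([idx, np.ones((1, idx.shape[1]))])     # homogeneous coords
    src = np.rint(inv @ homog)[:3].astype(int)               # source voxel coords
    ok = ((src >= 0) & (src < np.array(volume.shape)[:, None])).all(axis=0)
    out.reshape(-1)[ok] = volume[tuple(src[:, ok])]          # out-of-bounds stay 0
    return out

rng = np.random.default_rng(0)
vol = rng.random((4, 4, 4))
assert np.allclose(affine_resample(vol, np.eye(4)), vol)     # identity is a no-op
```

A pure translation by one voxel along the first axis shifts the volume accordingly, with zeros filled at the boundary.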
1 code implementation • 6 Apr 2020 • Matthias Ritter, Derek V. M. Ott, Friedemann Paul, John-Dylan Haynes, Kerstin Ritter
Due to the dynamic development of infections and the time lag between infection and the point at which a proportion of patients enters an intensive care unit (ICU), the need for future intensive care can easily be underestimated.
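The mechanism described, exponential infection growth plus a fixed infection-to-ICU lag, can be illustrated with a toy simulation (all rates, lags, and growth factors below are hypothetical, not the paper's fitted values):

```python
import numpy as np

def icu_admissions(days, growth=1.2, icu_rate=0.05, lag=10, i0=100):
    """Toy model: daily infections grow geometrically; a fixed proportion of
    the infected reaches the ICU after a fixed lag (in days)."""
    infections = i0 * growth ** np.arange(days)
    icu = np.zeros(days)
    icu[lag:] = icu_rate * infections[:-lag]   # today's ICU load = old infections
    return infections, icu

infections, icu = icu_admissions(30)
today = 29
naive = icu[today]                   # reading ICU load as if it reflected today
implied = 0.05 * infections[today]   # what today's infections will later produce
assert implied > naive * 5           # growth^lag ~ 6x: the naive view underestimates
```

With 20% daily growth and a 10-day lag, current ICU occupancy reflects infection levels that are already about six times lower than today's, which is the underestimation the abstract warns about.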
no code implementations • 14 Nov 2019 • Fabian Eitel, Jan Philipp Albrecht, Friedemann Paul, Kerstin Ritter
Neuroimaging studies based on magnetic resonance imaging (MRI) typically employ rigorous forms of preprocessing.
no code implementations • 19 Sep 2019 • Fabian Eitel, Kerstin Ritter
Attribution methods are an easy-to-use tool for investigating and validating machine learning models.
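As an illustration of how simple such methods can be, the gradient-times-input attribution reduces, for a linear model, to weight times feature, and its attributions sum exactly to the model output minus the bias (the "completeness" property). A minimal sketch, not tied to the paper's specific models:

```python
import numpy as np

def gradient_x_input(w, x):
    """Gradient * input attribution for a linear model f(x) = w.x + b.
    The gradient of f with respect to x is just the weight vector w."""
    return w * x

w = np.array([0.5, -2.0, 0.0, 1.0])
x = np.array([2.0, 1.0, 5.0, -1.0])
attr = gradient_x_input(w, x)
assert np.isclose(attr.sum(), w @ x)   # completeness holds in the linear case
assert attr[2] == 0.0                  # a zero-weight feature gets no attribution
```

For deep nonlinear models the same recipe uses the backpropagated gradient at the input, and completeness no longer holds exactly, which is one reason validating attribution methods is nontrivial.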
1 code implementation • 18 Apr 2019 • Fabian Eitel, Emily Soehler, Judith Bellmann-Strobl, Alexander U. Brandt, Klemens Ruprecht, René M. Giess, Joseph Kuchling, Susanna Asseyer, Martin Weygandt, John-Dylan Haynes, Michael Scheel, Friedemann Paul, Kerstin Ritter
The subsequent LRP visualization revealed that the CNN model indeed focuses on individual lesions, but also incorporates additional information such as lesion location and non-lesional white matter or gray matter areas such as the thalamus, which are established conventional and advanced MRI markers in MS. We conclude that LRP and the proposed framework have the capability to make diagnostic decisions of...
1 code implementation • 18 Mar 2019 • Moritz Böhle, Fabian Eitel, Martin Weygandt, Kerstin Ritter
In this study, we propose using layer-wise relevance propagation (LRP) to visualize convolutional neural network decisions for AD based on MRI data.
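The core of LRP is a layer-by-layer backward pass that redistributes the network's output score to input units in proportion to their contribution, conserving total relevance. A minimal numpy sketch of the LRP-epsilon rule for dense layers without biases (conservation then holds exactly); real applications use library implementations such as iNNvestigate or zennit:

```python
import numpy as np

def lrp_dense(a, w, relevance, eps=1e-9):
    """One LRP-epsilon backward step through a dense layer with inputs a and
    pre-activations z = a @ w: relevance flows back proportionally to a_j * w_jk."""
    z = a @ w
    z = z + eps * np.sign(z)     # stabilizer against division by ~0
    s = relevance / z            # per-output relevance "rate"
    return a * (s @ w.T)         # relevance assigned to each input unit

rng = np.random.default_rng(0)
a0 = rng.random(6)               # input activations
w1 = rng.normal(size=(6, 4))
a1 = np.maximum(a0 @ w1, 0)      # hidden ReLU layer (ReLU passes relevance through)
w2 = rng.normal(size=(4, 1))
out = a1 @ w2                    # network output: the score being explained

r1 = lrp_dense(a1, w2, out)      # redistribute output relevance to hidden units
r0 = lrp_dense(a0, w1, r1)       # ...and onward to the input
assert np.isclose(r0.sum(), out.sum(), rtol=1e-4)   # relevance is conserved
```

Applied to a 3D CNN on MRI, the resulting input-level relevance map is what gets rendered as the heatmaps discussed in the abstract.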
1 code implementation • 8 Aug 2018 • Johannes Rieke, Fabian Eitel, Martin Weygandt, John-Dylan Haynes, Kerstin Ritter
In summary, we show that applying different visualization methods is important for understanding the decisions of a CNN, a step that is crucial to increasing clinical impact and trust in computer-based decision support systems.