Search Results for author: Maximilian Dreyer

Found 11 papers, 6 papers with code

Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification

no code implementations • 16 Apr 2024 • Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer

Using quantitative R2* maps, we separated Alzheimer's patients (n=117) from normal controls (n=219) with a convolutional neural network, systematically investigated the learned concepts using Concept Relevance Propagation, and compared these results to a conventional region-of-interest-based analysis.
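As a rough, hedged illustration of the classification half of this pipeline only, here is a minimal 3D CNN in PyTorch; the architecture, input resolution, and channel widths are placeholder assumptions, not the authors' model:

```python
# Minimal, hypothetical sketch (not the authors' architecture): a small 3D CNN
# that separates two classes (e.g. patients vs. controls) from quantitative
# volumetric maps. Input resolution and channel widths are assumptions.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Small3DCNN()
logits = model(torch.randn(4, 1, 64, 64, 64))  # batch of 4 dummy R2* volumes
print(logits.shape)  # torch.Size([4, 2])
```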

PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits

1 code implementation • 9 Apr 2024 • Maximilian Dreyer, Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin

The field of mechanistic interpretability aims to study the role of individual neurons in Deep Neural Networks.
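Per the title, PURE's core move is to split a polysemantic neuron by separating the upstream circuits that drive it. A toy, hedged sketch of that clustering step, with synthetic attribution vectors standing in for real relevance scores:

```python
# Toy sketch of the clustering step: a neuron that fires for two unrelated
# concepts is split by clustering the upstream attribution vectors ("circuits")
# of its max-activating samples. Synthetic vectors stand in for real relevances.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
circuit_a = rng.normal(0, 1, 64)   # assumed circuit behind concept A
circuit_b = rng.normal(0, 1, 64)   # assumed circuit behind concept B
attributions = np.vstack(
    [circuit_a + 0.1 * rng.normal(size=64) for _ in range(100)]
    + [circuit_b + 0.1 * rng.normal(size=64) for _ in range(100)]
)

# each cluster corresponds to one "pure" feature hidden inside the neuron
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(attributions)
print(np.bincount(labels))  # two clean groups of ~100 samples
```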

AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers

1 code implementation • 8 Feb 2024 • Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek

Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process.

Tasks: Attribute, Computational Efficiency
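For background, a minimal sketch of the classic LRP epsilon rule for a linear layer, which is the kind of rule AttnLRP extends with attention-specific handling; the attention and normalization rules themselves are in the paper and its code, not reproduced here:

```python
# Background sketch only: the classic LRP epsilon-rule for a linear layer.
# AttnLRP extends this family of rules with dedicated handling for attention
# and normalization layers; those rules are not reproduced here.
import torch

def lrp_epsilon_linear(x, W, b, relevance_out, eps=1e-6):
    """Redistribute output relevance of y = x @ W.T + b onto the input x."""
    z = x @ W.T + b                           # forward pre-activations
    s = relevance_out / (z + eps * z.sign())  # stabilized relevance ratio
    return x * (s @ W)                        # input relevance (sum conserved)

x = torch.randn(1, 4)
W, b = torch.randn(3, 4), torch.zeros(3)
R_out = torch.softmax(x @ W.T + b, dim=-1)    # pretend this is output relevance
R_in = lrp_epsilon_linear(x, W, b, R_out)
print(R_in.sum().item(), R_out.sum().item())  # approximately equal
```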

Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations

1 code implementation • 28 Nov 2023 • Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin

What sets our approach apart is the combination of local and global strategies, enabling a clearer understanding of the (dis-)similarities in model decisions compared to the expected (prototypical) concept use, ultimately reducing the dependence on long-term human assessment.

Tasks: Decision Making
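A hedged sketch of the "prototypical" side of such an approach: fit a mixture model over per-sample concept-relevance vectors for a class and use the likelihood under those prototypes to flag (extra-)ordinary concept use. The relevance vectors and mixture size here are arbitrary stand-ins:

```python
# Rough sketch of the "prototypical" part, under assumed data: fit a Gaussian
# mixture over per-sample concept-relevance vectors of one class, then flag how
# (extra-)ordinary a new sample's concept use is via its likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
train_relevances = rng.normal(0, 1, size=(500, 32))  # 500 samples x 32 concepts
prototypes = GaussianMixture(n_components=4, random_state=0).fit(train_relevances)

ordinary = rng.normal(0, 1, size=(1, 32))   # concept use similar to training
unusual = rng.normal(5, 1, size=(1, 32))    # atypical concept use
print(prototypes.score(ordinary))           # high average log-likelihood
print(prototypes.score(unusual))            # much lower: worth a closer look
```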

From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space

1 code implementation • 18 Aug 2023 • Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions.

Tasks: Decision Making
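The title summarizes the mechanism: penalize the model's latent-space gradient along a known bias direction. A minimal sketch of one training step under that idea, assuming a toy encoder/head split and a given concept vector (e.g. a CAV) for the artifact:

```python
# Minimal sketch of one training step under this idea, assuming a toy
# encoder/head split and a given unit-norm concept vector (e.g. a CAV) that
# encodes the artifact; lambda and all shapes are arbitrary.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
head = nn.Linear(16, 2)
bias_cav = torch.randn(16)
bias_cav = bias_cav / bias_cav.norm()        # assumed known bias direction

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
z = encoder(x)                               # latent code
logits = head(z)
task_loss = nn.functional.cross_entropy(logits, y)

# penalize the latent gradient's alignment with the bias direction
target_logit = logits.gather(1, y[:, None]).sum()
(grad_z,) = torch.autograd.grad(target_logit, z, create_graph=True)
penalty = (grad_z @ bias_cav).pow(2).mean()

loss = task_loss + 1.0 * penalty             # lambda = 1.0 chosen arbitrarily
loss.backward()                              # gradients for a normal optimizer step
```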

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

1 code implementation • 22 Mar 2023 • Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin

To tackle this problem, we propose Reveal to Revise (R2R), a framework entailing the entire eXplainable Artificial Intelligence (XAI) life cycle, enabling practitioners to iteratively identify, mitigate, and (re-)evaluate spurious model behavior with a minimal amount of human interaction.

Tasks: Age Estimation, Decision Making, +2 more
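A schematic, runnable stub of the identify-mitigate-(re-)evaluate loop the abstract describes; every step function is a trivial placeholder, not the released R2R API:

```python
# Schematic, runnable stub of the R2R life cycle; each step function is a
# trivial placeholder, not the released implementation.
def identify_spurious_behavior(model, data):
    # placeholder: e.g. find artifact concepts via attribution/outlier analysis
    return []

def revise_model(model, data, artifacts):
    # placeholder: e.g. fine-tune with an unlearning penalty on the artifacts
    return model

def evaluate(model, data):
    # placeholder: e.g. accuracy on clean and artifact-poisoned test sets
    pass

def reveal_to_revise(model, data, max_rounds=3):
    for _ in range(max_rounds):
        artifacts = identify_spurious_behavior(model, data)   # reveal
        if not artifacts:
            break                                             # nothing left to fix
        model = revise_model(model, data, artifacts)          # revise
        evaluate(model, data)                                 # (re-)evaluate
    return model

model = reveal_to_revise(model=None, data=None)
```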

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

no code implementations • 21 Nov 2022 • Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or bounding box.

Tasks: Explainable Artificial Intelligence, Object Detection, +2 more
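A toy demonstration of the limitation noted above, with gradient-times-input standing in for LRP-style attribution and an arbitrary stand-in model: attributing a segmentation output region tends to highlight the predicted region itself, whereas the paper's concept-specific explanations condition on latent concepts instead:

```python
# Toy demonstration of the limitation, with gradient-times-input standing in
# for LRP-style attribution: attributing a segmentation output region tends to
# highlight the predicted region itself. The model is an arbitrary stand-in.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))     # toy one-class segmenter
x = torch.randn(1, 3, 32, 32, requires_grad=True)
mask_logits = net(x)

region = (mask_logits > 0).float()          # attribute the predicted foreground
(mask_logits * region).sum().backward()
attribution = (x * x.grad).sum(1)           # input-level heat map for the region
print(attribution.shape)                    # torch.Size([1, 32, 32])
```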

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation

2 code implementations • 7 Jun 2022 • Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

In this work, we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the "where" and "what" questions for individual predictions.

Tasks: Decision Making, Explainable Artificial Intelligence, +1 more
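A minimal sketch of the conditioning idea behind CRP: restrict the backward pass to a single latent channel, so the resulting input heat map answers "where" for one chosen "what". Gradient-times-input stands in for proper LRP rules, and the tiny model is an assumption:

```python
# Minimal sketch of the conditioning idea, with gradient-times-input standing
# in for proper LRP rules and a tiny made-up model: restrict the backward pass
# to one latent channel, so the heat map shows "where" for one chosen "what".
import torch
import torch.nn as nn

features = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.ReLU())
head = nn.Linear(4, 1)

x = torch.randn(1, 3, 16, 16, requires_grad=True)
acts = features(x)                          # 4 latent "concept" channels

concept = 2                                 # condition on channel 2 only
mask = torch.zeros_like(acts)
mask[:, concept] = 1.0
out = head((acts * mask).mean(dim=(2, 3)))  # other channels contribute nothing

out.sum().backward()
heatmap = (x * x.grad).sum(1)               # input relevance for this concept alone
print(heatmap.shape)                        # torch.Size([1, 16, 16])
```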

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence

no code implementations • 7 Feb 2022 • Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

With a growing interest in understanding neural network prediction strategies, Concept Activation Vectors (CAVs) have emerged as a popular tool for modeling human-understandable concepts in the latent space.

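A toy sketch of the directional divergence this paper targets: a linear probe's weight vector (the usual CAV) must suppress correlated noise and can therefore point away from the direction in which the concept actually varies, which a simple class-mean difference ("pattern") recovers. Data and estimators are illustrative only, not the paper's method:

```python
# Toy illustration of the filter-vs-signal gap, with made-up activations:
# dimension 0 carries the concept plus a shared distractor, dimension 1 the
# distractor alone. The probe weight ("filter" CAV) must subtract dimension 1,
# while a class-mean difference ("pattern") recovers the true signal direction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, n)
distractor = rng.normal(0, 1, n)
acts = np.stack([labels + distractor + 0.1 * rng.normal(size=n),
                 distractor], axis=1)

filter_cav = LogisticRegression().fit(acts, labels).coef_[0]
pattern_cav = acts[labels == 1].mean(0) - acts[labels == 0].mean(0)

print(filter_cav / np.linalg.norm(filter_cav))    # roughly [0.7, -0.7]
print(pattern_cav / np.linalg.norm(pattern_cav))  # roughly [1.0, 0.0]
```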

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

no code implementations • 9 Sep 2021 • Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin

The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations.

Tasks: Explainable Artificial Intelligence (XAI), Quantization
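A heavily simplified, hypothetical sketch of the idea of letting explanations drive quantization: snap weights to a tiny codebook and zero out weights with low relevance. The random relevance scores, codebook, and pruning threshold are all placeholders for the LRP-based, entropy-constrained assignment of the actual method:

```python
# Heavily simplified, hypothetical sketch of explainability-driven quantization:
# snap weights to a tiny codebook and zero out low-relevance weights. Random
# relevance scores stand in for LRP; the real method uses an entropy-constrained
# assignment, which is omitted here.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.5, 1000)
relevance = np.abs(rng.normal(0, 1, 1000))       # stand-in for LRP relevances

codebook = np.array([-0.5, 0.0, 0.5])            # 3-level (2-bit) codebook
nearest = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
quantized = codebook[nearest]

quantized[relevance < np.quantile(relevance, 0.4)] = 0.0  # prune low relevance
print("sparsity:", np.mean(quantized == 0.0))
```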
