Search Results for author: Sebastian Lapuschkin

Found 46 papers, 23 papers with code

Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification

no code implementations 16 Apr 2024 Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer

Using quantitative R2* maps, we separated Alzheimer's patients (n=117) from normal controls (n=219) with a convolutional neural network, systematically investigated the learned concepts using Concept Relevance Propagation, and compared these results to a conventional region-of-interest-based analysis.

PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits

1 code implementation 9 Apr 2024 Maximilian Dreyer, Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin

The field of mechanistic interpretability aims to study the role of individual neurons in Deep Neural Networks.

DualView: Data Attribution from the Dual Perspective

2 code implementations 19 Feb 2024 Galip Ümit Yolcu, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

In this work we present DualView, a novel method for post-hoc data attribution based on surrogate modelling, demonstrating both high computational efficiency, as well as good evaluation results.

Computational Efficiency

AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers

1 code implementation 8 Feb 2024 Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek

Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process.

Attribute, Computational Efficiency

Explaining Predictive Uncertainty by Exposing Second-Order Effects

no code implementations 30 Jan 2024 Florian Bley, Sebastian Lapuschkin, Wojciech Samek, Grégoire Montavon

So far, the question of explaining predictive uncertainty, i.e. why a model 'doubts', has been scarcely studied.

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test

1 code implementation 12 Jan 2024 Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina MC Höhne

The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function.
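
A minimal sketch of the MPRT principle, not the authors' implementation: an explanation (here plain gradient x input as a stand-in) is computed before and after randomising the parameters of a layer, and the two maps are compared; a method that passes the test should change noticeably. The model, the randomised layer and the similarity measure below are illustrative assumptions.

    # MPRT-style sanity check: does the explanation react to parameter randomisation?
    import torch

    def saliency(model, x, target):
        # Gradient x input as a stand-in for the explanation method under test.
        x = x.clone().requires_grad_(True)
        model.zero_grad()
        model(x)[0, target].backward()
        return (x.grad * x).detach()

    # Placeholder classifier; in practice this would be a trained model.
    model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                                torch.nn.Linear(128, 10)).eval()
    x = torch.randn(1, 64)
    before = saliency(model, x, target=3)

    # Top-down randomisation: re-initialise the parameters of the last layer.
    with torch.no_grad():
        for p in model[-1].parameters():
            p.copy_(torch.randn_like(p))

    after = saliency(model, x, target=3)

    # A similarity near 1 would mean the explanation ignores the model parameters.
    cos = torch.nn.functional.cosine_similarity(before.flatten(), after.flatten(), dim=0)
    print(f"explanation similarity after randomisation: {cos.item():.3f}")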

Explainable Artificial Intelligence (XAI)

Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations

1 code implementation 28 Nov 2023 Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin

What sets our approach apart is the combination of local and global strategies, enabling a clearer understanding of the (dis-)similarities in model decisions compared to the expected (prototypical) concept use, ultimately reducing the dependence on human long-term assessment.

Decision Making

Generative Fractional Diffusion Models

no code implementations 26 Oct 2023 Gabriel Nobis, Marco Aversa, Maximilian Springenberg, Michael Detzel, Stefano Ermon, Shinichi Nakajima, Roderick Murray-Smith, Sebastian Lapuschkin, Christoph Knochenhauer, Luis Oala, Wojciech Samek

We generalize the continuous time framework for score-based generative models from an underlying Brownian motion (BM) to an approximation of fractional Brownian motion (FBM).
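
For orientation, and in standard notation rather than the paper's: score-based models are usually defined through a forward SDE driven by Brownian motion, and fractional Brownian motion generalises its covariance structure via the Hurst index H, with H = 1/2 recovering ordinary BM.

    \[
      dX_t = f(X_t, t)\,dt + g(t)\,dW_t,
      \qquad
      \operatorname{Cov}\!\left(B^H_t, B^H_s\right)
        = \tfrac{1}{2}\left(|t|^{2H} + |s|^{2H} - |t - s|^{2H}\right),
      \quad H \in (0, 1).
    \]

For H = 1/2 the covariance reduces to min(t, s), i.e. standard Brownian motion with independent increments; other values of H make the increments correlated.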

Human-Centered Evaluation of XAI Methods

no code implementations 11 Oct 2023 Karam Dawoud, Wojciech Samek, Peter Eisert, Sebastian Lapuschkin, Sebastian Bosse

In the ever-evolving field of Artificial Intelligence, a critical challenge has been to decipher the decision-making processes within the so-called "black boxes" in deep learning.

Decision Making, Image Classification

Layer-wise Feedback Propagation

no code implementations 23 Aug 2023 Leander Weber, Jim Berend, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

In this paper, we present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors that utilizes explainability, specifically Layer-wise Relevance Propagation (LRP), to assign rewards to individual connections based on their respective contributions to solving a given task.
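
For context, the basic LRP-epsilon rule that redistributes a quantity R_k from a layer's outputs to its inputs (bias terms and the sign convention on epsilon omitted); as read from the abstract, LFP reuses this kind of decomposition to distribute reward rather than relevance, so the reward reading below is a paraphrase and not the paper's exact formulation.

    \[
      R_j \;=\; \sum_k \frac{a_j\, w_{jk}}{\epsilon + \sum_{j'} a_{j'}\, w_{j'k}}\, R_k ,
    \]

where a_j are the incoming activations, w_{jk} the connection weights, and R_k the relevance (in the LFP reading, the reward) arriving at output unit k; each connection receives its proportional share of what it contributed to k.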

Transfer Learning

From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space

1 code implementation 18 Aug 2023 Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions.

Decision Making

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

1 code implementation 12 Apr 2023 Daniel G. Krakowczyk, Paul Prasse, David R. Reich, Sebastian Lapuschkin, Tobias Scheffer, Lena A. Jäger

In this work, we employ established gaze event detection algorithms for fixations and saccades and quantitatively evaluate the impact of these events by determining their concept influence.
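
A compact sketch of a dispersion-threshold (I-DT style) fixation detector, the kind of established gaze event detection algorithm referred to here; the thresholds, sampling rate and synthetic data are illustrative assumptions, and this is not the specific detector used in the paper.

    # Dispersion-threshold fixation detection (I-DT style), illustrative only.
    import numpy as np

    def detect_fixations(x, y, sample_rate=1000, max_dispersion=1.0, min_duration=0.06):
        """Return (start, end) sample indices of fixations.

        x, y: gaze coordinates in degrees of visual angle.
        max_dispersion: maximum (x-range + y-range) inside a fixation, in degrees.
        min_duration: minimum fixation duration in seconds.
        """
        def dispersion(s, e):
            return (x[s:e].max() - x[s:e].min()) + (y[s:e].max() - y[s:e].min())

        min_len = int(min_duration * sample_rate)
        fixations, start, n = [], 0, len(x)
        while start + min_len <= n:
            end = start + min_len
            if dispersion(start, end) <= max_dispersion:
                # Grow the window for as long as the dispersion criterion holds.
                while end < n and dispersion(start, end + 1) <= max_dispersion:
                    end += 1
                fixations.append((start, end))
                start = end
            else:
                start += 1
        return fixations

    # Example: 0.5 s of noisy gaze samples around a single point -> one long fixation.
    rng = np.random.default_rng(0)
    x = rng.normal(10.0, 0.05, 500)
    y = rng.normal(5.0, 0.05, 500)
    print(detect_fixations(x, y))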

Event Detection, Explainable Artificial Intelligence (XAI)

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

1 code implementation 22 Mar 2023 Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin

To tackle this problem, we propose Reveal to Revise (R2R), a framework entailing the entire eXplainable Artificial Intelligence (XAI) life cycle, enabling practitioners to iteratively identify, mitigate, and (re-)evaluate spurious model behavior with a minimal amount of human interaction.

Age Estimation, Decision Making +2

Explainable AI for Time Series via Virtual Inspection Layers

no code implementations 11 Mar 2023 Johanna Vielhaben, Sebastian Lapuschkin, Grégoire Montavon, Wojciech Samek

In this way, we extend the applicability of a family of XAI methods to domains (e.g. speech) where the input is only interpretable after a transformation.
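
A minimal sketch of the general idea rather than the paper's method: a fixed, invertible transform (here a real DFT written as a plain linear map) is prepended to the model as a virtual inspection layer, so that an input-level attribution method (here gradient x input) yields one relevance score per frequency component instead of per time step. Model, transform and attribution method are illustrative assumptions.

    # Attributing a time-series model in the frequency domain via a virtual DFT layer.
    import torch

    T = 128
    # Real-valued DFT basis: rows are cosine/sine components, so the transform is linear.
    t = torch.arange(T, dtype=torch.float32)
    k = torch.arange(T // 2 + 1, dtype=torch.float32)
    cos_basis = torch.cos(2 * torch.pi * k[:, None] * t[None, :] / T)
    sin_basis = -torch.sin(2 * torch.pi * k[:, None] * t[None, :] / T)
    basis = torch.cat([cos_basis, sin_basis], dim=0)        # (2*(T//2+1), T)
    inverse = torch.linalg.pinv(basis)                      # maps coefficients back to the signal

    model = torch.nn.Sequential(torch.nn.Linear(T, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))

    x = torch.randn(1, T)                                   # time-domain input
    coeffs = (x @ basis.T).detach().requires_grad_(True)    # frequency-domain representation

    out = model(coeffs @ inverse.T)                         # virtual inspection layer in front
    out[0, out.argmax()].backward()

    # Gradient x input per cosine/sine coefficient, i.e. per frequency component.
    freq_relevance = (coeffs.grad * coeffs).detach()
    print(freq_relevance.shape)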

Explainable Artificial Intelligence (XAI) +3

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

1 code implementation 14 Feb 2023 Anna Hedström, Philine Bommer, Kristoffer K. Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

We address this problem through a meta-evaluation of different quality estimators in XAI, which we define as "the process of evaluating the evaluation method".

Explainable Artificial Intelligence (XAI)

Optimizing Explanations by Network Canonization and Hyperparameter Search

no code implementations 30 Nov 2022 Frederik Pahde, Galip Ümit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin

We further suggest an XAI evaluation framework with which we quantify and compare the effects of model canonization for various XAI methods in image classification tasks on the Pascal-VOC and ILSVRC2017 datasets, as well as for Visual Question Answering using CLEVR-XAI.

Explainable Artificial Intelligence (XAI), Image Classification +2

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

no code implementations CVPR 2023 Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

To address shortcomings of this test, we start by observing an experimental gap in the ranking of explanation methods between randomization-based sanity checks [1] and model output faithfulness measures (e.g. [25]).

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

no code implementations 21 Nov 2022 Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

Applying traditional post-hoc attribution methods to segmentation or object detection predictors offers only limited insights, as the obtained feature attribution maps at input level typically resemble the models' predicted segmentation mask or bounding box.

Explainable artificial intelligence, object-detection +2

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation

2 code implementations 7 Jun 2022 Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

In this work we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the "where" and "what" questions for individual predictions.
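
A heavily simplified sketch of the conditional, concept-wise attribution idea (using gradient x input restricted to a single channel of an intermediate layer instead of LRP); the model, layer and channel are illustrative assumptions, and the authors' reference implementation should be consulted for the actual CRP rules.

    # "Where" does a single concept (channel) matter for this prediction?
    import torch

    # Tiny CNN stand-in; in practice this would be a trained vision model.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(32, 10),
    ).eval()

    concept_layer = model[2]          # conv layer whose channels we treat as "concepts"
    concept_channel = 7               # the concept to condition on (illustrative)

    def restrict_to_channel(module, inp, out):
        # Mask the backward pass so only the chosen channel propagates relevance downwards.
        def mask(grad):
            masked = torch.zeros_like(grad)
            masked[:, concept_channel] = grad[:, concept_channel]
            return masked
        out.register_hook(mask)

    handle = concept_layer.register_forward_hook(restrict_to_channel)

    x = torch.randn(1, 3, 64, 64, requires_grad=True)
    out = model(x)
    out[0, out[0].argmax()].backward()
    handle.remove()

    # Input-space heatmap conditioned on the single concept channel.
    conditional_heatmap = (x.grad * x).sum(1).detach()
    print(conditional_heatmap.shape)  # torch.Size([1, 64, 64])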

Decision Making, Explainable artificial intelligence +1

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

no code implementations 4 May 2022 Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin

The ability to continuously process and retain new information like we do naturally as humans is a feat that is highly sought after when training neural networks.

Explainable Artificial Intelligence (XAI)

But that's not why: Inference adjustment by interactive prototype revision

no code implementations 18 Mar 2022 Michael Gerstenberger, Sebastian Lapuschkin, Peter Eisert, Sebastian Bosse

It shows that even correct classifications can rely on unreasonable prototypes that result from confounding variables in a dataset.

BIG-bench Machine Learning, Decision Making

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

no code implementations 15 Mar 2022 Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek

We conclude that while model improvement based on XAI can have significant beneficial effects even on complex and not easily quantifiable model properties, these methods need to be applied carefully, since their success can vary depending on a multitude of factors, such as the model and dataset used, or the employed explanation method.

Explainable Artificial Intelligence (XAI)

Measurably Stronger Explanation Reliability via Model Canonization

no code implementations 14 Feb 2022 Franz Motzkus, Leander Weber, Sebastian Lapuschkin

While rule-based attribution methods have proven useful for providing local explanations for Deep Neural Networks, explaining modern and more varied network architectures yields new challenges in generating trustworthy explanations, since the established rule sets might not be sufficient or applicable to novel network structures.
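
A minimal sketch of one common canonization step, folding a BatchNorm layer into the preceding convolution so that rule-based attribution only encounters canonical conv layers; this is the textbook fusion formula under default Conv2d settings, not necessarily the exact procedure of the paper.

    # Fold BatchNorm2d into the preceding Conv2d (a standard canonization step).
    import torch

    def fuse_conv_bn(conv: torch.nn.Conv2d, bn: torch.nn.BatchNorm2d) -> torch.nn.Conv2d:
        """Return a single Conv2d equivalent to conv followed by bn (eval mode)."""
        fused = torch.nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                                stride=conv.stride, padding=conv.padding, bias=True)
        with torch.no_grad():
            scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # gamma / sqrt(var + eps)
            fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
            bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
            fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
        return fused

    # The fused layer reproduces conv -> bn (eval mode) up to numerical precision.
    conv = torch.nn.Conv2d(3, 8, 3, padding=1)
    bn = torch.nn.BatchNorm2d(8).eval()
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 1.5)
    x = torch.randn(1, 3, 16, 16)
    print(torch.allclose(fuse_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5))  # True

Since the fused network computes exactly the same function, predictions are untouched while the established attribution rules for convolutional layers become applicable again.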

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond

1 code implementation NeurIPS 2023 Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

The evaluation of explanation methods is a research topic that has not yet been explored deeply. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness.
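
To illustrate the kind of check such a systematic comparison involves, here is a max-sensitivity style robustness score, written without the library so as not to pin Quantus' actual API: perturb the input slightly and measure how much the explanation changes. Model, explanation method and perturbation radius are illustrative assumptions.

    # Max-sensitivity-style robustness check of an explanation method.
    import torch

    def explanation(model, x, target):
        # Gradient x input as the explanation method under evaluation.
        x = x.clone().requires_grad_(True)
        model.zero_grad()
        model(x)[0, target].backward()
        return (x.grad * x).detach()

    def max_sensitivity(model, x, target, radius=0.05, n_samples=10):
        base = explanation(model, x, target)
        worst = 0.0
        for _ in range(n_samples):
            x_pert = x + radius * torch.randn_like(x)
            diff = explanation(model, x_pert, target) - base
            worst = max(worst, diff.norm().item() / base.norm().item())
        return worst                      # lower = more robust explanation

    model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4))
    x = torch.randn(1, 16)
    print(max_sensitivity(model, x, target=0))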

Explainable Artificial Intelligence (XAI)

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence

no code implementations 7 Feb 2022 Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

With a growing interest in understanding neural network prediction strategies, Concept Activation Vectors (CAVs) have emerged as a popular tool for modeling human-understandable concepts in the latent space.
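
For reference, the classic CAV construction: a linear probe is trained to separate activations of concept examples from random examples, and the probe's normal vector is taken as the concept direction. The synthetic activations below are illustrative; the paper's point is precisely that this direction can diverge from the concept's true signal direction, which the sketch does not address.

    # Classic Concept Activation Vector: normal of a linear probe in latent space.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    d = 64
    concept_acts = rng.normal(0.0, 1.0, (200, d)) + 2.0 * np.eye(d)[0]  # concept shifts dim 0
    random_acts = rng.normal(0.0, 1.0, (200, d))

    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * 200 + [0] * 200)

    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])               # the CAV

    # Concept sensitivity of a direction of interest, e.g. a latent-space gradient.
    latent_gradient = rng.normal(0.0, 1.0, d)
    print("concept sensitivity:", float(latent_gradient @ cav))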

TAG

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

no code implementations 9 Sep 2021 Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin

The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations.

Explainable Artificial Intelligence (XAI), Quantization

Explanation-Guided Training for Cross-Domain Few-Shot Classification

1 code implementation 17 Jul 2020 Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder

It leverages the explanation scores, obtained from existing explanation methods when applied to the predictions of FSC models, computed for intermediate feature maps of the models.

Classification, Cross-Domain Few-Shot +1

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution

1 code implementation arXiv 2020 Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder

From our experiments, we find that the SmoothTaylor approach together with adaptive noising is able to generate better quality saliency maps with less noise and higher sensitivity to the relevant points in the input space as compared to Integrated Gradients.
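
A compact sketch of the idea as read from the abstract: average first-order Taylor attributions, gradient x (input - root), over noisy root points, which sits between SmoothGrad and Integrated Gradients. The fixed noise scale below stands in for the paper's adaptive noising, and the model and target are illustrative assumptions.

    # SmoothTaylor-style attribution: average gradient x (x - z) over noisy roots z.
    import torch

    def smooth_taylor(model, x, target, sigma=0.3, n_samples=25):
        attribution = torch.zeros_like(x)
        for _ in range(n_samples):
            z = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            model.zero_grad()
            model(z)[0, target].backward()
            attribution += z.grad * (x - z.detach())   # first-order Taylor term around root z
        return attribution / n_samples

    model = torch.nn.Sequential(torch.nn.Linear(10, 20), torch.nn.ReLU(), torch.nn.Linear(20, 3))
    x = torch.randn(1, 10)
    print(smooth_taylor(model, x, target=1))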

Image Classification, Object Recognition

Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models

1 code implementation 4 Jan 2020 Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder

We develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms.

Hallucination, Image Captioning +2

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models

2 code implementations 22 Dec 2019 Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Based on a recent technique - Spectral Relevance Analysis - we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes where the machine learning models under study exhibit CH behavior, (b) several approaches denoted as Class Artifact Compensation (ClArC), which are able to effectively and significantly reduce a model's CH behavior.
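
A rough sketch of the Spectral Relevance Analysis step the paper builds on: per-sample relevance maps are downscaled, flattened and spectrally clustered, and small, tight clusters of near-identical maps flag candidate Clever Hans strategies. The random heatmaps, pooling and cluster count are illustrative stand-ins, not the paper's pipeline.

    # Spectral Relevance Analysis (SpRAy)-style clustering of relevance maps.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    n_samples, h, w = 300, 32, 32
    heatmaps = np.random.rand(n_samples, h, w)         # stand-in for per-sample LRP maps

    # Downscale (2x2 average pooling) and flatten each map to a feature vector.
    pooled = heatmaps.reshape(n_samples, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    features = pooled.reshape(n_samples, -1)

    labels = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                                random_state=0).fit_predict(features)

    # Small clusters of near-identical heatmaps are candidate Clever Hans strategies.
    for c in range(5):
        print(f"cluster {c}: {np.sum(labels == c)} samples")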

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning

1 code implementation 18 Dec 2019 Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.

Explainable Artificial Intelligence (XAI), Model Compression +2

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

1 code implementation 26 Feb 2019 Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior.

Explaining the Unique Nature of Individual Gait Patterns with Deep Learning

1 code implementation 13 Aug 2018 Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller, Wolfgang I. Schöllhorn

Machine learning (ML) techniques such as (deep) artificial neural networks (DNNs) are very successfully solving a plethora of tasks and provide new predictive models for complex physical, chemical, biological and social systems.

iNNvestigate neural networks!

1 code implementation 13 Aug 2018 Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans

The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementation for many analysis methods, including the reference implementation for PatternNet and PatternAttribution as well as for LRP-methods.
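
A short usage sketch of the library's documented pattern (create an analyzer by name, then analyze inputs). iNNvestigate targets Keras models; exact module paths and option names differ between library versions, so treat the specifics below as assumptions to check against the README.

    # Typical iNNvestigate usage pattern (verify names against your installed version).
    import numpy as np
    import tensorflow as tf
    import innvestigate
    import innvestigate.utils as iutils

    model = tf.keras.applications.VGG16(weights=None)   # any Keras image classifier
    model_wo_softmax = iutils.model_wo_softmax(model)    # analyzers expect pre-softmax scores

    # Pick an analysis method by name, e.g. LRP with the epsilon rule.
    analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_softmax)

    x = np.random.rand(1, 224, 224, 3).astype("float32")
    analysis = analyzer.analyze(x)                        # relevance map, same shape as x
    print(analysis.shape)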

Interpretable Machine Learning

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation

no code implementations 24 Nov 2016 Wojciech Samek, Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller

Complex nonlinear models such as deep neural networks (DNNs) have become an important tool for image classification, speech recognition, natural language processing, and many other fields of application.

General Classification, Image Classification +2
