Search Results for author: Leander Weber

Found 10 papers, 4 papers with code

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models

2 code implementations 22 Dec 2019 Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Based on a recent technique, Spectral Relevance Analysis, we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes where the machine learning models under study exhibit Clever Hans (CH) behavior, and (b) several approaches, denoted Class Artifact Compensation (ClArC), which are able to effectively and significantly reduce a model's CH behavior.
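The projective flavour of ClArC suppresses an estimated artifact direction in a layer's latent activations. Below is a minimal sketch of that projection idea, assuming the artifact direction has already been estimated (e.g. from concept activation vectors); all names are illustrative and this is not the authors' code.

```python
import numpy as np

def project_out_artifact(activations, artifact_direction):
    """Remove the component of each activation vector that lies along an
    estimated artifact direction (illustrative projective-ClArC-style step).

    activations: (n_samples, n_features) latent activations of some layer.
    artifact_direction: (n_features,) vector pointing towards the artifact.
    """
    v = artifact_direction / np.linalg.norm(artifact_direction)
    # Scalar projection of every sample onto the artifact direction.
    coeffs = activations @ v                      # shape (n_samples,)
    # Subtract the artifact component, keeping the remaining signal.
    return activations - np.outer(coeffs, v)

# Toy usage: 5 samples in a 3-dimensional latent space, artifact along the first axis.
acts = np.random.randn(5, 3)
cleaned = project_out_artifact(acts, np.array([1.0, 0.0, 0.0]))
print(np.allclose(cleaned[:, 0], 0.0))  # True: artifact component removed
```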

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution

1 code implementation arXiv 2020 Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder

From our experiments, we find that the SmoothTaylor approach, together with adaptive noising, generates better-quality saliency maps with less noise and higher sensitivity to the relevant points in the input space compared to Integrated Gradients.

Image Classification Object Recognition
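One reading of the approach above: SmoothTaylor averages first-order Taylor attributions (gradient times the difference to a noisy root point) over several noise-perturbed copies of the input, in the spirit of SmoothGrad. A hedged PyTorch sketch under that reading; the noise scale, sample count, and stand-in model are placeholders, not the paper's settings.

```python
import torch

def smoothtaylor_attribution(model, x, target, n_samples=20, noise_std=0.1):
    """Average first-order Taylor attributions over noisy root points
    (a SmoothGrad-style reading of the SmoothTaylor idea; illustrative only)."""
    x = x.detach()
    attribution = torch.zeros_like(x)
    for _ in range(n_samples):
        z = (x + noise_std * torch.randn_like(x)).requires_grad_(True)
        score = model(z)[..., target].sum()
        grad = torch.autograd.grad(score, z)[0]
        # First-order Taylor term around the noisy root point z.
        attribution += grad * (x - z.detach())
    return attribution / n_samples

# Toy usage with a linear stand-in model on a single 4-dimensional input.
model = torch.nn.Linear(4, 3)
x = torch.randn(1, 4)
heatmap = smoothtaylor_attribution(model, x, target=1)
print(heatmap.shape)  # torch.Size([1, 4])
```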

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence

no code implementations 7 Feb 2022 Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

With a growing interest in understanding neural network prediction strategies, Concept Activation Vectors (CAVs) have emerged as a popular tool for modeling human-understandable concepts in the latent space.

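For context, a CAV is conventionally obtained as the weight vector of a linear classifier that separates latent activations of concept examples from those of non-concept examples; the paper revisits exactly this construction. A minimal scikit-learn sketch of the conventional recipe (illustrative only, not the refinement the authors propose):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Classic CAV: weight vector of a linear classifier separating
    concept activations from non-concept activations (illustrative)."""
    X = np.concatenate([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

# Toy usage: concept samples shifted along one latent dimension.
rng = np.random.default_rng(0)
concept_acts = rng.normal(size=(50, 8)) + np.array([2.0] + [0.0] * 7)
random_acts = rng.normal(size=(50, 8))
print(compute_cav(concept_acts, random_acts).round(2))
```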

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond

1 code implementation NeurIPS 2023 Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M. -C. Höhne

The evaluation of explanation methods is a research topic that has not yet been explored deeply. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness.

Explainable Artificial Intelligence (XAI)
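A sketch of how the toolkit is typically driven, based on my reading of the Quantus documentation; the metric choice, parameters, and exact argument names are assumptions and should be checked against the current release.

```python
import numpy as np
import torch
import quantus

# Tiny stand-in model and data so the snippet is self-contained.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x_batch = np.random.rand(4, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=4)

# Attributions to evaluate; here simply produced by Quantus' built-in explainer.
a_batch = quantus.explain(model, x_batch, y_batch, method="Saliency")

# Robustness metric: how much do explanations change under small input perturbations?
metric = quantus.MaxSensitivity(nr_samples=10)
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=a_batch,
    explain_func=quantus.explain,              # re-explains the perturbed inputs
    explain_func_kwargs={"method": "Saliency"},
    device="cpu",
)
print(scores)  # one sensitivity score per sample (lower is better here)
```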

Measurably Stronger Explanation Reliability via Model Canonization

no code implementations 14 Feb 2022 Franz Motzkus, Leander Weber, Sebastian Lapuschkin

While rule-based attribution methods have proven useful for providing local explanations for Deep Neural Networks, explaining modern and more varied network architectures yields new challenges in generating trustworthy explanations, since the established rule sets might not be sufficient or applicable to novel network structures.
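Model canonization commonly means rewriting a network into a functionally equivalent form that established attribution rules can handle, for example folding a BatchNorm layer into the preceding linear or convolutional layer. A hedged sketch of that folding step for the Linear + BatchNorm1d case (not the paper's implementation):

```python
import torch

@torch.no_grad()
def fold_batchnorm(linear: torch.nn.Linear, bn: torch.nn.BatchNorm1d) -> torch.nn.Linear:
    """Fold a BatchNorm1d layer into the preceding Linear layer so that
    fused(x) == bn(linear(x)) in eval mode (illustrative canonization step)."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = torch.nn.Linear(linear.in_features, linear.out_features)
    fused.weight.copy_(linear.weight * scale[:, None])
    fused.bias.copy_((linear.bias - bn.running_mean) * scale + bn.bias)
    return fused

# Quick equivalence check on random data.
lin, bn = torch.nn.Linear(6, 4), torch.nn.BatchNorm1d(4)
bn(lin(torch.randn(32, 6)))   # one training-mode pass so running stats are non-trivial
bn.eval()
x = torch.randn(8, 6)
print(torch.allclose(bn(lin(x)), fold_batchnorm(lin, bn)(x), atol=1e-6))  # True
```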

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

no code implementations 15 Mar 2022 Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek

We conclude that while model improvement based on XAI can have significant beneficial effects even on complex model properties that are not easily quantifiable, these methods need to be applied carefully, since their success can vary depending on a multitude of factors, such as the model and dataset used, or the employed explanation method.

Explainable Artificial Intelligence (XAI)

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

no code implementations 4 May 2022 Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin

The ability to continuously process and retain new information, as humans naturally do, is a feat that is highly sought after when training neural networks.

Explainable Artificial Intelligence (XAI)

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

no code implementations CVPR 2023 Alexander Binder, Leander Weber, Sebastian Lapuschkin, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

To address shortcomings of this test, we start by observing an experimental gap in the ranking of explanation methods between randomization-based sanity checks [1] and model output faithfulness measures (e.g., [25]).

Layer-wise Feedback Propagation

no code implementations 23 Aug 2023 Leander Weber, Jim Berend, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

In this paper, we present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors that utilizes explainability, specifically Layer-wise Relevance Propagation (LRP), to assign rewards to individual connections based on their respective contributions to solving a given task.

Transfer Learning
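The building block such an approach rests on is LRP's proportional decomposition of an output-level quantity onto individual connections. Below is a toy sketch of an epsilon-stabilised decomposition for a single linear layer, with the distributed quantity playing the role of LFP's reward; this only illustrates the decomposition, not the authors' full LFP algorithm.

```python
import numpy as np

def lrp_epsilon_connections(x, W, b, relevance_out, eps=1e-6):
    """Distribute an output-level relevance/reward vector onto the individual
    connections of one linear layer, proportionally to each connection's
    contribution z_ij = W_ij * x_j (epsilon-stabilised LRP rule; illustrative)."""
    z = W * x[None, :]                            # (out, in) per-connection contributions
    z_total = z.sum(axis=1) + b                   # (out,) pre-activations
    z_total = z_total + eps * np.sign(z_total)    # stabiliser against division by ~0
    return z * (relevance_out / z_total)[:, None] # (out, in) per-connection share

# Toy usage: a 2x3 layer receiving a signed "reward" on each output.
x = np.array([1.0, 2.0, -1.0])
W = np.array([[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]])
b = np.array([0.1, -0.1])
R_conn = lrp_epsilon_connections(x, W, b, relevance_out=np.array([1.0, -0.5]))
print(R_conn)
```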

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test

1 code implementation 12 Jan 2024 Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina MC Höhne

The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function.

Explainable Artificial Intelligence (XAI)
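The test's core loop can be sketched as: progressively randomise the model's layers from the output side, re-compute the explanation, and measure how similar it stays to the original. A rough sketch with a plain gradient explanation and cosine similarity as placeholders (not the repaired variant the paper proposes):

```python
import torch

def gradient_explanation(model, x, target):
    """Simple saliency explanation used as a stand-in explanation function."""
    x = x.clone().requires_grad_(True)
    model(x)[..., target].sum().backward()
    return x.grad.detach()

def mprt_scores(model, x, target):
    """Top-down model parameter randomisation: randomise layers one at a time
    (from the output side, cumulatively) and track explanation similarity."""
    original = gradient_explanation(model, x, target)
    scores = []
    for module in reversed(list(model.children())):
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()        # randomise this layer's parameters
        perturbed = gradient_explanation(model, x, target)
        # Cosine similarity as a placeholder for SSIM / rank correlation.
        sim = torch.nn.functional.cosine_similarity(
            original.flatten(), perturbed.flatten(), dim=0)
        scores.append(sim.item())
    return scores  # low similarity after randomisation = explanation is parameter-sensitive

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
print(mprt_scores(model, torch.randn(1, 8), target=0))
```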
