Search Results for author: Sebastian Lapuschkin

Found 23 papers, 13 papers with code

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

no code implementations • 4 May 2022 • Sami Ede, Serop Baghdadlian, Leander Weber, An Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin

The ability to continuously process and retain new information, as we humans do naturally, is highly sought after when training neural networks.

But that's not why: Inference adjustment by interactive prototype deselection

no code implementations • 18 Mar 2022 • Michael Gerstenberger, Sebastian Lapuschkin, Peter Eisert, Sebastian Bosse

It shows that even correct classifications can rely on unreasonable prototypes that result from confounding variables in a dataset.

Decision Making

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

no code implementations • 15 Mar 2022 • Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek

We conclude that while model improvement based on XAI can have significant beneficial effects even on complex and not easily quantifiable model properties, these methods need to be applied carefully, since their success can vary depending on a multitude of factors, such as the model and dataset used or the employed explanation method.

Explainable artificial intelligence

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations

1 code implementation • 14 Feb 2022 • Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne

The evaluation of explanation methods is a research topic that has not yet been explored deeply. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness.
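
As a rough illustration of how such a toolkit is used, here is a sketch following Quantus' documented call pattern; the model and data are toy stand-ins, and argument names may differ between library versions:

```python
import numpy as np
import torch.nn as nn
import quantus

# Toy classifier and data, stand-ins for a real model and dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# Score the robustness of saliency explanations with one of the toolkit's
# metrics; quantus.explain is the built-in explanation wrapper.
metric = quantus.MaxSensitivity(nr_samples=10)
scores = metric(model=model, x_batch=x_batch, y_batch=y_batch,
                explain_func=quantus.explain,
                explain_func_kwargs={"method": "Saliency"})
```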

Measurably Stronger Explanation Reliability via Model Canonization

no code implementations • 14 Feb 2022 • Franz Motzkus, Leander Weber, Sebastian Lapuschkin

While rule-based attribution methods have proven useful for providing local explanations for deep neural networks, explaining modern and more varied network architectures poses new challenges for generating trustworthy explanations, since the established rule sets might not be sufficient for, or applicable to, novel network structures.
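
A typical canonization step is folding batch normalization into the preceding convolution so that rule-based attribution sees a single linear layer. A minimal PyTorch sketch of this idea (not the paper's code):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_batchnorm(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d, yielding an
    equivalent plain convolution that established LRP rules can handle."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused
```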

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging

no code implementations • 7 Feb 2022 • Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

We demonstrate that pattern-based artifact modeling has beneficial effects on the application of CAVs as a means of removing the influence of confounding features from models via the ClArC framework.

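For orientation, here is a generic sketch of the filter-based CAV and linear projection that the ClArC framework builds on; the paper's actual contribution, pattern-based direction estimation, is not reproduced here, and all names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(acts_artifact: np.ndarray, acts_clean: np.ndarray) -> np.ndarray:
    """Fit a linear separator between artifact and clean activations
    (each of shape (n, d)); its normal is a filter-based CAV."""
    X = np.vstack([acts_artifact, acts_clean])
    y = np.hstack([np.ones(len(acts_artifact)), np.zeros(len(acts_clean))])
    w = LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()
    return w / np.linalg.norm(w)

def suppress_concept(acts: np.ndarray, cav: np.ndarray) -> np.ndarray:
    """Project the concept direction out of the activations."""
    return acts - np.outer(acts @ cav, cav)
```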

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

no code implementations • 9 Sep 2021 • Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin

The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations.

Quantization

Explanation-Guided Training for Cross-Domain Few-Shot Classification

1 code implementation • 17 Jul 2020 • Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder

It leverages explanation scores, obtained by applying existing explanation methods to the predictions of FSC models, computed for the models' intermediate feature maps.

Classification • Cross-Domain Few-Shot +1
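
A loose, self-contained sketch of the general idea: intermediate feature maps are reweighted by explanation-derived scores so that training emphasizes the units an explanation method marks as relevant. The relevance stand-in below is purely illustrative, not the paper's LRP-based scores:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5))
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 32, 32)              # toy batch
y = torch.randint(0, 5, (4,))

feats = backbone(x)                        # intermediate feature maps (B, C, H, W)
with torch.no_grad():
    # Stand-in for per-unit explanation scores (e.g., LRP relevance).
    relevance = feats.clamp(min=0) / (feats.abs().amax(dim=(2, 3), keepdim=True) + 1e-9)
logits = head(feats * (1 + relevance))     # emphasize relevant features
criterion(logits, y).backward()
```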

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution

1 code implementation • arXiv 2020 • Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder

From our experiments, we find that the SmoothTaylor approach, together with adaptive noising, generates better-quality saliency maps with less noise and higher sensitivity to the relevant points in the input space than Integrated Gradients.

Image Classification • Object Recognition
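
The core of SmoothTaylor can be sketched as averaging first-order Taylor attributions taken at Gaussian-perturbed root points around the input; adaptive noising tunes the noise scale, which is kept fixed here for simplicity (a sketch, not the authors' implementation):

```python
import torch

def smooth_taylor(model, x, target, n_samples=32, noise_scale=0.3):
    """Average grad f(z) * (x - z) over noisy root points z around x."""
    attributions = torch.zeros_like(x)
    for _ in range(n_samples):
        z = (x + noise_scale * torch.randn_like(x)).requires_grad_(True)
        score = model(z)[:, target].sum()        # logit of the target class
        grad = torch.autograd.grad(score, z)[0]
        attributions += grad * (x - z).detach()
    return attributions / n_samples
```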

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

no code implementations • 17 Mar 2020 • Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller

With the broader and highly successful usage of machine learning in industry and the sciences, there has been a growing demand for Explainable AI.

Interpretable Machine Learning

Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models

1 code implementation • 4 Jan 2020 • Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder

We develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms.

Image Captioning
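
For reference, here is the standard LRP-epsilon rule for a single linear layer, which such variants build on (a generic sketch, not the captioning-specific rules developed in the paper):

```python
import torch

def lrp_epsilon_linear(layer: torch.nn.Linear, a, relevance, eps=1e-6):
    """Redistribute output relevance to the inputs of a linear layer."""
    z = a @ layer.weight.t() + layer.bias   # forward pre-activations
    z = z + eps * torch.sign(z)             # epsilon stabilizer against small z
    s = relevance / z                       # relevance per unit of output
    c = s @ layer.weight                    # redistribute through the weights
    return a * c                            # relevance of the inputs
```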

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models

2 code implementations • 22 Dec 2019 • Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Based on a recent technique, Spectral Relevance Analysis (SpRAy), we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes in which the machine learning models under study exhibit Clever Hans (CH) behavior, and (b) several approaches, denoted Class Artifact Compensation (ClArC), that effectively and significantly reduce a model's CH behavior.
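
Spectral Relevance Analysis can be sketched as clustering the per-sample explanation heatmaps of one class, so that groups of inputs the model treats alike, a telltale sign of Clever Hans strategies, surface as clusters (illustrative sketch; input names are hypothetical):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spray(heatmaps: np.ndarray, n_clusters: int = 5) -> np.ndarray:
    """Cluster flattened relevance maps of shape (n_samples, H*W); compact,
    unusual clusters point at samples worth manual inspection."""
    sc = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors")
    return sc.fit_predict(heatmaps)
```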

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning

1 code implementation • 18 Dec 2019 • Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.

Model Compression • Network Pruning +1
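
The criterion can be sketched as ranking filters by their relevance accumulated over a reference set and zeroing the least relevant ones (a simplified sketch; the relevance scores in the usage line are random stand-ins):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_by_relevance(conv: nn.Conv2d, channel_relevance: torch.Tensor,
                       keep_ratio: float = 0.5) -> None:
    """Zero the output filters with the lowest accumulated relevance
    (one score per output channel, e.g. summed LRP relevance)."""
    n_prune = int(conv.out_channels * (1 - keep_ratio))
    idx = torch.argsort(channel_relevance)[:n_prune]   # least relevant filters
    conv.weight[idx] = 0
    if conv.bias is not None:
        conv.bias[idx] = 0

conv = nn.Conv2d(3, 16, 3)
prune_by_relevance(conv, torch.rand(16))   # stand-in relevance scores
```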

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

1 code implementation • 26 Feb 2019 • Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior.

Explaining the Unique Nature of Individual Gait Patterns with Deep Learning

1 code implementation • 13 Aug 2018 • Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller, Wolfgang I. Schöllhorn

Machine learning (ML) techniques such as (deep) artificial neural networks (DNNs) are very successfully solving a plethora of tasks and providing new predictive models for complex physical, chemical, biological, and social systems.

iNNvestigate neural networks!

1 code implementation • 13 Aug 2018 • Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans

The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementations for many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.

Interpretable Machine Learning
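
Basic usage follows the library's documented pattern, roughly as below; exact imports depend on the iNNvestigate version (1.x builds on standalone Keras, 2.x on tf.keras):

```python
import numpy as np
import innvestigate
from tensorflow import keras

model = keras.applications.VGG16(weights=None)        # any Keras classifier
model = innvestigate.utils.model_wo_softmax(model)    # analyzers expect pre-softmax outputs
analyzer = innvestigate.create_analyzer("lrp.epsilon", model)

x = np.random.rand(1, 224, 224, 3).astype(np.float32)  # placeholder input batch
attribution = analyzer.analyze(x)                       # attribution map, same shape as x
```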

Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals

2 code implementations • 9 Jul 2018 • Sören Becker, Marcel Ackermann, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek

Interpretability of deep neural networks is a recently emerging area of machine learning research targeting a better understanding of how models perform feature selection and derive their classification decisions.

Audio Classification • Decision Making +1

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation

no code implementations • 24 Nov 2016 • Wojciech Samek, Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller

Complex nonlinear models such as deep neural networks (DNNs) have become an important tool for image classification, speech recognition, natural language processing, and many other fields of application.

General Classification • Image Classification +1
