1 code implementation • 9 Apr 2024 • Maximilian Dreyer, Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin
The field of mechanistic interpretability aims to study the role of individual neurons in deep neural networks.
1 code implementation • 12 Jan 2024 • Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
The R-OMS score enables a systematic comparison of occlusion strategies and resolves the disagreement problem by grouping consistent pixel flipping (PF) rankings.
no code implementations • 27 Apr 2023 • Annika Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben
Deep neural networks are a promising tool for Audio Event Classification.
no code implementations • 11 Mar 2023 • Johanna Vielhaben, Sebastian Lapuschkin, Grégoire Montavon, Wojciech Samek
In this way, we extend the applicability of a family of XAI methods to domains (e.g. speech) where the input is only interpretable after a transformation.
1 code implementation • 27 Jan 2023 • Johanna Vielhaben, Stefan Blücher, Nils Strodthoff
For the trustworthy application of XAI, in particular for high-stakes decisions, a more global model understanding is required.
no code implementations • 11 Mar 2022 • Johanna Vielhaben, Stefan Blücher, Nils Strodthoff
We empirically demonstrate the soundness of the proposed Sparse Subspace Clustering for Concept Discovery (SSCCD) method for a variety of different image classification tasks.
1 code implementation • 16 Apr 2021 • Johanna Vielhaben, Markus Wenzel, Eva Weicken, Nils Strodthoff
Predicting the binding of viral peptides to the major histocompatibility complex with machine learning can potentially extend the computational immunology toolkit for vaccine development, and serve as a key component in the fight against a pandemic.
2 code implementations • 26 Feb 2021 • Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
PredDiff is a model-agnostic, local attribution method that is firmly rooted in probability theory.
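The idea behind prediction-difference attributions of this kind can be sketched as follows: a feature's relevance is the change in the model output when that feature is marginalized out, i.e. imputed from its data marginal. The model, data, and imputation scheme below are illustrative toys, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # toy linear model: feature 0 matters, feature 1 is irrelevant
    return 2.0 * X[:, 0] + 0.0 * X[:, 1]

# background data used to impute features from their marginal distribution
X_background = rng.normal(size=(1000, 2))

def relevance(x, feature, n_samples=500):
    """Relevance = f(x) - E[f(x with `feature` imputed from the marginal)]."""
    x = np.asarray(x, dtype=float)
    imputed = np.tile(x, (n_samples, 1))
    imputed[:, feature] = rng.choice(X_background[:, feature], size=n_samples)
    return model(x[None, :])[0] - model(imputed).mean()

x = np.array([1.5, -0.7])
r0 = relevance(x, 0)  # large: feature 0 drives the prediction
r1 = relevance(x, 1)  # zero: imputing feature 1 never changes the output
```

Because the method only queries the model's predictions, it is model-agnostic; the expectation over imputations is what ties it to probability theory.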
1 code implementation • 18 Dec 2020 • Johanna Vielhaben, Nils Strodthoff
Generative neural samplers offer a complementary approach to Monte Carlo methods for problems in statistical physics and quantum field theory.
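The general mechanism such samplers rely on can be sketched with a toy example: draw configurations from an imperfect, "learned" proposal distribution q, then correct its bias by importance reweighting with w = p_target / q, which yields an asymptotically unbiased estimate of an observable. Here q is an explicit two-state distribution standing in for a trained network; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

energies = np.array([0.0, 1.0])          # toy two-state system
beta = 1.0
boltzmann = np.exp(-beta * energies)     # unnormalized target weights
p_target = boltzmann / boltzmann.sum()   # exact Boltzmann distribution

q = np.array([0.7, 0.3])                 # imperfect "learned" sampler
O = np.array([0.0, 1.0])                 # observable: occupation of state 1

states = rng.choice(2, size=200_000, p=q)
w = boltzmann[states] / q[states]        # unnormalized importance weights
estimate = (w * O[states]).sum() / w.sum()  # self-normalized estimator

exact = (p_target * O).sum()
# estimate converges to exact as the sample size grows
```

The self-normalized form avoids needing the partition function, which is exactly what makes the estimator usable when only unnormalized Boltzmann weights are available.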
2 code implementations • 9 Jul 2018 • Sören Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, Wojciech Samek
Explainable Artificial Intelligence (XAI) aims to understand how models perform feature selection and derive their classification decisions.