Search Results for author: Frederik Pahde

Found 10 papers, 2 papers with code

Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification

no code implementations • 16 Apr 2024 • Christian Tinauer, Anna Damulina, Maximilian Sackl, Martin Soellradl, Reduan Achtibat, Maximilian Dreyer, Frederik Pahde, Sebastian Lapuschkin, Reinhold Schmidt, Stefan Ropele, Wojciech Samek, Christian Langkammer

Using quantitative R2* maps, we separated Alzheimer's patients (n=117) from normal controls (n=219) with a convolutional neural network, systematically investigated the learned concepts using Concept Relevance Propagation, and compared the results to a conventional region-of-interest-based analysis.
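As a rough, hypothetical illustration of the pipeline described above, the sketch below builds a toy 3D CNN over an R2*-like volume and scores per-channel relevance with a simple gradient-times-activation proxy; this stands in for Concept Relevance Propagation and is not the authors' model. All architecture choices, shapes, and names are assumptions.

```python
# Hypothetical sketch only: toy 3D CNN over an R2*-like volume plus a
# simplified channel-relevance score (gradient x activation) as a stand-in
# for Concept Relevance Propagation. Shapes and names are illustrative.
import torch
import torch.nn as nn

class ToyR2StarCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.classifier = nn.Linear(32, 2)   # patient vs. normal control

    def forward(self, x):
        self.acts = self.features(x)         # keep activations for analysis
        self.acts.retain_grad()
        return self.classifier(self.pool(self.acts).flatten(1))

model = ToyR2StarCNN().eval()
x = torch.randn(1, 1, 32, 32, 32)            # one R2* map (toy resolution)
model(x)[0, 0].backward()                    # backprop the "patient" logit

# Per-channel relevance proxy: sum of gradient x activation over the volume.
relevance = (model.acts * model.acts.grad).sum(dim=(0, 2, 3, 4))
print("most relevant channels:", relevance.argsort(descending=True)[:5])
```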

From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space

1 code implementation • 18 Aug 2023 • Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions.

Decision Making
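The bias-unlearning idea in this abstract can be illustrated with a short, hedged PyTorch snippet: assuming the network splits into a `backbone` and a `head`, and a bias direction `bias_cav` in latent space is already known (e.g. a concept activation vector), the gradient of the target logit is penalized along that direction. This is an illustrative sketch, not the authors' implementation.

```python
# Hedged sketch of a latent-space gradient penalty: all names (backbone,
# head, bias_cav, lam) are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def debias_step(backbone, head, x, y, bias_cav, optimizer, lam=1.0):
    """Cross-entropy plus a penalty on the target logit's sensitivity
    to a known bias direction (CAV) in latent space."""
    a = backbone(x)                                  # latent activations (B, D)
    logits = head(a)
    ce = F.cross_entropy(logits, y)
    target_logit = logits.gather(1, y[:, None]).sum()
    grad_a = torch.autograd.grad(target_logit, a, create_graph=True)[0]
    penalty = (grad_a @ bias_cav).pow(2).mean()      # sensitivity along the CAV
    loss = ce + lam * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a random stand-in bias direction.
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64), torch.nn.ReLU())
head = torch.nn.Linear(64, 10)
opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()))
cav = torch.randn(64); cav /= cav.norm()
debias_step(backbone, head, torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)), cav, opt)
```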

Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models

1 code implementation • 22 Mar 2023 • Frederik Pahde, Maximilian Dreyer, Wojciech Samek, Sebastian Lapuschkin

To tackle this problem, we propose Reveal to Revise (R2R), a framework entailing the entire eXplainable Artificial Intelligence (XAI) life cycle, enabling practitioners to iteratively identify, mitigate, and (re-)evaluate spurious model behavior with a minimal amount of human interaction.

Age Estimation • Decision Making +2
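The R2R life cycle can be summarized as a loop; the skeleton below is a hypothetical outline with stub helpers (`find_spurious_concepts`, `revise`, `evaluate`) standing in for the paper's actual components, which combine XAI-based concept discovery, model correction, and artifact-aware evaluation.

```python
# Hypothetical skeleton of a reveal/revise/re-evaluate loop. The three
# helpers are stubs, not the paper's tooling.

def find_spurious_concepts(model, data):
    return []                     # stub: plug in XAI-based concept discovery

def revise(model, data, concepts):
    return model                  # stub: plug in e.g. bias-penalized fine-tuning

def evaluate(model, data):
    return {"accuracy": None}     # stub: plug in artifact-controlled evaluation

def reveal_to_revise(model, data, max_rounds=3):
    for r in range(max_rounds):
        concepts = find_spurious_concepts(model, data)
        if not concepts:          # stop once no spurious behavior is revealed
            break
        model = revise(model, data, concepts)
        print(f"round {r}:", evaluate(model, data))
    return model
```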

Optimizing Explanations by Network Canonization and Hyperparameter Search

no code implementations • 30 Nov 2022 • Frederik Pahde, Galip Ümit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin

We further suggest an XAI evaluation framework with which we quantify and compare the effects of model canonization for various XAI methods in image classification tasks on the Pascal-VOC and ILSVRC2017 datasets, as well as for Visual Question Answering using CLEVR-XAI.

Explainable Artificial Intelligence (XAI) • Image Classification +2
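Model canonization commonly includes restructuring steps such as fusing batch normalization into the preceding convolution, which leaves the function unchanged but simplifies the graph for LRP-style explanation methods. The sketch below shows this standard Conv2d+BatchNorm2d fusion in eval mode; it is a minimal example, not the paper's full canonization procedure.

```python
# Standard BN-into-Conv fusion (eval mode), a common canonization step.
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale[:, None, None, None]
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

# Sanity check: the fused conv matches conv -> bn on random input.
conv, bn = nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8)
bn.running_mean.uniform_(-1, 1); bn.running_var.uniform_(0.5, 2)
bn.eval()
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)
```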

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence

no code implementations • 7 Feb 2022 • Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

With a growing interest in understanding neural network prediction strategies, Concept Activation Vectors (CAVs) have emerged as a popular tool for modeling human-understandable concepts in the latent space.

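To make the CAV setting concrete, the hedged sketch below contrasts a classic "filter" CAV (the normal vector of a linear probe separating concept from non-concept activations) with a simple pattern-style direction (the difference of class means); the divergence between such directions is the kind of issue this line of work examines. The data is synthetic and the estimators are illustrative, not the authors' exact method.

```python
# Illustrative only: filter CAV (probe weights) vs. a pattern-style
# direction (class-mean difference) on synthetic latent activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 64))          # latent activations
labels = rng.integers(0, 2, size=200)      # 1 = concept present
acts[labels == 1] += 0.5                   # inject a concept direction

# Filter CAV: normal vector of a linear probe separating the two sets.
probe = LogisticRegression(max_iter=1000).fit(acts, labels)
filter_cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

# Pattern-style CAV: for binary labels, proportional to the difference
# of class means in activation space.
pattern_cav = acts[labels == 1].mean(0) - acts[labels == 0].mean(0)
pattern_cav /= np.linalg.norm(pattern_cav)

print("cosine(filter, pattern):", filter_cav @ pattern_cav)
```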

Multimodal Prototypical Networks for Few-shot Learning

no code implementations • 17 Nov 2020 • Frederik Pahde, Mihai Puscas, Tassilo Klein, Moin Nabi

Although they provide exceptional results for many computer vision tasks, state-of-the-art deep learning algorithms struggle catastrophically in low-data scenarios.

Classification • Few-Shot Learning +1
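For intuition, here is a minimal unimodal prototypical-network classification step: class prototypes are mean support embeddings, and queries are scored by negative squared Euclidean distance. The paper's multimodal variant additionally builds prototypes from a second modality; this toy sketch with random embeddings only shows the basic mechanism.

```python
# Minimal prototypical-network step (unimodal, toy embeddings).
import torch

def prototypical_logits(support, support_y, query, n_classes):
    # support: (N, D) embeddings, support_y: (N,) labels, query: (M, D)
    protos = torch.stack([support[support_y == c].mean(0)
                          for c in range(n_classes)])        # (C, D)
    return -torch.cdist(query, protos) ** 2                  # (M, C) logits

support = torch.randn(10, 32)                 # 5-way, 2-shot embeddings
support_y = torch.arange(5).repeat_interleave(2)
query = torch.randn(3, 32)
pred = prototypical_logits(support, support_y, query, 5).argmax(1)
```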

Low-Shot Learning from Imaginary 3D Model

no code implementations • 4 Jan 2019 • Frederik Pahde, Mihai Puscas, Jannik Wolff, Tassilo Klein, Nicu Sebe, Moin Nabi

Since the advent of deep learning, neural networks have demonstrated remarkable results in many visual recognition tasks, constantly pushing the limits.

Few-Shot Learning

Cross-modal Hallucination for Few-shot Fine-grained Recognition

no code implementations • 13 Jun 2018 • Frederik Pahde, Patrick Jähnichen, Tassilo Klein, Moin Nabi

State-of-the-art deep learning algorithms generally require large amounts of data for model training.

Hallucination
