Search Results for author: Stefan Haufe

Found 15 papers, 9 papers with code

Enhancing Brain Source Reconstruction through Physics-Informed 3D Neural Networks

no code implementations • 31 Oct 2024 Marco Morik, Ali Hashemi, Klaus-Robert Müller, Stefan Haufe, Shinichi Nakajima

Traditional methods predominantly rely on manually crafted priors, missing the flexibility of data-driven learning, while recent deep learning approaches focus on end-to-end learning, typically using the physical information of the forward model only for generating training data.

EEG

Explainable AI needs formal notions of explanation correctness

no code implementations • 22 Sep 2024 Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki

This will lead to notions of explanation correctness that can be theoretically verified and objective metrics of explanation performance that can be assessed using ground-truth data.

Attribute Explainable artificial intelligence +2

GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations

1 code implementation • 17 Jun 2024 Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe

This gives rise to ground-truth 'world explanations' for gender classification tasks, enabling the objective evaluation of the correctness of XAI methods.

Benchmarking Explainable artificial intelligence +2

EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods

no code implementations • 20 May 2024 Benedict Clark, Rick Wilming, Artur Dox, Paul Eschenbach, Sami Hached, Daniel Jin Wodke, Michias Taye Zewdie, Uladzislau Bruila, Marta Oliveira, Hjalmar Schulz, Luca Matteo Cornils, Danny Panknin, Ahcène Boubekki, Stefan Haufe

The evolving landscape of explainable artificial intelligence (XAI) aims to improve the interpretability of intricate machine learning (ML) models, yet faces challenges in formalisation and empirical validation, as explanation is an inherently unsupervised process.

Benchmarking Explainable artificial intelligence +1

XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance

1 code implementation • 22 Jun 2023 Benedict Clark, Rick Wilming, Stefan Haufe

The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that seek to make the decisions of complex machine learning (ML) methods 'understandable' to humans, for example by attributing 'importance' scores to input features.

Edge Detection Explainable artificial intelligence +2

Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables

no code implementations • 2 Jun 2023 Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe

In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'.

Attribute Binary Classification +2

Evaluating saliency methods on artificial data with different background types

1 code implementation • 9 Dec 2021 Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe

In recent years, many 'explainable artificial intelligence' (XAI) approaches have been developed, but these have not always been objectively evaluated.

Explainable Artificial Intelligence (XAI)

Scrutinizing XAI using linear ground-truth data with suppressor variables

1 code implementation • 14 Nov 2021 Rick Wilming, Céline Budding, Klaus-Robert Müller, Stefan Haufe

It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables).

Explainable Artificial Intelligence (XAI) Feature Importance
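The suppressor effect studied in the two papers above can be reproduced in a few lines. The following sketch (my own illustration, not the paper's benchmark code) builds the classic two-feature example: feature 2 carries only a distractor, has no statistical association with the target, yet receives a large weight in the optimal linear model, which is why weight-based saliency flags it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.standard_normal(n)          # target signal
d = rng.standard_normal(n)          # distractor
X = np.column_stack([z + d, d])     # feature 1 carries signal + distractor;
                                    # feature 2 (the suppressor) carries only the distractor

# Ordinary least squares: the optimal model computes x1 - x2 = z exactly
w, *_ = np.linalg.lstsq(X, z, rcond=None)

# The suppressor has (near-)zero correlation with the target ...
r = np.corrcoef(X[:, 1], z)[0, 1]
print(f"corr(x2, z) = {r:+.3f}")    # ≈ 0
# ... yet receives a large model weight, w ≈ (1, -1)
print(f"weights     = {w.round(2)}")
```

Any saliency method that reflects model weights will attribute importance to the suppressor even though it is statistically unrelated to the target.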

Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models

1 code implementation • 1 Jan 2021 Ali Hashemi, Chang Cai, Klaus-Robert Müller, Srikantan Nagarajan, Stefan Haufe

We consider hierarchical Bayesian (type-II maximum likelihood) regression models for observations with latent variables for source and noise, where parameters of priors for source and noise terms need to be estimated jointly from data.

EEG regression
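The paper above learns a full-structure noise covariance; as a much-reduced sketch of the underlying type-II maximum likelihood idea, the following (my own toy, with scalar rather than full-structure variances) jointly estimates the prior precision alpha and noise precision beta of a Gaussian linear model by maximizing the marginal likelihood with standard fixed-point updates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
sigma_true = 0.5
y = X @ w_true + sigma_true * rng.standard_normal(n)

# Type-II ML (evidence maximization) for y = Xw + e,
# w ~ N(0, alpha^{-1} I), e ~ N(0, beta^{-1} I):
# source and noise hyperparameters are estimated jointly.
alpha, beta = 1.0, 1.0
eigvals = np.linalg.eigvalsh(X.T @ X)
for _ in range(100):
    A = alpha * np.eye(d) + beta * X.T @ X        # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)        # posterior mean
    gamma = np.sum(beta * eigvals / (alpha + beta * eigvals))
    alpha = gamma / (m @ m)                       # effective dof / weight norm
    beta = (n - gamma) / np.sum((y - X @ m) ** 2)

print("estimated noise std:", (1 / beta) ** 0.5)  # should be close to 0.5
```

The paper generalizes this by replacing the scalar beta with a full noise covariance matrix learned alongside the source prior.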

A state-space model for inferring effective connectivity of latent neural dynamics from simultaneous EEG/fMRI

1 code implementation NeurIPS 2019 Tao Tu, John Paisley, Stefan Haufe, Paul Sajda

In this study, we develop a linear state-space model to infer the effective connectivity in a distributed brain network based on simultaneously recorded EEG and fMRI data.

EEG
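A generic linear-Gaussian state-space model of the kind used above can be sketched as follows (a toy with hypothetical dimensions, not the paper's EEG/fMRI model): off-diagonal entries of the transition matrix play the role of directed effective connectivity, and a Kalman filter recovers the latent states from noisy observations.

```python
import numpy as np

rng = np.random.default_rng(2)
T, dx, dy = 500, 2, 4

# State-space model: x_t = A x_{t-1} + w_t,  y_t = C x_t + v_t.
A = np.array([[0.8, 0.3],
              [0.0, 0.7]])          # latent state 2 drives state 1
C = rng.standard_normal((dy, dx))   # observation (lead-field-like) matrix
Q, R = 0.1 * np.eye(dx), 0.5 * np.eye(dy)

x = np.zeros((T, dx))
y = np.zeros((T, dy))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(dx), Q)
    y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(dy), R)

# Kalman filter: infer the latent states from observations alone.
m = np.zeros(dx)
P = np.eye(dx)
x_hat = np.zeros((T, dx))
for t in range(T):
    m, P = A @ m, A @ P @ A.T + Q                  # predict
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)                 # Kalman gain
    m = m + K @ (y[t] - C @ m)                     # update
    P = (np.eye(dx) - K @ C) @ P
    x_hat[t] = m

print("filtering MSE:", np.mean((x_hat - x) ** 2))
```

In the paper, the model is fit to simultaneous EEG and fMRI rather than simulated, and the estimated transition structure is the object of interest.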

Correlated Components Analysis - Extracting Reliable Dimensions in Multivariate Data

1 code implementation26 Jan 2018 Lucas C. Parra, Stefan Haufe, Jacek P. Dmochowski

How does one find dimensions in multivariate data that are reliably expressed across repetitions?
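Correlated Components Analysis answers this by solving a generalized eigenvalue problem between the between-repetition and within-repetition covariances. A minimal sketch with two synthetic repetitions sharing one planted component (my own illustration, assuming the standard two-repetition formulation):

```python
import numpy as np

rng = np.random.default_rng(3)
T, D = 1000, 6

# Two 'repetitions' sharing one reliable component mixed into D channels.
s = 2.0 * np.sin(np.linspace(0, 20 * np.pi, T))     # shared signal
mix = rng.standard_normal(D)                        # mixing into channels
X1 = np.outer(s, mix) + 0.5 * rng.standard_normal((T, D))
X2 = np.outer(s, mix) + 0.5 * rng.standard_normal((T, D))

# Maximize correlation across repetitions by solving R_b w = lambda R_w w,
# with between- and within-repetition covariances R_b and R_w.
X1c, X2c = X1 - X1.mean(0), X2 - X2.mean(0)
Rb = X1c.T @ X2c + X2c.T @ X1c
Rw = X1c.T @ X1c + X2c.T @ X2c
vals, vecs = np.linalg.eig(np.linalg.solve(Rw, Rb))
w = vecs[:, np.argmax(vals.real)].real              # top correlated component

r = np.corrcoef(X1c @ w, X2c @ w)[0, 1]
print(f"inter-repetition correlation of top component: {r:.2f}")
```

The top component recovers the planted shared signal; weaker components capture progressively less reliable dimensions.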

Validity of time reversal for testing Granger causality

no code implementations • 25 Sep 2015 Irene Winkler, Danny Panknin, Daniel Bartz, Klaus-Robert Müller, Stefan Haufe

Inferring causal interactions from observed data is a challenging problem, especially in the presence of measurement noise.
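The time-reversal idea can be illustrated on a toy bivariate VAR(1) with unidirectional coupling (a simplified sketch, not the paper's estimator): Granger causality is measured as the log ratio of residual variances of restricted vs. full autoregressive models, and on time-reversed data the dominant direction should flip.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5000

# Bivariate VAR(1) with unidirectional coupling x -> y.
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def granger(src, dst):
    """GC src -> dst: log residual-variance ratio of the restricted
    (dst's own past) vs full (dst's and src's past) model."""
    tgt = dst[1:]
    def resid_var(Z):
        beta = np.linalg.lstsq(Z, tgt, rcond=None)[0]
        return np.var(tgt - Z @ beta)
    full = np.column_stack([dst[:-1], src[:-1]])
    restricted = dst[:-1, None]
    return np.log(resid_var(restricted) / resid_var(full))

print("forward data:  x->y %.3f, y->x %.3f" % (granger(x, y), granger(y, x)))
# Time-reversed data: the dominant direction should flip.
xr, yr = x[::-1], y[::-1]
print("reversed data: x->y %.3f, y->x %.3f" % (granger(xr, yr), granger(yr, xr)))
```

A causal estimate that does not flip under time reversal is suspect, which is the diagnostic whose validity the paper analyzes.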

