no code implementations • 31 Oct 2024 • Marco Morik, Ali Hashemi, Klaus-Robert Müller, Stefan Haufe, Shinichi Nakajima
Traditional methods predominantly rely on manually crafted priors, missing the flexibility of data-driven learning, while recent deep learning approaches focus on end-to-end learning, typically using the physical information of the forward model only for generating training data.
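The division of labour described here can be made concrete with a toy sketch: a hypothetical random leadfield plays the role of the physical forward model and is used only to generate synthetic training pairs, from which an inverse mapping is then learned purely from data. All dimensions, the sparsity pattern, and the ridge-regression "learner" are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 sensors, 20 candidate sources.
n_sensors, n_sources, n_samples = 8, 20, 1000

# A random leadfield stands in for the physical forward model.
L = rng.standard_normal((n_sensors, n_sources))

# Sparse source activity: two active sources per sample.
X = np.zeros((n_samples, n_sources))
for i in range(n_samples):
    active = rng.choice(n_sources, size=2, replace=False)
    X[i, active] = rng.standard_normal(2)

# The forward model is used only here, to generate training data.
Y = X @ L.T + 0.05 * rng.standard_normal((n_samples, n_sensors))

# The "learned inverse" is just ridge regression from sensors to sources.
lam = 1e-2
W = np.linalg.solve(Y.T @ Y + lam * np.eye(n_sensors), Y.T @ X)
X_hat = Y @ W
```

The learned map never sees the leadfield directly; the physics enters only through the simulated pairs, which is the end-to-end pattern the abstract contrasts against.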
no code implementations • 22 Sep 2024 • Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
This will lead to notions of explanation correctness that can be theoretically verified and objective metrics of explanation performance that can be assessed using ground-truth data.
1 code implementation • 17 Jun 2024 • Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe
This gives rise to ground-truth 'world explanations' for gender classification tasks, enabling the objective evaluation of the correctness of XAI methods.
no code implementations • 20 May 2024 • Benedict Clark, Rick Wilming, Artur Dox, Paul Eschenbach, Sami Hached, Daniel Jin Wodke, Michias Taye Zewdie, Uladzislau Bruila, Marta Oliveira, Hjalmar Schulz, Luca Matteo Cornils, Danny Panknin, Ahcène Boubekki, Stefan Haufe
The evolving landscape of explainable artificial intelligence (XAI) aims to improve the interpretability of intricate machine learning (ML) models, yet, being an inherently unsupervised process, it faces challenges in formalisation and empirical validation.
1 code implementation • 22 Jun 2023 • Benedict Clark, Rick Wilming, Stefan Haufe
The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that seek to make the decisions of complex machine learning (ML) methods 'understandable' to humans, for example by attributing 'importance' scores to input features.
1 code implementation • 21 Jun 2023 • Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task.
Explainable Artificial Intelligence (XAI) +2
no code implementations • 2 Jun 2023 • Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe
In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'.
1 code implementation • 9 Dec 2021 • Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
In recent years, many 'explainable artificial intelligence' (XAI) approaches have been developed, but these have not always been objectively evaluated.
Explainable Artificial Intelligence (XAI)
1 code implementation • 14 Nov 2021 • Rick Wilming, Céline Budding, Klaus-Robert Müller, Stefan Haufe
It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables).
Explainable Artificial Intelligence (XAI) • Feature Importance
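The suppressor effect can be reproduced in a few lines: a feature that is pure distractor noise, with no statistical association to the target, nevertheless receives a large weight in the optimal linear model, because the model needs it to cancel the distractor in the informative feature. The setup below (one signal-plus-distractor feature, one distractor-only feature) is the standard textbook construction, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

z = rng.standard_normal(n)   # prediction target
d = rng.standard_normal(n)   # distractor, independent of z

x1 = z + d                   # feature carrying signal plus distractor
x2 = d                       # suppressor: zero association with z
X = np.column_stack([x1, x2])

# Ordinary least squares recovers w = (1, -1) exactly, since z = x1 - x2.
w, *_ = np.linalg.lstsq(X, z, rcond=None)

# Yet x2 alone is uncorrelated with the target.
corr_x2_z = np.corrcoef(x2, z)[0, 1]
print(w, corr_x2_z)  # w ~ [1, -1], correlation ~ 0
```

Any saliency method that reports the model weights as "importance" would thus highlight x2 despite its lack of statistical association with the target.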
1 code implementation • NeurIPS 2021 • Ali Hashemi, Yijing Gao, Chang Cai, Sanjay Ghosh, Klaus-Robert Müller, Srikantan S. Nagarajan, Stefan Haufe
Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models.
1 code implementation • 1 Jan 2021 • Ali Hashemi, Chang Cai, Klaus-Robert Müller, Srikantan S. Nagarajan, Stefan Haufe
We consider hierarchical Bayesian (type-II maximum likelihood) regression models for observations with latent variables for source and noise, where parameters of priors for source and noise terms need to be estimated jointly from data.
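A minimal sketch of type-II maximum likelihood in its simplest special case: scalar prior precision (source term) and scalar noise precision, estimated jointly by EM on the marginal likelihood of a Bayesian linear regression. The hierarchical models in the paper are richer (structured source and noise covariances); this only illustrates the joint-estimation principle, with illustrative dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 5

Phi = rng.standard_normal((N, M))      # design matrix
w_true = rng.standard_normal(M)        # latent "source" weights
noise_std = 0.5
t = Phi @ w_true + noise_std * rng.standard_normal(N)

# Type-II ML: estimate prior precision alpha and noise precision beta
# jointly by EM, marginalising out the latent weights.
alpha, beta = 1.0, 1.0
for _ in range(100):
    # E-step: Gaussian posterior over the weights.
    Sigma = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m = beta * Sigma @ Phi.T @ t
    # M-step: closed-form updates of both hyperparameters.
    alpha = M / (m @ m + np.trace(Sigma))
    beta = N / (np.sum((t - Phi @ m) ** 2) + np.trace(Phi @ Sigma @ Phi.T))

noise_var_hat = 1.0 / beta  # should land near noise_std**2 = 0.25
```

The key point mirrored from the abstract: the prior (source) and noise parameters are not set by hand but estimated together from the data.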
1 code implementation • NeurIPS 2019 • Tao Tu, John Paisley, Stefan Haufe, Paul Sajda
In this study, we develop a linear state-space model to infer the effective connectivity in a distributed brain network based on simultaneously recorded EEG and fMRI data.
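A stripped-down illustration of the state-space idea, assuming fully observed states: simulate a first-order linear dynamical system with a known "connectivity" matrix and recover that matrix by least squares. The network size and noise level are illustrative; the actual model in the paper additionally handles latent states and the EEG/fMRI observation equations.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 5000, 3

# Hypothetical effective-connectivity matrix of a 3-node network
# (upper-triangular, spectral radius < 1, so the system is stable).
A_true = np.array([[0.5, 0.2, 0.0],
                   [0.0, 0.4, 0.3],
                   [0.0, 0.0, 0.6]])

x = np.zeros((T, k))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(k)

# Least-squares estimate of the state-transition (connectivity) matrix.
X_past, X_next = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_past, X_next, rcond=None)[0].T
```

With enough samples the estimate converges to the generating matrix, which is the basic identifiability fact the full latent-state model builds on.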
1 code implementation • 26 Jan 2018 • Lucas C. Parra, Stefan Haufe, Jacek P. Dmochowski
How does one find dimensions in multivariate data that are reliably expressed across repetitions?
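One common answer, in the spirit of correlated components analysis, is a generalized eigenvalue problem that maximizes covariance between repetitions relative to pooled within-repetition covariance. A hypothetical two-repetition sketch (shared signal plus independent noise per repetition; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 2000, 5

s = rng.standard_normal(T)   # signal repeated across both presentations
a = rng.standard_normal(d)   # fixed mixing pattern into d channels
X1 = np.outer(s, a) + rng.standard_normal((T, d))  # repetition 1
X2 = np.outer(s, a) + rng.standard_normal((T, d))  # repetition 2

# Between-repetition vs pooled within-repetition covariance.
Rw = (X1.T @ X1 + X2.T @ X2) / (2 * T)
Rb = (X1.T @ X2 + X2.T @ X1) / (2 * T)

# Most reliable dimension: top generalized eigenvector of (Rb, Rw).
evals, evecs = np.linalg.eig(np.linalg.solve(Rw, Rb))
w = np.real(evecs[:, np.argmax(np.real(evals))])

# Projecting onto w recovers the repeated signal.
y = (X1 + X2) @ w / 2
reliability = abs(np.corrcoef(y, s)[0, 1])
```

The recovered projection correlates strongly with the shared signal even though no single channel does, which is exactly the "reliable dimension" the question asks for.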
no code implementations • 25 Sep 2015 • Irene Winkler, Danny Panknin, Daniel Bartz, Klaus-Robert Müller, Stefan Haufe
Inferring causal interactions from observed data is a challenging problem, especially in the presence of measurement noise.
no code implementations • NeurIPS 2008 • Stefan Haufe, Vadim V. Nikulin, Andreas Ziehe, Klaus-Robert Müller, Guido Nolte
We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX).
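As a loose illustration of the sparse-expansion idea, in a scalar 1-D setting rather than the vector fields S-FLEX actually addresses: represent the field as a combination of Gaussian basis functions and recover a sparse coefficient vector by L1-penalised least squares via ISTA. Grid size, basis widths, the penalty weight, and the solver are all illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_basis = 100, 30

# Gaussian basis functions evaluated on a 1-D grid
# (a scalar stand-in for the basis *fields* of the vector case).
grid = np.linspace(0, 1, n_points)
centers = np.linspace(0, 1, n_basis)
B = np.exp(-((grid[:, None] - centers[None, :]) ** 2) / (2 * 0.05 ** 2))

# Ground truth: only three basis functions are active.
c_true = np.zeros(n_basis)
c_true[[4, 15, 25]] = [1.0, -0.8, 0.5]
f = B @ c_true + 0.01 * rng.standard_normal(n_points)

# Sparse recovery by ISTA: gradient step plus soft thresholding.
lam = 0.05
step = 1.0 / np.linalg.norm(B, 2) ** 2   # 1 / Lipschitz constant
c = np.zeros(n_basis)
for _ in range(2000):
    c = c - step * (B.T @ (B @ c - f))
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)

rel_err = np.linalg.norm(B @ c - f) / np.linalg.norm(f)
```

The L1 penalty is what enforces the "sparse" in sparse basis field expansions: most coefficients are driven to (near) zero while the field is still reconstructed accurately.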