Search Results for author: Anna Hedström

Found 6 papers, 5 papers with code

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test

1 code implementation • 12 Jan 2024 Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina MC Höhne

The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function.

Explainable artificial intelligence • Explainable Artificial Intelligence (XAI)
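For orientation, here is a minimal sketch of the evaluative principle the abstract describes: progressively randomise the model's parameters and check whether the explanations change. The top-down layer loop, the plain-gradient explanation function, and the Spearman similarity used below are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of the Model Parameter Randomisation Test (MPRT) idea:
# randomise model parameters layer by layer and measure how similar the
# perturbed explanations stay to the original ones.
import copy
import torch
from scipy.stats import spearmanr

def saliency(model, x, target):
    """Plain gradient-based attribution (illustrative explanation function)."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.detach().abs().flatten()

def mprt_sketch(model, x, target):
    """Yield (layer name, similarity to the original explanation) as layers are randomised."""
    original = saliency(model, x, target)
    randomised = copy.deepcopy(model)
    for name, module in reversed(list(randomised.named_modules())):
        if hasattr(module, "weight") and module.weight is not None:
            torch.nn.init.normal_(module.weight)  # randomise this layer's weights
            perturbed = saliency(randomised, x, target)
            rho, _ = spearmanr(original.numpy(), perturbed.numpy())
            yield name, rho  # low similarity => explanation is sensitive to the model parameters
```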

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science

1 code implementation • 1 Mar 2023 Philine Bommer, Marlene Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M. -C. Höhne

We find architecture-dependent performance differences regarding robustness, complexity and localization skills of different XAI methods, highlighting the necessity for research task-specific evaluation.

Explainable artificial intelligence • Explainable Artificial Intelligence (XAI)

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus

1 code implementation • 14 Feb 2023 Anna Hedström, Philine Bommer, Kristoffer K. Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M. -C. Höhne

We address this problem through a meta-evaluation of different quality estimators in XAI, which we define as ''the process of evaluating the evaluation method''.

Explainable Artificial Intelligence (XAI)
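As a rough illustration of "evaluating the evaluation method": a quality estimator can itself be scored by checking how its outputs behave under controlled perturbations of the setup. The specific perturbations and the resilience/reactivity summary below are illustrative assumptions, not MetaQuantus's exact tests.

```python
# Rough meta-evaluation sketch: a reliable quality estimator should barely move
# under a minor perturbation of the model and move clearly under a disruptive one.
import numpy as np

def meta_evaluate(quality_estimator, model, perturb_minor, perturb_disruptive, data, n_repeats=5):
    """quality_estimator(model, data) -> float score for an explanation method."""
    baseline = [quality_estimator(model, data) for _ in range(n_repeats)]
    minor = [quality_estimator(perturb_minor(model), data) for _ in range(n_repeats)]
    disruptive = [quality_estimator(perturb_disruptive(model), data) for _ in range(n_repeats)]
    return {
        "resilience": abs(np.mean(minor) - np.mean(baseline)),      # should be small
        "reactivity": abs(np.mean(disruptive) - np.mean(baseline)),  # should be large
    }
```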

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond

1 code implementation • NeurIPS 2023 Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M. -C. Höhne

The evaluation of explanation methods is a research topic that has not yet been explored deeply; however, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness.

Explainable Artificial Intelligence (XAI)
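A hedged usage sketch of evaluating one explanation method with a Quantus metric. The metric name (MaxSensitivity), the constructor and call arguments, and the built-in `quantus.explain` wrapper (which relies on captum) follow the library's typical usage pattern, but should be verified against the installed Quantus version; the toy model and random data are stand-ins.

```python
import numpy as np
import torch
import quantus

# Toy stand-ins for a real model and dataset (MNIST-shaped).
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 4, kernel_size=3),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(4 * 26 * 26, 10),
).eval()
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# Robustness metric: how much do explanations change under small input perturbations?
metric = quantus.MaxSensitivity(nr_samples=10)
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=None,                                # let the metric compute attributions
    explain_func=quantus.explain,                # wrapper around common attribution methods
    explain_func_kwargs={"method": "Saliency"},
    device="cpu",
)
print(np.mean(scores))  # lower = explanations are more robust
```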

NoiseGrad: Enhancing Explanations by Introducing Stochasticity to Model Weights

2 code implementations • 18 Jun 2021 Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M. -C. Höhne

For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding noise to the input space and averaging the explanations of the noisy inputs.

Decision Making
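The excerpt above contrasts noise in the input space (SmoothGrad) with noise added to the model weights. Below is a minimal sketch of both sides; the noise scales, sample counts, multiplicative weight noise, and the plain-gradient explanation function are illustrative assumptions in the spirit of the paper, not its exact formulation.

```python
# Minimal sketch contrasting input-space noise (SmoothGrad) with a
# weight-space-noise variant in the spirit of NoiseGrad.
import copy
import torch

def gradient_explanation(model, x, target):
    """Plain gradient attribution for a single input x of shape (1, C, H, W)."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.detach()

def smoothgrad(model, x, target, n=25, sigma=0.1):
    """Average explanations over noisy copies of the input."""
    return torch.stack([
        gradient_explanation(model, x + sigma * torch.randn_like(x), target)
        for _ in range(n)
    ]).mean(dim=0)

def weight_noise_explanation(model, x, target, n=25, sigma=0.1):
    """Average explanations over noisy copies of the model weights."""
    grads = []
    for _ in range(n):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.mul_(1 + sigma * torch.randn_like(p))  # multiplicative weight noise
        grads.append(gradient_explanation(noisy, x, target))
    return torch.stack(grads).mean(dim=0)
```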
