1 code implementation • 7 Dec 2024 • Kristoffer Wickstrøm, Marina Marie-Claire Höhne, Anna Hedström
The lack of ground truth explanation labels is a fundamental challenge for quantitative evaluation in explainable artificial intelligence (XAI).
no code implementations • 4 Nov 2024 • Rémi Kazmierczak, Steve Azzolin, Eloïse Berthier, Anna Hedström, Patricia Delhomme, Nicolas Bousquet, Goran Frehse, Massimiliano Mancini, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi
Our first key contribution is a human evaluation of XAI explanations on four diverse datasets (COCO, Pascal Parts, Cats Dogs Cars, and MonumAI), which constitutes the first large-scale benchmark dataset for XAI, with annotations at both the image and concept levels.
1 code implementation • 9 Oct 2024 • Dilyara Bareeva, Galip Ümit Yolcu, Anna Hedström, Niklas Schmolenski, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
In recent years, training data attribution (TDA) methods have emerged as a promising direction for the interpretability of neural networks.
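To make the idea of training data attribution concrete, the sketch below scores training examples by how similar their loss gradients are to the gradient at a test example (a simple TracIn-style heuristic with a single checkpoint). It is a generic illustration under assumed model and data interfaces, not the method of the paper above.

```python
import torch
import torch.nn.functional as F

def grad_vector(model, x, y):
    """Flattened gradient of the loss at a single example w.r.t. all parameters."""
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def attribute_training_data(model, train_pairs, x_test, y_test):
    """Rank training examples by gradient similarity to one test prediction."""
    g_test = grad_vector(model, x_test, y_test)
    # Higher dot product = the training point pushes the parameters in a similar direction.
    return [torch.dot(grad_vector(model, x, y), g_test).item() for x, y in train_pairs]

# Hypothetical usage with a tiny classifier and a list of (x, y) tensor pairs:
# model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
# scores = attribute_training_data(model, train_pairs, x_test, y_test)
```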
1 code implementation • 30 May 2024 • Laura Kopf, Philine Lou Bommer, Anna Hedström, Sebastian Lapuschkin, Marina M.-C. Höhne, Kirill Bykov
A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within their latent representations.
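As a rough illustration of what it can mean to probe a concept in a network's latent representation, the sketch below collects the inputs that most strongly activate a chosen unit of a hidden layer. This is a generic inspection heuristic under assumed interfaces, not the approach of the paper above.

```python
import torch

def top_activating_inputs(model, layer, unit, data_loader, k=9):
    """Return the k inputs that most strongly activate `unit` inside `layer`."""
    batch_acts = []

    def hook(_module, _inputs, output):
        act = output[:, unit]                 # pick one channel / neuron
        if act.dim() > 1:                     # conv feature map: average spatially
            act = act.flatten(1).mean(dim=1)
        batch_acts.append(act.detach())

    handle = layer.register_forward_hook(hook)
    all_x, all_acts = [], []
    with torch.no_grad():
        for x, _ in data_loader:
            batch_acts.clear()
            model(x)
            all_acts.append(batch_acts[0])
            all_x.append(x)
    handle.remove()

    acts, xs = torch.cat(all_acts), torch.cat(all_x)
    top = torch.topk(acts, k=min(k, acts.numel())).indices
    return xs[top]                            # candidate exemplars of the unit's concept
```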
1 code implementation • 3 May 2024 • Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina Höhne
The Model Parameter Randomisation Test (MPRT) is widely recognised in the eXplainable Artificial Intelligence (XAI) community for its fundamental evaluative criterion: explanations should be sensitive to the parameters of the model they seek to explain (a minimal sketch of this check follows below).
Explainable Artificial Intelligence (XAI)
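Under this criterion, randomising the model layer by layer should visibly change the explanation. Below is a minimal sketch of such a check in plain PyTorch; the gradient-based explanation function and the correlation-based similarity score are illustrative choices, not the specific procedure proposed in the papers above.

```python
import copy
import torch

def saliency(model, x, target):
    """Plain input-gradient attribution (illustrative explanation function)."""
    x = x.detach().clone().requires_grad_(True)
    model(x.unsqueeze(0))[0, target].backward()
    return x.grad.detach().flatten()

def mprt_curve(model, x, target):
    """Correlation between original and post-randomisation explanations, layer by layer."""
    model.eval()
    reference = saliency(model, x, target)
    randomized = copy.deepcopy(model)
    curve = []
    for name, module in reversed(list(randomized.named_modules())):
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()                      # randomise this layer's weights
            expl = saliency(randomized, x, target)
            rho = torch.corrcoef(torch.stack([reference, expl]))[0, 1].item()
            curve.append((name, rho))                      # low correlation = parameter-sensitive
    return curve
```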
1 code implementation • 12 Jan 2024 • Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina MC Höhne
The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function.
Explainable Artificial Intelligence (XAI)
no code implementations • 13 Dec 2023 • Shanghua Liu, Anna Hedström, Deepak Hanike Basavegowda, Cornelia Weltzien, Marina M.-C. Höhne
Grasslands are known for their high biodiversity and ability to provide multiple ecosystem services.
1 code implementation • 1 Mar 2023 • Philine Bommer, Marlene Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne
We find architecture-dependent performance differences in the robustness, complexity, and localization abilities of different XAI methods, highlighting the need for research-task-specific evaluation (two of these properties are sketched below).
Explainable Artificial Intelligence (XAI)
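To make two of these evaluation properties concrete, the sketch below measures robustness as the average change in attribution under small input perturbations and complexity as the entropy of the normalised attribution. Both are common simplified definitions with assumed interfaces, not the exact metrics used in the paper above.

```python
import torch

def robustness(explain, x, target, noise_std=0.01, n_samples=10):
    """Relative attribution change under small Gaussian input noise (lower = more robust)."""
    a_ref = explain(x, target)
    diffs = [
        torch.norm(explain(x + noise_std * torch.randn_like(x), target) - a_ref) / torch.norm(a_ref)
        for _ in range(n_samples)
    ]
    return torch.stack(diffs).mean().item()

def complexity(attribution, eps=1e-12):
    """Shannon entropy of the normalised absolute attribution (lower = sparser explanation)."""
    p = attribution.abs().flatten()
    p = p / (p.sum() + eps)
    return -(p * (p + eps).log()).sum().item()
```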
1 code implementation • 14 Feb 2023 • Anna Hedström, Philine Bommer, Kristoffer K. Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
We address this problem through a meta-evaluation of different quality estimators in XAI, which we define as "the process of evaluating the evaluation method".
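One way to read that definition in code: a trustworthy quality estimator should barely change its scores when the explanations are only slightly perturbed, and should react clearly when they are deliberately corrupted. The sketch below illustrates this intuition only; the estimator interface and the perturbations are assumptions, not the protocol of the paper above.

```python
import numpy as np

def meta_evaluate(quality_estimator, attributions, noise_std=1e-3, seed=0):
    """Probe a quality estimator: stability under tiny noise vs. reaction to shuffled attributions.

    `quality_estimator(list_of_attributions) -> list_of_scores` is an assumed interface.
    """
    rng = np.random.default_rng(seed)
    base = np.asarray(quality_estimator(attributions))

    # Minor perturbation: a reliable estimator's scores should barely move.
    noisy = [a + noise_std * rng.standard_normal(a.shape) for a in attributions]
    resilience_gap = np.abs(np.asarray(quality_estimator(noisy)) - base).mean()

    # Disruptive perturbation: the scores should change clearly.
    shuffled = [rng.permutation(a.ravel()).reshape(a.shape) for a in attributions]
    reactivity_gap = np.abs(np.asarray(quality_estimator(shuffled)) - base).mean()

    return {"resilience_gap": resilience_gap, "reactivity_gap": reactivity_gap}
```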
1 code implementation • NeurIPS 2023 • Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
The evaluation of explanation methods is a research topic that has not yet been explored in depth. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness.
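As one concrete way to compare explanation methods, the sketch below implements a pixel-flipping-style faithfulness check: mask the features an explanation ranks as most relevant and track how quickly the model's confidence drops. The masking value and step size are illustrative assumptions, and this is just one possible criterion, not the full evaluation suite of the paper above.

```python
import torch

def pixel_flipping_curve(model, x, target, attribution, step=0.05, baseline=0.0):
    """Target confidence as the most-relevant input features are progressively masked."""
    model.eval()
    order = attribution.flatten().argsort(descending=True)     # most relevant first
    flat = x.detach().flatten().clone()
    n_per_step = max(1, int(step * flat.numel()))
    curve = []
    with torch.no_grad():
        for start in range(0, flat.numel(), n_per_step):
            flat[order[start:start + n_per_step]] = baseline   # mask the next chunk
            probs = torch.softmax(model(flat.view_as(x).unsqueeze(0)), dim=1)
            curve.append(probs[0, target].item())
    return curve    # a faster drop suggests a more faithful explanation
```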
2 code implementations • 18 Jun 2021 • Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne
For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding noise to the input space and averaging the explanations of the noisy inputs.
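A minimal sketch of the SmoothGrad idea described above, assuming a differentiable PyTorch classifier; the noise level and sample count are illustrative defaults, and practical implementations typically scale the noise to the input's value range.

```python
import torch

def smoothgrad(model, x, target, n_samples=25, noise_std=0.1):
    """Average input-gradient attributions over noisy copies of the input."""
    model.eval()
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x.detach() + noise_std * torch.randn_like(x)).requires_grad_(True)
        model(noisy.unsqueeze(0))[0, target].backward()    # gradient of the target logit
        grads += noisy.grad
    return grads / n_samples                               # smoother attribution map
```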