Search Results for author: Benedict Clark

Found 6 papers, 3 papers with code

Explainable AI needs formal notions of explanation correctness

no code implementations • 22 Sep 2024 • Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki

This will lead to notions of explanation correctness that can be theoretically verified and objective metrics of explanation performance that can be assessed using ground-truth data.

Attribute • Explainable artificial intelligence • +2

GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations

1 code implementation • 17 Jun 2024 • Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe

This gives rise to ground-truth 'world explanations' for gender classification tasks, enabling the objective evaluation of the correctness of XAI methods.

Benchmarking • Explainable artificial intelligence • +2

EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods

no code implementations • 20 May 2024 • Benedict Clark, Rick Wilming, Artur Dox, Paul Eschenbach, Sami Hached, Daniel Jin Wodke, Michias Taye Zewdie, Uladzislau Bruila, Marta Oliveira, Hjalmar Schulz, Luca Matteo Cornils, Danny Panknin, Ahcène Boubekki, Stefan Haufe

The evolving landscape of explainable artificial intelligence (XAI) aims to improve the interpretability of intricate machine learning (ML) models, yet it faces challenges in formalisation and empirical validation, since explanation is an inherently unsupervised process.

Benchmarking • Explainable artificial intelligence • +1

XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance

1 code implementation • 22 Jun 2023 • Benedict Clark, Rick Wilming, Stefan Haufe

The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that seek to make the decisions of complex machine learning (ML) models 'understandable' to humans, for example by attributing 'importance' scores to input features.

Edge Detection • Explainable artificial intelligence • +2

Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables

no code implementations • 2 Jun 2023 • Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe

In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'.

Attribute • Binary Classification • +2
