Search Results for author: Eoin Delaney

Found 6 papers, 2 papers with code

Advancing Post Hoc Case Based Explanation with Feature Highlighting

no code implementations • 6 Nov 2023 • Eoin Kenny, Eoin Delaney, Mark Keane

Explainable AI (XAI) has been proposed as a valuable tool to assist in downstream tasks involving human and AI collaboration.

Tasks: valid

Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals

no code implementations • 16 Mar 2023 • Greta Warren, Mark T. Keane, Christophe Gueret, Eoin Delaney

Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation.

Tasks: counterfactual, Explainable Artificial Intelligence (XAI)

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ

1 code implementation • 16 Dec 2022 • Eoin Delaney, Arjun Pakrashi, Derek Greene, Mark T. Keane

Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems due to their psychological validity, flexibility across problem domains and proposed legal compliance.

Tasks: counterfactual, Explainable Artificial Intelligence (XAI)

Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions

no code implementations • 20 Jul 2021 • Eoin Delaney, Derek Greene, Mark T. Keane

Whilst an abundance of techniques have recently been proposed to generate counterfactual explanations for the predictions of opaque black-box systems, markedly less attention has been paid to exploring the uncertainty of these generated explanations.

Tasks: counterfactual, Medical Diagnosis, +1 more

Instance-based Counterfactual Explanations for Time Series Classification

1 code implementation • 28 Sep 2020 • Eoin Delaney, Derek Greene, Mark T. Keane

In recent years, there has been a rapidly expanding focus on explaining the predictions made by black-box AI systems that handle image and tabular data.

Tasks: Classification, counterfactual, +6 more
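
For readers unfamiliar with the term, the sketch below illustrates the general idea of an instance-based counterfactual: retrieving the nearest training instance that the model assigns to a different class (a "nearest unlike neighbour"). This is a minimal, hypothetical example and not the algorithm from the paper above; the function name find_nun_counterfactual, the variable names, and the toy data are all assumptions made here for illustration.

# Illustrative sketch only: generic nearest-unlike-neighbour retrieval,
# not the method of the paper listed above.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def find_nun_counterfactual(query, X_train, y_train, clf):
    # Predict the class of the query instance.
    pred = clf.predict(query.reshape(1, -1))[0]
    # Restrict the search to training instances with a different label.
    candidates = X_train[y_train != pred]
    # Return the candidate closest to the query (Euclidean distance).
    dists = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argmin(dists)]

# Toy usage: random "time series" treated as flat vectors.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 50))
y_train = (X_train.mean(axis=1) > 0).astype(int)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
counterfactual = find_nun_counterfactual(X_train[0], X_train, y_train, clf)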
