Search Results for author: Eoin M. Kenny

Found 7 papers, 2 papers with code

The Utility of "Even if..." Semifactual Explanation to Optimise Positive Outcomes

1 code implementation • 29 Oct 2023 • Eoin M. Kenny, Weipeng Huang

When users receive either a positive or negative outcome from an automated system, Explainable AI (XAI) has almost exclusively focused on how to mutate negative outcomes into positive ones by crossing a decision boundary using counterfactuals (e.g., "If you earn 2k more, we will accept your loan application").
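The counterfactual idea in the abstract above can be illustrated with a minimal sketch (not the paper's method): given a toy "loan" classifier, search for the smallest feature change that crosses the decision boundary. The classifier, threshold, and step size here are all hypothetical.

```python
# Hedged sketch of counterfactual search: greedily increase one feature
# until a toy linear "loan" classifier flips from reject to accept.
# accept_loan and its threshold are illustrative assumptions, not the paper's model.

def accept_loan(income_k: float, threshold: float = 50.0) -> bool:
    """Toy black-box classifier: accept if income (in thousands) >= threshold."""
    return income_k >= threshold

def counterfactual_income(income_k: float, step: float = 1.0, max_steps: int = 100):
    """Smallest income increase (in units of `step`) that crosses the boundary."""
    delta = 0.0
    for _ in range(max_steps):
        if accept_loan(income_k + delta):
            return delta  # this delta yields the counterfactual outcome
        delta += step
    return None  # no counterfactual found within the search budget

# An applicant earning 48k needs "2k more" to be accepted.
print(counterfactual_income(48.0))  # -> 2.0
```

A real counterfactual generator would search over many features at once and penalise implausible changes; this sketch only shows the boundary-crossing intuition.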

Advancing Nearest Neighbor Explanation-by-Example with Critical Classification Regions

no code implementations • 29 Sep 2021 • Eoin M. Kenny, Eoin D. Delaney, Mark T. Keane

There is an increasing body of evidence suggesting that post-hoc explanation-by-example with nearest neighbors is a promising solution for the eXplainable Artificial Intelligence (XAI) problem.

Classification, Explainable Artificial Intelligence +1

Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification

no code implementations • COLING 2020 • Linyi Yang, Eoin M. Kenny, Tin Lok James Ng, Yi Yang, Barry Smyth, Ruihai Dong

Corporate mergers and acquisitions (M&A) account for billions of dollars of investment globally every year, and offer an interesting and challenging domain for artificial intelligence.

Counterfactual, Explainable Artificial Intelligence (XAI) +3

Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier

no code implementations • 10 Sep 2020 • Courtney Ford, Eoin M. Kenny, Mark T. Keane

This paper reports two experiments (N=349) on the impact of post-hoc explanations by example and error rates on people's perceptions of a black-box classifier.

Explainable Artificial Intelligence (XAI)

On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning

1 code implementation • 10 Sep 2020 • Eoin M. Kenny, Mark T. Keane

There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs.

Counterfactual, Explainable Artificial Intelligence (XAI)

The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning

no code implementations • 20 May 2019 • Mark T. Keane, Eoin M. Kenny

The notion of twin systems is proposed to address the eXplainable AI (XAI) problem, where an uninterpretable black-box system is mapped to a white-box 'twin' that is more interpretable.

Explainable Artificial Intelligence (XAI)

How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins

no code implementations • 17 May 2019 • Mark T. Keane, Eoin M. Kenny

This paper surveys an approach to the XAI problem, using post-hoc explanation by example, that hinges on twinning Artificial Neural Networks (ANNs) with Case-Based Reasoning (CBR) systems, so-called ANN-CBR twins.

Explainable Artificial Intelligence (XAI)
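The ANN-CBR twinning surveyed above pairs a black-box model with a case base, so a prediction can be explained by the most similar training case in the model's feature space. A minimal sketch, with a hypothetical stand-in for the network's feature extractor:

```python
# Hedged sketch of explanation-by-example via an ANN-CBR "twin".
# extract_feature stands in for a network's penultimate-layer activations;
# the function, case base, and labels are illustrative assumptions.

def extract_feature(x: float) -> float:
    """Hypothetical stand-in for a network's learned feature mapping."""
    return x * 2.0 + 1.0

def nearest_case(query, case_base):
    """Explain a prediction by the training case nearest in feature space."""
    qf = extract_feature(query)
    return min(case_base, key=lambda case: abs(extract_feature(case[0]) - qf))

# Case base of (input, label) pairs acting as the CBR twin.
cases = [(1.0, "reject"), (5.0, "accept"), (9.0, "accept")]
case, label = nearest_case(4.0, cases)
print(case, label)  # the retrieved case serves as the explanation
```

In the actual twin-system work the features come from the trained ANN itself, so the retrieved neighbor reflects how the network, not the raw input space, groups cases.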
