Search Results for author: Mark T. Keane

Found 18 papers, 4 papers with code

Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals

no code implementations 16 Mar 2023 Greta Warren, Mark T. Keane, Christophe Gueret, Eoin Delaney

Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation.
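The core idea recurring throughout these papers can be sketched minimally: find a small change to an input that flips the model's prediction ("had income been 50 rather than 47, the loan would have been approved"). The toy threshold classifier and search loop below are purely illustrative and not any specific algorithm from the papers listed here.

```python
# Minimal counterfactual search: nudge one feature until the prediction flips.
# classify() is a toy stand-in for a black-box model, not a real system.

def classify(x):
    """Toy 'loan' model: approve (1) if an income-like feature >= 50."""
    return 1 if x >= 50.0 else 0

def counterfactual(x, step=0.5, max_iter=1000):
    """Smallest same-direction nudge of x that changes the predicted class."""
    original = classify(x)
    cf = x
    for _ in range(max_iter):
        cf += step if original == 0 else -step
        if classify(cf) != original:
            return cf
    return None  # no class change found within the search budget

print(counterfactual(47.0))  # a rejected input nudged until it is approved
```

Real counterfactual methods add constraints the sketch omits, such as changing as few features as possible and keeping the result plausible.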

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ

1 code implementation 16 Dec 2022 Eoin Delaney, Arjun Pakrashi, Derek Greene, Mark T. Keane

Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems due to their psychological validity, flexibility across problem domains and proposed legal compliance.

Solving the Class Imbalance Problem Using a Counterfactual Method for Data Augmentation

no code implementations 5 Nov 2021 Mohammed Temraz, Mark T. Keane

The experiments also show that CFA is competitive with many other oversampling methods, many of which are variants of SMOTE.

Data Augmentation
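SMOTE, the baseline family mentioned in the snippet above, oversamples a minority class by interpolating between a minority instance and one of its same-class neighbours. The sketch below shows only that interpolation step, not the paper's counterfactual CFA method.

```python
import random

def smote_like_sample(x, neighbor):
    """Create a synthetic minority-class point on the line segment between
    instance x and a same-class neighbor (SMOTE-style interpolation)."""
    lam = random.random()  # position along the segment, in [0, 1)
    return [xi + lam * (ni - xi) for xi, ni in zip(x, neighbor)]

minority_a = [1.0, 2.0]
minority_b = [3.0, 4.0]
synthetic = smote_like_sample(minority_a, minority_b)
print(synthetic)  # always lies between the two minority points
```

The counterfactual twist in the paper, by contrast, generates new minority samples by adapting majority-class cases that lie near the decision boundary.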

Advancing Nearest Neighbor Explanation-by-Example with Critical Classification Regions

no code implementations 29 Sep 2021 Eoin M. Kenny, Eoin D. Delaney, Mark T. Keane

There is an increasing body of evidence suggesting that post-hoc explanation-by-example with nearest neighbors is a promising solution for the eXplainable Artificial Intelligence (XAI) problem.

Classification, Explainable Artificial Intelligence
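Explanation-by-example with nearest neighbors, as studied above, justifies a prediction by retrieving the most similar known case. A minimal sketch with plain Euclidean distance follows; the paper's "critical classification regions" refinement is not reproduced here.

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def explain_by_example(query, training_data):
    """Return the (features, label) training case closest to the query:
    'the model predicted this because the input resembles that known case'."""
    return min(training_data, key=lambda case: euclidean(query, case[0]))

train = [([0.0, 0.0], "cat"), ([5.0, 5.0], "dog"), ([0.5, 1.0], "cat")]
print(explain_by_example([0.4, 0.9], train))  # nearest stored case
```

In practice the distance is usually computed in a learned feature space rather than on raw inputs, so that "similar" matches what the model itself responds to.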

Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions

no code implementations 20 Jul 2021 Eoin Delaney, Derek Greene, Mark T. Keane

Whilst an abundance of techniques have recently been proposed to generate counterfactual explanations for the predictions of opaque black-box systems, markedly less attention has been paid to exploring the uncertainty of these generated explanations.

Medical Diagnosis, Out-of-Distribution Detection
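One pitfall the paper above points to is that generated counterfactuals can fall outside the training distribution, making them unrealistic as explanations. A crude plausibility check, shown below as an illustrative sketch rather than the authors' method, flags a counterfactual whose distance to its nearest training point exceeds a threshold.

```python
def nearest_distance(point, data):
    """Euclidean distance from a point to its closest neighbor in data."""
    return min(sum((p - d) ** 2 for p, d in zip(point, row)) ** 0.5
               for row in data)

def is_plausible(counterfactual, training_data, threshold=1.0):
    """Flag counterfactuals far from all training data as out-of-distribution
    (hence implausible). The threshold here is an arbitrary toy value."""
    return nearest_distance(counterfactual, training_data) <= threshold

data = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
print(is_plausible([1.1, 0.9], data))    # near training data -> True
print(is_plausible([10.0, 10.0], data))  # far from all points -> False
```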

Instance-based Counterfactual Explanations for Time Series Classification

1 code implementation 28 Sep 2020 Eoin Delaney, Derek Greene, Mark T. Keane

In recent years, there has been a rapidly expanding focus on explaining the predictions made by black-box AI systems that handle image and tabular data.

Classification, Counterfactual Explanation, +4

On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning

1 code implementation 10 Sep 2020 Eoin M. Kenny, Mark T. Keane

There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs.

Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier

no code implementations 10 Sep 2020 Courtney Ford, Eoin M. Kenny, Mark T. Keane

This paper reports two experiments (N=349) on the impact of post hoc explanations by example and error rates on people's perceptions of a black-box classifier.

Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI)

no code implementations 26 May 2020 Mark T. Keane, Barry Smyth

Recently, a groundswell of research has identified the use of counterfactual explanations as a potentially significant solution to the Explainable AI (XAI) problem.

The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning

no code implementations 20 May 2019 Mark T. Keane, Eoin M. Kenny

The notion of twin systems is proposed to address the eXplainable AI (XAI) problem, where an uninterpretable black-box system is mapped to a white-box 'twin' that is more interpretable.
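The twin-system idea can be sketched as: take the black-box network's internal feature representation of each training case, then answer "why this prediction?" by retrieving the nearest training case in that feature space with a case-based (k-NN) "twin". In the toy sketch below, `feature_map` is a hypothetical stand-in for a trained network's penultimate-layer activations.

```python
# Toy ANN-CBR twin sketch. feature_map() is a hypothetical stand-in for a
# trained network's learned representation, not a real model.

def feature_map(x):
    """Stand-in for a network's internal representation of input x."""
    return [x[0] + x[1], x[0] - x[1]]

def twin_explain(query, training_set):
    """CBR twin: return the training case nearest to the query in the
    network's feature space, as an explanation-by-example."""
    q = feature_map(query)
    def dist(case):
        f = feature_map(case[0])
        return sum((qi - fi) ** 2 for qi, fi in zip(q, f))
    return min(training_set, key=dist)

cases = [([1.0, 1.0], "A"), ([4.0, 0.0], "B")]
print(twin_explain([1.2, 0.9], cases))  # retrieves the most similar case
```

Retrieving in the learned feature space, rather than on raw inputs, is what ties the white-box twin's answers to what the black-box network actually computed.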

Plotting Markson's 'Mistress'

no code implementations 17 May 2019 Conor Kelleher, Mark T. Keane

We present a distant reading of this work designed to complement a close reading of it by David Foster Wallace (1990).

The Unexpected Unexpected and the Expected Unexpected: How People's Conception of the Unexpected is Not That Unexpected

1 code implementation 17 May 2019 Molly S Quinn, Kathleen Campbell, Mark T. Keane

The answers people give when asked to 'think of the unexpected' for everyday event scenarios appear to be more expected than unexpected.

How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins

no code implementations 17 May 2019 Mark T. Keane, Eoin M. Kenny

This paper surveys an approach to the XAI problem, using post-hoc explanation by example, that hinges on twinning Artificial Neural Networks (ANNs) with Case-Based Reasoning (CBR) systems, so-called ANN-CBR twins.

Helping News Editors Write Better Headlines: A Recommender to Improve the Keyword Contents & Shareability of News Headlines

no code implementations 26 May 2017 Terrence Szymanski, Claudia Orellana-Rodriguez, Mark T. Keane

We present a software tool that employs state-of-the-art natural language processing (NLP) and machine learning techniques to help newspaper editors compose effective headlines for online publication.

Cognitive residues of similarity

no code implementations 9 Aug 2013 Stephanie O'Toole, Mark T. Keane

So, target objects were first presented in a comparison task (e.g., rating the similarity of the object to another), presumably modifying some of their features, before people were asked to visually search for the same object in complex scenes (with distractors and camouflaged backgrounds).

Innovation networks

no code implementations 9 Aug 2013 Petra Ahrweiler, Mark T. Keane

The tri-partite framework captures networks of ideas (Concept Level), people (Individual Level) and social structures (Social-Organizational Level) and the interactions between these levels.

Surprise: You've got some explaining to do

no code implementations 9 Aug 2013 Meadhbh Foster, Mark T. Keane

For the Task variable, participants either answered comprehension questions or provided an explanation of the outcome.
