Search Results for author: Mark T. Keane

Found 19 papers, 5 papers with code

Instance-based Counterfactual Explanations for Time Series Classification

1 code implementation • 28 Sep 2020 • Eoin Delaney, Derek Greene, Mark T. Keane

In recent years, there has been a rapidly expanding focus on explaining the predictions made by black-box AI systems that handle image and tabular data.

Classification counterfactual +6

On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning

1 code implementation • 10 Sep 2020 • Eoin M. Kenny, Mark T. Keane

There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs.

counterfactual Explainable Artificial Intelligence (XAI)

Solving the Class Imbalance Problem Using a Counterfactual Method for Data Augmentation

1 code implementation • 5 Nov 2021 • Mohammed Temraz, Mark T. Keane

The experiments also show that CFA is competitive with many other oversampling methods, many of which are variants of SMOTE.

counterfactual Data Augmentation
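
For context on the comparison above, the SMOTE-family baselines can be run in a few lines with the imbalanced-learn package. The sketch below is a hedged illustration of that baseline setup only, not of the paper's CFA method; the synthetic dataset and its imbalance ratio are assumptions chosen for the example.

```python
# Minimal sketch of a SMOTE oversampling baseline (not the paper's CFA method).
# The synthetic dataset and the 95%/5% imbalance are illustrative assumptions.
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Build a deliberately imbalanced two-class dataset.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
print("class counts before:", Counter(y))

# SMOTE synthesises new minority-class points by interpolating between a
# minority instance and one of its minority-class nearest neighbours.
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, y)
print("class counts after: ", Counter(y_resampled))
```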

Helping News Editors Write Better Headlines: A Recommender to Improve the Keyword Contents & Shareability of News Headlines

no code implementations • 26 May 2017 • Terrence Szymanski, Claudia Orellana-Rodriguez, Mark T. Keane

We present a software tool that employs state-of-the-art natural language processing (NLP) and machine learning techniques to help newspaper editors compose effective headlines for online publication.

regression
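
The task tag above is regression; the hedged sketch below shows one generic recipe for scoring candidate headlines (bag-of-words features feeding a regression model). The pipeline, toy headlines and share counts are assumptions for illustration, not the tool or data described in the paper.

```python
# Illustrative sketch only: a generic text-regression pipeline for ranking
# candidate headlines, not the system described in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training data: headlines paired with hypothetical share counts.
headlines = [
    "Markets rally as inflation cools",
    "Ten tips for writing better headlines",
    "Local team wins championship after extra time",
    "Why this election matters for housing",
]
shares = [120, 340, 90, 210]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(headlines, shares)

# Score two candidate headlines for the same story; the editor keeps the better one.
candidates = ["Election results explained",
              "What the election results mean for you"]
for headline, score in zip(candidates, model.predict(candidates)):
    print(f"{score:7.1f}  {headline}")
```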

Cognitive residues of similarity

no code implementations • 9 Aug 2013 • Stephanie O'Toole, Mark T. Keane

Target objects were first presented in a comparison task (e.g., rating the similarity of one object to another), thus presumably modifying some of their features, before people were asked to visually search for the same object in complex scenes (with distractors and camouflaged backgrounds).

Object

Surprise: You've got some explaining to do

no code implementations • 9 Aug 2013 • Meadhbh Foster, Mark T. Keane

For the Task variable, participants either answered comprehension questions or provided an explanation of the outcome.

Innovation networks

no code implementations • 9 Aug 2013 • Petra Ahrweiler, Mark T. Keane

The tri-partite framework captures networks of ideas (Concept Level), people (Individual Level) and social structures (Social-Organizational Level) and the interactions between these levels.
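
A hedged sketch of how such a tri-partite structure could be encoded is given below, using a networkx graph with a 'level' attribute on each node; the node names, levels and edges are invented for the example and are not taken from the paper.

```python
# Illustrative only: one way to encode a three-level innovation network.
# Node names and edges are invented; only the three levels come from the paper.
import networkx as nx

G = nx.Graph()

# Concept Level (ideas), Individual Level (people), Social-Organizational Level.
G.add_node("idea:graphene_battery", level="concept")
G.add_node("person:alice", level="individual")
G.add_node("person:bob", level="individual")
G.add_node("org:research_lab", level="social-organizational")

# Interactions within and between levels.
G.add_edge("person:alice", "person:bob")                  # collaboration
G.add_edge("person:alice", "idea:graphene_battery")       # works on the idea
G.add_edge("person:bob", "org:research_lab")              # membership
G.add_edge("org:research_lab", "idea:graphene_battery")   # institutional backing

# Group nodes by level to inspect the tri-partite structure.
by_level = {}
for node, attrs in G.nodes(data=True):
    by_level.setdefault(attrs["level"], []).append(node)
print(by_level)
```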

How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins

no code implementations • 17 May 2019 • Mark T. Keane, Eoin M. Kenny

This paper surveys an approach to the XAI problem, using post-hoc explanation by example, that hinges on twinning Artificial Neural Networks (ANNs) with Case-Based Reasoning (CBR) systems, so-called ANN-CBR twins.

Explainable Artificial Intelligence (XAI)
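
A minimal, hedged sketch of the twin-system idea follows: a neural network makes the prediction, and a nearest-neighbour retrieval over the same training data supplies example-based explanations. For brevity the sketch retrieves neighbours in the raw feature space, whereas the twinning methods surveyed in the paper weight that retrieval using feature contributions extracted from the ANN; the dataset and model settings are assumptions.

```python
# Hedged sketch of an ANN-CBR "twin": the MLP predicts, the k-NN explains by
# retrieving similar training cases. Retrieval here uses raw features for
# brevity; the surveyed methods weight it with ANN-derived feature contributions.
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X, y)        # black-box twin
cbr = NearestNeighbors(n_neighbors=3).fit(X)         # case-based (white-box) twin

query = X[:1]
prediction = ann.predict(query)[0]
_, neighbour_idx = cbr.kneighbors(query)

print("ANN prediction:", prediction)
print("Explanatory cases (training indices):", neighbour_idx[0].tolist())
print("Labels of those cases:", y[neighbour_idx[0]].tolist())
```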

Plotting Markson's 'Mistress'

no code implementations • 17 May 2019 • Conor Kelleher, Mark T. Keane

We present a distant reading of this work designed to complement a close reading of it by David Foster Wallace (1990).

The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning

no code implementations • 20 May 2019 • Mark T. Keane, Eoin M. Kenny

The notion of twin systems is proposed to address the eXplainable AI (XAI) problem, where an uninterpretable black-box system is mapped to a white-box 'twin' that is more interpretable.

Explainable Artificial Intelligence (XAI)

The Unexpected Unexpected and the Expected Unexpected: How People's Conception of the Unexpected is Not That Unexpected

1 code implementation • 17 May 2019 • Molly S Quinn, Kathleen Campbell, Mark T. Keane

The answers people give when asked to 'think of the unexpected' for everyday event scenarios appear to be more expected than unexpected.

Philosophy

Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI)

no code implementations • 26 May 2020 • Mark T. Keane, Barry Smyth

Recently, a groundswell of research has identified the use of counterfactual explanations as a potentially significant solution to the Explainable AI (XAI) problem.

counterfactual Explainable Artificial Intelligence (XAI)
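
The simplest case-based ingredient behind such techniques can be sketched as follows: retrieve the query's nearest unlike neighbour from the case base and read off the feature differences as a candidate counterfactual. This is a naive, hedged illustration on an off-the-shelf dataset, not the paper's algorithm, which, as the title suggests, works from 'good' (sparse) counterfactual pairs already present in the case base.

```python
# Naive illustration: nearest-unlike-neighbour retrieval as a candidate
# counterfactual. Not the paper's algorithm; the dataset is illustrative.
import numpy as np
from sklearn.datasets import load_iris

data = load_iris()
X, y = data.data, data.target

query_idx = 0
query, query_class = X[query_idx], y[query_idx]

# Closest case in the case base that carries a *different* class label.
unlike = np.flatnonzero(y != query_class)
distances = np.linalg.norm(X[unlike] - query, axis=1)
nun = unlike[np.argmin(distances)]

print("query class:         ", data.target_names[query_class])
print("counterfactual class:", data.target_names[y[nun]])
for name, delta in zip(data.feature_names, X[nun] - query):
    if abs(delta) > 1e-9:
        print(f"  change {name} by {delta:+.2f}")
```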

Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier

no code implementations • 10 Sep 2020 • Courtney Ford, Eoin M. Kenny, Mark T. Keane

This paper reports two experiments (N=349) on the impact of post-hoc explanations-by-example and error rates on people's perceptions of a black-box classifier.

Explainable Artificial Intelligence (XAI)

Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions

no code implementations • 20 Jul 2021 • Eoin Delaney, Derek Greene, Mark T. Keane

Whilst an abundance of techniques have recently been proposed to generate counterfactual explanations for the predictions of opaque black-box systems, markedly less attention has been paid to exploring the uncertainty of these generated explanations.

counterfactual Medical Diagnosis +1
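
A hedged sketch of the kind of check at issue appears below: after a counterfactual point has been generated, test whether it falls outside the training distribution, here with scikit-learn's IsolationForest as one off-the-shelf detector. The two "generated" points are fabricated for illustration; the paper's own uncertainty estimators and benchmarks are not reproduced.

```python
# Hedged sketch: flag out-of-distribution counterfactuals with an off-the-shelf
# detector. The two "counterfactual" points are invented for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import IsolationForest

X, _ = load_iris(return_X_y=True)
detector = IsolationForest(random_state=0).fit(X)

plausible_cf = X.mean(axis=0)            # sits inside the data cloud
implausible_cf = X.max(axis=0) * 3.0     # sits far outside it

for name, cf in [("plausible", plausible_cf), ("implausible", implausible_cf)]:
    verdict = detector.predict(cf.reshape(1, -1))[0]   # +1 inlier, -1 outlier
    status = "in-distribution" if verdict == 1 else "out-of-distribution"
    print(f"{name} counterfactual -> {status}")
```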

Advancing Nearest Neighbor Explanation-by-Example with Critical Classification Regions

no code implementations • 29 Sep 2021 • Eoin M. Kenny, Eoin D. Delaney, Mark T. Keane

There is an increasing body of evidence suggesting that post-hoc explanation-by-example with nearest neighbors is a promising solution for the eXplainable Artificial Intelligence (XAI) problem.

Classification Explainable artificial intelligence +1

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ

1 code implementation • 16 Dec 2022 • Eoin Delaney, Arjun Pakrashi, Derek Greene, Mark T. Keane

Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems due to their psychological validity, flexibility across problem domains and proposed legal compliance.

counterfactual Explainable Artificial Intelligence (XAI)

Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals

no code implementations • 16 Mar 2023 • Greta Warren, Mark T. Keane, Christophe Gueret, Eoin Delaney

Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation.

counterfactual Explainable Artificial Intelligence (XAI)

Even-Ifs From If-Onlys: Are the Best Semi-Factual Explanations Found Using Counterfactuals As Guides?

no code implementations • 1 Mar 2024 • Saugat Aryal, Mark T. Keane

Recently, counterfactuals using "if-only" explanations have become very popular in eXplainable AI (XAI), as they describe which changes to feature-inputs of a black-box AI system result in changes to a (usually negative) decision-outcome.

counterfactual
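
The "counterfactual as guide" question in the title can be illustrated with a hedged sketch: walk from the query toward its nearest unlike neighbour and keep the last point whose prediction is unchanged as a candidate "even-if" semi-factual. This is a naive illustration on an off-the-shelf dataset and classifier, not one of the methods evaluated in the paper.

```python
# Hedged sketch of "counterfactual as guide": step from the query toward its
# nearest unlike neighbour; the furthest same-prediction point is kept as a
# candidate semi-factual. Illustration only, not the paper's evaluated methods.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

query = X[0]
query_class = clf.predict(query.reshape(1, -1))[0]

# Guide: the nearest training case that the classifier puts in another class.
others = X[clf.predict(X) != query_class]
guide = others[np.argmin(np.linalg.norm(others - query, axis=1))]

# Walk toward the guide until the predicted class flips.
semi_factual = query
for alpha in np.linspace(0.0, 1.0, 101):
    candidate = (1 - alpha) * query + alpha * guide
    if clf.predict(candidate.reshape(1, -1))[0] != query_class:
        break
    semi_factual = candidate

print("even-if (semi-factual) point:   ", np.round(semi_factual, 2))
print("if-only (counterfactual) guide: ", np.round(guide, 2))
```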
