no code implementations • 16 Mar 2023 • Greta Warren, Mark T. Keane, Christophe Gueret, Eoin Delaney
Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation.
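The core move behind a counterfactual explanation can be sketched minimally: given a black-box classifier and a query instance, search outward for the smallest feature change that flips the prediction ("you were denied, but had your debt been 14 rather than 20, you would have been approved"). The classifier, features, and thresholds below are hypothetical stand-ins, not any method from the paper:

```python
# A minimal counterfactual search: find the nearest single-feature change
# that flips a (hypothetical) black-box loan classifier's decision.

def black_box(income: float, debt: float) -> str:
    """Stand-in for an opaque model: approve only if income comfortably exceeds debt."""
    return "approve" if income - 2 * debt > 10 else "deny"

def counterfactual(income, debt, step=1.0, max_steps=100):
    """Search outward from the query, one feature at a time, for the
    closest perturbation whose prediction differs from the original."""
    original = black_box(income, debt)
    for k in range(1, max_steps + 1):
        # Try raising income or lowering debt by k steps.
        for d_income, d_debt in [(k * step, 0), (0, -k * step)]:
            if black_box(income + d_income, debt + d_debt) != original:
                return {"income": income + d_income, "debt": debt + d_debt}
    return None  # no flip found within the search budget

cf = counterfactual(income=40, debt=20)
print(cf)  # {'income': 40, 'debt': 14.0}
```

The sketch only perturbs one feature at a time; real counterfactual generators optimise over joint, plausibility-constrained changes, but the contrastive "smallest change that flips the outcome" logic is the same.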
1 code implementation • 16 Dec 2022 • Eoin Delaney, Arjun Pakrashi, Derek Greene, Mark T. Keane
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems due to their psychological validity, flexibility across problem domains and proposed legal compliance.
no code implementations • 5 Nov 2021 • Mohammed Temraz, Mark T. Keane
The experiments also show that CFA is competitive with many other oversampling methods, many of which are variants of SMOTE.
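The SMOTE family that CFA is benchmarked against shares one core move: synthesising new minority-class examples by interpolating between a real minority instance and one of its minority-class neighbours. A minimal stdlib sketch of that interpolation step, on hypothetical data (this is generic SMOTE-style oversampling, not CFA itself):

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate synthetic minority-class points by interpolating between a
    randomly chosen minority instance and its nearest minority neighbour."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # Nearest neighbour of a within the minority class (excluding a itself).
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

# Three hypothetical minority-class points; each synthetic point lies on a
# line segment between a real point and its nearest minority neighbour.
pts = smote_like([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], n_new=5)
```

CFA's distinguishing idea, per the abstract's framing, is to adapt counterfactual generation to produce these synthetic cases rather than using blind interpolation.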
no code implementations • 29 Sep 2021 • Eoin M. Kenny, Eoin D. Delaney, Mark T. Keane
There is an increasing body of evidence suggesting that post-hoc explanation-by-example with nearest neighbors is a promising solution for the eXplainable Artificial Intelligence (XAI) problem.
no code implementations • 20 Jul 2021 • Eoin Delaney, Derek Greene, Mark T. Keane
Whilst an abundance of techniques have recently been proposed to generate counterfactual explanations for the predictions of opaque black-box systems, markedly less attention has been paid to exploring the uncertainty of these generated explanations.
no code implementations • 6 Jan 2021 • Cathal Ryan, Christophe Guéret, Donagh Berry, Medb Corcoran, Mark T. Keane, Brian Mac Namee
Mastitis is a billion-dollar health problem for the modern dairy industry, with implications for antibiotic resistance.
1 code implementation • 28 Sep 2020 • Eoin Delaney, Derek Greene, Mark T. Keane
In recent years, there has been a rapidly expanding focus on explaining the predictions made by black-box AI systems that handle image and tabular data.
1 code implementation • 10 Sep 2020 • Eoin M. Kenny, Mark T. Keane
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs.
no code implementations • 10 Sep 2020 • Courtney Ford, Eoin M. Kenny, Mark T. Keane
This paper reports two experiments (N=349) on the impact of post hoc explanations by example and error rates on people's perceptions of a black-box classifier.
no code implementations • 26 May 2020 • Mark T. Keane, Barry Smyth
Recently, a groundswell of research has identified the use of counterfactual explanations as a potentially significant solution to the Explainable AI (XAI) problem.
no code implementations • 20 May 2019 • Mark T. Keane, Eoin M. Kenny
The notion of twin systems is proposed to address the eXplainable AI (XAI) problem, where an uninterpretable black-box system is mapped to a white-box 'twin' that is more interpretable.
no code implementations • 17 May 2019 • Conor Kelleher, Mark T. Keane
We present a distant reading of this work designed to complement a close reading of it by David Foster Wallace (1990).
1 code implementation • 17 May 2019 • Molly S Quinn, Kathleen Campbell, Mark T. Keane
The answers people give when asked to 'think of the unexpected' for everyday event scenarios appear to be more expected than unexpected.
no code implementations • 17 May 2019 • Mark T. Keane, Eoin M. Kenny
This paper surveys an approach to the XAI problem, using post-hoc explanation by example, that hinges on twinning Artificial Neural Networks (ANNs) with Case-Based Reasoning (CBR) systems, so-called ANN-CBR twins.
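The twinning idea can be illustrated in a few lines: an opaque model's prediction for a query is "explained" by retrieving the training cases nearest to the query that the model labels the same way, as a CBR system would. The model, cases, and distance metric below are hypothetical stand-ins, not the ANN-CBR feature-weighting method itself:

```python
def black_box(x):
    """Stand-in for an opaque ANN: classify by the sign of a weighted sum."""
    return 1 if 0.8 * x[0] - 0.3 * x[1] > 0 else 0

def explain_by_example(query, cases, k=2):
    """CBR-style post-hoc explanation: return the k training cases nearest
    to the query that the black box assigns the same label as the query."""
    label = black_box(query)
    same = [c for c in cases if black_box(c) == label]
    same.sort(key=lambda c: sum((a - b) ** 2 for a, b in zip(c, query)))
    return same[:k]

cases = [(2.0, 1.0), (3.0, 0.5), (-1.0, 2.0), (0.5, 4.0)]
query = (2.5, 0.8)
neighbours = explain_by_example(query, cases)
print(neighbours)  # [(2.0, 1.0), (3.0, 0.5)]
```

In the actual ANN-CBR proposal the twin's retrieval is weighted by feature contributions extracted from the network; the plain Euclidean distance here is a simplification for illustration.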
no code implementations • 26 May 2017 • Terrence Szymanski, Claudia Orellana-Rodriguez, Mark T. Keane
We present a software tool that employs state-of-the-art natural language processing (NLP) and machine learning techniques to help newspaper editors compose effective headlines for online publication.
no code implementations • 9 Aug 2013 • Stephanie O'Toole, Mark T. Keane
Target objects were first presented in a comparison task (e.g., rating the similarity of one object to another), thus presumably modifying some of their features, before people were asked to visually search for the same object in complex scenes (with distractors and camouflaged backgrounds).
no code implementations • 9 Aug 2013 • Petra Ahrweiler, Mark T. Keane
The tri-partite framework captures networks of ideas (Concept Level), people (Individual Level) and social structures (Social-Organizational Level) and the interactions between these levels.
no code implementations • 9 Aug 2013 • Meadhbh Foster, Mark T. Keane
For the Task variable, participants either answered comprehension questions or provided an explanation of the outcome.