Search Results for author: Antonio Rago

Found 17 papers, 5 with code

Exploring the Effect of Explanation Content and Format on User Comprehension and Trust

no code implementations · 30 Aug 2024 · Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni

In both studies, when comparing based on content, we found a clear general preference for occlusion-1 over SHAP explanations in terms of subjective comprehension and trust.
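
For context, occlusion-1 attributes a prediction to each feature by occluding that single feature and measuring the change in the model's output. A minimal sketch with an illustrative model and data (not the paper's setup or implementation):

```python
# Occlusion-1 attribution sketch: zero out one feature at a time and
# record the change in predicted probability. Model/data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def occlusion_1(model, x, baseline=0.0):
    """Attribution of feature i = p(x) - p(x with feature i occluded)."""
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline          # occlude feature i
        p_occ = model.predict_proba(x_occ.reshape(1, -1))[0, 1]
        scores.append(p - p_occ)
    return np.array(scores)

print(occlusion_1(model, X[0]))
```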

Advancing Interactive Explainable AI via Belief Change Theory

no code implementations · 13 Aug 2024 · Antonio Rago, Maria Vanina Martinez

Finally, we analyse a core set of belief change postulates, discussing their suitability for our real-world settings and pointing to particular challenges that may require relaxing or reinterpreting some of the theoretical assumptions underlying existing operators.
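
For a flavour of the postulates in question, here is a minimal sketch of a crude prioritised revision operator over belief bases of propositional literals, checking the success and consistency postulates. This simplification is ours, not the paper's formalism:

```python
# Belief bases as sets of literals; negation is prefixed with '~'.
# This crude operator only illustrates what the postulates demand.

def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(base: set[str]) -> bool:
    """Consistent iff no literal appears together with its negation."""
    return not any(negate(p) in base for p in base)

def revise(base: set[str], new: str) -> set[str]:
    """Prioritised revision: drop what conflicts with the new
    information, then add it (a stand-in for an AGM-style operator)."""
    return {l for l in base if l != negate(new)} | {new}

revised = revise({"p", "q"}, "~p")
assert "~p" in revised        # success postulate: new info is believed
assert consistent(revised)    # consistency postulate
print(revised)                # {'q', '~p'}
```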

Argumentative Large Language Models for Explainable and Contestable Decision-Making

no code implementations · 3 May 2024 · Gabriel Freedman, Adam Dejl, Deniz Gorur, Xiang Yin, Antonio Rago, Francesca Toni

Concretely, we introduce argumentative LLMs, a method utilising LLMs to construct argumentation frameworks, which then serve as the basis for formal reasoning in decision-making.

Tasks: Claim Verification, Decision Making, +1
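
To illustrate the kind of formal reasoning an argumentation framework supports, here is a minimal sketch of Dung-style grounded semantics; the arguments and attacks are hand-written stand-ins for what an LLM would extract, not the paper's pipeline:

```python
# Grounded extension of an abstract argumentation framework, computed
# by iterating the characteristic function to its least fixpoint.

def grounded_extension(args: set[str], attacks: set[tuple[str, str]]) -> set[str]:
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    extension: set[str] = set()
    while True:
        # an argument is acceptable if every attacker is itself
        # attacked by some member of the current extension
        acceptable = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

args = {"claim", "counter", "rebuttal"}
attacks = {("counter", "claim"), ("rebuttal", "counter")}
print(grounded_extension(args, attacks))  # {'rebuttal', 'claim'}
```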

Interval Abstractions for Robust Counterfactual Explanations

1 code implementation · 21 Apr 2024 · Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni

Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models.

Tasks: counterfactual, Multi-class Classification
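
As background, a counterfactual explanation is a (typically nearby) input that receives a different prediction from the model. A minimal sketch using naive random search, purely illustrative and unrelated to the paper's interval abstraction method:

```python
# Find the closest sampled point (L2 distance) that flips the
# classifier's decision. Model/data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(model, x, n_samples=5000, scale=2.0):
    """Return the nearest sampled candidate classified differently."""
    original = model.predict(x.reshape(1, -1))[0]
    candidates = x + rng.normal(scale=scale, size=(n_samples, x.size))
    flipped = candidates[model.predict(candidates) != original]
    return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]

x = X[0]
print(x, "->", counterfactual(model, x))
```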

Can Large Language Models perform Relation-based Argument Mining?

no code implementations · 17 Feb 2024 · Deniz Gorur, Antonio Rago, Francesca Toni

Argument mining (AM) is the process of automatically extracting arguments, their components and/or relations amongst arguments and components from text.

Tasks: Argument Mining, Relation
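
A minimal sketch of how relation-based argument mining can be posed as an LLM classification task, in the spirit of what the paper investigates. `llm_complete` is a hypothetical stand-in for whatever text-completion call is available; it is not a real API:

```python
# Relation-based argument mining as a prompt: given two arguments,
# ask the model whether the second supports or attacks the first.

PROMPT = """Given two arguments, answer with exactly one word:
"support" if the second argument supports the first,
"attack" if it attacks it, or "neither".

Argument 1: {a}
Argument 2: {b}
Answer:"""

def classify_relation(a: str, b: str, llm_complete) -> str:
    # llm_complete: any callable mapping a prompt string to a completion
    answer = llm_complete(PROMPT.format(a=a, b=b))
    return answer.strip().lower()

# Hypothetical usage with any completion backend:
# relation = classify_relation(
#     "We should tax carbon emissions.",
#     "Carbon taxes demonstrably reduce emissions.",
#     llm_complete=my_model,   # placeholder, not a real API
# )
```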

Robust Counterfactual Explanations in Machine Learning: A Survey

no code implementations · 2 Feb 2024 · Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni

Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models.

Tasks: counterfactual

Recourse under Model Multiplicity via Argumentative Ensembling (Technical Report)

1 code implementation · 22 Dec 2023 · Junqi Jiang, Antonio Rago, Francesco Leofante, Francesca Toni

Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task.

Tasks: counterfactual
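
A minimal sketch of the phenomenon itself: models trained with different seeds reach near-identical test accuracy yet disagree on individual inputs, which is what makes recourse under MM non-trivial. Data and models are illustrative, not the paper's experiments:

```python
# Train several equally configured models with different seeds and
# count the test inputs on which they disagree.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    RandomForestClassifier(n_estimators=50, random_state=s).fit(X_tr, y_tr)
    for s in range(5)
]
preds = np.array([m.predict(X_te) for m in models])
print("accuracies:", [round(m.score(X_te, y_te), 3) for m in models])
print("inputs with disagreement:", int((preds.min(0) != preds.max(0)).sum()))
```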

Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation

1 code implementation · 22 Sep 2023 · Junqi Jiang, Jianglin Lan, Francesco Leofante, Antonio Rago, Francesca Toni

In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method leveraging robust optimisation techniques to address the aforementioned limitations in the literature.

Tasks: counterfactual

Interactive Explanations by Conflict Resolution via Argumentative Exchanges

no code implementations · 27 Mar 2023 · Antonio Rago, Hengzhi Li, Francesca Toni

As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations.

Tasks: counterfactual, Explainable Artificial Intelligence (XAI)

Formalising the Robustness of Counterfactual Explanations for Neural Networks

1 code implementation · 31 Aug 2022 · Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni

Existing attempts to solve this problem are heuristic, and the robustness of the resulting CFXs to model changes is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees.

Tasks: counterfactual
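
A minimal sketch of the kind of empirical check the excerpt refers to: validating a counterfactual against a small set of retrained models, which is exactly what the paper argues falls short of an exhaustive guarantee. The data and candidate counterfactual here are illustrative:

```python
# Retrain on bootstrap resamples and count how often a fixed candidate
# counterfactual remains valid (still classified as the target class).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(int)

cfx = np.array([0.8, -0.6, 0.2])  # candidate counterfactual, target class 1

valid = 0
for s in range(10):
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    m = LogisticRegression().fit(X[idx], y[idx])  # "retrained" model
    valid += int(m.predict(cfx.reshape(1, -1))[0] == 1)
print(f"CFX valid under {valid}/10 retrained models")
```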

Forecasting Argumentation Frameworks

no code implementations · 23 May 2022 · Benjamin Irwin, Antonio Rago, Francesca Toni

We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research.

Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement

no code implementations · 23 May 2022 · Antonio Rago, Pietro Baroni, Francesca Toni

Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI.

Argumentative XAI: A Survey

no code implementations · 24 May 2021 · Kristijonas Čyras, Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni

Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years.

Tasks: Explainable Artificial Intelligence (XAI)

Influence-Driven Explanations for Bayesian Network Classifiers

no code implementations · 10 Dec 2020 · Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni

One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models.

Tasks: counterfactual, Relation

Deep Argumentative Explanations

no code implementations · 10 Dec 2020 · Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni

Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs).

Tasks: Explainable Artificial Intelligence (XAI), Text Classification
