no code implementations • 30 Aug 2024 • Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni
In both studies we found a clear preference, in terms of subjective comprehension and trust, for occlusion-1 explanations over SHAP explanations when comparisons were based on content.
no code implementations • 13 Aug 2024 • Antonio Rago, Maria Vanina Martinez
Finally, we analyse a core set of belief change postulates, discussing their suitability for our real world settings and pointing to particular challenges that may require the relaxation or reinterpretation of some of the theoretical assumptions underlying existing operators.
1 code implementation • 19 Jun 2024 • Xuehao Zhai, Junqi Jiang, Adam Dejl, Antonio Rago, Fangce Guo, Francesca Toni, Aruna Sivakumar
Urban land use inference is a critically important task that aids in city planning and policy-making.
no code implementations • 17 May 2024 • Francesco Leofante, Hamed Ayoobi, Adam Dejl, Gabriel Freedman, Deniz Gorur, Junqi Jiang, Guilherme Paulino-Passos, Antonio Rago, Anna Rapberger, Fabrizio Russo, Xiang Yin, Dekai Zhang, Francesca Toni
AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable.
no code implementations • 3 May 2024 • Gabriel Freedman, Adam Dejl, Deniz Gorur, Xiang Yin, Antonio Rago, Francesca Toni
Concretely, we introduce argumentative LLMs, a method utilising LLMs to construct argumentation frameworks, which then serve as the basis for formal reasoning in decision-making.
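Argumentation frameworks of the kind referenced here can be evaluated with standard Dung semantics. A minimal sketch (illustrating the general formalism, not the paper's implementation) that computes the grounded extension of an abstract argumentation framework by iterating the characteristic function to its least fixed point:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by iterating the characteristic
    function from the empty set until a fixed point is reached."""
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current extension if
        # every one of its attackers is itself attacked by the extension.
        acceptable = {
            a for a in arguments
            if all(any((d, attacker) in attacks for d in extension)
                   for (attacker, target) in attacks if target == a)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Example: a attacks b, b attacks c  ->  grounded extension {a, c}
args = {"a", "b", "c"}
atks = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atks)))  # ['a', 'c']
```

Because the characteristic function is monotone, the iteration converges to the least fixed point, which is exactly the grounded (most sceptical) extension.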
1 code implementation • 21 Apr 2024 • Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models.
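The core idea of a CE can be sketched as a search for the nearest input that flips the model's decision. A toy, brute-force version (the loan model and candidate grid are illustrative assumptions, not from the paper):

```python
def nearest_counterfactual(model, x, candidates):
    """Brute-force counterfactual search: among candidate inputs that
    the model classifies differently from x, return the one closest
    to x in L1 distance (None if no candidate flips the decision)."""
    original = model(x)
    flipped = [c for c in candidates if model(c) != original]
    if not flipped:
        return None
    return min(flipped, key=lambda c: sum(abs(a - b) for a, b in zip(c, x)))

# Hypothetical loan model: approve iff income + 2 * credit_score >= 10.
model = lambda x: int(x[0] + 2 * x[1] >= 10)

x = (3.0, 2.0)                              # rejected: 3 + 4 = 7 < 10
grid = [(i, j) for i in range(11) for j in range(11)]
print(nearest_counterfactual(model, x, grid))  # (3, 4)
```

The returned point is the recourse recommendation: raising the second feature from 2 to 4 is the smallest change (under L1 distance over this grid) that flips the decision.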
no code implementations • 17 Feb 2024 • Deniz Gorur, Antonio Rago, Francesca Toni
Argument mining (AM) is the process of automatically extracting arguments, their components and/or relations amongst arguments and components from text.
no code implementations • 2 Feb 2024 • Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models.
1 code implementation • 22 Dec 2023 • Junqi Jiang, Antonio Rago, Francesco Leofante, Francesca Toni
Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task.
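Model multiplicity is easy to reproduce in miniature: two classifiers can achieve identical accuracy on the training data yet disagree on unseen inputs. A hypothetical one-feature illustration (not the paper's experimental setup):

```python
# Training data: (feature, label) pairs separable by many thresholds.
data = [(1, 0), (2, 0), (6, 1), (7, 1)]

model_a = lambda x: int(x >= 3)   # threshold 3
model_b = lambda x: int(x >= 5)   # threshold 5

acc = lambda m: sum(m(x) == y for x, y in data) / len(data)
print(acc(model_a), acc(model_b))  # 1.0 1.0  -- equally performing

# Yet the two models disagree on an unseen point in the gap:
print(model_a(4), model_b(4))      # 1 0
```

Any threshold in the gap between 2 and 6 fits the data perfectly, so training alone cannot distinguish these models, even though they assign different outcomes to individuals in that gap.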
1 code implementation • 22 Sep 2023 • Junqi Jiang, Jianglin Lan, Francesco Leofante, Antonio Rago, Francesca Toni
In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method leveraging robust optimisation techniques to address the aforementioned limitations in the literature.
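The robustness notion at stake can be illustrated with a much simpler interval check than PROPLACE itself (the linear model and bounds below are assumptions of this sketch): a counterfactual is provably robust if it stays on the positive side of the decision boundary for every model within given perturbation bounds.

```python
def robustly_valid(ce, w_lo, w_hi, b_lo):
    """Conservatively check that a counterfactual ce stays positively
    classified under score w . ce + b for every linear model with each
    weight w_i in [w_lo[i], w_hi[i]] and bias b >= b_lo, by taking the
    interval endpoint minimising each term (interval-arithmetic worst case)."""
    worst = sum(min(lo * x, hi * x)
                for x, lo, hi in zip(ce, w_lo, w_hi)) + b_lo
    return worst >= 0

# Hypothetical counterfactual and model-perturbation bounds:
ce = (2.0, 1.0)
print(robustly_valid(ce, (0.5, 1.0), (1.5, 2.0), -2.0))  # True
print(robustly_valid(ce, (0.5, 1.0), (1.5, 2.0), -2.5))  # False
```

A single worst-case evaluation certifies validity across the whole family of perturbed models, which is the kind of exhaustive guarantee that checking a handful of retrained models cannot provide.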
no code implementations • 27 Mar 2023 • Antonio Rago, Hengzhi Li, Francesca Toni
As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations.
1 code implementation • 31 Aug 2022 • Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Existing attempts to solve this problem are heuristic, and the robustness of the resulting CFXs to model changes is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees.
no code implementations • 23 May 2022 • Benjamin Irwin, Antonio Rago, Francesca Toni
We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research.
no code implementations • 23 May 2022 • Antonio Rago, Pietro Baroni, Francesca Toni
Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI.
no code implementations • 24 May 2021 • Kristijonas Čyras, Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni
Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years.
no code implementations • 10 Dec 2020 • Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni
One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models.
no code implementations • 10 Dec 2020 • Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni
Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs).