1 code implementation • 10 Jul 2024 • Luca Marzari, Francesco Leofante, Ferdinando Cicalese, Alessandro Farinelli
We study the problem of assessing the robustness of counterfactual explanations for deep learning models.
no code implementations • 17 May 2024 • Francesco Leofante, Hamed Ayoobi, Adam Dejl, Gabriel Freedman, Deniz Gorur, Junqi Jiang, Guilherme Paulino-Passos, Antonio Rago, Anna Rapberger, Fabrizio Russo, Xiang Yin, Dekai Zhang, Francesca Toni
AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable.
1 code implementation • 21 Apr 2024 • Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models.
no code implementations • 2 Feb 2024 • Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models.
1 code implementation • 22 Dec 2023 • Junqi Jiang, Antonio Rago, Francesco Leofante, Francesca Toni
Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task.
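A minimal toy sketch of Model Multiplicity (hypothetical data and rules, not the paper's setup): two trivial classifiers fit the same training data equally well, yet disagree on an unseen input.

```python
# Hypothetical illustration of Model Multiplicity: two rules with identical
# (here, perfect) training accuracy that disagree on a new input.
data = [((1, 1), 1), ((0, 0), 0), ((1, 1), 1), ((0, 0), 0)]

model_a = lambda x: x[0]  # predict from feature 0 only
model_b = lambda x: x[1]  # predict from feature 1 only

acc_a = sum(model_a(x) == y for x, y in data) / len(data)
acc_b = sum(model_b(x) == y for x, y in data) / len(data)

unseen = (1, 0)
# acc_a == acc_b, yet model_a(unseen) != model_b(unseen): the two equally
# performing models give conflicting predictions for the same individual.
```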
1 code implementation • 11 Dec 2023 • Francesco Leofante, Nico Potyka
Counterfactual explanations shed light on the decisions of black-box models by explaining how an input can be altered to obtain a favourable decision from the model (e.g., when a loan application has been rejected).
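The loan example above can be sketched as a toy counterfactual search (this is a hypothetical linear classifier and feature set, not the method of the paper): starting from a rejected input, a feature is perturbed until the decision flips.

```python
# Toy counterfactual search for a hand-coded linear "loan" classifier.
# Features and weights are hypothetical; approve when score >= 0.
def score(x):
    w, b = (1.0, -2.0), -3.0  # (income weight, debt weight), bias
    return w[0] * x[0] + w[1] * x[1] + b

def counterfactual(x, step=0.1, max_iters=1000):
    # Greedily raise income (the positively weighted feature) until the
    # decision flips; the resulting input is a counterfactual explanation.
    cf = list(x)
    for _ in range(max_iters):
        if score(cf) >= 0:
            return tuple(cf)
        cf[0] += step
    return None  # no counterfactual found within the budget

rejected = (4.0, 1.0)          # score(rejected) < 0 -> loan rejected
ce = counterfactual(rejected)  # a small income raise that yields approval
```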
1 code implementation • 22 Sep 2023 • Junqi Jiang, Jianglin Lan, Francesco Leofante, Antonio Rago, Francesca Toni
In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method that leverages robust optimisation techniques to address the aforementioned limitations in the literature.
1 code implementation • 31 Aug 2022 • Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Existing attempts to solve this problem are heuristic, and the robustness of the resulting CFXs to model changes is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees.
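A toy sketch of why such heuristic checks can mislead (hypothetical weights, not the paper's models): a counterfactual valid for the original linear model may be invalidated by a slightly retrained one, and sampling a few retrained models gives no exhaustive guarantee.

```python
# A counterfactual valid for one linear model can fail under retraining.
# All weights below are hypothetical; decide "accept" when score >= 0.
def make_model(w0, w1, b):
    return lambda x: w0 * x[0] + w1 * x[1] + b >= 0

original = make_model(1.0, -2.0, -3.0)
ce = (5.0, 1.0)  # a counterfactual accepted by the original model

# "Retrained" variants with slightly shifted parameters:
retrained = [make_model(1.0 + d, -2.0, -3.0) for d in (-0.05, 0.0, 0.05)]
valid_fraction = sum(m(ce) for m in retrained) / len(retrained)
# valid_fraction < 1: the counterfactual breaks under some retrainings,
# which a check against only a handful of models may or may not reveal.
```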
no code implementations • 14 Jul 2022 • Stefan Schupp, Francesco Leofante, Leander Behr, Erika Ábrahám, Armando Tacchella
A swarm robotic system consists of a team of robots performing cooperative tasks without any centralized coordination.
no code implementations • 17 Mar 2020 • Dario Guidotti, Francesco Leofante, Luca Pulina, Armando Tacchella
Verification of deep neural networks has witnessed a recent surge of interest, fueled by success stories in diverse domains and by mounting concerns about safety and security in envisaged applications.
no code implementations • 19 Jun 2018 • Arthur Bit-Monnot, Francesco Leofante, Luca Pulina, Erika Ábrahám, Armando Tacchella
Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales.
no code implementations • 25 May 2018 • Francesco Leofante, Nina Narodytska, Luca Pulina, Armando Tacchella
Neural networks are one of the most investigated and widely used techniques in Machine Learning.
no code implementations • 12 Nov 2017 • Francesco Leofante, Erika Ábrahám, Tim Niemueller, Gerhard Lakemeyer, Armando Tacchella
In manufacturing, the increasing involvement of autonomous robots in production processes poses new challenges for production management.