no code implementations • 19 Feb 2024 • Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot
Incorporating natural language rationales in the prompt and In-Context Learning (ICL) has led to significant improvements in the performance of Large Language Models (LLMs).
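To make the idea concrete, here is a minimal sketch of how in-context examples augmented with natural language rationales can be assembled into a prompt. The examples, task, and final print step are illustrative placeholders, not the construction used in the paper.

```python
# Toy sketch (not the paper's method): build a few-shot prompt where each
# in-context demonstration carries a natural language rationale before its answer.
examples = [
    {
        "question": "Is the review 'The plot dragged on forever' positive?",
        "rationale": "The reviewer complains that the plot dragged, which signals dissatisfaction.",
        "answer": "negative",
    },
    {
        "question": "Is the review 'A delightful surprise from start to finish' positive?",
        "rationale": "Words like 'delightful' express clear enjoyment of the film.",
        "answer": "positive",
    },
]

def build_prompt(examples, new_question):
    """Concatenate (question, rationale, answer) demonstrations, then the new query."""
    parts = []
    for ex in examples:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Rationale: {ex['rationale']}\n"
            f"Answer: {ex['answer']}\n"
        )
    parts.append(f"Question: {new_question}\nRationale:")
    return "\n".join(parts)

prompt = build_prompt(examples, "Is the review 'I checked my watch twice' positive?")
print(prompt)  # this string would then be sent to the LLM of choice
```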
1 code implementation • 29 Sep 2023 • Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala
Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models, and are not designed to effectively capture and explain differences between multiple models.
no code implementations • 10 May 2023 • Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model by indicating the modifications to be made to the instance so as to change its associated prediction.
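The sketch below illustrates the general idea of a counterfactual example, not the specific method of this paper: given a trained classifier and an instance, search for a nearby modified instance that receives a different prediction. The classifier and data are synthetic stand-ins created here for illustration only.

```python
# Minimal sketch of a counterfactual explanation: perturb the instance until the
# classifier's prediction flips; the difference between the two points is the explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)  # placeholder decision model

def naive_counterfactual(clf, x, step=0.05, max_iter=1000):
    """Randomly perturb x with growing amplitude until the predicted class changes."""
    original = clf.predict(x.reshape(1, -1))[0]
    for i in range(1, max_iter + 1):
        candidate = x + rng.normal(scale=step * i, size=x.shape)
        if clf.predict(candidate.reshape(1, -1))[0] != original:
            return candidate
    return None

x = np.array([-1.0, -0.5])
cf = naive_counterfactual(clf, x)
print("instance:", x, "counterfactual:", cf)  # the required modifications explain the prediction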
no code implementations • 24 Apr 2023 • Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot
Counterfactual examples explain a prediction by highlighting the changes to an instance that flip the outcome of a classifier.
no code implementations • 25 Apr 2022 • Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
In the field of eXplainable Artificial Intelligence (XAI), post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model.
no code implementations • 26 Oct 2021 • Adam Faci, Marie-Jeanne Lesot, Claire Laudy
Conceptual Graphs (CGs) are a graph-based knowledge representation formalism.
no code implementations • 26 Oct 2021 • Adam Faci, Marie-Jeanne Lesot, Claire Laudy
Conceptual Graphs (CG) are a graph-based knowledge representation and reasoning formalism; fuzzy Conceptual Graphs (fCG) constitute an extension that enriches their expressiveness by exploiting fuzzy set theory to relax their constraints at various levels.
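As a rough illustration of the data structure only (not the formal definition given in the paper), a conceptual graph links concept nodes through relation nodes, and a fuzzy variant can attach a membership degree in [0, 1] to relax crisp constraints. The types, referents, and relation below are invented for the example.

```python
# Toy illustration of a (fuzzy) conceptual graph: concepts, relations, and
# optional fuzzy membership degrees relaxing the crisp formalism.
from dataclasses import dataclass

@dataclass
class Concept:
    type_: str           # e.g. "Person", "City"
    referent: str        # e.g. "Alice", "Paris"
    degree: float = 1.0  # 1.0 = crisp concept; < 1.0 = fuzzy membership in the type

@dataclass
class Relation:
    label: str           # e.g. "livesIn"
    args: tuple          # ordered concept arguments
    degree: float = 1.0  # fuzzy degree of the relation itself

alice = Concept("Person", "Alice")
paris = Concept("City", "Paris")
# Fuzzy relation: Alice lives in Paris "to degree 0.8" (e.g. lives in the suburbs)
lives_in = Relation("livesIn", (alice, paris), degree=0.8)

print(f"[{lives_in.args[0].type_}: {lives_in.args[0].referent}] "
      f"-({lives_in.label}, {lives_in.degree})-> "
      f"[{lives_in.args[1].type_}: {lives_in.args[1].referent}]")
```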
1 code implementation • 22 Jul 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki
Post-hoc interpretability approaches have been proven to be powerful tools to generate explanations for the predictions made by a trained black-box model.
no code implementations • 11 Jun 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Counterfactual post-hoc interpretability approaches have been proven to be useful tools to generate explanations for the predictions of a trained black-box classifier.
no code implementations • 7 Sep 2018 • Xavier Renard, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Machine learning models are increasingly used in the industry to make decisions such as credit insurance approval.
1 code implementation • 19 Jun 2018 • Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Local surrogate models, which approximate the local decision boundary of a black-box classifier, constitute one approach to explaining the rationale behind an individual prediction made by the black-box.
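Below is a hedged sketch of a generic local surrogate in the spirit of LIME, not necessarily the procedure studied in the paper: sample points around the instance, label them by querying the black-box, and fit an interpretable linear model weighted by proximity. The black-box, data, and hyperparameters are placeholders.

```python
# Generic local surrogate sketch: a weighted linear model approximates the
# black-box decision function in the neighbourhood of one instance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in black-box

def local_surrogate(black_box, x, n_samples=500, sigma=0.5):
    """Fit a proximity-weighted linear surrogate of the black-box around instance x."""
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    probs = black_box.predict_proba(Z)[:, 1]                    # black-box queried as an oracle
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_                                      # local feature influences

x = np.array([0.2, -1.0, 0.5])
print("local linear explanation:", local_surrogate(black_box, x))
```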
6 code implementations • 22 Dec 2017 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki
In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available on either the classifier itself or the processed data (neither the training nor the test data).
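One generic way to operate under such constraints, sketched below purely for illustration (the paper's own procedure may differ), is to rely on nothing but query access to the classifier: sample candidates on spheres of growing radius around the instance until one receives a different prediction, and report the closest such point. All names and parameters here are hypothetical.

```python
# Query-access-only explanation sketch: no training data and no model internals,
# only calls to the classifier's predict function.
import numpy as np

def query_only_explanation(predict, x, radius_step=0.1, n_per_radius=200,
                           max_radius=10.0, seed=0):
    """Return the closest sampled point whose prediction differs from that of x."""
    rng = np.random.default_rng(seed)
    target = predict(x.reshape(1, -1))[0]
    radius = radius_step
    while radius <= max_radius:
        directions = rng.normal(size=(n_per_radius, x.shape[0]))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        candidates = x + radius * directions          # points on the current sphere
        labels = predict(candidates)
        flipped = candidates[labels != target]
        if len(flipped):
            # closest differently-classified point found at this radius
            return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]
        radius += radius_step
    return None

# Usage with any classifier exposing a `predict(ndarray) -> ndarray` interface:
# explanation = query_only_explanation(clf.predict, np.array([-1.0, -0.5]))
```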