Search Results for author: Marie-Jeanne Lesot

Found 12 papers, 4 papers with code

Self-AMPLIFY: Improving Small Language Models with Self Post Hoc Explanations

no code implementations19 Feb 2024 Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot

Incorporating natural language rationales in the prompt and In-Context Learning (ICL) has led to significant improvements in Large Language Model (LLM) performance.

In-Context Learning
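
A minimal sketch of what rationale-augmented In-Context Learning can look like in practice; the review texts, rationales, and labels below are invented for illustration and are not taken from the Self-AMPLIFY paper:

```python
# Hypothetical few-shot prompt pairing each demonstration with a rationale.
demonstrations = [
    {
        "text": "The movie was a waste of two hours.",
        "rationale": "The phrase 'waste of two hours' expresses strong disappointment.",
        "label": "negative",
    },
    {
        "text": "A touching story with brilliant acting.",
        "rationale": "'Touching' and 'brilliant' are clearly positive descriptors.",
        "label": "positive",
    },
]

def build_prompt(demos, query_text):
    """Concatenate (input, rationale, label) triples, then append the query."""
    parts = []
    for d in demos:
        parts.append(
            f"Review: {d['text']}\n"
            f"Rationale: {d['rationale']}\n"
            f"Sentiment: {d['label']}\n"
        )
    parts.append(f"Review: {query_text}\nRationale:")
    return "\n".join(parts)

print(build_prompt(demonstrations, "Not as bad as I feared, but forgettable."))
```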

Dynamic Interpretability for Model Comparison via Decision Rules

1 code implementation29 Sep 2023 Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala

Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models and are not designed to capture and explain differences between multiple models effectively.

Management, Model Selection
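
One way to make such differences visible, sketched below under the assumption that both models expose a scikit-learn style predict method: fit a shallow decision-tree surrogate on each model's own predictions and compare the resulting rule sets. This illustrates the general idea, not the paper's algorithm:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model_a = RandomForestClassifier(random_state=0).fit(X, y)
model_b = LogisticRegression(max_iter=1000).fit(X, y)

def rule_surrogate(model, X, depth=3):
    """Fit a small tree on the model's own predictions and return its rules as text."""
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X, model.predict(X))
    return export_text(tree, feature_names=[f"x{i}" for i in range(X.shape[1])])

# Differences between the two models show up as differences between rule sets.
print("Rules approximating model A:\n", rule_surrogate(model_a, X))
print("Rules approximating model B:\n", rule_surrogate(model_b, X))
```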

Achieving Diversity in Counterfactual Explanations: a Review and Discussion

no code implementations10 May 2023 Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model by indicating the modifications to be made to the instance so as to change its associated prediction.

counterfactual, Explainable artificial intelligence +1
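
A minimal, method-agnostic sketch of generating a diverse set of counterfactuals by random perturbation; the sampling scale and diversity threshold are arbitrary illustrative choices, not taken from any method surveyed in the review:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def diverse_counterfactuals(x, clf, n_keep=3, n_samples=5000, scale=1.0, min_dist=0.5, seed=0):
    """Random-perturbation baseline: keep close, mutually distant prediction-flipping points."""
    rng = np.random.default_rng(seed)
    original = clf.predict(x.reshape(1, -1))[0]
    candidates = x + rng.normal(scale=scale, size=(n_samples, x.size))
    flipped = candidates[clf.predict(candidates) != original]
    # Closest candidates first, then greedily enforce pairwise distance for diversity.
    flipped = flipped[np.argsort(np.linalg.norm(flipped - x, axis=1))]
    kept = []
    for c in flipped:
        if all(np.linalg.norm(c - k) > min_dist for k in kept):
            kept.append(c)
        if len(kept) == n_keep:
            break
    return np.array(kept)

print(diverse_counterfactuals(X[0], clf))
```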

TIGTEC : Token Importance Guided TExt Counterfactuals

no code implementations24 Apr 2023 Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot

Counterfactual examples explain a prediction by highlighting the changes to an instance that flip the outcome of a classifier.

counterfactual, Feature Importance

Integrating Prior Knowledge in Post-hoc Explanations

no code implementations25 Apr 2022 Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of eXplainable Artificial Intelligence (XAI), post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model.

counterfactual, Counterfactual Explanation +2

cgSpan: Pattern Mining in Conceptual Graphs

no code implementations26 Oct 2021 Adam Faci, Marie-Jeanne Lesot, Claire Laudy

Conceptual Graphs (CGs) are a graph-based knowledge representation formalism.
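
A toy sketch of how a Conceptual Graph can be represented as concept nodes linked by relation nodes; the vocabulary ("Person", "livesIn", ...) is invented for illustration and has nothing to do with the cgSpan mining algorithm itself:

```python
from dataclasses import dataclass

@dataclass
class Concept:
    type: str       # concept type, e.g. "Person" or "City"
    referent: str   # individual marker, e.g. "Marie" or "Paris"

@dataclass
class Relation:
    type: str       # relation type, e.g. "livesIn"
    args: tuple     # ordered concept arguments

# A tiny graph: [Person: Marie] -(livesIn)-> [City: Paris]
marie = Concept("Person", "Marie")
paris = Concept("City", "Paris")
graph = [Relation("livesIn", (marie, paris))]

for rel in graph:
    src, dst = rel.args
    print(f"[{src.type}: {src.referent}] -({rel.type})-> [{dst.type}: {dst.referent}]")
```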

Fuzzy Conceptual Graphs: a comparative discussion

no code implementations26 Oct 2021 Adam Faci, Marie-Jeanne Lesot, Claire Laudy

Conceptual Graphs (CG) are a graph-based knowledge representation and reasoning formalism; fuzzy Conceptual Graphs (fCG) constitute an extension that enriches their expressiveness, exploiting fuzzy set theory to relax their constraints at various levels.

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

1 code implementation22 Jul 2019 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

Post-hoc interpretability approaches have been proven to be powerful tools to generate explanations for the predictions made by a trained black-box model.

counterfactual
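
A deliberately simplified sketch of the justification concern: flag a counterfactual as suspicious when no training instance of the same predicted class lies nearby. The paper's actual criterion (connectedness to ground-truth instances) is stricter; the radius below is an arbitrary illustrative choice:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X_train, y_train = make_moons(n_samples=500, noise=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def looks_justified(counterfactual, clf, X_train, radius=0.3):
    """Is some training point with the counterfactual's predicted class within `radius`?"""
    cf_class = clf.predict(counterfactual.reshape(1, -1))[0]
    same_class = X_train[clf.predict(X_train) == cf_class]
    return bool(np.min(np.linalg.norm(same_class - counterfactual, axis=1)) <= radius)

print(looks_justified(np.array([0.5, 0.2]), clf, X_train))
```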

Issues with post-hoc counterfactual explanations: a discussion

no code implementations11 Jun 2019 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Counterfactual post-hoc interpretability approaches have been proven to be useful tools to generate explanations for the predictions of a trained black-box classifier.

counterfactual

Defining Locality for Surrogates in Post-hoc Interpretability

1 code implementation19 Jun 2018 Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Local surrogate models, which approximate the local decision boundary of a black-box classifier, constitute one approach to generating explanations for the rationale behind an individual prediction made by the black box.
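
A minimal sketch of a local surrogate under one possible definition of locality (a Gaussian neighbourhood around the instance); the black box, sampling radius, and data are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def local_surrogate(x, black_box, radius=1.0, n_samples=2000, seed=0):
    """Sample a Gaussian neighbourhood around x, label it with the black box,
    and fit an interpretable linear surrogate on those labels."""
    rng = np.random.default_rng(seed)
    neighbourhood = x + rng.normal(scale=radius, size=(n_samples, x.size))
    labels = black_box.predict(neighbourhood)
    if len(np.unique(labels)) < 2:
        raise ValueError("Only one class in the neighbourhood; enlarge the radius.")
    return LogisticRegression(max_iter=1000).fit(neighbourhood, labels)

surrogate = local_surrogate(X[0], black_box)
print("Local linear coefficients:", surrogate.coef_)
```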

Inverse Classification for Comparison-based Interpretability in Machine Learning

6 code implementations22 Dec 2017 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available, neither on the classifier itself nor on the processed data (neither the training nor the test data).

BIG-bench Machine Learning, Classification +1
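
A simplified sketch of the inverse-classification idea, assuming the only thing available is the classifier's predict function: search outward from the instance for the closest point whose prediction differs. This is an illustration, not a faithful implementation of the paper's algorithm:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
predict = SVC().fit(X, y).predict   # the only query we allow ourselves

def closest_enemy(x, predict, step=0.1, n_samples=1000, max_radius=10.0, seed=0):
    """Sample in balls of growing radius around x; return the nearest point
    whose predicted class differs from that of x."""
    rng = np.random.default_rng(seed)
    original = predict(x.reshape(1, -1))[0]
    radius = step
    while radius <= max_radius:
        directions = rng.normal(size=(n_samples, x.size))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        candidates = x + directions * rng.uniform(0, radius, size=(n_samples, 1))
        enemies = candidates[predict(candidates) != original]
        if len(enemies) > 0:
            return enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]
        radius += step
    return None

print("Closest 'enemy' found:", closest_enemy(X[0], predict))
```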
