1 code implementation • 29 Sep 2023 • Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala
Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models and are not designed to capture and explain differences between multiple models effectively.
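As a rough illustration of the gap described above, the sketch below (assuming scikit-learn and a synthetic dataset of my own choosing, not the paper's method or data) locates points where two models trained on the same data disagree and fits a small interpretable tree on that disagreement signal.

```python
# Hedged sketch (not the paper's method): surface differences between two models
# by labeling each point with whether their predictions agree, then describing
# the disagreement regions with an interpretable classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model_a = RandomForestClassifier(random_state=0).fit(X, y)
model_b = LogisticRegression(max_iter=1000).fit(X, y)

# 1 where the two models disagree, 0 where they agree
disagree = (model_a.predict(X) != model_b.predict(X)).astype(int)

# A shallow tree describing where in feature space the models diverge
explainer = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, disagree)
print(f"Disagreement rate: {disagree.mean():.2%}")
```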
no code implementations • 9 Jul 2021 • Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens, Marcin Detyniecki
Explainability is becoming an important requirement for organizations that make use of automated decision-making due to regulatory initiatives and a shift in public awareness.
no code implementations • 9 Jul 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki
This paper analyses the fundamental ingredients behind surrogate explanations to provide a better understanding of their inner workings.
no code implementations • 10 Jun 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki
In this work we review the similarities and differences amongst multiple methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation.
no code implementations • 12 Apr 2021 • Xavier Renard, Thibault Laugel, Marcin Detyniecki
This paper proposes to address this question by analyzing the prediction discrepancies in a pool of best-performing models trained on the same data.
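A minimal sketch of the idea, under my own assumptions (scikit-learn models differing only by random seed, synthetic data; not the paper's exact protocol): build a pool of similarly accurate models and measure how often they disagree on individual test points.

```python
# Hedged sketch: per-instance prediction discrepancy across a pool of
# similarly accurate models trained on the same data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Models that differ only by random seed: test accuracies are typically close,
# yet individual predictions can diverge.
pool = [GradientBoostingClassifier(random_state=s).fit(X_tr, y_tr) for s in range(5)]
preds = np.array([m.predict(X_te) for m in pool])     # shape: (n_models, n_test)
discrepancy = (preds != preds[0]).any(axis=0)          # test points where the pool disagrees

print("Test accuracies:", [round(m.score(X_te, y_te), 3) for m in pool])
print(f"Share of test points with discrepant predictions: {discrepancy.mean():.2%}")
```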
no code implementations • 24 Dec 2020 • Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki
NLP Interpretability aims to increase trust in model predictions.
1 code implementation • 24 Dec 2020 • Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki
Current methods for black-box NLP interpretability, like LIME or SHAP, are based on altering the text under interpretation by removing words and modeling the black-box response.
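A simplified, LIME-style illustration of that word-removal scheme, using a hypothetical scalar black-box scoring function of my own (not the LIME or SHAP libraries themselves): mask words at random, record the black-box response on each perturbed text, and fit a linear surrogate whose weights act as word importances.

```python
# Simplified word-removal interpretability sketch (LIME-style, not the actual library).
import numpy as np
from sklearn.linear_model import Ridge

def explain_by_word_removal(text, black_box_score, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))     # 1 = keep word, 0 = drop it
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    responses = np.array([black_box_score(t) for t in perturbed])  # black-box output per perturbed text
    surrogate = Ridge(alpha=1.0).fit(masks, responses)             # linear model on the masks
    return dict(zip(words, surrogate.coef_))                       # per-word importance

# Toy "black box" (a hypothetical scoring function, for illustration only)
toy_black_box = lambda t: float("good" in t) - float("bad" in t)
print(explain_by_word_removal("the movie was good not bad", toy_black_box))
```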
1 code implementation • 8 Nov 2019 • Vincent Ballet, Xavier Renard, Jonathan Aigrain, Thibault Laugel, Pascal Frossard, Marcin Detyniecki
Security of machine learning models is a concern as they may face adversarial attacks aimed at obtaining unwarranted advantageous decisions.
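A hedged illustration of the kind of attack in question, against a linear model on synthetic tabular data (a generic gradient-direction perturbation of my own, not the attack proposed in the paper):

```python
# Illustrative adversarial perturbation on tabular data: nudge features along the
# direction that flips the predicted class while keeping the change small.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
target_direction = -np.sign(clf.decision_function([x])[0])          # push toward the other class
step = 0.05 * target_direction * clf.coef_[0] / np.linalg.norm(clf.coef_[0])

x_adv = x.copy()
for _ in range(200):                                                 # small steps until the prediction flips
    if clf.predict([x_adv])[0] != clf.predict([x])[0]:
        break
    x_adv += step

print("Original class:", clf.predict([x])[0], "| Adversarial class:", clf.predict([x_adv])[0])
print("L2 size of perturbation:", np.linalg.norm(x_adv - x))
```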
1 code implementation • 22 Jul 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki
Post-hoc interpretability approaches have proven to be powerful tools for generating explanations for the predictions made by a trained black-box model.
no code implementations • 4 Jun 2019 • Xavier Renard, Nicolas Woloszko, Jonathan Aigrain, Marcin Detyniecki
Interpretable surrogates of black-box predictors trained on high-dimensional tabular datasets can struggle to generate comprehensible explanations in the presence of correlated variables.
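A minimal example of such a surrogate, with a synthetic dataset containing two nearly identical features (my own toy setup, not the paper's): a shallow decision tree fitted on the black-box predictions may split on either member of the correlated pair, which is one way readability suffers.

```python
# Global interpretable surrogate: a shallow tree fitted on black-box predictions.
# x2 nearly duplicates x1, so the surrogate may split on either, arbitrarily.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
x1 = rng.normal(size=2000)
x2 = x1 + rng.normal(scale=0.05, size=2000)        # strongly correlated with x1
x3 = rng.normal(size=2000)
X = np.column_stack([x1, x2, x3])
y = (x1 + 0.5 * x3 > 0).astype(int)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["x1", "x2", "x3"]))
```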
no code implementations • 7 Sep 2018 • Xavier Renard, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Machine learning models are increasingly used in the industry to make decisions such as credit insurance approval.
1 code implementation • 19 Jun 2018 • Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Local surrogate models, which approximate the local decision boundary of a black-box classifier, constitute one approach to generating explanations for the rationale behind an individual prediction made by the black box.
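A minimal local-surrogate sketch under illustrative assumptions (Gaussian sampling around the instance with an arbitrary radius, a ridge regression on the black-box probabilities; not the specific locality definition studied in the paper):

```python
# Local surrogate: sample around the instance to explain, query the black box,
# and fit a simple linear model that approximates the local decision boundary.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x_to_explain = X[0]
rng = np.random.default_rng(0)
neighborhood = x_to_explain + rng.normal(scale=0.3, size=(500, 2))   # local sampling (arbitrary radius)
local_scores = black_box.predict_proba(neighborhood)[:, 1]           # black-box responses in the neighborhood

local_surrogate = Ridge(alpha=1.0).fit(neighborhood, local_scores)   # linear approximation of the local boundary
print("Local linear weights:", local_surrogate.coef_)
```

How the neighborhood is defined (the locality) is exactly the design choice the paper examines; the fixed Gaussian radius above is only a placeholder.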
6 code implementations • 22 Dec 2017 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki
In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available about the classifier itself or about the processed data (neither the training nor the test data).
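A hedged sketch of operating with query access only, which is all this setting allows: generate candidate points at growing distances from the instance, query the classifier, and return the closest point that receives a different prediction. This is an illustrative counterfactual-style search, not necessarily the paper's algorithm.

```python
# Query-access-only explanation sketch: find the closest generated point that the
# classifier labels differently from the instance to explain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
classifier_predict = RandomForestClassifier(random_state=0).fit(X, y).predict  # query access only

def closest_counterfactual(x, predict, n_per_radius=200, radii=np.linspace(0.1, 3.0, 30), seed=0):
    rng = np.random.default_rng(seed)
    original = predict([x])[0]
    for r in radii:                                               # grow the search radius outward
        candidates = x + rng.normal(scale=r, size=(n_per_radius, x.shape[0]))
        flipped = candidates[predict(candidates) != original]
        if len(flipped):
            distances = np.linalg.norm(flipped - x, axis=1)
            return flipped[np.argmin(distances)]                  # closest point with a different prediction
    return None

print(closest_counterfactual(X[0], classifier_predict))
```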