Search Results for author: Xavier Renard

Found 13 papers, 6 papers with code

Dynamic Interpretability for Model Comparison via Decision Rules

1 code implementation • 29 Sep 2023 • Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala

Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models and are not designed to capture and explain differences between multiple models effectively.

Tasks: Management, Model Selection
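
The comparison setting described above can be pictured with a minimal, hedged sketch: two classifiers trained on the same data, a measure of where they disagree, and a shallow decision tree used as a rule-like surrogate of that disagreement. The models, the synthetic data, and the surrogate-of-the-difference trick are illustrative assumptions, not the method proposed in the paper.

```python
# Minimal sketch (not the paper's method): compare two models trained on the
# same data by measuring where their predictions disagree and by fitting a
# shallow decision tree as a rule-like surrogate of that disagreement.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model_a = RandomForestClassifier(random_state=0).fit(X, y)
model_b = GradientBoostingClassifier(random_state=0).fit(X, y)

pred_a, pred_b = model_a.predict(X), model_b.predict(X)
print("disagreement rate:", np.mean(pred_a != pred_b))

# Rule-like view of the difference: a tree that predicts where the models disagree.
diff_surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
diff_surrogate.fit(X, (pred_a != pred_b).astype(int))
print(export_text(diff_surrogate))
```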

How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

no code implementations • 9 Jul 2021 • Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens, Marcin Detyniecki

Explainability is becoming an important requirement for organizations that make use of automated decision-making due to regulatory initiatives and a shift in public awareness.

Tasks: Decision Making, Explainable Artificial Intelligence (XAI)

Understanding surrogate explanations: the interplay between complexity, fidelity and coverage

no code implementations • 9 Jul 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki

This paper analyses the fundamental ingredients behind surrogate explanations to provide a better understanding of their inner workings.
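
A minimal sketch of simple proxies for the three ingredients named in the title, computed for a decision-tree surrogate of a black-box: complexity (number of leaves), fidelity (agreement with the black-box) and coverage (share of points falling in a region). The black-box, the data and these particular proxies are illustrative assumptions, not the paper's formalisation.

```python
# Minimal sketch of complexity, fidelity and coverage for a decision-tree
# surrogate of a black-box; black-box, data and proxies are assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))                  # trained on black-box labels

complexity = surrogate.get_n_leaves()                   # size of the explanation
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))  # agreement with the black-box
region = surrogate.apply(X) == surrogate.apply(X[:1])[0]          # points in one leaf
coverage = np.mean(region)                              # share of data that leaf covers
print(complexity, round(fidelity, 3), round(coverage, 3))
```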

On the overlooked issue of defining explanation objectives for local-surrogate explainers

no code implementations • 10 Jun 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki

In this work we review the similarities and differences amongst multiple methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation.

Understanding Prediction Discrepancies in Machine Learning Classifiers

no code implementations • 12 Apr 2021 • Xavier Renard, Thibault Laugel, Marcin Detyniecki

This paper proposes to understand such prediction discrepancies by analyzing a pool of best-performing models trained on the same data.

Tasks: BIG-bench Machine Learning, Fairness
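
The idea of inspecting discrepancies can be illustrated with a small sketch: train a pool of similarly accurate classifiers on the same data and locate the test points on which they disagree. The choice of models and the synthetic data are assumptions made only for illustration.

```python
# Minimal sketch: a pool of similarly accurate classifiers and the share of
# test points on which their predictions diverge. Models and data are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pool = [
    LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
    RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
    GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr),
]
preds = np.array([m.predict(X_te) for m in pool])   # shape (n_models, n_test)
discrepant = np.any(preds != preds[0], axis=0)      # points where the pool disagrees
print("accuracies:", [round(m.score(X_te, y_te), 3) for m in pool])
print("share of discrepant test points:", round(discrepant.mean(), 3))
```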

On the Granularity of Explanations in Model Agnostic NLP Interpretability

1 code implementation • 24 Dec 2020 • Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki

Current methods for black-box NLP interpretability, such as LIME or SHAP, are based on altering the text being interpreted by removing words and modeling the black-box response.
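
The word-removal perturbation scheme described above can be sketched as follows; the toy corpus, the TF-IDF plus logistic-regression "black-box" and the importance score (drop in predicted probability when a word is removed) are all assumptions for illustration, not the paper's setup.

```python
# Minimal sketch of the word-removal scheme: delete one word at a time and
# record how the black-box score changes. Corpus, model and score are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

corpus = [
    "the rocket launch was delayed by weather",
    "nasa tested a new rocket engine today",
    "the car engine needs an oil change",
    "he bought a used car from the dealer",
]
labels = [1, 1, 0, 0]                               # 1 = space, 0 = autos (toy labels)
black_box = make_pipeline(TfidfVectorizer(), LogisticRegression())
black_box.fit(corpus, labels)

text = "the rocket engine burned fuel during the launch"
words = text.split()
base = black_box.predict_proba([text])[0, 1]        # P(space) for the full text

for i, word in enumerate(words):
    reduced = " ".join(words[:i] + words[i + 1:])   # text with one word removed
    drop = base - black_box.predict_proba([reduced])[0, 1]
    print(f"{word:>8}: {drop:+.3f}")
```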

Imperceptible Adversarial Attacks on Tabular Data

1 code implementation • 8 Nov 2019 • Vincent Ballet, Xavier Renard, Jonathan Aigrain, Thibault Laugel, Pascal Frossard, Marcin Detyniecki

The security of machine learning models is a concern, as they may face adversarial attacks crafted to obtain unwarranted, advantageous decisions.

Tasks: BIG-bench Machine Learning
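
A rough, hedged sketch of a small perturbation on tabular data: nudge the features of one instance along the coefficients of a linear model until its prediction flips, while keeping the per-feature change small. The dataset, the logistic model and this simple perturbation rule are illustrative assumptions, not the attack studied in the paper.

```python
# Minimal, hedged sketch of a small perturbation that flips a tabular
# prediction. Data, model and perturbation rule are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
clf = LogisticRegression(max_iter=1000).fit(X, data.target)

x0 = X[0].copy()
orig = clf.predict([x0])[0]
direction = np.sign(clf.coef_[0]) * (1 if orig == 0 else -1)  # push toward the other class

x = x0.copy()
for _ in range(100):                     # small bounded steps (0.02 per feature, per step)
    if clf.predict([x])[0] != orig:
        break
    x += 0.02 * direction
print("class flipped:", clf.predict([x])[0] != orig,
      "| max per-feature change:", round(np.max(np.abs(x - x0)), 3))
```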

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

1 code implementation • 22 Jul 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

Post-hoc interpretability approaches have proven to be powerful tools for generating explanations for the predictions made by a trained black-box model.

Tasks: counterfactual
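
The risk the paper points to can be illustrated with a small sketch: generate a counterfactual by random perturbation and then check how far it lies from any training example of its own class. The data, the black-box, the crude counterfactual search and the nearest-neighbour plausibility check are assumptions for illustration, not the justification criterion of the paper.

```python
# Minimal sketch: a counterfactual obtained by perturbation may land far from
# any training example of its own class. Data, model, search and check are assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
target = 1 - clf.predict([x])[0]                    # the other class
rng = np.random.default_rng(0)

# Crude counterfactual search: random perturbations, keep the closest one
# whose prediction switches to the target class.
candidates = x + rng.normal(scale=1.0, size=(5000, 2))
flipped = candidates[clf.predict(candidates) == target]
cf = flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]

# Plausibility check: distance to the nearest training point of the target class.
dist = np.min(np.linalg.norm(X[y == target] - cf, axis=1))
print("counterfactual:", cf.round(2), "| distance to nearest real example:", round(dist, 3))
```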

Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees

no code implementations • 4 Jun 2019 • Xavier Renard, Nicolas Woloszko, Jonathan Aigrain, Marcin Detyniecki

Interpretable surrogates of black-box predictors trained on high-dimensional tabular datasets can struggle to generate comprehensible explanations in the presence of correlated variables.
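
One way to picture the idea of explaining at a coarser granularity: group correlated columns, summarise each group with a single variable, and fit the surrogate tree on those group-level variables. The grouping method (feature agglomeration) and the data below are assumptions; this is not the Concept Tree construction itself.

```python
# Minimal sketch: cluster correlated features, summarise each cluster, and fit
# a surrogate tree on the group-level variables. Not the Concept Tree method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import FeatureAgglomeration

X, y = make_classification(n_samples=1500, n_features=20, n_informative=5,
                           n_redundant=10, random_state=0)   # many correlated columns
black_box = RandomForestClassifier(random_state=0).fit(X, y)

grouping = FeatureAgglomeration(n_clusters=5).fit(X)          # cluster correlated features
X_groups = grouping.transform(X)                              # one summary value per group

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_groups, black_box.predict(X))
fidelity = np.mean(surrogate.predict(X_groups) == black_box.predict(X))
print("surrogate fidelity on group-level variables:", round(fidelity, 3))
```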

Defining Locality for Surrogates in Post-hoc Interpretablity

1 code implementation • 19 Jun 2018 • Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Local surrogate models, which approximate the local decision boundary of a black-box classifier, are one approach to generating explanations for the rationale behind an individual prediction made by the black-box.
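
A minimal sketch of such a local surrogate: sample a neighbourhood around the instance of interest, label the samples with the black-box, and fit a simple linear model on them; its accuracy on that neighbourhood measures local fidelity. The sampling distribution (and hence the notion of locality), the models and the data are illustrative assumptions.

```python
# Minimal sketch of a local surrogate: sample around the instance, label with
# the black-box, fit a linear model. Sampling radius, models and data are assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                                   # instance to explain
rng = np.random.default_rng(0)
neighbourhood = x + rng.normal(scale=1.0, size=(2000, 2))  # locality = gaussian around x
labels = black_box.predict(neighbourhood)

local_surrogate = LogisticRegression().fit(neighbourhood, labels)
print("local linear weights:", local_surrogate.coef_[0].round(2))
print("local fidelity:", round(local_surrogate.score(neighbourhood, labels), 3))
```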

Inverse Classification for Comparison-based Interpretability in Machine Learning

6 code implementations • 22 Dec 2017 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier when no information is available, neither on the classifier itself nor on the processed data (neither the training nor the test data).

Tasks: BIG-bench Machine Learning, Classification (+1)
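
In the spirit of comparison-based explanations, the sketch below searches, in balls of growing radius around the observation, for the closest instance classified differently, using only queries to the classifier. The data, the classifier and the radius schedule are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Minimal sketch: query-only search for the closest differently-classified
# instance in balls of growing radius. Data, model and schedule are assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=800, noise=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
original_class = clf.predict([x])[0]
rng = np.random.default_rng(0)

for radius in np.arange(0.1, 3.0, 0.1):                 # growing search radius
    directions = rng.normal(size=(1000, 2))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    candidates = x + radius * rng.uniform(0, 1, size=(1000, 1)) * directions
    enemies = candidates[clf.predict(candidates) != original_class]
    if len(enemies):
        closest = enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]
        print("closest differently-classified neighbour:", closest.round(3))
        break
```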
