no code implementations • 16 Apr 2024 • Vincent Grari, Marcin Detyniecki
The reverse-engineered nature of traditional models complicates the enforcement of fairness and can lead to biased outcomes.
1 code implementation • 27 Oct 2023 • Vincent Grari, Thibault Laugel, Tatsunori Hashimoto, Sylvain Lamprier, Marcin Detyniecki
In the field of algorithmic fairness, significant attention has been devoted to group fairness criteria such as Demographic Parity and Equalized Odds.
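For concreteness, here is a minimal sketch of how these two criteria are commonly measured on binary predictions; the toy arrays and the 0/1 encoding of the sensitive attribute are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Absolute difference in positive-prediction rates between the
    two groups encoded by the binary sensitive attribute s."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def equalized_odds_gap(y_true, y_pred, s):
    """Largest between-group gap in positive-prediction rates,
    conditioned on the true label (max TPR/FPR disparity)."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (s == 1)].mean()
                        - y_pred[mask & (s == 0)].mean()))
    return max(gaps)

# Hypothetical predictions and sensitive attribute.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 1])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, s))        # 0.25
print(equalized_odds_gap(y_true, y_pred, s))    # 0.5
```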
no code implementations • 10 May 2023 • Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model by indicating the modifications to be made to the instance so as to change its associated prediction.
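As an illustration of the counterfactual idea, the following is a minimal Wachter-style search on a linear model: gradient descent on a proximity-plus-validity objective. The model, loss weights, and step size are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the trained decision model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def counterfactual(x, target, lam=0.1, lr=0.1, steps=500):
    """Minimize lam*||x'-x||^2 + (P(target|x') - 1)^2 by gradient
    descent; lam trades proximity against reaching the target class."""
    xp = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(xp @ w + b)))             # P(y=1 | xp)
        p_t = p if target == 1 else 1.0 - p
        grad_pt = p * (1 - p) * (w if target == 1 else -w)  # dP(target)/dxp
        xp -= lr * (2 * lam * (xp - x) + 2 * (p_t - 1) * grad_pt)
    return xp

x = X[0]
x_cf = counterfactual(x, target=1 - clf.predict([x])[0])
print("prediction flips:", clf.predict([x])[0], "->", clf.predict([x_cf])[0])
print("modifications to the instance:", x_cf - x)
```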
1 code implementation • 14 Feb 2023 • Natasa Krco, Thibault Laugel, Jean-Michel Loubes, Marcin Detyniecki
With comparable performances in fairness and accuracy, are the different bias mitigation approaches impacting a similar number of individuals?
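One way to make this question concrete: for each mitigation method, collect the set of individuals whose prediction flips relative to the unconstrained model, then compare the sets. A minimal sketch with hypothetical predictions:

```python
import numpy as np

def affected(y_base, y_mitigated):
    """Indices of individuals whose prediction changes under mitigation."""
    return set(np.flatnonzero(y_base != y_mitigated))

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical predictions: an unconstrained model and two mitigation
# methods with comparable accuracy and fairness scores.
y0 = np.array([1, 0, 1, 1, 0, 1, 0, 0])
yA = np.array([1, 0, 0, 1, 0, 1, 1, 0])
yB = np.array([0, 0, 1, 1, 0, 1, 1, 0])

A, B = affected(y0, yA), affected(y0, yB)
print(len(A), len(B), jaccard(A, B))  # equal counts can hide disjoint groups
```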
no code implementations • 25 Apr 2022 • Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
In the field of eXplainable Artificial Intelligence (XAI), post-hoc interpretability methods aim to explain to a user the predictions of a trained decision model.
no code implementations • 24 Feb 2022 • Vincent Grari, Arthur Charpentier, Marcin Detyniecki
In this paper, we show that this approach can be generalized to multiple pricing factors (geographic area, car type) and that it is well suited to a fairness context, since it allows the set of pricing components to be debiased. We extend this idea to a general framework in which a single pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating unwanted bias according to the desired metric.
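To make the bias-mitigation component concrete, here is a generic adversarial debiasing loop in the min-max style: a pricing network fits the premium while an adversary tries to recover the sensitive attribute from the price. This is a sketch under assumed shapes, data, and trade-off weight, not the authors' exact pricing architecture.

```python
import torch
import torch.nn as nn

# Hypothetical data: features x, pure-premium target y, sensitive attribute s.
torch.manual_seed(0)
x = torch.randn(512, 8)
s = (torch.rand(512) < 0.5).float()
y = x[:, 0].abs() + 0.5 * s + 0.1 * torch.randn(512)   # s leaks into y

pricer = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(pricer.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()
lam = 1.0  # fairness/accuracy trade-off (illustrative)

for step in range(500):
    # 1) Adversary learns to recover s from the price alone.
    price = pricer(x)
    loss_a = bce(adversary(price.detach()).squeeze(1), s)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    # 2) Pricer fits the premium while making the adversary fail.
    price = pricer(x)
    loss_p = mse(price.squeeze(1), y) - lam * bce(adversary(price).squeeze(1), s)
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```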
1 code implementation • 10 Sep 2021 • Vincent Grari, Sylvain Lamprier, Marcin Detyniecki
In recent years, most fairness strategies in machine learning have focused on mitigating unwanted bias under the assumption that the sensitive information is observed.
no code implementations • 9 Jul 2021 • Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens, Marcin Detyniecki
Explainability is becoming an important requirement for organizations that make use of automated decision-making due to regulatory initiatives and a shift in public awareness.
no code implementations • 9 Jul 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki
This paper analyses the fundamental ingredients behind surrogate explanations to provide a better understanding of their inner workings.
no code implementations • 10 Jun 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki
In this work we review the similarities and differences among multiple methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation.
no code implementations • 3 May 2021 • Boris Ruf, Marcin Detyniecki
To implement fair machine learning in a sustainable way, choosing the right fairness objective is key.
no code implementations • 12 Apr 2021 • Xavier Renard, Thibault Laugel, Marcin Detyniecki
This paper proposes to address this question by analyzing the prediction discrepancies in a pool of best-performing models trained on the same data.
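A minimal sketch of the measurement itself: train a small pool of classifiers that typically reach comparable accuracy and flag the test points on which they disagree. The model choices and dataset are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pool = [LogisticRegression(max_iter=1000),
        RandomForestClassifier(random_state=0),
        GradientBoostingClassifier(random_state=0)]
preds = np.stack([m.fit(X_tr, y_tr).predict(X_te) for m in pool])

# A point is discrepant if any model in the pool disagrees with the others.
discrepant = (preds != preds[0]).any(axis=0)
print(f"{discrepant.mean():.1%} of test points receive conflicting predictions")
```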
no code implementations • 9 Apr 2021 • Boris Ruf, Marcin Detyniecki
Most fair regression algorithms mitigate bias towards sensitive sub-populations and therefore improve fairness at the group level.
no code implementations • 16 Feb 2021 • Boris Ruf, Marcin Detyniecki
Fairness is a concept of justice.
1 code implementation • 24 Dec 2020 • Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki
Current methods for black-box NLP interpretability, like LIME or SHAP, are based on altering the text to be interpreted by removing words and modeling the black-box response.
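A sketch of the removal-based paradigm in question: drop random word subsets, query the black box on each variant, and fit a linear surrogate on word-presence masks. The `predict_proba` black box below is a toy stand-in.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_text(text, predict_proba, n_samples=500, seed=0):
    """LIME-style sketch: perturb by deleting words, then regress the
    black-box score on binary word-presence masks."""
    words = text.split()
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1                                  # keep the full text once
    variants = [" ".join(w for w, keep in zip(words, row) if keep)
                for row in masks]
    scores = np.array([predict_proba(v) for v in variants])
    surrogate = Ridge(alpha=1.0).fit(masks, scores)
    return sorted(zip(words, surrogate.coef_), key=lambda t: -abs(t[1]))

black_box = lambda t: float("excellent" in t.split())  # toy classifier
print(explain_text("the film was excellent overall", black_box)[:3])
```

A known concern with such deletion-based perturbations is that they can produce ungrammatical, out-of-distribution inputs.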
no code implementations • 24 Dec 2020 • Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki
NLP Interpretability aims to increase trust in model predictions.
no code implementations • 14 Sep 2020 • Boris Ruf, Marcin Detyniecki
The possible risk that AI systems could promote discrimination by reproducing and enforcing unwanted bias in data has been broadly discussed in research and society.
1 code implementation • 7 Sep 2020 • Vincent Grari, Oualid El Hajouji, Sylvain Lamprier, Marcin Detyniecki
We leverage recent work on estimating this coefficient by learning deep neural network transformations, and use it in a min-max game to penalize the intrinsic bias in a multi-dimensional latent representation.
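A minimal sketch of the neural HGR estimation behind this min-max game, shown here for two scalar variables: two small networks are trained to maximize the correlation between their outputs. Network sizes and the optimization schedule are illustrative assumptions.

```python
import torch
import torch.nn as nn

def hgr_estimate(u, v, steps=500, lr=1e-2):
    """Estimate HGR(U, V) = sup_{f,g} corr(f(U), g(V)) by training two
    small networks to maximize the correlation of their outputs."""
    f = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    g = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=lr)
    for _ in range(steps):
        fu = f(u).squeeze(1)
        gv = g(v).squeeze(1)
        fu = (fu - fu.mean()) / (fu.std() + 1e-8)   # standardize outputs
        gv = (gv - gv.mean()) / (gv.std() + 1e-8)
        corr = (fu * gv).mean()
        opt.zero_grad(); (-corr).backward(); opt.step()   # gradient ascent
    return corr.item()

torch.manual_seed(0)
u = torch.randn(1000, 1)
v = torch.cos(3 * u) + 0.1 * torch.randn(1000, 1)   # nonlinear dependence
print(hgr_estimate(u, v))  # high despite near-zero linear correlation
```

In the fairness game, this estimate plays the adversary's role: the latent representation is trained to keep it low.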
1 code implementation • 30 Aug 2020 • Vincent Grari, Sylvain Lamprier, Marcin Detyniecki
In recent years, fairness has become an important topic in the machine learning research community.
no code implementations • 27 Aug 2020 • Fernando Molano Ortiz, Matteo Sammarco, Luís Henrique M. K. Costa, Marcin Detyniecki
Although a very large number of sensors are available in the automotive field, currently only a few of them, mostly proprioceptive ones, are used in telematics, automotive insurance, and mobility safety research.
no code implementations • 15 Mar 2020 • Boris Ruf, Chaouki Boutharouite, Marcin Detyniecki
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
1 code implementation • 13 Nov 2019 • Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki
At each iteration, the approach incorporates the gradient of the neural network directly into the gradient tree boosting.
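A simplified sketch of this mechanism, with a linear adversary standing in for the paper's neural network for brevity: at each round, the adversary is refitted to predict the sensitive attribute from the boosted score, and the next tree is fitted to the combined gradient of the prediction loss and the sign-reversed adversary loss. Data, depths, and the `lam` trade-off are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_boosting(X, y, s, n_rounds=50, lr=0.1, lam=1.0):
    F = np.zeros(len(y))                          # boosted score
    trees = []
    for _ in range(n_rounds):
        # Adversary: predict the sensitive attribute s from the score F.
        adv = LogisticRegression().fit(F.reshape(-1, 1), s)
        a = adv.coef_[0, 0]
        p_s = adv.predict_proba(F.reshape(-1, 1))[:, 1]
        grad_pred = sigmoid(F) - y                # d(log-loss)/dF
        grad_adv = (p_s - s) * a                  # d(adversary log-loss)/dF
        # Descend the prediction loss while *increasing* the adversary's loss.
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, -(grad_pred - lam * grad_adv))
        F += lr * tree.predict(X)
        trees.append(tree)
    return trees, F

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
s = (rng.random(500) < 0.5).astype(int)
y = ((X[:, 0] + s) > 0.5).astype(int)             # the label leaks s
trees, F = fair_boosting(X, y, s)
```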
1 code implementation • 12 Nov 2019 • Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki
Second, by minimizing the HGR (Hirschfeld-Gebelein-Rényi) coefficient directly with an adversarial neural network architecture.
1 code implementation • 8 Nov 2019 • Vincent Ballet, Xavier Renard, Jonathan Aigrain, Thibault Laugel, Pascal Frossard, Marcin Detyniecki
The security of machine learning models is a concern, as they may face adversarial attacks crafted to obtain unwarranted advantageous decisions.
no code implementations • 10 Oct 2019 • Boris Ruf, Matteo Sammarco, Marcin Detyniecki
To move towards conversational agents capable of handling more complex questions on contractual conditions, formalizing contract statements in a machine-readable way is crucial.
no code implementations • 25 Sep 2019 • Jonathan Aigrain, Marcin Detyniecki
Despite having excellent performance on a wide variety of tasks, modern neural networks are unable to provide predictions with a reliable confidence estimate that would allow misclassifications to be detected.
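For context, a common baseline for this problem (not the authors' approach) scores each prediction with its maximum softmax probability and flags low-confidence outputs; the threshold below is illustrative.

```python
import numpy as np

def msp_confidence(logits):
    """Maximum softmax probability as a confidence score."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

logits = np.array([[2.0, 0.1, -1.0],    # confident prediction
                   [0.2, 0.1,  0.0]])   # ambiguous prediction
conf = msp_confidence(logits)
print(conf, conf < 0.6)   # flag likely misclassifications for review
```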
1 code implementation • 22 Jul 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki
Post-hoc interpretability approaches have proven to be powerful tools for generating explanations of the predictions made by a trained black-box model.
no code implementations • 11 Jun 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Counterfactual post-hoc interpretability approaches have proven to be useful tools for generating explanations of the predictions of a trained black-box classifier.
no code implementations • 4 Jun 2019 • Xavier Renard, Nicolas Woloszko, Jonathan Aigrain, Marcin Detyniecki
Interpretable surrogates of black-box predictors trained on high-dimensional tabular datasets can struggle to generate comprehensible explanations in the presence of correlated variables.
1 code implementation • 22 May 2019 • Jonathan Aigrain, Marcin Detyniecki
Despite having excellent performance on a wide variety of tasks, modern neural networks are unable to provide a reliable confidence value that would allow misclassifications to be detected.
no code implementations • 7 Sep 2018 • Xavier Renard, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Machine learning models are increasingly used in the industry to make decisions such as credit insurance approval.
1 code implementation • 19 Jun 2018 • Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki
Local surrogate models, which approximate the local decision boundary of a black-box classifier, constitute one approach to generating explanations for the rationale behind an individual prediction made by the black box.
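A minimal LIME-like sketch of such a surrogate: sample around the instance, query the black box, and fit a proximity-weighted linear model. Gaussian sampling centered on the instance, as used here, is only one possible definition of locality, which is precisely the design choice this paper examines.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(x, predict, n_samples=1000, scale=0.3, seed=0):
    """Fit a distance-weighted linear approximation of the black-box
    decision function in the neighborhood of instance x."""
    rng = np.random.default_rng(seed)
    Xs = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    ys = predict(Xs)                                    # black-box queries
    w = np.exp(-np.linalg.norm(Xs - x, axis=1) ** 2)    # proximity kernel
    return Ridge(alpha=1.0).fit(Xs, ys, sample_weight=w)

black_box = lambda X: (X[:, 0] ** 2 + X[:, 1] > 1).astype(float)  # toy model
surrogate = local_surrogate(np.array([0.9, 0.2]), black_box)
print(surrogate.coef_)   # local feature influence near the instance
```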
6 code implementations • 22 Dec 2017 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki
In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available about the classifier itself or about the data it processes (neither the training nor the test data).
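Consistent with this query-access-only setting, here is a simplified sketch in the spirit of a growing-spheres search: sample in spherical layers of increasing radius around the instance until the predicted class changes, and return the closest such point. Step size, sampling budget, and the safety cap are illustrative.

```python
import numpy as np

def growing_spheres(x, predict, step=0.1, n_per_layer=500, seed=0):
    """Return the closest sampled point whose predicted class differs
    from that of x, exploring spherical layers of growing radius."""
    rng = np.random.default_rng(seed)
    y0 = predict(x[None])[0]
    radius = step
    while radius < 100 * step:                      # safety cap
        d = rng.normal(size=(n_per_layer, x.shape[0]))
        d /= np.linalg.norm(d, axis=1, keepdims=True)      # random directions
        r = rng.uniform(radius - step, radius, size=(n_per_layer, 1))
        candidates = x + d * r
        flipped = predict(candidates) != y0
        if flipped.any():
            closest = np.linalg.norm(candidates[flipped] - x, axis=1).argmin()
            return candidates[flipped][closest]
        radius += step
    return None

black_box = lambda X: (X.sum(axis=1) > 1).astype(int)    # toy classifier
print(growing_spheres(np.zeros(2), black_box))           # near x0 + x1 = 1
```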