Search Results for author: Marcin Detyniecki

Found 33 papers, 13 papers with code

OptiGrad: A Fair and more Efficient Price Elasticity Optimization via a Gradient Based Learning

no code implementations16 Apr 2024 Vincent Grari, Marcin Detyniecki

The reverse-engineered nature of traditional models complicates the enforcement of fairness and can lead to biased outcomes.

Fairness

On the Fairness ROAD: Robust Optimization for Adversarial Debiasing

1 code implementation27 Oct 2023 Vincent Grari, Thibault Laugel, Tatsunori Hashimoto, Sylvain Lamprier, Marcin Detyniecki

In the field of algorithmic fairness, significant attention has been paid to group fairness criteria such as Demographic Parity and Equalized Odds.

Attribute Fairness
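
As a quick reference for the criteria named above, the sketch below computes the Demographic Parity gap, i.e. the difference in positive-prediction rates between two groups (the function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # P(Y_hat = 1 | S = 0)
    rate_1 = y_pred[sensitive == 1].mean()  # P(Y_hat = 1 | S = 1)
    return abs(rate_0 - rate_1)

# Toy example: predictions for eight individuals, four per group.
print(demographic_parity_gap([1, 0, 0, 1, 1, 1, 1, 0],
                             [0, 0, 0, 0, 1, 1, 1, 1]))  # 0.25
```

Equalized Odds additionally conditions these rates on the true label, so it would compare the same quantities within the positive and negative classes separately.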

Achieving Diversity in Counterfactual Explanations: a Review and Discussion

no code implementations10 May 2023 Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model by indicating the modifications to be made to the instance so as to change its associated prediction.

counterfactual Explainable artificial intelligence +1
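
For readers unfamiliar with the counterfactual setting described above, here is a minimal, hedged sketch of one naive way to find such a modification: greedily nudge features until the model's prediction flips. It is not the method of any paper in this list; the step size and toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_counterfactual(model, x, step=0.1, max_iter=200):
    """Greedily nudge one feature at a time until the predicted class flips.

    Returns a modified copy of x whose prediction differs from the original,
    or None if no counterfactual is found within max_iter steps.
    """
    x_cf = x.copy()
    original = model.predict(x.reshape(1, -1))[0]
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] != original:
            return x_cf
        best_move = None
        best_p = model.predict_proba(x_cf.reshape(1, -1))[0, original]
        for i in range(len(x_cf)):
            for delta in (step, -step):
                cand = x_cf.copy()
                cand[i] += delta
                p = model.predict_proba(cand.reshape(1, -1))[0, original]
                if p < best_p:  # lowers confidence in the original class
                    best_p, best_move = p, cand
        if best_move is None:
            return None
        x_cf = best_move
    return None

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
print(X[0], greedy_counterfactual(clf, X[0]))
```

Generating a *diverse set* of such counterfactuals, rather than a single one, is the question the review above discusses.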

When Mitigating Bias is Unfair: A Comprehensive Study on the Impact of Bias Mitigation Algorithms

1 code implementation14 Feb 2023 Natasa Krco, Thibault Laugel, Jean-Michel Loubes, Marcin Detyniecki

With comparable performances in fairness and accuracy, are the different bias mitigation approaches impacting a similar number of individuals?

Fairness
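
The question above can be made concrete with a small sketch: given a baseline model and two mitigated models, compare the sets of individuals whose predictions change. The function name and the Jaccard measure are illustrative choices, not the paper's protocol.

```python
import numpy as np

def affected_overlap(y_base, y_a, y_b):
    """Jaccard overlap between the sets of individuals whose prediction is
    flipped by mitigation approach A and by mitigation approach B."""
    y_base, y_a, y_b = map(np.asarray, (y_base, y_a, y_b))
    flipped_a, flipped_b = y_base != y_a, y_base != y_b
    union = np.logical_or(flipped_a, flipped_b).sum()
    if union == 0:
        return 1.0  # neither approach changes anyone
    return np.logical_and(flipped_a, flipped_b).sum() / union

# Toy example: both methods flip two predictions, but only one in common.
print(affected_overlap([1, 1, 0, 0, 1],
                       [0, 1, 1, 0, 1],
                       [0, 0, 0, 0, 1]))  # 1/3
```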

Integrating Prior Knowledge in Post-hoc Explanations

no code implementations25 Apr 2022 Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of eXplainable Artificial Intelligence (XAI), post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model.

counterfactual Counterfactual Explanation +2

A Fair Pricing Model via Adversarial Learning

no code implementations24 Feb 2022 Vincent Grari, Arthur Charpentier, Marcin Detyniecki

In this paper, we show that this idea can be generalized to multiple pricing factors (geographic area, car type) and that it is well suited to a fairness context, since it allows the set of pricing components to be debiased: we extend the main idea to a general framework in which a single, whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating unwanted bias according to the desired metric.

Fairness
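
A generic adversarial debiasing loop of the kind the abstract alludes to might look as follows in PyTorch; the architectures, toy data, and penalty weight `lam` are assumptions for illustration, not the authors' exact setup:

```python
import torch
import torch.nn as nn

# Toy data: features x, regression target y, binary sensitive attribute s.
torch.manual_seed(0)
n = 512
s = torch.randint(0, 2, (n, 1)).float()
x = torch.randn(n, 4)
y = x.sum(dim=1, keepdim=True) + 0.5 * s  # the target leaks s

predictor = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # strength of the fairness penalty (assumed value)

for epoch in range(200):
    # Adversary step: try to recover s from the (frozen) prediction.
    pred = predictor(x).detach()
    loss_a = bce(adversary(pred), s)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Predictor step: fit y while making the adversary's task harder.
    pred = predictor(x)
    loss_p = nn.functional.mse_loss(pred, y) - lam * bce(adversary(pred), s)
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```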

Fairness without the sensitive attribute via Causal Variational Autoencoder

1 code implementation10 Sep 2021 Vincent Grari, Sylvain Lamprier, Marcin Detyniecki

In recent years, most fairness strategies in machine learning have focused on mitigating unwanted biases by assuming that the sensitive information is observed.

Attribute Fairness

How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

no code implementations9 Jul 2021 Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens, Marcin Detyniecki

Explainability is becoming an important requirement for organizations that make use of automated decision-making due to regulatory initiatives and a shift in public awareness.

Decision Making Explainable Artificial Intelligence (XAI)

Understanding surrogate explanations: the interplay between complexity, fidelity and coverage

no code implementations9 Jul 2021 Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki

This paper analyses the fundamental ingredients behind surrogate explanations to provide a better understanding of their inner workings.

On the overlooked issue of defining explanation objectives for local-surrogate explainers

no code implementations10 Jun 2021 Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki

In this work we review the similarities and differences amongst multiple methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation.

Explaining how your AI system is fair

no code implementations3 May 2021 Boris Ruf, Marcin Detyniecki

To implement fair machine learning in a sustainable way, choosing the right fairness objective is key.

Decision Making Fairness

Understanding Prediction Discrepancies in Machine Learning Classifiers

no code implementations12 Apr 2021 Xavier Renard, Thibault Laugel, Marcin Detyniecki

This paper proposes to address this question by analyzing the prediction discrepancies in a pool of best-performing models trained on the same data.

BIG-bench Machine Learning Fairness
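
A minimal sketch of the underlying idea: train several similarly accurate models on the same data and measure where their predictions disagree. The model choices here are arbitrary stand-ins, not the paper's pool.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A pool of similarly accurate models trained on the same data.
pool = [
    LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
    RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
    GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr),
]
preds = np.stack([m.predict(X_te) for m in pool])

# Discrepancy: test points on which the models disagree.
disagree = (preds != preds[0]).any(axis=0)
print(f"models disagree on {disagree.mean():.1%} of test points")
```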

Implementing Fair Regression In The Real World

no code implementations9 Apr 2021 Boris Ruf, Marcin Detyniecki

Most fair regression algorithms mitigate bias towards sensitive sub populations and therefore improve fairness at group level.

Fairness regression

On the Granularity of Explanations in Model Agnostic NLP Interpretability

1 code implementation24 Dec 2020 Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki

Current methods for black-box NLP interpretability, like LIME or SHAP, are based on altering the text to be interpreted by removing words and modeling the black-box response.
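
The word-removal paradigm described above (as in LIME) can be sketched as follows; the black box, sampling scheme, and Ridge surrogate are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

def word_importance(black_box, text, n_samples=500, seed=0):
    """LIME-style sketch: remove random subsets of words, query the model,
    and fit a linear surrogate whose coefficients score each word."""
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # keep the original text in the sample
    perturbed = [" ".join(w for w, keep in zip(words, m) if keep) for m in masks]
    scores = np.array([black_box(t) for t in perturbed])
    surrogate = Ridge(alpha=1.0).fit(masks, scores)
    return dict(zip(words, surrogate.coef_))

# Toy black box: a probability-like score that likes the word "great".
bb = lambda t: 0.9 if "great" in t else 0.1
print(word_importance(bb, "the movie was great fun"))
```

The granularity question the paper raises is what the mask units should be: words, as here, or larger spans such as sentences.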

Active Fairness Instead of Unawareness

no code implementations14 Sep 2020 Boris Ruf, Marcin Detyniecki

The possible risk that AI systems could promote discrimination by reproducing and enforcing unwanted bias in data has been broadly discussed in research and society.

Fairness

Learning Unbiased Representations via Rényi Minimization

1 code implementation7 Sep 2020 Vincent Grari, Oualid El Hajouji, Sylvain Lamprier, Marcin Detyniecki

We leverage recent work on estimating this coefficient by learning deep neural network transformations, and use it in a min-max game to penalize the intrinsic bias in a multi-dimensional latent representation.

Attribute Fairness
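
A hedged sketch of that min-max idea: two small networks estimate an HGR-like coefficient by maximizing the correlation between transformations of the prediction and of the sensitive variable, while the predictor is penalized by that estimate. The network sizes, `lam`, and the toy data are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def pearson(a, b, eps=1e-8):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return (a * b).mean()

torch.manual_seed(0)
x = torch.randn(512, 4)
s = x[:, :1] + 0.3 * torch.randn(512, 1)   # continuous sensitive variable
y = x.sum(dim=1, keepdim=True)

predictor = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
f = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))  # transforms the prediction
g = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))  # transforms s
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_fg = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-2)
lam = 2.0  # fairness weight (assumed value)

for step in range(300):
    # Adversary: maximize the correlation (a neural HGR-style estimate).
    pred = predictor(x).detach()
    loss_adv = -pearson(f(pred), g(s))
    opt_fg.zero_grad(); loss_adv.backward(); opt_fg.step()

    # Predictor: fit y while keeping the estimated coefficient small.
    pred = predictor(x)
    hgr = pearson(f(pred), g(s))
    loss = nn.functional.mse_loss(pred, y) + lam * hgr ** 2
    opt_p.zero_grad(); loss.backward(); opt_p.step()
```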

Adversarial Learning for Counterfactual Fairness

1 code implementation30 Aug 2020 Vincent Grari, Sylvain Lamprier, Marcin Detyniecki

In recent years, fairness has become an important topic in the machine learning research community.

Attribute counterfactual +1

Vehicle Telematics Via Exteroceptive Sensors: A Survey

no code implementations27 Aug 2020 Fernando Molano Ortiz, Matteo Sammarco, Luís Henrique M. K. Costa, Marcin Detyniecki

Whereas a very large number of sensors are available in the automotive field, currently just a few of them, mostly proprioceptive ones, are used in telematics, automotive insurance, and mobility safety research.

Getting Fairness Right: Towards a Toolbox for Practitioners

no code implementations15 Mar 2020 Boris Ruf, Chaouki Boutharouite, Marcin Detyniecki

The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.

Fairness

Fair Adversarial Gradient Tree Boosting

1 code implementation13 Nov 2019 Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki

The approach incorporates, at each iteration, the gradient of the neural network directly into the gradient tree boosting.

Attribute Fairness +1
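
One schematic reading of that sentence, under stated assumptions (a logistic adversary refit at each round, illustrative hyper-parameters, not the paper's exact algorithm), is a boosting loop whose pseudo-targets mix the task gradient with the adversary's gradient:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, n)                       # binary sensitive attribute
X = rng.normal(size=(n, 3)) + 0.8 * s[:, None]  # features correlated with s
y = (X.sum(axis=1) > 0).astype(float)

F = np.zeros(n)          # boosted score (logits)
lr, lam = 0.1, 0.5       # learning rate and fairness weight (assumed)
trees = []

for m in range(100):
    p = 1 / (1 + np.exp(-F))
    grad_task = y - p                       # ascent direction on the task likelihood
    # Adversary: predict s from the current score F.
    adv = LogisticRegression().fit(F.reshape(-1, 1), s)
    p_s = adv.predict_proba(F.reshape(-1, 1))[:, 1]
    grad_adv = (p_s - s) * adv.coef_[0, 0]  # ascent direction on the adversary's loss
    # Fit the new tree on the combined, fairness-penalized gradient.
    tree = DecisionTreeRegressor(max_depth=3).fit(X, grad_task + lam * grad_adv)
    F += lr * tree.predict(X)
    trees.append(tree)
```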

Fairness-Aware Neural Rényi Minimization for Continuous Features

1 code implementation12 Nov 2019 Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki

Second, by minimizing the HGR directly with an adversarial neural network architecture.

Fairness

Imperceptible Adversarial Attacks on Tabular Data

1 code implementation8 Nov 2019 Vincent Ballet, Xavier Renard, Jonathan Aigrain, Thibault Laugel, Pascal Frossard, Marcin Detyniecki

The security of machine learning models is a concern, as they may face adversarial attacks crafted to obtain unwarranted advantageous decisions.

BIG-bench Machine Learning
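
A hedged sketch of the imperceptibility idea: scale gradient steps by the inverse of a per-feature importance vector, so the columns an analyst watches barely move. The toy model, step sizes, and importance values are assumptions, not the paper's formulation.

```python
import torch

torch.manual_seed(0)
# Toy differentiable model: logistic regression on five tabular features.
w = torch.tensor([2.0, -1.0, 0.5, 0.1, 0.05])
model = lambda x: torch.sigmoid(x @ w)

def attack(x, importance, step=0.05, n_steps=100):
    """Gradient attack that perturbs unimportant features the most,
    keeping the overall change hard to notice."""
    x_adv = x.clone().requires_grad_(True)
    target = (model(x) < 0.5).float()  # aim for the opposite decision
    for _ in range(n_steps):
        loss = torch.nn.functional.binary_cross_entropy(model(x_adv), target)
        loss.backward()
        with torch.no_grad():
            # Steps scaled by inverse importance: salient columns barely move.
            x_adv -= step * x_adv.grad.sign() / importance
            x_adv.grad.zero_()
    return x_adv.detach()

x = torch.tensor([0.2, 0.1, -0.3, 0.0, 0.4])
importance = torch.tensor([10.0, 10.0, 5.0, 1.0, 1.0])  # assumed salience
x_adv = attack(x, importance)
print(model(x).item(), model(x_adv).item())
```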

Contract Statements Knowledge Service for Chatbots

no code implementations10 Oct 2019 Boris Ruf, Matteo Sammarco, Marcin Detyniecki

To move towards conversational agents capable of handling more complex questions on contractual conditions, formalizing contract statements in a machine-readable way is crucial.

Chatbot

How the Softmax Activation Hinders the Detection of Adversarial and Out-of-Distribution Examples in Neural Networks

no code implementations25 Sep 2019 Jonathan Aigrain, Marcin Detyniecki

Despite having excellent performances for a wide variety of tasks, modern neural networks are unable to provide a prediction with a reliable confidence estimate that would allow misclassifications to be detected.
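
For context, the confidence baseline usually involved here is the maximum softmax probability; a minimal sketch (the threshold value is an arbitrary assumption):

```python
import numpy as np

def flag_suspicious(logits, threshold=0.9):
    """Flag inputs whose maximum softmax probability falls below a
    confidence threshold, a common (and criticized) detection baseline."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1) < threshold

logits = np.array([[4.0, 1.0, 0.5],    # confident prediction
                   [1.2, 1.0, 1.1]])   # ambiguous: likely flagged
print(flag_suspicious(logits))  # [False  True]
```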

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

1 code implementation22 Jul 2019 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

Post-hoc interpretability approaches have been proven to be powerful tools to generate explanations for the predictions made by a trained black-box model.

counterfactual

Issues with post-hoc counterfactual explanations: a discussion

no code implementations11 Jun 2019 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Counterfactual post-hoc interpretability approaches have been proven to be useful tools to generate explanations for the predictions of a trained black-box classifier.

counterfactual

Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees

no code implementations4 Jun 2019 Xavier Renard, Nicolas Woloszko, Jonathan Aigrain, Marcin Detyniecki

Interpretable surrogates of black-box predictors trained on high-dimensional tabular datasets can struggle to generate comprehensible explanations in the presence of correlated variables.

Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection

1 code implementation22 May 2019 Jonathan Aigrain, Marcin Detyniecki

Despite having excellent performances for a wide variety of tasks, modern neural networks are unable to provide a reliable confidence value that would allow misclassifications to be detected.

Defining Locality for Surrogates in Post-hoc Interpretability

1 code implementation19 Jun 2018 Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Local surrogate models, which approximate the local decision boundary of a black-box classifier, constitute one approach to generating explanations for the rationale behind an individual prediction made by the black box.
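
A minimal sketch of the surrogate procedure in question, where the Gaussian `radius` is one possible, assumed definition of locality (exactly the design choice the paper examines):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = ((X[:, 0] ** 2 + X[:, 1]) > 0.5).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_surrogate(black_box, x, radius=0.5, n_samples=500):
    """Fit an interpretable model on samples drawn around x and labeled
    by the black box; 'radius' controls what counts as local."""
    neighborhood = x + radius * rng.normal(size=(n_samples, x.shape[0]))
    labels = black_box.predict(neighborhood)
    return DecisionTreeClassifier(max_depth=2).fit(neighborhood, labels)

surrogate = local_surrogate(black_box, X[0])
print(surrogate.predict(X[0].reshape(1, -1)),
      black_box.predict(X[0].reshape(1, -1)))
```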

Inverse Classification for Comparison-based Interpretability in Machine Learning

6 code implementations22 Dec 2017 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available on either the classifier itself or the processed data (neither the training set nor the test set).

BIG-bench Machine Learning Classification +1
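
A hedged sketch of a sphere-growing search consistent with this comparison-based setting: sample observations at increasing distances from the instance until one receives a different prediction. The layer step and sample counts are assumptions, not the paper's parameters.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = SVC().fit(X, y)

def growing_spheres(model, x, step=0.1, n_per_layer=200, max_radius=5.0):
    """Sample in spherical layers of increasing radius around x and return
    the closest point receiving a different prediction from the classifier."""
    original = model.predict(x.reshape(1, -1))[0]
    radius = step
    while radius <= max_radius:
        directions = rng.normal(size=(n_per_layer, x.shape[0]))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = rng.uniform(radius - step, radius, size=(n_per_layer, 1))
        candidates = x + directions * radii
        flipped = model.predict(candidates) != original
        if flipped.any():
            dists = np.linalg.norm(candidates[flipped] - x, axis=1)
            return candidates[flipped][np.argmin(dists)]
        radius += step
    return None

x = X[0]
print(x, growing_spheres(clf, x))
```

Only the model's predict function is queried, matching the setting where neither the classifier internals nor the training data are available.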
