Search Results for author: Agustin Picard

Found 6 papers, 5 papers with code

TaCo: Targeted Concept Removal in Output Embeddings for NLP via Information Theory and Explainability

1 code implementation • 11 Dec 2023 • Fanny Jourdan, Louis Béthune, Agustin Picard, Laurent Risser, Nicholas Asher

In our evaluation, we show that the proposed post-hoc approach significantly reduces gender-related associations in NLP models while preserving their overall performance and functionality.

Fairness

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization

1 code implementation • 11 Jun 2023 • Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre

However, its widespread adoption has been limited due to a reliance on tricks to generate interpretable images, and corresponding challenges in scaling it to deeper neural networks.

COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks

1 code implementation • 11 May 2023 • Fanny Jourdan, Agustin Picard, Thomas Fel, Laurent Risser, Jean Michel Loubes, Nicholas Asher

COCKATIEL is a novel, post-hoc, concept-based, model-agnostic XAI technique that generates meaningful explanations from the last layer of a neural net model trained on an NLP classification task: it uses Non-Negative Matrix Factorization (NMF) to discover the concepts the model leverages to make predictions, and a Sensitivity Analysis to accurately estimate the importance of each of these concepts for the model (see the sketch below).

Explainable Artificial Intelligence (XAI) • Sentiment Analysis
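
This entry describes the mechanism concretely enough to illustrate. The snippet below is a minimal, illustrative sketch (not the authors' released code) of a COCKATIEL-style pipeline: scikit-learn's NMF factorizes stand-in last-layer activations into concepts, and a simple masking-based sensitivity analysis stands in for the paper's more sophisticated importance estimator. All names and data here are placeholders.

    # Minimal sketch of a COCKATIEL-style pipeline (illustrative, not the authors' code):
    # factorize last-layer activations with NMF to discover "concepts", then estimate
    # each concept's importance by zeroing it out and measuring the effect on the score.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)

    # A: (n_samples, d) non-negative last-layer activations of an NLP classifier
    # (e.g. post-ReLU features); random data stands in for a real model here.
    A = np.abs(rng.normal(size=(200, 64)))

    # Stand-in for the classifier head mapping activations to a class score.
    w_head = rng.normal(size=64)
    def class_score(acts):
        return acts @ w_head

    # 1) Concept discovery: A ≈ U @ W, where rows of W are concept directions
    # and U holds each sample's concept coefficients.
    n_concepts = 5
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=0)
    U = nmf.fit_transform(A)   # (n_samples, n_concepts)
    W = nmf.components_        # (n_concepts, d)

    # 2) Sensitivity analysis (simplified): zero out one concept's coefficients,
    # reconstruct the activations, and measure the change in the class score.
    baseline = class_score(U @ W)
    importance = np.empty(n_concepts)
    for c in range(n_concepts):
        U_masked = U.copy()
        U_masked[:, c] = 0.0
        importance[c] = np.mean(np.abs(baseline - class_score(U_masked @ W)))

    print("concept importance:", importance.round(3))
    print("ranked concepts:", np.argsort(importance)[::-1])

In practice the activations would come from the last layer of the trained NLP classifier, and each concept direction in W would be interpreted by inspecting the inputs that activate it most strongly.
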

CRAFT: Concept Recursive Activation FacTorization for Explainability

1 code implementation • CVPR 2023 • Thomas Fel, Agustin Picard, Louis Béthune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre

However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image -- revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas.

A survey of Identification and mitigation of Machine Learning algorithmic biases in Image Analysis

no code implementations • 10 Oct 2022 • Laurent Risser, Agustin Picard, Lucas Hervier, Jean-Michel Loubes

Contrary to societal applications, where a set of proxy variables can be provided by common sense or by regulations to draw attention to potential risks, industrial and safety-critical applications are most of the time sailing blind.

Common Sense Reasoning • Fairness
