1 code implementation • 5 Feb 2024 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
Attention-based architectures, in particular transformers, are at the heart of a technological revolution.
1 code implementation • 30 Oct 2023 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
Interpretability is essential for machine learning models to be trusted and deployed in critical domains.
no code implementations • 15 Mar 2023 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
In many scenarios, interpreting machine learning models is highly desirable but difficult to achieve.
1 code implementation • 4 Jul 2022 • Gianluigi Lopardo, Damien Garreau
Complex machine learning algorithms are increasingly used in critical tasks involving text data, motivating the development of interpretability methods.
1 code implementation • 27 May 2022 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
For text data, the Anchors method proposes to explain a decision by highlighting a small set of words (an anchor) such that the model being explained produces similar outputs on documents containing those words.
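The anchor idea above can be illustrated with a minimal sketch: estimate how often a model keeps its prediction when the anchor words are held fixed and the remaining words are randomly dropped. The classifier, the drop probability, and the function names below are hypothetical, not taken from the paper's implementation.

```python
import random

def toy_classifier(words):
    # Hypothetical sentiment model: positive (1) iff "good" appears.
    return 1 if "good" in words else 0

def anchor_precision(classifier, document, anchor, n_samples=1000, seed=0):
    """Estimate the fraction of perturbed documents on which the model
    keeps its original prediction, with anchor words always retained
    and every other word kept independently with probability 0.5."""
    rng = random.Random(seed)
    target = classifier(document)
    hits = 0
    for _ in range(n_samples):
        perturbed = [w for w in document
                     if w in anchor or rng.random() < 0.5]
        hits += classifier(perturbed) == target
    return hits / n_samples

doc = "this movie was really good and fun".split()
print(anchor_precision(toy_classifier, doc, anchor={"good"}))  # 1.0: the anchor fixes the prediction
print(anchor_precision(toy_classifier, doc, anchor=set()))     # roughly 0.5: no word is guaranteed kept
```

A good anchor is a small word set whose estimated precision is close to 1, meaning the model's output is stable whenever those words are present.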
1 code implementation • 16 Nov 2021 • Gianluigi Lopardo, Damien Garreau, Frederic Precioso, Greger Ottosson
To explain such decisions, we propose the Semi-Model-Agnostic Contextual Explainer (SMACE), a new interpretability method. SMACE combines a geometric approach for decision rules with existing interpretability methods for machine learning models to generate an intuitive feature ranking tailored to the end user.