1 code implementation • 9 Aug 2024 • Giorgio Visani, Vincenzo Stanzione, Damien Garreau
The explainability of machine learning algorithms is crucial, and numerous methods have emerged recently.
1 code implementation • 2 Apr 2024 • Magamed Taimeskhanov, Ronan Sicre, Damien Garreau
CAM-based methods are widely used post-hoc interpretability methods that produce a saliency map to explain the decision of an image classification model.
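For context, the CAM family builds such saliency maps by weighting the activations of a convolutional layer by the gradients of the class score. Below is a minimal hand-rolled Grad-CAM-style sketch; the backbone, target layer, and input are illustrative placeholders, not taken from the paper.

```python
# Hand-rolled Grad-CAM-style saliency map (one member of the CAM family).
# Backbone, layer choice, and input are illustrative placeholders.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # untrained backbone, just for the sketch
target_layer = model.layer4[-1]                # last convolutional block

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()          # gradient of the predicted class score

weights = grads["v"].mean(dim=(2, 3), keepdim=True)            # GAP of the gradients
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))   # weighted activations
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # saliency map in [0, 1]
```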
1 code implementation • 5 Feb 2024 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
Attention-based architectures, in particular transformers, are at the heart of a technological revolution.
1 code implementation • 29 Nov 2023 • Pierre-Alexandre Mattei, Damien Garreau
More precisely, when the loss is convex, the average loss of the ensemble is a decreasing function of the number of models.
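A quick numerical illustration of this monotonicity with a convex loss (squared error); the random "models" and the target are purely illustrative.

```python
# Monte-Carlo check: with squared error (convex), the expected loss of an
# averaged ensemble decreases as models are added. Illustrative setup only.
import numpy as np

rng = np.random.default_rng(0)
y_true = 1.0
n_trials, max_models = 20_000, 10

# each "model" predicts the target plus independent noise
preds = y_true + rng.normal(0.0, 1.0, size=(n_trials, max_models))

for m in range(1, max_models + 1):
    ensemble = preds[:, :m].mean(axis=1)          # average the first m models
    loss = np.mean((ensemble - y_true) ** 2)      # Monte-Carlo expected loss
    print(f"{m} models: average squared loss = {loss:.3f}")   # decreases with m
```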
1 code implementation • 30 Oct 2023 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
Interpretability is essential for machine learning models to be trusted and deployed in critical domains.
1 code implementation • 1 Jun 2023 • Hidde Fokkema, Damien Garreau, Tim van Erven
Algorithmic recourse provides explanations that help users overturn an unfavorable decision by a machine learning system.
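As a minimal sketch of what recourse can look like, consider a linear classifier, where the smallest L2 change that overturns the decision has a closed form; the model and cost below are assumptions for the sketch, not taken from the paper.

```python
# Minimal recourse sketch for a linear classifier f(x) = sign(w @ x + b):
# the smallest L2 change that flips a negative decision is a projection onto
# the decision boundary, nudged slightly across it. Toy values throughout.
import numpy as np

w = np.array([1.5, -2.0, 0.5])        # classifier weights (toy values)
b = -1.0
x = np.array([0.2, 0.8, 0.1])         # rejected instance: w @ x + b < 0

margin = w @ x + b
recourse = x - 1.01 * (margin / np.dot(w, w)) * w   # cross the boundary by 1%
assert w @ recourse + b > 0                          # decision is now overturned
print("suggested change:", recourse - x)
```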
no code implementations • 15 Mar 2023 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
In many scenarios, the interpretability of machine learning models is highly desirable but difficult to achieve.
no code implementations • 9 Mar 2023 • Rémi Catellier, Samuel Vaiter, Damien Garreau
A fundamental issue in machine learning is the robustness of the model with respect to changes in the input.
no code implementations • 6 Dec 2022 • Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem.
1 code implementation • 4 Jul 2022 • Gianluigi Lopardo, Damien Garreau
Complex machine learning algorithms are increasingly used in critical tasks involving text data, leading to the development of interpretability methods.
1 code implementation • 27 May 2022 • Gianluigi Lopardo, Frederic Precioso, Damien Garreau
For text data, the Anchors method explains a decision by highlighting a small set of words (an anchor) such that the model to explain has similar outputs whenever these words are present in a document.
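A toy, hand-rolled estimate of an anchor's precision conveys the idea; the toy classifier and the replacement scheme below are placeholders, not the Anchors implementation.

```python
# Toy sketch of the anchor idea for text: estimate how often the model keeps
# its prediction when the anchor words are fixed and the other words are
# replaced at random. Hand-rolled illustration with a placeholder classifier.
import numpy as np

rng = np.random.default_rng(0)

def predict(doc):                       # toy sentiment model
    return int("good" in doc.split())

document = "this movie was really good and fun".split()
anchor = {"good"}                       # candidate anchor
base_pred = predict(" ".join(document))

def precision(anchor, n_samples=1000, p_keep=0.5):
    hits = 0
    for _ in range(n_samples):
        perturbed = [w if w in anchor or rng.random() < p_keep else "UNK"
                     for w in document]
        hits += predict(" ".join(perturbed)) == base_pred
    return hits / n_samples

print("anchor precision:", precision(anchor))   # 1.0 for this toy model
```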
1 code implementation • 23 Jan 2022 • Damien Garreau
Quickshift is a popular algorithm for image segmentation, used as a preprocessing step in many applications.
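A typical preprocessing usage, via scikit-image's implementation; the parameter values are illustrative.

```python
# Quickshift as a superpixel / segmentation preprocessing step, using
# scikit-image. Parameter values are illustrative, not from the paper.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import quickshift
from skimage.util import img_as_float

image = img_as_float(astronaut())                         # RGB test image
segments = quickshift(image, kernel_size=3, max_dist=6, ratio=0.5)
print("number of superpixels:", np.unique(segments).size)
```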
1 code implementation • 16 Nov 2021 • Gianluigi Lopardo, Damien Garreau, Frederic Precioso, Greger Ottosson
To explain such decisions, we propose the Semi-Model-Agnostic Contextual Explainer (SMACE), a new interpretability method that combines a geometric approach for decision rules with existing interpretability methods for machine learning models to generate an intuitive feature ranking tailored to the end user.
1 code implementation • 11 Feb 2021 • Damien Garreau, Dina Mardaoui
As a consequence of this analysis, we uncover a connection between LIME and integrated gradients, another explanation method.
1 code implementation • 23 Oct 2020 • Dina Mardaoui, Damien Garreau
In this paper, we provide a first theoretical analysis of LIME for text data.
1 code implementation • 25 Aug 2020 • Damien Garreau, Ulrike von Luxburg
As an example, for linear functions we show that LIME has the desirable property of providing explanations proportional to the coefficients of the function to explain, and of ignoring coordinates that this function does not use.
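A small empirical check of this property with the reference lime package; the feature names and sampling setup are illustrative.

```python
# Empirical check: explaining a linear function with LIME yields weights
# roughly proportional to its coefficients, and ~0 for the unused coordinate.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))
coefs = np.array([3.0, -2.0, 0.0, 0.5])          # third coordinate is unused

def f(X):                                        # linear function to explain
    return X @ coefs

explainer = LimeTabularExplainer(X_train, mode="regression",
                                 feature_names=["x0", "x1", "x2", "x3"])
exp = explainer.explain_instance(X_train[0], f, num_features=4)
print(exp.as_list())   # weights roughly proportional to coefs, ~0 for x2
```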
no code implementations • 10 Jan 2020 • Damien Garreau, Ulrike von Luxburg
We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear.
no code implementations • NeurIPS 2018 • Cheng Tang, Damien Garreau, Ulrike von Luxburg
As a consequence, even highly randomized trees can lead to inconsistent forests if no subsampling is used, which implies that some of the commonly used setups for random forests can be inconsistent.
no code implementations • ICML 2018 • Siavash Haghiri, Damien Garreau, Ulrike von Luxburg
Assume we are given a set of items from a general metric space, but we have access neither to the representation of the data nor to the distances between data points.
1 code implementation • 21 May 2018 • Nicolas Keriven, Damien Garreau, Iacopo Poli
We consider the problem of detecting abrupt changes in the distribution of a multi-dimensional time series, with limited computing power and memory.
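A much-simplified sketch of the low-memory idea: maintain two exponentially weighted moving averages with different forgetting factors and flag a change when they drift apart. It is applied here to the raw stream, so it only reacts to mean shifts; the constants are illustrative, not from the paper.

```python
# Low-memory online change detection sketch: two EWMAs with different
# forgetting factors; a change is flagged when they drift apart.
# Applied to the raw stream (mean shifts only); constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 10
stream = np.vstack([rng.normal(0.0, 1.0, (500, d)),     # nominal regime
                    rng.normal(2.0, 1.0, (500, d))])    # mean shift at t = 500

lam_fast, lam_slow = 0.10, 0.02      # forgetting factors
z_fast = np.zeros(d)
z_slow = np.zeros(d)
threshold = 2.0                      # would be calibrated on nominal data

for t, x in enumerate(stream):
    z_fast = (1 - lam_fast) * z_fast + lam_fast * x
    z_slow = (1 - lam_slow) * z_slow + lam_slow * x
    if t > 50 and np.linalg.norm(z_fast - z_slow) > threshold:
        print(f"change flagged at t = {t}")   # expected shortly after t = 500
        break
```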
1 code implementation • 23 Jul 2017 • Damien Garreau, Wittawat Jitkrittum, Motonobu Kanagawa
In kernel methods, the median heuristic has been widely used as a way of setting the bandwidth of RBF kernels.
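Concretely, the heuristic sets the bandwidth to the median of the pairwise distances in the sample; a minimal sketch follows (the 2σ² convention in the kernel is one common choice, not prescribed by the entry).

```python
# Median heuristic: set the RBF bandwidth to the median pairwise distance.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def median_heuristic_bandwidth(X):
    """Median of the pairwise Euclidean distances of the sample."""
    return np.median(pdist(X, metric="euclidean"))

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
sigma = median_heuristic_bandwidth(X)

# Gaussian / RBF Gram matrix with the chosen bandwidth
K = np.exp(-squareform(pdist(X, metric="sqeuclidean")) / (2 * sigma ** 2))
```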
no code implementations • NeurIPS 2014 • Damien Garreau, Rémi Lajugie, Sylvain Arlot, Francis Bach
The learning examples for this task are time series for which the true alignment is known.